WorldWideScience

Sample records for method simulation results

  1. Comparison Of Simulation Results When Using Two Different Methods For Mold Creation In Moldflow Simulation

    Directory of Open Access Journals (Sweden)

    Kaushikbhai C. Parmar

    2017-04-01

    Simulation gives different results when using different methods for the same problem. The Autodesk Moldflow Simulation software provides two different facilities for creating the mold for the simulation of the injection molding process: the mold can be created inside Moldflow, or it can be imported as a CAD file. The aim of this paper is to study the differences in simulation results, such as mold temperature, part temperature, deflection in different directions, simulation time, and coolant temperature, for these two methods.

  2. Comparison of multiple-criteria decision-making methods - results of simulation study

    Directory of Open Access Journals (Sweden)

    Michał Adamczak

    2016-12-01

    Background: Today, both researchers and practitioners have many methods for supporting the decision-making process. Due to the conditions in which supply chains function, the most interesting are multi-criteria methods. The use of sophisticated methods for supporting decisions requires the parameterization and execution of calculations that are often complex. So is it efficient to use sophisticated methods? Methods: The authors of the publication compared two popular multi-criteria decision-making methods: the Weighted Sum Model (WSM) and the Analytic Hierarchy Process (AHP). A simulation study reflects these two decision-making methods. Input data for this study was a set of criteria weights and the value of each alternative in terms of each criterion. Results: The iGrafx Process for Six Sigma simulation software recreated how both multiple-criteria decision-making methods (WSM and AHP) function. The result of the simulation was a numerical value defining the preference of each of the alternatives according to the WSM and AHP methods. The alternative producing a result of higher numerical value was considered preferred, according to the selected method. In the analysis of the results, the relationship between the values of the parameters and the difference in the results presented by both methods was investigated. Statistical methods, including hypothesis testing, were used for this purpose. Conclusions: The simulation study findings prove that the results obtained with the use of the two multiple-criteria decision-making methods are very similar. Differences occurred more frequently in lower-value parameters from the "value of each alternative" group and higher-value parameters from the "weight of criteria" group.
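
    A minimal sketch of the two scoring schemes (invented alternatives, criterion values and weights; not the iGrafx model used in the study): WSM ranks alternatives by a weighted sum of criterion values, while AHP derives priorities from pairwise comparison matrices via the principal eigenvector.

```python
import numpy as np

def wsm_scores(values, weights):
    # Weighted Sum Model: preference = sum over criteria of weight * value
    return values @ weights

def ahp_priority(matrix):
    # principal-eigenvector priorities of a pairwise comparison matrix
    eigvals, eigvecs = np.linalg.eig(matrix)
    v = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    return v / v.sum()

def ahp_scores(values, weights):
    # AHP with ratio-based pairwise matrices built from the raw criterion values;
    # the criteria weights are taken as given, mirroring the study's input data
    n_alt, n_crit = values.shape
    local = np.column_stack([
        ahp_priority(np.outer(values[:, c], 1.0 / values[:, c]))
        for c in range(n_crit)
    ])
    return local @ weights

values = np.array([[7.0, 5.0, 9.0],    # alternative A (hypothetical data)
                   [6.0, 8.0, 7.0]])   # alternative B
weights = np.array([0.5, 0.3, 0.2])    # criteria weights

print("WSM preferences:", wsm_scores(values, weights))
print("AHP preferences:", ahp_scores(values, weights))
```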

  3. A method for data handling numerical results in parallel OpenFOAM simulations

    International Nuclear Information System (INIS)

    Anton, Alin (Faculty of Automatic Control and Computing, Politehnica University of Timişoara, 2nd Vasile Pârvan Ave., 300223, TM Timişoara, Romania, alin.anton@cs.upt.ro (Romania)); Muntean, Sebastian (Center for Advanced Research in Engineering Science, Romanian Academy – Timişoara Branch, 24th Mihai Viteazu Ave., 300221, TM Timişoara (Romania))

    2015-01-01

    Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 Gb of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.

  4. A method for data handling numerical results in parallel OpenFOAM simulations

    Energy Technology Data Exchange (ETDEWEB)

    Anton, Alin [Faculty of Automatic Control and Computing, Politehnica University of Timişoara, 2nd Vasile Pârvan Ave., 300223, TM Timişoara, Romania, alin.anton@cs.upt.ro (Romania)]; Muntean, Sebastian [Center for Advanced Research in Engineering Science, Romanian Academy – Timişoara Branch, 24th Mihai Viteazu Ave., 300221, TM Timişoara (Romania)]

    2015-12-31

    Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 Gb of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.

  5. SPACE CHARGE SIMULATION METHODS INCORPORATED IN SOME MULTI-PARTICLE TRACKING CODES AND THEIR RESULTS COMPARISON

    International Nuclear Information System (INIS)

    BEEBE-WANG, J.; LUCCIO, A.U.; D'IMPERIO, N.; MACHIDA, S.

    2002-01-01

    Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problems, the most effective way of investigating its effect is by computer simulations. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle-tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed.

  6. SPACE CHARGE SIMULATION METHODS INCORPORATED IN SOME MULTI-PARTICLE TRACKING CODES AND THEIR RESULTS COMPARISON.

    Energy Technology Data Exchange (ETDEWEB)

    BEEBE-WANG, J.; LUCCIO, A.U.; D'IMPERIO, N.; MACHIDA, S.

    2002-06-03

    Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problems, the most effective way of investigating its effect is by computer simulations. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle-tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed.

  7. Simulation of Rossi-α method with analog Monte-Carlo method

    International Nuclear Information System (INIS)

    Lu Yuzhao; Xie Qilin; Song Lingli; Liu Hangang

    2012-01-01

    An analog Monte Carlo code for simulating the Rossi-α method, based on Geant4, was developed. The prompt neutron decay constant α of six metal uranium configurations at Oak Ridge National Laboratory was calculated. α was also calculated by the burst-neutron method, and the result was consistent with that of the Rossi-α method. There are differences between the results of the analog Monte Carlo simulation and the experiment; the reason for the differences is the gaps between uranium layers. The influence of the gaps decreases as the sub-criticality deepens. The relative difference between the results of the analog Monte Carlo simulation and the experiment changes from 19% to 0.19%. (authors)
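
    For context, the Rossi-α analysis fits the histogram of time differences between neutron detections with a flat accidental background plus an exponential whose decay rate is the prompt neutron decay constant α. The sketch below fits synthetic counts with invented numbers; it is not the Geant4-based code described in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def rossi_alpha(t, A, B, alpha):
    # flat accidental background A plus correlated-chain term B*exp(-alpha*t)
    return A + B * np.exp(-alpha * t)

# synthetic time-difference histogram (hypothetical values, for illustration only)
t = np.linspace(0.0, 200e-6, 100)                    # gate centres [s]
counts = np.random.poisson(rossi_alpha(t, 50.0, 400.0, 4.0e4)).astype(float)

popt, _ = curve_fit(rossi_alpha, t, counts, p0=(40.0, 300.0, 1.0e4))
print("fitted prompt neutron decay constant alpha =", popt[2], "1/s")
```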

  8. Summarizing Simulation Results using Causally-relevant States

    Science.gov (United States)

    Parikh, Nidhi; Marathe, Madhav; Swarup, Samarth

    2016-01-01

    As increasingly large-scale multiagent simulations are being implemented, new methods are becoming necessary to make sense of the results of these simulations. Even concisely summarizing the results of a given simulation run is a challenge. Here we pose this as the problem of simulation summarization: how to extract the causally-relevant descriptions of the trajectories of the agents in the simulation. We present a simple algorithm to compress agent trajectories through state space by identifying the state transitions which are relevant to determining the distribution of outcomes at the end of the simulation. We present a toy example to illustrate the working of the algorithm, and then apply it to a complex simulation of a major disaster in an urban area. PMID:28042620

  9. 2-d Simulations of Test Methods

    DEFF Research Database (Denmark)

    Thrane, Lars Nyholm

    2004-01-01

    One of the main obstacles for the further development of self-compacting concrete is to relate the fresh concrete properties to the form filling ability. Therefore, simulation of the form filling ability will provide a powerful tool in obtaining this goal. In this paper, a continuum mechanical approach is presented by showing initial results from 2-d simulations of the empirical test methods slump flow and L-box. This method assumes a homogeneous material, which is expected to correspond to particle suspensions, e.g. concrete, when it remains stable. The simulations have been carried out using both a Newton and a Bingham model for characterisation of the rheological properties of the concrete. From the results, it is expected that both the slump flow and L-box can be simulated quite accurately when the model is extended to 3-d and the concrete is characterised according to the Bingham...
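
    For reference, the Bingham model mentioned above characterises the fresh concrete by a yield stress and a plastic viscosity; a standard statement of the constitutive relation (general background, not quoted from the paper) is

```latex
\tau = \tau_0 + \mu_{\mathrm{pl}}\,\dot{\gamma} \quad \text{for } \tau > \tau_0,
\qquad \dot{\gamma} = 0 \quad \text{for } \tau \le \tau_0,
```

    where \tau_0 is the yield stress, \mu_{\mathrm{pl}} the plastic viscosity and \dot{\gamma} the shear rate, so that two parameters suffice to characterise the material (a Newtonian fluid is the special case \tau_0 = 0).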

  10. Collaborative simulation method with spatiotemporal synchronization process control

    Science.gov (United States)

    Zou, Yisheng; Ding, Guofu; Zhang, Weihua; Zhang, Jian; Qin, Shengfeng; Tan, John Kian

    2016-10-01

    When designing a complex mechatronics system, such as a high speed train, it is relatively difficult to effectively simulate the entire system's dynamic behavior because it involves multi-disciplinary subsystems. Currently, the most practical approach for multi-disciplinary simulation is the interface-based coupling simulation method, but it faces a twofold challenge: spatial and temporal unsynchronization among the multi-directional coupling simulations of subsystems. A new collaborative simulation method with spatiotemporal synchronization process control is proposed for the coupled simulation of a given complex mechatronics system across multiple subsystems on different platforms. The method consists of 1) a coupler-based coupling mechanism to define the interfacing and interaction mechanisms among subsystems, and 2) a simulation process control algorithm to realize the coupling simulation in a spatiotemporally synchronized manner. The test results from a case study show that the proposed method 1) can certainly be used to simulate the interactions of subsystems under different simulation conditions in an engineering system, and 2) effectively supports multi-directional coupling simulation among multi-disciplinary subsystems. This method has been successfully applied in China's high speed train design and development processes, demonstrating that it can be applied to a wide range of engineering system design and simulation tasks with improved efficiency and effectiveness.

  11. Presenting simulation results in a nested loop plot.

    Science.gov (United States)

    Rücker, Gerta; Schwarzer, Guido

    2014-12-12

    Statisticians investigate new methods in simulations to evaluate their properties for future real data applications. Results are often presented in a number of figures, e.g., Trellis plots. We had conducted a simulation study on six statistical methods for estimating the treatment effect in binary outcome meta-analyses, where selection bias (e.g., publication bias) was suspected because of apparent funnel plot asymmetry. We varied five simulation parameters: true treatment effect, extent of selection, event proportion in control group, heterogeneity parameter, and number of studies in meta-analysis. In combination, this yielded a total number of 768 scenarios. To present all results using Trellis plots, 12 figures were needed. Choosing bias as criterion of interest, we present a 'nested loop plot', a diagram type that aims to have all simulation results in one plot. The idea was to bring all scenarios into a lexicographical order and arrange them consecutively on the horizontal axis of a plot, whereas the treatment effect estimate is presented on the vertical axis. The plot illustrates how parameters simultaneously influenced the estimate. It can be combined with a Trellis plot in a so-called hybrid plot. Nested loop plots may also be applied to other criteria such as the variance of estimation. The nested loop plot, similar to a time series graph, summarizes all information about the results of a simulation study with respect to a chosen criterion in one picture and provides a suitable alternative or an addition to Trellis plots.
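
    A minimal sketch of the plotting idea (with an invented three-factor design and random stand-in results, not the authors' 768-scenario meta-analysis study): scenarios are brought into lexicographic order along the horizontal axis, the criterion is plotted on the vertical axis, and step traces underneath indicate which parameter level each scenario uses.

```python
import itertools
import numpy as np
import matplotlib.pyplot as plt

# hypothetical simulation factors (small grid for illustration)
factors = {"true_effect": [0.0, 0.5, 1.0],
           "selection":   [0.0, 0.4, 0.8],
           "n_studies":   [5, 10, 20]}

scenarios = list(itertools.product(*factors.values()))   # lexicographic order
x = np.arange(len(scenarios))
rng = np.random.default_rng(0)
bias = rng.normal(0.0, 0.05, size=len(scenarios))        # stand-in for the criterion

fig, ax = plt.subplots(figsize=(9, 3))
ax.plot(x, bias, marker="o", lw=1)
ax.axhline(0.0, color="grey", lw=0.5)
ax.set_xlabel("scenario (lexicographic order over simulation parameters)")
ax.set_ylabel("bias of estimate")

# nested step traces below the data show the parameter level of each scenario
for k, (name, levels) in enumerate(factors.items()):
    codes = np.array([levels.index(s[k]) for s in scenarios])
    ax.step(x, codes * 0.05 - 0.45 - 0.2 * k, where="mid", label=name)
ax.legend(loc="lower right", fontsize=7)
plt.tight_layout()
plt.show()
```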

  12. Atmosphere Re-Entry Simulation Using Direct Simulation Monte Carlo (DSMC) Method

    Directory of Open Access Journals (Sweden)

    Francesco Pellicani

    2016-05-01

    Aerothermodynamic investigations of hypersonic re-entry vehicles provide fundamental information to other important disciplines, like materials and structures, assisting the development of thermal protection systems (TPS) that are efficient and of low weight. In the transitional flow regime, where thermal and chemical equilibrium is almost absent, a new numerical method for such studies has been introduced, the direct simulation Monte Carlo (DSMC) technique. The acceptance and applicability of the DSMC method have increased significantly in the 50 years since its invention thanks to the increase in computer speed and to parallel computing. Nevertheless, further verification and validation efforts are needed to lead to its greater acceptance. In this study, the Monte Carlo simulators OpenFOAM and SPARTA have been studied and benchmarked against numerical and theoretical data for inert and chemically reactive flows, and the same will be done against experimental data in the near future. The results show the validity of the data found with the DSMC. The best settings of the fundamental parameters used by a DSMC simulator are presented for each software package, and they are compared with the guidelines deriving from the theory behind the Monte Carlo method. In particular, the number of particles per cell was found to be the most relevant parameter for achieving valid and optimized results. It is shown that a simulation with a mean value of one particle per cell gives sufficiently good results with very low computational resources. This achievement suggests reconsidering the appropriate investigation method in the transitional regime, where both the direct simulation Monte Carlo (DSMC) and computational fluid dynamics (CFD) can work, but with a different computational effort.

  13. Investigation of error estimation method of observational data and comparison method between numerical and observational results toward V and V of seismic simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio; Kawakami, Yoshiaki; Nakajima, Norihiro

    2017-01-01

    The method to estimate errors included in observational data and the method to compare numerical results with observational results are investigated toward the verification and validation (V and V) of a seismic simulation. For the error estimation method, 144 publications from the past 5 years (2010 to 2014) in the structural engineering and earthquake engineering fields, where descriptions of acceleration data are frequent, are surveyed. As a result, it is found that processes to remove components regarded as errors from observational data are used in about 30% of those publications. Errors are caused by the resolution, the linearity, the temperature coefficient for sensitivity, the temperature coefficient for zero shift, the transverse sensitivity, the seismometer property, the aliasing, and so on. Those processes can be exploited to estimate errors individually. For the method to compare numerical results with observational results, public materials of the ASME V and V Symposium 2012-2015, their references, and the above 144 publications are surveyed. As a result, it is found that six methods have been mainly proposed in existing research. By evaluating those methods against nine items, their advantages and disadvantages are summarized. No method is yet well established, so it is necessary to employ the existing methods while compensating for their disadvantages and/or to search for a novel method. (author)

  14. Real-time hybrid simulation using the convolution integral method

    International Nuclear Information System (INIS)

    Kim, Sung Jig; Christenson, Richard E; Wojtkiewicz, Steven F; Johnson, Erik A

    2011-01-01

    This paper proposes a real-time hybrid simulation method that will allow complex systems to be tested within the hybrid test framework by employing the convolution integral (CI) method. The proposed CI method is potentially transformative for real-time hybrid simulation. The CI method can allow real-time hybrid simulation to be conducted regardless of the size and complexity of the numerical model and for numerical stability to be ensured in the presence of high frequency responses in the simulation. This paper presents the general theory behind the proposed CI method and provides experimental verification of the proposed method by comparing the CI method to the current integration time-stepping (ITS) method. Real-time hybrid simulation is conducted in the Advanced Hazard Mitigation Laboratory at the University of Connecticut. A seismically excited two-story shear frame building with a magneto-rheological (MR) fluid damper is selected as the test structure to experimentally validate the proposed method. The building structure is numerically modeled and simulated, while the MR damper is physically tested. Real-time hybrid simulation using the proposed CI method is shown to provide accurate results
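
    The core idea can be sketched as a discrete Duhamel (convolution) integral: at every step the displacement of the numerical substructure is the convolution of its pre-computed impulse response with the history of applied force (external excitation minus the measured restoring force of the physical substructure). The single-degree-of-freedom code below uses made-up parameters and a placeholder for the measured force; it is an illustration of the principle, not the authors' real-time implementation.

```python
import numpy as np

def impulse_response(m, c, k, dt, n):
    # unit-impulse response of a damped SDOF numerical substructure
    wn = np.sqrt(k / m)
    zeta = c / (2.0 * m * wn)
    wd = wn * np.sqrt(1.0 - zeta**2)
    t = np.arange(n) * dt
    return np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)

def ci_step(h, f_hist, dt):
    # displacement at the current step via the discrete convolution integral
    n = len(f_hist)
    return dt * np.dot(h[:n][::-1], f_hist)

dt, n_steps = 0.001, 2000
h = impulse_response(m=1000.0, c=200.0, k=4.0e5, dt=dt, n=n_steps)

f_hist = []
for i in range(n_steps):
    f_measured = 0.0                                     # placeholder for the measured damper force
    f_ext = 100.0 * np.sin(2.0 * np.pi * 1.0 * i * dt)   # external excitation
    f_hist.append(f_ext - f_measured)
    x = ci_step(h, np.array(f_hist), dt)                 # command displacement for this step
print("final displacement [m]:", x)
```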

  15. Reconstructing the ideal results of a perturbed analog quantum simulator

    Science.gov (United States)

    Schwenk, Iris; Reiner, Jan-Michael; Zanker, Sebastian; Tian, Lin; Leppäkangas, Juha; Marthaler, Michael

    2018-04-01

    Well-controlled quantum systems can potentially be used as quantum simulators. However, a quantum simulator is inevitably perturbed by coupling to additional degrees of freedom. This constitutes a major roadblock to useful quantum simulations. So far there are only limited means to understand the effect of perturbation on the results of quantum simulation. Here we present a method which, in certain circumstances, allows for the reconstruction of the ideal result from measurements on a perturbed quantum simulator. We consider extracting the value of the correlator ⟨Ô_i(t)Ô_j(0)⟩ from the simulated system, where the Ô_i are the operators which couple the system to its environment. The ideal correlator can be straightforwardly reconstructed by using statistical knowledge of the environment, if any n-time correlator of operators Ô_i of the ideal system can be written as products of two-time correlators. We give an approach to verify the validity of this assumption experimentally by additional measurements on the perturbed quantum simulator. The proposed method can allow for reliable quantum simulations with systems subjected to environmental noise without adding an overhead to the quantum system.

  16. Matrix method for acoustic levitation simulation.

    Science.gov (United States)

    Andrade, Marco A B; Perez, Nicolas; Buiochi, Flavio; Adamowski, Julio C

    2011-08-01

    A matrix method is presented for simulating acoustic levitators. A typical acoustic levitator consists of an ultrasonic transducer and a reflector. The matrix method is used to determine the potential for acoustic radiation force that acts on a small sphere in the standing wave field produced by the levitator. The method is based on the Rayleigh integral and it takes into account the multiple reflections that occur between the transducer and the reflector. The potential for acoustic radiation force obtained by the matrix method is validated by comparing the matrix method results with those obtained by the finite element method when using an axisymmetric model of a single-axis acoustic levitator. After validation, the method is applied in the simulation of a noncontact manipulation system consisting of two 37.9-kHz Langevin-type transducers and a plane reflector. The manipulation system allows control of the horizontal position of a small levitated sphere from -6 mm to 6 mm, which is done by changing the phase difference between the two transducers. The horizontal position of the sphere predicted by the matrix method agrees with the horizontal positions measured experimentally with a charge-coupled device camera. The main advantage of the matrix method is that it allows simulation of non-symmetric acoustic levitators without requiring much computational effort.

  17. Methods for simulating turbulent phase screen

    International Nuclear Information System (INIS)

    Zhang Jianzhu; Zhang Feizhou; Wu Yi

    2012-01-01

    Some methods for simulating turbulent phase screens are summarized, and their characteristics are analyzed by calculating the phase structure function, decomposing the phase screens into Zernike polynomials, and simulating laser propagation in the atmosphere. The analysis shows that phase screens simulated by the FFT method contain the turbulent high-frequency components well but contain little of the low-frequency components. Screens simulated by the Zernike method contain the low-frequency components well, but the high-frequency components are not sufficiently contained. The high-frequency content can be improved by increasing the order of the Zernike polynomials, but it then mainly lies in the edge area. Compared with the two methods above, the fractal method is a better way to simulate turbulent phase screens. Judging by the radius of the focal spot and the variance of the focal-spot jitter, all methods except the fractal method have limitations. Combining the FFT and Zernike methods, or combining the FFT method with self-similar theory, is an effective and appropriate way to simulate turbulent phase screens. In general, the fractal method is probably the best way. (authors)
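
    As a concrete example of the FFT (spectral) approach discussed above, the sketch below filters complex white noise with the square root of a Kolmogorov phase power spectrum and inverse-transforms it. The scaling follows one common convention and is approximate; as noted in the abstract, screens generated this way under-represent the low-frequency content unless subharmonics or another correction are added.

```python
import numpy as np

def fft_phase_screen(N, delta, r0, seed=None):
    # Kolmogorov phase screen via the FFT (spectral) method
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(N, d=delta)               # spatial frequencies [1/m]
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    f = np.hypot(FX, FY)
    f[0, 0] = np.inf                              # suppress the undefined piston term
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)   # Kolmogorov phase PSD
    cn = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    cn *= np.sqrt(psd) / (N * delta)              # scale spectral amplitudes by sqrt(PSD)*df
    return np.real(np.fft.ifft2(cn)) * N ** 2     # phase screen in radians

phi = fft_phase_screen(N=256, delta=0.01, r0=0.1, seed=0)
print("phase rms [rad]:", phi.std())
```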

  18. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
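
    A stripped-down version of such a simulation (with invented event parameters) generates a behaviour stream at one-second resolution, scores it with momentary time sampling (MTS), partial-interval recording (PIR) and whole-interval recording (WIR), and compares each estimate with the true proportion of time the behaviour occurred.

```python
import numpy as np

def simulate(observation_s=600, interval_s=10, onset_rate_per_s=0.02,
             event_s=5.0, seed=0):
    rng = np.random.default_rng(seed)
    # behaviour stream at 1-second resolution: True while the target event occurs
    stream = np.zeros(observation_s, dtype=bool)
    t = 0.0
    while t < observation_s:
        t += rng.exponential(1.0 / onset_rate_per_s)        # next event onset
        stream[int(t):int(min(t + event_s, observation_s))] = True
    true_prop = stream.mean()

    n_int = observation_s // interval_s
    intervals = stream[:n_int * interval_s].reshape(n_int, interval_s)
    mts = intervals[:, -1].mean()        # momentary time sampling: last instant only
    pir = intervals.any(axis=1).mean()   # partial interval: any occurrence (overestimates)
    wir = intervals.all(axis=1).mean()   # whole interval: full occupancy (underestimates)
    return true_prop, mts, pir, wir

true_prop, mts, pir, wir = simulate()
print(f"true={true_prop:.3f}  MTS={mts:.3f}  PIR={pir:.3f}  WIR={wir:.3f}")
```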

  19. Factorization method for simulating QCD at finite density

    International Nuclear Information System (INIS)

    Nishimura, Jun

    2003-01-01

    We propose a new method for simulating QCD at finite density. The method is based on a general factorization property of distribution functions of observables, and it is therefore applicable to any system with a complex action. The so-called overlap problem is completely eliminated by the use of constrained simulations. We test this method in a Random Matrix Theory for finite density QCD, where we are able to reproduce the exact results for the quark number density. (author)

  20. A particle-based method for granular flow simulation

    KAUST Repository

    Chang, Yuanzhang; Bao, Kai; Zhu, Jian; Wu, Enhua

    2012-01-01

    We present a new particle-based method for granular flow simulation. In the method, a new elastic stress term, which is derived from a modified form of Hooke's law, is included in the momentum governing equation to handle the friction of granular materials. Viscosity force is also added to simulate the dynamic friction for the purpose of smoothing the velocity field and further maintaining the simulation stability. Benefiting from the Lagrangian nature of the SPH method, large flow deformation can be well handled easily and naturally. In addition, a signed distance field is also employed to enforce the solid boundary condition. The experimental results show that the proposed method is effective and efficient for handling the flow of granular materials, and different kinds of granular behaviors can be well simulated by adjusting just one parameter. © 2012 Science China Press and Springer-Verlag Berlin Heidelberg.

  1. A particle-based method for granular flow simulation

    KAUST Repository

    Chang, Yuanzhang

    2012-03-16

    We present a new particle-based method for granular flow simulation. In the method, a new elastic stress term, which is derived from a modified form of Hooke's law, is included in the momentum governing equation to handle the friction of granular materials. Viscosity force is also added to simulate the dynamic friction for the purpose of smoothing the velocity field and further maintaining the simulation stability. Benefiting from the Lagrangian nature of the SPH method, large flow deformation can be well handled easily and naturally. In addition, a signed distance field is also employed to enforce the solid boundary condition. The experimental results show that the proposed method is effective and efficient for handling the flow of granular materials, and different kinds of granular behaviors can be well simulated by adjusting just one parameter. © 2012 Science China Press and Springer-Verlag Berlin Heidelberg.

  2. Simulation of tunneling construction methods of the Cisumdawu toll road

    Science.gov (United States)

    Abduh, Muhamad; Sukardi, Sapto Nugroho; Ola, Muhammad Rusdian La; Ariesty, Anita; Wirahadikusumah, Reini D.

    2017-11-01

    Simulation can be used as a tool for planning and analysis of a construction method. Using simulation techniques, a contractor could optimally design the resources associated with a construction method and compare it to other methods based on several criteria, such as productivity, waste, and cost. This paper discusses the use of simulation of the Norwegian Method of Tunneling (NMT) for a 472-meter tunneling work in the Cisumdawu Toll Road project. Primary and secondary data were collected to provide useful information for the simulation as well as problems that may be faced by the contractor. The method was modelled using CYCLONE and then simulated using WebCYCLONE. The simulation could show the duration of the project from the duration model of each work task, which was based on a literature review, machine productivity, and several assumptions. The results of the simulation could also show the total cost of the project, which was modeled based on construction and building unit-cost journals and online websites of local and international suppliers. The analysis of the advantages and disadvantages of the method was conducted based on its productivity, wastes, and cost. The simulation concluded that the total cost of this operation is about Rp. 900,437,004,599 and the total duration of the tunneling operation is 653 days. The results of the simulation will be used as a recommendation to the contractor before the implementation of the already selected tunneling operation.

  3. High viscosity fluid simulation using particle-based method

    KAUST Repository

    Chang, Yuanzhang

    2011-03-01

    We present a new particle-based method for high viscosity fluid simulation. In the method, a new elastic stress term, which is derived from a modified form of Hooke's law, is included in the traditional Navier-Stokes equation to simulate the movements of high viscosity fluids. Benefiting from the Lagrangian nature of the Smoothed Particle Hydrodynamics method, large flow deformation can be well handled easily and naturally. In addition, in order to eliminate the particle deficiency problem near the boundary, ghost particles are employed to enforce the solid boundary condition. Compared with Finite Element Methods with complicated and time-consuming remeshing operations, our method is much more straightforward to implement. Moreover, our method doesn't need to store and compare to an initial rest state. The experimental results show that the proposed method is effective and efficient in handling the movements of highly viscous flows, and a large variety of different kinds of fluid behaviors can be well simulated by adjusting just one parameter. © 2011 IEEE.

  4. Comparative Study on Two Melting Simulation Methods: Melting Curve of Gold

    International Nuclear Information System (INIS)

    Liu Zhong-Li; Li Rui; Sun Jun-Sheng; Zhang Xiu-Lu; Cai Ling-Cang

    2016-01-01

    Melting simulation methods are of crucial importance for determining the melting temperature of materials efficiently. A high-efficiency melting simulation method saves much simulation time and computational resources. To compare the efficiency of our newly developed shock melting (SM) method with that of the well-established two-phase (TP) method, we calculate the high-pressure melting curve of Au using the two methods based on optimally selected interatomic potentials. Although we only use 640 atoms to determine the melting temperature of Au in the SM method, the resulting melting curve accords very well with the results from the TP method using many more atoms. This shows that a much smaller system size in the SM method can still achieve a fully converged melting curve compared with the TP method, implying the robustness and efficiency of the SM method. (paper)

  5. A regularized vortex-particle mesh method for large eddy simulation

    Science.gov (United States)

    Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.

    2017-11-01

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT-based solver for the Poisson equation. Arbitrarily high order is achieved through regularization of singular Green's function solutions to the Poisson equation, and recently we have derived novel high-order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations, hence we use the method for Large Eddy Simulation by including a dynamic subfilter-scale model based on test-filters compatible with the aforementioned regularization functions. Further, the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000 and the obtained results are compared to results from the literature.

  6. GEM simulation methods development

    International Nuclear Information System (INIS)

    Tikhonov, V.; Veenhof, R.

    2002-01-01

    A review of methods used in the simulation of processes in gas electron multipliers (GEMs) and in the accurate calculation of detector characteristics is presented. Such detector characteristics as effective gas gain, transparency, charge collection and losses have been calculated and optimized for a number of GEM geometries and compared with experiment. A method and a new special program for calculations of detector macro-characteristics such as signal response in a real detector readout structure, and spatial and time resolution of detectors have been developed and used for detector optimization. A detailed development of signal induction on readout electrodes and electronics characteristics are included in the new program. A method for the simulation of charging-up effects in GEM detectors is described. All methods show good agreement with experiment

  7. Method for numerical simulation of two-term exponentially correlated colored noise

    International Nuclear Information System (INIS)

    Yilmaz, B.; Ayik, S.; Abe, Y.; Gokalp, A.; Yilmaz, O.

    2006-01-01

    A method for the numerical simulation of two-term exponentially correlated colored noise is proposed. The method is an extension of the traditional method for one-term exponentially correlated colored noise. The validity of the algorithm is tested by comparing numerical simulations with analytical results in two physical applications.
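
    One way to realize the two-term case, sketched below under the assumption that the target autocorrelation is C(t) = a1·exp(-|t|/τ1) + a2·exp(-|t|/τ2), is to sum two independent exponentially correlated (Ornstein-Uhlenbeck-type) processes, each generated with the exact one-step update used in the traditional one-term algorithm.

```python
import numpy as np

def two_term_colored_noise(n_steps, dt, amps, taus, seed=0):
    # noise with autocorrelation C(t) = a1*exp(-|t|/tau1) + a2*exp(-|t|/tau2),
    # built as the sum of two independent Ornstein-Uhlenbeck processes
    rng = np.random.default_rng(seed)
    noise = np.zeros(n_steps)
    for a, tau in zip(amps, taus):
        rho = np.exp(-dt / tau)                       # exact one-step decay factor
        x = np.sqrt(a) * rng.standard_normal()        # stationary initial value
        for i in range(n_steps):
            x = rho * x + np.sqrt(a * (1.0 - rho**2)) * rng.standard_normal()
            noise[i] += x
    return noise

eta = two_term_colored_noise(100000, dt=0.01, amps=(1.0, 0.5), taus=(0.2, 2.0))
print("sample variance (target a1 + a2 = 1.5):", eta.var())
```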

  8. An Efficient Simulation Method for Rare Events

    KAUST Repository

    Rached, Nadhir B.

    2015-01-07

    Estimating the probability that a sum of random variables (RVs) exceeds a given threshold is a well-known challenging problem. Closed-form expressions for the sum distribution do not generally exist, which has led to an increasing interest in simulation approaches. A crude Monte Carlo (MC) simulation is the standard technique for the estimation of this type of probability. However, this approach is computationally expensive, especially when dealing with rare events. Variance reduction techniques are alternative approaches that can improve the computational efficiency of naive MC simulations. We propose an Importance Sampling (IS) simulation technique based on the well-known hazard rate twisting approach, that presents the advantage of being asymptotically optimal for any arbitrary RVs. The wide scope of applicability of the proposed method is mainly due to our particular way of selecting the twisting parameter. It is worth observing that this interesting feature is rarely satisfied by variance reduction algorithms whose performances were only proven under some restrictive assumptions. It comes along with a good efficiency, illustrated by some selected simulation results comparing the performance of our method with that of an algorithm based on a conditional MC technique.
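
    To illustrate why a change of measure helps for such tail probabilities, the sketch below estimates P(X1 + ... + Xn > γ) for i.i.d. exponential random variables with crude MC and with a generic exponential-twisting importance sampler; this is a textbook illustration with invented numbers, not the hazard rate twisting scheme proposed in the paper.

```python
import numpy as np

def crude_mc(n, gamma, n_samples, rng):
    s = rng.exponential(1.0, size=(n_samples, n)).sum(axis=1)
    return (s > gamma).mean()                 # almost always 0 for rare events

def is_exponential_twisting(n, gamma, n_samples, rng):
    theta = 1.0 - n / gamma                   # tilt so the twisted mean of the sum is gamma
    x = rng.exponential(1.0 / (1.0 - theta), size=(n_samples, n))
    s = x.sum(axis=1)
    lr = np.exp(-theta * s) / (1.0 - theta) ** n   # likelihood ratio f(x)/f_theta(x)
    return ((s > gamma) * lr).mean()

rng = np.random.default_rng(0)
n, gamma, N = 10, 60.0, 100_000
print("crude MC estimate:", crude_mc(n, gamma, N, rng))
print("IS estimate      :", is_exponential_twisting(n, gamma, N, rng))
```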

  9. An Efficient Simulation Method for Rare Events

    KAUST Repository

    Rached, Nadhir B.; Benkhelifa, Fatma; Kammoun, Abla; Alouini, Mohamed-Slim; Tempone, Raul

    2015-01-01

    Estimating the probability that a sum of random variables (RVs) exceeds a given threshold is a well-known challenging problem. Closed-form expressions for the sum distribution do not generally exist, which has led to an increasing interest in simulation approaches. A crude Monte Carlo (MC) simulation is the standard technique for the estimation of this type of probability. However, this approach is computationally expensive, especially when dealing with rare events. Variance reduction techniques are alternative approaches that can improve the computational efficiency of naive MC simulations. We propose an Importance Sampling (IS) simulation technique based on the well-known hazard rate twisting approach, that presents the advantage of being asymptotically optimal for any arbitrary RVs. The wide scope of applicability of the proposed method is mainly due to our particular way of selecting the twisting parameter. It is worth observing that this interesting feature is rarely satisfied by variance reduction algorithms whose performances were only proven under some restrictive assumptions. It comes along with a good efficiency, illustrated by some selected simulation results comparing the performance of our method with that of an algorithm based on a conditional MC technique.

  10. Detector Simulation: Data Treatment and Analysis Methods

    CERN Document Server

    Apostolakis, J

    2011-01-01

    Detector Simulation in 'Data Treatment and Analysis Methods', part of 'Landolt-Börnstein - Group I Elementary Particles, Nuclei and Atoms: Numerical Data and Functional Relationships in Science and Technology, Volume 21B1: Detectors for Particles and Radiation. Part 1: Principles and Methods'. This document is part of Part 1 'Principles and Methods' of Subvolume B 'Detectors for Particles and Radiation' of Volume 21 'Elementary Particles' of Landolt-Börnstein - Group I 'Elementary Particles, Nuclei and Atoms'. It contains the Section '4.1 Detector Simulation' of Chapter '4 Data Treatment and Analysis Methods' with the content: 4.1 Detector Simulation 4.1.1 Overview of simulation 4.1.1.1 Uses of detector simulation 4.1.2 Stages and types of simulation 4.1.2.1 Tools for event generation and detector simulation 4.1.2.2 Level of simulation and computation time 4.1.2.3 Radiation effects and background studies 4.1.3 Components of detector simulation 4.1.3.1 Geometry modeling 4.1.3.2 External fields 4.1.3.3 Intro...

  11. A Ten-Step Design Method for Simulation Games in Logistics Management

    NARCIS (Netherlands)

    Fumarola, M.; Van Staalduinen, J.P.; Verbraeck, A.

    2011-01-01

    Simulation games have often been found useful as a method of inquiry to gain insight into complex system behavior and as aids for design, engineering simulation and visualization, and education. Designing simulation games is the result of creative thinking and planning, but often not the result of a ...

  12. Development of new methods for the modeling of technical systems and result evaluation for reactor safety simulation codes. Modeling, simulation models; Entwicklung neuer Methoden zur Modellierung technischer Systeme und zur Ergebnisauswertung fuer Simulationsprogramme der Reaktorsicherheit. Modellierung, Simulationsprogramme

    Energy Technology Data Exchange (ETDEWEB)

    Cester, Francesco; Deitenbeck, Helmuth; Kuentzel, Matthias; Scheuer, Josef; Voggenberger, Thomas

    2015-04-15

    The overall objective of the project is to develop a general simulation environment for program systems used in reactor safety analysis. The simulation environment provides methods for graphical modeling and evaluation of results for the simulation models. The terms graphical modeling and evaluation of results summarize computerized methods of pre- and postprocessing for the simulation models, which can assist the user in the execution of the simulation steps. The methods comprise CAD ("Computer Aided Design") based input tools, interactive user interfaces for the execution of the simulation, and the graphical representation and visualization of the simulation results. A particular focus was set on the requirements of the system code ATHLET. A CAD tool was developed that allows the specification of the 3D geometry of the plant components and the discretization with a simulation grid. The system provides interfaces to generate the input data of the codes and to export the data for the visualization software. The CAD system was applied to the modeling of a cooling circuit and the reactor pressure vessel of a PWR. For the modeling of complex systems with many components, a general-purpose graphical network editor was adapted and expanded. The editor is able to model networks with complex topology graphically by means of suitable building blocks. The network editor has been enhanced and adapted to the modeling of balance-of-plant and thermal fluid systems in ATHLET. For the visual display of the simulation results in the local context of the 3D geometry and the simulation grid, the open source program ParaView is applied, which is widely used for 3D visualization of field data, offering multiple options for displaying and analyzing the data. New methods were developed that allow the necessary conversion of the results of the reactor safety codes and the data of the CAD models. The transformed data may then be imported into ParaView and visualized.

  13. Solar panel thermal cycling testing by solar simulation and infrared radiation methods

    Science.gov (United States)

    Nuss, H. E.

    1980-01-01

    For the solar panels of the European Space Agency (ESA) satellites OTS/MAROTS and ECS/MARECS the thermal cycling tests were performed by using solar simulation methods. The performance data of the two different solar simulators used and the thermal test results are described. The solar simulation thermal cycling tests for the ECS/MARECS solar panels were carried out with the aid of a rotatable multipanel test rig, by which simultaneous testing of three solar panels was possible. As an alternative thermal test method, the capability of an infrared radiation method was studied, and infrared simulation tests for the ultralight panel and the INTELSAT 5 solar panels were performed. The setup and the characteristics of the infrared radiation unit, using a quartz lamp array of approx. 15 sq and an LN2-cooled shutter, and the thermal test results are presented. The irradiation uniformity, the solar panel temperature distribution, and the temperature change rates for both test methods are compared. Results indicate the infrared simulation is an effective solar panel thermal testing method.

  14. A Monte Carlo method and finite volume method coupled optical simulation method for parabolic trough solar collectors

    International Nuclear Information System (INIS)

    Liang, Hongbo; Fan, Man; You, Shijun; Zheng, Wandong; Zhang, Huan; Ye, Tianzhen; Zheng, Xuejing

    2017-01-01

    Highlights:
    • Four optical models for parabolic trough solar collectors were compared in detail.
    • Characteristics of the Monte Carlo Method and the Finite Volume Method were discussed.
    • A novel method was presented combining the advantages of the different models.
    • The method was suited to optical analysis of collectors with different geometries.
    • A new kind of cavity receiver was simulated using the novel method.

    Abstract: The PTC (parabolic trough solar collector) is widely used for space heating, heat-driven refrigeration, solar power, etc. The concentrated solar radiation is the only energy source for a PTC, thus its optical performance significantly affects the collector efficiency. In this study, four different optical models were constructed, validated and compared in detail. On this basis, a novel coupled method was presented by combining advantages of these models, which was suited to carrying out a mass of optical simulations of collectors with different geometrical parameters rapidly and accurately. Based on these simulation results, the optimal configuration of a collector with the highest efficiency can be determined. Thus, this method is useful for collector optimization and design. In the four models, MCM (Monte Carlo Method) and FVM (Finite Volume Method) were used to initialize the photon distribution, while CPEM (Change Photon Energy Method) and MCM were adopted to describe the processes of reflection, transmission and absorption. For simulating reflection, transmission and absorption, CPEM was more efficient than MCM, so it was utilized in the coupled method. For photon distribution initialization, FVM saved running time and computational effort, whereas it needed a suitable grid configuration. MCM only required a total number of rays for simulation, whereas it needed higher computing cost and its results fluctuated in multiple runs. In the novel coupled method, the grid configuration for FVM was optimized according to the “true values” from MCM of

  15. Method of simulating dose reduction for digital radiographic systems

    International Nuclear Information System (INIS)

    Baath, M.; Haakansson, M.; Tingberg, A.; Maansson, L. G.

    2005-01-01

    The optimisation of image quality vs. radiation dose is an important task in medical imaging. To obtain maximum validity of the optimisation, it must be based on clinical images. Images at different dose levels can then be obtained either by collecting patient images at the different dose levels under investigation - requiring additional exposures and permission from an ethics committee - or by manipulating images to simulate different dose levels. The aim of the present work was to develop a method of simulating dose reduction for digital radiographic systems. The method uses information about the detective quantum efficiency and noise power spectrum at the original and simulated dose levels to create an image containing filtered noise. When added to the original image, this results in an image with noise which, in terms of frequency content, agrees with the noise present in an image collected at the simulated dose level. To increase the validity, the method takes local dose variations in the original image into account. The method was tested on a computed radiography system and was shown to produce images with noise behaviour similar to that of images actually collected at the simulated dose levels. The method can, therefore, be used to modify an image collected at one dose level so that it simulates an image of the same object collected at any lower dose level. (authors)
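
    A simplified sketch of the idea, assuming signal-proportional quantum noise and a Gaussian-shaped noise power spectrum in place of the measured DQE/NPS of a real detector, adds spatially filtered noise whose local variance makes up the difference between the original and the simulated dose level.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_dose_reduction(image, dose_factor, gain=1.0, nps_sigma_px=1.2, rng=None):
    # add locally scaled, spectrally shaped noise so that `image` mimics an
    # acquisition at dose_factor * original dose (0 < dose_factor < 1)
    rng = rng or np.random.default_rng()
    var_orig = gain * image                           # quantum-noise variance at original dose
    var_extra = var_orig * (1.0 / dose_factor - 1.0)  # extra variance needed at the lower dose
    shaped = gaussian_filter(rng.standard_normal(image.shape), nps_sigma_px)
    shaped /= shaped.std()                            # unit-variance, spectrally shaped noise
    return image + np.sqrt(var_extra) * shaped

img = np.full((256, 256), 1000.0)                     # hypothetical flat-field image
low = simulate_dose_reduction(img, dose_factor=0.5, rng=np.random.default_rng(1))
print("added noise std:", (low - img).std())
```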

  16. New methods in plasma simulation

    International Nuclear Information System (INIS)

    Mason, R.J.

    1990-01-01

    The development of implicit methods of particle-in-cell (PIC) computer simulation in recent years, and their merger with older hybrid methods have created a new arsenal of simulation techniques for the treatment of complex practical problems in plasma physics. The new implicit hybrid codes are aimed at transitional problems that lie somewhere between the long time scale, high density regime associated with MHD modeling, and the short time scale, low density regime appropriate to PIC particle-in-cell techniques. This transitional regime arises in ICF coronal plasmas, in pulsed power plasma switches, in Z-pinches, and in foil implosions. Here, we outline how such a merger of implicit and hybrid methods has been carried out, specifically in the ANTHEM computer code, and demonstrate the utility of implicit hybrid simulation in applications. 25 refs., 5 figs

  17. A nondissipative simulation method for the drift kinetic equation

    International Nuclear Information System (INIS)

    Watanabe, Tomo-Hiko; Sugama, Hideo; Sato, Tetsuya

    2001-07-01

    With the aim to study the ion temperature gradient (ITG) driven turbulence, a nondissipative kinetic simulation scheme is developed and comprehensively benchmarked. The new simulation method preserving the time-reversibility of basic kinetic equations can successfully reproduce the analytical solutions of asymmetric three-mode ITG equations which are extended to provide a more general reference for benchmarking than the previous work [T.-H. Watanabe, H. Sugama, and T. Sato: Phys. Plasmas 7 (2000) 984]. It is also applied to a dissipative three-mode system, and shows a good agreement with the analytical solution. The nondissipative simulation result of the ITG turbulence accurately satisfies the entropy balance equation. Usefulness of the nondissipative method for the drift kinetic simulations is confirmed in comparisons with other dissipative schemes. (author)

  18. Adaptive implicit method for thermal compositional reservoir simulation

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, A.; Tchelepi, H.A. [Society of Petroleum Engineers, Richardson, TX (United States)]|[Stanford Univ., Palo Alto (United States)

    2008-10-15

    As the global demand for oil increases, thermal enhanced oil recovery techniques are becoming increasingly important. Numerical reservoir simulation of thermal methods such as steam assisted gravity drainage (SAGD) is complex and requires a solution of nonlinear mass and energy conservation equations on a fine reservoir grid. The most commonly used technique for solving these equations is the fully IMplicit (FIM) method, which is unconditionally stable, allowing for large timesteps in simulation. However, it is computationally expensive. On the other hand, the method known as IMplicit pressure explicit saturations, temperature and compositions (IMPEST) is computationally inexpensive, but it is only conditionally stable and restricts the timestep size. To improve the balance between the timestep size and computational cost, the thermal adaptive IMplicit (TAIM) method uses stability criteria and a switching algorithm, where some simulation variables such as pressure, saturations, temperature and compositions are treated implicitly while others are treated with explicit schemes. This presentation described ongoing research on TAIM with particular reference to thermal displacement processes, such as the stability criteria that dictate the maximum allowed timestep size for simulation based on the von Neumann linear stability analysis method; the switching algorithm that adapts the labeling of reservoir variables as implicit or explicit as a function of space and time; and complex physical behaviors such as heat and fluid convection, thermal conduction and compressibility. Key numerical results obtained by enhancing Stanford's General Purpose Research Simulator (GPRS) were also presented along with a list of research challenges. 14 refs., 2 tabs., 11 figs., 1 appendix.

  19. Nonequilibrium relaxation method – An alternative simulation strategy

    Indian Academy of Sciences (India)

    One well-established simulation strategy to study the thermal phases and transitions of a given microscopic model system is the so-called equilibrium method, in which one first realizes the equilibrium ensemble of a finite system and then extrapolates the results to the infinite system. This equilibrium method traces over the ...

  20. Activity coefficients from molecular simulations using the OPAS method

    Science.gov (United States)

    Kohns, Maximilian; Horsch, Martin; Hasse, Hans

    2017-10-01

    A method for determining activity coefficients by molecular dynamics simulations is presented. It is an extension of the OPAS (osmotic pressure for the activity of the solvent) method in previous work for studying the solvent activity in electrolyte solutions. That method is extended here to study activities of all components in mixtures of molecular species. As an example, activity coefficients in liquid mixtures of water and methanol are calculated for 298.15 K and 323.15 K at 1 bar using molecular models from the literature. These dense and strongly interacting mixtures pose a significant challenge to existing methods for determining activity coefficients by molecular simulation. It is shown that the new method yields accurate results for the activity coefficients which are in agreement with results obtained with a thermodynamic integration technique. As the partial molar volumes are needed in the proposed method, the molar excess volume of the system water + methanol is also investigated.

  1. Study on simulation methods of atrium building cooling load in hot and humid regions

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Yiqun; Li, Yuming; Huang, Zhizhong [Institute of Building Performance and Technology, Sino-German College of Applied Sciences, Tongji University, 1239 Siping Road, Shanghai 200092 (China); Wu, Gang [Weldtech Technology (Shanghai) Co. Ltd. (China)

    2010-10-15

    In recent years, highly glazed atria have become popular because of their architectural aesthetics and the advantage of introducing daylight into the interior. However, cooling load estimation of such atrium buildings is difficult due to the complex thermal phenomena that occur in the atrium space. The study aims to find a simplified method of estimating cooling loads through simulations for various types of atria in hot and humid regions. Atrium buildings are divided into different types. For every type of atrium building, both CFD and energy models are developed. A standard method and a simplified one are proposed to simulate the cooling load of atria in EnergyPlus, based on different room air temperature patterns resulting from CFD simulation. It incorporates CFD results as input into non-dimensional height room air models in EnergyPlus, and the simulation results are defined as a baseline model in order to compare with the results from the simplified method for every category of atrium building. In order to further validate the simplified method, an actual atrium office building is tested on site on a typical summer day and the measured results are compared with simulation results using the simplified method. Finally, appropriate methods for simulating different types of atrium buildings are proposed. (author)

  2. Discrete Particle Method for Simulating Hypervelocity Impact Phenomena

    Directory of Open Access Journals (Sweden)

    Erkai Watson

    2017-04-01

    In this paper, we introduce a computational model for the simulation of hypervelocity impact (HVI) phenomena which is based on the Discrete Element Method (DEM). Our paper constitutes the first application of DEM to the modeling and simulation of impact events for velocities beyond 5 km/s. We present here the results of a systematic numerical study on HVI of solids. For modeling the solids, we use discrete spherical particles that interact with each other via potentials. In our numerical investigations we are particularly interested in the dynamics of material fragmentation upon impact. We model a typical HVI experiment configuration where a sphere strikes a thin plate and investigate the properties of the resulting debris cloud. We provide a quantitative computational analysis of the resulting debris cloud caused by impact and a comprehensive parameter study by varying key parameters of our model. We compare our findings from the simulations with recent HVI experiments performed at our institute. Our findings are that the DEM method leads to very stable, energy-conserving simulations of HVI scenarios that map the experimental setup where a sphere strikes a thin plate at hypervelocity speed. Our chosen interaction model works particularly well in the velocity range where the local stresses caused by impact shock waves markedly exceed the ultimate material strength.

  3. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods

    International Nuclear Information System (INIS)

    Berthiau, G.

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints can eventually be specified. A similar problem consists in fitting component models. In this case, the optimization variables are the model parameters and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is the simulated annealing method. This method, originating in the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been tuned using analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. We proposed, for high-dimensional problems, a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we have adapted three other methods coming from the combinatorial optimization domain - the threshold method, a genetic algorithm and the Tabu search method. The tests have been performed on the same set of test functions and the results allow a first comparison between these methods applied to continuous optimization variables. Finally, our simulated annealing program ...
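
    For readers unfamiliar with the algorithm, a minimal continuous-variable simulated annealing loop is sketched below on the classic Rastrigin test function, whose global minimum is at the origin. This is a generic sketch, not the adapted algorithm, discretization strategy or SPICE-PAC coupling described here.

```python
import math
import random

def simulated_annealing(cost, lower, upper, n_iter=20000, t0=1.0, cooling=0.999, seed=0):
    # minimise `cost` over the hyper-rectangle [lower, upper]
    rng = random.Random(seed)
    n = len(lower)
    x = [rng.uniform(lower[i], upper[i]) for i in range(n)]
    fx = cost(x)
    best, fbest, T = list(x), fx, t0
    for _ in range(n_iter):
        i = rng.randrange(n)                         # perturb one variable
        y = list(x)
        step = 0.1 * (upper[i] - lower[i]) * rng.gauss(0.0, 1.0)
        y[i] = min(max(y[i] + step, lower[i]), upper[i])
        fy = cost(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / T):   # Metropolis acceptance
            x, fx = y, fy
            if fx < fbest:
                best, fbest = list(x), fx
        T *= cooling                                 # geometric cooling schedule
    return best, fbest

def rastrigin(v):
    return 10.0 * len(v) + sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) for xi in v)

best, fbest = simulated_annealing(rastrigin, [-5.12] * 3, [5.12] * 3)
print("best point:", best, "cost:", fbest)
```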

  4. Contribution of the ultrasonic simulation to the testing methods qualification process

    International Nuclear Information System (INIS)

    Le Ber, L.; Calmon, P.; Abittan, E.

    2001-01-01

    The CEA and EDF have started a study concerning the value of simulation in the qualification of ultrasonic testing methods for nuclear components. In this framework, the simulation tools of the CEA, such as CIVA, have been tested on real inspection cases. The method and the results obtained on some examples are presented. (A.L.B.)

  5. Reliability analysis of neutron transport simulation using Monte Carlo method

    International Nuclear Information System (INIS)

    Souza, Bismarck A. de; Borges, Jose C.

    1995-01-01

    This work presents a statistical and reliability analysis covering data obtained by computer simulation of the neutron transport process, using the Monte Carlo method. A general description of the method and its applications is presented. Several simulations, corresponding to slowing-down and shielding problems, have been accomplished. The influence of the physical dimensions of the materials and of the sample size on the reliability level of the results was investigated. The objective was to optimize the sample size, in order to obtain reliable results while minimizing computation time. (author). 5 refs, 8 figs
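
    The statistical side of this record, namely how the reliability of a Monte Carlo estimate depends on the sample size, can be illustrated with the toy tally below; the 1/sqrt(N) decrease of the standard error is the behaviour that is balanced against computation time. The scored quantity is a placeholder, not a transport calculation.

```python
import numpy as np

# Illustration of how the statistical reliability of a Monte Carlo estimate
# improves with sample size (standard error ~ 1/sqrt(N)); the transport
# physics itself is not modelled here.
rng = np.random.default_rng(1)
for n in (10**3, 10**4, 10**5, 10**6):
    # Stand-in tally: fraction of "histories" scoring below an arbitrary threshold.
    scores = rng.exponential(scale=1.0, size=n) < 0.5
    p = scores.mean()
    std_err = np.sqrt(p * (1.0 - p) / n)
    print(f"N={n:>8d}  estimate={p:.4f}  standard error={std_err:.5f}")
```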

  6. Simulations of Micro Gas Flows by the DS-BGK Method

    KAUST Repository

    Li, Jun

    2011-01-01

    For gas flows in micro devices, the molecular mean free path is of the same order as the characteristic scale, making the Navier-Stokes equation invalid. Recently, some micro gas flows have been simulated by the DS-BGK method, which is convergent to the BGK equation and very efficient for low-velocity cases. As molecular reflection on the boundary is the dominant effect compared to intermolecular collisions in micro gas flows, a more realistic boundary condition, namely the CLL reflection model, is employed in the DS-BGK simulation and the influence of the accommodation coefficients used in the molecular reflection model on the results is discussed. The simulation results are verified by comparison with those of the DSMC method, which serve as the reference. Copyright © 2011 by ASME.

  7. Numerical simulation of electromagnetic wave propagation using time domain meshless method

    International Nuclear Information System (INIS)

    Ikuno, Soichiro; Fujita, Yoshihisa; Itoh, Taku; Nakata, Susumu; Nakamura, Hiroaki; Kamitani, Atsushi

    2012-01-01

    Electromagnetic wave propagation in variously shaped waveguides is simulated using the meshless time domain method (MTDM). Generally, the Finite-Difference Time-Domain (FDTD) method is applied to electromagnetic wave propagation simulation. However, the numerical domain must be divided into rectangular meshes if the FDTD method is applied. On the other hand, the node disposition of MTDM can easily describe the structure of an arbitrarily shaped waveguide. This is a major advantage of the meshless time domain method. The results of the computations show that the damping rate is stably calculated in cases with R < 0.03, where R denotes the support radius of the weight function for the shape function. The results also indicate that the support radius R of the weight functions should be chosen small and that monomials must be used for calculating the shape functions. (author)

  8. Methods for Monte Carlo simulations of biomacromolecules.

    Science.gov (United States)

    Vitalis, Andreas; Pappu, Rohit V

    2009-01-01

    The state-of-the-art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse graining strategies.
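
    A minimal sketch of the elementary Metropolis move referred to above is given below for a single torsion-like degree of freedom in the canonical ensemble; realistic biomacromolecular movesets are far richer, and the toy potential and step size here are invented for illustration.

```python
import math
import random

# Minimal Metropolis Monte Carlo sketch: one torsion-like degree of freedom
# sampled in the canonical ensemble with a placeholder potential (kT units).
def energy(phi):
    return 2.0 * (1.0 + math.cos(3.0 * phi))   # toy torsional potential

phi, beta = 0.0, 1.0
accepted, n_steps = 0, 100_000
for _ in range(n_steps):
    trial = phi + random.uniform(-0.3, 0.3)    # small random displacement move
    if random.random() < math.exp(-beta * (energy(trial) - energy(phi))):
        phi, accepted = trial, accepted + 1    # Metropolis acceptance
print("acceptance ratio:", accepted / n_steps)
```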

  9. Towards numerical simulations of supersonic liquid jets using ghost fluid method

    International Nuclear Information System (INIS)

    Majidi, Sahand; Afshari, Asghar

    2015-01-01

    Highlights: • A ghost fluid method based solver is developed for numerical simulation of compressible multiphase flows. • The performance of the numerical tool is validated via several benchmark problems. • Emergence of supersonic liquid jets in a quiescent gaseous environment is simulated using the ghost fluid method for the first time. • Bow-shock formation ahead of the liquid jet is clearly observed in the obtained numerical results. • Radiation of Mach waves from the phase interface, witnessed experimentally, is evidently captured in our numerical simulations. - Abstract: A computational tool based on the ghost fluid method (GFM) is developed to study supersonic liquid jets involving strong shocks and contact discontinuities with high density ratios. The solver utilizes the constrained reinitialization method and is capable of switching between the exact and approximate Riemann solvers to increase robustness. The numerical methodology is validated through several benchmark test problems; these include the one-dimensional multiphase shock tube problem, shock–bubble interaction, air cavity collapse in water, and underwater explosion. A comparison between our results and numerical and experimental observations indicates that the developed solver performs well in investigating these problems. The code is then used to simulate the emergence of a supersonic liquid jet into a quiescent gaseous medium, which to our knowledge is studied here with a ghost fluid method for the first time. The results of the simulations are in good agreement with the experimental investigations. Also some of the well-known flow characteristics, like the propagation of pressure waves from the liquid jet interface and the dependence of the Mach cone structure on the inlet Mach number, are reproduced numerically. The numerical simulations conducted here suggest that the ghost fluid method is an affordable and reliable scheme to study complicated interfacial evolutions in complex multiphase systems such as supersonic liquid jets.

  10. Motion simulation of hydraulic driven safety rod using FSI method

    International Nuclear Information System (INIS)

    Jung, Jaeho; Kim, Sanghaun; Yoo, Yeonsik; Cho, Yeonggarp; Kim, Jong In

    2013-01-01

    A hydraulic driven safety rod, one of the reactivity control mechanisms, is being developed by the Division for Reactor Mechanical Engineering, KAERI. In this paper the motion of this rod is simulated by the fluid-structure interaction (FSI) method before manufacturing, for design verification and pump sizing. The newly designed safety rod is simulated using the FSI method in the CFD domain with a user-defined function (UDF). The pressure drop changes only slightly with flow rate, which means that the pressure drop is mainly determined by the weight of the moving part. The simulated piston velocity is linearly proportional to the flow rate, so the pump can be sized easily from the simulation results according to the rise and drop time requirements of the safety rod.

  11. Performance evaluation of sea surface simulation methods for target detection

    Science.gov (United States)

    Xia, Renjie; Wu, Xin; Yang, Chen; Han, Yiping; Zhang, Jianqi

    2017-11-01

    With the fast development of sea surface target detection by optoelectronic sensors, machine learning has been adopted to improve detection performance. Many features can be learned from training images by machines automatically. However, field images of sea surface targets are not sufficient as training data. 3D scene simulation is a promising method to address this problem. For ocean scene simulation, sea surface height field generation is the key to achieving high fidelity. In this paper, two spectrum-based height field generation methods are evaluated. A comparison between the linear superposition and linear filter methods is made quantitatively with a statistical model. 3D ocean scene simulation results show the differing features of the two methods, which provides a reference for synthesizing sea surface target images under different ocean conditions.
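
    The linear filter idea, shaping white noise in the wavenumber domain so that the surface follows a prescribed elevation spectrum, can be sketched in one dimension as below. The spectrum shape and scaling used here are placeholder assumptions, not the calibrated ocean-wave spectra evaluated in the record.

```python
import numpy as np

# Simplified 1-D sketch of spectrum-based sea surface generation: white Gaussian
# noise is filtered in the wavenumber domain so that the surface roughly follows
# a prescribed (here arbitrary, placeholder) elevation spectrum S(k).
rng = np.random.default_rng(2)
n, dx = 1024, 0.5                          # samples and spacing in metres (assumed)
k = np.fft.rfftfreq(n, d=dx) * 2.0 * np.pi  # angular wavenumbers

def spectrum(k):
    # Placeholder decaying spectrum, not a calibrated ocean-wave model.
    s = np.zeros_like(k)
    mask = k > 0
    s[mask] = 1.0e-3 * k[mask] ** -3 * np.exp(-0.1 / k[mask] ** 2)
    return s

noise = rng.normal(size=n)
noise_hat = np.fft.rfft(noise)
# Shape the noise so its spectral density follows S(k) (linear filter method).
surface_hat = noise_hat * np.sqrt(spectrum(k) * n / dx)
surface = np.fft.irfft(surface_hat, n=n)
print("rms elevation:", surface.std())
```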

  12. Comparison of Two Methods for Speeding Up Flash Calculations in Compositional Simulations

    DEFF Research Database (Denmark)

    Belkadi, Abdelkrim; Yan, Wei; Michelsen, Michael Locht

    2011-01-01

    Flash calculation is the most time-consuming part in compositional reservoir simulations and several approaches have been proposed to speed it up. Two recent approaches proposed in the literature are the shadow region method and the Compositional Space Adaptive Tabulation (CSAT) method. The shadow region method reduces the computation time mainly by skipping stability analysis for a large portion of compositions in the single phase region. In the two-phase region, a highly efficient Newton-Raphson algorithm can be employed with initial estimates from the previous step. The CSAT method saves ... and the tolerance set for accepting the feed composition are the key parameters in this method since they will influence the simulation speed and the accuracy of simulation results. Inspired by CSAT, we proposed a Tieline Distance Based Approximation (TDBA) method to get approximate flash results in the two-phase ...

  13. A calculation method for RF couplers design based on numerical simulation by microwave studio

    International Nuclear Information System (INIS)

    Wang Rong; Pei Yuanji; Jin Kai

    2006-01-01

    A numerical simulation method for coupler design is proposed. It is based on the matching procedure for the 2π/3 structure given by Dr. R.L. Kyhl. The Microwave Studio EigenMode Solver is used for the numerical simulation. The simulation of a coupler has been completed with this method and the simulation data are compared with experimental measurements. The results show that this numerical simulation method is feasible for coupler design. (authors)

  14. A simulation method for lightning surge response of switching power

    International Nuclear Information System (INIS)

    Wei, Ming; Chen, Xiang

    2013-01-01

    In order to meet the needs of protection design against lightning surges, a prediction method for the lightning electromagnetic pulse (LEMP) response based on system identification is presented. Surge injection experiments on a switching power supply were conducted, and the input and output data were sampled, de-noised and de-trended. In addition, the energy coupling transfer function model was obtained by the system identification method. Simulation results show that the system identification method can predict the surge response of a linear circuit well. The method proposed in this paper provides a convenient and effective technique for the simulation of lightning effects.

  15. A new method to estimate heat source parameters in gas metal arc welding simulation process

    International Nuclear Information System (INIS)

    Jia, Xiaolei; Xu, Jie; Liu, Zhaoheng; Huang, Shaojie; Fan, Yu; Sun, Zhi

    2014-01-01

    Highlights: • A new method for accurate simulation of heat source parameters is presented. • The partial least-squares regression analysis is recommended in the method. • The welding experiment results verified the accuracy of the proposed method. -- Abstract: Heat source parameters are usually chosen from experience in the welding simulation process, which induces errors in simulation results (e.g. temperature distribution and residual stress). In this paper, a new method was developed to accurately estimate heat source parameters in welding simulation. In order to reduce the simulation complexity, a sensitivity analysis of the heat source parameters was carried out. The relationships between heat source parameters and weld pool characteristics (fusion width (W), penetration depth (D) and peak temperature (Tp)) were obtained with both the multiple regression analysis (MRA) and the partial least-squares regression analysis (PLSRA). Different regression models were employed in each regression method. Comparisons of both methods were performed. A welding experiment was carried out to verify the method. The results showed that both the MRA and the PLSRA are feasible and accurate for prediction of heat source parameters in welding simulation. However, the PLSRA is recommended for its advantage of requiring less simulation data.
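
    The regression step of this record can be sketched as an ordinary least-squares fit mapping weld-pool characteristics (W, D, Tp) to a heat source parameter. The numbers below are fabricated stand-ins for a set of simulation runs, and a partial least-squares variant could use sklearn.cross_decomposition.PLSRegression in the same role; none of this is data from the paper.

```python
import numpy as np

# Sketch of the regression idea only: fit a linear map from weld-pool
# characteristics [W, D, Tp] to one heat source parameter. All numbers are
# invented placeholders standing in for a set of simulation runs.
X = np.array([
    [6.1, 2.3, 1750.0],
    [6.8, 2.6, 1810.0],
    [7.4, 2.9, 1880.0],
    [8.0, 3.1, 1930.0],
    [8.5, 3.4, 2000.0],
])
y = np.array([  # corresponding heat source parameter (e.g. an effective radius)
    [2.0], [2.2], [2.5], [2.7], [2.9],
])
A = np.hstack([X, np.ones((X.shape[0], 1))])    # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)    # ordinary least squares fit
new_pool = np.array([7.0, 2.8, 1850.0, 1.0])
print("predicted parameter:", float(new_pool @ coef))
```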

  16. Modeling and simulation of different and representative engineering problems using Network Simulation Method.

    Science.gov (United States)

    Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F

    2018-01-01

    Mathematical models simulating different and representative engineering problems (atomic dry friction, moving front problems, and elastic and solid mechanics) are presented in the form of a set of non-linear, coupled or uncoupled differential equations. For the different parameter values that influence the solution, the problems are numerically solved by the network method, which provides all the variables of the problems. Although the model is extremely sensitive to the above parameters, no assumptions are made regarding the linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data, published in the scientific literature, to show the reliability of the model.

  17. Innovative Calibration Method for System Level Simulation Models of Internal Combustion Engines

    Directory of Open Access Journals (Sweden)

    Ivo Prah

    2016-09-01

    Full Text Available The paper outlines a procedure for the computer-controlled calibration of the combined zero-dimensional (0D) and one-dimensional (1D) thermodynamic simulation model of a turbocharged internal combustion engine (ICE). The main purpose of the calibration is to determine input parameters of the simulation model in such a way as to achieve the smallest difference between the results of the measurements and the results of the numerical simulations with minimum consumption of computing time. The innovative calibration methodology is based on a novel interaction between optimization methods and physically based methods of the selected ICE sub-systems. Therein, physically based methods were used for steering the division of the integral ICE model into several sub-models and for determining parameters of selected components considering their governing equations. The innovative multistage interaction between optimization methods and physically based methods allows, unlike well-established methods that rely only on optimization techniques, for successful calibration of a large number of input parameters with low time consumption. Therefore, the proposed method is suitable for efficient calibration of simulation models of advanced ICEs.

  18. Non-analogue Monte Carlo method, application to neutron simulation; Methode de Monte Carlo non analogue, application a la simulation des neutrons

    Energy Technology Data Exchange (ETDEWEB)

    Morillon, B.

    1996-12-31

    With most of the traditional and contemporary techniques, it is still impossible to solve the transport equation if one takes into account a fully detailed geometry and if one studies precisely the interactions between particles and matter. Only the Monte Carlo method offers such a possibility. However, with significant attenuation, the natural (analogue) simulation remains inefficient: it becomes necessary to use biasing techniques where the solution of the adjoint transport equation is essential. The Monte Carlo code Tripoli has been using such techniques successfully for a long time with different approximate adjoint solutions: these methods require the user to determine some parameters. If these parameters are not optimal or nearly optimal, the biased simulations may yield small figures of merit. This paper presents a description of the most important biasing techniques of the Monte Carlo code Tripoli; then we show how to calculate the importance function for general geometry in multigroup cases. We present a completely automatic biasing technique where the parameters of the biased simulation are deduced from the solution of the adjoint transport equation calculated by collision probabilities. In this study we estimate the importance function through the collision probabilities method and evaluate its possibilities by means of a Monte Carlo calculation. We compare different biased simulations with the importance function calculated by collision probabilities for one-group and multigroup problems. We have run simulations with the new biasing method for one-group transport problems with isotropic shocks and for multigroup problems with anisotropic shocks. The results show that for the one-group and homogeneous geometry transport problems the method is quite optimal without the splitting and Russian roulette techniques, but for the multigroup and heterogeneous X-Y geometry problems the figures of merit are higher if we add the splitting and Russian roulette techniques.
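
    The two population-control techniques named in this record, splitting and Russian roulette, can be illustrated by the small weight-window routine below. The window bounds are arbitrary assumptions and the sketch is unrelated to the Tripoli implementation.

```python
import random

# Minimal weight-window sketch of splitting and Russian roulette (illustrative
# only). The expected total weight is preserved, which keeps the game unbiased.
def apply_weight_window(particles, w_low=0.25, w_high=4.0):
    survivors = []
    for weight in particles:
        if weight > w_high:                      # splitting: replace by copies
            n_copies = int(weight / w_high) + 1
            survivors.extend([weight / n_copies] * n_copies)
        elif weight < w_low:                     # Russian roulette
            survival_prob = weight / w_low
            if random.random() < survival_prob:
                survivors.append(w_low)          # survivor carries boosted weight
            # else: the particle is killed; on average no weight is lost
        else:
            survivors.append(weight)
    return survivors

print(apply_weight_window([0.05, 0.3, 1.0, 9.0]))
```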

  19. New method of fast simulation for a hadron calorimeter response

    International Nuclear Information System (INIS)

    Kul'chitskij, Yu.; Sutiak, J.; Tokar, S.; Zenis, T.

    2003-01-01

    In this work we present a new method for fast Monte Carlo simulation of a hadron calorimeter response. It is based on a three-dimensional parameterization of the hadronic shower obtained from the ATLAS TILECAL test beam data and GEANT simulations. A new approach to including the longitudinal fluctuations of the hadronic shower is described. The results obtained with the fast simulation are in good agreement with the TILECAL experimental data.

  20. Comparing three methods for participatory simulation of hospital work systems

    DEFF Research Database (Denmark)

    Broberg, Ole; Andersen, Simone Nyholm

    Summative Statement: This study compared three participatory simulation methods using different simulation objects: a low-resolution table-top setup using Lego figures, full scale mock-ups, and blueprints using Lego figures. It was concluded that the three objects, through differences in fidelity and affordance, ... scenarios using the objects. Results: Full scale mock-ups significantly addressed the local space and technology/tool elements of a work system. In contrast, the table-top simulation object addressed the organizational issues of the future work system. The blueprint based simulation addressed ...

  1. Simulation methods with extended stability for stiff biochemical Kinetics

    Directory of Open Access Journals (Sweden)

    Rué Pau

    2010-08-01

    Full Text Available Abstract Background With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, τ, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where τ can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called τ-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as τ grows. Results In this paper we extend Poisson τ-leap methods to a general class of Runge-Kutta (RK) τ-leap methods. We show that with the proper selection of the coefficients, the variance of the extended τ-leap can be well-behaved, leading to significantly larger step sizes. Conclusions The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original τ-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
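
    A minimal Poisson τ-leap step for a single decay channel is sketched below to illustrate the leaping idea discussed in this record; the rate constant, leap size and initial population are placeholders, and the Runge-Kutta extension proposed in the paper is not reproduced.

```python
import numpy as np

# Minimal Poisson tau-leap sketch for one decay channel A -> 0 with propensity
# a(x) = c * x and a fixed leap size tau (illustrative only).
rng = np.random.default_rng(3)
c, x, t, t_end, tau = 0.1, 1000, 0.0, 50.0, 0.5
while t < t_end and x > 0:
    a = c * x                          # propensity of the single reaction
    k = rng.poisson(a * tau)           # number of firings within this leap
    x = max(x - k, 0)                  # apply the stoichiometric change
    t += tau
print("population at t =", t, ":", x)
```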

  2. Thermal shale fracturing simulation using the Cohesive Zone Method (CZM)

    KAUST Repository

    Enayatpour, Saeid; van Oort, Eric; Patzek, Tadeusz

    2018-01-01

    Extensive research has been conducted over the past two decades to improve hydraulic fracturing methods used for hydrocarbon recovery from tight reservoir rocks such as shales. Our focus in this paper is on thermal fracturing of such tight rocks to enhance hydraulic fracturing efficiency. Thermal fracturing is effective in generating small fractures in the near-wellbore zone - or in the vicinity of natural or induced fractures - that may act as initiation points for larger fractures. Previous analytical and numerical results indicate that thermal fracturing in tight rock significantly enhances rock permeability, thereby enhancing hydrocarbon recovery. Here, we present a more powerful way of simulating the initiation and propagation of thermally induced fractures in tight formations using the Cohesive Zone Method (CZM). The advantages of CZM are: 1) CZM simulation is fast compared to similar models which are based on the spring-mass particle method or Discrete Element Method (DEM); 2) unlike DEM, rock material complexities such as scale-dependent failure behavior can be incorporated in a CZM simulation; 3) CZM is capable of predicting the extent of fracture propagation in rock, which is more difficult to determine in a classic finite element approach. We demonstrate that CZM delivers results for the challenging fracture propagation problem of similar accuracy to the eXtended Finite Element Method (XFEM) while reducing complexity and computational effort. Simulation results for thermal fracturing in the near-wellbore zone show the effect of stress anisotropy in fracture propagation in the direction of the maximum horizontal stress. It is shown that CZM can be used to readily obtain the extent and the pattern of induced thermal fractures.

  3. Thermal shale fracturing simulation using the Cohesive Zone Method (CZM)

    KAUST Repository

    Enayatpour, Saeid

    2018-05-17

    Extensive research has been conducted over the past two decades to improve hydraulic fracturing methods used for hydrocarbon recovery from tight reservoir rocks such as shales. Our focus in this paper is on thermal fracturing of such tight rocks to enhance hydraulic fracturing efficiency. Thermal fracturing is effective in generating small fractures in the near-wellbore zone - or in the vicinity of natural or induced fractures - that may act as initiation points for larger fractures. Previous analytical and numerical results indicate that thermal fracturing in tight rock significantly enhances rock permeability, thereby enhancing hydrocarbon recovery. Here, we present a more powerful way of simulating the initiation and propagation of thermally induced fractures in tight formations using the Cohesive Zone Method (CZM). The advantages of CZM are: 1) CZM simulation is fast compared to similar models which are based on the spring-mass particle method or Discrete Element Method (DEM); 2) unlike DEM, rock material complexities such as scale-dependent failure behavior can be incorporated in a CZM simulation; 3) CZM is capable of predicting the extent of fracture propagation in rock, which is more difficult to determine in a classic finite element approach. We demonstrate that CZM delivers results for the challenging fracture propagation problem of similar accuracy to the eXtended Finite Element Method (XFEM) while reducing complexity and computational effort. Simulation results for thermal fracturing in the near-wellbore zone show the effect of stress anisotropy in fracture propagation in the direction of the maximum horizontal stress. It is shown that CZM can be used to readily obtain the extent and the pattern of induced thermal fractures.

  4. Computerized simulation methods for dose reduction, in radiodiagnosis

    International Nuclear Information System (INIS)

    Brochi, M.A.C.

    1990-01-01

    The present work presents computational methods that allow the simulation of any situation encountered in diagnostic radiology. Parameters of radiographic techniques that yield a previously chosen standard radiographic image are studied, so that the radiation dose absorbed by the patient can be compared. Initially the method was tested on a simple system composed of 5.0 cm of water and 1.0 mm of aluminium and, after experimentally verifying its validity, it was applied to breast and arm fracture radiographs. It was observed that the choice of the filter material is not an important factor, because analogous behaviours were presented by aluminium, iron, copper, gadolinium, and other filters. A method for comparing materials based on spectral matching is shown. Both the results given by this simulation method and the experimental measurements indicate an equivalence of brass and copper, both more efficient than aluminium in terms of exposure time, but not of dose. (author)

  5. A New Method to Simulate Free Surface Flows for Viscoelastic Fluid

    Directory of Open Access Journals (Sweden)

    Yu Cao

    2015-01-01

    Full Text Available Free surface flows arise in a variety of engineering applications. To predict the dynamic characteristics of such problems, specific numerical methods are required to accurately capture the shape of the free surface. This paper proposes a new method which combines the Arbitrary Lagrangian-Eulerian (ALE) technique with the Finite Volume Method (FVM) to simulate time-dependent viscoelastic free surface flows. Based on an open source CFD toolbox called OpenFOAM, we designed an ALE-FVM free surface simulation platform. In addition, the die-swell flow was investigated with the proposed platform to further analyse the free surface phenomenon. The results validated the correctness and effectiveness of the proposed method for free surface simulation in both Newtonian fluid and viscoelastic fluid.

  6. Verification of results of core physics on-line simulation by NGFM code

    International Nuclear Information System (INIS)

    Zhao Yu; Cao Xinrong; Zhao Qiang

    2008-01-01

    The Nodal Green's Function Method program NGFM/TNGFM has been ported to the Windows system. The 2-D and 3-D benchmarks have been checked with this program, and the program has been used to check the results of the QINSHAN-II reactor simulation. It is proved that the NGFM/TNGFM program is applicable to the reactor core physics on-line simulation system. (authors)

  7. Simulating water hammer with corrective smoothed particle method

    NARCIS (Netherlands)

    Hou, Q.; Kruisbrink, A.C.H.; Tijsseling, A.S.; Keramat, A.

    2012-01-01

    The corrective smoothed particle method (CSPM) is used to simulate water hammer. The spatial derivatives in the water-hammer equations are approximated by a corrective kernel estimate. For the temporal derivatives, the Euler-forward time integration algorithm is employed. The CSPM results are in

  8. Biasing transition rate method based on direct MC simulation for probabilistic safety assessment

    Institute of Scientific and Technical Information of China (English)

    Xiao-Lei Pan; Jia-Qun Wang; Run Yuan; Fang Wang; Han-Qing Lin; Li-Qin Hu; Jin Wang

    2017-01-01

    Direct Monte Carlo (MC) simulation is a powerful probabilistic safety assessment method for accounting for the dynamics of the system, but it is not efficient at simulating rare events. A biasing transition rate method based on direct MC simulation is proposed in this paper to solve the problem. This method biases the transition rates of the components by adding virtual components to them in series to increase the occurrence probability of the rare event, and hence decrease the variance of the MC estimator. Several cases are used to benchmark this method. The results show that the method is effective at modeling system failure and is more efficient at collecting evidence of rare events than direct MC simulation. The performance is greatly improved by the biasing transition rate method.
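
    The biasing idea can be illustrated with the importance-sampling analogue below: the failure rate of a single component is artificially increased so that the rare event is sampled often, and each sample carries a likelihood-ratio weight so the estimator stays unbiased. The rates and mission time are invented for the example and do not come from the paper.

```python
import numpy as np

# Importance-sampling analogue of a biased transition rate: sample failure times
# from an exponential distribution with an inflated rate, then weight each
# sample by the likelihood ratio of the true over the biased density.
rng = np.random.default_rng(4)
lam_true, lam_bias, t_mission, n = 1.0e-4, 1.0e-2, 100.0, 200_000

t_fail = rng.exponential(1.0 / lam_bias, size=n)        # biased failure times
hit = t_fail < t_mission                                # rare event indicator
weights = (lam_true / lam_bias) * np.exp(-(lam_true - lam_bias) * t_fail)
estimate = np.mean(hit * weights)
exact = 1.0 - np.exp(-lam_true * t_mission)
print(f"biased-MC estimate: {estimate:.3e}   analytic: {exact:.3e}")
```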

  9. Hybrid statistics-simulations based method for atom-counting from ADF STEM images

    Energy Technology Data Exchange (ETDEWEB)

    De wael, Annelies, E-mail: annelies.dewael@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); De Backer, Annick [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Jones, Lewys; Nellist, Peter D. [Department of Materials, University of Oxford, Parks Road, OX1 3PH Oxford (United Kingdom); Van Aert, Sandra, E-mail: sandra.vanaert@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium)

    2017-06-15

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. - Highlights: • A hybrid method for atom-counting from ADF STEM images is introduced. • Image simulations are incorporated into a statistical framework in a reliable manner. • Limits of the existing methods for atom-counting are far exceeded. • Reliable counting results from an experimental low dose image are obtained. • Progress towards reliable quantitative analysis of beam-sensitive materials is made.

  10. Modeling and simulation of different and representative engineering problems using Network Simulation Method

    Science.gov (United States)

    2018-01-01

    Mathematical models simulating different and representative engineering problems (atomic dry friction, moving front problems, and elastic and solid mechanics) are presented in the form of a set of non-linear, coupled or uncoupled differential equations. For the different parameter values that influence the solution, the problems are numerically solved by the network method, which provides all the variables of the problems. Although the model is extremely sensitive to the above parameters, no assumptions are made regarding the linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data, published in the scientific literature, to show the reliability of the model. PMID:29518121

  11. A particle finite element method for machining simulations

    Science.gov (United States)

    Sabel, Matthias; Sator, Christian; Müller, Ralf

    2014-07-01

    The particle finite element method (PFEM) appears to be a convenient technique for machining simulations, since the geometry and topology of the problem can undergo severe changes. In this work, a short outline of the PFEM algorithm is given, which is followed by a detailed description of the involved operations. The α-shape method, which is used to track the topology, is explained and tested with a simple example. Also the kinematics and a suitable finite element formulation are introduced. To validate the method, simple settings without topological changes are considered and compared to the standard finite element method for large deformations. To examine the performance of the method when dealing with separating material, a tensile loading is applied to a notched plate. This investigation includes a numerical analysis of the different meshing parameters, and the numerical convergence is studied. With regard to the cutting simulation it is found that only a sufficiently large number of particles (and thus a rather fine finite element discretisation) leads to converged results for process parameters such as the cutting force.

  12. Simulation of the 2-dimensional Drude’s model using molecular dynamics method

    Energy Technology Data Exchange (ETDEWEB)

    Naa, Christian Fredy; Amin, Aisyah; Ramli,; Suprijadi,; Djamal, Mitra [Theoretical High Energy Physics and Instrumentation Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia); Wahyoedi, Seramika Ari; Viridi, Sparisoma, E-mail: viridi@cphys.fi.itb.ac.id [Nuclear and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia)

    2015-04-16

    In this paper, we report the results of a simulation of electronic conduction in solids. The simulation is based on the Drude model and applies the molecular dynamics (MD) method, which uses the fifth-order predictor-corrector algorithm. A formula for the electrical conductivity as a function of lattice length and ion diameter, τ(L, d), can be obtained empirically based on the simulation results.

  13. An improved method for simulating radiographs

    International Nuclear Information System (INIS)

    Laguna, G.W.

    1986-01-01

    The parameters involved in generating actual radiographs and what can and cannot be modeled are examined in this report. Using the spectral distribution of the radiation source and the mass absorption curve for the material comprising the part to be modeled, the actual amount of radiation that would pass through the part and reach the film is determined. This method increases confidence in the results of the simulation and enables the modeling of parts made of multiple materials

  14. Numerical simulation of compressible two-phase flow using a diffuse interface method

    International Nuclear Information System (INIS)

    Ansari, M.R.; Daramizadeh, A.

    2013-01-01

    Highlights: ► Compressible two-phase gas–gas and gas–liquid flow simulations are conducted. ► Interface conditions contain shock waves and cavitation. ► A high-resolution diffuse interface method is investigated. ► The numerical results exhibit very good agreement with experimental results. -- Abstract: In this article, a high-resolution diffuse interface method is investigated for simulation of compressible two-phase gas–gas and gas–liquid flows, both in the presence of shock waves and in flows with strong rarefaction waves similar to cavitation. A Godunov method and the HLLC Riemann solver are used for discretization of the Kapila five-equation model and a modified Schmidt equation of state (EOS) is used to simulate the cavitation regions. This method is applied successfully to some one- and two-dimensional compressible two-phase flows with interface conditions that contain shock waves and cavitation. The numerical results obtained in this attempt exhibit very good agreement with experimental results, as well as previous numerical results presented by other researchers based on other numerical methods. In particular, the algorithm can capture the complex flow features of transient shocks, such as the material discontinuities and interfacial instabilities, without any oscillation and additional diffusion. Numerical examples show that the results of the method presented here compare well with other sophisticated modeling methods like adaptive mesh refinement (AMR) and local mesh refinement (LMR) for one- and two-dimensional problems.

  15. Simulation of Jetting in Injection Molding Using a Finite Volume Method

    Directory of Open Access Journals (Sweden)

    Shaozhen Hua

    2016-05-01

    Full Text Available In order to predict the jetting and the subsequent buckling flow more accurately, a three-dimensional melt flow model was established for a viscous, incompressible, and non-isothermal fluid, and a control volume-based finite volume method was employed to discretize the governing equations. A two-fold iterative method was proposed to decouple the dependence among pressure, velocity, and temperature so as to reduce the computation and improve the numerical stability. Based on the proposed theoretical model and numerical method, a program code was developed to simulate melt front progress and flow fields. Numerical simulations for different injection speeds, melt temperatures, and gate locations were carried out to explore the jetting mechanism. The results indicate that the filling pattern depends on the competition between inertial and viscous forces. When the inertial force exceeds the viscous force, jetting occurs; it then changes to a buckling flow as the viscous force overcomes the inertial force. Once the melt contacts the mold wall, the melt filling switches to the conventional sequential filling mode. Numerical results also indicate that the jetting length increases with injection speed but changes little with melt temperature. The reasonable agreement between simulated and experimental jetting length and buckling frequency implies that the proposed method is valid for jetting simulation.

  16. The Application of Simulation Method in Isothermal Elastic Natural Gas Pipeline

    Science.gov (United States)

    Xing, Chunlei; Guan, Shiming; Zhao, Yue; Cao, Jinggang; Chu, Yanji

    2018-02-01

    The elastic pipeline mathematical model is of crucial importance in natural gas pipeline simulation because of its compliance with practical industrial cases. The numerical model of an elastic pipeline brings non-linear complexity to the discretized equations, hence the Newton-Raphson method cannot achieve fast convergence in this kind of problem. Therefore, a new Newton-based method with the Powell-Wolfe condition is presented to simulate isothermal elastic pipeline flow. The results obtained by the new method are given based on the defined boundary conditions. It is shown that the method converges in all cases and significantly reduces the computational cost.
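
    A damped Newton iteration with a backtracking line search on the residual norm, used here as a simplified stand-in for the Powell-Wolfe condition, is sketched below on a small nonlinear test system; the pipeline equations and boundary conditions of the record are not modelled.

```python
import numpy as np

# Damped Newton sketch with a backtracking (sufficient-decrease) line search on
# the merit function 0.5*||F||^2, applied to a small nonlinear test system.
def residual(x):
    return np.array([x[0] ** 2 + x[1] ** 2 - 4.0,
                     np.exp(x[0]) + x[1] - 1.0])

def jacobian(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [np.exp(x[0]), 1.0]])

x = np.array([1.0, 1.0])
for it in range(50):
    f = residual(x)
    if np.linalg.norm(f) < 1e-10:
        break
    dx = np.linalg.solve(jacobian(x), -f)       # full Newton step
    step, merit0 = 1.0, 0.5 * f @ f
    while 0.5 * residual(x + step * dx) @ residual(x + step * dx) \
            > merit0 * (1.0 - 1e-4 * step) and step > 1e-8:
        step *= 0.5                             # backtrack until sufficient decrease
    x = x + step * dx
print("solution:", x, "iterations:", it)
```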

  17. Separation of electron ion ring components (computational simulation and experimental results)

    International Nuclear Information System (INIS)

    Aleksandrov, V.S.; Dolbilov, G.V.; Kazarinov, N.Yu.; Mironov, V.I.; Novikov, V.G.; Perel'shtejn, Eh.A.; Sarantsev, V.P.; Shevtsov, V.F.

    1978-01-01

    The problems of the available polarization value of electron-ion rings in the regime of acceleration and separation of its components at the final stage of acceleration are studied. The results of computational simulation by use of the macroparticle method and experiments on the ring acceleration and separation are given. The comparison of calculation results with experiment is presented

  18. Some recent developments of the immersed interface method for flow simulation

    Science.gov (United States)

    Xu, Sheng

    2017-11-01

    The immersed interface method is a general methodology for solving PDEs subject to interfaces. In this talk, I will give an overview of some recent developments of the method toward the enhancement of its robustness for flow simulation. In particular, I will present with numerical results how to capture boundary conditions on immersed rigid objects, how to adopt interface triangulation in the method, and how to parallelize the method for flow with moving objects. With these developments, the immersed interface method can achieve accurate and efficient simulation of a flow involving multiple moving complex objects. Thanks to NSF for the support of this work under Grant NSF DMS 1320317.

  19. A simple mass-conserved level set method for simulation of multiphase flows

    Science.gov (United States)

    Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.

    2018-04-01

    In this paper, a modified level set method is proposed for the simulation of multiphase flows with large density ratio and high Reynolds number. The present method simply introduces a source or sink term into the level set equation to compensate for mass loss or offset mass increase. The source or sink term is derived analytically by applying the mass conservation principle to the level set equation and the continuity equation of the flow field. Since only a source term is introduced, the application of the present method is as simple as the original level set method, but it can guarantee overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, which demonstrates that the modified level set method has the capability of accurately capturing the interface and keeping the mass conservation. Then, the proposed method is further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with high density ratio, and Rayleigh-Taylor instability with high Reynolds number. Numerical results show that mass is well conserved by the present method.

  20. 3D simulation of friction stir welding based on movable cellular automaton method

    Science.gov (United States)

    Eremina, Galina M.

    2017-12-01

    The paper is devoted to a 3D computer simulation of the peculiarities of material flow taking place in friction stir welding (FSW). The simulation was performed by the movable cellular automaton (MCA) method, which is a representative of particle methods in mechanics. Commonly, the flow of material in FSW is simulated based on computational fluid mechanics, treating the material as a continuum and ignoring its structure. The MCA method considers a material as an ensemble of bonded particles. The rupture of interparticle bonds and the formation of new bonds enable simulations of crack nucleation and healing as well as mass mixing and microwelding. The simulation results showed that using pins of simple shape (cylinder, cone, and pyramid) without a shoulder results in small displacements of plasticized material in the workpiece thickness direction. Nevertheless, the optimal ratio of longitudinal velocity to rotational speed makes it possible to transport the welded material around the pin several times and to produce a joint of good quality.

  1. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods; La methode du recuit simule pour la conception des circuits electroniques: adaptation et comparaison avec d'autres methodes d'optimisation

    Energy Technology Data Exchange (ETDEWEB)

    Berthiau, G

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints can optionally be specified. A similar problem consists in fitting component models. In this case, the optimization variables are the model parameters and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is the simulated annealing method. This method, which originates from the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been adjusted with analytical functions whose minima are known and which are classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. We propose, for high-dimensional problems, a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we have adapted three other methods coming from the combinatorial optimization domain - the threshold method, a genetic algorithm and the Tabu search method. The tests have been performed on the same set of test functions and the results allow a first comparison between these methods applied to continuous optimization variables. (Abstract Truncated)

  2. Evaluation of full-scope simulator testing methods

    International Nuclear Information System (INIS)

    Feher, M.P.; Moray, N.; Senders, J.W.; Biron, K.

    1995-03-01

    This report discusses the use of full scope nuclear power plant simulators in licensing examinations for Unit First Operators of CANDU reactors. The existing literature is reviewed, and an annotated bibliography of the more important sources provided. Since existing methods are judged inadequate, conceptual bases for designing a system for licensing are discussed, and a method proposed which would make use of objective scoring methods based on data collection in full-scope simulators. A field trial of such a method is described. The practicality of such a method is critically discussed and possible advantages of subjective methods of evaluation considered. (author). 32 refs., 1 tab., 4 figs

  3. Evaluation of full-scope simulator testing methods

    Energy Technology Data Exchange (ETDEWEB)

    Feher, M P; Moray, N; Senders, J W; Biron, K [Human Factors North Inc., Toronto, ON (Canada)

    1995-03-01

    This report discusses the use of full scope nuclear power plant simulators in licensing examinations for Unit First Operators of CANDU reactors. The existing literature is reviewed, and an annotated bibliography of the more important sources provided. Since existing methods are judged inadequate, conceptual bases for designing a system for licensing are discussed, and a method proposed which would make use of objective scoring methods based on data collection in full-scope simulators. A field trial of such a method is described. The practicality of such a method is critically discussed and possible advantages of subjective methods of evaluation considered. (author). 32 refs., 1 tab., 4 figs.

  4. Time-domain hybrid method for simulating large amplitude motions of ships advancing in waves

    Directory of Open Access Journals (Sweden)

    Shukui Liu

    2011-03-01

    Full Text Available Typical results obtained by a newly developed, nonlinear time domain hybrid method for simulating large amplitude motions of ships advancing with constant forward speed in waves are presented. The method is hybrid in that it combines a time-domain transient Green function method and a Rankine source method. The present approach employs a simple double integration algorithm with respect to time to simulate the free-surface boundary condition. During the simulation, the diffraction and radiation forces are computed by pressure integration over the mean wetted surface, whereas the incident wave and hydrostatic restoring forces/moments are calculated on the instantaneously wetted surface of the hull. Typical numerical results of the application of the method to the seakeeping performance of a standard containership, namely the ITTC S175, are herein presented. Comparisons have been made between the results from the present method, the frequency domain 3D panel method (NEWDRIFT) of NTUA-SDL and available experimental data, and good agreement has been observed for all studied cases between the results of the present method and the other comparable data.

  5. Simulation of Intra-Aneurysmal Blood Flow by Different Numerical Methods

    Directory of Open Access Journals (Sweden)

    Frank Weichert

    2013-01-01

    Full Text Available The occlusional performance of sole endoluminal stenting of intracranial aneurysms is controversially discussed in the literature. Simulation of blood flow has been studied to shed light on possible causal attributions. The outcome, however, largely depends on the numerical method and various free parameters. The present study is therefore conducted to find ways to define parameters and efficiently explore the huge parameter space with finite element methods (FEMs) and lattice Boltzmann methods (LBMs). The goal is to identify both the impact of different parameters on the results of computational fluid dynamics (CFD) and their advantages and disadvantages. CFD is applied to assess flow and aneurysmal vorticity in 2D and 3D models. To assess and compare initial simulation results, simplified 2D and 3D models based on key features of real geometries and medical expert knowledge were used. A result obtained from this analysis indicates that a combined use of the different numerical methods, LBM for fast exploration and FEM for a more in-depth look, may result in a better understanding of blood flow and may also lead to more accurate information about factors that influence conditions for stenting of intracranial aneurysms.

  6. STUDY ON SIMULATION METHOD OF AVALANCHE : FLOW ANALYSIS OF AVALANCHE USING PARTICLE METHOD

    OpenAIRE

    塩澤, 孝哉

    2015-01-01

    In this paper, modeling for the simulation of avalanches by a particle method is discussed. There are two kinds of snow avalanches: the surface avalanche, which shows a smoke-like flow, and the total-layer avalanche, which flows like a Bingham fluid. In the simulation of the surface avalanche, a particle method incorporating a rotation resistance model is used. A particle method with a Bingham fluid model is used in the simulation of the total-layer avalanche. At t...

  7. Scalable Methods for Eulerian-Lagrangian Simulation Applied to Compressible Multiphase Flows

    Science.gov (United States)

    Zwick, David; Hackl, Jason; Balachandar, S.

    2017-11-01

    Multiphase flows can be found in countless areas of physics and engineering. Many of these flows can be classified as dispersed two-phase flows, meaning that there are solid particles dispersed in a continuous fluid phase. A common technique for simulating such flow is the Eulerian-Lagrangian method. While useful, this method can suffer from scaling issues on larger problem sizes that are typical of many realistic geometries. Here we present scalable techniques for Eulerian-Lagrangian simulations and apply it to the simulation of a particle bed subjected to expansion waves in a shock tube. The results show that the methods presented here are viable for simulation of larger problems on modern supercomputers. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315138. This work was supported in part by the U.S. Department of Energy under Contract No. DE-NA0002378.

  8. Investigation of Compton scattering correction methods in cardiac SPECT by Monte Carlo simulations

    International Nuclear Information System (INIS)

    Silva, A.M. Marques da; Furlan, A.M.; Robilotta, C.C.

    2001-01-01

    The goal of this work was to use Monte Carlo simulations to investigate the effects of two scattering correction methods, the dual energy window (DEW) and the dual photopeak window (DPW), on quantitative cardiac SPECT reconstruction. The MCAT torso-cardiac phantom, with 99mTc and a non-uniform attenuation map, was simulated. Two different photopeak windows were evaluated for the DEW method: 15% and 20%. Two 10% wide subwindows centered symmetrically within the photopeak were used in the DPW method. Iterative ML-EM reconstruction with a modified projector-backprojector for attenuation correction was applied. Results indicated that the choice of the scattering and photopeak windows determines the correction accuracy. For the 15% window, a fitted scatter fraction gives better results than k = 0.5. For the 20% window, DPW is the best method, but it requires parameter estimation using Monte Carlo simulations. (author)
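
    The dual energy window (DEW) correction named above amounts to subtracting a scaled scatter-window image from the photopeak image, as in the sketch below; the count maps are random placeholders rather than simulated projections, and k = 0.5 is used only because it is the conventional scaling factor mentioned in the record.

```python
import numpy as np

# Dual energy window (DEW) sketch: counts in a lower "scatter" window, scaled
# by k, approximate the scatter contribution inside the photopeak window and
# are subtracted pixel by pixel. The arrays below are random placeholders.
rng = np.random.default_rng(5)
photopeak = rng.poisson(100.0, size=(64, 64)).astype(float)
scatter_window = rng.poisson(30.0, size=(64, 64)).astype(float)
k = 0.5                                   # conventional DEW scaling factor
corrected = np.clip(photopeak - k * scatter_window, 0.0, None)
print("mean counts before/after correction:",
      photopeak.mean(), corrected.mean())
```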

  9. MONTE CARLO METHOD AND APPLICATION IN @RISK SIMULATION SYSTEM

    Directory of Open Access Journals (Sweden)

    Gabriela Ižaríková

    2015-12-01

    Full Text Available The article is an example of using the simulation software @Risk, designed for simulation in Microsoft Excel spreadsheets, and demonstrates its use as a universal method of solving problems. Simulation is experimentation with computer models based on the real production process in order to optimize the production processes or the system. The simulation model allows a number of experiments to be performed, analysed, evaluated and optimized, and the results afterwards applied to the real system. A simulation model in general represents the modelled system by means of mathematical formulations and logical relations. In the model it is possible to distinguish controlled inputs (for instance investment costs) and random inputs (for instance demand), which are transformed by the model into outputs (for instance the mean value of profit). In a simulation experiment, the controlled inputs are chosen at the beginning and the random (stochastic) inputs are generated randomly. Simulation belongs to the quantitative tools that can be used as support for decision making.
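
    A spreadsheet-style Monte Carlo experiment of the kind described here can be sketched directly in code: a controlled input and a random input are propagated to an output over many trials. The distributions and figures below are invented for illustration and do not come from the article.

```python
import numpy as np

# Spreadsheet-style Monte Carlo sketch: controlled inputs (price, costs) and a
# random input (demand) are propagated to an output (profit) over many trials.
rng = np.random.default_rng(6)
n_trials = 100_000
price, unit_cost, fixed_cost = 12.0, 7.0, 15_000.0            # controlled inputs
demand = rng.normal(loc=5_000.0, scale=800.0, size=n_trials)  # random input
profit = (price - unit_cost) * demand - fixed_cost            # model output
print("mean profit:", profit.mean())
print("5th / 95th percentile:", np.percentile(profit, [5, 95]))
print("probability of loss:", np.mean(profit < 0.0))
```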

  10. Electron-cloud simulation results for the SPS and recent results for the LHC

    International Nuclear Information System (INIS)

    Furman, M.A.; Pivi, M.T.F.

    2002-01-01

    We present an update of computer simulation results for some features of the electron cloud at the Large Hadron Collider (LHC) and recent simulation results for the Super Proton Synchrotron (SPS). We focus on the sensitivity of the power deposition on the LHC beam screen to the emitted electron spectrum, which we study by means of a refined secondary electron (SE) emission model recently included in our simulation code

  11. Extended post processing for simulation results of FEM synthesized UHF-RFID transponder antennas

    Directory of Open Access Journals (Sweden)

    R. Herschmann

    2007-06-01

    Full Text Available The computer aided design process of sophisticated UHF-RFID transponder antennas requires the application of reliable simulation software. This paper describes a Matlab implemented extension of the post processor capabilities of the commercially available three dimensional field simulation programme Ansoft HFSS to compute an accurate solution of the antenna's surface current distribution. The accuracy of the simulated surface currents, which are physically related to the impedance at the feeding point of the antenna, depends on the convergence of the electromagnetic fields inside the simulation volume. The introduced method estimates the overall quality of the simulation results by combining the surface currents with the electromagnetic fields extracted from the field solution of Ansoft HFSS.

  12. Reliability Assessment of Active Distribution System Using Monte Carlo Simulation Method

    Directory of Open Access Journals (Sweden)

    Shaoyun Ge

    2014-01-01

    Full Text Available In this paper we treat the reliability assessment problem of an active distribution system at low and high DG penetration levels using the Monte Carlo simulation method. The problem is formulated as a two-case program: a low-penetration simulation and a high-penetration simulation. The load shedding strategy and the simulation process are described in detail for each FMEA step. Results indicate that the integration of DG can improve the reliability of the system if the system is operated actively.

  13. Evaluation of structural reliability using simulation methods

    Directory of Open Access Journals (Sweden)

    Baballëku Markel

    2015-01-01

    Full Text Available Eurocode describes the 'index of reliability' as a measure of structural reliability, related to the 'probability of failure'. This paper is focused on the assessment of this index for a reinforced concrete bridge pier. It is rare to explicitly use reliability concepts for design of structures, but the problems of structural engineering are better known through them. Some of the main methods for the estimation of the probability of failure are the exact analytical integration, numerical integration, approximate analytical methods and simulation methods. Monte Carlo Simulation is used in this paper, because it offers a very good tool for the estimation of probability in multivariate functions. Complicated probability and statistics problems are solved through computer aided simulations of a large number of tests. The procedures of structural reliability assessment for the bridge pier and the comparison with the partial factor method of the Eurocodes have been demonstrated in this paper.
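
    For orientation, a crude Monte Carlo sketch of how a probability of failure and the corresponding reliability index can be estimated from a limit state function; the limit state, distributions and numbers below are hypothetical and do not represent the bridge pier analysed in the paper:

```python
# Crude Monte Carlo estimate of a probability of failure p_f and the reliability
# index beta = -Phi^{-1}(p_f); the limit state g = R - E and its distributions
# are purely illustrative.
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
n = 1_000_000

R = rng.lognormal(mean=np.log(500.0), sigma=0.10, size=n)   # hypothetical resistance
E = rng.normal(loc=300.0, scale=60.0, size=n)               # hypothetical load effect

p_f = np.mean(R - E < 0.0)                  # fraction of samples in the failure domain
beta = -NormalDist().inv_cdf(float(p_f))    # reliability index
print(f"p_f  ~ {p_f:.2e}")
print(f"beta ~ {beta:.2f}")
```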

  14. Integrated visualization of simulation results and experimental devices in virtual-reality space

    International Nuclear Information System (INIS)

    Ohtani, Hiroaki; Ishiguro, Seiji; Shohji, Mamoru; Kageyama, Akira; Tamura, Yuichi

    2011-01-01

    We succeeded in integrating the visualization of both simulation results and experimental device data in virtual-reality (VR) space using a CAVE system. Simulation results are shown using the Virtual LHD software, which can display magnetic field lines, particle trajectories, and isosurfaces of plasma pressure of the Large Helical Device (LHD) based on data from the magnetohydrodynamic equilibrium simulation. A three-dimensional mouse, or wand, interactively sets the initial position and pitch angle of a drift particle or the starting point of a magnetic field line in the VR space. The trajectory of a particle and the streamline of the magnetic field are calculated with the Runge-Kutta-Huta integration method, starting from the initial condition specified with the wand. The LHD vessel is visualized based on CAD data. By combining these results and data, the simulated LHD plasma can be drawn interactively within the representation of the LHD experimental vessel. Through this integrated visualization, it is possible to grasp the three-dimensional spatial relationship between the device and the plasma in the VR space, opening a new path for future research. (author)

  15. Concentration gradient driven molecular dynamics: a new method for simulations of membrane permeation and separation

    Science.gov (United States)

    Ozcan, Aydin; Perego, Claudio; Salvalaglio, Matteo; Parrinello, Michele

    2017-01-01

    In this study, we introduce a new non-equilibrium molecular dynamics simulation method to perform simulations of concentration driven membrane permeation processes. The methodology is based on the application of a non-conservative bias force controlling the concentration of species at the inlet and outlet of a membrane. We demonstrate our method for pure methane, ethane and ethylene permeation and for ethane/ethylene separation through a flexible ZIF-8 membrane. Results show that a stationary concentration gradient is maintained across the membrane, realistically simulating an out-of-equilibrium diffusive process, and the computed permeabilities and selectivity are in good agreement with experimental results. PMID:28966778

  16. Partial Variance of Increments Method in Solar Wind Observations and Plasma Simulations

    Science.gov (United States)

    Greco, A.; Matthaeus, W. H.; Perri, S.; Osman, K. T.; Servidio, S.; Wan, M.; Dmitruk, P.

    2018-02-01

    The method called "PVI" (Partial Variance of Increments) has been increasingly used in the analysis of spacecraft and numerical simulation data since its inception in 2008. The purpose of the method is to study the kinematics and formation of coherent structures in space plasmas, a topic that has gained considerable attention, leading to the development of identification methods, observations, and associated theoretical research based on numerical simulations. This review paper summarizes key features of the method and provides a synopsis of the main results obtained by various groups using the method. This will enable new users or those considering methods of this type to find details and background collected in one place.
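
    A compact sketch of the PVI diagnostic as it is commonly defined in the literature, PVI(t) = |Δb(t, τ)| / sqrt(<|Δb|²>), with Δb the vector increment of the field over a lag τ and <.> an average over the series; the synthetic signal below is arbitrary:

```python
# PVI series for a (N, 3) field time series; the random-walk "field" is only a
# stand-in for real spacecraft or simulation data.
import numpy as np

def pvi_series(b, lag):
    """b: (N, 3) array of field components; lag: increment length in samples."""
    db = b[lag:] - b[:-lag]                    # vector increments
    mag = np.linalg.norm(db, axis=1)           # |delta b(t, tau)|
    return mag / np.sqrt(np.mean(mag**2))      # normalised PVI

rng = np.random.default_rng(7)
b = np.cumsum(rng.normal(size=(10_000, 3)), axis=0)   # hypothetical signal
pvi = pvi_series(b, lag=10)
print("fraction of points with PVI > 3:", np.mean(pvi > 3.0))
```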

  17. Simulation of a Centrifugal Pump by Using the Harmonic Balance Method

    Directory of Open Access Journals (Sweden)

    Franco Magagnato

    2015-01-01

    Full Text Available The harmonic balance method was used for the flow simulation in a centrifugal pump. Independence studies were performed to choose a proper number of harmonic modes and the inlet eddy viscosity ratio. The results from the harmonic balance method show good agreement with PIV experiments and unsteady calculation results (which are based on the dual time stepping method) for the predicted head and the phase-averaged velocity. A detailed analysis of the flow fields at different flow rates shows that the flow rate has an evident influence on the flow fields. At 0.6Qd, some vortices begin to appear in the impeller, and at 0.4Qd some vortices have blocked the flow passage. The flow fields at different positions at 0.6Qd and 0.4Qd show how the complicated flow phenomena form, develop, and even disappear. The harmonic balance method can be used for flow simulation in pumps, showing the same accuracy as unsteady methods but at considerably lower computational cost.

  18. A Modified SPH Method for Dynamic Failure Simulation of Heterogeneous Material

    Directory of Open Access Journals (Sweden)

    G. W. Ma

    2014-01-01

    Full Text Available A modified smoothed particle hydrodynamics (SPH) method is applied to simulate the failure process of heterogeneous materials. An elastoplastic damage model based on an extended form of the unified twin shear strength (UTSS) criterion is adopted. Polycrystalline modeling is introduced to generate the artificial microstructure of the specimen for the dynamic simulation of the Brazilian splitting test and the uniaxial compression test. The strain rate effect on the predicted dynamic tensile and compressive strength is discussed. The final failure patterns and the dynamic strength increments demonstrate good agreement with experimental results. It is illustrated that the polycrystalline modeling approach combined with the SPH method is promising for simulating the more complex failure processes of heterogeneous materials.

  19. Experiences using DAKOTA stochastic expansion methods in computational simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Templeton, Jeremy Alan; Ruthruff, Joseph R.

    2012-01-01

    Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experiment data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results as to the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels for the methodologies that may be needed to achieve convergence.

  20. Method for simulating dose reduction in digital mammography using the Anscombe transformation

    Energy Technology Data Exchange (ETDEWEB)

    Borges, Lucas R., E-mail: lucas.rodrigues.borges@usp.br; Oliveira, Helder C. R. de; Nunes, Polyana F.; Vieira, Marcelo A. C. [Department of Electrical and Computer Engineering, São Carlos School of Engineering, University of São Paulo, 400 Trabalhador São-Carlense Avenue, São Carlos 13566-590 (Brazil); Bakic, Predrag R.; Maidment, Andrew D. A. [Department of Radiology, Hospital of the University of Pennsylvania, University of Pennsylvania, 3400 Spruce Street, Philadelphia, Pennsylvania 19104 (United States)

    2016-06-15

    Purpose: This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. Methods: The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower-radiation dose. Results: The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image and with uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. Conclusions: A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe
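
    A deliberately simplified sketch of the general scale-and-add-noise idea described in this record; the authors' method additionally uses measured flat-field noise masks, detector offsets, spatially varying gain and the Anscombe transformation, none of which is reproduced here. Pure quantum (Poisson-like) noise with unit gain is assumed only for illustration:

```python
# Simplified dose-reduction simulation: rescale the standard-dose image and add
# zero-mean signal-dependent noise so the variance matches the lower dose.
# All calibration details of the published method are omitted; numbers are hypothetical.
import numpy as np

def simulate_lower_dose(image, dose_ratio, offset=0.0, rng=None):
    """image: standard-dose image (counts); dose_ratio: e.g. 0.5 for half dose."""
    rng = rng or np.random.default_rng()
    signal = (image - offset) * dose_ratio + offset       # rescaled mean signal
    # Extra variance needed beyond what the scaled image already carries,
    # assuming Poisson statistics with unit gain: lambda * r * (1 - r).
    extra_var = np.clip((image - offset) * dose_ratio * (1.0 - dose_ratio), 0, None)
    return signal + rng.normal(scale=np.sqrt(extra_var))

rng = np.random.default_rng(3)
standard = rng.poisson(2000.0, size=(256, 256)).astype(float)
half_dose = simulate_lower_dose(standard, dose_ratio=0.5, rng=rng)
print(standard.mean(), half_dose.mean(), half_dose.var())
```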

  1. Method for simulating dose reduction in digital mammography using the Anscombe transformation

    International Nuclear Information System (INIS)

    Borges, Lucas R.; Oliveira, Helder C. R. de; Nunes, Polyana F.; Vieira, Marcelo A. C.; Bakic, Predrag R.; Maidment, Andrew D. A.

    2016-01-01

    Purpose: This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. Methods: The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower-radiation dose. Results: The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image and with uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. Conclusions: A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe

  2. Multiscale Lattice Boltzmann method for flow simulations in highly heterogenous porous media

    KAUST Repository

    Li, Jun

    2013-01-01

    A lattice Boltzmann method (LBM) for flow simulations in highly heterogeneous porous media at both pore and Darcy scales is proposed in the paper. In the pore scale simulations, the flow of two phases (e.g., oil and gas) or two immiscible fluids (e.g., water and oil) is modeled using cohesive or repulsive forces, respectively. The relative permeability can be computed using pore-scale simulations and seamlessly applied for intermediate and Darcy-scale simulations. A multiscale LBM that can reduce the computational complexity of the existing LBM and transfer information between different scales is implemented. The results of coarse-grid, reduced-order simulations agree very well with the averaged results obtained using a fine grid.

  3. LOMEGA: a low frequency, field implicit method for plasma simulation

    International Nuclear Information System (INIS)

    Barnes, D.C.; Kamimura, T.

    1982-04-01

    Field implicit methods for low frequency plasma simulation by the LOMEGA (Low OMEGA) codes are described. These implicit field methods may be combined with particle pushing algorithms using either Lorentz force or guiding center force models to study two-dimensional, magnetized, electrostatic plasmas. Numerical results for ω_e Δt >> 1 are described. (author)

  4. A method for modeling laterally asymmetric proton beamlets resulting from collimation

    Energy Technology Data Exchange (ETDEWEB)

    Gelover, Edgar; Wang, Dongxu; Flynn, Ryan T.; Hyer, Daniel E. [Department of Radiation Oncology, University of Iowa, 200 Hawkins Drive, Iowa City, Iowa 52242 (United States); Hill, Patrick M. [Department of Human Oncology, University of Wisconsin, 600 Highland Avenue, Madison, Wisconsin 53792 (United States); Gao, Mingcheng; Laub, Steve; Pankuch, Mark [Division of Medical Physics, CDH Proton Center, 4455 Weaver Parkway, Warrenville, Illinois 60555 (United States)

    2015-03-15

    Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σ_x1, σ_x2, σ_y1, σ_y2) together with the spatial location of the maximum dose (μ_x, μ_y). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets.
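
    A small sketch of the asymmetric-Gaussian fluence idea described above: a beam's eye view profile whose standard deviation differs on either side of the maximum along each axis (σ_x1/σ_x2 and σ_y1/σ_y2). It only illustrates the functional form; the depth dose, divergence and trimmer-position corrections of the full beamlet model are not included, and the grid and parameter values are hypothetical:

```python
# Asymmetric 2D Gaussian fluence profile: different sigmas on each side of the
# maximum along each axis. Grid and parameter values are illustrative only.
import numpy as np

def asymmetric_gaussian_bev(x, y, mu_x, mu_y, sx1, sx2, sy1, sy2):
    X, Y = np.meshgrid(x, y, indexing="ij")
    sig_x = np.where(X < mu_x, sx1, sx2)      # left/right sigmas about the maximum
    sig_y = np.where(Y < mu_y, sy1, sy2)      # lower/upper sigmas about the maximum
    return np.exp(-0.5 * ((X - mu_x) / sig_x) ** 2
                  - 0.5 * ((Y - mu_y) / sig_y) ** 2)

x = np.linspace(-20.0, 20.0, 201)             # mm, hypothetical grid
y = np.linspace(-20.0, 20.0, 201)
fluence = asymmetric_gaussian_bev(x, y, mu_x=1.0, mu_y=-0.5,
                                  sx1=3.0, sx2=5.0, sy1=4.0, sy2=4.5)
print(fluence.shape, fluence.max())
```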

  5. A method for modeling laterally asymmetric proton beamlets resulting from collimation

    International Nuclear Information System (INIS)

    Gelover, Edgar; Wang, Dongxu; Flynn, Ryan T.; Hyer, Daniel E.; Hill, Patrick M.; Gao, Mingcheng; Laub, Steve; Pankuch, Mark

    2015-01-01

    Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σ_x1, σ_x2, σ_y1, σ_y2) together with the spatial location of the maximum dose (μ_x, μ_y). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets.

  6. A method for modeling laterally asymmetric proton beamlets resulting from collimation

    Science.gov (United States)

    Gelover, Edgar; Wang, Dongxu; Hill, Patrick M.; Flynn, Ryan T.; Gao, Mingcheng; Laub, Steve; Pankuch, Mark; Hyer, Daniel E.

    2015-01-01

    Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σx1,σx2,σy1,σy2) together with the spatial location of the maximum dose (μx,μy). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets. PMID:25735287

  7. Flow simulation of a Pelton bucket using finite volume particle method

    International Nuclear Information System (INIS)

    Vessaz, C; Jahanbakhsh, E; Avellan, F

    2014-01-01

    The objective of the present paper is to perform an accurate numerical simulation of the high-speed water jet impinging on a Pelton bucket. To reach this goal, the Finite Volume Particle Method (FVPM) is used to discretize the governing equations. FVPM is an arbitrary Lagrangian-Eulerian method, which combines attractive features of Smoothed Particle Hydrodynamics and the conventional mesh-based Finite Volume Method. This method is able to satisfy free surface and no-slip wall boundary conditions precisely. The fluid flow is assumed weakly compressible and the wall boundary is represented by one layer of particles located on the bucket surface. In the present study, the simulations of the flow in a stationary bucket are investigated for three different impinging angles: 72°, 90° and 108°. The particle resolution is first validated by a convergence study. Then, the FVPM results are validated with available experimental data and conventional grid-based Volume Of Fluid simulations. It is shown that the wall pressure field is in good agreement with the experimental and numerical data. Finally, the torque evolution and water sheet location are presented for a simulation of five rotating Pelton buckets.

  8. A hybrid method for flood simulation in small catchments combining hydrodynamic and hydrological techniques

    Science.gov (United States)

    Bellos, Vasilis; Tsakiris, George

    2016-09-01

    The study presents a new hybrid method for the simulation of flood events in small catchments. It combines a physically-based two-dimensional hydrodynamic model and the hydrological unit hydrograph theory. Unit hydrographs are derived using the FLOW-R2D model, which is based on the full form of the two-dimensional Shallow Water Equations, solved by a modified McCormack numerical scheme. The method is tested at a small catchment in a suburb of Athens, Greece, for a storm event which occurred in February 2013. The catchment is divided into three friction zones and unit hydrographs of 15 and 30 min are produced. The infiltration process is simulated by the empirical Kostiakov equation and the Green-Ampt model. The results from the implementation of the proposed hybrid method are compared with recorded data at the hydrometric station at the outlet of the catchment and with the results derived from the fully hydrodynamic model FLOW-R2D. It is concluded that for the case studied, the proposed hybrid method produces results close to those of the fully hydrodynamic simulation at a substantially shorter computational time. This finding, if further verified in a variety of case studies, can be useful in devising effective hybrid tools for two-dimensional flood simulations, which lead to accurate and considerably faster results than those achieved by fully hydrodynamic simulations.
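
    A minimal sketch of the hydrological half of such a hybrid approach: once a unit hydrograph has been derived (here an arbitrary made-up one instead of a FLOW-R2D result), the direct runoff hydrograph is the discrete convolution of the effective rainfall increments with the unit hydrograph ordinates. All numbers are hypothetical:

```python
# Unit hydrograph convolution; ordinates and effective rainfall are invented
# for illustration, not taken from the study.
import numpy as np

# Hypothetical 15-min unit hydrograph ordinates (m^3/s per mm of effective rain)
unit_hydrograph = np.array([0.0, 0.4, 1.2, 2.0, 1.5, 0.9, 0.5, 0.2, 0.1, 0.0])

# Hypothetical effective rainfall per 15-min interval (mm), after infiltration
# losses (e.g. Kostiakov or Green-Ampt) have been subtracted.
effective_rain = np.array([0.0, 2.0, 5.0, 3.0, 1.0, 0.0])

runoff = np.convolve(effective_rain, unit_hydrograph)   # direct runoff (m^3/s)
for step, q in enumerate(runoff):
    print(f"t = {15 * step:4d} min   Q = {q:6.2f} m^3/s")
```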

  9. Different percentages of false-positive results obtained using five methods for the calculation of reference change values based on simulated normal and ln-normal distributions of data

    DEFF Research Database (Denmark)

    Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G

    2016-01-01

    a homeostatic set point that follows a normal (Gaussian) distribution. This set point (or baseline in steady-state) should be estimated from a set of previous samples, but, in practice, decisions based on reference change value are often based on only two consecutive results. The original reference change value......-positive results. The aim of this study was to investigate false-positive results using five different published methods for calculation of reference change value. METHODS: The five reference change value methods were examined using normally and ln-normally distributed simulated data. RESULTS: One method performed...... best in approaching the theoretical false-positive percentages on normally distributed data and another method performed best on ln-normally distributed data. The commonly used reference change value method based on two results (without use of estimated set point) performed worst both on normally...
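
    For orientation, a sketch of the classical reference change value formula commonly quoted in the literature, RCV = sqrt(2) · z · sqrt(CVa² + CVi²), which two-result methods of this kind build on; the paper's five methods differ in how the set point and the distribution (normal vs ln-normal) are handled, and the coefficients of variation used below are hypothetical:

```python
# Classical two-result RCV; CVa = analytical variation, CVi = within-subject
# biological variation, both as fractions. Values are illustrative only.
import math

def reference_change_value(cv_analytical, cv_within_subject, z=1.96):
    """Two-sided RCV (as a fraction) for a chosen z (1.96 ~ 5% false positives)."""
    return math.sqrt(2.0) * z * math.sqrt(cv_analytical**2 + cv_within_subject**2)

rcv = reference_change_value(cv_analytical=0.03, cv_within_subject=0.06)
print(f"RCV ~ {100 * rcv:.1f} %")   # a later result outside +/-RCV is flagged as a change
```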

  10. New method of processing heat treatment experiments with numerical simulation support

    Science.gov (United States)

    Kik, T.; Moravec, J.; Novakova, I.

    2017-08-01

    In this work, the benefits of combining modern software for numerical simulations of welding processes with laboratory research are described. A new method of processing heat treatment experiments is presented that yields relevant input data for numerical simulations of the heat treatment of large parts. It is now possible, using experiments on small test samples, to simulate cooling conditions comparable with the cooling of bigger parts. Results from this method of testing make the boundary conditions of the real cooling process more accurate, and can also be used to improve software databases and optimize computational models. The aim is to refine the computation of temperature fields for large-scale hardened parts based on a new method for determining the temperature dependence of the heat transfer coefficient into the hardening medium for a particular material, a defined maximal thickness of the processed part and given cooling conditions. The paper also presents an example comparing standard and modified (according to the newly suggested methodology) heat transfer coefficient data and their influence on the simulation results. It shows how even small changes influence mainly the distributions of temperature, metallurgical phases, hardness and stresses. With this experiment it is also possible to obtain not only input data and data enabling optimization of the computational model, but also verification data. The greatest advantage of the described method is its independence of the cooling medium used.

  11. Multiple time-scale methods in particle simulations of plasmas

    International Nuclear Information System (INIS)

    Cohen, B.I.

    1985-01-01

    This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep, while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit moment-equation method, the direct implicit method, orbit averaging, and subcycling.

  12. Same Content, Different Methods: Comparing Lecture, Engaged Classroom, and Simulation.

    Science.gov (United States)

    Raleigh, Meghan F; Wilson, Garland Anthony; Moss, David Alan; Reineke-Piper, Kristen A; Walden, Jeffrey; Fisher, Daniel J; Williams, Tracy; Alexander, Christienne; Niceler, Brock; Viera, Anthony J; Zakrajsek, Todd

    2018-02-01

    There is a push to use classroom technology and active teaching methods to replace didactic lectures as the most prevalent format for resident education. This multisite collaborative cohort study involving nine residency programs across the United States compared a standard slide-based didactic lecture, a facilitated group discussion via an engaged classroom, and a high-fidelity, hands-on simulation scenario for teaching the topic of acute dyspnea. The primary outcome was knowledge retention at 2 to 4 weeks. Each teaching method was assigned to three different residency programs in the collaborative according to local resources. Learning objectives were determined by faculty. Pre- and posttest questions were validated and utilized as a measurement of knowledge retention. Each site administered the pretest, taught the topic of acute dyspnea utilizing their assigned method, and administered a posttest 2 to 4 weeks later. Differences between the groups were compared using paired t-tests. A total of 146 residents completed the posttest, and scores increased from baseline across all groups. The average score increased 6% in the standard lecture group (n=47), 11% in the engaged classroom (n=53), and 9% in the simulation group (n=56). The differences in improvement between engaged classroom and simulation were not statistically significant. Compared to standard lecture, both engaged classroom and high-fidelity simulation were associated with a statistically significant improvement in knowledge retention. Knowledge retention after engaged classroom and high-fidelity simulation did not significantly differ. More research is necessary to determine if different teaching methods result in different levels of comfort and skill with actual patient care.

  13. Particle-transport simulation with the Monte Carlo method

    International Nuclear Information System (INIS)

    Carter, L.L.; Cashwell, E.D.

    1975-01-01

    Attention is focused on the application of the Monte Carlo method to particle transport problems, with emphasis on neutron and photon transport. Topics covered include sampling methods, mathematical prescriptions for simulating particle transport, mechanics of simulating particle transport, neutron transport, and photon transport. A literature survey of 204 references is included. (GMT)
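
    A toy example of the kind of sampling prescription covered in such surveys: in a homogeneous medium with total macroscopic cross-section Σt, the distance to the next collision is sampled as s = -ln(ξ)/Σt with ξ uniform on (0, 1]. The cross-section value is arbitrary:

```python
# Sampling free paths from an exponential distribution; Sigma_t is hypothetical.
import numpy as np

rng = np.random.default_rng(0)
sigma_t = 0.5                           # hypothetical total cross-section (1/cm)
xi = 1.0 - rng.random(1_000_000)        # uniform on (0, 1], avoids log(0)
s = -np.log(xi) / sigma_t               # sampled free paths (cm)

print("sampled mean free path :", s.mean())
print("theoretical 1/Sigma_t  :", 1.0 / sigma_t)
```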

  14. The Fractional Step Method Applied to Simulations of Natural Convective Flows

    Science.gov (United States)

    Westra, Douglas G.; Heinrich, Juan C.; Saxon, Jeff (Technical Monitor)

    2002-01-01

    This paper describes research done to apply the Fractional Step Method to finite-element simulations of natural convective flows in pure liquids, permeable media, and in a directionally solidified metal alloy casting. The Fractional Step Method has been applied commonly to high Reynolds number flow simulations, but is less common for low Reynolds number flows, such as natural convection in liquids and in permeable media. The Fractional Step Method offers increased speed and reduced memory requirements by allowing non-coupled solution of the pressure and the velocity components. The Fractional Step Method has particular benefits for predicting flows in a directionally solidified alloy, since other methods presently employed are not very efficient. Previously, the most suitable method for predicting flows in a directionally solidified binary alloy was the penalty method. The penalty method requires direct matrix solvers, due to the penalty term. The Fractional Step Method allows iterative solution of the finite element stiffness matrices, thereby allowing more efficient solution of the matrices. The Fractional Step Method also lends itself to parallel processing, since the velocity component stiffness matrices can be built and solved independently of each other. The finite-element simulations of a directionally solidified casting are used to predict macrosegregation in directionally solidified castings. In particular, the finite-element simulations predict the existence of 'channels' within the processing mushy zone and subsequently 'freckles' within the fully processed solid, which are known to result from macrosegregation, or what is often referred to as thermo-solutal convection. These freckles cause material property non-uniformities in directionally solidified castings; therefore many of these castings are scrapped. The phenomenon of natural convection in an alloy undergoing directional solidification, or thermo-solutal convection, will be explained. The

  15. Evaluation of an improved method of simulating lung nodules in chest tomosynthesis

    International Nuclear Information System (INIS)

    Svalkvist, Angelica; Allansdotter Johnsson, Aase; Vikgren, Jenny

    2012-01-01

    Background Simulated pathology is a valuable complement to clinical images in studies aiming at evaluating an imaging technique. In order for a study using simulated pathology to be valid, it is important that the simulated pathology reflects the characteristics of real pathology in a realistic way. Purpose To perform a thorough evaluation of a nodule simulation method for chest tomosynthesis, comparing the detection rate and appearance of the artificial nodules with those of real nodules in an observer performance experiment. Material and Methods A cohort consisting of 64 patients, 38 patients with a total of 129 identified pulmonary nodules and 26 patients without identified pulmonary nodules, was used in the study. Simulated nodules, matching the real clinically found pulmonary nodules by size, attenuation, and location, were created and randomly inserted into the tomosynthesis section images of the patients. Three thoracic radiologists and one radiology resident reviewed the images in an observer performance study divided into two parts. The first part included nodule detection and the second part included rating of the visual appearance of the nodules. The results were evaluated using a modified receiver-operating characteristic (ROC) analysis. Results The sensitivities for real and simulated nodules were comparable, as the area under the modified ROC curve (AUC) was close to 0.5 for all observers (range, 0.43-0.55). Even though the ratings of visual appearance for real and simulated nodules overlapped considerably, the statistical analysis revealed that the observers were able to separate simulated nodules from real nodules (AUC value range, 0.70-0.74). Conclusion The simulation method can be used to create artificial lung nodules that have similar detectability to real nodules in chest tomosynthesis, although experienced thoracic radiologists may be able to distinguish them from real nodules.

  16. Natural tracer test simulation by stochastic particle tracking method

    International Nuclear Information System (INIS)

    Ackerer, P.; Mose, R.; Semra, K.

    1990-01-01

    Stochastic particle tracking methods are well adapted to 3D transport simulations where discretization requirements of other methods usually cannot be satisfied. They do need a very accurate approximation of the velocity field. The described code is based on the mixed hybrid finite element method (MHFEM) to calculate the piezometric head and the velocity field. The random-walk method is used to simulate mass transport. The main advantages of the MHFEM over FD or FE are the simultaneous calculation of pressure and velocity, which are considered as unknowns; the possibility of interpolating velocities everywhere; and the continuity of the normal component of the velocity vector from one element to another. For these reasons, the MHFEM is well adapted for particle tracking methods. After a general description of the numerical methods, the model is used to simulate the observations made during the Twin Lake Tracer Test in 1983. A good match is found between observed and simulated heads and concentrations. (Author) (12 refs., 4 figs.)
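
    A bare-bones sketch of the random-walk step used in particle-tracking transport codes of this kind: advection by the local velocity plus an isotropic dispersive jump with variance 2·D·dt per coordinate. A uniform velocity field and a scalar dispersion coefficient are assumed purely for illustration; the paper obtains the velocity field from the mixed hybrid finite element solution:

```python
# Random-walk particle tracking with uniform velocity and isotropic dispersion;
# all values are hypothetical.
import numpy as np

rng = np.random.default_rng(5)
n_particles, n_steps, dt = 10_000, 200, 0.1     # hypothetical values
velocity = np.array([1.0, 0.2, 0.0])            # m/day, uniform for the sketch
D = 0.05                                        # m^2/day, isotropic dispersion

x = np.zeros((n_particles, 3))                  # all particles start at the origin
for _ in range(n_steps):
    x += velocity * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=x.shape)

print("mean position :", x.mean(axis=0))
print("plume spread  :", x.std(axis=0))
```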

  17. Simulation teaching method in Engineering Optics

    Science.gov (United States)

    Lu, Qieni; Wang, Yi; Li, Hongbin

    2017-08-01

    We introduce a pedagogical method of theoretical simulation as one major means of teaching "Engineering Optics" within the course quality improvement action plan (Qc) at our school. Students, in groups of three to five, complete simulations of interference, diffraction, electromagnetism and polarization of light; each student is evaluated and scored in light of his performance in the interviews between the teacher and the student, and each student can opt to be interviewed many times until he is satisfied with his score and learning. After three years of Qc practice, a remarkable teaching and learning effect has been obtained. Such a theoretical simulation experiment is a very valuable teaching method for physical optics, which is highly theoretical and abstruse. This teaching methodology works well in training students how to ask questions and how to solve problems, and it can also stimulate their interest in research learning and their initiative to develop self-confidence and a sense of innovation.

  18. Recent simulation results of the magnetic induction tomography forward problem

    Directory of Open Access Journals (Sweden)

    Stawicki Krzysztof

    2016-06-01

    Full Text Available In this paper we present the results of simulations of the Magnetic Induction Tomography (MIT) forward problem. Two complementary calculation techniques have been implemented and coupled, namely the finite element method (applied in the commercial software Comsol Multiphysics) and algebraic manipulations of the basic relationships of electromagnetism (in Matlab). The developed combination saves a lot of time and makes better use of the available computer resources.

  19. Temperature Simulation of Greenhouse with CFD Methods and Optimal Sensor Placement

    Directory of Open Access Journals (Sweden)

    Yanzheng Liu

    2014-03-01

    Full Text Available The accuracy of information monitoring is significant for increasing the effectiveness of greenhouse environment control. In this paper, taking the simulation of the temperature field in the greenhouse as an example, a CFD (Computational Fluid Dynamics) simulation model of the greenhouse microclimate based on the principle of thermal environment formation was established, and the temperature distributions under mechanical ventilation were simulated. The results showed that the CFD model and its solution could describe the changing temperature environment within the greenhouse; the most suitable turbulence model was the standard k-ε model. Under mechanical ventilation, the average deviation between the simulated and measured values was 0.6, which was 4.5 percent of the measured value. The temperature field showed an obvious layered structure, and the temperature in the greenhouse model decreased gradually from the periphery to the center. Based on these results, the number of sensors and the optimal sensor placement were determined with the CFD simulation method.

  20. Quantitative evaluation for training results of nuclear plant operator on BWR simulator

    International Nuclear Information System (INIS)

    Sato, Takao; Sato, Tatsuaki; Onishi, Hiroshi; Miyakita, Kohji; Mizuno, Toshiyuki

    1985-01-01

    Recently, the reliability of nuclear power plants has risen considerably, and abnormal phenomena in the actual plants are rarely encountered. Therefore, training using simulators becomes more and more important. At BWR Operator Training Center Corp., the training of operators of BWR power plants has been continued for about ten years using a simulator having nearly the same functions as the actual plants. The recent high capacity ratio of nuclear power plants has been largely supported by excellent operators trained in this way. Taking the opportunity of the start of operation of the No. 2 simulator, effort has been exerted to quantitatively grasp the effect of training and to raise the quality of training. The outline of seven training courses is shown. The technical ability required of operators, the items used to quantify the effect of training (that is, operational errors and the time required for operation), the method of quantification, the method of collecting the data and the results of application to actual training are described. It was found that this method is suitable for quantifying the effect of training. (Kako, I.)

  1. A method of simulating intensity modulation-direct detection WDM systems

    Institute of Scientific and Technical Information of China (English)

    HUANG Jing; YAO Jian-quan; LI En-bang

    2005-01-01

    In the simulation of intensity modulation-direct detection WDM systems, when the dispersion and nonlinear effects play equally important roles, the intensity fluctuation caused by cross-phase modulation (XPM) may be overestimated as a result of an improper step size. Therefore, the step size in the numerical simulation should be selected to suppress the false XPM intensity modulation (keeping it much smaller than the signal power). According to this criterion, the step size varies along the fiber. For a WDM system, the step size depends on the channel separation, and different types of transmission fiber require different step sizes. In the split-step Fourier method, this criterion can reduce the simulation time, and when the step size is larger than 100 meters, the simulation accuracy can also be improved.
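
    A generic split-step Fourier step for the scalar nonlinear Schrödinger equation, included to make the role of the step size h concrete (dispersion applied in the frequency domain, Kerr nonlinearity in the time domain). The criterion in the record ties h to the channel separation so that spurious XPM stays well below the signal power; here h is simply a user-chosen constant and all fibre parameters are hypothetical:

```python
# One split-step: linear (dispersion) half in the frequency domain, nonlinear
# (Kerr) half in the time domain. Parameters are illustrative, not a real link.
import numpy as np

def split_step(field, h, dt, beta2, gamma):
    """Advance the complex envelope by one step of length h (meters)."""
    n = field.size
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)
    # Dispersion operator exp(i*beta2/2*omega^2*h) applied in the frequency domain
    field = np.fft.ifft(np.fft.fft(field) * np.exp(0.5j * beta2 * omega**2 * h))
    # Kerr phase rotation proportional to the instantaneous power
    return field * np.exp(1j * gamma * np.abs(field) ** 2 * h)

t = np.linspace(-50e-12, 50e-12, 1024, endpoint=False)        # time window (s)
dt = t[1] - t[0]
pulse = np.sqrt(1e-3) * np.exp(-t**2 / (2 * (10e-12) ** 2))   # 10 ps Gaussian, 1 mW peak
for _ in range(100):                                          # 100 steps of 50 m
    pulse = split_step(pulse, h=50.0, dt=dt, beta2=-21e-27, gamma=1.3e-3)
print("peak power after 5 km:", (np.abs(pulse) ** 2).max())
```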

  2. Growth Kinetics of the Homogeneously Nucleated Water Droplets: Simulation Results

    International Nuclear Information System (INIS)

    Mokshin, Anatolii V; Galimzyanov, Bulat N

    2012-01-01

    The growth of homogeneously nucleated droplets in water vapor at the fixed temperatures T = 273, 283, 293, 303, 313, 323, 333, 343, 353, 363 and 373 K (the pressure p = 1 atm.) is investigated on the basis of coarse-grained molecular dynamics simulation data with the mW model. The treatment of the simulation results is performed by means of the statistical method within the mean-first-passage-time approach, where the reaction coordinate is associated with the largest droplet size. It is found that the water droplet growth is characterized by the following features: (i) the rescaled growth law is unified at all the considered temperatures and (ii) the droplet growth evolves with acceleration and follows a power law.

  3. Modeling and Simulation of DC Power Electronics Systems Using Harmonic State Space (HSS) Method

    DEFF Research Database (Denmark)

    Kwon, Jun Bum; Wang, Xiongfei; Bak, Claus Leth

    2015-01-01

    Based on state-space averaging and generalized averaging, these also have limitations to show the same results as with the non-linear time domain simulations. This paper presents a modeling and simulation method for a large dc power electronic system by using Harmonic State Space (HSS) modeling......For the efficiency and simplicity of electric systems, dc based power electronics systems are widely used in a variety of applications such as electric vehicles, ships, aircraft and also in homes. In these systems, there could be a number of dynamic interactions between loads and other dc-dc....... Through this method, the required computation time and CPU memory for large dc power electronics systems can be reduced. Besides, the achieved results show the same results as with the non-linear time domain simulation, but with a faster simulation time, which is beneficial in a large network....

  4. Reliability Verification of DBE Environment Simulation Test Facility by using Statistics Method

    International Nuclear Information System (INIS)

    Jang, Kyung Nam; Kim, Jong Soeg; Jeong, Sun Chul; Kyung Heum

    2011-01-01

    In a nuclear power plant, all safety-related equipment, including cables, exposed to the harsh environment should undergo equipment qualification (EQ) according to IEEE Std 323. There are three types of qualification methods: type testing, operating experience and analysis. In order to environmentally qualify safety-related equipment using the type testing method, rather than the analysis or operating experience methods, a representative sample of the equipment, including interfaces, should be subjected to a series of tests. Among these tests, the Design Basis Event (DBE) environment simulation test is the most important. The DBE simulation test is performed in a DBE simulation test chamber according to the postulated DBE conditions, including the specified high-energy line break (HELB), loss of coolant accident (LOCA), main steam line break (MSLB), etc., after thermal and radiation aging. Because most DBE conditions involve 100% humidity, high temperature steam should be used to trace the temperature and pressure of the DBE condition. During the DBE simulation test, if high temperature steam under high pressure is injected into the DBE test chamber, the temperature and pressure in the test chamber rapidly increase above the target temperature. Therefore, the temperature and pressure in the test chamber keep fluctuating during the DBE simulation test to meet the target temperature and pressure. We should ensure the fairness and accuracy of the test results by confirming the performance of the DBE environment simulation test facility. In this paper, a statistical method is used to verify the reliability of the DBE environment simulation test facility.

  5. Simulation of 3D parachute fluid–structure interaction based on nonlinear finite element method and preconditioning finite volume method

    Directory of Open Access Journals (Sweden)

    Fan Yuxin

    2014-12-01

    Full Text Available A fluid–structure interaction method combining a nonlinear finite element algorithm with a preconditioning finite volume method is proposed in this paper to simulate parachute transient dynamics. This method uses a three-dimensional membrane–cable fabric model to represent a parachute system at a highly folded configuration. The large shape change during parachute inflation is computed by the nonlinear Newton–Raphson iteration and the linear system equation is solved by the generalized minimal residual (GMRES) method. A membrane wrinkling algorithm is also utilized to evaluate the special uniaxial tension state of membrane elements on the parachute canopy. In order to avoid large time expenses during structural nonlinear iteration, the implicit Hilber–Hughes–Taylor (HHT) time integration method is employed. For the fluid dynamic simulations, the Roe and HLLC (Harten–Lax–van Leer contact) scheme has been modified and extended to compute flow problems at all speeds. The lower–upper symmetric Gauss–Seidel (LU-SGS) approximate factorization is applied to accelerate the numerical convergence speed. Finally, the test model of a highly folded C-9 parachute is simulated at a prescribed speed and the results show similar characteristics compared with experimental results and previous literature.

  6. Two-dimensional simulation of broad-band ferrite electromagnetic wave absorbers by using the FDTD method

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Hyun Jin; Kim, Dong Il [Korea Maritime University, Busan (Korea, Republic of)

    2004-10-15

    The purpose of this simulation study is to design and fabricate an electromagnetic (EM) wave absorber in order to develop a wide-band absorber. We have proposed and modeled a bird-eye-type and cutting-cone-type EM wave absorber by using the equivalent material constants method (EMCM), and we simulated them by using a finite-difference time-domain (FDTD) method. A two or a three-dimensional simulation would be desirable to analyze the EM wave absorber characteristics and to develop new structures. The two-dimensional FDTD simulation requires less computer resources than a three-dimensional simulation to consider the structural effects of the EM wave absorbers. The numerical simulation by using the FDTD method shows propagating EM waves in various types of periodic structure EM wave absorbers. Simultaneously, a Fourier analysis is used to characterize the input pulse and the reflected EM waves for ferrite absorbers with various structures. The results have a wide-band reflection-reducing characteristic. The validity of the proposed model was confirmed by comparing the two-dimensional simulation with the experimental results. The simulations were carried out in the frequency band from 30 MHz to 10 GHz.
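
    For reference, the leap-frog update at the heart of any FDTD simulation, reduced to one dimension in free space so the scheme is visible at a glance; the actual study uses a two-dimensional grid with dispersive ferrite material models and the equivalent material constants method, none of which is reproduced in this sketch:

```python
# 1D free-space Yee/FDTD update with a soft Gaussian source; grid and step
# counts are arbitrary.
import numpy as np

c0 = 299_792_458.0
nz, n_steps = 400, 800
dz = 1e-3                          # 1 mm cells (hypothetical)
dt = dz / (2.0 * c0)               # comfortably below the Courant limit

ez = np.zeros(nz)                  # electric field
hy = np.zeros(nz - 1)              # magnetic field, staggered half a cell

for step in range(n_steps):
    hy += dt / (4e-7 * np.pi * dz) * (ez[1:] - ez[:-1])          # update H from curl E
    ez[1:-1] += dt / (8.854e-12 * dz) * (hy[1:] - hy[:-1])       # update E from curl H
    ez[nz // 4] += np.exp(-((step - 60) / 20.0) ** 2)            # soft Gaussian source

print("peak |Ez| on the grid:", np.abs(ez).max())
```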

  7. Two-dimensional simulation of broad-band ferrite electromagnetic wave absorbers by using the FDTD method

    International Nuclear Information System (INIS)

    Yoon, Hyun Jin; Kim, Dong Il

    2004-01-01

    The purpose of this simulation study is to design and fabricate an electromagnetic (EM) wave absorber in order to develop a wide-band absorber. We have proposed and modeled a bird-eye-type and cutting-cone-type EM wave absorber by using the equivalent material constants method (EMCM), and we simulated them by using a finite-difference time-domain (FDTD) method. A two or a three-dimensional simulation would be desirable to analyze the EM wave absorber characteristics and to develop new structures. The two-dimensional FDTD simulation requires less computer resources than a three-dimensional simulation to consider the structural effects of the EM wave absorbers. The numerical simulation by using the FDTD method shows propagating EM waves in various types of periodic structure EM wave absorbers. Simultaneously, a Fourier analysis is used to characterize the input pulse and the reflected EM waves for ferrite absorbers with various structures. The results have a wide-band reflection-reducing characteristic. The validity of the proposed model was confirmed by comparing the two-dimensional simulation with the experimental results. The simulations were carried out in the frequency band from 30 MHz to 10 GHz.

  8. The Simulation of the Recharging Method Based on Solar Radiation for an Implantable Biosensor.

    Science.gov (United States)

    Li, Yun; Song, Yong; Kong, Xianyue; Li, Maoyuan; Zhao, Yufei; Hao, Qun; Gao, Tianxin

    2016-09-10

    A method of recharging implantable biosensors based on solar radiation is proposed. Firstly, the models of the proposed method are developed. Secondly, the recharging processes based on solar radiation are simulated using Monte Carlo (MC) method and the energy distributions of sunlight within the different layers of human skin have been achieved and discussed. Finally, the simulation results are verified experimentally, which indicates that the proposed method will contribute to achieve a low-cost, convenient and safe method for recharging implantable biosensors.

  9. Spectral Methods in Numerical Plasma Simulation

    DEFF Research Database (Denmark)

    Coutsias, E.A.; Hansen, F.R.; Huld, T.

    1989-01-01

    An introduction is given to the use of spectral methods in numerical plasma simulation. As examples of the use of spectral methods, solutions to the two-dimensional Euler equations in both a simple, doubly periodic region, and on an annulus will be shown. In the first case, the solution is expanded...
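
    A small example of the core spectral operation such methods rely on: differentiating a periodic function by multiplying its Fourier coefficients by ik, which is spectrally accurate on a periodic domain. The test function is arbitrary and this is not the solver described in the record:

```python
# Spectral differentiation of a smooth periodic function via the FFT.
import numpy as np

n = 128
x = 2.0 * np.pi * np.arange(n) / n             # periodic grid on [0, 2*pi)
u = np.exp(np.sin(x))                          # smooth periodic test function

k = np.fft.fftfreq(n, d=1.0 / n)               # integer wavenumbers
du_spectral = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
du_exact = np.cos(x) * np.exp(np.sin(x))

print("max error:", np.abs(du_spectral - du_exact).max())   # ~1e-13 for n = 128
```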

  10. Simulation testing the robustness of stock assessment models to error: some results from the ICES strategic initiative on stock assessment methods

    DEFF Research Database (Denmark)

    Deroba, J. J.; Butterworth, D. S.; Methot, R. D.

    2015-01-01

    The World Conference on Stock Assessment Methods (July 2013) included a workshop on testing assessment methods through simulations. The exercise was made up of two steps applied to datasets from 14 representative fish stocks from around the world. Step 1 involved applying stock assessments to dat...

  11. Computer Simulation of Nonuniform MTLs via Implicit Wendroff and State-Variable Methods

    Directory of Open Access Journals (Sweden)

    L. Brancik

    2011-04-01

    Full Text Available The paper deals with techniques for the computer simulation of nonuniform multiconductor transmission lines (MTLs) based on the implicit Wendroff and the state-variable methods. The techniques fall into a class of finite-difference time-domain (FDTD) methods useful for solving various electromagnetic systems. Their basic variants are extended and modified to enable solving both voltage and current distributions along nonuniform MTL’s wires and their sensitivities with respect to lumped and distributed parameters. An experimental error analysis is performed based on the Thomson cable, whose analytical solutions are known, and some examples of simulation of both uniform and nonuniform MTLs are presented. Based on the Matlab language programme, CPU times are analyzed to compare the efficiency of the methods. Some results for nonlinear MTL simulation are presented as well.

  12. A virtual source method for Monte Carlo simulation of Gamma Knife Model C

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Hoon; Kim, Yong Kyun [Hanyang University, Seoul (Korea, Republic of); Chung, Hyun Tai [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    2016-05-15

    The Monte Carlo simulation method has been used for dosimetry in radiation treatment. Monte Carlo simulation is a method that determines particle paths and dosimetry using random numbers. Recently, owing to the fast processing ability of computers, it has become possible to treat a patient more precisely. However, it is necessary to increase the simulation time to reduce the uncertainty of the results. When generating particles from the cobalt source in a simulation, many particles are cut off, so accurate simulation takes a long time. For efficiency, we generated a virtual source that has the phase space distribution acquired from a single Gamma Knife channel. We performed the simulation using the virtual sources on the 201 channels and compared the measurement with simulations using virtual sources and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with a virtual source executed about 50 times faster than the original source code and there was no statistically significant difference in the simulated results.

  13. A virtual source method for Monte Carlo simulation of Gamma Knife Model C

    International Nuclear Information System (INIS)

    Kim, Tae Hoon; Kim, Yong Kyun; Chung, Hyun Tai

    2016-01-01

    The Monte Carlo simulation method has been used for dosimetry in radiation treatment. Monte Carlo simulation is a method that determines particle paths and dosimetry using random numbers. Recently, owing to the fast processing ability of computers, it has become possible to treat a patient more precisely. However, it is necessary to increase the simulation time to reduce the uncertainty of the results. When generating particles from the cobalt source in a simulation, many particles are cut off, so accurate simulation takes a long time. For efficiency, we generated a virtual source that has the phase space distribution acquired from a single Gamma Knife channel. We performed the simulation using the virtual sources on the 201 channels and compared the measurement with simulations using virtual sources and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with a virtual source executed about 50 times faster than the original source code and there was no statistically significant difference in the simulated results

  14. ANOVA parameters influence in LCF experimental data and simulation results

    Directory of Open Access Journals (Sweden)

    Vercelli A.

    2010-06-01

    Full Text Available The virtual design of components undergoing thermo-mechanical fatigue (TMF) and plastic strains is usually run in many phases. The numerical finite element method gives a useful instrument which becomes increasingly effective as the geometrical and numerical modelling gets more accurate. The constitutive model definition plays an important role in the effectiveness of the numerical simulation [1, 2] as, for example, shown in Figure 1, which illustrates how a good cyclic plasticity constitutive model can simulate a cyclic load experiment. The component life estimation is the subsequent phase, and it needs complex damage and life estimation models [3-5] which take into account the several parameters and phenomena contributing to damage and life duration. The calibration of these constitutive and damage models requires an accurate testing activity. In the present paper the main topic of the research activity is to investigate whether the parameters that prove to be influential in the experimental activity also influence the numerical simulations, thus defining the effectiveness of the models in taking into account all the phenomena actually influencing the life of the component. To this aim, a procedure to tune the parameters needed to estimate the life of mechanical components undergoing TMF and plastic strains is presented for a commercial steel. This procedure aims to be easy and to allow calibrating both the material constitutive model (for the numerical structural simulation) and the damage and life model (for life assessment). The procedure has been applied to specimens. The experimental activity has been developed on three sets of tests run at several temperatures: static tests, high cycle fatigue (HCF) tests, and low cycle fatigue (LCF) tests. The numerical structural FEM simulations have been run on a commercial nonlinear solver, ABAQUS® 6.8. The simulations replicated the experimental tests. The stress, strain, and thermal results from the thermo...

  15. Direct numerical simulation of the Rayleigh-Taylor instability with the spectral element method

    International Nuclear Information System (INIS)

    Zhang Xu; Tan Duowang

    2009-01-01

    A novel method is proposed to simulate Rayleigh-Taylor instabilities using a specially-developed unsteady three-dimensional high-order spectral element method code. The numerical model used consists of Navier-Stokes equations and a transport-diffusive equation. The code is first validated with the results of linear stability perturbation theory. Then several characteristics of the Rayleigh-Taylor instabilities are studied using this three-dimensional unsteady code, including instantaneous turbulent structures and statistical turbulent mixing heights under different initial wave numbers. These results indicate that turbulent structures of Rayleigh-Taylor instabilities are strongly dependent on the initial conditions. The results also suggest that a high-order numerical method should provide the capability of simulating small scale fluctuations of Rayleigh-Taylor instabilities of turbulent flows. (authors)

  16. Actual interaction effects between policy measures for energy efficiency-A qualitative matrix method and quantitative simulation results for households

    International Nuclear Information System (INIS)

    Boonekamp, Piet G.M.

    2006-01-01

    Starting from the conditions for a successful implementation of saving options, a general framework was developed to investigate possible interaction effects in sets of energy policy measures. Interaction regards the influence of one measure on the energy saving effect of another measure. The method delivers a matrix for all combinations of measures, with each cell containing qualitative information on the strength and type of interaction: overlapping, reinforcing, or independent of each other. Results are presented for the set of policy measures on household energy efficiency in the Netherlands for 1990-2003. The second part regards a quantitative analysis of the interaction effects between three major measures: a regulatory energy tax, investment subsidies and regulation of gas use for space heating. Using a detailed bottom-up model, household energy use in the period 1990-2000 was simulated with and without these measures. The results indicate that combinations of two or three policy measures yield 13-30% less effect than the sum of the effects of the separate measures
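
    The interaction bookkeeping described above can be illustrated with a small back-of-the-envelope sketch; all the energy-use figures below are invented and only show how a combined effect is compared with the sum of the separate effects obtained from with/without model runs.

```python
# Minimal sketch of comparing combined vs. separate policy effects;
# all figures are invented (arbitrary units), for illustration only.
baseline = 400.0                      # simulated household energy use, no measures

use_with = {                          # simulated use with one measure active
    "energy_tax": 380.0,
    "subsidies": 390.0,
    "regulation": 385.0,
}
use_all_three = 365.0                 # simulated use with all measures active

individual = {m: baseline - u for m, u in use_with.items()}
sum_of_parts = sum(individual.values())          # savings if applied separately
combined = baseline - use_all_three              # savings when combined

interaction_loss = 1.0 - combined / sum_of_parts
print(f"combined effect is {interaction_loss:.0%} smaller than the sum of parts")
```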

  17. Analysis of Monte Carlo methods for the simulation of photon transport

    International Nuclear Information System (INIS)

    Carlsson, G.A.; Kusoffsky, L.

    1975-01-01

    In connection with the transport of low-energy photons (30 - 140 keV) through layers of water of different thicknesses, various aspects of Monte Carlo methods are examined in order to improve their effectiveness (to produce statistically more reliable results with shorter computer times) and to bridge the gap between more physical methods and more mathematical ones. The calculations are compared with results of experiments involving the simulation of photon transport, using direct methods and collision density ones. (J.S.)
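
    In the same spirit, a minimal Monte Carlo sketch of photon transport through a water layer is shown below; it only samples the distance to the first interaction and compares the uncollided fraction with the analytic value, and the attenuation coefficient is an assumed round number.

```python
# Toy Monte Carlo: uncollided photon fraction through a water layer.
import numpy as np

rng = np.random.default_rng(1)
mu = 0.02          # assumed total attenuation coefficient of water [1/mm]
thickness = 50.0   # mm
n = 200_000

free_path = rng.exponential(1.0 / mu, n)     # distance to first interaction
uncollided = np.mean(free_path > thickness)  # fraction crossing without collision

print(f"MC estimate : {uncollided:.4f}")
print(f"analytic    : {np.exp(-mu * thickness):.4f}")
```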

  18. Relative solvation free energies calculated using an ab initio QM/MM-based free energy perturbation method: dependence of results on simulation length.

    Science.gov (United States)

    Reddy, M Rami; Erion, Mark D

    2009-12-01

    Molecular dynamics (MD) simulations in conjunction with a thermodynamic perturbation approach were used to calculate relative solvation free energies of five pairs of small molecules, namely: (1) methanol to ethane, (2) acetone to acetamide, (3) phenol to benzene, (4) 1,1,1-trichloroethane to ethane, and (5) phenylalanine to isoleucine. Two studies were performed to evaluate the dependence of the convergence of these calculations on MD simulation length and starting configuration. In the first study, each transformation started from the same well-equilibrated configuration and the simulation length was varied from 230 to 2,540 ps. The results indicated that for transformations involving small structural changes, a simulation length of 860 ps is sufficient to obtain satisfactory convergence. In contrast, transformations involving relatively large structural changes, such as phenylalanine to isoleucine, require a significantly longer simulation length (>2,540 ps) to obtain satisfactory convergence. In the second study, the transformation was completed starting from three different configurations, using in each case 860 ps of MD simulation. The results from this study suggest that performing one long simulation may be better than averaging results from three different simulations using a shorter simulation length and three different starting configurations.
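
    The estimator underlying such relative free-energy calculations is the Zwanzig free-energy perturbation formula; the sketch below applies it to synthetic per-frame energy differences to show how the estimate drifts with sample size (a stand-in for simulation length). The numbers are illustrative only.

```python
# Free-energy perturbation (Zwanzig) estimator on synthetic data.
import numpy as np

kT = 0.596                      # kcal/mol at ~300 K
rng = np.random.default_rng(2)

# Hypothetical per-frame potential-energy differences U_B - U_A (kcal/mol)
dU = rng.normal(loc=1.5, scale=1.0, size=20_000)

def fep_estimate(du, kT):
    """dA = -kT * ln < exp(-dU/kT) >, averaged over state-A samples."""
    return -kT * np.log(np.mean(np.exp(-du / kT)))

for n in (500, 2_000, 20_000):
    print(f"n = {n:6d}  dA = {fep_estimate(dU[:n], kT):6.3f} kcal/mol")
```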

  19. Experimental Results and Numerical Simulation of the Target RCS using Gaussian Beam Summation Method

    Directory of Open Access Journals (Sweden)

    Ghanmi Helmi

    2018-05-01

    Full Text Available This paper presents a numerical and experimental study of the Radar Cross Section (RCS) of radar targets using the Gaussian Beam Summation (GBS) method. The GBS method has several advantages over ray methods, mainly regarding the caustic problem. To evaluate the performance of the chosen method, we started the analysis of the RCS using Gaussian Beam Summation (GBS) and Gaussian Beam Launching (GBL), the asymptotic models Physical Optics (PO) and Geometrical Theory of Diffraction (GTD), and the rigorous Method of Moments (MoM). Then, we showed the experimental validation of the numerical results using measurements carried out in the anechoic chamber of Lab-STICC at ENSTA Bretagne. The numerical and experimental results of the RCS are studied and given as a function of various parameters: polarization type, target size, number of Gaussian beams, and Gaussian beam width.

  20. Estimation of functional failure probability of passive systems based on subset simulation method

    International Nuclear Information System (INIS)

    Wang Dongqing; Wang Baosheng; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to address the multi-dimensional epistemic uncertainties and the small functional failure probability of passive systems, an innovative reliability analysis algorithm called subset simulation, based on Markov chain Monte Carlo, was presented. The method is founded on the idea that a small failure probability can be expressed as a product of larger conditional failure probabilities by introducing a proper choice of intermediate failure events. Markov chain Monte Carlo simulation was implemented to efficiently generate conditional samples for estimating the conditional failure probabilities. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of the passive system and the numerical values of its input parameters were considered in this paper, and the probability of functional failure was then estimated with the subset simulation method. The numerical results demonstrate that the subset simulation method has high computational efficiency and excellent accuracy compared with traditional probability analysis methods. (authors)
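
    A toy version of subset simulation with a component-wise Markov chain Monte Carlo step is sketched below for a cheap analytical limit state with standard normal inputs; it is a deliberately simplified, single-step-per-sample variant and not the AP1000 model.

```python
# Toy subset simulation: estimate P[g(X) > b] for a cheap analytical g.
import numpy as np

rng = np.random.default_rng(3)
dim, N, p0, b = 2, 2000, 0.1, 3.5
g = lambda x: x.sum(axis=-1) / np.sqrt(dim)      # g(X) ~ N(0,1); P(g > 3.5) ~ 2.3e-4

def mcmc_step(x, level, sigma=1.0):
    """One component-wise modified-Metropolis move staying inside {g > level}."""
    cand = x.copy()
    for j in range(dim):
        xj = x[j] + sigma * rng.normal()
        # Metropolis ratio of standard normal marginals
        if rng.random() < np.exp(0.5 * (x[j] ** 2 - xj ** 2)):
            cand[j] = xj
    return cand if g(cand) > level else x

x = rng.normal(size=(N, dim))
prob = 1.0
for _ in range(20):                               # at most 20 intermediate levels
    gx = g(x)
    level = np.quantile(gx, 1.0 - p0)
    if level >= b:                                # final level reached
        prob *= np.mean(gx > b)
        break
    prob *= p0                                    # conditional probability per level
    seeds = x[gx > level]
    chains = [seeds[i % len(seeds)].copy() for i in range(N)]
    x = np.array([mcmc_step(c, level) for c in chains])

print(f"subset simulation estimate: {prob:.2e}  (exact ~ 2.3e-04)")
```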

  1. Petascale molecular dynamics simulation using the fast multipole method on K computer

    KAUST Repository

    Ohno, Yousuke; Yokota, Rio; Koyama, Hiroshi; Morimoto, Gentaro; Hasegawa, Aki; Masumoto, Gen; Okimoto, Noriaki; Hirano, Yoshinori; Ibeid, Huda; Narumi, Tetsu; Taiji, Makoto

    2014-01-01

    In this paper, we report all-atom simulations of molecular crowding: a result from the full node simulation on the "K computer", which is a 10-PFLOPS supercomputer in Japan. The capability of this machine enables us to perform simulations of crowded cellular environments, which are more realistic compared to conventional MD simulations where proteins are simulated in isolation. Living cells are "crowded" because macromolecules comprise ∼30% of their molecular weight. Recently, the effects of crowded cellular environments on protein stability have been revealed through in-cell NMR spectroscopy. To measure the performance of the "K computer", we performed all-atom classical molecular dynamics simulations of two systems: target proteins in a solvent, and target proteins in an environment of molecular crowders that mimic the conditions of a living cell. Using the full system, we achieved 4.4 PFLOPS during a 520 million-atom simulation with a cutoff of 28 Å. Furthermore, we discuss the performance and scaling of fast multipole methods for molecular dynamics simulations on the "K computer", as well as comparisons with Ewald summation methods. © 2014 Elsevier B.V. All rights reserved.

  2. Petascale molecular dynamics simulation using the fast multipole method on K computer

    KAUST Repository

    Ohno, Yousuke

    2014-10-01

    In this paper, we report all-atom simulations of molecular crowding: a result from the full node simulation on the "K computer", which is a 10-PFLOPS supercomputer in Japan. The capability of this machine enables us to perform simulations of crowded cellular environments, which are more realistic compared to conventional MD simulations where proteins are simulated in isolation. Living cells are "crowded" because macromolecules comprise ∼30% of their molecular weight. Recently, the effects of crowded cellular environments on protein stability have been revealed through in-cell NMR spectroscopy. To measure the performance of the "K computer", we performed all-atom classical molecular dynamics simulations of two systems: target proteins in a solvent, and target proteins in an environment of molecular crowders that mimic the conditions of a living cell. Using the full system, we achieved 4.4 PFLOPS during a 520 million-atom simulation with a cutoff of 28 Å. Furthermore, we discuss the performance and scaling of fast multipole methods for molecular dynamics simulations on the "K computer", as well as comparisons with Ewald summation methods. © 2014 Elsevier B.V. All rights reserved.

  3. Numerical simulation of electromagnetic waves in Schwarzschild space-time by finite difference time domain method and Green function method

    Science.gov (United States)

    Jia, Shouqing; La, Dongsheng; Ma, Xuelian

    2018-04-01

    The finite difference time domain (FDTD) algorithm and Green function algorithm are implemented into the numerical simulation of electromagnetic waves in Schwarzschild space-time. FDTD method in curved space-time is developed by filling the flat space-time with an equivalent medium. Green function in curved space-time is obtained by solving transport equations. Simulation results validate both the FDTD code and Green function code. The methods developed in this paper offer a tool to solve electromagnetic scattering problems.
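
    For orientation, a minimal one-dimensional FDTD update loop in flat space-time is sketched below; the spatially varying permittivity profile stands in for the "equivalent medium" idea mentioned above and is purely illustrative.

```python
# Minimal 1-D FDTD sketch in normalized units (mu0 = eps0 = 1).
import numpy as np

nx, nt = 400, 800
c, dx = 1.0, 1.0
dt = 0.5 * dx / c                     # Courant-stable time step
eps_r = np.ones(nx)
eps_r[250:] = 4.0                     # stand-in "equivalent medium" region

Ez = np.zeros(nx)
Hy = np.zeros(nx - 1)

for n in range(nt):
    Hy += (dt / dx) * (Ez[1:] - Ez[:-1])                          # Faraday update
    Ez[1:-1] += (dt / (dx * eps_r[1:-1])) * (Hy[1:] - Hy[:-1])    # Ampere update
    Ez[50] += np.exp(-((n - 60) / 20.0) ** 2)                     # soft Gaussian source

print("peak |Ez| after propagation:", np.abs(Ez).max())
```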

  4. Meshless Method for Simulation of Compressible Flow

    Science.gov (United States)

    Nabizadeh Shahrebabak, Ebrahim

    In the present age, rapid development in computing technology and high-speed supercomputers has made numerical analysis and computational simulation more practical than ever before for large and complex cases. Numerical simulations have also become an essential means for analyzing engineering problems and cases where experimental analysis is not practical. Many sophisticated and accurate numerical schemes exist to perform these simulations. The finite difference method (FDM) has been used to solve differential equation systems for decades. Additional numerical methods based on finite volume and finite element techniques are widely used in solving problems with complex geometry. All of these methods are mesh-based techniques. Mesh generation is an essential preprocessing step to discretize the computational domain for these conventional methods. However, when dealing with complex geometries, these conventional mesh-based techniques can become troublesome, difficult to implement, and prone to inaccuracies. In this study, a more robust, yet simple numerical approach is used to simulate problems in an easier manner, even for complex cases. The meshless, or meshfree, method is one such development that has become the focus of much research in recent years. The biggest advantage of meshfree methods is that they circumvent mesh generation. Many algorithms have now been developed to help make this method more popular and understandable. These algorithms have been employed over a wide range of problems in computational analysis with various levels of success. Since there is no connectivity between the nodes in this method, the challenge is considerable. The most fundamental issue is lack of conservation, which can be a source of unpredictable errors in the solution process. This problem is particularly evident in the presence of steep gradient regions and discontinuities, such as shocks that frequently occur in high-speed compressible flow

  5. Quantum control with NMR methods: Application to quantum simulations

    International Nuclear Information System (INIS)

    Negrevergne, Camille

    2002-01-01

    Manipulating information according to quantum laws allows improvements in the efficiency with which we treat certain problems. Liquid-state Nuclear Magnetic Resonance methods allow us to initialize, manipulate and read the quantum state of a system of coupled spins. These methods have been used to realize a small experimental Quantum Information Processor (QIP) able to process information through around one hundred elementary operations. One of the main themes of this work was to design, optimize and validate reliable RF-pulse sequences used to 'program' the QIP. Such techniques have been used to run a quantum simulation algorithm for fermionic systems. Some experimental results have been obtained on the determination of eigenenergies and correlation functions for a toy problem consisting of fermions on a lattice, showing an experimental proof of principle for such quantum simulations. (author) [fr

  6. Efficient method for transport simulations in quantum cascade lasers

    Directory of Open Access Journals (Sweden)

    Maczka Mariusz

    2017-01-01

    Full Text Available An efficient method for simulating quantum transport in quantum cascade lasers is presented. The calculations are performed within a simple approximation inspired by Büttiker probes and based on a finite model for semiconductor superlattices. The formalism of non-equilibrium Green's functions is applied to determine selected transport parameters in a typical structure of a terahertz laser. Results were compared with those obtained for an infinite model as well as with other methods described in the literature.

  7. Cable Tension Preslack Method Construction Simulation and Engineering Application for a Prestressed Suspended Dome

    Directory of Open Access Journals (Sweden)

    Xuechun Liu

    2015-01-01

    Full Text Available To address the shortcomings of traditional construction simulation methods for suspended dome structures, the cable tension preslack method, based on friction elements, node coupling technology, and local cooling, is proposed in this paper; it is suitable for whole-process construction simulation of a suspended dome. This method was used to simulate the construction process of a large-span suspended dome case study. The effects of joint location deviation, construction temperature, temporary construction supports, and friction of the cable-support joints on the simulation results were analyzed. The cable tension preslack method was validated by comparing the data from the construction simulation with measured results, providing the control cable tension and the control standards for construction acceptance. The analysis demonstrated that the position deviation of the joints has little effect on the control value, while the construction temperature and the friction of the cable-support joints significantly affect the control cable tension. The construction temperature, the temporary construction supports, and the friction of the cable-support joints all affect the internal force and deflection in the tensioned state but do not significantly affect the structural bearing characteristics under load. The forces should be primarily controlled in tensioned construction, while the deflections are controlled secondarily.

  8. Constraint methods that accelerate free-energy simulations of biomolecules.

    Science.gov (United States)

    Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.

  9. How does the rigid-lid assumption affect LES simulation results at high Reynolds number flows?

    Science.gov (United States)

    Khosronejad, Ali; Farhadzadeh, Ali; SBU Collaboration

    2017-11-01

    This research is motivated by the work of Kara et al., JHE, 2015. They employed LES to model flow around a model abutment at a Re number of 27,000. They showed that first-order turbulence characteristics obtained with the rigid-lid (RL) assumption compare fairly well with those of the level-set (LS) method. Concerning the second-order statistics, however, their simulation results showed a significant dependence on the method used to describe the free surface. This finding can have important implications for open channel flow modeling. The Reynolds number for typical open channel flows, however, could be much larger than that of Kara et al.'s test case. Herein, we replicate the reported study by augmenting the geometric and hydraulic scales to reach a Re number one order of magnitude larger (about 200,000). The Virtual Flow Simulator (VFS-Geophysics) model in its LES mode is used to simulate the test case using both RL and LS methods. The computational results are validated using measured flow and free-surface data from our laboratory experiments. Our goal is to investigate the effects of the RL assumption on both first-order and second-order statistics at the high Reynolds numbers that occur in natural waterways. Acknowledgment: Computational resources are provided by the Center of Excellence in Wireless & Information Technology (CEWIT) of Stony Brook University.

  10. Hybrid vortex simulations of wind turbines using a three-dimensional viscous-inviscid panel method

    DEFF Research Database (Denmark)

    Ramos García, Néstor; Hejlesen, Mads Mølholm; Sørensen, Jens Nørkær

    2017-01-01

    A hybrid filament-mesh vortex method is proposed and validated to predict the aerodynamic performance of wind turbine rotors and to simulate the resulting wake. Its novelty consists of using a hybrid method to accurately simulate the wake downstream of the wind turbine while reducing ... a direct calculation, whereas the contribution from the large downstream wake is calculated using a mesh-based method. The hybrid method is first validated in detail against the well-known MEXICO experiment, using the direct filament method as a comparison. The second part of the validation includes a study ...

  11. A Comparative Study on the Refueling Simulation Method for a CANDU Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Do, Quang Binh; Choi, Hang Bok; Roh, Gyu Hong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2006-07-01

    The Canada deuterium uranium (CANDU) reactor calculation is typically performed by the RFSP code to obtain the power distribution upon refueling. In order to assess the equilibrium behavior of the CANDU reactor, a few methods have been suggested for the selection of the refueling channel. For example, an automatic refueling channel selection method (AUTOREFUEL) and a deterministic method (GENOVA) were developed, based on reactor operation experience and on generalized perturbation theory, respectively. Both programs were designed to keep the zone controller unit (ZCU) water level within a reasonable range during a continuous refueling simulation. However, a global optimization of the refueling simulation, which includes constraints on the discharge burn-up, maximum channel power (MCP), maximum bundle power (MBP), channel power peaking factor (CPPF) and the ZCU water level, was not achieved. In this study, an evolutionary algorithm, a hybrid method combining a genetic algorithm, an elitism strategy and heuristic rules, has been developed for the multi-cycle, multi-objective optimization of the refueling simulation of the CANDU reactor. This paper presents the optimization model of the genetic algorithm and compares the results with those obtained by other simulation methods.
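
    A schematic, mutation-only genetic algorithm with elitism is sketched below to illustrate the kind of search loop such a hybrid method builds on; the single-integer "channel" genome and the fitness function are invented and bear no relation to the real CANDU refueling objective or constraints.

```python
# Schematic genetic algorithm with elitism on a made-up objective.
import random

random.seed(4)
N_CHANNELS, POP, GENS, ELITE = 40, 30, 60, 2

def fitness(channel):
    """Hypothetical score: prefer channels near index 17 (illustrative only)."""
    return -abs(channel - 17)

def mutate(channel):
    return max(0, min(N_CHANNELS - 1, channel + random.choice([-2, -1, 1, 2])))

pop = [random.randrange(N_CHANNELS) for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:ELITE]                           # elitism: keep the best as-is
    parents = pop[: POP // 2]
    children = [mutate(random.choice(parents)) for _ in range(POP - ELITE)]
    pop = elite + children

print("best channel found:", max(pop, key=fitness))
```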

  12. Finite element method for one-dimensional rill erosion simulation on a curved slope

    Directory of Open Access Journals (Sweden)

    Lijuan Yan

    2015-03-01

    Full Text Available Rill erosion models are important to hillslope soil erosion prediction and to land use planning. The development and use of rill erosion models have attracted increasing attention. The purpose of this research was to develop mathematical models with computer simulation procedures to simulate and predict rill erosion. The finite element method is known as an efficient tool in many applications other than rill soil erosion. In this study, the hydrodynamic and sediment continuity model equations for a rill erosion system were solved by the Galerkin finite element method and Visual C++ procedures. The simulated results are compared with spatially and temporally measured rill erosion data under different conditions. The results indicate that the one-dimensional linear finite element method produced excellent predictions of rill erosion processes. Therefore, this study supplies a tool for further development of a dynamic soil erosion prediction model.

  13. Direct Numerical Simulation of the Rayleigh-Taylor Instability with the Spectral Element Method

    International Nuclear Information System (INIS)

    Xu, Zhang; Duo-Wang, Tan

    2009-01-01

    A novel method is proposed to simulate Rayleigh-Taylor instabilities using a specially-developed unsteady three-dimensional high-order spectral element method code. The numerical model used consists of Navier-Stokes equations and a transport-diffusive equation. The code is first validated with the results of linear stability perturbation theory. Then several characteristics of the Rayleigh-Taylor instabilities are studied using this three-dimensional unsteady code, including instantaneous turbulent structures and statistical turbulent mixing heights under different initial wave numbers. These results indicate that turbulent structures of Rayleigh-Taylor instabilities are strongly dependent on the initial conditions. The results also suggest that a high-order numerical method should provide the capability of simulating small scale fluctuations of Rayleigh-Taylor instabilities of turbulent flows. (fundamental areas of phenomenology (including applications))

  14. Multiscale optical simulation settings: challenging applications handled with an iterative ray-tracing FDTD interface method.

    Science.gov (United States)

    Leiner, Claude; Nemitz, Wolfgang; Schweitzer, Susanne; Kuna, Ladislav; Wenzl, Franz P; Hartmann, Paul; Satzinger, Valentin; Sommer, Christian

    2016-03-20

    We show that with an appropriate combination of two optical simulation techniques, classical ray-tracing and the finite difference time domain method, an optical device containing multiple diffractive and refractive optical elements can be accurately simulated in an iterative simulation approach. We compare the simulation results with experimental measurements of the device to discuss the applicability and accuracy of our iterative simulation procedure.

  15. Amyloid oligomer structure characterization from simulations: A general method

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Phuong H., E-mail: phuong.nguyen@ibpc.fr [Laboratoire de Biochimie Théorique, UPR 9080, CNRS Université Denis Diderot, Sorbonne Paris Cité IBPC, 13 rue Pierre et Marie Curie, 75005 Paris (France); Li, Mai Suan [Institute of Physics, Polish Academy of Sciences, Al. Lotnikow 32/46, 02-668 Warsaw (Poland); Derreumaux, Philippe, E-mail: philippe.derreumaux@ibpc.fr [Laboratoire de Biochimie Théorique, UPR 9080, CNRS Université Denis Diderot, Sorbonne Paris Cité IBPC, 13 rue Pierre et Marie Curie, 75005 Paris (France); Institut Universitaire de France, 103 Bvd Saint-Germain, 75005 Paris (France)

    2014-03-07

    Amyloid oligomers and plaques are composed of multiple chemically identical proteins. Therefore, one of the first fundamental problems in the characterization of structures from simulations is the treatment of the degeneracy, i.e., the permutation of the molecules. Second, the intramolecular and intermolecular degrees of freedom of the various molecules must be taken into account. Currently, the well-known dihedral principal component analysis method only considers the intramolecular degrees of freedom, and other methods employing collective variables can only describe intermolecular degrees of freedom at the global level. With this in mind, we propose a general method that identifies all the structures accurately. The basic idea is that the intramolecular and intermolecular states are described in terms of combinations of single-molecule and double-molecule states, respectively, and the overall structures of oligomers are the product basis of the intramolecular and intermolecular states. This way, the degeneracy is automatically avoided. The method is illustrated on the conformational ensemble of the tetramer of the Alzheimer's peptide Aβ(9-40), resulting from two atomistic molecular dynamics simulations in explicit solvent, each of 200 ns, starting from two distinct structures.

  16. Hybrid Method Simulation of Slender Marine Structures

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye

    The present thesis consists of an extended summary and five appended papers concerning various aspects of the implementation of a hybrid method which combines classical simulation methods and artificial neural networks. The thesis covers three main topics. Common for all these topics ... only recognize patterns similar to those comprised in the data used to train the network. Fatigue life evaluation of marine structures often considers simulations of more than a hundred different sea states. Hence, in order for this method to be useful, the training data must be arranged so ... that a single neural network can cover all relevant sea states. The applicability and performance of the present hybrid method is demonstrated on a numerical model of a mooring line attached to a floating offshore platform. The second part of the thesis demonstrates how sequential neural networks can be used ...

  17. Upscaled Lattice Boltzmann Method for Simulations of Flows in Heterogeneous Porous Media

    Directory of Open Access Journals (Sweden)

    Jun Li

    2017-01-01

    Full Text Available An upscaled Lattice Boltzmann Method (LBM) for flow simulations in heterogeneous porous media at the Darcy scale is proposed in this paper. In the Darcy-scale simulations, the Shan-Chen force model is used to simplify the algorithm. The proposed upscaled LBM uses coarser grids to represent the average effects of the fine-grid simulations. In the upscaled LBM, each coarse grid represents a subdomain of the fine-grid discretization and the effective permeability with the reduced-order models is proposed as we coarsen the grid. The effective permeability is computed using solutions of local problems (e.g., by performing local LBM simulations on the fine grids using the original permeability distribution) and used on the coarse grids in the upscaled simulations. The upscaled LBM that can reduce the computational cost of existing LBM and transfer the information between different scales is implemented. The results of coarse-grid, reduced-order, simulations agree very well with averaged results obtained using a fine grid.

  18. Upscaled Lattice Boltzmann Method for Simulations of Flows in Heterogeneous Porous Media

    KAUST Repository

    Li, Jun

    2017-02-16

    An upscaled Lattice Boltzmann Method (LBM) for flow simulations in heterogeneous porous media at the Darcy scale is proposed in this paper. In the Darcy-scale simulations, the Shan-Chen force model is used to simplify the algorithm. The proposed upscaled LBM uses coarser grids to represent the average effects of the fine-grid simulations. In the upscaled LBM, each coarse grid represents a subdomain of the fine-grid discretization and the effective permeability with the reduced-order models is proposed as we coarsen the grid. The effective permeability is computed using solutions of local problems (e.g., by performing local LBM simulations on the fine grids using the original permeability distribution) and used on the coarse grids in the upscaled simulations. The upscaled LBM that can reduce the computational cost of existing LBM and transfer the information between different scales is implemented. The results of coarse-grid, reduced-order, simulations agree very well with averaged results obtained using a fine grid.
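
    The upscaling bookkeeping can be sketched as follows; note that the paper computes each coarse block's effective permeability from a local fine-grid LBM solve, whereas the placeholder below uses a crude harmonic/arithmetic average purely to keep the example short, so the numbers are not equivalent to the authors' method.

```python
# Sketch of coarsening a fine-grid permeability field block by block.
import numpy as np

rng = np.random.default_rng(5)
fine = np.exp(rng.normal(size=(64, 64)))      # heterogeneous fine-grid permeability
block = 8                                     # each coarse cell = 8x8 fine cells

def effective_k(sub):
    """Placeholder for a local fine-grid solve on one coarse block:
    harmonic mean along the flow direction, arithmetic mean across it."""
    k_series = 1.0 / np.mean(1.0 / sub, axis=1)   # harmonic mean along x
    return np.mean(k_series)                      # arithmetic mean across rows

nbx = fine.shape[0] // block
coarse = np.array([[effective_k(fine[i*block:(i+1)*block, j*block:(j+1)*block])
                    for j in range(nbx)] for i in range(nbx)])

print("fine grid:", fine.shape, "-> coarse grid:", coarse.shape)
```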

  19. Development of modelling method selection tool for health services management: from problem structuring methods to modelling and simulation methods.

    Science.gov (United States)

    Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P

    2011-05-19

    There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). The characterisation is presented in matrix forms to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce let alone money and time. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to be obtained from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on methods comparison and selection.

  20. Validation of the intrinsic spatial efficiency method for non cylindrical homogeneous sources using MC simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz-Ramírez, Pablo, E-mail: rapeitor@ug.uchile.cl; Ruiz, Andrés [Departamento de Física, Facultad de Ciencias, Universidad de Chile (Chile)

    2016-07-07

    Monte Carlo simulation of gamma spectroscopy systems is a common practice these days. The most popular software packages for this purpose are the MCNP and Geant4 codes. The intrinsic spatial efficiency method is a general and absolute method to determine the absolute efficiency of a spectroscopy system for any extended source, but it had only been demonstrated experimentally for cylindrical sources. Due to the difficulty of preparing sources of arbitrary shape, the simplest way to do this is by simulating the spectroscopy system and the source. In this work we present the validation of the intrinsic spatial efficiency method for sources with different geometries and for photons with an energy of 661.65 keV. In the simulation the matrix effects (the self-attenuation effect) are not considered; therefore, these results are only preliminary. The MC simulation is carried out using the FLUKA code, and the absolute efficiency of the detector is determined using two methods: the statistical count of the Full Energy Peak (FEP) area (the traditional method) and the intrinsic spatial efficiency method. The obtained results show total agreement between the absolute efficiencies determined by the traditional method and the intrinsic spatial efficiency method. The relative bias is less than 1% in all cases.
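
    The "traditional" absolute-efficiency bookkeeping used as the reference in the comparison above can be sketched as net full-energy-peak counts divided by the number of emitted photons; the spectrum below is synthetic and the resolution and continuum are invented.

```python
# Absolute FEP efficiency from a synthetic spectrum: net peak counts / emitted photons.
import numpy as np

rng = np.random.default_rng(6)
n_emitted = 1_000_000
peak_energy, fwhm = 661.65, 1.5                      # keV, illustrative resolution

# Synthetic detector spectrum: a Gaussian FEP on a flat Compton-like continuum
fep_counts = 12_000
energies = np.concatenate([
    rng.normal(peak_energy, fwhm / 2.355, fep_counts),
    rng.uniform(50, 700, 80_000),
])

window = (energies > peak_energy - 3) & (energies < peak_energy + 3)
continuum_per_keV = np.sum((energies > 620) & (energies < 640)) / 20.0
net_fep = np.sum(window) - continuum_per_keV * 6.0   # subtract continuum under peak

print(f"absolute FEP efficiency ~ {net_fep / n_emitted:.2e}")
```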

  1. Concentration gradient driven molecular dynamics: a new method for simulations of membrane permeation and separation.

    Science.gov (United States)

    Ozcan, Aydin; Perego, Claudio; Salvalaglio, Matteo; Parrinello, Michele; Yazaydin, Ozgur

    2017-05-01

    In this study, we introduce a new non-equilibrium molecular dynamics simulation method to perform simulations of concentration driven membrane permeation processes. The methodology is based on the application of a non-conservative bias force controlling the concentration of species at the inlet and outlet of a membrane. We demonstrate our method for pure methane, ethane and ethylene permeation and for ethane/ethylene separation through a flexible ZIF-8 membrane. Results show that a stationary concentration gradient is maintained across the membrane, realistically simulating an out-of-equilibrium diffusive process, and the computed permeabilities and selectivity are in good agreement with experimental results.

  2. DRK methods for time-domain oscillator simulation

    NARCIS (Netherlands)

    Sevat, M.F.; Houben, S.H.M.J.; Maten, ter E.J.W.; Di Bucchianico, A.; Mattheij, R.M.M.; Peletier, M.A.

    2006-01-01

    This paper presents a new Runge-Kutta type integration method that is well-suited for time-domain simulation of oscillators. A unique property of the new method is that its damping characteristics can be controlled by a continuous parameter.

  3. Simulation of tandem hydrofoils by finite volume method with moving grid system; Henkei koshi wo tsukatta yugen taisekiho ni yoru tandem suichuyoku no simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kawashima, H. [Ship Research Inst., Tokyo (Japan); Miyata, H. [The University of Tokyo, Tokyo (Japan). Faculty of Engineering

    1996-12-31

    With the objective of clarifying the applicability of time-advancing computational fluid dynamics (CFD) simulation using a finite volume method with a moving grid system, a simulation was performed of the motion of a ship with hydrofoils, including its control system. The simulation consists of a method that couples moving grid system technology, an equation of motion, and the control system. Complex interactions between the wings and with the free surface can be taken into account automatically by deriving the fluid forces directly from the flow field using CFD. In addition, two-dimensional flows around tandem hydrofoils were calculated to solve the motion problem within a vertical plane. As a result, the following conclusions were obtained: the finite volume method using a dynamic moving grid system was applied to problems of unsteady tandem hydrofoils, showing its usefulness; the method that couples CFD with the equation of motion was applied to the control problem of the tandem hydrofoils, showing the possibility of a new technology for simulating motions; and a simulation that considers wing interference effects such as wave generation, shed vortices, and associated flows was shown to be useful for understanding the characteristics of the tandem hydrofoils. 13 refs., 14 figs.

  4. Numerical simulation of stratified shear flow using a higher order Taylor series expansion method

    Energy Technology Data Exchange (ETDEWEB)

    Iwashige, Kengo; Ikeda, Takashi [Hitachi, Ltd. (Japan)

    1995-09-01

    A higher order Taylor series expansion method is applied to the two-dimensional numerical simulation of stratified shear flow. In the present study, a central-difference-like scheme is adopted for even expansion orders and an upwind-difference-like scheme for odd orders, and the expansion order is variable. To evaluate the effects of the expansion order upon the numerical results, a stratified shear flow test in a rectangular channel (Reynolds number = 1.7×10⁴) is carried out, and the numerical velocity and temperature fields are compared with experimental results measured by laser Doppler velocimetry and thermocouples. The results confirm that the higher- and odd-order methods can simulate mean velocity distributions, root-mean-square velocity fluctuations, Reynolds stress, temperature distributions, and root-mean-square temperature fluctuations.

  5. Simulation Results of Double Forward Converter

    Directory of Open Access Journals (Sweden)

    P. Vijaya KUMAR

    2009-12-01

    Full Text Available This work aims to find a better forward converter for DC to DC conversion. Simulation of a double forward converter in an SMPS system is discussed in this paper. A forward converter with RCD snubber to synchronous rectifier and/or to current doubler is also discussed. The evolution of the forward converter is first reviewed in a tutorial fashion. Performance parameters are discussed, including operating principle, voltage conversion ratio, efficiency, device stress, small-signal dynamics, noise and EMI. The circuit operation and performance characteristics of the forward converter with RCD snubber and of the double forward converter are described, and the simulation results are presented.

  6. Numerical simulation methods for electron and ion optics

    International Nuclear Information System (INIS)

    Munro, Eric

    2011-01-01

    This paper summarizes currently used techniques for simulation and computer-aided design in electron and ion beam optics. Topics covered include: field computation, methods for computing optical properties (including Paraxial Rays and Aberration Integrals, Differential Algebra and Direct Ray Tracing), simulation of Coulomb interactions, space charge effects in electron and ion sources, tolerancing, wave optical simulations and optimization. Simulation examples are presented for multipole aberration correctors, Wien filter monochromators, imaging energy filters, magnetic prisms, general curved axis systems and electron mirrors.

  7. Simulation methods for nuclear production scheduling

    International Nuclear Information System (INIS)

    Miles, W.T.; Markel, L.C.

    1975-01-01

    Recent developments and applications of simulation methods for use in nuclear production scheduling and fuel management are reviewed. The unique characteristics of the nuclear fuel cycle as they relate to the overall optimization of a mixed nuclear-fossil system in both the short-and mid-range time frame are described. Emphasis is placed on the various formulations and approaches to the mid-range planning problem, whose objective is the determination of an optimal (least cost) system operation strategy over a multi-year planning horizon. The decomposition of the mid-range problem into power system simulation, reactor core simulation and nuclear fuel management optimization, and system integration models is discussed. Present utility practices, requirements, and research trends are described. 37 references

  8. A numerical simulation method and analysis of a complete thermoacoustic-Stirling engine.

    Science.gov (United States)

    Ling, Hong; Luo, Ercang; Dai, Wei

    2006-12-22

    Thermoacoustic prime movers can generate pressure oscillations without any moving parts through the self-excited thermoacoustic effect. The details of the numerical simulation methodology for thermoacoustic engines are presented in the paper. First, a four-port network method is used to build the transcendental equation of complex frequency as a criterion to judge whether the temperature distribution of the whole thermoacoustic system is correct for the case with a given heating power. Then, the numerical simulation of a thermoacoustic-Stirling heat engine is carried out. It is shown that the numerical simulation code runs robustly and outputs the quantities of interest. Finally, the calculated results are compared with experiments on the thermoacoustic-Stirling heat engine (TASHE). The comparison shows that the numerical simulation agrees with the experimental results with acceptable accuracy.

  9. Computational fluid dynamics simulations and validations of results

    CSIR Research Space (South Africa)

    Sitek, MA

    2013-09-01

    Full Text Available The influence of wind flow on a high-rise building is analyzed. The research covers full-scale tests, wind-tunnel experiments and numerical simulations. In the present paper the computational model used in the simulations is described and the results, which were...

  10. Electrostatic plasma simulation by Particle-In-Cell method using ANACONDA package

    International Nuclear Information System (INIS)

    Blandón, J S; Grisales, J P; Riascos, H

    2017-01-01

    Electrostatic plasma is the most representative and basic case in the field of plasma physics. One of its main characteristics is its ideal behavior, since it is assumed to be in a thermal equilibrium state. Through this assumption, it is possible to study various complex phenomena such as plasma oscillations, waves, instabilities or damping. Likewise, computational simulation of this specific plasma is the first step towards analyzing the physical mechanisms of plasmas that are not at equilibrium and hence not ideal. The Particle-In-Cell (PIC) method is widely used for such cases because of its precision. This work presents a PIC method implementation in Python, using ANACONDA packages, to simulate electrostatic plasma. The code has been validated by comparing with previous theoretical results for three specific phenomena in cold plasmas: oscillations, the two-stream instability (TSI) and Landau damping (LD). Finally, parameters and results are discussed. (paper)
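
    A minimal one-dimensional electrostatic PIC loop (two-stream setup, normalized units) is sketched below as an illustration of the method; it is not the authors' code and all numerical parameters are arbitrary.

```python
# Minimal 1-D electrostatic PIC sketch (two-stream setup, normalized units).
import numpy as np

np.random.seed(7)
L, Ng, Np, dt, steps = 2 * np.pi, 64, 20_000, 0.1, 300
qm = -1.0                                   # electron charge-to-mass ratio
q = -L / Np                                 # macro-particle charge so that n0 = 1

x = np.random.uniform(0, L, Np)
v = np.where(np.arange(Np) % 2 == 0, 1.0, -1.0) + 0.01 * np.random.randn(Np)
dx = L / Ng
k = 2 * np.pi * np.fft.fftfreq(Ng, d=dx)
k[0] = 1.0                                  # avoid division by zero (mean mode)

for _ in range(steps):
    # charge deposition (cloud-in-cell) plus neutralizing ion background
    g = x / dx
    i0 = np.floor(g).astype(int) % Ng
    w1 = g - np.floor(g)
    rho = np.bincount(i0, (1 - w1) * q, Ng) + np.bincount((i0 + 1) % Ng, w1 * q, Ng)
    rho = rho / dx + 1.0

    # field solve in Fourier space: dE/dx = rho  ->  E_k = rho_k / (i k)
    E_k = np.fft.fft(rho) / (1j * k)
    E_k[0] = 0.0
    E = np.real(np.fft.ifft(E_k))

    # gather field to particles and push (leapfrog)
    Ep = (1 - w1) * E[i0] + w1 * E[(i0 + 1) % Ng]
    v += qm * Ep * dt
    x = (x + v * dt) % L

print("field energy (arb. units):", 0.5 * np.sum(E ** 2) * dx)
```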

  11. MODFLOW equipped with a new method for the accurate simulation of axisymmetric flow

    Science.gov (United States)

    Samani, N.; Kompani-Zare, M.; Barry, D. A.

    2004-01-01

    Axisymmetric flow to a well is an important topic of groundwater hydraulics, the simulation of which depends on accurate computation of head gradients. Groundwater numerical models with conventional rectilinear grid geometry such as MODFLOW (in contrast to analytical models) generally have not been used to simulate aquifer test results at a pumping well because they are not designed or expected to closely simulate the head gradient near the well. A scaling method is proposed based on mapping the governing flow equation from cylindrical to Cartesian coordinates, and vice versa. A set of relationships and scales is derived to implement the conversion. The proposed scaling method is then embedded in MODFLOW 2000. To verify the accuracy of the method steady and unsteady flows in confined and unconfined aquifers with fully or partially penetrating pumping wells are simulated and compared with the corresponding analytical solutions. In all cases a high degree of accuracy is achieved.

  12. Milestone M4900: Simulant Mixing Analytical Results

    Energy Technology Data Exchange (ETDEWEB)

    Kaplan, D.I.

    2001-07-26

    This report addresses Milestone M4900, ''Simulant Mixing Sample Analysis Results,'' and contains the data generated during the ''Mixing of Process Heels, Process Solutions, and Recycle Streams: Small-Scale Simulant'' task. The Task Technical and Quality Assurance Plan for this task is BNF-003-98-0079A. A report with a narrative description and discussion of the data will be issued separately.

  13. Numerical simulation of pseudoelastic shape memory alloys using the large time increment method

    Science.gov (United States)

    Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad

    2017-04-01

    The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process presents little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must satisfy the conditions of static and kinematic admissibility and consistency simultaneously after several iterations. The 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation with the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted with those obtained using step-by-step incremental integration.

  14. Molecular Structural Transformation of 2:1 Clay Minerals by a Constant-Pressure Molecular Dynamics Simulation Method

    International Nuclear Information System (INIS)

    Wang, J.; Gutierre, M.S.

    2010-01-01

    This paper presents results of a molecular dynamics simulation study of dehydrated 2:1 clay minerals using the Parrinello-Rahman constant-pressure molecular dynamics method. The method is capable of simulating a system under the most general applied stress conditions by considering the changes of MD cell size and shape. Given the advantage of the method, it is the major goal of the paper to investigate the influence of imposed cell boundary conditions on the molecular structural transformation of 2:1 clay minerals under different normal pressures. Simulation results show that the degrees of freedom of the simulation cell (i.e., whether the cell size or shape change is allowed) determines the final equilibrated crystal structure of clay minerals. Both the MD method and the static method have successfully revealed unforeseen structural transformations of clay minerals upon relaxation under different normal pressures. It is found that large shear distortions of clay minerals occur when full allowance is given to the cell size and shape change. A complete elimination of the interlayer spacing is observed in a static simulation. However, when only the cell size change is allowed, interlayer spacing is retained, but large internal shear stresses also exist.

  15. A general method for closed-loop inverse simulation of helicopter maneuver flight

    Directory of Open Access Journals (Sweden)

    Wei WU

    2017-12-01

    Full Text Available Maneuverability is a key factor in determining whether a helicopter can complete certain flight missions successfully. Inverse simulation is commonly used to calculate the pilot controls of a helicopter required to complete a certain kind of maneuver flight and to assess its maneuverability. A general method for the inverse simulation of maneuver flight for helicopters with the flight control system online is developed in this paper. A general mathematical describing function is established to provide mathematical descriptions of different kinds of maneuvers. A comprehensive control solver based on optimal linear quadratic regulator theory is developed to calculate the pilot controls of different maneuvers. The coupling problem between pilot controls and flight control system outputs is solved by incorporating the flight control system model into the control solver. Inverse simulation of three different kinds of maneuvers with different agility requirements defined in the ADS-33E-PRF is implemented based on the developed method for a UH-60 helicopter. The results show that the method developed in this paper can solve the closed-loop inverse simulation problem of helicopter maneuver flight with high reliability and efficiency. Keywords: Closed-loop, Flying quality, Helicopters, Inverse simulation, Maneuver flight
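
    The LQR piece of such a control solver can be sketched as follows for a toy second-order attitude model tracking a smooth maneuver profile; the model matrices, weights and reference are invented and are not the UH-60 model or an ADS-33 maneuver.

```python
# LQR gain for a toy attitude model, tracking a prescribed maneuver profile.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, -0.5]])          # toy attitude/rate dynamics
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])             # penalize tracking error and rate
R = np.array([[0.1]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P       # optimal gain, u = -K (x - x_ref)

dt, T = 0.01, 10.0
t = np.arange(0.0, T, dt)
ref = np.deg2rad(20.0) * 0.5 * (1 - np.cos(2 * np.pi * t / T))  # smooth maneuver

x = np.zeros(2)
history = []
for r in ref:
    u = float(-K @ (x - np.array([r, 0.0])))
    x = x + dt * (A @ x + B.flatten() * u)       # forward-Euler integration
    history.append(x[0])

print("final tracking error [deg]:", np.rad2deg(history[-1] - ref[-1]))
```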

  16. A mixed finite element method for particle simulation in lasertron

    International Nuclear Information System (INIS)

    Le Meur, G.

    1987-03-01

    A particle simulation code is being developed with the aim of treating the motion of charged particles in electromagnetic devices such as the Lasertron. The paper describes the use of mixed finite element methods in computing the field components, without deriving them from scalar or vector potentials. Graphical results are shown.

  17. A mixed finite element method for particle simulation in Lasertron

    International Nuclear Information System (INIS)

    Le Meur, G.

    1987-01-01

    A particle simulation code is being developed with the aim of treating the motion of charged particles in electromagnetic devices such as the Lasertron. The paper describes the use of mixed finite element methods in computing the field components, without deriving them from scalar or vector potentials. Graphical results are shown.

  18. Dynamical simulation of heavy ion collisions; VUU and QMD method

    International Nuclear Information System (INIS)

    Niita, Koji

    1992-01-01

    We review two simulation methods based on the Vlasov-Uehling-Uhlenbeck (VUU) equation and Quantum Molecular Dynamics (QMD), which are the most widely accepted theoretical framework for the description of intermediate-energy heavy-ion reactions. We show some results of the calculations and compare them with the experimental data. (author)

  19. A tool for simulating parallel branch-and-bound methods

    Science.gov (United States)

    Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail

    2016-01-01

    The Branch-and-Bound method is known as one of the most powerful but very resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution. Therefore the design and study of load balancing algorithms is a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, sizes of the search tree, and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The process of resolution of the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.
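
    A toy version of the simulator idea is sketched below: the branch-and-bound search is replaced by a random branching process, node expansions are handed out to simulated workers whose logical time accumulates a per-node cost, and two trivial assignment strategies are compared; all parameters are invented.

```python
# Toy simulator: a random branching process stands in for the B&B search tree,
# and per-worker "logical time" measures load imbalance for two strategies.
import random

P_BRANCH, NODE_COST, WORKERS, FULL_DEPTH = 0.3, 1.0, 16, 6

def simulate(strategy):
    random.seed(8)                      # same synthetic tree for both runs
    clocks = [0.0] * WORKERS            # logical time per worker
    frontier = [(0, 0)]                 # (worker, depth), root on worker 0
    while frontier:
        w, d = frontier.pop()
        clocks[w] += NODE_COST
        # always branch near the root, then branch with probability P_BRANCH
        children = 2 if d < FULL_DEPTH or random.random() < P_BRANCH else 0
        for _ in range(children):
            frontier.append((strategy(clocks) if strategy else w, d + 1))
    return max(clocks) / (sum(clocks) / WORKERS)   # imbalance: makespan / mean load

least_loaded = lambda clocks: min(range(WORKERS), key=lambda i: clocks[i])
print("no balancing :", round(simulate(None), 2))
print("least loaded :", round(simulate(least_loaded), 2))
```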

  20. Real time simulation method for fast breeder reactors dynamics

    International Nuclear Information System (INIS)

    Miki, Tetsushi; Mineo, Yoshiyuki; Ogino, Takamichi; Kishida, Koji; Furuichi, Kenji.

    1985-01-01

    Multi-purpose real-time simulator models with suitable plant dynamics were developed; these models can be used not only in training operators but also in designing control systems, operation sequences and many other items which must be studied for the development of new types of reactors. The prototype fast breeder reactor ''Monju'' is taken as an example. An analysis is made of the various factors affecting the accuracy and computational load of its dynamic simulation. A method is presented which determines the optimum number of nodes in distributed systems and the time steps. Oscillations due to numerical instability are observed in the dynamic simulation of evaporators with a small number of nodes, and a method to cancel these oscillations is proposed. It has been verified through the development of plant dynamics simulation codes that these methods can provide efficient real-time dynamics models of fast breeder reactors. (author)

  1. A Multiscale Simulation Method and Its Application to Determine the Mechanical Behavior of Heterogeneous Geomaterials

    Directory of Open Access Journals (Sweden)

    Shengwei Li

    2017-01-01

    Full Text Available To study the micro/mesomechanical behaviors of heterogeneous geomaterials, a multiscale simulation method that combines molecular simulation at the microscale, a mesoscale analysis of polished slices, and finite element numerical simulation is proposed. By processing the mesostructure images obtained from analyzing the polished slices of heterogeneous geomaterials and mapping them onto finite element meshes, a numerical model that more accurately reflects the mesostructures of heterogeneous geomaterials was established by combining the results with the microscale mechanical properties of geomaterials obtained from the molecular simulation. This model was then used to analyze the mechanical behaviors of heterogeneous materials. Because kernstone is a typical heterogeneous material that comprises many types of mineral crystals, it was used for the micro/mesoscale mechanical behavior analysis in this paper using the proposed method. The results suggest that the proposed method can be used to accurately and effectively study the mechanical behaviors of heterogeneous geomaterials at the micro/mesoscales.

  2. The Roles of Sparse Direct Methods in Large-scale Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiaoye S.; Gao, Weiguo; Husbands, Parry J.R.; Yang, Chao; Ng, Esmond G.

    2005-06-27

    Sparse systems of linear equations and eigen-equations arise at the heart of many large-scale, vital simulations in DOE. Examples include the Accelerator Science and Technology SciDAC (Omega3P code, electromagnetic problem) and the Center for Extended Magnetohydrodynamic Modeling SciDAC (NIMROD and M3D-C1 codes, fusion plasma simulation). The Terascale Optimal PDE Simulations (TOPS) is providing high-performance sparse direct solvers, which have had significant impacts on these applications. Over the past several years, we have been working closely with the other SciDAC teams to solve their large, sparse matrix problems arising from discretization of the partial differential equations. Most of these systems are very ill-conditioned, resulting in extremely poor convergence of iterative solvers. We have deployed our direct methods techniques in these applications, which achieved significant scientific results as well as performance gains. These successes were made possible through the SciDAC model of computer scientists and application scientists working together to take full advantage of terascale computing systems and new algorithms research.
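
    A minimal sketch of the kind of kernel this abstract refers to, assuming SciPy's SuperLU interface: factor a sparse, poorly conditioned matrix once with a direct LU decomposition and reuse the factors. The matrix here is a synthetic stand-in, not an Omega3P or NIMROD system.

        # Direct sparse LU factorization (SuperLU via SciPy) of an ill-conditioned
        # synthetic system; the factors can be reused for many right-hand sides.
        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 2000
        # 1-D Laplacian plus a tiny diagonal shift -> symmetric but poorly conditioned
        main = 2.0 * np.ones(n) + 1e-8
        off = -1.0 * np.ones(n - 1)
        A = sp.diags([off, main, off], [-1, 0, 1], format="csc")
        b = np.random.default_rng(0).standard_normal(n)

        lu = spla.splu(A)           # sparse LU with fill-reducing ordering
        x = lu.solve(b)             # forward/backward triangular solves
        print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))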

  3. The Roles of Sparse Direct Methods in Large-scale Simulations

    International Nuclear Information System (INIS)

    Li, Xiaoye S.; Gao, Weiguo; Husbands, Parry J.R.; Yang, Chao; Ng, Esmond G.

    2005-01-01

    Sparse systems of linear equations and eigen-equations arise at the heart of many large-scale, vital simulations in DOE. Examples include the Accelerator Science and Technology SciDAC (Omega3P code, electromagnetic problem) and the Center for Extended Magnetohydrodynamic Modeling SciDAC (NIMROD and M3D-C1 codes, fusion plasma simulation). The Terascale Optimal PDE Simulations (TOPS) is providing high-performance sparse direct solvers, which have had significant impacts on these applications. Over the past several years, we have been working closely with the other SciDAC teams to solve their large, sparse matrix problems arising from discretization of the partial differential equations. Most of these systems are very ill-conditioned, resulting in extremely poor convergence of iterative solvers. We have deployed our direct methods techniques in these applications, which achieved significant scientific results as well as performance gains. These successes were made possible through the SciDAC model of computer scientists and application scientists working together to take full advantage of terascale computing systems and new algorithms research.

  4. Simulation of isothermal multi-phase fuel-coolant interaction using MPS method with GPU acceleration

    Energy Technology Data Exchange (ETDEWEB)

    Gou, W.; Zhang, S.; Zheng, Y. [Zhejiang Univ., Hangzhou (China). Center for Engineering and Scientific Computation

    2016-07-15

    The energetic fuel-coolant interaction (FCI) has been one of the primary safety concerns in nuclear power plants. Graphical processing unit (GPU) implementation of the moving particle semi-implicit (MPS) method is presented and used to simulate the fuel coolant interaction problem. The governing equations are discretized with the particle interaction model of MPS. Detailed implementation on single-GPU is introduced. The three-dimensional broken dam is simulated to verify the developed GPU acceleration MPS method. The proposed GPU acceleration algorithm and developed code are then used to simulate the FCI problem. As a summary of results, the developed GPU-MPS method showed a good agreement with the experimental observation and theoretical prediction.

  5. Application of subset simulation methods to dynamic fault tree analysis

    International Nuclear Information System (INIS)

    Liu Mengyun; Liu Jingquan; She Ding

    2015-01-01

    Although fault tree analysis has been implemented in the nuclear safety field over the past few decades, it has recently been criticized for its inability to model time-dependent behaviors. Several methods have been proposed to overcome this disadvantage, and the dynamic fault tree (DFT) has become one of the research highlights. By introducing additional dynamic gates, DFT is able to describe dynamic behaviors such as the replacement of spare components or the priority of failure events. Using the Monte Carlo simulation (MCS) approach to solve DFT has attracted rising attention, because it can model the authentic behaviors of systems and avoid the limitations of the analytical method. This paper provides an overview of MCS for DFT analysis, including the sampling of basic events and the propagation rules for logic gates. When calculating rare-event probabilities, a large number of simulations is required in standard MCS. To address this weakness, the subset simulation (SS) approach is applied. Using the concept of conditional probability and the Markov Chain Monte Carlo (MCMC) technique, the SS method is able to accelerate the exploration of the failure region. Two cases are tested to illustrate the performance of the SS approach, and the numerical results suggest that it gives high efficiency when calculating complicated systems with small failure probabilities. (author)
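
    A minimal sketch of subset simulation for a rare-event probability, assuming standard normal inputs and a toy limit-state function in place of the DFT evaluation; the number of samples, the conditional level p0, and the Metropolis proposal scale are illustrative choices, not those of the paper.

        # Subset simulation for P[g(X) > b] with X ~ N(0, I): intermediate levels
        # are set from sample quantiles, and conditional samples are generated by
        # a Metropolis chain restricted to the current failure region.
        import numpy as np

        rng = np.random.default_rng(1)

        def g(x):                        # toy limit-state function (stand-in for a DFT model)
            return x.sum()

        def subset_simulation(dim=10, b=12.0, n=2000, p0=0.1):
            x = rng.standard_normal((n, dim))
            y = np.array([g(xi) for xi in x])
            prob = 1.0
            while True:
                level = np.quantile(y, 1.0 - p0)
                if level >= b:                           # final level reached
                    return prob * np.mean(y > b)
                prob *= p0
                seeds = x[y > level]                     # conditional samples as MCMC seeds
                per_seed = int(np.ceil(n / len(seeds)))
                chains = []
                for s in seeds:
                    cur = s.copy()
                    for _ in range(per_seed):
                        cand = cur + 0.8 * rng.standard_normal(dim)
                        # Metropolis step for the standard normal target,
                        # restricted to the region {g > level}
                        if rng.random() < min(1.0, np.exp(0.5 * (cur @ cur - cand @ cand))):
                            if g(cand) > level:
                                cur = cand
                        chains.append(cur.copy())
                x = np.array(chains[:n])
                y = np.array([g(xi) for xi in x])

        print("estimated rare-event probability:", subset_simulation())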

  6. Numerical simulation of 2D ablation profile in CCI-2 experiment by moving particle semi-implicit method

    Energy Technology Data Exchange (ETDEWEB)

    Chai, Penghui, E-mail: phchai@vis.t.u-tokyo.ac.jp; Kondo, Masahiro; Erkan, Nejdet; Okamoto, Koji

    2016-05-15

    Highlights: • Multiphysics models were developed based on Moving Particle Semi-implicit method. • Mixing process, chemical reaction can be simulated in MCCI calculation. • CCI-2 experiment was simulated to validate the models. • Simulation and experimental results for sidewall ablation agree well. • Simulation results confirm the rapid erosion phenomenon observed in the experiment. - Abstract: Numerous experiments have been performed to explore the mechanisms of molten core-concrete interaction (MCCI) phenomena since the 1980s. However, previous experimental results show that uncertainties pertaining to several aspects such as the mixing process and crust behavior remain. To explore the mechanism governing such aspects, as well as to predict MCCI behavior in real severe accident events, a number of simulation codes have been developed for process calculations. However, uncertainties exist among the codes because of the use of different empirical models. In this study, a new computational code is developed using multiphysics models to simulate MCCI phenomena based on the moving particle semi-implicit (MPS) method. Momentum and energy equations are used to solve the velocity and temperature fields, and multiphysics models are developed on the basis of the basic MPS method. The CCI-2 experiment is simulated by applying the developed code. With respect to sidewall ablation, good agreement is observed between the simulation and experimental results. However, axial ablation is slower in the simulation, which is probably due to the underestimation of the enhancement effect of heat transfer provided by the moving bubbles at the bottom. In addition, the simulation results confirm the rapid erosion phenomenon observed in the experiment, which in the numerical simulation is explained by solutal convection provided by the liquid concrete at the corium/concrete interface. The results of the comparison of different model combinations show the effect of each

  7. Modeling and simulation of ocean wave propagation using lattice Boltzmann method

    Science.gov (United States)

    Nuraiman, Dian

    2017-10-01

    In this paper, we present the modeling and simulation of ocean wave propagation from the deep sea to the shoreline. This requires a high computational cost for simulations with a large domain. We propose to couple a 1D shallow water equations (SWE) model with a 2D incompressible Navier-Stokes equations (NSE) model in order to reduce the computational cost. The coupled model is solved using the lattice Boltzmann method (LBM) with the lattice Bhatnagar-Gross-Krook (BGK) scheme. Additionally, a special method is implemented to treat the complex behavior of the free surface close to the shoreline. The results show that the coupled model can reduce the computational cost significantly compared to the full NSE model.
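
    For readers unfamiliar with the BGK scheme mentioned above, a minimal single-relaxation-time D2Q9 collide-and-stream sketch on a periodic box is given below. It shows the basic LBM cycle only; the SWE/NSE coupling and the shoreline treatment of the paper are not reproduced, and the grid size, relaxation time, and initial shear wave are illustrative.

        # Minimal D2Q9 lattice-BGK sketch: moments, BGK collision, streaming.
        import numpy as np

        nx, ny, tau = 64, 64, 0.8                      # grid and BGK relaxation time
        w = np.array([4/9] + [1/9]*4 + [1/36]*4)       # D2Q9 weights
        c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],
                      [1,1],[-1,1],[-1,-1],[1,-1]])    # lattice velocities

        def equilibrium(rho, ux, uy):
            cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
            usq = ux**2 + uy**2
            return w[:, None, None]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*usq)

        # initial condition: uniform density plus a small sinusoidal shear wave
        rho = np.ones((nx, ny))
        ux = 0.05*np.sin(2*np.pi*np.arange(ny)/ny)[None, :] * np.ones((nx, ny))
        uy = np.zeros((nx, ny))
        f = equilibrium(rho, ux, uy)

        for step in range(500):
            rho = f.sum(axis=0)
            ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
            uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
            f += -(f - equilibrium(rho, ux, uy)) / tau              # BGK collision
            for i in range(9):                                      # streaming step
                f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)

        print("max |u| after viscous decay:", np.abs(ux).max())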

  8. An optimization method of relativistic backward wave oscillator using particle simulation and genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Zaigao; Wang, Jianguo [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China); Northwest Institute of Nuclear Technology, P.O. Box 69-12, Xi'an, Shaanxi 710024 (China); Wang, Yue; Qiao, Hailiang; Zhang, Dianhui [Northwest Institute of Nuclear Technology, P.O. Box 69-12, Xi'an, Shaanxi 710024 (China); Guo, Weijie [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China)

    2013-11-15

    An optimal design method for high-power microwave sources using particle simulation and parallel genetic algorithms is presented in this paper. The output power of the high-power microwave device, simulated by the fully electromagnetic particle simulation code UNIPIC, is taken as the fitness function, and float-encoding genetic algorithms are used to optimize the high-power microwave devices. Using this method, we encode the heights of the non-uniform slow wave structure in the relativistic backward wave oscillator (RBWO), and optimize the parameters on massively parallel processors. Simulation results demonstrate that we can obtain the optimal parameters of the non-uniform slow wave structure in the RBWO, and that the output microwave power is enhanced by 52.6% after the device is optimized.
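
    The float-encoding genetic algorithm itself can be sketched in a few lines. The fitness below is a cheap analytic placeholder; in the paper each evaluation is a full UNIPIC particle-in-cell run returning output microwave power, and the gene count, bounds, and mutation scale here are illustrative.

        # Float-encoded GA: truncation selection, arithmetic crossover,
        # Gaussian mutation. fitness() stands in for an expensive PIC run.
        import numpy as np

        rng = np.random.default_rng(0)
        n_genes, pop_size, n_gen = 8, 40, 60
        lo, hi = 0.0, 1.0                      # bounds on the slow-wave-structure heights

        def fitness(h):                        # placeholder for "output power from UNIPIC"
            return -np.sum((h - 0.7)**2)

        pop = rng.uniform(lo, hi, size=(pop_size, n_genes))
        for gen in range(n_gen):
            fit = np.array([fitness(ind) for ind in pop])     # evaluated in parallel in practice
            parents = pop[np.argsort(fit)[::-1][:pop_size // 2]]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                alpha = rng.random(n_genes)
                child = alpha * a + (1 - alpha) * b           # arithmetic crossover
                child += rng.normal(0, 0.05, n_genes)         # Gaussian mutation
                children.append(np.clip(child, lo, hi))
            pop = np.vstack([parents, children])

        best = pop[np.argmax([fitness(ind) for ind in pop])]
        print("best heights:", np.round(best, 3))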

  9. Plasma simulations using the Car-Parrinello method

    International Nuclear Information System (INIS)

    Clerouin, J.; Zerah, G.; Benisti, D.; Hansen, J.P.

    1990-01-01

    A simplified version of the Car-Parrinello method, based on the Thomas-Fermi (local density) functional for the electrons, is adapted to the simulation of the ionic dynamics in dense plasmas. The method is illustrated by an explicit application to a degenerate one-dimensional hydrogen plasma

  10. Efficient method for time-domain simulation of the linear feedback systems containing fractional order controllers.

    Science.gov (United States)

    Merrikh-Bayat, Farshad

    2011-04-01

    One main approach for time-domain simulation of the linear output-feedback systems containing fractional-order controllers is to approximate the transfer function of the controller with an integer-order transfer function and then perform the simulation. In general, this approach suffers from two main disadvantages: first, the internal stability of the resulting feedback system is not guaranteed, and second, the amount of error caused by this approximation is not exactly known. The aim of this paper is to propose an efficient method for time-domain simulation of such systems without facing the above mentioned drawbacks. For this purpose, the fractional-order controller is approximated with an integer-order transfer function (possibly in combination with the delay term) such that the internal stability of the closed-loop system is guaranteed, and then the simulation is performed. It is also shown that the resulting approximate controller can effectively be realized by using the proposed method. Some formulas for estimating and correcting the simulation error, when the feedback system under consideration is subjected to the unit step command or the unit step disturbance, are also presented. Finally, three numerical examples are studied and the results are compared with the Oustaloup continuous approximation method.
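
    For context on the comparison baseline, the following is a hedged sketch of the commonly cited Oustaloup recursive approximation of a fractional-order term s**alpha over a frequency band; it is an assumption-based illustration, not the paper's stability-preserving construction or its error-correction formulas. The band edges and filter order are illustrative.

        # Oustaloup-style recursive zero/pole approximation of s**alpha on [wb, wh],
        # with a numeric check of the magnitude inside the fit band.
        import numpy as np

        def oustaloup(alpha, wb=1e-2, wh=1e2, N=4):
            """Return zeros, poles, gain approximating s**alpha on [wb, wh] rad/s."""
            k = np.arange(-N, N + 1)
            zeros = -wb * (wh / wb) ** ((k + N + 0.5 * (1 - alpha)) / (2 * N + 1))
            poles = -wb * (wh / wb) ** ((k + N + 0.5 * (1 + alpha)) / (2 * N + 1))
            gain = wh ** alpha
            return zeros, poles, gain

        alpha = 0.5
        z, p, kg = oustaloup(alpha)
        w = np.logspace(-1, 1, 5)                      # frequencies inside the band
        s = 1j * w
        approx = kg * np.prod(s[:, None] - z, axis=1) / np.prod(s[:, None] - p, axis=1)
        print("magnitude error vs w**alpha:", np.abs(np.abs(approx) - w**alpha))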

  11. Innovative teaching methods in the professional training of nurses – simulation education

    Directory of Open Access Journals (Sweden)

    Michaela Miertová

    2013-12-01

    Full Text Available Introduction: The article aims to highlight the use of innovative teaching methods within simulation education in the professional training of nurses abroad and to present our experience based on completing an intensive study programme at the School of Nursing, Midwifery and Social Work, University of Salford (United Kingdom, UK) within the Intensive EU Lifelong Learning Programme (LPP) Erasmus EU RADAR 2013. Methods: Implementation of simulation methods such as role-play, case studies, simulation scenarios, practical workshops and a clinical skills workstation within the structured ABCDE approach (AIM© Assessment and Management Tool) was aimed at promoting the development of theoretical knowledge and the skills to recognize and manage acutely deteriorated patients. The structured SBAR approach (Acute SBAR Communication Tool) was used for the training of communication and information sharing among the members of the multidisciplinary health care team. The OSCE approach (Objective Structured Clinical Examination) was used for the students' individual formative assessment. Results: Simulation education has proven to have many benefits in the professional training of nurses. It is held in safe, controlled and realistic conditions (in simulation laboratories reflecting real hospital and community care environments), with no risk of harming real patients, and is accompanied by debriefing, discussion and analysis of all activities students have performed within the simulated scenario. Such a learning environment is supportive, challenging, constructive, motivating, engaging, skilled, flexible, inspiring and respectful. Thus simulation education is an effective, interactive, interesting, efficient and modern way of nursing education. Conclusion: Critical thinking and clinical competences of nurses are crucial for early recognition of and appropriate response to acute deterioration of a patient's condition. These competences are important to ensure the provision of high quality nursing care. Methods of

  12. Simulating condensation on microstructured surfaces using Lattice Boltzmann Method

    Science.gov (United States)

    Alexeev, Alexander; Vasyliv, Yaroslav

    2017-11-01

    We simulate a single component fluid condensing on 2D structured surfaces with different wettability. To simulate the two phase fluid, we use the athermal Lattice Boltzmann Method (LBM) driven by a pseudopotential force. The pseudopotential force results in a non-ideal equation of state (EOS) which permits liquid-vapor phase change. To account for thermal effects, the athermal LBM is coupled to a finite volume discretization of the temperature evolution equation obtained using a thermal energy rate balance for the specific internal energy. We use the developed model to probe the effect of surface structure and surface wettability on the condensation rate in order to identify microstructure topographies promoting condensation. Financial support is acknowledged from Kimberly-Clark.

  13. Prediction and evaluation method of wind environment in the early design stage using BIM-based CFD simulation

    International Nuclear Information System (INIS)

    Lee, Sumi; Song, Doosam

    2010-01-01

    Drastic urbanization and manhattanization are causing various problems in wind environment. This study suggests a CFD simulation method to evaluate wind environment in the early design stage of high-rise buildings. The CFD simulation of this study is not a traditional in-depth simulation, but a method to immediately evaluate wind environment for each design alternative and provide guidelines for design modification. Thus, the CFD simulation of this study to evaluate wind environments uses BIM-based CFD tools to utilize building models in the design stage. This study examined previous criteria to evaluate wind environment for pedestrians around buildings and selected evaluation criteria applicable to the CFD simulation method of this study. Furthermore, proper mesh generation method and CPU time were reviewed to find a meaningful CFD simulation result for determining optimal design alternative from the perspective of wind environment in the design stage. In addition, this study is to suggest a wind environment evaluation method through a BIM-based CFD simulation.

  14. Simulation of anisotropic diffusion by means of a diffusion velocity method

    CERN Document Server

    Beaudoin, A; Rivoalen, E

    2003-01-01

    An alternative method to the Particle Strength Exchange method for solving the advection-diffusion equation in the general case of a non-isotropic and non-uniform diffusion is proposed. This method is an extension of the diffusion velocity method. It is shown that this extension is quite straightforward due to the explicit use of the diffusion flux in the expression of the diffusion velocity. This approach is used to simulate pollutant transport in groundwater and the results are compared to those of the PSE method presented in an earlier study by Zimmermann et al.
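
    The core idea, advecting particles with a velocity proportional to the diffusion flux divided by the concentration, can be illustrated in one dimension. The sketch below assumes constant isotropic diffusivity and a Gaussian kernel reconstruction of the field from the particles; the anisotropic, non-uniform extension of the paper is not reproduced, and all numerical parameters are illustrative.

        # 1-D diffusion velocity method: particles carry the scalar, and are moved
        # with u_d = -D * grad(c) / c, where c and grad(c) are kernel estimates.
        import numpy as np

        rng = np.random.default_rng(5)
        n_part, D, dt, n_steps = 1000, 1.0e-3, 0.1, 100
        eps = 0.05                                       # kernel width
        x = rng.normal(0.0, 0.1, n_part)                 # particles sample the initial field
        w = np.ones(n_part) / n_part                     # equal particle weights

        def kernel_and_grad(targets, sources, weights):
            d = targets[:, None] - sources[None, :]
            k = np.exp(-0.5 * (d / eps) ** 2) / (eps * np.sqrt(2 * np.pi))
            c = (k * weights).sum(axis=1)
            dc = (-d / eps**2 * k * weights).sum(axis=1)
            return c, dc

        for _ in range(n_steps):
            c, dc = kernel_and_grad(x, x, w)
            x += dt * (-D * dc / c)                      # advection with the diffusion velocity

        print("particle variance:", round(float(np.var(x)), 5),
              " analytic 0.1**2 + 2*D*t:", round(0.1**2 + 2*D*dt*n_steps, 5))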

  15. The two-regime method for optimizing stochastic reaction-diffusion simulations

    KAUST Repository

    Flegg, M. B.

    2011-10-19

    Spatial organization and noise play an important role in molecular systems biology. In recent years, a number of software packages have been developed for stochastic spatio-temporal simulation, ranging from detailed molecular-based approaches to less detailed compartment-based simulations. Compartment-based approaches yield quick and accurate mesoscopic results, but lack the level of detail that is characteristic of the computationally intensive molecular-based models. Often microscopic detail is only required in a small region (e.g. close to the cell membrane). Currently, the best way to achieve microscopic detail is to use a resource-intensive simulation over the whole domain. We develop the two-regime method (TRM) in which a molecular-based algorithm is used where desired and a compartment-based approach is used elsewhere. We present easy-to-implement coupling conditions which ensure that the TRM results have the same accuracy as a detailed molecular-based model in the whole simulation domain. Therefore, the TRM combines strengths of previously developed stochastic reaction-diffusion software to efficiently explore the behaviour of biological models. Illustrative examples and the mathematical justification of the TRM are also presented.

  16. Image restoration by the method of convex projections: part 2 applications and numerical results.

    Science.gov (United States)

    Sezan, M I; Stark, H

    1982-01-01

    The image restoration theory discussed in a previous paper by Youla and Webb [1] is applied to a simulated image and the results are compared with those of the well-known Gerchberg-Papoulis algorithm. The results show that the method of image restoration by projection onto convex sets, by providing a convenient technique for utilizing a priori information, performs significantly better than the Gerchberg-Papoulis method.
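
    The alternating-projection idea can be demonstrated on a 1-D toy problem: recover a band-limited signal from partial samples by projecting alternately onto the set of signals matching the observed data and the set of band-limited signals. This is a Gerchberg-Papoulis-style iteration with illustrative sizes; the specific convex constraint sets of Youla and Webb are not reproduced.

        # Alternating projections: C1 = signals agreeing with the observed samples,
        # C2 = signals band-limited to the lowest Fourier bins.
        import numpy as np

        rng = np.random.default_rng(0)
        n, band = 256, 12
        freq = np.zeros(n, dtype=complex)
        freq[:band] = rng.standard_normal(band) + 1j * rng.standard_normal(band)
        freq[0] = freq[0].real                                   # keep the DC term real
        freq[-band + 1:] = np.conj(freq[1:band][::-1])           # Hermitian symmetry -> real signal
        x_true = np.fft.ifft(freq).real

        mask = rng.random(n) < 0.4                               # observed sample positions
        x = np.where(mask, x_true, 0.0)                          # initial guess

        for _ in range(200):
            X = np.fft.fft(x)
            X[band:n - band + 1] = 0.0                           # projection onto the band-limit set
            x = np.fft.ifft(X).real
            x[mask] = x_true[mask]                               # projection onto the data set

        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))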

  17. A tool for simulating parallel branch-and-bound methods

    Directory of Open Access Journals (Sweden)

    Golubeva Yana

    2016-01-01

    Full Text Available The Branch-and-Bound method is known as one of the most powerful but very resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution. Therefore, the design and study of load balancing algorithms is a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, sizes of the search tree, and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The process of resolution of the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.
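
    A toy sketch in the spirit of the simulator: the Branch-and-Bound search is replaced by a random branching process, load is redistributed by a simple work-stealing rule, and progress is counted in logical time steps. The worker count, branching probability, and stealing policy are illustrative stand-ins, not the strategies studied in the paper.

        # Stochastic branching process on several "workers", with half-stack
        # stealing when a worker runs out of subproblems.
        import random

        random.seed(3)
        N_WORKERS, P_BRANCH, MAX_DEPTH = 4, 0.5, 20

        stacks = [[] for _ in range(N_WORKERS)]
        stacks[0].append(0)                    # root subproblem (value = depth)
        logical_time, expanded = 0, 0

        while any(stacks):
            logical_time += 1
            for w in range(N_WORKERS):
                if stacks[w]:
                    depth = stacks[w].pop()
                    expanded += 1
                    # stochastic branching stands in for real bounding/pruning
                    if depth < MAX_DEPTH and random.random() < P_BRANCH:
                        stacks[w].extend([depth + 1, depth + 1])
                else:
                    donor = max(range(N_WORKERS), key=lambda i: len(stacks[i]))
                    if len(stacks[donor]) > 1:           # steal half of the largest stack
                        half = len(stacks[donor]) // 2
                        stacks[w], stacks[donor] = stacks[donor][:half], stacks[donor][half:]

        print("subproblems expanded:", expanded, " logical time:", logical_time)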

  18. Simulation of plume dynamics by the Lattice Boltzmann Method

    Science.gov (United States)

    Mora, Peter; Yuen, David A.

    2017-09-01

    The Lattice Boltzmann Method (LBM) is a semi-microscopic method to simulate fluid mechanics by modelling distributions of particles moving and colliding on a lattice. We present 2-D simulations using the LBM of a fluid in a rectangular box being heated from below and cooled from above, with a Rayleigh number of Ra = 10^8, similar to current estimates for the Earth's mantle, and a Prandtl number of 5000. At this Prandtl number, the flow is found to be in the non-inertial regime, where the inertial terms, denoted I, satisfy I ≪ 1. Hence, the simulations presented lie within the regime of relevance for geodynamical problems. We obtain narrow upwelling plumes with mushroom heads and chutes of downwelling fluid, as expected of a flow in the non-inertial regime. The method developed demonstrates that the LBM has great potential for simulating thermal convection and plume dynamics relevant to geodynamics, albeit with some limitations.

  19. 'Odontologic dosimetric card' experiments and simulations using Monte Carlo methods

    International Nuclear Information System (INIS)

    Menezes, C.J.M.; Lima, R. de A.; Peixoto, J.E.; Vieira, J.W.

    2008-01-01

    The techniques for data processing, combined with the development of fast and more powerful computers, make the Monte Carlo method one of the most widely used tools in radiation transport simulation. For applications in diagnostic radiology, this method generally uses anthropomorphic phantoms to evaluate the absorbed dose to patients during exposure. In this paper, Monte Carlo techniques were used to simulate a testing device designed for intra-oral X-ray equipment performance evaluation called the Odontologic Dosimetric Card (CDO, from 'Cartao Dosimetrico Odontologico' in Portuguese) for different thermoluminescent detectors. Two computational models of exposure, RXD/EGS4 and CDO/EGS4, were used. In the first model, the simulation results are compared with experimental data obtained under similar conditions. The second model presents the same characteristics as the testing device studied (CDO). For the irradiations, the X-ray spectra were generated with the IPEM report number 78 spectrum processor. The attenuated spectrum was obtained for IEC 61267 qualities and various additional filters for a Pantak 320 industrial X-ray unit. The results obtained for the study of the copper filters used in the determination of the kVp were compared with experimental data, validating the model proposed for the characterization of the CDO. The results show that the CDO can be utilized in quality assurance programs in order to guarantee that the equipment fulfills the requirements of Norm SVS No. 453/98 MS (Brazil), 'Directives of Radiation Protection in Medical and Dental Radiodiagnostic'. We conclude that EGS4 is a suitable Monte Carlo code to simulate thermoluminescent dosimeters and the experimental procedures employed in the routine of a quality control laboratory in diagnostic radiology. (author)

  20. Novel Methods for Electromagnetic Simulation and Design

    Science.gov (United States)

    2016-08-03

    We developed new methods that provide the basis for high-fidelity modeling software able to handle complicated, electrically large objects in a manner that is sufficiently fast to allow design by simulation. We also developed new methods for scattering from cavities.

  1. A hybrid multiscale kinetic Monte Carlo method for simulation of copper electrodeposition

    International Nuclear Information System (INIS)

    Zheng Zheming; Stephens, Ryan M.; Braatz, Richard D.; Alkire, Richard C.; Petzold, Linda R.

    2008-01-01

    A hybrid multiscale kinetic Monte Carlo (HMKMC) method for speeding up the simulation of copper electrodeposition is presented. The fast diffusion events are simulated deterministically with a heterogeneous diffusion model which considers site-blocking effects of additives. Chemical reactions are simulated by an accelerated (tau-leaping) method for discrete stochastic simulation which adaptively selects exact discrete stochastic simulation for the appropriate reaction whenever that is necessary. The HMKMC method is seen to be accurate and highly efficient
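
    The adaptive switch between tau-leaping and exact stochastic simulation mentioned above can be illustrated with a simple birth-death process. The reaction rates, population threshold, and leap size below are illustrative, not the copper-electrodeposition chemistry or the heterogeneous diffusion model of the paper.

        # Adaptive stochastic simulation: exact SSA steps for small populations,
        # Poisson tau-leaping otherwise.
        import numpy as np

        rng = np.random.default_rng(2)
        k_birth, k_death = 5.0, 0.1            # A -> A+1 (constant), A -> A-1 (per molecule)
        x, t, t_end, tau = 0, 0.0, 50.0, 0.05

        while t < t_end:
            a = np.array([k_birth, k_death * x])         # propensities
            a0 = a.sum()
            if x < 10:                                   # small population: exact SSA step
                t += rng.exponential(1.0 / a0)
                if rng.random() < a[0] / a0:
                    x += 1
                else:
                    x -= 1
            else:                                        # tau-leap: Poisson firing counts
                n_birth = rng.poisson(a[0] * tau)
                n_death = rng.poisson(a[1] * tau)
                x = max(x + n_birth - n_death, 0)
                t += tau

        print("t =", round(t, 2), " A =", x, " (steady-state mean ~", k_birth / k_death, ")")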

  2. New chemical-DSMC method in numerical simulation of axisymmetric rarefied reactive flow

    Science.gov (United States)

    Zakeri, Ramin; Kamali Moghadam, Ramin; Mani, Mahmoud

    2017-04-01

    The modified quantum kinetic (MQK) chemical reaction model introduced by Zakeri et al. is developed for applicable cases in axisymmetric reactive rarefied gas flows using the direct simulation Monte Carlo (DSMC) method. Although the MQK chemical model uses some modifications of the quantum kinetic (QK) method, it also employs the general soft sphere collision model and the Stockmayer potential function to properly select the collision pairs in the DSMC algorithm and to capture both the attractive and repulsive intermolecular forces in rarefied gas flows. For assessment of the presented model in the simulation of more complex and applicable reacting flows, first, air dissociation is studied in a single cell for equilibrium and non-equilibrium conditions. The MQK results agree well with the analytical and experimental data and accurately predict the characteristics of the rarefied flowfield with chemical reactions. To investigate the accuracy of the MQK chemical model in the simulation of axisymmetric flow, air dissociation is also assessed in an axial hypersonic flow around two geometries, a sphere as a benchmark case and a blunt body (STS-2) as an applicable test case. The computed results, including the translational, rotational and vibrational temperatures, the species concentrations along the stagnation line, and the heat flux and pressure coefficient on the surface, are compared with those of other chemical methods such as the QK and total collision energy (TCE) models and with available analytical and experimental data. Generally, the MQK chemical model properly simulates the chemical reactions and predicts flowfield characteristics more accurately than the typical QK model. Although in some cases the results of the MQK approach match those of the TCE method, the main point is that the MQK does not need any experimental data or the unrealistic assumption of a specular boundary condition as used in the TCE method. Another advantage of the MQK model is the

  3. Simulation of two-phase flow in horizontal fracture networks with numerical manifold method

    Science.gov (United States)

    Ma, G. W.; Wang, H. D.; Fan, L. F.; Wang, B.

    2017-10-01

    The paper presents simulation of two-phase flow in discrete fracture networks with numerical manifold method (NMM). Each phase of fluids is considered to be confined within the assumed discrete interfaces in the present method. The homogeneous model is modified to approach the mixed fluids. A new mathematical cover formation for fracture intersection is proposed to satisfy the mass conservation. NMM simulations of two-phase flow in a single fracture, intersection, and fracture network are illustrated graphically and validated by the analytical method or the finite element method. Results show that the motion status of discrete interface significantly depends on the ratio of mobility of two fluids rather than the value of the mobility. The variation of fluid velocity in each fracture segment and the driven fluid content are also influenced by the ratio of mobility. The advantages of NMM in the simulation of two-phase flow in a fracture network are demonstrated in the present study, which can be further developed for practical engineering applications.

  4. A Multi-Projector Calibration Method for Virtual Reality Simulators with Analytically Defined Screens

    Directory of Open Access Journals (Sweden)

    Cristina Portalés

    2017-06-01

    Full Text Available The geometric calibration of projectors is a demanding task, particularly for the industry of virtual reality simulators. Different methods have been developed during the last decades to retrieve the intrinsic and extrinsic parameters of projectors, most of them being based on planar homographies and some requiring an extended calibration process. The aim of our research work is to design a fast and user-friendly method to provide multi-projector calibration on analytically defined screens, where a sample is shown for a virtual reality Formula 1 simulator that has a cylindrical screen. The proposed method results from the combination of surveying, photogrammetry and image processing approaches, and has been designed by considering the spatial restrictions of virtual reality simulators. The method has been validated from a mathematical point of view, and the complete system—which is currently installed in a shopping mall in Spain—has been tested by different users.

  5. Application of Macro Response Monte Carlo method for electron spectrum simulation

    International Nuclear Information System (INIS)

    Perles, L.A.; Almeida, A. de

    2007-01-01

    During the past years, several variance reduction techniques for Monte Carlo electron transport have been developed in order to reduce the electron transport computation time for absorbed dose distributions. We have implemented the Macro Response Monte Carlo (MRMC) method to evaluate the electron spectrum, which can be used as a phase space input for other simulation programs. This technique uses probability distributions for electron histories previously simulated in spheres (called kugels). These probabilities are used to sample the primary electron final state, as well as the creation of secondary electrons and photons. We have compared the MRMC electron spectra simulated in a homogeneous phantom against the Geant4 spectra. The results showed an agreement better than 6% in the spectra peak energies and that the MRMC code is up to 12 times faster than Geant4 simulations.

  6. Method for simulating dose reduction in digital mammography using the Anscombe transformation.

    Science.gov (United States)

    Borges, Lucas R; Oliveira, Helder C R de; Nunes, Polyana F; Bakic, Predrag R; Maidment, Andrew D A; Vieira, Marcelo A C

    2016-06-01

    This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower-radiation dose. The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image and with uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation. NNPS, PS, and local noise
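
    A heavily simplified sketch of the dose-reduction idea is given below, assuming a purely quantum-noise-limited (Poisson) detector: the image is scaled by the dose fraction and zero-mean signal-dependent noise is injected so that the variance matches the target dose, with the Anscombe transformation used only as a variance-stabilized check. The paper's full method additionally uses flat-field noise masks, detector offsets and DQE effects, which are not reproduced; the photon-count range and dose fraction are illustrative.

        # Simulate a half-dose image from a standard-dose Poisson image by scaling
        # and injecting signal-dependent Gaussian noise, then compare against a
        # genuine low-dose realization in the Anscombe (variance-stabilized) domain.
        import numpy as np

        rng = np.random.default_rng(0)

        def anscombe(x):                     # variance-stabilizing transform for Poisson data
            return 2.0 * np.sqrt(x + 3.0 / 8.0)

        gamma = 0.5                                          # simulated dose fraction
        lam = rng.uniform(200.0, 2000.0, size=(256, 256))    # "tissue" mean photon counts
        full_dose = rng.poisson(lam).astype(float)           # standard-dose acquisition

        scaled = gamma * full_dose                           # correct mean, too little noise
        extra_var = (1.0 - gamma) * scaled                   # deficit: gamma*(1-gamma)*lam
        simulated = scaled + rng.normal(0.0, np.sqrt(np.maximum(extra_var, 0.0)))

        reference = rng.poisson(gamma * lam).astype(float)   # a genuine low-dose acquisition
        for name, img in [("simulated", simulated), ("reference", reference)]:
            resid = anscombe(np.maximum(img, 0.0)) - anscombe(gamma * lam)
            print(name, "variance-stabilized noise std:", round(float(resid.std()), 3))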

  7. Hybrid numerical methods for multiscale simulations of subsurface biogeochemical processes

    International Nuclear Information System (INIS)

    Scheibe, T D; Tartakovsky, A M; Tartakovsky, D M; Redden, G D; Meakin, P

    2007-01-01

    Many subsurface flow and transport problems of importance today involve coupled non-linear flow, transport, and reaction in media exhibiting complex heterogeneity. In particular, problems involving biological mediation of reactions fall into this class of problems. Recent experimental research has revealed important details about the physical, chemical, and biological mechanisms involved in these processes at a variety of scales ranging from molecular to laboratory scales. However, it has not been practical or possible to translate detailed knowledge at small scales into reliable predictions of field-scale phenomena important for environmental management applications. A large assortment of numerical simulation tools have been developed, each with its own characteristic scale. Important examples include 1. molecular simulations (e.g., molecular dynamics); 2. simulation of microbial processes at the cell level (e.g., cellular automata or particle individual-based models); 3. pore-scale simulations (e.g., lattice-Boltzmann, pore network models, and discrete particle methods such as smoothed particle hydrodynamics); and 4. macroscopic continuum-scale simulations (e.g., traditional partial differential equations solved by finite difference or finite element methods). While many problems can be effectively addressed by one of these models at a single scale, some problems may require explicit integration of models across multiple scales. We are developing a hybrid multi-scale subsurface reactive transport modeling framework that integrates models with diverse representations of physics, chemistry and biology at different scales (sub-pore, pore and continuum). The modeling framework is being designed to take advantage of advanced computational technologies including parallel code components using the Common Component Architecture, parallel solvers, gridding, data and workflow management, and visualization. This paper describes the specific methods/codes being used at each

  8. Non-Destructive Evaluation Method Based On Dynamic Invariant Stress Resultants

    Directory of Open Access Journals (Sweden)

    Zhang Junchi

    2015-01-01

    Full Text Available Most of the vibration-based damage detection methods are based on changes in frequencies, mode shapes, mode shape curvature, and flexibilities. These methods are limited and typically can only detect the presence and location of damage. Current methods can seldom identify the exact severity of damage to structures. This paper presents research on the development of a new non-destructive evaluation method to identify the existence, location, and severity of damage for structural systems. The method utilizes the concept of invariant stress resultants (ISR). The basic concept of ISR is that at any given cross section the resultant internal force distribution in a structural member is not affected by the inflicted damage. The method utilizes dynamic analysis of the structure to simulate direct measurements of acceleration, velocity and displacement simultaneously. The proposed dynamic ISR method is developed and utilized to detect damage through the corresponding changes in mass, damping and stiffness. The objectives of this research are to develop the basic theory of the dynamic ISR method, apply it to specific types of structures, and verify the accuracy of the developed theory. Numerical results that demonstrate the application of the method reflect the advanced sensitivity and accuracy in characterizing multiple damage locations.

  9. Improving the Stability and Accuracy of Power Hardware-in-the-Loop Simulation Using Virtual Impedance Method

    Directory of Open Access Journals (Sweden)

    Xiaoming Zha

    2016-11-01

    Full Text Available Power hardware-in-the-loop (PHIL systems are advanced, real-time platforms for combined software and hardware testing. Two paramount issues in PHIL simulations are the closed-loop stability and simulation accuracy. This paper presents a virtual impedance (VI method for PHIL simulations that improves the simulation’s stability and accuracy. Through the establishment of an impedance model for a PHIL simulation circuit, which is composed of a voltage-source converter and a simple network, the stability and accuracy of the PHIL system are analyzed. Then, the proposed VI method is implemented in a digital real-time simulator and used to correct the combined impedance in the impedance model, achieving higher stability and accuracy of the results. The validity of the VI method is verified through the PHIL simulation of two typical PHIL examples.

  10. Computational methods for coupling microstructural and micromechanical materials response simulations

    Energy Technology Data Exchange (ETDEWEB)

    HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.; FANG,HUEI ELIOT; RINTOUL,MARK DANIEL; VEDULA,VENKATA R.; GLASS,S. JILL; KNOROVSKY,GERALD A.; NEILSEN,MICHAEL K.; WELLMAN,GERALD W.; SULSKY,DEBORAH; SHEN,YU-LIN; SCHREYER,H. BUCK

    2000-04-01

    Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

  11. Contribution of the ultrasonic simulation to the testing methods qualification process; Contribution de la modelisation ultrasonore au processus de qualification des methodes de controle

    Energy Technology Data Exchange (ETDEWEB)

    Le Ber, L.; Calmon, P. [CEA/Saclay, STA, 91 - Gif-sur-Yvette (France); Abittan, E. [Electricite de France (EDF-GDL), 93 - Saint-Denis (France)

    2001-07-01

    The CEA and EDF have started a study concerning the value of simulation in the qualification of ultrasonic inspection methods for nuclear components. In this framework, the simulation tools of the CEA, such as CIVA, have been tested against real inspections. The method and the results obtained on some examples are presented. (A.L.B.)

  12. Forest canopy BRDF simulation using Monte Carlo method

    NARCIS (Netherlands)

    Huang, J.; Wu, B.; Zeng, Y.; Tian, Y.

    2006-01-01

    The Monte Carlo method is a stochastic statistical method which has been widely used to simulate the Bidirectional Reflectance Distribution Function (BRDF) of vegetation canopies in the field of visible remote sensing. The random interaction process between photons and the forest canopy was designed using the Monte Carlo method.
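
    A toy photon-tracking sketch conveys the general idea: photons take exponential free paths through a homogeneous "canopy" slab, are scattered or absorbed at collisions, and the fraction escaping upward is tallied. The isotropic scattering, black soil, leaf-area index and albedo below are simplifying assumptions, not the canopy structure or leaf optics used in the paper.

        # Monte Carlo photon tracking in a homogeneous turbid slab of vertical
        # optical thickness `lai`; mu > 0 means travelling upward.
        import numpy as np

        rng = np.random.default_rng(7)
        n_photons, lai, albedo = 20000, 3.0, 0.6
        mu0 = np.cos(np.deg2rad(30.0))                # solar zenith 30 degrees

        reflected = 0
        for _ in range(n_photons):
            z, mu = 0.0, -mu0                         # depth below the canopy top
            while True:
                s = rng.exponential(1.0)              # optical path length to next collision
                z -= mu * s
                if z <= 0.0:                          # escaped through the top
                    reflected += 1
                    break
                if z >= lai:                          # reached the (black) soil: absorbed
                    break
                if rng.random() > albedo:             # collision: absorbed by the leaf
                    break
                mu = rng.uniform(-1.0, 1.0)           # isotropic scattering (toy assumption)

        print("canopy reflectance ~", round(reflected / n_photons, 3))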

  13. Coupling methods for parallel running RELAPSim codes in nuclear power plant simulation

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yankai; Lin, Meng, E-mail: linmeng@sjtu.edu.cn; Yang, Yanhua

    2016-02-15

    When the plant is modeled in detail for high precision, it is hard to achieve real-time calculation with one single RELAP5 instance in a large-scale simulation. To improve the speed and ensure the precision of the simulation at the same time, coupling methods for parallel running RELAPSim codes were proposed in this study. An explicit coupling method via coupling boundaries was realized based on a data-exchange and procedure-control environment. A compromise on the synchronization frequency was carefully considered to improve the precision of the simulation and guarantee real-time simulation at the same time. The coupling methods were assessed using both single-phase flow models and two-phase flow models, and good agreement was obtained between the splitting-coupling models and the integrated model. The mitigation of SGTR was performed as an integral application of the coupling models. A large-scope NPP simulator was developed adopting six splitting-coupling models of RELAPSim and other simulation codes. The coupling models could improve the speed of simulation significantly and make real-time calculation possible. In this paper, the coupling of the models in the engineering simulator is taken as an example to expound the coupling methods, i.e., coupling between parallel running RELAPSim codes, and coupling between RELAPSim code and other types of simulation codes. However, the coupling methods are also applicable to other simulators, for example, a simulator employing ATHLETE instead of RELAP5, or other logic code instead of SIMULINK. It is believed the coupling method is generally applicable to NPP simulators regardless of the specific codes chosen in this paper.
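
    The explicit coupling with a chosen synchronization frequency can be illustrated with two toy scalar submodels standing in for parallel code instances: each advances with the frozen boundary value it last received, and boundary data are exchanged only every few internal steps. The equations, coefficients and sync interval are illustrative, not RELAP5 models.

        # Two first-order "volumes" integrated separately, exchanging boundary
        # temperatures only at the synchronization points.
        dt, sync_every, t_end = 0.01, 10, 20.0
        k12, k21 = 0.8, 0.5                     # coupling coefficients (illustrative)
        T1, T2 = 100.0, 20.0                    # initial states of the two submodels
        T1_b, T2_b = T2, T1                     # boundary copies seen by the other code

        t, step = 0.0, 0
        while t < t_end:
            # each "code" advances with the boundary value it last received
            T1 += dt * k12 * (T1_b - T1)
            T2 += dt * k21 * (T2_b - T2)
            step += 1
            t += dt
            if step % sync_every == 0:          # data exchange at the sync point
                T1_b, T2_b = T2, T1

        print("coupled states at t=%.1f: T1=%.2f  T2=%.2f" % (t, T1, T2))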

  14. A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    International Nuclear Information System (INIS)

    Liu Jizhi; Chen Xingbi

    2009-01-01

    A new quasi-three-dimensional (quasi-3D) numeric simulation method for a high-voltage level-shifting circuit structure is proposed. The performances of the 3D structure are analyzed by combining some 2D device structures; the 2D devices are in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D simulation method can give results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases with advantages such as saving computing time, making no demands on the high-end computer terminals, and being easy to operate. (semiconductor integrated circuits)

  15. A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    Energy Technology Data Exchange (ETDEWEB)

    Liu Jizhi; Chen Xingbi, E-mail: jzhliu@uestc.edu.c [State Key Laboratory of Electronic Thin Films and Integrated Devices, University of Electronic Science and Technology of China, Chengdu 610054 (China)

    2009-12-15

    A new quasi-three-dimensional (quasi-3D) numeric simulation method for a high-voltage level-shifting circuit structure is proposed. The performances of the 3D structure are analyzed by combining some 2D device structures; the 2D devices are in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D simulation method can give results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases with advantages such as saving computing time, making no demands on the high-end computer terminals, and being easy to operate. (semiconductor integrated circuits)

  16. Methods of channeling simulation

    International Nuclear Information System (INIS)

    Barrett, J.H.

    1989-06-01

    Many computer simulation programs have been used to interpret experiments almost since the first channeling measurements were made. Certain aspects of these programs are important in how accurately they simulate ions in crystals; among these are the manner in which the structure of the crystal is incorporated, how any quantity of interest is computed, what ion-atom potential is used, how deflections are computed from the potential, incorporation of thermal vibrations of the lattice atoms, correlations of thermal vibrations, and form of stopping power. Other aspects of the programs are included to improve the speed; among these are table lookup, importance sampling, and the multiparameter method. It is desirable for programs to facilitate incorporation of special features of interest in special situations; examples are relaxations and enhanced vibrations of surface atoms, easy substitution of an alternate potential for comparison, change of row directions from layer to layer in strained-layer lattices, and different vibration amplitudes for substitutional solute or impurity atoms. Ways of implementing all of these aspects and features and the consequences of them will be discussed. 30 refs., 3 figs

  17. An efficient method of exploring simulation models by assimilating literature and biological observational data.

    Science.gov (United States)

    Hasegawa, Takanori; Nagasaki, Masao; Yamaguchi, Rui; Imoto, Seiya; Miyano, Satoru

    2014-07-01

    Recently, several biological simulation models of, e.g., gene regulatory networks and metabolic pathways, have been constructed based on existing knowledge of biomolecular reactions, e.g., DNA-protein and protein-protein interactions. However, since these do not always contain all necessary molecules and reactions, their simulation results can be inconsistent with observational data. Therefore, improvements in such simulation models are urgently required. A previously reported method created multiple candidate simulation models by partially modifying existing models. However, this approach was computationally costly and could not handle a large number of candidates that are required to find models whose simulation results are highly consistent with the data. In order to overcome the problem, we focused on the fact that the qualitative dynamics of simulation models are highly similar if they share a certain amount of regulatory structures. This indicates that better fitting candidates tend to share the basic regulatory structure of the best fitting candidate, which can best predict the data among candidates. Thus, instead of evaluating all candidates, we propose an efficient explorative method that can selectively and sequentially evaluate candidates based on the similarity of their regulatory structures. Furthermore, in estimating the parameter values of a candidate, e.g., synthesis and degradation rates of mRNA, for the data, those of the previously evaluated candidates can be utilized. The method is applied here to the pharmacogenomic pathways for corticosteroids in rats, using time-series microarray expression data. In the performance test, we succeeded in obtaining more than 80% of consistent solutions within 15% of the computational time as compared to the comprehensive evaluation. Then, we applied this approach to 142 literature-recorded simulation models of corticosteroid-induced genes, and consequently selected 134 newly constructed better models. The

  18. Commissioning methods applied to the Hunterston 'B' AGR operator training simulator

    International Nuclear Information System (INIS)

    Hacking, D.

    1985-01-01

    The Hunterston 'B' full scope AGR Simulator, built for the South of Scotland Electricity Board by Marconi Instruments, encompasses all systems under direct and indirect control of the Hunterston central control room operators. The resulting breadth and depth of simulation, together with the specification for the real-time implementation of a large number of highly interactive detailed plant models, leads to the classic problem of identifying acceptance and acceptability criteria. For example, whilst the ultimate criterion for acceptability must clearly be that, within the context of the training requirement, the simulator should be indistinguishable from the actual plant, far more measurable (i.e. less subjective) statements are required if a formal contractual acceptance condition is to be achieved. Within this framework, individual models and processes can have radically different acceptance requirements, which therefore reflect on the commissioning approach applied. This paper discusses the application of a combination of quality assurance methods, design code results, plant data, theoretical analysis and operator 'feel' in the commissioning of the Hunterston 'B' AGR Operator Training Simulator. (author)

  19. Quench simulation results for a 12-T twin-aperture dipole magnet

    Science.gov (United States)

    Cheng, Da; Salmi, Tiina; Xu, Qingjin; Peng, Quanling; Wang, Chengtao; Wang, Yingzhe; Kong, Ershuai; Zhang, Kai

    2018-06-01

    A 12-T twin-aperture subscale dipole magnet is being developed for the SPPC pre-study at the Institute of High Energy Physics (IHEP). The magnet comprises 6 double-pancake coils, of which 2 are Nb3Sn coils and 4 are NbTi coils. As the stored energy of the magnet is 0.452 MJ and the operating margin is only about 20% at 4.2 K, a quick and effective quench protection system is necessary during the test of this high field magnet. For the design of the quench protection system, attention was paid not only to the hot-spot temperature and terminal voltage, but also to the temperature gradient during the quench process, due to the poor mechanical characteristics of the Nb3Sn cables. Based on adiabatic analysis, numerical simulation and finite element simulation, an optimized protection method is adopted, which contains a dump resistor and quench heaters. In this paper, the results of the adiabatic analysis and quench simulation, such as current decay, hot-spot temperature and terminal voltage, are presented in detail.
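
    The adiabatic hot-spot estimate mentioned above amounts to integrating C(T) dT/dt = rho_e(T) J(t)^2 over the current decay. The sketch below assumes an exponential decay after the dump resistor fires and uses crude placeholder fits for the effective resistivity and volumetric heat capacity; none of these numbers are the cable or circuit data of the 12-T magnet.

        # Adiabatic hot-spot integration with toy material fits and an assumed
        # exponential current decay J(t) = J0 * exp(-t / tau).
        import numpy as np

        def rho_e(T):            # effective stabilizer resistivity [ohm*m] (toy fit)
            return 2e-10 + 7e-11 * (T / 100.0)

        def c_vol(T):            # volumetric heat capacity [J/(m^3*K)] (toy fit)
            return 900.0 + 40.0 * T**2

        J0, tau, dt = 6.5e8, 0.3, 2e-5     # current density [A/m^2], decay time [s], step [s]
        T, t = 4.2, 0.0
        while t < 5 * tau:
            J = J0 * np.exp(-t / tau)      # current decay after the dump resistor fires
            T += dt * rho_e(T) * J**2 / c_vol(T)
            t += dt

        print("adiabatic hot-spot estimate: %.0f K" % T)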

  20. Hybrid statistics-simulations based method for atom-counting from ADF STEM images.

    Science.gov (United States)

    De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra

    2017-06-01

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials.

  1. Using relational databases to collect and store discrete-event simulation results

    DEFF Research Database (Denmark)

    Poderys, Justas; Soler, José

    2016-01-01

    , export the results to a data carrier file and then process the results stored in a file using the data processing software. In this work, we propose to save the simulation results directly from a simulation tool to a computer database. We implemented a link between the discrete-event simulation tool...... and the database and performed a performance evaluation of 3 different open-source database systems. We show, that with a right choice of a database system, simulation results can be collected and exported up to 2.67 times faster, and use 1.78 times less disk space when compared to using simulation software built
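
    A minimal sketch of the idea of writing simulation results straight into a relational database rather than a results file, here using Python's built-in sqlite3 module for brevity (the paper evaluates three other open-source database systems). The table name, columns and the synthetic event loop are illustrative.

        # Store per-event simulation statistics in a relational table and query
        # aggregates directly with SQL.
        import sqlite3
        import random

        conn = sqlite3.connect("sim_results.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS packet_stats (
                            run_id INTEGER, sim_time REAL, node TEXT,
                            delay_ms REAL, dropped INTEGER)""")

        run_id = 1
        rows = []
        for step in range(10000):                       # stand-in for a discrete-event loop
            rows.append((run_id, step * 0.001, "node-%d" % random.randint(0, 3),
                         random.expovariate(1 / 5.0), int(random.random() < 0.01)))
        conn.executemany("INSERT INTO packet_stats VALUES (?, ?, ?, ?, ?)", rows)
        conn.commit()

        mean_delay, = conn.execute(
            "SELECT AVG(delay_ms) FROM packet_stats WHERE run_id = ?", (run_id,)).fetchone()
        print("mean delay [ms]:", round(mean_delay, 2))
        conn.close()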

  2. Simulation and the Monte Carlo method

    CERN Document Server

    Rubinstein, Reuven Y

    2016-01-01

    Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition over more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...

  3. A Method for Functional Task Alignment Analysis of an Arthrocentesis Simulator.

    Science.gov (United States)

    Adams, Reid A; Gilbert, Gregory E; Buckley, Lisa A; Nino Fong, Rodolfo; Fuentealba, I Carmen; Little, Erika L

    2018-05-16

    During simulation-based education, simulators are subjected to procedures composed of a variety of tasks and processes. Simulators should functionally represent a patient in response to the physical actions of these tasks. The aim of this work was to describe a method for determining whether a simulator does or does not have sufficient functional task alignment (FTA) to be used in a simulation. Potential performance checklist items were gathered from published arthrocentesis guidelines and aggregated into a performance checklist using Lawshe's method. An expert panel used this performance checklist and an FTA analysis questionnaire to evaluate a simulator's ability to respond to the physical actions required by the performance checklist. Thirteen items, from a pool of 39, were included on the performance checklist. Experts had mixed reviews of the simulator's FTA and its suitability for use in simulation. Unexpectedly, some positive FTA was found for several tasks where the simulator lacked functionality. By developing a detailed list of specific tasks required to complete a clinical procedure, and surveying experts on the simulator's response to those actions, educators can gain insight into the simulator's clinical accuracy and suitability. The unexpected positive FTA ratings for tasks with functional deficits suggest that further revision of the survey method is required.
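
    Lawshe's method, used above to reduce the candidate checklist, boils down to the content validity ratio CVR = (n_e - N/2) / (N/2), where n_e is the number of panelists rating an item "essential" out of N. The items, vote counts, and the 0.62 cut-off (an often-quoted critical value for a 10-person panel) below are hypothetical illustrations, not data from the study.

        # Content validity ratio screening of candidate checklist items.
        def content_validity_ratio(n_essential, n_panelists):
            return (n_essential - n_panelists / 2) / (n_panelists / 2)

        # hypothetical ratings: item -> "essential" votes from a 10-person panel
        votes = {"identify landmarks": 10, "aspirate before injecting": 8, "warm the syringe": 3}
        for item, n_e in votes.items():
            cvr = content_validity_ratio(n_e, 10)
            print("%-28s CVR = %+.2f  %s" % (item, cvr, "keep" if cvr >= 0.62 else "drop"))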

  4. Research on Monte Carlo simulation method of industry CT system

    International Nuclear Information System (INIS)

    Li Junli; Zeng Zhi; Qui Rui; Wu Zhen; Li Chunyan

    2010-01-01

    There are a series of radiation physics problems in the design and production of an industry CT system (ICTS), including limit quality index analysis and the effects of scattering, detector efficiency and crosstalk on the system. Usually the Monte Carlo (MC) method is applied to resolve these problems. Most of the relevant events are of very low probability, so direct simulation is very difficult, and existing MC methods and programs cannot meet the needs. To resolve these difficulties, particle flux point auto-important sampling (PFPAIS) is given on the basis of auto-important sampling. Then, on the basis of PFPAIS, a dedicated ICTS simulation method, MCCT, is realized. Compared with existing MC methods, MCCT is shown to be able to simulate the ICTS more exactly and effectively. Furthermore, the effects of various kinds of disturbances on the ICTS are simulated and analyzed by MCCT. To some extent, MCCT can guide the research of the radiation physics problems in ICTS. (author)

  5. Simulation of ion behavior in an open three-dimensional Paul trap using a power series method

    Energy Technology Data Exchange (ETDEWEB)

    Herbane, Mustapha Said, E-mail: mherbane@hotmail.com [King Khalid University, Faculty of Science, Department of Physics, P.O. Box 9004, Abha (Saudi Arabia); Berriche, Hamid [King Khalid University, Faculty of Science, Department of Physics, P.O. Box 9004, Abha (Saudi Arabia); Laboratoire des Interfaces et Matériaux Avancés, Physics Department, College of Science, University of Monastir, 5019 Monastir (Tunisia); Abd El-hady, Alaa [King Khalid University, Faculty of Science, Department of Physics, P.O. Box 9004, Abha (Saudi Arabia); Department of Physics, Faculty of Science, Zagazig University, Zagazig 44519 (Egypt); Al Shahrani, Ghadah [King Khalid University, Faculty of Science, Department of Physics, P.O. Box 9004, Abha (Saudi Arabia); Ban, Gilles; Fléchard, Xavier; Liénard, Etienne [LPC CAEN-ENSICAEN, 6 Boulevard du Marechal Juin, 14050 Caen Cedex (France)

    2014-07-01

    Simulations of the dynamics of ions trapped in a Paul trap with terms in the potential up to order 10 have been carried out. The power series method is used to solve numerically the equations of motion of the ions. The stability diagram has been studied and buffer gas cooling has been implemented by a Monte Carlo method. Dipole excitation was also included. The method has been applied to an existing trap and has shown good agreement with the experimental results and with previous simulations using other methods. - Highlights: • Paul trap with potentials up to order 10. • Series solution of the ions' equations of motion. • Hard-sphere model for the simulation of the buffer gas cooling and simulation of the dipolar excitation.

  6. A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses

    Science.gov (United States)

    Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria

    2013-01-01

    Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is
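
    The corpus and the exact learning configuration are not given in this record; the sketch below (Python, with scikit-learn assumed available) only illustrates the surrogate idea: train a regressor on precomputed open-receptor curves and predict the full curve for unseen synapse parameters. The toy kinetic function, parameter ranges and noise level are invented stand-ins for the Monte Carlo corpus described in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 50)            # ms after neurotransmitter release

def toy_open_fraction(n_receptors, cleft_width):
    """Stand-in for a Monte Carlo result: fraction of open receptors vs. time."""
    rise, decay = 0.1 + 0.2 * cleft_width, 1.0 + 0.5 * cleft_width
    peak = n_receptors / (n_receptors + 50.0)
    curve = np.exp(-t / decay) - np.exp(-t / rise)
    return peak * curve / curve.max()

# Corpus of "simulations" covering a range of synapse parameters.
X = rng.uniform([20.0, 0.1], [200.0, 1.0], size=(400, 2))   # (receptors, width)
Y = np.array([toy_open_fraction(*row) for row in X])
Y += rng.normal(0.0, 0.01, Y.shape)                          # Monte Carlo noise

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, Y_tr)

pred = model.predict(X_te)
rmse = np.sqrt(np.mean((pred - Y_te) ** 2))
print(f"held-out RMSE of predicted open-receptor curves: {rmse:.4f}")
```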

  7. A machine learning method for the prediction of receptor activation in the simulation of synapses.

    Directory of Open Access Journals (Sweden)

    Jesus Montes

    Full Text Available Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of

  8. Simulation and Verification of Flow in Test Methods

    DEFF Research Database (Denmark)

    Thrane, Lars Nyholm; Szabo, Peter; Geiker, Mette Rica

    2005-01-01

    Simulations and experimental results of L-box and slump flow tests of a self-compacting mortar and a self-compacting concrete are compared. The simulations are based on a single-fluid approach and assume ideal Bingham behavior. It is possible to simulate the experimental results of both tests...

  9. Implementation of Simulation Based-Concept Attainment Method to Increase Interest Learning of Engineering Mechanics Topic

    Science.gov (United States)

    Sultan, A. Z.; Hamzah, N.; Rusdi, M.

    2018-01-01

    A simulation-based concept attainment method was implemented to increase students' interest in the Engineering Mechanics course during the second semester of academic year 2016/2017 in the Manufacturing Engineering Program, Department of Mechanical Engineering, PNUP. The implementation of this learning method increased students' interest in the lecture material, which was packaged as interactive simulation CDs together with teaching materials in printed and electronic form. It also produced a significant increase in student participation in presentations and discussions, as well as in the submission of individual assignments. With this method, average student participation reached 89%, compared with an average of only 76% before its introduction. Under the previous learning method, fewer than 5% of students achieved an A grade on the exam and more than 8% received a D grade; after implementation of the simulation-based concept attainment method, more than 30% achieved an A grade and fewer than 1% received a D grade.

  10. Simulation of granular and gas-solid flows using discrete element method

    Science.gov (United States)

    Boyalakuntla, Dhanunjay S.

    2003-10-01

    In recent years there has been increased research activity in the experimental and numerical study of gas-solid flows. Flows of this type have numerous applications in the energy, pharmaceuticals, and chemicals process industries. Typical applications include pulverized coal combustion, flow and heat transfer in bubbling and circulating fluidized beds, hopper and chute flows, pneumatic transport of pharmaceutical powders and pellets, and many more. The present work addresses the study of gas-solid flows using computational fluid dynamics (CFD) techniques and discrete element simulation methods (DES) combined. Many previous studies of coupled gas-solid flows have been performed assuming the solid phase as a continuum with averaged properties and treating the gas-solid flow as constituting of interpenetrating continua. Instead, in the present work, the gas phase flow is simulated using continuum theory and the solid phase flow is simulated using DES. DES treats each solid particle individually, thus accounting for its dynamics due to particle-particle interactions, particle-wall interactions as well as fluid drag and buoyancy. The present work involves developing efficient DES methods for dense granular flow and coupling this simulation to continuum simulations of the gas phase flow. Simulations have been performed to observe pure granular behavior in vibrating beds. Benchmark cases have been simulated and the results obtained match the published literature. The dimensionless acceleration amplitude and the bed height are the parameters governing bed behavior. Various interesting behaviors such as heaping, round and cusp surface standing waves, as well as kinks, have been observed for different values of the acceleration amplitude for a given bed height. Furthermore, binary granular mixtures (granular mixtures with two particle sizes) in a vibrated bed have also been studied. Gas-solid flow simulations have been performed to study fluidized beds. Benchmark 2D

  11. Titan's organic chemistry: Results of simulation experiments

    Science.gov (United States)

    Sagan, Carl; Thompson, W. Reid; Khare, Bishun N.

    1992-01-01

    Recent low-pressure continuous plasma discharge simulations of the auroral-electron-driven organic chemistry in Titan's mesosphere are reviewed. These simulations yielded results in good accord with Voyager observations of gas-phase organic species. Optical constants of the brownish solid tholins produced in similar experiments are in good accord with Voyager observations of the Titan haze. Titan tholins are rich in prebiotic organic constituents; the Huygens entry probe may shed light on some of the processes that led to the origin of life on Earth.

  12. An iterative method for hydrodynamic interactions in Brownian dynamics simulations of polymer dynamics

    Science.gov (United States)

    Miao, Linling; Young, Charles D.; Sing, Charles E.

    2017-07-01

    Brownian Dynamics (BD) simulations are a standard tool for understanding the dynamics of polymers in and out of equilibrium. Quantitative comparison can be made to rheological measurements of dilute polymer solutions, as well as direct visual observations of fluorescently labeled DNA. The primary computational challenge with BD is the expensive calculation of hydrodynamic interactions (HI), which are necessary to capture physically realistic dynamics. The full HI calculation, performed via a Cholesky decomposition every time step, scales with the length of the polymer as O(N^3). This limits the calculation to a few hundred simulated particles. A number of approximations in the literature can lower this scaling to O(N^2)-O(N^2.25), and explicit solvent methods scale as O(N); however, both incur a significant constant per-time-step computational cost. Despite this progress, there remains a need for new or alternative methods of calculating hydrodynamic interactions; large polymer chains or semidilute polymer solutions remain computationally expensive. In this paper, we introduce an alternative method for calculating approximate hydrodynamic interactions. Our method relies on an iterative scheme to establish self-consistency between a hydrodynamic matrix that is averaged over simulation and the hydrodynamic matrix used to run the simulation. Comparison to standard BD simulation and polymer theory results demonstrates that this method quantitatively captures both equilibrium and steady-state dynamics after only a few iterations. The use of an averaged hydrodynamic matrix allows the computationally expensive Brownian noise calculation to be performed infrequently, so that it is no longer the bottleneck of the simulation calculations. We also investigate limitations of this conformational averaging approach in ring polymers.

  13. Detailed balance method for chemical potential determination in Monte Carlo and molecular dynamics simulations

    International Nuclear Information System (INIS)

    Fay, P.J.; Ray, J.R.; Wolf, R.J.

    1994-01-01

    We present a new, nondestructive method for determining chemical potentials in Monte Carlo and molecular dynamics simulations. The method estimates a value for the chemical potential such that one has a balance between fictitious successful creation and destruction trials in which the Monte Carlo method is used to determine success or failure of the creation/destruction attempts; we thus call the method a detailed balance method. The method allows one to obtain estimates of the chemical potential for a given species in any closed ensemble simulation; the closed ensemble is paired with a "natural" open ensemble for the purpose of obtaining creation and destruction probabilities. We present results for the Lennard-Jones system and also for an embedded atom model of liquid palladium, and compare to previous results in the literature for these two systems. We are able to obtain an accurate estimate of the chemical potential for the Lennard-Jones system at higher densities than reported in the literature

  14. The adaptation method in the Monte Carlo simulation for computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyoung Gun; Yoon, Chang Yeon; Lee, Won Ho [Dept. of Bio-convergence Engineering, Korea University, Seoul (Korea, Republic of); Cho, Seung Ryong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Sung Ho [Dept. of Neurosurgery, Ulsan University Hospital, Ulsan (Korea, Republic of)

    2015-06-15

    The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics are using computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method to simulate radiological images with high-flux photons such as images produced by computed tomography (CT). To minimize the computation time without degrading imaging quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York NY, USA) and a human-like voxel phantom (KTMAN-2) (Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between iteration numbers and the simulations was estimated and the option to simulate scattered radiation was evaluated. The processing times of simulations using the adaptive method were at least 500 times faster than those using a conventional statistical process. In addition, compared with the conventional statistical method, the adaptive method provided images that were more similar to the experimental images, which proved that the adaptive method was highly effective for a simulation that requires a large number of iterations. Assuming no radiation scattering in the vicinity of the detectors minimized artifacts in the reconstructed image.

  15. A multiscale quantum mechanics/electromagnetics method for device simulations.

    Science.gov (United States)

    Yam, ChiYung; Meng, Lingyi; Zhang, Yu; Chen, GuanHua

    2015-04-07

    Multiscale modeling has become a popular tool for research applying to different areas including materials science, microelectronics, biology, chemistry, etc. In this tutorial review, we describe a newly developed multiscale computational method, incorporating quantum mechanics into electronic device modeling with the electromagnetic environment included through classical electrodynamics. In the quantum mechanics/electromagnetics (QM/EM) method, the regions of the system where active electron scattering processes take place are treated quantum mechanically, while the surroundings are described by Maxwell's equations and a semiclassical drift-diffusion model. The QM model and the EM model are solved, respectively, in different regions of the system in a self-consistent manner. Potential distributions and current densities at the interface between QM and EM regions are employed as the boundary conditions for the quantum mechanical and electromagnetic simulations, respectively. The method is illustrated in the simulation of several realistic systems. In the case of junctionless field-effect transistors, transfer characteristics are obtained and a good agreement between experiments and simulations is achieved. Optical properties of a tandem photovoltaic cell are studied and the simulations demonstrate that multiple QM regions are coupled through the classical EM model. Finally, the study of a carbon nanotube-based molecular device shows the accuracy and efficiency of the QM/EM method.

  16. A quality assurance program of simulators in radiotherapy. Pt. 2. Extent and results of long-term quality assurance tests on a therapy simulator

    International Nuclear Information System (INIS)

    Mueller-Sievers, K.; Kober, B.

    1997-01-01

    Background: Since 1990 we have followed a quality assurance program with periodical tests of the functional performance values of a 16-year-old simulator. Material and Method: For this purpose we adopted and modified German standards for quality assurance on linear accelerators and international standards elaborated for simulators (International Electrotechnical Commission). The tests are subdivided into daily visual checks (light field indication, optical distance indicator, isocentre-indicating devices, indication of gantry and collimator angles) and monthly and annual tests of relevant simulator parameters. Some important examples demonstrate the small variation of parameters over 6 years: position of the light field centre when rotating the collimator, diameter of the isocentre circle when rotating the gantry, accuracy of the isocentre indication device, and coincidence of light field and simulated radiation field. Results: As an important result we can state that, by these rigid periodic tests, it was possible to detect and compensate for deteriorations of simulator quality rapidly. Conclusions: Technical improvements and specific calling-in of maintenance personnel whenever felt appropriate provided performance characteristics of our old simulator which are required by international recommendations as a basis for modern radiotherapy. (orig.)

  17. Maximum Simulated Likelihood and Expectation-Maximization Methods to Estimate Random Coefficients Logit with Panel Data

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Guevara, Cristian

    2012-01-01

    The random coefficients logit model allows a more realistic representation of agents' behavior. However, the estimation of that model may involve simulation, which may become impractical with many random coefficients because of the curse of dimensionality. In this paper, the traditional maximum simulated likelihood (MSL) method is compared with the alternative expectation-maximization (EM) method, which does not require simulation. Previous literature had shown that, for cross-sectional data, MSL outperforms the EM method in the ability to recover the true parameters and estimation time... with cross-sectional or with panel data, and (d) EM systematically attained more efficient estimators than the MSL method. The results imply that if the purpose of the estimation is only to determine the ratios of the model parameters (e.g., the value of time), the EM method should be preferred. For all...

  18. Methods for simulation-based analysis of fluid-structure interaction.

    Energy Technology Data Exchange (ETDEWEB)

    Barone, Matthew Franklin; Payne, Jeffrey L.

    2005-10-01

    Methods for analysis of fluid-structure interaction using high fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations points to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extension of the methodology to allow construction of ROMs based on data generated from ALE simulations.
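
    As a minimal sketch of the POD step named above — extracting an orthonormal basis from a snapshot matrix via the singular value decomposition and truncating by energy content — the Python fragment below builds POD modes for a toy one-dimensional field. The snapshot data are synthetic, and the subsequent Galerkin projection of the governing equations onto the retained modes is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

# Snapshot matrix: each column is the flow field (a toy 1-D field) at one time.
x = np.linspace(0.0, 1.0, 200)
times = np.linspace(0.0, 2.0, 80)
snapshots = np.array([np.sin(2*np.pi*(x - 0.3*t)) * np.exp(-0.5*t)     # travelling wave
                      + 0.1*np.sin(6*np.pi*x)*np.cos(5*t)              # small harmonic
                      for t in times]).T
snapshots += 0.01 * rng.normal(size=snapshots.shape)                   # sensor noise

mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean_field, full_matrices=False)

energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1        # modes for 99.9% of the energy
print(f"POD basis size for 99.9% of snapshot energy: {r}")

# Reduced-order reconstruction: project snapshots onto the leading r POD modes.
Phi = U[:, :r]
coeffs = Phi.T @ (snapshots - mean_field)
recon = mean_field + Phi @ coeffs
err = np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots)
print(f"relative reconstruction error with {r} modes: {err:.2e}")
```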

  19. A method to solve the aircraft magnetic field model basing on geomagnetic environment simulation

    International Nuclear Information System (INIS)

    Lin, Chunsheng; Zhou, Jian-jun; Yang, Zhen-yu

    2015-01-01

    In aeromagnetic surveying, it is difficult to solve the aircraft magnetic field model in flight for unmanned or disposable aircraft, so a ground-based model-solving method is proposed. The method simulates the geomagnetic environment in which the aircraft flies and creates background magnetic field samples equivalent to the magnetic field produced by the aircraft's maneuvering. The aircraft magnetic field model can then be solved from the collected magnetic field samples. The method for simulating the magnetic environment and the method for controlling the errors are presented as well. Finally, an experiment is carried out for verification. The results show that the model-solving precision and stability of the method are good, and that model parameters calculated with the method in one district can be used in districts worldwide. - Highlights: • A method to solve the aircraft magnetic field model on the ground is proposed. • The method solves the model by simulating the dynamic geomagnetic environment encountered in real flight. • The way to control the errors of the method is analyzed. • An experiment is carried out for verification

  20. Interface methods for hybrid Monte Carlo-diffusion radiation-transport simulations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.

    2006-01-01

    Discrete diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Monte Carlo simulations in diffusive media. An important aspect of DDMC is the treatment of interfaces between diffusive regions, where DDMC is used, and transport regions, where standard Monte Carlo is employed. Three previously developed methods exist for treating transport-diffusion interfaces: the Marshak interface method, based on the Marshak boundary condition, the asymptotic interface method, based on the asymptotic diffusion-limit boundary condition, and the Nth-collided source technique, a scheme that allows Monte Carlo particles to undergo several collisions in a diffusive region before DDMC is used. Numerical calculations have shown that each of these interface methods gives reasonable results as part of larger radiation-transport simulations. In this paper, we use both analytic and numerical examples to compare the ability of these three interface techniques to treat simpler, transport-diffusion interface problems outside of a more complex radiation-transport calculation. We find that the asymptotic interface method is accurate regardless of the angular distribution of Monte Carlo particles incident on the interface surface. In contrast, the Marshak boundary condition only produces correct solutions if the incident particles are isotropic. We also show that the Nth-collided source technique has the capacity to yield accurate results if spatial cells are optically small and Monte Carlo particles are allowed to undergo many collisions within a diffusive region before DDMC is employed. These requirements make the Nth-collided source technique impractical for realistic radiation-transport calculations

  1. Evaluation of element migration from food plastic packagings into simulated solutions using radiometric method

    International Nuclear Information System (INIS)

    Soares, Eufemia Paez; Saiki, Mitiko; Wiebeck, Helio

    2005-01-01

    In the present study a radiometric method was established to determine the migration of elements from food plastic packagings to a simulated acetic acid solution. This radiometric method consisted of irradiating plastic samples with neutrons at the IEA-R1 nuclear reactor for a period of 16 hours under a neutron flux of 10^12 n cm^-2 s^-1, and then exposing them for element migration into a simulant solution. The radioactivity of the activated elements transferred to the solutions was measured to evaluate the migration. The experimental conditions were an exposure time of 10 days at 40 deg C, with a 3% acetic acid solution used as the simulant, according to the procedure established by the National Agency of Sanitary Monitoring (ANVISA). The migration study was applied to plastic samples from soft drink and juice packagings. The results obtained indicated the migration of the elements Co, Cr and Sb. The advantages of this methodology were that there was no need to analyse simulant blanks, as well as the use of high-purity simulant solutions. Besides, the method allows the migration of the elements into the food content itself, instead of into a simulant solution, to be evaluated. The detection limits indicated the high sensitivity of the radiometric method. (author)

  2. Simulation Research on Vehicle Active Suspension Controller Based on G1 Method

    Science.gov (United States)

    Li, Gen; Li, Hang; Zhang, Shuaiyang; Luo, Qiuhui

    2017-09-01

    Based on the order relation analysis method (G1 method), an optimal linear controller for a vehicle active suspension is designed. The active and passive suspension of a single-wheel vehicle is modeled and the system input signal model is determined. The state-space equation of the system motion is then established from the vehicle dynamics, and the optimal linear controller design is completed using optimal control theory. The weighting coefficients of the performance index are determined by the order relation analysis method. Finally, the model is simulated in Simulink. The simulation results show that, for given road conditions, the optimal weights determined by the G1 method lead to optimized vehicle body acceleration, suspension stroke and tire displacement, improving the comprehensive performance of the vehicle while keeping the active control within its requirements.
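
    A hedged sketch of the optimal-linear-controller step is given below: an LQR state-feedback gain for a linear quarter-car-style suspension model, computed in Python with SciPy's continuous-time algebraic Riccati solver. The masses, stiffnesses and the Q/R weights are assumptions standing in for the G1-derived weighting coefficients; the paper's actual model parameters are not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Quarter-car parameters (illustrative values, not from the paper).
ms, mu = 320.0, 40.0                          # sprung / unsprung mass [kg]
ks, kt, cs = 18_000.0, 200_000.0, 1_000.0     # suspension, tire stiffness, damping

# States: [suspension stroke, sprung-mass velocity, tire deflection, unsprung velocity]
A = np.array([[0.0,     1.0,     0.0,    -1.0],
              [-ks/ms, -cs/ms,   0.0,     cs/ms],
              [0.0,     0.0,     0.0,     1.0],
              [ks/mu,   cs/mu,  -kt/mu,  -cs/mu]])
B = np.array([[0.0], [1.0/ms], [0.0], [-1.0/mu]])

# State and control weights; in the paper these come from the G1 (order-relation)
# ranking of ride comfort, suspension stroke and tire load -- here simply assumed.
Q = np.diag([1.0e4, 1.0e2, 1.0e5, 1.0])
R = np.array([[1.0e-4]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)               # optimal state feedback u = -K x

print("LQR gain K:", np.round(K, 1))
print("closed-loop eigenvalues:", np.round(np.linalg.eigvals(A - B @ K), 2))
```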

  3. Simulation of bubble motion under gravity by lattice Boltzmann method

    International Nuclear Information System (INIS)

    Takada, Naoki; Misawa, Masaki; Tomiyama, Akio; Hosokawa, Shigeo

    2001-01-01

    We describe numerical simulation results for bubble motion under gravity obtained with the lattice Boltzmann method (LBM), which assumes that a fluid consists of mesoscopic fluid particles repeating collision and translation, and that a multiphase interface is reproduced in a self-organizing way by repulsive interaction between different kinds of particles. The purposes of this study are to examine the applicability of the LBM to the numerical analysis of bubble motions, and to develop a three-dimensional version of the binary fluid model that introduces a free energy function. We included the buoyancy terms due to the density difference in the lattice Boltzmann equations, and simulated single- and two-bubble motions, setting flow conditions according to the Eötvös and Morton numbers. The two-dimensional results by LBM agree with those by the Volume of Fluid method based on the Navier-Stokes equations. The three-dimensional model possesses a surface tension satisfying Laplace's law, and reproduces the motion of a single bubble and the two-bubble interaction of approach and coalescence in a circular tube. These results prove that the buoyancy terms and the 3D model proposed here are suitable, and that the LBM is useful for the numerical analysis of bubble motion under gravity. (author)
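
    The binary-fluid free-energy model developed in the paper is not reproduced here; as a minimal illustration of the stream-and-collide cycle on which any lattice Boltzmann scheme is built, the sketch below runs a single-phase D2Q9 BGK simulation of a decaying shear wave and compares the decay with the viscous prediction. Grid size, relaxation time and amplitude are arbitrary choices.

```python
import numpy as np

# D2Q9 lattice: discrete velocities (cx, cy) and weights.
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

NX = NY = 64
tau = 0.8                        # BGK relaxation time
nu = (tau - 0.5) / 3.0           # resulting kinematic viscosity (lattice units)
u0, k = 1e-3, 2*np.pi/NY         # shear-wave amplitude and wavenumber

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

y = np.arange(NY)[:, None] * np.ones((1, NX))
rho = np.ones((NY, NX))
ux, uy = u0*np.sin(k*y), np.zeros((NY, NX))
f = equilibrium(rho, ux, uy)

steps = 400
for _ in range(steps):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau           # collision
    for i in range(9):                                    # streaming (periodic)
        f[i] = np.roll(f[i], shift=(c[i, 1], c[i, 0]), axis=(0, 1))

ux = (f * c[:, 0, None, None]).sum(axis=0) / f.sum(axis=0)
print(f"measured amplitude {ux.max():.3e}")
print(f"viscous theory     {u0*np.exp(-nu*k*k*steps):.3e}")
```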

  4. Benchmarking HRA methods against different NPP simulator data

    International Nuclear Information System (INIS)

    Petkov, Gueorgui; Filipov, Kalin; Velev, Vladimir; Grigorov, Alexander; Popov, Dimiter; Lazarov, Lazar; Stoichev, Kosta

    2008-01-01

    The paper presents both international and Bulgarian experience in assessing HRA methods and the underlying models and approaches for their validation and verification by benchmarking HRA methods against different NPP simulator data. The organization, status, methodology and outlook of the studies are described.

  5. Face-based smoothed finite element method for real-time simulation of soft tissue

    Science.gov (United States)

    Mendizabal, Andrea; Bessard Duparc, Rémi; Bui, Huu Phuoc; Paulus, Christoph J.; Peterlik, Igor; Cotin, Stéphane

    2017-03-01

    In soft tissue surgery, a tumor and other anatomical structures are usually located using the preoperative CT or MR images. However, due to the deformation of the concerned tissues, this information suffers from inaccuracy when employed directly during the surgery. In order to account for these deformations in the planning process, the use of a bio-mechanical model of the tissues is needed. Such models are often designed using the finite element method (FEM), which is, however, computationally expensive, in particular when a high accuracy of the simulation is required. In our work, we propose to use a smoothed finite element method (S-FEM) in the context of modeling soft tissue deformation. This numerical technique has been introduced recently to overcome the overly stiff behavior of the standard FEM and to improve the solution accuracy and the convergence rate in solid mechanics problems. In this paper, a face-based smoothed finite element method (FS-FEM) using 4-node tetrahedral elements is presented. We show that in some cases, the method allows for reducing the number of degrees of freedom, while preserving the accuracy of the discretization. The method is evaluated on a simulation of a cantilever beam loaded at the free end and on a simulation of a 3D cube under traction and compression forces. Further, it is applied to the simulation of the brain shift and of the kidney's deformation. The results demonstrate that the method outperforms the standard FEM in a bending scenario and that it has similar accuracy to the standard FEM in the simulations of the brain shift and of the kidney's deformation.

  6. Adaptive and dynamic meshing methods for numerical simulations

    Science.gov (United States)

    Acikgoz, Nazmiye

    -hoc application of the simulated annealing technique, which improves the likelihood of removing poor elements from the grid. Moreover, a local implementation of the simulated annealing is proposed to reduce the computational cost. Many challenging multi-physics and multi-field problems that are unsteady in nature are characterized by moving boundaries and/or interfaces. When the boundary displacements are large, which typically occurs when implicit time marching procedures are used, degenerate elements are easily formed in the grid such that frequent remeshing is required. To deal with this problem, in the second part of this work, we propose a new r-adaptation methodology. The new technique is valid for both simplicial (e.g., triangular, tet) and non-simplicial (e.g., quadrilateral, hex) deforming grids that undergo large imposed displacements at their boundaries. A two- or three-dimensional grid is deformed using a network of linear springs composed of edge springs and a set of virtual springs. The virtual springs are constructed in such a way as to oppose element collapsing. This is accomplished by confining each vertex to its ball through springs that are attached to the vertex and its projection on the ball entities. The resulting linear problem is solved using a preconditioned conjugate gradient method. The new method is compared with the classical spring analogy technique in two- and three-dimensional examples, highlighting the performance improvements achieved by the new method. Meshes are an important part of numerical simulations. Depending on the geometry and flow conditions, the most suitable mesh for each particular problem is different. Meshes are usually generated by either using a suitable software package or solving a PDE. In both cases, engineering intuition plays a significant role in deciding where clusterings should take place. In addition, for unsteady problems, the gradients vary for each time step, which requires frequent remeshing during simulations

  7. Numerical simulation of jet breakup behavior by the lattice Boltzmann method

    International Nuclear Information System (INIS)

    Matsuo, Eiji; Koyama, Kazuya; Abe, Yutaka; Iwasawa, Yuzuru; Ebihara, Ken-ichi

    2015-01-01

    In order to understand the jet breakup behavior of the molten core material into coolant during a core disruptive accident (CDA) in a sodium-cooled fast reactor (SFR), we simulated the jet breakup due to the hydrodynamic interaction using the lattice Boltzmann method (LBM). The applicability of the LBM to the jet breakup simulation was validated by comparison with our experimental data. In addition, the influence of several dimensionless numbers such as the Weber number and the Froude number was examined using the LBM. As a result, we validated the applicability of the LBM to the jet breakup simulation, and found that the jet breakup length is independent of the Froude number and in good agreement with Epstein's correlation when the jet interface becomes unstable. (author)

  8. Medical Simulation Practices 2010 Survey Results

    Science.gov (United States)

    McCrindle, Jeffrey J.

    2011-01-01

    Medical Simulation Centers are an essential component of our learning infrastructure to prepare doctors and nurses for their careers. Unlike the military and aerospace simulation industry, very little has been published regarding the best practices currently in use within medical simulation centers. This survey attempts to provide insight into the current simulation practices at medical schools, hospitals, university nursing programs and community college nursing programs. Students within the MBA program at Saint Joseph's University conducted a survey of medical simulation practices during the summer 2010 semester. A total of 115 institutions responded to the survey. The survey results discuss the overall effectiveness of current simulation centers as well as the tools and techniques used to conduct the simulation activity

  9. Modelling Geomechanical Heterogeneity of Rock Masses Using Direct and Indirect Geostatistical Conditional Simulation Methods

    Science.gov (United States)

    Eivazy, Hesameddin; Esmaieli, Kamran; Jean, Raynald

    2017-12-01

    An accurate characterization and modelling of rock mass geomechanical heterogeneity can lead to more efficient mine planning and design. Using deterministic approaches and random field methods for modelling rock mass heterogeneity is known to be limited in simulating the spatial variation and spatial pattern of the geomechanical properties. Although the applications of geostatistical techniques have demonstrated improvements in modelling the heterogeneity of geomechanical properties, geostatistical estimation methods such as Kriging result in estimates of geomechanical variables that are not fully representative of field observations. This paper reports on the development of 3D models for spatial variability of rock mass geomechanical properties using geostatistical conditional simulation method based on sequential Gaussian simulation. A methodology to simulate the heterogeneity of rock mass quality based on the rock mass rating is proposed and applied to a large open-pit mine in Canada. Using geomechanical core logging data collected from the mine site, a direct and an indirect approach were used to model the spatial variability of rock mass quality. The results of the two modelling approaches were validated against collected field data. The study aims to quantify the risks of pit slope failure and provides a measure of uncertainties in spatial variability of rock mass properties in different areas of the pit.
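
    A hedged, one-dimensional sketch of sequential Gaussian simulation is shown below: nodes are visited along a random path and each receives a value drawn from the simple-kriging distribution conditioned on the data and on previously simulated nodes, assuming zero-mean normal scores and an exponential covariance. The conditioning values and covariance parameters are invented; a real rock-mass-quality workflow would add normal-score transformation, search neighbourhoods, back-transformation and 3D geometry.

```python
import numpy as np

rng = np.random.default_rng(3)

def cov(h, sill=1.0, a=15.0):
    """Exponential covariance model C(h) = sill * exp(-|h| / a)."""
    return sill * np.exp(-np.abs(h) / a)

# 1-D transect of nodes with a few conditioning data (normal-score values).
grid = np.arange(100, dtype=float)
known_x = np.array([5.0, 40.0, 80.0])
known_z = np.array([1.2, -0.5, 0.8])

x, z = list(known_x), list(known_z)
for node in rng.permutation(grid):                 # random path over the nodes
    if node in known_x:
        continue
    xs, zs = np.array(x), np.array(z)
    C = cov(xs[:, None] - xs[None, :]) + 1e-8*np.eye(len(xs))  # data covariance
    c0 = cov(xs - node)                                        # data-to-node covariance
    lam = np.linalg.solve(C, c0)                               # simple kriging weights
    mean = lam @ zs                                            # zero stationary mean
    var = max(cov(0.0) - lam @ c0, 0.0)
    z.append(mean + np.sqrt(var) * rng.normal())   # draw and add to conditioning set
    x.append(node)

print(f"simulated {len(z) - len(known_z)} nodes; "
      f"realization mean {np.mean(z):.2f}, variance {np.var(z):.2f}")
```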

  10. Electron-cloud simulation results for the PSR and SNS

    International Nuclear Information System (INIS)

    Pivi, M.; Furman, M.A.

    2002-01-01

    We present recent simulation results for the main features of the electron cloud in the storage ring of the Spallation Neutron Source (SNS) at Oak Ridge, and updated results for the Proton Storage Ring (PSR) at Los Alamos. In particular, a complete refined model for the secondary emission process, including the so-called true secondary, rediffused and backscattered electrons, has been included in the simulation code

  11. Comparison of the quasi-static method and the dynamic method for simulating fracture processes in concrete

    Science.gov (United States)

    Liu, J. X.; Deng, S. C.; Liang, N. G.

    2008-02-01

    Concrete is heterogeneous and usually described as a three-phase material, where matrix, aggregate and interface are distinguished. To take this heterogeneity into consideration, the Generalized Beam (GB) lattice model is adopted. The GB lattice model is much more computationally efficient than the beam lattice model. Numerical procedures of both quasi-static method and dynamic method are developed to simulate fracture processes in uniaxial tensile tests conducted on a concrete panel. Cases of different loading rates are compared with the quasi-static case. It is found that the inertia effect due to load increasing becomes less important and can be ignored with the loading rate decreasing, but the inertia effect due to unstable crack propagation remains considerable no matter how low the loading rate is. Therefore, an unrealistic result will be obtained if a fracture process including unstable cracking is simulated by the quasi-static procedure.

  12. Automated local line rolling forming and simplified deformation simulation method for complex curvature plate of ships

    Directory of Open Access Journals (Sweden)

    Y. Zhao

    2017-06-01

    Full Text Available Local line rolling forming is a common forming approach for the complex curvature plate of ships. However, a processing mode based on operator experience is still applied at present, because it is difficult to integrally determine relational data for the forming shape, processing path, and process parameters used to drive automation equipment. Numerical simulation is currently the major approach for generating such complex relational data. Therefore, a highly precise and effective numerical computation method becomes crucial in the development of the automated local line rolling forming system for producing complex curvature plates used in ships. In this study, a three-dimensional elastoplastic finite element method was first employed to perform numerical computations for local line rolling forming, and the corresponding deformation and strain distribution features were acquired. In addition, according to the characteristics of the strain distributions, a simplified deformation simulation method, based on obtaining the deformation by applying strains, was presented. Compared to the results of the three-dimensional elastoplastic finite element method, this simplified deformation simulation method was verified to provide high computational accuracy, and this could result in a substantial reduction in calculation time. Thus, the application of the simplified deformation simulation method was further explored in the case of multiple rolling loading paths. Moreover, it was also utilized to calculate the local line rolling forming for the typical complex curvature plate of ships. Research findings indicated that the simplified deformation simulation method was an effective tool for rapidly obtaining relationships between the forming shape, processing path, and process parameters.

  13. Identifying deterministic signals in simulated gravitational wave data: algorithmic complexity and the surrogate data method

    International Nuclear Information System (INIS)

    Zhao Yi; Small, Michael; Coward, David; Howell, Eric; Zhao Chunnong; Ju Li; Blair, David

    2006-01-01

    We describe the application of complexity estimation and the surrogate data method to identify deterministic dynamics in simulated gravitational wave (GW) data contaminated with white and coloured noises. The surrogate method uses algorithmic complexity as a discriminating statistic to decide if noisy data contain a statistically significant level of deterministic dynamics (the GW signal). The results illustrate that the complexity method is sensitive to a small amplitude simulated GW background (SNR down to 0.08 for white noise and 0.05 for coloured noise) and is also more robust than commonly used linear methods (autocorrelation or Fourier analysis)
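
    The sketch below illustrates only the surrogate-data machinery in Python: phase-randomized surrogates preserve the power spectrum of the data, and a discriminating statistic computed on the data and on each surrogate yields a rank that can be tested for significance. An LZ78 phrase count on the median-binarized series stands in for the algorithmic-complexity estimator used in the paper; the injected waveform and noise level are arbitrary, and whether a given signal is flagged depends on its strength and character.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1024
t = np.arange(n)

# Toy detection data: a weak deterministic waveform buried in white noise.
signal = 0.5 * np.sin(2*np.pi*t/37) * np.sin(2*np.pi*t/211)
data = signal + rng.normal(0.0, 1.0, n)

def lz78_phrases(x):
    """LZ78 phrase count of the median-binarized series (complexity proxy)."""
    bits = ''.join('1' if v > np.median(x) else '0' for v in x)
    phrases, phrase, count = set(), "", 0
    for b in bits:
        phrase += b
        if phrase not in phrases:
            phrases.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

def phase_randomized(x):
    """Surrogate with the same power spectrum but randomized Fourier phases."""
    X = np.fft.rfft(x)
    phases = np.exp(1j * rng.uniform(0.0, 2*np.pi, X.size))
    phases[0] = 1.0                          # keep the DC (and Nyquist) bins real
    if x.size % 2 == 0:
        phases[-1] = 1.0
    return np.fft.irfft(X * phases, x.size)

c_data = lz78_phrases(data)
c_surr = np.array([lz78_phrases(phase_randomized(data)) for _ in range(99)])
rank = int(np.sum(c_surr <= c_data))
print(f"complexity of data {c_data}; surrogates span "
      f"[{c_surr.min()}, {c_surr.max()}]; rank {rank}/99")
```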

  14. Comparison the Results of Numerical Simulation And Experimental Results for Amirkabir Plasma Focus Facility

    Science.gov (United States)

    Goudarzi, Shervin; Amrollahi, R.; Niknam Sharak, M.

    2014-06-01

    In this paper the results of the numerical simulation for the Amirkabir Mather-type Plasma Focus Facility (16 kV, 36μF and 115 nH) in several experiments with Argon as the working gas at different working conditions (different discharge voltages and gas pressures) have been presented and compared with the experimental results. Two different models have been used for the simulation: the five-phase model of Lee and the lumped parameter model of Gonzalez. It is seen that the results (optimum pressures and current signals) of the Lee model at different working conditions show better agreement with the experimental values than those of the lumped parameter model.

  15. Comparison the results of numerical simulation and experimental results for Amirkabir plasma focus facility

    International Nuclear Information System (INIS)

    Goudarzi, Shervin; Amrollahi, R; Sharak, M Niknam

    2014-01-01

    In this paper the results of the numerical simulation for the Amirkabir Mather-type Plasma Focus Facility (16 kV, 36μF and 115 nH) in several experiments with Argon as the working gas at different working conditions (different discharge voltages and gas pressures) have been presented and compared with the experimental results. Two different models have been used for the simulation: the five-phase model of Lee and the lumped parameter model of Gonzalez. It is seen that the results (optimum pressures and current signals) of the Lee model at different working conditions show better agreement with the experimental values than those of the lumped parameter model.

  16. Evaluation of FTIR-based analytical methods for the analysis of simulated wastes

    International Nuclear Information System (INIS)

    Rebagay, T.V.; Cash, R.J.; Dodd, D.A.; Lockrem, L.L.; Meacham, J.E.; Winkelman, W.D.

    1994-01-01

    Three FTIR-based analytical methods that have potential to characterize simulated waste tank materials have been evaluated. These include: (1) fiber optics, (2) modular transfer optic using light guides equipped with non-contact sampling peripherals, and (3) photoacoustic spectroscopy. Pertinent instrumentation and experimental procedures for each method are described. The results show that the near-infrared (NIR) region of the infrared spectrum is the region of choice for the measurement of moisture in waste simulants. Differentiation of the NIR spectrum, as a preprocessing step, will improve the analytical result. Preliminary data indicate that prominent combination bands of water and the first overtone band of the ferrocyanide stretching vibration may be utilized to measure water and ferrocyanide species simultaneously. Both near-infrared and mid-infrared spectra must be collected, however, to measure ferrocyanide species unambiguously and accurately. For ease of sample handling and the potential for field or waste tank deployment, the FTIR-Fiber Optic method is preferred over the other two methods. Modular transfer optic using light guides and photoacoustic spectroscopy may be used as backup systems and for the validation of the fiber optic data
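
    As an illustration of the derivative preprocessing recommended above, the Python sketch below applies a Savitzky-Golay first derivative to toy NIR spectra so that the additive baseline drops out, and then fits a one-variable moisture calibration. The band position, baseline, noise level and moisture range are invented and do not represent the waste simulants studied.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(5)

# Toy NIR spectra: a water combination band near 1940 nm on a sloping baseline,
# with band intensity proportional to moisture content.
wavelength = np.linspace(1600.0, 2200.0, 601)            # nm, 1 nm spacing

def spectrum(moisture):
    baseline = 0.2 + 4e-4 * (wavelength - 1600.0)        # offset / scattering slope
    band = moisture * np.exp(-0.5*((wavelength - 1940.0)/25.0)**2)
    return baseline + band + rng.normal(0.0, 0.002, wavelength.size)

moistures = np.linspace(0.05, 0.5, 10)
spectra = np.array([spectrum(m) for m in moistures])

# First-derivative preprocessing (Savitzky-Golay) removes the additive baseline.
deriv = savgol_filter(spectra, window_length=21, polyorder=2, deriv=1, axis=1)

# Simple univariate calibration on the derivative amplitude at the band flank.
feature = deriv[:, np.argmin(deriv[0])]                  # steepest negative slope
slope, intercept = np.polyfit(feature, moistures, 1)
pred = slope * feature + intercept
r2 = 1.0 - np.var(pred - moistures) / np.var(moistures)
print(f"calibration R^2 on the derivative feature: {r2:.4f}")
```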

  17. Statistical methods for elimination of guarantee-time bias in cohort studies: a simulation study

    Directory of Open Access Journals (Sweden)

    In Sung Cho

    2017-08-01

    Full Text Available Abstract Background Aspirin has been considered to be beneficial in preventing cardiovascular diseases and cancer. Several pharmaco-epidemiology cohort studies have shown protective effects of aspirin on diseases using various statistical methods, with the Cox regression model being the most commonly used approach. However, there are some inherent limitations to the conventional Cox regression approach such as guarantee-time bias, resulting in an overestimation of the drug effect. To overcome such limitations, alternative approaches, such as the time-dependent Cox model and landmark methods, have been proposed. This study aimed to compare the performance of three methods: Cox regression, the time-dependent Cox model and the landmark method with different landmark times, in order to address the problem of guarantee-time bias. Methods Through statistical modeling and simulation studies, the performance of the above three methods was assessed in terms of type I error, bias, power, and mean squared error (MSE). In addition, the three statistical approaches were applied to a real data example from the Korean National Health Insurance Database. The effect of cumulative rosiglitazone dose on the risk of hepatocellular carcinoma was used as an example for illustration. Results In the simulated data, time-dependent Cox regression outperformed the landmark method in terms of bias and mean squared error, but the type I error rates were similar. The results from the real-data example showed the same patterns as the simulation findings. Conclusions While both the time-dependent Cox regression model and landmark analysis are useful in resolving the problem of guarantee-time bias, time-dependent Cox regression is the most appropriate method for analyzing cumulative dose effects in pharmaco-epidemiological studies.
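
    A compact numerical illustration of guarantee-time bias and of the landmark correction is given below (NumPy only, synthetic data in which the true rate ratio is 1). It is not the Cox machinery used in the paper: crude incidence-rate ratios stand in for hazard ratios, which is enough to show the naive "ever-user" analysis manufacturing a spurious protective effect that the landmark analysis removes.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200_000
followup = 10.0                           # years of follow-up

# Event times are exponential and completely unrelated to the drug (true HR = 1).
event = rng.exponential(5.0, n)
# Half of the subjects are scheduled to start the drug at a random time.
starts_drug = rng.random(n) < 0.5
t_start = np.where(starts_drug, rng.uniform(0.0, followup, n), np.inf)
# A subject is only observed as a user if still event-free at the start time.
ever_user = starts_drug & (event > t_start)

def rate(mask, origin=0.0):
    """Events per person-year in `mask`, counting time from `origin`."""
    time = np.minimum(event[mask], followup) - origin
    return np.sum(event[mask] <= followup) / np.sum(time)

# Naive "ever-user" analysis: users are credited with the event-free time that
# they accumulated before actually starting the drug (guarantee time).
print(f"naive rate ratio    {rate(ever_user) / rate(~ever_user):.2f}")

# Landmark analysis at L years: exposure is defined by status at L and only
# subjects still event-free at L contribute, with person-time counted from L.
L = 2.0
at_risk = event > L
exposed = at_risk & (t_start <= L)
unexposed = at_risk & (t_start > L)
print(f"landmark rate ratio {rate(exposed, L) / rate(unexposed, L):.2f}")
```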

  18. A new algorithm for the simulation of the Boltzmann equation using the direct simulation monte-carlo method

    International Nuclear Information System (INIS)

    Ganjaei, A. A.; Nourazar, S. S.

    2009-01-01

    A new algorithm, the modified direct simulation Monte Carlo (MDSMC) method, is developed for the simulation of the Couette-Taylor gas flow problem. A Taylor series expansion is used to obtain the modified equation of the first-order time discretization of the collision equation, and the new algorithm, MDSMC, is implemented to simulate the collision term of the Boltzmann equation. In the new algorithm there exists an extra term which takes into account the effect of second-order collisions. This extra term has the effect of enhancing the appearance of the first Taylor instabilities of the vortex streamlines. The new algorithm also contains a second-order term in the time step in the probabilistic coefficients, which yields higher simulation accuracy than the previous DSMC algorithm. The first Taylor instabilities of the vortex streamlines at different ratios of ω/ν (experimental data of Taylor) appeared at a smaller time step with the MDSMC algorithm than with the DSMC algorithm. The torque developed on the stationary cylinder computed with the MDSMC algorithm shows better agreement with the experimental data of Kuhlthau than that computed with the DSMC algorithm

  19. Methods and models for accelerating dynamic simulation of fluid power circuits

    Energy Technology Data Exchange (ETDEWEB)

    Aaman, R.

    2011-07-01

    The objective of this dissertation is to improve the dynamic simulation of fluid power circuits. A fluid power circuit is a typical way to implement power transmission in mobile working machines, e.g. cranes, excavators etc. Dynamic simulation is an essential tool in developing controllability and energy-efficient solutions for mobile machines. Efficient dynamic simulation is the basic requirement for the real-time simulation. In the real-time simulation of fluid power circuits there exist numerical problems due to the software and methods used for modelling and integration. A simulation model of a fluid power circuit is typically created using differential and algebraic equations. Efficient numerical methods are required since differential equations must be solved in real time. Unfortunately, simulation software packages offer only a limited selection of numerical solvers. Numerical problems cause noise to the results, which in many cases leads the simulation run to fail. Mathematically the fluid power circuit models are stiff systems of ordinary differential equations. Numerical solution of the stiff systems can be improved by two alternative approaches. The first is to develop numerical solvers suitable for solving stiff systems. The second is to decrease the model stiffness itself by introducing models and algorithms that either decrease the highest eigenvalues or neglect them by introducing steady-state solutions of the stiff parts of the models. The thesis proposes novel methods using the latter approach. The study aims to develop practical methods usable in dynamic simulation of fluid power circuits using explicit fixed-step integration algorithms. In this thesis, two mechanisms which make the system stiff are studied. These are the pressure drop approaching zero in the turbulent orifice model and the volume approaching zero in the equation of pressure build-up. These are the critical areas to which alternative methods for modelling and numerical simulation
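
    One of the stiffness mechanisms named above, the turbulent orifice at vanishing pressure drop, can be illustrated with a short sketch: the square-root orifice law has an unbounded slope dQ/dp at zero pressure drop, and a common remedy is to blend in a polynomial below a transition pressure so that the slope stays finite. The parameter values and the cubic blend are illustrative assumptions, not the dissertation's specific models.

```python
import numpy as np

rho, Cd, A = 870.0, 0.62, 1e-5        # oil density [kg/m^3], discharge coeff., area [m^2]
K = Cd * A * np.sqrt(2.0 / rho)       # so that Q = K * sign(dp) * sqrt(|dp|)

def q_turbulent(dp):
    """Classical turbulent orifice law: slope dQ/dp grows without bound at dp = 0."""
    return K * np.sign(dp) * np.sqrt(np.abs(dp))

def q_regularized(dp, p_tr=2.0e5):
    """Below |dp| < p_tr the square root is replaced by a cubic that matches the
    value and slope at +/- p_tr, so dQ/dp stays finite through zero."""
    dp = np.asarray(dp, dtype=float)
    x = dp / p_tr
    q_small = K * np.sqrt(p_tr) * (1.25 * x - 0.25 * x**3)
    return np.where(np.abs(dp) < p_tr, q_small, q_turbulent(dp))

dp = np.array([-5e5, -1e4, -1e2, 0.0, 1e2, 1e4, 5e5])
print("turbulent  :", q_turbulent(dp))
print("regularized:", q_regularized(dp))

eps = 1.0   # finite-difference slope near dp = 0 [Pa]
print("slope at 0, sqrt law   :", (q_turbulent(eps) - q_turbulent(-eps)) / (2*eps))
print("slope at 0, regularized:", (q_regularized(eps) - q_regularized(-eps)) / (2*eps))
```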

  20. Direct numerical simulation of circular-cap bubbles in low viscous liquids using counter diffusion lattice Boltzmann method

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, Seungyeob, E-mail: syryu@kaeri.re.kr [Korea Atomic Energy Research Institute (KAERI), 1045 Daeduk-daero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Kim, Youngin; Yoon, Juhyeon [Korea Atomic Energy Research Institute (KAERI), 1045 Daeduk-daero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Ko, Sungho, E-mail: sunghoko@cnu.ac.kr [Department of Mechanical Design Engineering, Chungnam National University, 220 Gung-dong, Yuseong-gu, Daejeon 305-764 (Korea, Republic of)

    2014-01-15

    Highlights: • We directly simulate circular-cap bubbles in low viscous liquids. • The counter diffusion multiphase lattice Boltzmann method is proposed. • The present method is validated through benchmark tests and experimental results. • The high-Reynolds-number bubbles can be simulated without any turbulence models. • The present method is feasible for the direct simulation of bubbly flows. -- Abstract: The counter diffusion lattice Boltzmann method (LBM) is used to directly simulate rising circular-cap bubbles in low viscous liquids. A counter diffusion model for single phase flows has been extended to multiphase flows, and the implicit formulation is converted into an explicit one for easy calculation. Bubbles at high Reynolds numbers ranging from O(10^2) to O(10^4) are simulated successfully without any turbulence models, which cannot be done for the existing LBM versions. The characteristics of the circular-cap bubbles are studied for a wide range of Morton numbers and compared with the previous literature. Calculated results agree with the theoretical and experimental data. Consequently, the wake phenomena of circular-cap bubbles and bubble induced turbulence are presented.

  1. Finite Element Methods for real-time Haptic Feedback of Soft-Tissue Models in Virtual Reality Simulators

    Science.gov (United States)

    Frank, Andreas O.; Twombly, I. Alexander; Barth, Timothy J.; Smith, Jeffrey D.; Dalton, Bonnie P. (Technical Monitor)

    2001-01-01

    We have applied the linear elastic finite element method to compute haptic force feedback and domain deformations of soft tissue models for use in virtual reality simulators. Our results show that, for virtual object models of high-resolution 3D data (>10,000 nodes), haptic real time computations (>500 Hz) are not currently possible using traditional methods. Current research efforts are focused in the following areas: 1) efficient implementation of fully adaptive multi-resolution methods and 2) multi-resolution methods with specialized basis functions to capture the singularity at the haptic interface (point loading). To achieve real time computations, we propose parallel processing of a Jacobi preconditioned conjugate gradient method applied to a reduced system of equations resulting from surface domain decomposition. This can effectively be achieved using reconfigurable computing systems such as field programmable gate arrays (FPGA), thereby providing a flexible solution that allows for new FPGA implementations as improved algorithms become available. The resulting soft tissue simulation system would meet NASA Virtual Glovebox requirements and, at the same time, provide a generalized simulation engine for any immersive environment application, such as biomedical/surgical procedures or interactive scientific applications.
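
    A minimal sketch of one solver ingredient proposed above, a Jacobi-preconditioned conjugate gradient iteration, is given below in Python. The random symmetric positive definite matrix merely stands in for the reduced system obtained from surface domain decomposition; no FPGA or parallel machinery is shown.

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-8, max_iter=500):
    """Conjugate gradients with a Jacobi (diagonal) preconditioner M = diag(A)."""
    M_inv = 1.0 / np.diag(A)
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for it in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it + 1
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# A toy stiffness-like SPD system standing in for the reduced surface problem.
rng = np.random.default_rng(7)
n = 500
G = rng.normal(size=(n, n))
A = G @ G.T + n * np.eye(n)
b = rng.normal(size=n)

x, iters = jacobi_pcg(A, b)
print(f"converged in {iters} iterations, residual {np.linalg.norm(b - A @ x):.2e}")
```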

  2. Numerical methods for the simulation of continuous sedimentation in ideal clarifier-thickener units

    Energy Technology Data Exchange (ETDEWEB)

    Buerger, R.; Karlsen, K.H.; Risebro, N.H.; Towers, J.D.

    2001-10-01

    We consider a model of continuous sedimentation. Under idealizing assumptions, the settling of the solid particles under the influence of gravity can be described by the initial value problem for a nonlinear hyperbolic partial differential equation with a flux function that depends discontinuously on height. The purpose of this contribution is to present and demonstrate two numerical methods for simulating continuous sedimentation: a front tracking method and a finite difference method. The basic building blocks in the front tracking method are the solutions of a finite number of certain Riemann problems and a procedure for tracking local collisions of shocks. The solutions of the Riemann problems are recalled herein and the front tracking algorithm is described. As an alternative to the front tracking method, a simple scalar finite difference algorithm is proposed. This method is based on discretizing the spatially varying flux parameters on a mesh that is staggered with respect to that of the conserved variable, resulting in a straightforward generalization of the well-known Engquist-Osher upwind finite difference method. The result is an easily implemented upwind shock capturing method. Numerical examples demonstrate that the front tracking and finite difference methods can be used as efficient and accurate simulation tools for continuous sedimentation. The numerical results for the finite difference method indicate that discontinuities in the local solids concentration are resolved sharply and agree with those produced by the front tracking method. The latter is free of numerical dissipation, which leads to sharply resolved concentration discontinuities, but is more complicated to implement than the former. Available mathematical results for the proposed numerical methods are also briefly reviewed. (author)
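
    For the finite difference branch described above, the sketch below applies the Engquist-Osher flux to a scalar conservation law — here with the Burgers flux as a stand-in for the sedimentation flux, and without the spatially discontinuous flux parameters of the clarifier-thickener model — and checks the computed shock position against the exact value.

```python
import numpy as np

def eo_flux_burgers(ul, ur):
    """Engquist-Osher numerical flux for the Burgers flux f(u) = u^2 / 2:
    F(a, b) = f(max(a, 0)) + f(min(b, 0))."""
    return 0.5 * np.maximum(ul, 0.0)**2 + 0.5 * np.minimum(ur, 0.0)**2

# Riemann data u_L = 1, u_R = 0: the exact solution is a shock moving at speed 1/2.
nx, L, T = 400, 4.0, 2.0
dx = L / nx
x = np.linspace(-L/2 + dx/2, L/2 - dx/2, nx)
u = np.where(x < 0.0, 1.0, 0.0)

dt = 0.4 * dx                         # CFL time step, max |f'(u)| = 1
t = 0.0
while t < T - 1e-12:
    ug = np.concatenate(([u[0]], u, [u[-1]]))     # ghost cells copy the end states
    F = eo_flux_burgers(ug[:-1], ug[1:])          # fluxes at the nx + 1 interfaces
    u -= dt / dx * (F[1:] - F[:-1])
    t += dt

shock_pos = x[np.argmin(np.abs(u - 0.5))]
print(f"numerical shock position {shock_pos:.3f}, exact {0.5 * T:.3f}")
```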

  3. A modular method to handle multiple time-dependent quantities in Monte Carlo simulations

    International Nuclear Information System (INIS)

    Shin, J; Faddegon, B A; Perl, J; Schümann, J; Paganetti, H

    2012-01-01

    A general method for handling time-dependent quantities in Monte Carlo simulations was developed to make such simulations more accessible to the medical community for a wide range of applications in radiotherapy, including fluence and dose calculation. To describe time-dependent changes in the most general way, we developed a grammar of functions that we call ‘Time Features’. When a simulation quantity, such as the position of a geometrical object, an angle, a magnetic field, a current, etc, takes its value from a Time Feature, that quantity varies over time. The operation of time-dependent simulation was separated into distinct parts: the Sequence samples time values either sequentially at equal increments or randomly from a uniform distribution (allowing quantities to vary continuously in time), and then each time-dependent quantity is calculated according to its Time Feature. Due to this modular structure, time-dependent simulations, even in the presence of multiple time-dependent quantities, can be efficiently performed in a single simulation with any given time resolution. This approach has been implemented in TOPAS (TOol for PArticle Simulation), designed to make Monte Carlo simulations with Geant4 more accessible to both clinical and research physicists. To demonstrate the method, three clinical situations were simulated: a variable water column used to verify constancy of the Bragg peak of the Crocker Lab eye treatment facility of the University of California, the double-scattering treatment mode of the passive beam scattering system at Massachusetts General Hospital (MGH), where a spinning range modulator wheel accompanied by beam current modulation produces a spread-out Bragg peak, and the scanning mode at MGH, where time-dependent pulse shape, energy distribution and magnetic fields control Bragg peak positions. Results confirm the clinical applicability of the method. (paper)
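
    A minimal sketch of the modular idea — a sequence that samples time values and quantities that evaluate their own "Time Feature" at each sampled time — is given below; the class and function names are illustrative only and are not the TOPAS API.

    ```python
    import random

    class TimeFeature:
        """A time-dependent quantity: value(t) returns its value at time t."""
        def __init__(self, func):
            self.func = func
        def value(self, t):
            return self.func(t)

    def sequence(t_end, n, mode="sequential"):
        """Sample time values at equal increments or uniformly at random."""
        if mode == "sequential":
            return [i * t_end / n for i in range(n)]
        return [random.uniform(0.0, t_end) for _ in range(n)]

    # Two hypothetical time-dependent quantities: a modulator wheel angle and a beam current
    wheel_angle = TimeFeature(lambda t: (360.0 * t / 0.1) % 360.0)
    beam_current = TimeFeature(lambda t: 1.0 + 0.2 * (t % 0.1) / 0.1)

    for t in sequence(t_end=0.1, n=5, mode="random"):
        # each history would be generated with every quantity evaluated at the sampled time
        print(f"t = {t:.4f} s  angle = {wheel_angle.value(t):7.2f} deg  current = {beam_current.value(t):.3f}")
    ```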

  4. Comparing Simulations and Observations of Galaxy Evolution: Methods for Constraining the Nature of Stellar Feedback

    Science.gov (United States)

    Hummels, Cameron

    Computational hydrodynamical simulations are a very useful tool for understanding how galaxies form and evolve over cosmological timescales not easily revealed through observations. However, they are only useful if they reproduce the sorts of galaxies that we see in the real universe. One of the ways in which simulations of this sort tend to fail is in the prescription of stellar feedback, the process by which nascent stars return material and energy to their immediate environments. Careful treatment of this interaction in subgrid models, so-called because they operate on scales below the resolution of the simulation, is crucial for the development of realistic galaxy models. Equally important is developing effective methods for comparing simulation data against observations to ensure galaxy models which mimic reality and inform us about natural phenomena. This thesis examines the formation and evolution of galaxies and the observable characteristics of the resulting systems. We make extensive use of cosmological hydrodynamical simulations in order to simulate and interpret the evolution of massive spiral galaxies like our own Milky Way. First, we create a method for producing synthetic photometric images of grid-based hydrodynamical models for use in a direct comparison against observations in a variety of filter bands. We apply this method to a simulation of a cluster of galaxies to investigate the nature of the red-sequence/blue-cloud dichotomy in the galaxy color-magnitude diagram. Second, we implement several subgrid models governing the complex behavior of gas and stars on small scales in our galaxy models. Several numerical simulations are conducted with similar initial conditions, where we systematically vary the subgrid models, afterward assessing their efficacy through comparisons of their internal kinematics with observed systems. Third, we generate an additional method to compare observations with simulations, focusing on the tenuous circumgalactic medium.

  5. System dynamic simulation: A new method in social impact assessment (SIA)

    International Nuclear Information System (INIS)

    Karami, Shobeir; Karami, Ezatollah; Buys, Laurie; Drogemuller, Robin

    2017-01-01

    Many complex social questions are difficult to address adequately with conventional methods and techniques, due to their complicated dynamics and hard-to-quantify social processes. Despite these difficulties, researchers and practitioners have attempted to use conventional methods not only in evaluative modes but also in predictive modes to inform decision making. The effectiveness of SIAs would be increased if they were used to support the project design processes. This requires deliberate use of lessons from retrospective assessments to inform predictive assessments. Social simulations may be a useful tool for developing a predictive SIA method. There have been limited attempts to develop computer simulations that allow social impacts to be explored and understood before implementing development projects. In light of this argument, this paper aims to introduce system dynamic (SD) simulation as a new predictive SIA method in large development projects. We propose the potential value of the SD approach to simulate social impacts of development projects. We use data from the SIA of the Gareh-Bygone floodwater spreading project to illustrate the potential of SD simulation in SIA. It was concluded that, in comparison to traditional SIA methods, SD simulation can integrate quantitative and qualitative inputs from different sources and methods and provides a more effective and dynamic assessment of social impacts for development projects. We recommend future research to investigate the full potential of SD in SIA by comparing different situations and scenarios.

  6. System dynamic simulation: A new method in social impact assessment (SIA)

    Energy Technology Data Exchange (ETDEWEB)

    Karami, Shobeir, E-mail: shobeirkarami@gmail.com [Agricultural Extension and Education, Shiraz University (Iran, Islamic Republic of); Karami, Ezatollah, E-mail: ekarami@shirazu.ac.ir [Agricultural Extension and Education, Shiraz University (Iran, Islamic Republic of); Buys, Laurie, E-mail: l.buys@qut.edu.au [Creative Industries Faculty, School of Design, Queensland University of Technology (Australia); Drogemuller, Robin, E-mail: robin.drogemuller@qut.edu.au [Creative Industries Faculty, School of Design, Queensland University of Technology (Australia)

    2017-01-15

    Many complex social questions are difficult to address adequately with conventional methods and techniques, due to their complicated dynamics and hard-to-quantify social processes. Despite these difficulties, researchers and practitioners have attempted to use conventional methods not only in evaluative modes but also in predictive modes to inform decision making. The effectiveness of SIAs would be increased if they were used to support the project design processes. This requires deliberate use of lessons from retrospective assessments to inform predictive assessments. Social simulations may be a useful tool for developing a predictive SIA method. There have been limited attempts to develop computer simulations that allow social impacts to be explored and understood before implementing development projects. In light of this argument, this paper aims to introduce system dynamic (SD) simulation as a new predictive SIA method in large development projects. We propose the potential value of the SD approach to simulate social impacts of development projects. We use data from the SIA of the Gareh-Bygone floodwater spreading project to illustrate the potential of SD simulation in SIA. It was concluded that, in comparison to traditional SIA methods, SD simulation can integrate quantitative and qualitative inputs from different sources and methods and provides a more effective and dynamic assessment of social impacts for development projects. We recommend future research to investigate the full potential of SD in SIA by comparing different situations and scenarios.

  7. Simulation of regimes of convection and plume dynamics by the thermal Lattice Boltzmann Method

    Science.gov (United States)

    Mora, Peter; Yuen, David A.

    2018-02-01

    We present 2D simulations using the Lattice Boltzmann Method (LBM) of a fluid in a rectangular box being heated from below, and cooled from above. We observe plumes, hot narrow upwellings from the base, and down-going cold chutes from the top. We have varied both the Rayleigh numbers and the Prandtl numbers respectively from Ra = 1000 to Ra = 10^10, and Pr = 1 through Pr = 5 × 10^4, leading to Rayleigh-Bénard convection cells at low Rayleigh numbers through to vigorous convection and unstable plumes with pronounced vortices and eddies at high Rayleigh numbers. We conduct simulations with high Prandtl numbers up to Pr = 50,000 to simulate in the inertial regime. We find for cases when Pr ⩾ 100 that we obtain a series of narrow plumes of upwelling fluid with mushroom heads and chutes of downwelling fluid. We also present simulations at a Prandtl number of 0.7 for Rayleigh numbers varying from Ra = 10^4 through Ra = 10^7.5. We demonstrate that the Nusselt number follows power law scaling of the form Nu ∼ Ra^γ where γ = 0.279 ± 0.002, which is consistent with published results of γ = 0.281 in the literature. These results show that the LBM is capable of reproducing results obtained with classical macroscopic methods such as spectral methods, and demonstrate the great potential of the LBM for studying thermal convection and plume dynamics relevant to geodynamics.
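
    The reported scaling can be recovered from a table of (Ra, Nu) pairs by a least-squares fit in log-log space; the sketch below uses synthetic data with a built-in exponent of 0.28 purely to show the fitting step, not the paper's simulation output.

    ```python
    import numpy as np

    # Hypothetical (Ra, Nu) pairs of the kind produced by a convection simulation campaign
    Ra = np.array([1e4, 1e5, 1e6, 1e7, 10 ** 7.5])
    Nu = 0.16 * Ra ** 0.28 * (1.0 + np.random.default_rng(0).normal(0.0, 0.02, Ra.size))

    # Fit Nu ~ Ra^gamma by linear regression on log10(Nu) versus log10(Ra)
    gamma, log_prefactor = np.polyfit(np.log10(Ra), np.log10(Nu), 1)
    print(f"fitted exponent gamma = {gamma:.3f}, prefactor = {10 ** log_prefactor:.3f}")
    ```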

  8. Long-time atomistic simulations with the Parallel Replica Dynamics method

    Science.gov (United States)

    Perez, Danny

    Molecular Dynamics (MD) -- the numerical integration of atomistic equations of motion -- is a workhorse of computational materials science. Indeed, MD can in principle be used to obtain any thermodynamic or kinetic quantity, without introducing any approximation or assumptions beyond the adequacy of the interaction potential. It is therefore an extremely powerful and flexible tool to study materials with atomistic spatio-temporal resolution. These enviable qualities however come at a steep computational price, hence limiting the system sizes and simulation times that can be achieved in practice. While the size limitation can be efficiently addressed with massively parallel implementations of MD based on spatial decomposition strategies, allowing for the simulation of trillions of atoms, the same approach usually cannot extend the timescales much beyond microseconds. In this article, we discuss an alternative parallel-in-time approach, the Parallel Replica Dynamics (ParRep) method, that aims at addressing the timescale limitation of MD for systems that evolve through rare state-to-state transitions. We review the formal underpinnings of the method and demonstrate that it can provide arbitrarily accurate results for any definition of the states. When an adequate definition of the states is available, ParRep can simulate trajectories with a parallel speedup approaching the number of replicas used. We demonstrate the usefulness of ParRep by presenting different examples of materials simulations where access to long timescales was essential to access the physical regime of interest and discuss practical considerations that must be addressed to carry out these simulations. Work supported by the United States Department of Energy (U.S. DOE), Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division.

  9. An introduction to computer simulation methods applications to physical systems

    CERN Document Server

    Gould, Harvey; Christian, Wolfgang

    2007-01-01

    Now in its third edition, this book teaches physical concepts using computer simulations. The text incorporates object-oriented programming techniques and encourages readers to develop good programming habits in the context of doing physics. Designed for readers at all levels, An Introduction to Computer Simulation Methods uses Java, currently the most popular programming language. Introduction, Tools for Doing Simulations, Simulating Particle Motion, Oscillatory Systems, Few-Body Problems: The Motion of the Planets, The Chaotic Motion of Dynamical Systems, Random Processes, The Dynamics of Many Particle Systems, Normal Modes and Waves, Electrodynamics, Numerical and Monte Carlo Methods, Percolation, Fractals and Kinetic Growth Models, Complex Systems, Monte Carlo Simulations of Thermal Systems, Quantum Systems, Visualization and Rigid Body Dynamics, Seeing in Special and General Relativity, Epilogue: The Unity of Physics For all readers interested in developing programming habits in the context of doing phy...

  10. Advance in research on aerosol deposition simulation methods

    International Nuclear Information System (INIS)

    Liu Keyang; Li Jingsong

    2011-01-01

    A comprehensive analysis of the health effects of inhaled toxic aerosols requires exact data on airway deposition. Knowledge of the effect of inhaled drugs is essential to the optimization of aerosol drug delivery. Sophisticated analytical deposition models can be used for the computation of total, regional and generation-specific deposition efficiencies. Continuously increasing computer power seems to allow us to study particle transport and deposition in more and more realistic airway geometries with the help of computational fluid dynamics (CFD) simulation methods. In this article, the trends in aerosol deposition models and lung models, and the methods used to achieve deposition simulations, are reviewed. (authors)

  11. Comparison of validation methods for forming simulations

    Science.gov (United States)

    Schug, Alexander; Kapphan, Gabriel; Bardl, Georg; Hinterhölzl, Roland; Drechsler, Klaus

    2018-05-01

    The forming simulation of fibre reinforced thermoplastics could reduce the development time and improve the forming results. But to take advantage of the full potential of the simulations it has to be ensured that the predictions for material behaviour are correct. For that reason, a thorough validation of the material model has to be conducted after characterising the material. Relevant aspects for the validation of the simulation are for example the outer contour, the occurrence of defects and the fibre paths. To measure these features various methods are available. Most relevant and also most difficult to measure are the emerging fibre orientations. For that reason, the focus of this study was on measuring this feature. The aim was to give an overview of the properties of different measuring systems and select the most promising systems for a comparison survey. Selected were an optical, an eddy current and a computer-assisted tomography system with the focus on measuring the fibre orientations. Different formed 3D parts made of unidirectional glass fibre and carbon fibre reinforced thermoplastics were measured. Advantages and disadvantages of the tested systems were revealed. Optical measurement systems are easy to use, but are limited to the surface plies. With an eddy current system also lower plies can be measured, but it is only suitable for carbon fibres. Using a computer-assisted tomography system all plies can be measured, but the system is limited to small parts and challenging to evaluate.

  12. Cutting Method of the CAD model of the Nuclear facility for Dismantling Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ikjune; Choi, ByungSeon; Hyun, Dongjun; Jeong, KwanSeong; Kim, GeunHo; Lee, Jonghwan [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    Current methods for process simulation cannot simulate the cutting operation flexibly. As-is, the user needs to prepare the resulting models of a cutting operation in advance, based on a pre-defined cutting path, depth and thickness with respect to a dismantling scenario, and those preparations have to be rebuilt whenever the scenario changes. To-be, the user can change parameters and scenarios dynamically within the simulation configuration process, saving the time and effort needed to simulate cutting operations. This study presents a cutting-operation methodology that can be applied to every procedure in the simulation of the dismantling of nuclear facilities. We developed a cutting simulation module for cutting operations in the dismantling of nuclear facilities based on the proposed cutting methodology. We defined the requirements of the model cutting methodology based on the requirements of the dismantling of nuclear facilities, and we implemented the cutting simulation module based on the API of a commercial CAD system.

  13. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.

    Science.gov (United States)

    de Nijs, Robin

    2015-07-21

    In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. The redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and confirmed the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was not affected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or less) images from full-count images. It correctly simulates the statistical properties, also in the case of rounding off of the images.
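
    A common way to generate statistically consistent half-count data from Poisson counts is binomial thinning, shown below next to the direct Poisson and Gaussian redrawing alternatives discussed in the comment; the sketch uses a synthetic flood image and is not claimed to reproduce White and Lawson's exact procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    full = rng.poisson(lam=50.0, size=(128, 128))            # synthetic full-count "image"

    # Binomial thinning: keep each recorded count with probability 1/2.
    # A thinned Poisson variable is again Poisson, so the half-count statistics stay correct.
    half_thin = rng.binomial(full, 0.5)

    # Direct redrawing alternatives (illustrative only)
    half_poisson = rng.poisson(full / 2.0)                   # Poisson redrawing around half the counts
    half_gauss = rng.normal(full / 2.0, np.sqrt(full / 2.0)) # Gaussian redrawing

    for name, img in [("thinning", half_thin), ("poisson redraw", half_poisson), ("gauss redraw", half_gauss)]:
        print(f"{name:15s} mean ratio = {img.mean() / full.mean():.4f}   var/mean = {img.var() / img.mean():.4f}")
    ```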

  14. Proteus: a direct forcing method in the simulations of particulate flows

    Science.gov (United States)

    Feng, Zhi-Gang; Michaelides, Efstathios E.

    2005-01-01

    A new and efficient direct numerical method for the simulation of particulate flows is introduced. The method combines desired elements of the immersed boundary method, the direct forcing method and the lattice Boltzmann method. Adding a forcing term in the momentum equation enforces the no-slip condition on the boundary of a moving particle. By applying the direct forcing scheme, Proteus eliminates the need for the determination of free parameters, such as the stiffness coefficient in the penalty scheme or the two relaxation parameters in the adaptive-forcing scheme. The method presents a significant improvement over the previously introduced immersed-boundary-lattice-Boltzmann method (IB-LBM), where the forcing term was computed using a penalty method and a user-defined parameter. The method allows the enforcement of the rigid body motion of a particle in a more efficient way. Compared to the "bounce-back" scheme used in the conventional LBM, the direct-forcing method provides a smoother computational boundary for particles and is capable of achieving results at higher Reynolds number flows. By using a set of Lagrangian points to track the boundary of a particle, Proteus eliminates any need for the determination of the boundary nodes that are prescribed by the "bounce-back" scheme at every time step. It also makes computations for particles of irregular shapes simpler and more efficient. Proteus has been developed in both two and three dimensions. This new method has been validated by comparing its results with those from experimental measurements for a single sphere settling in an enclosure under gravity. As a demonstration of the efficiency and capabilities of the present method, the settling of a large number (1232) of spherical particles is simulated in a narrow box under two different boundary conditions. It is found that when the no-slip boundary condition is imposed at the front and rear sides of the box, the particles' motion is significantly hindered

  15. COMPARISON OF METHODS FOR SIMULATING TSUNAMI RUN-UP THROUGH COASTAL FORESTS

    Directory of Open Access Journals (Sweden)

    Benazir

    2017-09-01

    Full Text Available The research is aimed at reviewing two numerical methods for modeling the effect of coastal forest on tsunami run-up and at proposing an alternative approach. Two methods for modeling the effect of coastal forest, namely the Constant Roughness Model (CRM) and the Equivalent Roughness Model (ERM), simulate the effect of the forest by using an artificial Manning roughness coefficient. An alternative approach that simulates each of the trees as a vertical square column is introduced. Simulations were carried out with variations of forest density and layout pattern of the trees. The numerical model was validated using an existing data series of tsunami run-up without forest protection. The study indicated that the alternative method is in good agreement with the ERM method for low forest density. At higher density, and when the trees were planted in a zigzag pattern, the ERM produced significantly higher run-up. For a zigzag pattern and at 50% forest density, which represents a watertight wall, both the ERM and CRM methods produced relatively high run-up, which should not happen theoretically; the alternative method, on the other hand, reflected the entire tsunami. In reality, a housing complex can be considered and simulated as a forest with various sizes and layouts of obstacles, where the alternative approach is applicable. The alternative method is more accurate than the existing methods for simulating a coastal forest for tsunami mitigation but consumes considerably more computational time.

  16. Real-time tumor ablation simulation based on the dynamic mode decomposition method

    KAUST Repository

    Bourantas, George C.

    2014-05-01

    Purpose: The dynamic mode decomposition (DMD) method is used to provide a reliable forecasting of tumor ablation treatment simulation in real time, which is quite needed in medical practice. To achieve this, an extended Pennes bioheat model must be employed, taking into account both the water evaporation phenomenon and the tissue damage during tumor ablation. Methods: A meshless point collocation solver is used for the numerical solution of the governing equations. The results obtained are used by the DMD method for forecasting the numerical solution faster than the meshless solver. The procedure is first validated against analytical and numerical predictions for simple problems. The DMD method is then applied to three-dimensional simulations that involve modeling of tumor ablation and account for metabolic heat generation, blood perfusion, and heat ablation using realistic values for the various parameters. Results: The present method offers very fast numerical solution to bioheat transfer, which is of clinical significance in medical practice. It also sidesteps the mathematical treatment of boundaries between tumor and healthy tissue, which is usually a tedious procedure with some inevitable degree of approximation. The DMD method provides excellent predictions of the temperature profile in tumors and in the healthy parts of the tissue, for linear and nonlinear thermal properties of the tissue. Conclusions: The low computational cost renders the use of DMD suitable for in situ real time tumor ablation simulations without sacrificing accuracy. In such a way, the tumor ablation treatment planning is feasible using just a personal computer thanks to the simplicity of the numerical procedure used. The geometrical data can be provided directly by medical image modalities used in everyday practice. © 2014 American Association of Physicists in Medicine.
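
    The forecasting kernel itself — building a reduced linear operator from snapshot pairs and advancing it in time — is compact; the sketch below is a generic exact-DMD implementation applied to a synthetic decaying wave, standing in for the temperature histories produced by the meshless bioheat solver.

    ```python
    import numpy as np

    def dmd(X1, X2, r):
        """Exact dynamic mode decomposition of snapshot pairs X2 ≈ A X1, truncated to rank r."""
        U, S, Vh = np.linalg.svd(X1, full_matrices=False)
        U, S, Vh = U[:, :r], S[:r], Vh[:r, :]
        A_tilde = U.conj().T @ X2 @ Vh.conj().T / S      # reduced operator
        eigvals, W = np.linalg.eig(A_tilde)
        modes = X2 @ Vh.conj().T / S @ W                 # exact DMD modes
        return eigvals, modes

    def forecast(eigvals, modes, x0, n_steps):
        """Advance the reduced dynamics n_steps ahead from the initial snapshot x0."""
        b = np.linalg.lstsq(modes, x0, rcond=None)[0]    # mode amplitudes
        return (modes @ (eigvals ** n_steps * b)).real

    # Toy data: a decaying travelling wave standing in for a temperature field history
    x = np.linspace(0.0, 1.0, 200)
    snaps = np.stack([np.exp(-0.05 * k) * np.sin(2 * np.pi * (x - 0.01 * k)) for k in range(60)], axis=1)
    eigvals, modes = dmd(snaps[:, :-1], snaps[:, 1:], r=6)
    pred = forecast(eigvals, modes, snaps[:, 0], n_steps=80)   # extrapolate beyond the training window
    print(pred[:5])
    ```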

  17. Multigrid Methods for Fully Implicit Oil Reservoir Simulation

    Science.gov (United States)

    Molenaar, J.

    1996-01-01

    In this paper we consider the simultaneous flow of oil and water in reservoir rock. This displacement process is modeled by two basic equations: the material balance or continuity equations and the equation of motion (Darcy's law). For the numerical solution of this system of nonlinear partial differential equations there are two approaches: the fully implicit or simultaneous solution method and the sequential solution method. In the sequential solution method the system of partial differential equations is manipulated to give an elliptic pressure equation and a hyperbolic (or parabolic) saturation equation. In the IMPES approach the pressure equation is first solved, using values for the saturation from the previous time level. Next the saturations are updated by some explicit time stepping method; this implies that the method is only conditionally stable. For the numerical solution of the linear, elliptic pressure equation multigrid methods have become an accepted technique. On the other hand, the fully implicit method is unconditionally stable, but it has the disadvantage that in every time step a large system of nonlinear algebraic equations has to be solved. The most time-consuming part of any fully implicit reservoir simulator is the solution of this large system of equations. Usually this is done by Newton's method. The resulting systems of linear equations are then either solved by a direct method or by some conjugate gradient type method. In this paper we consider the possibility of applying multigrid methods for the iterative solution of the systems of nonlinear equations. There are two ways of using multigrid for this job: either we use a nonlinear multigrid method or we use a linear multigrid method to deal with the linear systems that arise in Newton's method. So far only a few authors have reported on the use of multigrid methods for fully implicit simulations. A two-level FAS algorithm is presented for the black-oil equations, and linear multigrid for
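
    The coarse-grid correction at the heart of these multigrid approaches can be shown on a much simpler problem; the sketch below is a two-grid cycle (weighted-Jacobi smoothing plus an exact coarse solve) for the 1D Poisson equation, not the nonlinear FAS black-oil solver discussed above.

    ```python
    import numpy as np

    def smooth(u, f, h, sweeps=3, omega=2.0 / 3.0):
        """Weighted-Jacobi smoothing for -u'' = f on a uniform grid with Dirichlet ends."""
        for _ in range(sweeps):
            u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        return u

    def two_grid(u, f, h):
        """Pre-smooth, restrict the residual, solve the coarse problem exactly, prolong, post-smooth."""
        u = smooth(u, f, h)
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h ** 2   # fine-grid residual
        rc, H = r[::2], 2 * h                                         # restriction by injection
        m = rc.size - 2
        Ac = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / H ** 2
        ec = np.zeros_like(rc)
        ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])                      # exact coarse-grid correction
        e = np.zeros_like(u)
        e[::2] = ec                                                   # copy coarse-grid points...
        e[1:-1:2] = 0.5 * (e[:-2:2] + e[2::2])                        # ...and interpolate in between
        return smooth(u + e, f, h)

    n = 129                                        # 2^k + 1 points so the coarse grid nests
    x = np.linspace(0.0, 1.0, n); h = x[1] - x[0]
    f = np.pi ** 2 * np.sin(np.pi * x)             # exact solution is sin(pi x)
    u = np.zeros(n)
    for _ in range(20):
        u = two_grid(u, f, h)
    print("max error:", np.abs(u - np.sin(np.pi * x)).max())
    ```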

  18. Electromagnetic simulation using the FDTD method

    CERN Document Server

    Sullivan, Dennis M

    2013-01-01

    A straightforward, easy-to-read introduction to the finite-difference time-domain (FDTD) method Finite-difference time-domain (FDTD) is one of the primary computational electrodynamics modeling techniques available. Since it is a time-domain method, FDTD solutions can cover a wide frequency range with a single simulation run and treat nonlinear material properties in a natural way. Written in a tutorial fashion, starting with the simplest programs and guiding the reader up from one-dimensional to the more complex, three-dimensional programs, this book provides a simple, yet comp
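
    The flavour of the method can be conveyed by the classic normalized 1D update loop with staggered E and H fields; the pulse parameters below are arbitrary, and the snippet is only a minimal sketch of the kind of time stepping the book builds up from.

    ```python
    import numpy as np

    # Minimal 1D free-space FDTD update (normalized units): Ex and Hy staggered in space and time
    n_cells, steps = 200, 150
    ex, hy = np.zeros(n_cells), np.zeros(n_cells)
    for t in range(steps):
        ex[1:] += 0.5 * (hy[:-1] - hy[1:])                       # update E from the curl of H
        ex[n_cells // 2] += np.exp(-0.5 * ((t - 40) / 12) ** 2)  # soft Gaussian source at the centre
        hy[:-1] += 0.5 * (ex[:-1] - ex[1:])                      # update H from the curl of E
    print("peak |Ex| after propagation:", np.abs(ex).max())
    ```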

  19. Two methods to simulate intrapulpal pressure: effects upon bonding performance of self-etch adhesives.

    Science.gov (United States)

    Feitosa, V P; Gotti, V B; Grohmann, C V; Abuná, G; Correr-Sobrinho, L; Sinhoreti, M A C; Correr, A B

    2014-09-01

    To evaluate the effects of two methods to simulate physiological pulpal pressure on the dentine bonding performance of two all-in-one adhesives and a two-step self-etch silorane-based adhesive by means of microtensile bond strength (μTBS) and nanoleakage surveys. The self-etch adhesives [G-Bond Plus (GB), Adper Easy Bond (EB) and silorane adhesive (SIL)] were applied to flat deep dentine surfaces from extracted human molars. The restorations were constructed using resin composites Filtek Silorane or Filtek Z350 (3M ESPE). After 24 h using the two methods of simulated pulpal pressure or no pulpal pressure (control groups), the bonded teeth were cut into specimens and submitted to μTBS and silver uptake examination. Results were analysed with two-way ANOVA and Tukey's test (P < 0.05). No difference between control and pulpal pressure groups was found for SIL and GB. EB showed a significant drop (P = 0.002) in bond strength under pulpal pressure. Silver impregnation was increased after both methods of simulated pulpal pressure for all adhesives, and it was similar between the simulated pulpal pressure methods. The innovative method to simulate pulpal pressure behaved similarly to the classic one and could be used as an alternative. The HEMA-free one-step and the two-step self-etch adhesives had acceptable resistance against pulpal pressure, unlike the HEMA-rich adhesive. © 2013 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  20. Computational simulation in architectural and environmental acoustics methods and applications of wave-based computation

    CERN Document Server

    Sakamoto, Shinichi; Otsuru, Toru

    2014-01-01

    This book reviews a variety of methods for wave-based acoustic simulation and recent applications to architectural and environmental acoustic problems. Following an introduction providing an overview of computational simulation of sound environment, the book is in two parts: four chapters on methods and four chapters on applications. The first part explains the fundamentals and advanced techniques for three popular methods, namely, the finite-difference time-domain method, the finite element method, and the boundary element method, as well as alternative time-domain methods. The second part demonstrates various applications to room acoustics simulation, noise propagation simulation, acoustic property simulation for building components, and auralization. This book is a valuable reference that covers the state of the art in computational simulation for architectural and environmental acoustics.  

  1. Facilitating arrhythmia simulation: the method of quantitative cellular automata modeling and parallel running

    Directory of Open Access Journals (Sweden)

    Mondry Adrian

    2004-08-01

    Full Text Available Abstract Background Many arrhythmias are triggered by abnormal electrical activity at the ionic channel and cell level, and then evolve spatio-temporally within the heart. To understand arrhythmias better and to diagnose them more precisely by their ECG waveforms, a whole-heart model is required to explore the association between the massively parallel activities at the channel/cell level and the integrative electrophysiological phenomena at organ level. Methods We have developed a method to build large-scale electrophysiological models by using extended cellular automata, and to run such models on a cluster of shared memory machines. We describe here the method, including the extension of a language-based cellular automaton to implement quantitative computing, the building of a whole-heart model with Visible Human Project data, the parallelization of the model on a cluster of shared memory computers with OpenMP and MPI hybrid programming, and a simulation algorithm that links cellular activity with the ECG. Results We demonstrate that electrical activities at channel, cell, and organ levels can be traced and captured conveniently in our extended cellular automaton system. Examples of some ECG waveforms simulated with a 2-D slice are given to support the ECG simulation algorithm. A performance evaluation of the 3-D model on a four-node cluster is also given. Conclusions Quantitative multicellular modeling with extended cellular automata is a highly efficient and widely applicable method to weave experimental data at different levels into computational models. This process can be used to investigate complex and collective biological activities that can be described neither by their governing differential equations nor by discrete parallel computation. Transparent cluster computing is a convenient and effective method to make time-consuming simulation feasible. Arrhythmias, as a typical case, can be effectively simulated with the methods
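
    The qualitative behaviour of such models — excitation spreading from cell to cell followed by a refractory period — can be conveyed by a toy Greenberg-Hastings automaton; this is far simpler than the quantitative, whole-heart extended cellular automaton described in the paper.

    ```python
    import numpy as np

    def greenberg_hastings(grid, refractory=3):
        """One step of a simple excitable-media CA: 0 = resting, 1 = excited, 2..refractory+1 = refractory."""
        excited = (grid == 1)
        # a resting cell fires if at least one of its 4 neighbours is excited (periodic boundaries)
        neighbour_excited = (np.roll(excited, 1, 0) | np.roll(excited, -1, 0) |
                             np.roll(excited, 1, 1) | np.roll(excited, -1, 1))
        new = grid.copy()
        new[(grid == 0) & neighbour_excited] = 1
        new[grid >= 1] = grid[grid >= 1] + 1       # excited and refractory cells age by one state
        new[new > refractory + 1] = 0              # return to rest after the refractory period
        return new

    grid = np.zeros((50, 50), dtype=int)
    grid[25, 25] = 1                               # a single ectopic excitation
    for _ in range(20):
        grid = greenberg_hastings(grid)
    print("excited cells after 20 steps:", int((grid == 1).sum()))   # a diamond-shaped wavefront
    ```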

  2. A mass conserving level set method for detailed numerical simulation of liquid atomization

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Kun; Shao, Changxiao [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China); Yang, Yue [State Key Laboratory of Turbulence and Complex Systems, Peking University, Beijing 100871 (China); Fan, Jianren, E-mail: fanjr@zju.edu.cn [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China)

    2015-10-01

    An improved mass conserving level set method for detailed numerical simulations of liquid atomization is developed to address the issue of mass loss in the existing level set method. This method introduces a mass remedy procedure based on the local curvature at the interface, and in principle, can ensure the absolute mass conservation of the liquid phase in the computational domain. Three benchmark cases, including Zalesak's disk, a drop deforming in a vortex field, and the binary drop head-on collision, are simulated to validate the present method, and the excellent agreement with exact solutions or experimental results is achieved. It is shown that the present method is able to capture the complex interface with second-order accuracy and negligible additional computational cost. The present method is then applied to study more complex flows, such as a drop impacting on a liquid film and the swirling liquid sheet atomization, which again, demonstrates the advantages of mass conservation and the capability to represent the interface accurately.

  3. A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique

    KAUST Repository

    Rached, Nadhir B.

    2015-06-08

    The probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem. For instance, an analytical closed-form expression of the Log-normal sum distribution does not exist and is still an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive especially when dealing with rare events (i.e. events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.

  4. A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique

    KAUST Repository

    Rached, Nadhir B.; Benkhelifa, Fatma; Alouini, Mohamed-Slim; Tempone, Raul

    2015-01-01

    The probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem. For instance, an analytical closed-form expression of the Log-normal sum distribution does not exist and is still an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive especially when dealing with rare events (i.e. events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.
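
    The baseline that the twisting construction improves on is crude Monte Carlo estimation of the CCDF, whose relative error deteriorates as the event becomes rarer; the sketch below implements only that naive estimator on an assumed set of lognormal parameters, with the hazard-rate-twisted importance sampler itself left out.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ccdf_naive_mc(threshold, mus, sigmas, n_samples=10 ** 6):
        """Crude Monte Carlo estimate of P(sum of independent lognormals > threshold)."""
        samples = rng.lognormal(mean=mus, sigma=sigmas, size=(n_samples, len(mus)))
        hits = samples.sum(axis=1) > threshold
        p_hat = hits.mean()
        rel_err = np.sqrt(p_hat * (1.0 - p_hat) / n_samples) / max(p_hat, 1e-300)
        return p_hat, rel_err

    mus = np.array([0.0, 0.5, 1.0])          # not identically distributed components
    sigmas = np.array([1.0, 0.8, 0.6])
    for gamma in (10.0, 50.0, 200.0):
        p, err = ccdf_naive_mc(gamma, mus, sigmas)
        print(f"threshold {gamma:6.1f}: CCDF ≈ {p:.3e}   (relative std. error ≈ {100 * err:.1f}%)")
    ```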

  5. A Multilevel Adaptive Reaction-splitting Simulation Method for Stochastic Reaction Networks

    KAUST Repository

    Moraes, Alvaro; Tempone, Raul; Vilanova, Pedro

    2016-01-01

    In this work, we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks characterized by having simultaneously fast and slow reaction channels. To produce efficient simulations, our method adaptively classifies the reactions channels into fast and slow channels. To this end, we first introduce a state-dependent quantity named level of activity of a reaction channel. Then, we propose a low-cost heuristic that allows us to adaptively split the set of reaction channels into two subsets characterized by either a high or a low level of activity. Based on a time-splitting technique, the increments associated with high-activity channels are simulated using the tau-leap method, while those associated with low-activity channels are simulated using an exact method. This path simulation technique is amenable for coupled path generation and a corresponding multilevel Monte Carlo algorithm. To estimate expected values of observables of the system at a prescribed final time, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This goal is achieved with a computational complexity of order O(TOL-2), the same as with a pathwise-exact method, but with a smaller constant. We also present a novel low-cost control variate technique based on the stochastic time change representation by Kurtz, showing its performance on a numerical example. We present two numerical examples extracted from the literature that show how the reaction-splitting method obtains substantial gains with respect to the standard stochastic simulation algorithm and the multilevel Monte Carlo approach by Anderson and Higham. © 2016 Society for Industrial and Applied Mathematics.

  6. A Multilevel Adaptive Reaction-splitting Simulation Method for Stochastic Reaction Networks

    KAUST Repository

    Moraes, Alvaro

    2016-07-07

    In this work, we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks characterized by having simultaneously fast and slow reaction channels. To produce efficient simulations, our method adaptively classifies the reactions channels into fast and slow channels. To this end, we first introduce a state-dependent quantity named level of activity of a reaction channel. Then, we propose a low-cost heuristic that allows us to adaptively split the set of reaction channels into two subsets characterized by either a high or a low level of activity. Based on a time-splitting technique, the increments associated with high-activity channels are simulated using the tau-leap method, while those associated with low-activity channels are simulated using an exact method. This path simulation technique is amenable for coupled path generation and a corresponding multilevel Monte Carlo algorithm. To estimate expected values of observables of the system at a prescribed final time, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This goal is achieved with a computational complexity of order O(TOL-2), the same as with a pathwise-exact method, but with a smaller constant. We also present a novel low-cost control variate technique based on the stochastic time change representation by Kurtz, showing its performance on a numerical example. We present two numerical examples extracted from the literature that show how the reaction-splitting method obtains substantial gains with respect to the standard stochastic simulation algorithm and the multilevel Monte Carlo approach by Anderson and Higham. © 2016 Society for Industrial and Applied Mathematics.
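
    The tau-leap building block used for the high-activity channels can be sketched on a toy birth-death network; the adaptive channel splitting, the exact simulation of the slow channels and the multilevel coupling are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def tau_leap(x0, stoich, propensities, tau, t_end):
        """Explicit tau-leaping: each channel fires Poisson(a_j(x) * tau) times per step."""
        x, t = np.array(x0, dtype=float), 0.0
        while t < t_end:
            a = propensities(x)
            firings = rng.poisson(a * tau)
            x = np.maximum(x + stoich.T @ firings, 0.0)   # crude guard against negative copy numbers
            t += tau
        return x

    # Toy birth-death network:  R1: 0 -> S (production),  R2: S -> 0 (degradation)
    stoich = np.array([[1.0],     # R1 adds one molecule of S
                       [-1.0]])   # R2 removes one
    propensities = lambda x: np.array([100.0, 0.1 * x[0]])
    print("S after t = 50:", tau_leap([0.0], stoich, propensities, tau=0.05, t_end=50.0))
    ```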

  7. Research on neutron noise analysis stochastic simulation method for α calculation

    International Nuclear Information System (INIS)

    Zhong Bin; Shen Huayun; She Ruogu; Zhu Shengdong; Xiao Gang

    2014-01-01

    The prompt decay constant α has significant application in the physical design and safety analysis of nuclear facilities. To overcome the difficulty of α calculation with the Monte Carlo method, and to improve the precision, a new method based on neutron noise analysis technology was presented. This method employs stochastic simulation and the theory of neutron noise analysis. Firstly, the evolution of stochastic neutrons was simulated by a discrete-events Monte Carlo method based on the theory of generalized semi-Markov processes, and the neutron noise in detectors was then obtained from the neutron signal. Secondly, neutron noise analysis methods such as the Rossi-α method, the Feynman-α method, the zero-probability method, and the cross-correlation method were used to calculate the α value. All of the parameters used in the neutron noise analysis methods were calculated with an auto-adaptive algorithm. The α values from these methods accord with each other, with a largest relative deviation of 7.9%, which proves the feasibility of the α calculation method based on neutron noise analysis stochastic simulation. (authors)

  8. An Assessment of Mean Areal Precipitation Methods on Simulated Stream Flow: A SWAT Model Performance Assessment

    Directory of Open Access Journals (Sweden)

    Sean Zeiger

    2017-06-01

    Full Text Available Accurate mean areal precipitation (MAP) estimates are essential input forcings for hydrologic models. However, the selection of the most accurate method to estimate MAP can be daunting because there are numerous methods to choose from (e.g., proximate gauge, direct weighted average, surface-fitting, and remotely sensed methods). Multiple methods (n = 19) were used to estimate MAP with precipitation data from 11 distributed monitoring sites and 4 remotely sensed data sets. Each method was validated against the hydrologic model simulated stream flow using the Soil and Water Assessment Tool (SWAT). SWAT was validated using a split-site method and the observed stream flow data from five nested-scale gauging sites in a mixed-land-use watershed of the central USA. Cross-validation results showed the error associated with surface-fitting and remotely sensed methods ranging from −4.5 to −5.1%, and −9.8 to −14.7%, respectively. Split-site validation results showed percent bias (PBIAS) values that ranged from −4.5 to −160%. Second-order polynomial functions especially overestimated precipitation and subsequent stream flow simulations (PBIAS = −160% in the headwaters). The results indicated that using an inverse-distance weighted, linear polynomial interpolation or multiquadric function method to estimate MAP may improve SWAT model simulations. Collectively, the results highlight the importance of spatially distributed observed hydroclimate data for precipitation and subsequent stream flow estimations. The MAP methods demonstrated in the current work can be used to reduce hydrologic model uncertainty caused by watershed physiographic differences.
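
    Of the MAP estimators compared, inverse-distance weighting is straightforward to sketch: interpolate each grid cell from the gauges and average over the catchment, as below. The gauge coordinates and rainfall values are hypothetical.

    ```python
    import numpy as np

    def idw_map(gauge_xy, gauge_precip, cell_xy, power=2.0):
        """Inverse-distance-weighted mean areal precipitation over a set of grid-cell centres."""
        d = np.linalg.norm(cell_xy[:, None, :] - gauge_xy[None, :, :], axis=2)  # cell-to-gauge distances
        d = np.maximum(d, 1e-6)                    # avoid division by zero at a gauge location
        w = 1.0 / d ** power
        cell_values = (w * gauge_precip).sum(axis=1) / w.sum(axis=1)
        return cell_values.mean()

    # Hypothetical gauges (km coordinates) and one day of rainfall (mm)
    gauge_xy = np.array([[0.0, 0.0], [5.0, 2.0], [8.0, 9.0], [2.0, 7.0]])
    gauge_precip = np.array([12.0, 8.5, 15.2, 10.1])
    gx, gy = np.meshgrid(np.linspace(0, 10, 21), np.linspace(0, 10, 21))   # grid of cell centres
    cells = np.column_stack([gx.ravel(), gy.ravel()])
    print("MAP estimate (mm):", round(idw_map(gauge_xy, gauge_precip, cells), 2))
    ```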

  9. Finite element method for simulation of the semiconductor devices

    International Nuclear Information System (INIS)

    Zikatanov, L.T.; Kaschiev, M.S.

    1991-01-01

    An iterative method for solving the system of nonlinear equations of the drift-diffusion representation for the simulation of semiconductor devices is worked out. The Petrov-Galerkin method is used for the discretization of these equations with bilinear finite elements. It is shown that the numerical scheme is monotone and that there are no oscillations of the solutions in the region of the p-n junction. Numerical calculations for the simulation of one semiconductor device are presented. 13 refs.; 3 figs

  10. Simulation of Corrosion Process for Structure with the Cellular Automata Method

    Science.gov (United States)

    Chen, M. C.; Wen, Q. Q.

    2017-06-01

    In this paper, from the mesoscopic point of view and under the assumption that metal corrosion damage evolution is a diffusive process, the cellular automata (CA) method is proposed to numerically simulate the uniform corrosion damage evolution of the outer steel tube of concrete-filled steel tubular columns subjected to a corrosive environment, and the effects of corrosive agent concentration, dissolution probability and elapsed etching time on the corrosion damage evolution are also investigated. It was shown that corrosion damage increases nonlinearly with increasing elapsed etching time: the longer the etching time, the more serious the corrosion damage. Different concentrations of corrosive agents had different impacts on the corrosion damage degree of the outer steel tube, but the differences between these impacts were very small; the heavier the concentration, the more serious the influence. The greater the dissolution probability, the more serious the corrosion damage of the outer steel tube, but with increasing dissolution probability the difference between its impacts on the corrosion damage became smaller and smaller. To validate the present method, corrosion damage measurements were conducted for concrete-filled square steel tubular columns (CFSSTCs) sealed at both ends and fully immersed in a simulated acid rain solution, and Faraday's law was used to predict their theoretical values. Meanwhile, the proposed CA model was applied to simulate the corrosion damage evolution of the CFSSTCs. Comparisons of the results from the three aforementioned methods showed good agreement, implying that the proposed method for simulating the corrosion damage evolution of concrete-filled steel tubular columns is feasible and effective. It opens a new approach to further study and evaluate the corrosion damage, loading capacity and lifetime prediction of concrete-filled steel tubular structures.
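
    A stripped-down version of the idea — metal cells exposed to the corrosive medium dissolving with some probability per step — is sketched below; the automaton in the paper additionally tracks corrosive agent concentration and its diffusion, which this toy model omits.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def corrosion_step(metal, p_dissolve):
        """One CA step: a metal cell with at least one non-metal neighbour dissolves with probability p_dissolve."""
        surrounded = (np.roll(metal, 1, 0) & np.roll(metal, -1, 0) &
                      np.roll(metal, 1, 1) & np.roll(metal, -1, 1))
        exposed = metal & ~surrounded
        dissolve = exposed & (rng.random(metal.shape) < p_dissolve)
        return metal & ~dissolve

    # A metal wall occupying the lower half of the domain, electrolyte above it
    # (periodic boundaries mean both faces of the wall are exposed; acceptable for a toy run)
    metal = np.zeros((100, 100), dtype=bool)
    metal[50:, :] = True
    initial = metal.sum()
    for t in range(1, 101):
        metal = corrosion_step(metal, p_dissolve=0.1)
        if t in (10, 50, 100):
            print(f"step {t:3d}: corroded mass fraction = {1 - metal.sum() / initial:.3f}")
    ```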

  11. Comparison of a Material Point Method and a Galerkin Meshfree Method for the Simulation of Cohesive-Frictional Materials

    Directory of Open Access Journals (Sweden)

    Ilaria Iaconeta

    2017-09-01

    Full Text Available The simulation of large deformation problems, involving complex history-dependent constitutive laws, is of paramount importance in several engineering fields. Particular attention has to be paid to the choice of a suitable numerical technique such that reliable results can be obtained. In this paper, a Material Point Method (MPM) and a Galerkin Meshfree Method (GMM) are presented and verified against classical benchmarks in solid mechanics. The aim is to demonstrate the good behavior of the methods in the simulation of cohesive-frictional materials, both in static and dynamic regimes and in problems dealing with large deformations. The vast majority of MPM techniques in the literature are based on some sort of explicit time integration. The techniques proposed in the current work, on the contrary, are based on implicit approaches, which can also be easily adapted to the simulation of static cases. The two methods are presented so as to highlight the similarities to rather than the differences from “standard” Updated Lagrangian (UL) approaches commonly employed by the Finite Elements (FE) community. Although both methods are able to give a good prediction, it is observed that, under very large deformation of the medium, GMM lacks robustness due to its meshfree nature, which makes the definition of the meshless shape functions more difficult and expensive than in MPM. On the other hand, the mesh-based MPM is demonstrated to be more robust and reliable for extremely large deformation cases.

  12. WATSFAR: numerical simulation of soil WATer and Solute fluxes using a FAst and Robust method

    Science.gov (United States)

    Crevoisier, David; Voltz, Marc

    2013-04-01

    To simulate the evolution of hydro- and agro-systems, numerous spatialised models are based on a multi-local approach, and improvements in simulation accuracy through data-assimilation techniques are now used in many application fields. The latest acquisition techniques provide a large amount of experimental data, which increases the efficiency of parameter estimation and inverse modelling approaches. In turn, simulations are often run on large temporal and spatial domains, which requires a large number of model runs. Yet, despite the regular increase in computing capacities, the development of fast and robust methods describing the evolution of saturated-unsaturated soil water and solute fluxes is still a challenge. Ross (2003, Agron J; 95:1352-1361) proposed a method, solving the 1D Richards and convection-diffusion equations, that fulfils these requirements. The method is based on a non-iterative approach which reduces the risk of numerical divergence and allows the use of coarser spatial and temporal discretisations, while ensuring satisfactory accuracy of the results. Crevoisier et al. (2009, Adv Wat Res; 32:936-947) proposed some technical improvements and validated this method on a wider range of agro- pedo- climatic situations. In this poster, we present the simulation code WATSFAR, which generalises the Ross method to other mathematical representations of the soil water retention curve (i.e. standard and modified van Genuchten models) and includes a dual-permeability context (preferential fluxes) for both water and solute transfers. The situations tested are those known to be the least favourable when using standard numerical methods: fine-textured and extremely dry soils, intense rainfall and solute fluxes, soils near saturation, ... The results of WATSFAR have been compared with the standard finite element model Hydrus. The analysis of these comparisons highlights two main advantages for WATSFAR, i) robustness: even on fine textured soil or high water and solute

  13. TMCC: a transient three-dimensional neutron transport code by the direct simulation method - 222

    International Nuclear Information System (INIS)

    Shen, H.; Li, Z.; Wang, K.; Yu, G.

    2010-01-01

    A direct simulation method (DSM) is applied to solve transient three-dimensional neutron transport problems. DSM is based on the Monte Carlo method and can be considered an application of the Monte Carlo method to this specific type of problem. In this work, the transient neutronics problem is solved by simulating the dynamic behavior of neutrons and precursors of delayed neutrons during the transient process. DSM avoids the various approximations that are usually necessary in other methods, so it is precise and flexible with respect to geometric configurations, material compositions and energy spectra. In this paper, the theory of DSM is introduced first, and then the numerical results obtained with the new transient analysis code, named TMCC (Transient Monte Carlo Code), are presented. (authors)

  14. Numerical Simulation of Transitional, Hypersonic Flows using a Hybrid Particle-Continuum Method

    Science.gov (United States)

    Verhoff, Ashley Marie

    Analysis of hypersonic flows requires consideration of multiscale phenomena due to the range of flight regimes encountered, from rarefied conditions in the upper atmosphere to fully continuum flow at low altitudes. At transitional Knudsen numbers there are likely to be localized regions of strong thermodynamic nonequilibrium effects that invalidate the continuum assumptions of the Navier-Stokes equations. Accurate simulation of these regions, which include shock waves, boundary and shear layers, and low-density wakes, requires a kinetic theory-based approach where no prior assumptions are made regarding the molecular distribution function. Because of the nature of these types of flows, there is much to be gained in terms of both numerical efficiency and physical accuracy by developing hybrid particle-continuum simulation approaches. The focus of the present research effort is the continued development of the Modular Particle-Continuum (MPC) method, where the Navier-Stokes equations are solved numerically using computational fluid dynamics (CFD) techniques in regions of the flow field where continuum assumptions are valid, and the direct simulation Monte Carlo (DSMC) method is used where strong thermodynamic nonequilibrium effects are present. Numerical solutions of transitional, hypersonic flows are thus obtained with increased physical accuracy relative to CFD alone, and improved numerical efficiency is achieved in comparison to DSMC alone because this more computationally expensive method is restricted to those regions of the flow field where it is necessary to maintain physical accuracy. In this dissertation, a comprehensive assessment of the physical accuracy of the MPC method is performed, leading to the implementation of a non-vacuum supersonic outflow boundary condition in particle domains, and more consistent initialization of DSMC simulator particles along hybrid interfaces. The relative errors between MPC and full DSMC results are greatly reduced as a

  15. DoSSiER: Database of Scientific Simulation and Experimental Results

    CERN Document Server

    Wenzel, Hans; Genser, Krzysztof; Elvira, Daniel; Pokorski, Witold; Carminati, Federico; Konstantinov, Dmitri; Ribon, Alberto; Folger, Gunter; Dotti, Andrea

    2017-01-01

    The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER can be easily accessed via a web application. In addition, a web service allows for programmatic access to the repository to extract records in json or xml exchange formats. In this article, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.

  16. Truss Structure Optimization with Subset Simulation and Augmented Lagrangian Multiplier Method

    Directory of Open Access Journals (Sweden)

    Feng Du

    2017-11-01

    Full Text Available This paper presents a global optimization method for structural design optimization, which integrates subset simulation optimization (SSO) and the dynamic augmented Lagrangian multiplier method (DALMM). The proposed method formulates the structural design optimization as a series of unconstrained optimization sub-problems using DALMM and makes use of SSO to find the global optimum. The combined strategy guarantees that the proposed method can automatically detect active constraints and provide global optimal solutions with finite penalty parameters. The accuracy and robustness of the proposed method are demonstrated by four classical truss sizing problems. The results are compared with those reported in the literature, and show a remarkable statistical performance based on 30 independent runs.
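
    The augmented Lagrangian layer of such a scheme can be sketched on a tiny equality-constrained problem: each outer iteration minimizes the augmented Lagrangian and then updates the multiplier. The sketch below uses a BFGS sub-solver in place of subset simulation optimization, and the toy objective is not a real truss model.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy problem standing in for a sizing task:  minimize x1^2 + x2^2  subject to  x1 + x2 - 1 = 0
    f = lambda x: x[0] ** 2 + x[1] ** 2
    h = lambda x: x[0] + x[1] - 1.0

    lam, rho = 0.0, 10.0              # multiplier estimate and penalty parameter
    x = np.array([5.0, -3.0])
    for _ in range(10):
        # unconstrained sub-problem with the augmented Lagrangian as objective
        L = lambda xk: f(xk) + lam * h(xk) + 0.5 * rho * h(xk) ** 2
        x = minimize(L, x, method="BFGS").x
        lam += rho * h(x)             # dual (multiplier) update
    print("solution:", x, "multiplier:", lam)   # expected x ≈ (0.5, 0.5), lam ≈ -1
    ```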

  17. Research methods for simulating digital compensators and autonomous control systems

    Directory of Open Access Journals (Sweden)

    V. S. Kudryashov

    2016-01-01

    Full Text Available A peculiarity of the present stage of production development is the need to control and regulate a large number of mutually influencing process parameters; when single-loop control systems are used, this mutual influence significantly reduces the quality of the transient response, resulting in significant losses of raw materials and energy and in reduced product quality. Using an autonomous digital control system eliminates the cross-coupling of technological parameters, gives the system the desired dynamic and static properties, and improves the quality of regulation. However, the complexity of configuring and implementing such systems (i.e., modelling the compensators of autonomous systems of this type), associated with the need to perform a significant amount of complex analytical transformations, significantly limits their scope of application. In this regard, an approach based on decomposition is proposed for the calculation and simulation (realization) methods, consisting in representing the elements of the autonomous control part of the digital control system as series-parallel connections. The theoretical study is carried out in a general way for systems of any dimension. The results of computational experiments obtained during the simulation of four autonomous control systems are presented, together with a comparative analysis and conclusions on the effectiveness of each of the methods. The results obtained can be used in the development of multi-dimensional process control systems.

  18. A simulation training evaluation method for distribution network fault based on radar chart

    Directory of Open Access Journals (Sweden)

    Yuhang Xu

    2018-01-01

    Full Text Available In order to solve the problem of automatic evaluation of dispatcher fault simulation training in distribution network, a simulation training evaluation method based on radar chart for distribution network fault is proposed. The fault handling information matrix is established to record the dispatcher fault handling operation sequence and operation information. The four situations of the dispatcher fault isolation operation are analyzed. The fault handling anti-misoperation rule set is established to describe the rules prohibiting dispatcher operation. Based on the idea of artificial intelligence reasoning, the feasibility of dispatcher fault handling is described by the feasibility index. The relevant factors and evaluation methods are discussed from the three aspects of the fault handling result feasibility, the anti-misoperation correctness and the operation process conciseness. The detailed calculation formula is given. Combining the independence and correlation between the three evaluation angles, a comprehensive evaluation method of distribution network fault simulation training based on radar chart is proposed. The method can comprehensively reflect the fault handling process of dispatchers, and comprehensively evaluate the fault handling process from various angles, which has good practical value.

  19. Application of the Tikhonov regularization method to wind retrieval from scatterometer data I. Sensitivity analysis and simulation experiments

    International Nuclear Information System (INIS)

    Zhong Jian; Huang Si-Xun; Du Hua-Dong; Zhang Liang

    2011-01-01

    The scatterometer is an instrument that provides all-day, large-scale wind field information, and its application, especially to wind retrieval, has long attracted meteorologists. Several factors cause large wind direction errors, so it is important to determine where the error mainly comes from: the background field, the normalized radar cross-section (NRCS), or the wind retrieval method itself. First, based on SDP2.0, the simulated ‘true’ NRCS is calculated from the simulated ‘true’ wind through the geophysical model function NSCAT2. The simulated background field is generated by adding noise to the simulated ‘true’ wind under a non-divergence constraint, and the simulated ‘measured’ NRCS is formed by adding noise to the simulated ‘true’ NRCS. Sensitivity experiments are then carried out, and the new regularization method is used to improve ambiguity removal in simulation experiments. The results show that the accuracy of wind retrieval is more sensitive to noise in the background than to noise in the measured NRCS; compared with the two-dimensional variational (2DVAR) ambiguity removal method, the accuracy of wind retrieval can be improved with the new Tikhonov regularization method by choosing an appropriate regularization parameter, especially when the background error is large. The work provides important information and a new method for wind retrieval with real data. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)
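
    As a generic illustration of the Tikhonov idea in its simplest linear form (the actual wind-retrieval cost function is nonlinear and tied to the NSCAT2 model function, which is not reproduced here), a minimal sketch might look as follows; the matrix A, data b and parameter lam are placeholders.

      import numpy as np

      def tikhonov_solve(A, b, lam):
          """Solve argmin_x ||A x - b||^2 + lam * ||x||^2 via the normal equations.
          A, b and lam are illustrative placeholders, not scatterometer quantities."""
          n = A.shape[1]
          return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

      # A larger lam damps noise amplification at the cost of extra bias, mirroring
      # the trade-off controlled by the regularization parameter discussed above.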

  20. The Numerical Welding Simulation - Developments and Validation of Simplified and Bead Lumping Methods

    International Nuclear Information System (INIS)

    Baup, Olivier

    2001-01-01

    The aim of this work was to study the TIG multipass welding process on stainless steel by means of numerical methods and then to work out simplified and bead-lumping methods in order to reduce the set-up and run times of these calculations. A simulation was used as reference for the validation of these methods; after presentation of the test series that led to the option choices of this calculation (2D generalised plane strain, elastoplastic model with isotropic hardening, hardening restoration at high temperatures), various simplifications were tried on a plate geometry. These simplifications concerned various modelling points while keeping a correct representation of plastic flow in the plate. Using a reduced number of thermal fields characterising the bead deposit and a small number of tensile curves gives interesting results and significantly decreases the computing times. In addition, various bead-lumping methods have been studied, concerning both the shape and the thermal treatment of the macro-deposits. The macro-deposit shapes studied are L-shaped, layer-shaped, or represent two beads one on top of the other. Among these three methods, only those using a small number of lumped beads gave poor results, since the thermo-mechanical history was deeply modified near and inside the weld. Thereafter, the simplified methods were applied to a tubular geometry. On this new geometry, experimental measurements were made during welding, which allowed a validation of the reference calculation. Simplified and reference calculations gave approximately the same stress fields as found on the plate geometry. Finally, the last part of this document presents a procedure for automatic data setting that significantly reduces the calculation preparation phase. It has been applied to the calculation of thick pipe welding in 90 beads; the results are compared with a simplified simulation realised by Framatome and with experimental measurements. A bead by

  1. The Monte Carlo Simulation Method for System Reliability and Risk Analysis

    CERN Document Server

    Zio, Enrico

    2013-01-01

    Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application for realistic system modeling.   Whilst many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples will be provided in support to the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies will be introduced to provide the practical value of the most advanced techniques.   This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergra...
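
    A toy illustration of the kind of sampling the book covers is sketched below: a crude Monte Carlo estimate of the unreliability of a two-component series system. The failure rates, mission time and system structure are invented for the example and are not taken from the book.

      import numpy as np

      rng = np.random.default_rng(0)

      def mc_unreliability(n_samples=100_000, mission_time=1000.0, lam1=1e-3, lam2=5e-4):
          """Estimate P(system fails before mission_time) for two exponential components
          in series; all numbers here are illustrative placeholders."""
          t1 = rng.exponential(1.0 / lam1, n_samples)
          t2 = rng.exponential(1.0 / lam2, n_samples)
          system_life = np.minimum(t1, t2)      # a series system fails at the first failure
          return np.mean(system_life < mission_time)

      print(f"Estimated unreliability: {mc_unreliability():.4f}")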

  2. Resolved-particle simulation by the Physalis method: Enhancements and new capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Sierakowski, Adam J., E-mail: sierakowski@jhu.edu [Department of Mechanical Engineering, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Prosperetti, Andrea [Department of Mechanical Engineering, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Faculty of Science and Technology and J.M. Burgers Centre for Fluid Dynamics, University of Twente, P.O. Box 217, 7500 AE Enschede (Netherlands)

    2016-03-15

    We present enhancements and new capabilities of the Physalis method for simulating disperse multiphase flows using particle-resolved simulation. The current work enhances the previous method by incorporating a new type of pressure-Poisson solver that couples with a new Physalis particle pressure boundary condition scheme and a new particle interior treatment to significantly improve overall numerical efficiency. Further, we implement a more efficient method of calculating the Physalis scalar products and incorporate short-range particle interaction models. We provide validation and benchmarking for the Physalis method against experiments of a sedimenting particle and of normal wall collisions. We conclude with an illustrative simulation of 2048 particles sedimenting in a duct. In the appendix, we present a complete and self-consistent description of the analytical development and numerical methods.

  3. Comparison of meaningful learning characteristics in simulated nursing practice after traditional versus computer-based simulation method: a qualitative videography study.

    Science.gov (United States)

    Poikela, Paula; Ruokamo, Heli; Teräs, Marianne

    2015-02-01

    Nursing educators must ensure that nursing students acquire the necessary competencies; finding the most purposeful teaching methods and encouraging learning through meaningful learning opportunities is necessary to meet this goal. We investigated student learning in a simulated nursing practice using videography. The purpose of this paper is to examine how two different teaching methods presented students' meaningful learning in a simulated nursing experience. The 6-hour study was divided into three parts: part I, general information; part II, training; and part III, simulated nursing practice. Part II was delivered by two different methods: a computer-based simulation and a lecture. The study was carried out in the simulated nursing practice of two universities of applied sciences in Northern Finland. The participants in parts I and II were 40 first-year nursing students; 12 student volunteers continued to part III. A qualitative analysis method was used. The data were collected using video recordings and analyzed by videography. The students who used a computer-based simulation program were more likely to report meaningful learning themes than those who were first exposed to the lecture method. Educators should be encouraged to use computer-based simulation teaching in conjunction with other teaching methods to ensure that nursing students are able to receive the greatest educational benefits. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Application of State Quantization-Based Methods in HEP Particle Transport Simulation

    Science.gov (United States)

    Santi, Lucio; Ponieman, Nicolás; Jun, Soon Yung; Genser, Krzysztof; Elvira, Daniel; Castro, Rodrigo

    2017-10-01

    Simulation of particle-matter interactions in complex geometries is one of the main tasks in high energy physics (HEP) research. An essential aspect of it is accurate and efficient particle transportation in a non-uniform magnetic field, which includes the handling of volume crossings within a predefined 3D geometry. Quantized State Systems (QSS) is a family of numerical methods that provides attractive features for particle transportation processes, such as dense output (sequences of polynomial segments changing only according to accuracy-driven discrete events) and lightweight detection and handling of volume crossings (based on simple root-finding of polynomial functions). In this work we present a proof-of-concept performance comparison between a QSS-based standalone numerical solver and an application based on the Geant4 simulation toolkit, with its default Runge-Kutta based adaptive step method. In a case study with a charged particle circulating in a vacuum (with interactions with matter turned off), in a uniform magnetic field, and crossing up to 200 volume boundaries twice per turn, simulation results showed speedups of up to 6 times in favor of QSS, while it was 10 times slower in the case with zero volume boundaries.
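
    The appeal of dense polynomial output for crossing detection can be sketched in a few lines of Python; the segment coefficients and boundary value below are hypothetical stand-ins and do not come from Geant4 or any QSS solver.

      import numpy as np

      def next_boundary_crossing(coeffs, boundary, t_max):
          """Given a local polynomial segment x(t) = sum_k coeffs[k] * t**k (dense output),
          return the earliest time in (0, t_max] where x(t) reaches `boundary`, or None.
          Purely illustrative of cheap crossing detection via polynomial root-finding."""
          shifted = np.array(coeffs, float)
          shifted[0] -= boundary
          roots = np.roots(shifted[::-1])          # numpy expects highest-degree coefficient first
          real = roots[np.isclose(roots.imag, 0.0)].real
          valid = real[(real > 0.0) & (real <= t_max)]
          return valid.min() if valid.size else None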

  5. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    Science.gov (United States)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

    This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods, considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable, highly nonlinear behavior including fracture-type strength and stiffness degradation. In case study three, the implicit Newmark method with a fixed number of iterations is applied to hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and the fixed number of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and to predict the accuracy and stability of the simulations.
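
    For readers unfamiliar with the implicit Newmark scheme referred to above, a minimal single-step, linear sketch is given below; M, C, K and the loading are generic placeholders, and the fixed-iteration and substructuring logic of the actual hybrid tests is not reproduced.

      import numpy as np

      def newmark_step(M, C, K, u, v, a, p_next, dt, beta=0.25, gamma=0.5):
          """One implicit Newmark step for M*a + C*v + K*u = p (linear case).
          In a hybrid test the restoring force K*u would come from the physical substructure."""
          K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
          p_eff = (p_next
                   + M @ (u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
                   + C @ (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                          + dt * (gamma / (2 * beta) - 1) * a))
          u_next = np.linalg.solve(K_eff, p_eff)
          v_next = (gamma / (beta * dt) * (u_next - u)
                    + (1 - gamma / beta) * v + dt * (1 - gamma / (2 * beta)) * a)
          a_next = (u_next - u) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
          return u_next, v_next, a_next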

  6. Architecture oriented modeling and simulation method for combat mission profile

    Directory of Open Access Journals (Sweden)

    CHEN Xia

    2017-05-01

    Full Text Available In order to effectively analyze the system behavior and performance of a combat mission profile, an architecture-oriented modeling and simulation method is proposed. Starting from the architecture modeling, this paper describes the mission profile based on the definition from the National Military Standard of China and the US Department of Defense Architecture Framework (DoDAF) model, and constructs the architecture model of the mission profile. Then the transformation relationship between the architecture model and the agent simulation model is proposed to form the executable mission profile model. Finally, taking the air-defense mission profile as an example, the agent simulation model is established based on the architecture model, and the input and output relations of the simulation model are analyzed. This provides methodological guidance for combat mission profile design.

  7. Global mass conservation method for dual-continuum gas reservoir simulation

    KAUST Repository

    Wang, Yi; Sun, Shuyu; Gong, Liang; Yu, Bo

    2018-01-01

    In this paper, we find that the numerical simulation of gas flow in dual-continuum porous media may generate unphysical or non-robust results when a regular finite difference method is used. The reasons are the unphysical mass loss caused by the gas compressibility and the loss of diagonal dominance of the discretized equations caused by the non-linear well term. The well term contains the product of density and pressure. For oil flow, density is independent of pressure, so the well term is linear. For gas flow, density is related to pressure by the gas law, so the well term is non-linear. To avoid these two problems, numerical methods are proposed that use the mass balance relation and a local linearization of the non-linear source term to ensure global mass conservation and diagonal dominance of the discretized equations in the computation. The proposed numerical methods are successfully applied to dual-continuum gas reservoir simulation. Mass conservation is satisfied and the computation becomes robust. Numerical results show that the production efficiency is very sensitive to the location of the production well relative to the large-permeability region: it decreases noticeably when the production well is moved from the large-permeability region to the small-permeability region, even when the well is very close to the interface of the two regions. The production well is therefore suggested to be placed inside the large-permeability region, regardless of its specific position.
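
    A rough, generic sketch of the local-linearization idea for a non-linear well term is given below; the function names and single-cell view are illustrative only and do not reproduce the paper's actual discretization. The derivative part is kept implicit so that it strengthens the matrix diagonal, while the remainder goes to the right-hand side.

      def linearize_well_term(p_old, rho, drho_dp, q_well):
          """Local linearization of W(p) = q_well * rho(p) about p_old:
          W(p) ~= W(p_old) + q_well * drho_dp(p_old) * (p - p_old).
          Returns the implicit diagonal coefficient and the explicit right-hand-side part."""
          diag = q_well * drho_dp(p_old)            # kept implicit -> added to the diagonal
          rhs = q_well * rho(p_old) - diag * p_old  # explicit remainder -> right-hand side
          return diag, rhs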

  8. Global mass conservation method for dual-continuum gas reservoir simulation

    KAUST Repository

    Wang, Yi

    2018-03-17

    In this paper, we find that the numerical simulation of gas flow in dual-continuum porous media may generate unphysical or non-robust results when a regular finite difference method is used. The reasons are the unphysical mass loss caused by the gas compressibility and the loss of diagonal dominance of the discretized equations caused by the non-linear well term. The well term contains the product of density and pressure. For oil flow, density is independent of pressure, so the well term is linear. For gas flow, density is related to pressure by the gas law, so the well term is non-linear. To avoid these two problems, numerical methods are proposed that use the mass balance relation and a local linearization of the non-linear source term to ensure global mass conservation and diagonal dominance of the discretized equations in the computation. The proposed numerical methods are successfully applied to dual-continuum gas reservoir simulation. Mass conservation is satisfied and the computation becomes robust. Numerical results show that the production efficiency is very sensitive to the location of the production well relative to the large-permeability region: it decreases noticeably when the production well is moved from the large-permeability region to the small-permeability region, even when the well is very close to the interface of the two regions. The production well is therefore suggested to be placed inside the large-permeability region, regardless of its specific position.

  9. Assessing methods for dealing with treatment switching in randomised controlled trials: a simulation study

    Directory of Open Access Journals (Sweden)

    Latimer Nicholas

    2011-01-01

    Full Text Available Abstract Background We investigate methods used to analyse the results of clinical trials with survival outcomes in which some patients switch from their allocated treatment to another trial treatment. These include simple methods which are commonly used in the medical literature and may be subject to selection bias if patients who switch are not typical of the population as a whole. Methods which attempt to adjust the estimated treatment effect, either through adjustment to the hazard ratio or via accelerated failure time models, were also considered. A simulation study was conducted to assess the performance of each method in a number of different scenarios. Results 16 different scenarios were identified which differed by the proportion of patients switching, the underlying prognosis of switchers and the size of the true treatment effect. 1000 datasets were simulated for each of these and all methods applied. Selection bias was observed in simple methods when the difference in survival between switchers and non-switchers was large. A number of methods, particularly the AFT method of Branson and Whitehead, were found to give less biased estimates of the true treatment effect in these situations. Conclusions Simple methods are often not appropriate to deal with treatment switching. Alternative approaches, such as the Branson and Whitehead method to adjust for switching, should be considered.

  10. SiO2-Ta2O5 sputtering yields: simulated and experimental results

    International Nuclear Information System (INIS)

    Vireton, E.; Ganau, P.; Mackowski, J.M.; Michel, C.; Pinard, L.; Remillieux, A.

    1994-09-01

    To improve mirror coatings, we have modeled the sputtering of binary oxide targets using the TRIM code. First, we proposed a method to calculate the TRIM input parameters using, on the one hand, a thermodynamic cycle and, on the other hand, Malherbe's results. Secondly, an iterative procedure provided the steady-state oxide target compositions produced by ion bombardment. Thirdly, we presented a model to obtain experimental sputtering yields. Fourthly, for the (Ar - SiO2) pair, we determined that the steady-state target is silica; good agreement between simulated and experimental yields versus ion incidence angle was found. For the (Ar - Ta2O5) pair, the concept of preferential sputtering has to be introduced to explain the discrepancy between simulation and experiment; in this case, the steady-state target is tantalum monoxide. For the (Ar - Ta(+O2)) pair, i.e. tantalum sputtered by argon ions in a reactive oxygen atmosphere, the new concept of ion-beam-stimulated oxidation has to be taken into account. We assumed that the tantalum target becomes a Ta2O5 one in the reactive oxygen atmosphere; the subsequent mechanism is then similar to that of the previous pair, and a steady-state tantalum monoxide target is again obtained. Comparison between simulated and experimental sputtering yields versus ion incidence angle gave very good agreement. By simulation, we found that the tantalum monoxide target is at least 15 angstrom thick. These results are compatible with Malherbe's and Taglauer's. (authors)

  11. Spectral methods in numerical plasma simulation

    International Nuclear Information System (INIS)

    Coutsias, E.A.; Hansen, F.R.; Huld, T.; Knorr, G.; Lynov, J.P.

    1989-01-01

    An introduction is given to the use of spectral methods in numerical plasma simulation. As examples of the use of spectral methods, solutions to the two-dimensional Euler equations in both a simple, doubly periodic region, and on an annulus will be shown. In the first case, the solution is expanded in a two-dimensional Fourier series, while a Chebyshev-Fourier expansion is employed in the second case. A new, efficient algorithm for the solution of Poisson's equation on an annulus is introduced. Problems connected to aliasing and to short wavelength noise generated by gradient steepening are discussed. (orig.)
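
    To make the doubly periodic case concrete, a minimal Fourier-collocation Poisson solver might look like the sketch below; this is generic NumPy code in the spirit of the spectral expansions described, not the authors' solver or their new annulus algorithm.

      import numpy as np

      def poisson_fft(f, L=2 * np.pi):
          """Solve -Laplacian(u) = f on a doubly periodic square of side L via Fourier
          collocation (zero-mean right-hand side assumed)."""
          n = f.shape[0]
          k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi
          kx, ky = np.meshgrid(k, k, indexing="ij")
          k2 = kx**2 + ky**2
          k2[0, 0] = 1.0                        # avoid division by zero for the mean mode
          u_hat = np.fft.fft2(f) / k2
          u_hat[0, 0] = 0.0                     # fix the arbitrary additive constant
          return np.real(np.fft.ifft2(u_hat))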

  12. Computerized method for X-ray angular distribution simulation in radiological systems

    International Nuclear Information System (INIS)

    Marques, Marcio A.; Oliveira, Henrique J.Q. de; Frere, Annie F.; Schiabel, Homero; Marques, Paulo M.A.

    1996-01-01

    A method to simulate the changes in the X-ray angular distribution (the Heel effect) for radiologic imaging systems is presented. The simulation method is designed to predict images for any exposure technique, considering that this distribution is the cause of the intensity variation along the radiation field

  13. Evaluation of a proposed optimization method for discrete-event simulation models

    Directory of Open Access Journals (Sweden)

    Alexandre Ferreira de Pinho

    2012-12-01

    Full Text Available Optimization methods combined with computer-based simulation have been utilized in a wide range of manufacturing applications. However, with current technology these methods exhibit low performance and are only able to manipulate a single decision variable at a time. Thus, the objective of this article is to evaluate a proposed optimization method for discrete-event simulation models based on genetic algorithms, which is more efficient in terms of computational time when compared to software packages on the market. It should be emphasized that the response quality will not be altered; that is, the proposed method maintains the solutions' effectiveness. The study therefore draws a comparison between the proposed method and a simulation tool already available on the market that has been examined in the academic literature. Conclusions are presented, confirming the proposed optimization method's efficiency.

  14. Results of Aging Tests of Vendor-Produced Blended Feed Simulant

    International Nuclear Information System (INIS)

    Russell, Renee L.; Buchmiller, William C.; Cantrell, Kirk J.; Peterson, Reid A.; Rinehart, Donald E.

    2009-01-01

    The Hanford Tank Waste Treatment and Immobilization Plant (WTP) is procuring through Pacific Northwest National Laboratory (PNNL) a minimum of five 3,500 gallon batches of waste simulant for Phase 1 testing in the Pretreatment Engineering Platform (PEP). To make sure that the quality of the simulant is acceptable, the production method was scaled up starting from laboratory-prepared simulant through 15-gallon vendor prepared simulant and 250-gallon vendor prepared simulant before embarking on the production of the 3500-gallon simulant batch by the vendor. The 3500-gallon PEP simulant batches were packaged in 250-gallon high molecular weight polyethylene totes at NOAH Technologies. The simulant was stored in an environmentally controlled environment at NOAH Technologies within their warehouse before blending or shipping. For the 15-gallon, 250-gallon, and 3500-gallon batch 0, the simulant was shipped in ambient temperature trucks with shipment requiring nominally 3 days. The 3500-gallon batch 1 traveled in a 70-75 F temperature controlled truck. Typically the simulant was uploaded in a PEP receiving tank within 24-hours of receipt. The first uploading required longer with it stored outside. Physical and chemical characterization of the 250-gallon batch was necessary to determine the effect of aging on the simulant in transit from the vendor and in storage before its use in the PEP. Therefore, aging tests were conducted on the 250-gallon batch of the vendor-produced PEP blended feed simulant to identify and determine any changes to the physical characteristics of the simulant when in storage. The supernate was also chemically characterized. Four aging scenarios for the vendor-produced blended simulant were studied: (1) stored outside in a 250-gallon tote, (2) stored inside in a gallon plastic bottle, (3) stored inside in a well mixed 5-L tank, and (4) subject to extended temperature cycling under summer temperature conditions in a gallon plastic bottle. The following

  15. Lagrangian numerical methods for ocean biogeochemical simulations

    Science.gov (United States)

    Paparella, Francesco; Popolizio, Marina

    2018-05-01

    We propose two closely-related Lagrangian numerical methods for the simulation of physical processes involving advection, reaction and diffusion. The methods are intended to be used in settings where the flow is nearly incompressible and the Péclet numbers are so high that resolving all the scales of motion is unfeasible. This is commonplace in ocean flows. Our methods consist in augmenting the method of characteristics, which is suitable for advection-reaction problems, with couplings among nearby particles, producing fluxes that mimic diffusion, or unresolved small-scale transport. The methods conserve mass, obey the maximum principle, and allow the strength of the diffusive terms to be tuned down to zero, while avoiding unwanted numerical dissipation effects.

  16. Inferring the photometric and size evolution of galaxies from image simulations. I. Method

    Science.gov (United States)

    Carassou, Sébastien; de Lapparent, Valérie; Bertin, Emmanuel; Le Borgne, Damien

    2017-09-01

    Context. Current constraints on models of galaxy evolution rely on morphometric catalogs extracted from multi-band photometric surveys. However, these catalogs are altered by selection effects that are difficult to model, that correlate in non trivial ways, and that can lead to contradictory predictions if not taken into account carefully. Aims: To address this issue, we have developed a new approach combining parametric Bayesian indirect likelihood (pBIL) techniques and empirical modeling with realistic image simulations that reproduce a large fraction of these selection effects. This allows us to perform a direct comparison between observed and simulated images and to infer robust constraints on model parameters. Methods: We use a semi-empirical forward model to generate a distribution of mock galaxies from a set of physical parameters. These galaxies are passed through an image simulator reproducing the instrumental characteristics of any survey and are then extracted in the same way as the observed data. The discrepancy between the simulated and observed data is quantified, and minimized with a custom sampling process based on adaptive Markov chain Monte Carlo methods. Results: Using synthetic data matching most of the properties of a Canada-France-Hawaii Telescope Legacy Survey Deep field, we demonstrate the robustness and internal consistency of our approach by inferring the parameters governing the size and luminosity functions and their evolutions for different realistic populations of galaxies. We also compare the results of our approach with those obtained from the classical spectral energy distribution fitting and photometric redshift approach. Conclusions: Our pipeline infers efficiently the luminosity and size distribution and evolution parameters with a very limited number of observables (three photometric bands). When compared to SED fitting based on the same set of observables, our method yields results that are more accurate and free from

  17. A fluid-solid coupling simulation method for convection heat transfer coefficient considering the under-vehicle condition

    Science.gov (United States)

    Tian, C.; Weng, J.; Liu, Y.

    2017-11-01

    The convection heat transfer coefficient is one of the evaluation indexes of brake disc performance. The method used in this paper to calculate the convection heat transfer coefficient is a fluid-solid coupling simulation method, because results calculated with empirical formulas differ widely. The model, including a brake disc, a car body, a bogie and the flow field, was built, meshed and simulated in the software FLUENT. The calculation models were the standard k-epsilon model and the energy model. The under-vehicle working condition of the brake disc was considered. The coefficient of the various parts can be obtained with this method. The simulation results show that, at a speed of 160 km/h, the radiating ribs have the maximum convection heat transfer coefficient, 129.6 W/(m²·K); the average coefficient of the whole disc is 100.4 W/(m²·K); the windward side of the ribs is a positive-pressure area and the leeward side a negative-pressure area; and the maximum pressure is 2663.53 Pa.

  18. Clinical simulation as an evaluation method in health informatics

    DEFF Research Database (Denmark)

    Jensen, Sanne

    2016-01-01

    Safe work processes and information systems are vital in health care. Methods for the design of health IT focusing on patient safety are one of many initiatives trying to prevent adverse events. Possible patient safety hazards need to be investigated before health IT is integrated with local clinical work practice, including other technology and organizational structure. Clinical simulation is ideal for proactive evaluation of new technology for clinical work practice. Clinical simulations involve real end-users as they simulate the use of technology in realistic environments, performing realistic tasks. A clinical simulation study assesses effects on clinical workflow and enables identification and evaluation of patient safety hazards before implementation at a hospital. Clinical simulation also offers an opportunity to create a space in which healthcare professionals working in different...

  19. Vectorization of a particle simulation method for hypersonic rarefied flow

    Science.gov (United States)

    Mcdonald, Jeffrey D.; Baganoff, Donald

    1988-01-01

    An efficient particle simulation technique for hypersonic rarefied flows is presented at an algorithmic and implementation level. The implementation is for a vector computer architecture, specifically the Cray-2. The method models an ideal diatomic Maxwell molecule with three translational and two rotational degrees of freedom. Algorithms are designed specifically for compatibility with fine grain parallelism by reducing the number of data dependencies in the computation. By insisting on this compatibility, the method is capable of performing simulation on a much larger scale than previously possible. A two-dimensional simulation of supersonic flow over a wedge is carried out for the near-continuum limit where the gas is in equilibrium and the ideal solution can be used as a check on the accuracy of the gas model employed in the method. Also, a three-dimensional, Mach 8, rarefied flow about a finite-span flat plate at a 45 degree angle of attack was simulated. It utilized over 10^7 particles carried through 400 discrete time steps in less than one hour of Cray-2 CPU time. This problem was chosen to exhibit the capability of the method in handling a large number of particles and a true three-dimensional geometry.

  20. Vectorization of a particle simulation method for hypersonic rarefied flow

    International Nuclear Information System (INIS)

    Mcdonald, J.D.; Baganoff, D.

    1988-01-01

    An efficient particle simulation technique for hypersonic rarefied flows is presented at an algorithmic and implementation level. The implementation is for a vector computer architecture, specifically the Cray-2. The method models an ideal diatomic Maxwell molecule with three translational and two rotational degrees of freedom. Algorithms are designed specifically for compatibility with fine grain parallelism by reducing the number of data dependencies in the computation. By insisting on this compatibility, the method is capable of performing simulation on a much larger scale than previously possible. A two-dimensional simulation of supersonic flow over a wedge is carried out for the near-continuum limit where the gas is in equilibrium and the ideal solution can be used as a check on the accuracy of the gas model employed in the method. Also, a three-dimensional, Mach 8, rarefied flow about a finite-span flat plate at a 45 degree angle of attack was simulated. It utilized over 10^7 particles carried through 400 discrete time steps in less than one hour of Cray-2 CPU time. This problem was chosen to exhibit the capability of the method in handling a large number of particles and a true three-dimensional geometry. 14 references

  1. Evaluation of the successive approximations method for acoustic streaming numerical simulations.

    Science.gov (United States)

    Catarino, S O; Minas, G; Miranda, J M

    2016-05-01

    This work evaluates the successive approximations method commonly used to predict acoustic streaming by comparing it with a direct method. The successive approximations method solves both the acoustic wave propagation and acoustic streaming by solving the first and second order Navier-Stokes equations, ignoring the first order convective effects. This method was applied to acoustic streaming in a 2D domain and the results were compared with results from the direct simulation of the Navier-Stokes equations. The velocity results showed qualitative agreement between both methods, which indicates that the successive approximations method can describe the formation of flows with recirculation. However, a large quantitative deviation was observed between the two methods. Further analysis showed that the successive approximation method solution is sensitive to the initial flow field. The direct method showed that the instantaneous flow field changes significantly due to reflections and wave interference. It was also found that convective effects contribute significantly to the wave propagation pattern. These effects must be taken into account when solving the acoustic streaming problems, since it affects the global flow. By adequately calculating the initial condition for first order step, the acoustic streaming prediction by the successive approximations method can be improved significantly.

  2. Numerical simulation of single bubbles rising through subchannels with interface tracking method

    International Nuclear Information System (INIS)

    Hiroyuki Yoshida; Takuji Nagayoshi; Hidesada Tamai; Tazuyuki Takase; Hajime Akimoto

    2005-01-01

    Full text of publication follows: Although sub-channel codes have long been used for the thermal-hydraulic analysis of fuel bundles in nuclear reactors, many correlations and empirical equations based on experimental results are needed to predict the two-phase flow behavior in detail. When no experimental data exist, as for the reduced-moderation light water reactor (RMWR) studied by the Japan Atomic Energy Research Institute (JAERI), it is therefore very difficult to obtain highly precise predictions. The RMWR core has remarkably narrow gap spacing between fuel rods (around 1 mm), which are arranged in a triangular tight-lattice configuration. To evaluate the feasibility and to optimize the thermal design of the RMWR core, a full-scale bundle test is required. However, systematic full-scale tests are difficult to perform during an initial design phase for economic and schedule reasons. Thus, we made a plan to develop a mechanistic BT model to evaluate the effects of the geometry configuration by two-phase flow numerical simulation. Within this plan, three-dimensional two-phase flow simulation codes based on the interface tracking method, the moving particle semi-implicit method and the advanced two-fluid model are being developed. In this study, as a part of this model development, a detailed two-phase flow simulation code using the interface tracking method (named TPFIT) is developed. In this paper, the TPFIT code with the advanced interface tracking method is applied to the behavior of single bubbles rising through subchannels, to verify the TPFIT code performance in complicated flow channels such as rod bundles. In the simulation, the flow channel is composed of a square duct and four tubes with outside diameters D = 12 mm. The width and height of the duct are 27.2 mm and 192 mm, respectively. In the flow channel, the tubes are used to simulate fuel rods. One center subchannel and four periphery subchannels exist in the

  3. Numerical Simulation of Plasma Antenna with FDTD Method

    International Nuclear Information System (INIS)

    Chao, Liang; Yue-Min, Xu; Zhi-Jiang, Wang

    2008-01-01

    We adopt a cylindrical-coordinate FDTD algorithm to simulate and analyse a 0.4-m-long column-configuration plasma antenna. The FDTD method is useful for solving electromagnetic problems, especially when wave characteristics and plasma properties are self-consistently related to each other. Focusing on the frequency range from 75 MHz to 400 MHz, the input impedance and radiation efficiency of plasma antennas are computed. Numerical results show that, unlike a copper antenna, the characteristics of a plasma antenna vary simultaneously with the plasma frequency and the collision frequency. This property can be used to construct a dynamically reconfigurable antenna. The investigation is meaningful and instructive for the optimization of plasma antenna design.

  4. Numerical simulation of plasma antenna with FDTD method

    International Nuclear Information System (INIS)

    Liang Chao; Xu Yuemin; Wang Zhijiang

    2008-01-01

    We adopt a cylindrical-coordinate FDTD algorithm to simulate and analyse a 0.4-m-long column-configuration plasma antenna. The FDTD method is useful for solving electromagnetic problems, especially when wave characteristics and plasma properties are self-consistently related to each other. Focusing on the frequency range from 75 MHz to 400 MHz, the input impedance and radiation efficiency of plasma antennas are computed. Numerical results show that, unlike a copper antenna, the characteristics of a plasma antenna vary simultaneously with the plasma frequency and the collision frequency. This property can be used to construct a dynamically reconfigurable antenna. The investigation is meaningful and instructive for the optimization of plasma antenna design. (authors)

  5. A high precision extrapolation method in multiphase-field model for simulating dendrite growth

    Science.gov (United States)

    Yang, Cong; Xu, Qingyan; Liu, Baicheng

    2018-05-01

    The phase-field method coupling with thermodynamic data has become a trend for predicting the microstructure formation in technical alloys. Nevertheless, the frequent access to thermodynamic database and calculation of local equilibrium conditions can be time intensive. The extrapolation methods, which are derived based on Taylor expansion, can provide approximation results with a high computational efficiency, and have been proven successful in applications. This paper presents a high precision second order extrapolation method for calculating the driving force in phase transformation. To obtain the phase compositions, different methods in solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its best accuracy. The developed second order extrapolation method along with the M-slope approach and the first order extrapolation method are applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, which demonstrate the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computation, the graphic processing unit (GPU) based parallel computing scheme is developed. The application to large-scale simulation of multi-dendrite growth in an isothermal cross-section has demonstrated the ability of the developed GPU-accelerated second order extrapolation approach for multiphase-field model.

  6. Simulation methods supporting homologation of Electronic Stability Control in vehicle variants

    Science.gov (United States)

    Lutz, Albert; Schick, Bernhard; Holzmann, Henning; Kochem, Michael; Meyer-Tuve, Harald; Lange, Olav; Mao, Yiqin; Tosolin, Guido

    2017-10-01

    Vehicle simulation has a long tradition in the automotive industry as a powerful supplement to physical vehicle testing. In the field of Electronic Stability Control (ESC) system, the simulation process has been well established to support the ESC development and application by suppliers and Original Equipment Manufacturers (OEMs). The latest regulation of the United Nations Economic Commission for Europe UN/ECE-R 13 allows also for simulation-based homologation. This extends the usage of simulation from ESC development to homologation. This paper gives an overview of simulation methods, as well as processes and tools used for the homologation of ESC in vehicle variants. The paper first describes the generic homologation process according to the European Regulation (UN/ECE-R 13H, UN/ECE-R 13/11) and U.S. Federal Motor Vehicle Safety Standard (FMVSS 126). Subsequently the ESC system is explained as well as the generic application and release process at the supplier and OEM side. Coming up with the simulation methods, the ESC development and application process needs to be adapted for the virtual vehicles. The simulation environment, consisting of vehicle model, ESC model and simulation platform, is explained in detail with some exemplary use-cases. In the final section, examples of simulation-based ESC homologation in vehicle variants are shown for passenger cars, light trucks, heavy trucks and trailers. This paper is targeted to give a state-of-the-art account of the simulation methods supporting the homologation of ESC systems in vehicle variants. However, the described approach and the lessons learned can be used as reference in future for an extended usage of simulation-supported releases of the ESC system up to the development and release of driver assistance systems.

  7. A Simulation Method Measuring Psychomotor Nursing Skills.

    Science.gov (United States)

    McBride, Helena; And Others

    1981-01-01

    The development of a simulation technique to evaluate performance of psychomotor skills in an undergraduate nursing program is described. This method is used as one admission requirement to an alternate route nursing program. With modifications, any health profession could use this technique where psychomotor skills performance is important.…

  8. Calculation of concrete shielding wall thickness for 450kVp X-ray tube with MCNP simulation and result comparison with half value layer method calculation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sang Heon; Lee, Eun Joong; Kim, Chan Kyu; Cho, Gyu Seong [Dept. of Nuclear and Quantum Engineering, KAIST, Daejeon (Korea, Republic of); Hur, Sam Suk [Sam Yong Inspection Engineering Co., Ltd., Seoul (Korea, Republic of)

    2016-11-15

    Radiation generating devices must be properly shielded for their safe application. Although institutes such as the US National Bureau of Standards and the National Council on Radiation Protection and Measurements (NCRP) have provided guidelines for shielding X-ray tubes of various purposes, industry practitioners tend to rely on the 'Half Value Layer (HVL) method', which requires relatively simple calculation compared to those guidelines. The method is based on the fact that the intensity, dose, and air kerma of a narrow beam incident on a shielding wall decrease by about half as the beam penetrates one HVL thickness of the wall. With this calculation one can adjust the shielding wall thickness to satisfy outside-wall dose or air kerma requirements. However, this may not always be adequate because 1) the strict definition of HVL deals only with intensity, and 2) the situation is different when the beam is not 'narrow': the beam quality inside the wall is distorted, and related changes in the outside-wall dose or air kerma, such as the buildup effect, occur. Therefore, more careful study is sometimes needed to verify the shielding of a specific radiation generating device. High-energy X-ray tubes operated above 400 kV and used for 'heavy' nondestructive inspection are an example; people have less experience in running and shielding such devices than with the widely used low-energy X-ray tubes operated below 300 kV. In this study, the weekly air kerma outside concrete shielding walls of various thicknesses surrounding a 450 kVp X-ray tube was calculated using MCNP simulation with the aid of the Geometry Splitting method, a well-known variance reduction technique. The comparison between the simulated result, the HVL method result, and the NCRP Report 147 safety goal of 0.02 mGy/wk on air kerma for places where the public are free to pass showed that a concrete wall of 80 cm thickness is needed to achieve the

  9. Calculation of concrete shielding wall thickness for 450kVp X-ray tube with MCNP simulation and result comparison with half value layer method calculation

    International Nuclear Information System (INIS)

    Lee, Sang Heon; Lee, Eun Joong; Kim, Chan Kyu; Cho, Gyu Seong; Hur, Sam Suk

    2016-01-01

    Radiation generating devices must be properly shielded for their safe application. Although institutes such as the US National Bureau of Standards and the National Council on Radiation Protection and Measurements (NCRP) have provided guidelines for shielding X-ray tubes of various purposes, industry practitioners tend to rely on the 'Half Value Layer (HVL) method', which requires relatively simple calculation compared to those guidelines. The method is based on the fact that the intensity, dose, and air kerma of a narrow beam incident on a shielding wall decrease by about half as the beam penetrates one HVL thickness of the wall. With this calculation one can adjust the shielding wall thickness to satisfy outside-wall dose or air kerma requirements. However, this may not always be adequate because 1) the strict definition of HVL deals only with intensity, and 2) the situation is different when the beam is not 'narrow': the beam quality inside the wall is distorted, and related changes in the outside-wall dose or air kerma, such as the buildup effect, occur. Therefore, more careful study is sometimes needed to verify the shielding of a specific radiation generating device. High-energy X-ray tubes operated above 400 kV and used for 'heavy' nondestructive inspection are an example; people have less experience in running and shielding such devices than with the widely used low-energy X-ray tubes operated below 300 kV. In this study, the weekly air kerma outside concrete shielding walls of various thicknesses surrounding a 450 kVp X-ray tube was calculated using MCNP simulation with the aid of the Geometry Splitting method, a well-known variance reduction technique. The comparison between the simulated result, the HVL method result, and the NCRP Report 147 safety goal of 0.02 mGy/wk on air kerma for places where the public are free to pass showed that a concrete wall of 80 cm thickness is needed to achieve the safety goal
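
    The half-value-layer rule described above reduces to a one-line attenuation formula, sketched below; the unshielded kerma, target kerma and concrete HVL used in the example are assumed placeholders, and buildup and spectral hardening are deliberately ignored (which is exactly the limitation the MCNP study addresses).

      import math

      def hvl_thickness(k_unshielded, k_target, hvl_cm):
          """Shield thickness from the half-value-layer rule K(x) = K0 * 0.5**(x / HVL).
          Inputs are illustrative; buildup and beam hardening are neglected."""
          return hvl_cm * math.log2(k_unshielded / k_target)

      # e.g. reduce an assumed 200 mGy/wk to the 0.02 mGy/wk goal with an assumed 3 cm concrete HVL
      print(f"{hvl_thickness(200.0, 0.02, 3.0):.1f} cm of concrete")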

  10. Simulating non-Newtonian flows with the moving particle semi-implicit method with an SPH kernel

    International Nuclear Information System (INIS)

    Xiang, Hao; Chen, Bin

    2015-01-01

    The moving particle semi-implicit (MPS) method and smoothed particle hydrodynamics (SPH) are commonly used mesh-free particle methods for free surface flows. The MPS method has advantages in incompressible flow simulation and simple programming. However, when the MPS method is extended to non-Newtonian flows, the crude kernel function is not accurate enough for the discretization of the divergence of the shear stress tensor because of particle inconsistency. This paper presents an improved MPS method with an SPH kernel to simulate non-Newtonian flows. To improve the consistency of the partial derivatives, the SPH cubic spline kernel and the Taylor series expansion are combined with the MPS method. This approach is suitable for all non-Newtonian fluids that can be described with τ = μ(|γ|)Δ (where τ is the shear stress tensor, μ is the viscosity, |γ| is the shear rate, and Δ is the strain tensor), e.g., the Casson and Cross fluids. Two examples are simulated, including Newtonian Poiseuille flow and the container filling process of a Cross fluid. The Poiseuille flow results are more accurate than those of the traditional MPS method, and the different filling processes obtained agree well with previous results, which validates the new algorithm. For the Cross fluid, the jet fracture length can be correlated with We^0.28 Fr^0.78 (We is the Weber number, Fr is the Froude number). (paper)
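
    For reference, the Cross viscosity law mentioned above is commonly written as in the sketch below; the parameter values are illustrative placeholders and not those used in the cited study.

      import numpy as np

      def cross_viscosity(shear_rate, mu0=1.0, mu_inf=0.01, k=1.0, m=0.8):
          """Cross model: mu(|gamma|) = mu_inf + (mu0 - mu_inf) / (1 + (k*|gamma|)**m),
          used to close tau = mu(|gamma|) * strain term; parameters are placeholders."""
          return mu_inf + (mu0 - mu_inf) / (1.0 + (k * np.abs(shear_rate)) ** m)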

  11. Application of accelerated simulation method on NPN bipolar transistors of different technology

    International Nuclear Information System (INIS)

    Fei Wuxiong; Zheng Yuzhan; Wang Yiyuan; Chen Rui; Li Maoshun; Lan Bo; Cui Jiangwei; Zhao Yun; Lu Wu; Ren Diyuan; Wang Zhikuan; Yang Yonghui

    2010-01-01

    Using different irradiation methods, the ionizing radiation response of NPN bipolar transistors from six different processes was investigated. The results show that enhanced low-dose-rate sensitivity clearly exists in NPN bipolar transistors from all six processes. According to the experiments, the damage produced by stepwise temperature reduction during irradiation is clearly greater than that produced by irradiation at a high dose rate. This irradiation method can therefore simulate and conservatively evaluate low-dose-rate damage, which is of great significance for radiation effects research on bipolar devices. Finally, the mechanisms behind the experimental phenomena were analyzed. (authors)

  12. INFLUENCE OF RIVER BED ELEVATION SURVEY CONFIGURATIONS AND INTERPOLATION METHODS ON THE ACCURACY OF LIDAR DTM-BASED RIVER FLOW SIMULATIONS

    Directory of Open Access Journals (Sweden)

    J. R. Santillan

    2016-09-01

    Full Text Available In this paper, we investigated how the survey configuration and the type of interpolation method can affect the accuracy of river flow simulations that use a LIDAR DTM integrated with an interpolated river bed as their main source of topographic information. Aside from determining the accuracy of the individually generated river bed topographies, we also assessed the overall accuracy of the river flow simulations in terms of maximum flood depth and extent. Four survey configurations consisting of river bed elevation data points arranged as cross-section (XS), zig-zag (ZZ), river banks-centerline (RBCL), and river banks-centerline-zig-zag (RBCLZZ), and two interpolation methods (Inverse Distance-Weighted and Ordinary Kriging) were considered. Major results show that the choice of survey configuration, rather than the interpolation method, has a significant effect on the accuracy of the interpolated river bed surfaces, and subsequently on the accuracy of the river flow simulations. The RMSEs of the interpolated surfaces and the model results vary from one configuration to another, and depend on how evenly each configuration collects river bed elevation data points. The large RMSEs for the RBCL configuration and the low RMSEs for the XS configuration confirm that as the data points become evenly spaced and cover more portions of the river, the resulting interpolated surface and the river flow simulation in which it was used also become more accurate. The XS configuration with Ordinary Kriging (OK) as the interpolation method provided the best river bed interpolation and river flow simulation results. The RBCL configuration, regardless of the interpolation algorithm used, resulted in the least accurate river bed surfaces and simulation results. Based on the accuracy analysis, using the XS configuration to collect river bed data points and applying the OK method to interpolate the river bed topography are the best choices to produce satisfactory river flow simulation outputs
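
    As a point of reference, the Inverse Distance-Weighted scheme named above can be sketched in a few lines of NumPy; the arrays and the power parameter are generic placeholders, not the survey data of the study.

      import numpy as np

      def idw(xy_known, z_known, xy_query, power=2.0):
          """Inverse Distance-Weighted interpolation of river-bed elevations (illustrative).
          xy_known: (n, 2) surveyed points, z_known: (n,) elevations, xy_query: (m, 2)."""
          d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
          d = np.maximum(d, 1e-12)              # avoid division by zero at sample points
          w = 1.0 / d**power
          return (w @ z_known) / w.sum(axis=1)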

  13. Peach Bottom Turbine Trip Simulations with RETRAN Using INER/TPC BWR Transient Analysis Method

    International Nuclear Information System (INIS)

    Kao Lainsu; Chiang, Show-Chyuan

    2005-01-01

    The work described in this paper is a set of benchmark calculations of the pressurization-transient turbine trip tests performed at the Peach Bottom boiling water reactor (BWR). It is part of an overall effort to provide a qualification basis for the INER/TPC BWR transient analysis method developed for the Kuosheng and Chinshan plants. The method primarily utilizes an advanced system thermal-hydraulics code, RETRAN02/MOD5, for transient safety analyses. Since pressurization transients result in strong coupling between core neutronic and system thermal-hydraulic responses, the INER/TPC method employs the one-dimensional kinetics model in RETRAN with a cross-section data library generated by the Studsvik-CMS code package for the transient calculations. The Peach Bottom Turbine Trip (PBTT) tests, including TT1, TT2, and TT3, were successfully performed in the plant and have long served as common standards for licensing method qualification. It is an essential requirement for licensing purposes to verify the integral capabilities and accuracy of the codes and models of the INER/TPC method in simulating such pressurization transients. Specific Peach Bottom plant models, including both neutronics and thermal hydraulics, are developed using modeling approaches and experience generally adopted in the INER/TPC method. Important model assumptions in RETRAN for the PBTT test simulations are described in this paper. Simulation calculations are performed with best-estimate initial and boundary conditions obtained from plant test measurements. The calculation results presented in this paper demonstrate that the INER/TPC method is capable of accurately calculating the core and system transient behaviors of the tests. Excellent agreement, in both trends and magnitudes, between the RETRAN calculation results and the PBTT measurements shows reliable qualification of the codes, users, and models involved in the method. The RETRAN calculated peak neutron fluxes of the PBTT

  14. A regularized vortex-particle mesh method for large eddy simulation

    DEFF Research Database (Denmark)

    Spietz, Henrik Juul; Walther, Jens Honore; Hejlesen, Mads Mølholm

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel, higher-order FFT-based solver for the Poisson equation. Arbitrarily high order is achieved through regularization of singular Green’s function solutions to the Poisson equation, and recently we have derived novel high-order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations, hence we use the method for Large Eddy...

  15. A physiological production model for cacao : results of model simulations

    NARCIS (Netherlands)

    Zuidema, P.A.; Leffelaar, P.A.

    2002-01-01

    CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.

  16. Multi-level Monte Carlo Methods for Efficient Simulation of Coulomb Collisions

    Science.gov (United States)

    Ricketson, Lee

    2013-10-01

    We discuss the use of multi-level Monte Carlo (MLMC) schemes--originally introduced by Giles for financial applications--for the efficient simulation of Coulomb collisions in the Fokker-Planck limit. The scheme is based on a Langevin treatment of collisions, and reduces the computational cost of achieving an RMS error scaling as ε from O(ε^-3)--for standard Langevin methods and binary collision algorithms--to the theoretically optimal scaling O(ε^-2) for the Milstein discretization, and to O(ε^-2 (log ε)^2) with the simpler Euler-Maruyama discretization. In practice, this speeds up simulation by factors up to 100. We summarize standard MLMC schemes, describe some tricks for achieving the optimal scaling, present results from a test problem, and discuss the method's range of applicability. This work was performed under the auspices of the U.S. DOE by the University of California, Los Angeles, under grant DE-FG02-05ER25710, and by LLNL under contract DE-AC52-07NA27344.
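
    To illustrate the MLMC idea in the simplest possible setting, the following sketch builds a two-level estimator for a scalar Ornstein-Uhlenbeck-type SDE with Euler-Maruyama and coupled coarse/fine Brownian increments; the Coulomb-collision Langevin operator and the Milstein variant discussed in the talk are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_pair(x0, sigma, T, n_fine, n_paths):
    """Simulate dX = -X dt + sigma dW with Euler-Maruyama on a fine grid and
    a coarse grid (half the steps) driven by the same Brownian increments."""
    dt_f = T / n_fine
    xf = np.full(n_paths, x0)
    xc = np.full(n_paths, x0)
    for _ in range(n_fine // 2):
        dw1 = rng.normal(0.0, np.sqrt(dt_f), n_paths)
        dw2 = rng.normal(0.0, np.sqrt(dt_f), n_paths)
        xf += -xf * dt_f + sigma * dw1                   # two fine steps
        xf += -xf * dt_f + sigma * dw2
        xc += -xc * (2 * dt_f) + sigma * (dw1 + dw2)     # one coarse step, same noise
    return xf, xc

# two-level estimator of E[X(T)]: many cheap coarse paths + few fine correction paths
xf0, _ = euler_pair(1.0, 0.5, T=1.0, n_fine=8, n_paths=200_000)    # level 0 (coarse)
xf1, xc1 = euler_pair(1.0, 0.5, T=1.0, n_fine=16, n_paths=20_000)  # level 1 correction
estimate = xf0.mean() + (xf1 - xc1).mean()
print(estimate, np.exp(-1.0))   # exact mean of the SDE, x0*exp(-T), for comparison
```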

  17. Analysis of Plane-Parallel Electron Beam Propagation in Different Media by Numerical Simulation Methods

    Science.gov (United States)

    Miloichikova, I. A.; Bespalov, V. I.; Krasnykh, A. A.; Stuchebrov, S. G.; Cherepennikov, Yu. M.; Dusaev, R. R.

    2018-04-01

    Simulation by the Monte Carlo method is widely used to calculate the character of ionizing radiation interaction with matter. A wide variety of programs based on this method allows users to choose the most suitable package for solving their computational problems. In turn, it is important to know the exact restrictions of the numerical systems in order to avoid gross errors. Results of an assessment of the feasibility of applying the program PCLab (Computer Laboratory, version 9.9) to numerical simulation of the electron energy distribution absorbed in beryllium, aluminum, gold, and water for industrial, research, and clinical beams are presented. The data obtained using the programs ITS and Geant4, the most popular software packages for solving such problems, and the program PCLab are presented in graphic form. A comparison and an analysis of the results obtained demonstrate the feasibility of applying the program PCLab to simulation of the absorbed energy distribution and dose of electrons in various materials for energies in the range 1-20 MeV.

  18. Simulation for light extraction in light emitting diode using finite domain time difference method

    International Nuclear Information System (INIS)

    Hong, Jun Hee; Park, Si Hyun

    2008-01-01

    InGaN-based LEDs are indispensable for traffic lights, full color displays, back lights in liquid crystal displays, and general lighting. The demand for high efficiency LEDs is increasing. Recently we reported the improvement of the light extraction efficiency of InGaN-based LEDs. In this paper we show a suitable three-dimensional (3D) FDTD simulation method for LED simulation, and we apply our FDTD simulation to our PNS LED structures, comparing the simulation results with the experimental results. For a real FDTD simulation, we first must consider the spatial and temporal grid size. In order to obtain an accurate result, the spatial grid size must be small enough that the features of the field can be resolved. We computed the field power at each time step at a surface 0.3 mm away from the interface between GaN and air and integrated it over the surface. The calculations were conducted for PNS LEDs employing different heights of SiO_2 columns, that is, h=160 nm, h=350 nm, h=550 nm, h=750 nm, and h=950 nm. Simulation results for the different heights are shown in Fig. 1(a,b). All simulation curves follow the rough trend that the extraction efficiency increases with column height, reaches a maximum at about 600 nm height, and then decreases with height. This is consistent with the trend from our experiments. Our FDTD simulation offers a possibility for the design of LED structures with high extraction efficiency
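
    Since the abstract does not give its discretization, the following is only a generic one-dimensional Yee-scheme sketch in normalized units, showing the leapfrog field updates and the Courant-limited time step that underlie any FDTD run; the 3D LED geometry and SiO_2 columns are not modeled.

```python
import numpy as np

n_cells, n_steps = 400, 1000
courant = 0.5                       # dt * c / dx, must be <= 1 in 1D for stability
ez = np.zeros(n_cells)              # electric field on integer grid points
hy = np.zeros(n_cells - 1)          # magnetic field on half grid points

for t in range(n_steps):
    # leapfrog updates of the 1D Maxwell curl equations (normalized units)
    hy += courant * (ez[1:] - ez[:-1])
    ez[1:-1] += courant * (hy[1:] - hy[:-1])
    # soft Gaussian source injected near the left boundary
    ez[20] += np.exp(-((t - 60) / 20.0) ** 2)

print(ez.max())                     # pulse propagating through the grid
```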

  19. Comparison of surface mass balance of ice sheets simulated by positive-degree-day method and energy balance approach

    Directory of Open Access Journals (Sweden)

    E. Bauer

    2017-07-01

    Full Text Available Glacial cycles of the late Quaternary are controlled by the asymmetrically varying mass balance of continental ice sheets in the Northern Hemisphere. Surface mass balance is governed by processes of ablation and accumulation. Here two ablation schemes, the positive-degree-day (PDD) method and the surface energy balance (SEB) approach, are compared in transient simulations of the last glacial cycle with the Earth system model of intermediate complexity CLIMBER-2. The standard version of the CLIMBER-2 model incorporates the SEB approach and simulates ice volume variations in reasonable agreement with paleoclimate reconstructions during the entire last glacial cycle. Using results from the standard CLIMBER-2 model version, we simulated ablation with the PDD method in offline mode by applying different combinations of three empirical parameters of the PDD scheme. We found that none of the parameter combinations allow us to simulate a surface mass balance of the American and European ice sheets that is similar to that obtained with the standard SEB method. The use of constant values for the empirical PDD parameters led either to too much ablation during the first phase of the last glacial cycle or too little ablation during the final phase. We then substituted the standard SEB scheme in CLIMBER-2 with the PDD scheme and performed a suite of fully interactive (online) simulations of the last glacial cycle with different combinations of PDD parameters. The results of these simulations confirmed the results of the offline simulations: no combination of PDD parameters realistically simulates the evolution of the ice sheets during the entire glacial cycle. The use of constant parameter values in the online simulations leads either to a buildup of too much ice volume at the end of glacial cycle or too little ice volume at the beginning. Even when the model correctly simulates global ice volume at the last glacial maximum (21 ka), it is unable to simulate
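
    A minimal sketch of the positive-degree-day idea, with hypothetical degree-day factors for snow and ice; CLIMBER-2's actual PDD parameters and its treatment of sub-monthly temperature variability are not reproduced.

```python
import numpy as np

def pdd_ablation(daily_temp_c, snow_fraction, ddf_snow=3.0, ddf_ice=8.0):
    """Annual ablation (mm water equivalent) from a positive-degree-day sum.

    daily_temp_c  : array of daily mean surface temperatures in deg C
    snow_fraction : fraction of melt assigned to snow (rest melts ice), illustrative
    ddf_snow/ice  : illustrative degree-day factors in mm w.e. per deg C per day
    """
    pdd = np.sum(np.maximum(daily_temp_c, 0.0))          # positive degree-day sum
    return pdd * (snow_fraction * ddf_snow + (1 - snow_fraction) * ddf_ice)

# toy seasonal cycle: sinusoidal year with -10 deg C mean and 15 deg C amplitude
days = np.arange(365)
temp = -10.0 + 15.0 * np.sin(2 * np.pi * (days - 80) / 365.0)
print(pdd_ablation(temp, snow_fraction=0.6))
```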

  20. Simulation of multivariate stationary stochastic processes using dimension-reduction representation methods

    Science.gov (United States)

    Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo

    2018-03-01

    In view of the Fourier-Stieltjes integral formula of multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing the multivariate stationary stochastic process with a few elementary random variables, bypassing the challenge of high-dimensional random variables inherent in conventional Monte Carlo methods. In order to accelerate the numerical simulation, the Fast Fourier Transform (FFT) technique is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods with 2 and 3 elementary random variables. Numerical simulation reveals the usefulness of the dimension-reduction representation methods.
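
    For orientation, the following sketch shows the classical (non-reduced) spectral representation that the DR-SRM builds on: a stationary sample is synthesized as a sum of cosines with random phases drawn from a prescribed power spectral density. The spectrum and parameters are illustrative, and the dimension-reduction constraints and FFT acceleration of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def srm_sample(psd, w_max, n_freq, t):
    """One realization of a zero-mean stationary process by the classical
    spectral representation (sum of cosines with random phases).

    psd    : callable two-sided power spectral density S(w), evaluated for w >= 0
    w_max  : cutoff frequency; n_freq : number of frequency intervals
    t      : array of time instants at which the sample is evaluated
    """
    dw = w_max / n_freq
    w = (np.arange(n_freq) + 0.5) * dw                  # midpoint frequencies
    amp = np.sqrt(2.0 * psd(w) * dw)                    # harmonic amplitudes
    phi = rng.uniform(0.0, 2.0 * np.pi, n_freq)         # independent random phases
    return np.sqrt(2.0) * np.sum(
        amp[:, None] * np.cos(w[:, None] * t[None, :] + phi[:, None]), axis=0)

# example: S(w) = 1/(1 + w^2); process variance is 2*arctan(w_max) for this spectrum
t = np.linspace(0.0, 50.0, 2000)
x = srm_sample(lambda w: 1.0 / (1.0 + w**2), w_max=20.0, n_freq=512, t=t)
print(x.std())   # roughly sqrt(2 * arctan(20)) ~ 1.74
```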

  1. Considerations for the independent reaction times and step-by-step methods for radiation chemistry simulations

    Science.gov (United States)

    Plante, Ianik; Devroye, Luc

    2017-10-01

    Ionizing radiation interacts with the water molecules of tissues mostly by ionizations and excitations, which result in the formation of the radiation track structure and the creation of radiolytic species such as H•, •OH, H2, H2O2, and e−aq. After their creation, these species diffuse and may chemically react with neighboring species and with the molecules of the medium. Therefore radiation chemistry is of great importance in radiation biology. As the chemical species are not distributed homogeneously, the use of conventional models of homogeneous reactions cannot completely describe the reaction kinetics of the particles. Actually, many simulations of radiation chemistry are done using the Independent Reaction Time (IRT) method, which is a very fast technique to calculate radiochemical yields but which does not calculate the positions of the radiolytic species as a function of time. Step-by-step (SBS) methods, which are able to provide such information, have been used only sparsely because they are time-consuming in terms of calculation. Recent improvements in computer performance now allow the regular use of the SBS method in radiation chemistry. The SBS and IRT methods are both based on the Green's functions of the diffusion equation (GFDE). In this paper, several sampling algorithms of the GFDE and for the IRT method are presented. We show that the IRT and SBS methods are exactly equivalent for two-particle systems for diffusion and partially diffusion-controlled reactions between non-interacting particles. We also show that the results obtained with the SBS simulation method with periodic boundary conditions are in agreement with the predictions of classical reaction kinetics theory, which is an important step towards using this method for modelling of biochemical networks and metabolic pathways involved in oxidative stress. Finally, the first simulation results obtained with the code RITRACKS (Relativistic Ion Tracks) are presented.
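
    As an illustration of the IRT idea, the sketch below samples independent reaction times for a pair of non-interacting particles undergoing a fully diffusion-controlled reaction, using the standard Green's-function result W(t) = (R/r0) erfc[(r0 - R)/sqrt(4Dt)]; partially diffusion-controlled reactions and the RITRACKS implementation are not reproduced, and the parameter values are toy numbers.

```python
import numpy as np
from scipy.special import erfcinv

rng = np.random.default_rng(2)

def sample_irt(r0, R, D, size=1):
    """Independent reaction times for a pair at initial separation r0.

    r0 : initial separation, R : reaction radius, D : mutual diffusion coefficient.
    Returns np.inf for pairs that escape without ever reacting.
    """
    u = rng.uniform(size=size)
    t = np.full(size, np.inf)
    reacts = u < R / r0                       # ultimate reaction probability is R/r0
    # invert W(t) = (R/r0) * erfc((r0 - R)/sqrt(4 D t)) = u for the reacting pairs
    x = erfcinv(u[reacts] * r0 / R)
    t[reacts] = ((r0 - R) / x) ** 2 / (4.0 * D)
    return t

times = sample_irt(r0=1.0e-9, R=0.5e-9, D=1.0e-9, size=100_000)
print(np.isfinite(times).mean())   # fraction that react, close to R/r0 = 0.5
```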

  2. Multilevel panel method for wind turbine rotor flow simulations

    NARCIS (Netherlands)

    van Garrel, Arne

    2016-01-01

    Simulation methods of wind turbine aerodynamics currently in use mainly fall into two categories: the first is the group of traditional low-fidelity engineering models and the second is the group of computationally expensive CFD methods based on the Navier-Stokes equations. For an engineering

  3. Simulation methods of nuclear electromagnetic pulse effects in integrated circuits

    International Nuclear Information System (INIS)

    Cheng Jili; Liu Yuan; En Yunfei; Fang Wenxiao; Wei Aixiang; Yang Yuanzhen

    2013-01-01

    In this paper, methods to compute the response of a transmission line (TL) illuminated by an electromagnetic pulse (EMP), including the finite-difference time-domain (FDTD) method and the transmission line matrix (TLM) method, are introduced first; then the feasibility of electromagnetic topology (EMT) for simulating nuclear electromagnetic pulse (NEMP) effects in ICs is discussed; finally, combined with the methods for computing the TL response, a new method to simulate a transmission line in an IC illuminated by NEMP is put forward. (authors)

  4. High accuracy mantle convection simulation through modern numerical methods

    KAUST Repository

    Kronbichler, Martin

    2012-08-21

    Numerical simulation of the processes in the Earth's mantle is a key piece in understanding its dynamics, composition, history and interaction with the lithosphere and the Earth's core. However, doing so presents many practical difficulties related to the numerical methods that can accurately represent these processes at relevant scales. This paper presents an overview of the state of the art in algorithms for high-Rayleigh number flows such as those in the Earth's mantle, and discusses their implementation in the Open Source code Aspect (Advanced Solver for Problems in Earth's ConvecTion). Specifically, we show how an interconnected set of methods for adaptive mesh refinement (AMR), higher order spatial and temporal discretizations, advection stabilization and efficient linear solvers can provide high accuracy at a numerical cost unachievable with traditional methods, and how these methods can be designed in a way so that they scale to large numbers of processors on compute clusters. Aspect relies on the numerical software packages deal.II and Trilinos, enabling us to focus on high level code and keeping our implementation compact. We present results from validation tests using widely used benchmarks for our code, as well as scaling results from parallel runs. © 2012 The Authors Geophysical Journal International © 2012 RAS.

  5. Conventional and digital radiographic methods in the detection of simulated external root resorptions: A comparative study

    Directory of Open Access Journals (Sweden)

    C J Sanjay

    2009-01-01

    Full Text Available Objective : To evaluate and compare the efficacy of conventional and digital radiographic methods in the detection of simulated external root resorption cavities, and to evaluate whether detectability was influenced by resorption cavity size. Methods : Thirty-two selected teeth from human dentate mandibles were radiographed in orthoradial, mesioradial and distoradial aspects using conventional film (Insight Kodak F-speed; Eastman Kodak, Rochester, NY) and a digital sensor (Trophy RVG advanced imaging system) with 0.7 mm and 1.0 mm deep cavities prepared on their vestibular, mesial and distal surfaces at the cervical, middle and apical thirds. Three dental professionals, an endodontist, a radiologist and a general practitioner, evaluated the images twice with a one-week interval. Results : No statistically significant difference was seen in the first observation between the conventional and digital radiographic methods in the detection of simulated external root resorptions, or between small and medium cavities, but a statistical difference was noted in the second observation (P < 0.001) for both methods. Conclusion : Considering the methodology and the overall results, the conventional radiographic method (F-speed) performed slightly better than the digital radiographic method in the detection of simulated external root resorption, but better consistency was seen with the digital system. The size of the resorption cavity had no influence on the performance of either method, suggesting that an initial external root resorption lesion is not well appreciated with either method compared to an advanced lesion.

  6. Numerical method for IR background and clutter simulation

    Science.gov (United States)

    Quaranta, Carlo; Daniele, Gina; Balzarotti, Giorgio

    1997-06-01

    The paper describes a fast and accurate algorithm for IR background noise and clutter generation for application in scene simulations. The process is based on the hypothesis that the background can be modeled as a statistical process in which the signal amplitude obeys a Gaussian distribution and zones of the same scene satisfy a correlation function of exponential form. The algorithm provides an accurate mathematical approximation of the model and also an excellent fidelity with reality, which appears from a comparison with images from IR sensors. The proposed method shows advantages with respect to methods based on the filtering of white noise in the time or frequency domain, as it requires a limited number of computations and, furthermore, it is more accurate than quasi-random processes. The background generation starts from a reticule of a few points, and by means of growing rules the process is extended to the whole scene of required dimension and resolution. The statistical properties of the model are properly maintained in the simulation process. The paper gives specific attention to the mathematical aspects of the algorithm and provides a number of simulations and comparisons with real scenes.
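
    A minimal sketch of the statistical model described above, Gaussian amplitudes with an exponentially decaying spatial correlation, generated here by a plain Cholesky factorization on a small grid; the paper's growing-reticule algorithm itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

def exponential_gaussian_field(n, corr_length, sigma=1.0):
    """Sample an n x n Gaussian field with covariance sigma^2 * exp(-d / corr_length)."""
    yy, xx = np.mgrid[0:n, 0:n]
    coords = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    cov = sigma**2 * np.exp(-d / corr_length)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n * n))   # small jitter for numerical safety
    return (L @ rng.standard_normal(n * n)).reshape(n, n)

field = exponential_gaussian_field(n=32, corr_length=5.0)
print(field.mean(), field.std())   # roughly 0 and 1
```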

  7. Three-dimensional discrete element method simulation of core disking

    Science.gov (United States)

    Wu, Shunchuan; Wu, Haoyan; Kemeny, John

    2018-04-01

    The phenomenon of core disking is commonly seen in deep drilling of highly stressed regions in the Earth's crust. Given its close relationship with the in situ stress state, the presence and features of core disking can be used to interpret the stresses when traditional in situ stress measuring techniques are not available. The core disking process was simulated in this paper using the three-dimensional discrete element method software PFC3D (particle flow code). In particular, PFC3D is used to examine the evolution of fracture initiation, propagation and coalescence associated with core disking under various stress states. In this paper, four unresolved problems concerning core disking are investigated with a series of numerical simulations. These simulations also provide some verification of existing results by other researchers: (1) Core disking occurs when the maximum principal stress is about 6.5 times the tensile strength. (2) For most stress situations, core disking occurs from the outer surface, except for the thrust faulting stress regime, where the fractures were found to initiate from the inner part. (3) The anisotropy of the two horizontal principal stresses has an effect on the core disking morphology. (4) The thickness of core disk has a positive relationship with radial stress and a negative relationship with axial stresses.

  8. Simulation of galvanic corrosion using boundary element method

    International Nuclear Information System (INIS)

    Zaifol Samsu; Muhamad Daud; Siti Radiah Mohd Kamaruddin; Nur Ubaidah Saidin; Abdul Aziz Mohamed; Mohd Saari Ripin; Rusni Rejab; Mohd Shariff Sattar

    2011-01-01

    The boundary element method (BEM) is a numerical technique used for modeling infinite domains, as is the case for galvanic corrosion analysis. The use of the boundary element analysis system (BEASY) has allowed cathodic protection (CP) interference to be assessed in terms of the normal current density, which is directly proportional to the corrosion rate. This paper presents the analysis of the galvanic corrosion between aluminium and carbon steel in natural sea water. The experimental results were validated against computer simulations with the BEASY program. Finally, it can be concluded that the BEASY software is a very helpful tool for future planning before installing any structure, where it gives the possible CP interference on any nearby unprotected metallic structure. (Author)

  9. INTEGRATING DATA ANALYTICS AND SIMULATION METHODS TO SUPPORT MANUFACTURING DECISION MAKING

    Science.gov (United States)

    Kibira, Deogratias; Hatim, Qais; Kumara, Soundar; Shao, Guodong

    2017-01-01

    Modern manufacturing systems are installed with smart devices such as sensors that monitor system performance and collect data to manage uncertainties in their operations. However, multiple parameters and variables affect system performance, making it impossible for a human to make informed decisions without systematic methodologies and tools. Further, the large volume and variety of streaming data collected is beyond simulation analysis alone. Simulation models are run with well-prepared data. Novel approaches, combining different methods, are needed to use this data for making guided decisions. This paper proposes a methodology whereby parameters that most affect system performance are extracted from the data using data analytics methods. These parameters are used to develop scenarios for simulation inputs; system optimizations are performed on simulation data outputs. A case study of a machine shop demonstrates the proposed methodology. This paper also reviews candidate standards for data collection, simulation, and systems interfaces. PMID:28690363

  10. A fast mollified impulse method for biomolecular atomistic simulations

    Energy Technology Data Exchange (ETDEWEB)

    Fath, L., E-mail: lukas.fath@kit.edu [Institute for App. and Num. Mathematics, Karlsruhe Institute of Technology (Germany); Hochbruck, M., E-mail: marlis.hochbruck@kit.edu [Institute for App. and Num. Mathematics, Karlsruhe Institute of Technology (Germany); Singh, C.V., E-mail: chandraveer.singh@utoronto.ca [Department of Materials Science & Engineering, University of Toronto (Canada)

    2017-03-15

    Classical integration methods for molecular dynamics are inherently limited due to resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious in implementation since they require either analytical Hessians or they need to solve nonlinear systems from constraints. In this work we follow a different approach based on corotation for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and ease of implementation in standard softwares without Hessians or solving constraint systems. By simulating multiple realistic examples such as peptide, protein, ice equilibrium and ice–ice friction, the new filter is shown to speed up the computations of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift less than 1% on a 50 ps simulation.

  11. Research of Monte Carlo method used in simulation of different maintenance processes

    International Nuclear Information System (INIS)

    Zhao Siqiao; Liu Jingquan

    2011-01-01

    The paper introduces two kinds of Monte Carlo methods used in equipment life process simulation under the minimal maintenance condition: the method of generating lifetime intervals and the method of time scale conversion. The paper also analyzes the characteristics and the scope of application of the two methods. By using the concept of a service age reduction factor, the model of the equipment's life process under the incomplete maintenance condition is established, and a life process simulation method applicable to this situation is developed. (authors)
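
    A sketch of the incomplete-maintenance idea built around a service age reduction factor, assuming Weibull lifetimes and a hypothetical virtual-age update; the two interval-generation and time-scale-conversion methods compared in the paper are not reproduced in detail.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_failures(horizon, shape, scale, age_reduction, n_runs=20_000):
    """Mean number of failures over a horizon with incomplete repairs.

    After each repair the virtual age is multiplied by (1 - age_reduction):
    age_reduction = 1 means 'as good as new', 0 means the age is kept as is.
    Lifetimes are Weibull(shape, scale); the next failure time is sampled
    conditionally on having survived to the current virtual age.
    """
    failures = np.zeros(n_runs)
    for r in range(n_runs):
        t, v = 0.0, 0.0              # calendar time and virtual age
        while True:
            u = rng.uniform()
            # inverse-CDF sampling of residual life given survival to virtual age v
            t_next = scale * (-np.log(u) + (v / scale) ** shape) ** (1.0 / shape) - v
            t += t_next
            if t > horizon:
                break
            failures[r] += 1
            v = (v + t_next) * (1.0 - age_reduction)   # service age reduction
    return failures.mean()

print(simulate_failures(horizon=10.0, shape=2.0, scale=5.0, age_reduction=0.5))
```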

  12. Detailed Simulation of Complex Hydraulic Problems with Macroscopic and Mesoscopic Mathematical Methods

    Directory of Open Access Journals (Sweden)

    Chiara Biscarini

    2013-01-01

    Full Text Available The numerical simulation of fast-moving fronts originating from dam or levee breaches is a challenging task for small scale engineering projects. In this work, the use of the fully three-dimensional Navier-Stokes (NS) equations and the lattice Boltzmann method (LBM) is proposed for testing the validity of, respectively, macroscopic and mesoscopic mathematical models. Macroscopic simulations are performed employing an open-source computational fluid dynamics (CFD) code that solves the NS equations combined with the volume of fluid (VOF) multiphase method to represent free-surface flows. The mesoscopic model is a front-tracking experimental variant of the LBM. In the proposed LBM the liquid-gas interface is represented as a surface with zero thickness that handles the passage of the density field from the light to the dense phase and vice versa. A single set of LBM equations represents the liquid phase, while the free surface is characterized by an additional variable, the liquid volume fraction. Case studies show the advantages and disadvantages of the proposed LBM and NS approaches, with specific regard to computational efficiency and accuracy in dealing with the simulation of flows through complex geometries. In particular, the validation of the model application is developed by simulating the flow propagating through a synthetic urban setting and comparing the results with analytical and experimental laboratory measurements.
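
    To make the mesoscopic side concrete, here is a bare-bones single-phase D2Q9 BGK collision-and-streaming step on a periodic grid; the front-tracking free-surface variant used in the paper, with its liquid volume fraction, is considerably more involved and is not reproduced.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Maxwell-Boltzmann equilibrium truncated to second order in velocity."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

def lbm_step(f, tau):
    """One BGK collision + streaming step on a periodic grid."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau           # BGK relaxation
    for i in range(9):                                   # streaming along lattice links
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=1), c[i, 1], axis=0)
    return f

nx, ny, tau = 64, 64, 0.8
f = equilibrium(np.ones((ny, nx)), np.zeros((ny, nx)), np.zeros((ny, nx)))
f[:, ny // 2, nx // 2] *= 1.01                           # small density perturbation
for _ in range(100):
    f = lbm_step(f, tau)
print(f.sum())                                           # total mass is conserved
```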

  13. A hybrid transport-diffusion method for Monte Carlo radiative-transfer simulations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Urbatsch, Todd J.; Evans, Thomas M.; Buksas, Michael W.

    2007-01-01

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Monte Carlo particle-transport simulations in diffusive media. If standard Monte Carlo is used in such media, particle histories will consist of many small steps, resulting in a computationally expensive calculation. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many small Monte Carlo steps, thus increasing the efficiency of the simulation. In addition, given that DDMC is based on a diffusion equation, it should produce accurate solutions if used judiciously. In practice, DDMC is combined with standard Monte Carlo to form a hybrid transport-diffusion method that can accurately simulate problems with both diffusive and non-diffusive regions. In this paper, we extend previously developed DDMC techniques in several ways that improve the accuracy and utility of DDMC for nonlinear, time-dependent, radiative-transfer calculations. The use of DDMC in these types of problems is advantageous since, due to the underlying linearizations, optically thick regions appear to be diffusive. First, we employ a diffusion equation that is discretized in space but is continuous in time. Not only is this methodology theoretically more accurate than temporally discretized DDMC techniques, but it also has the benefit that a particle's time is always known. Thus, there is no ambiguity regarding what time to assign a particle that leaves an optically thick region (where DDMC is used) and begins transporting by standard Monte Carlo in an optically thin region. Also, we treat the interface between optically thick and optically thin regions with an improved method, based on the asymptotic diffusion-limit boundary condition, that can produce accurate results regardless of the angular distribution of the incident Monte Carlo particles. Finally, we develop a technique for estimating radiation momentum deposition during the

  14. Application of direct simulation Monte Carlo method for analysis of AVLIS evaporation process

    International Nuclear Information System (INIS)

    Nishimura, Akihiko

    1995-01-01

    The computation code of the direct simulation Monte Carlo (DSMC) method was developed in order to analyze the atomic vapor evaporation in atomic vapor laser isotope separation (AVLIS). The atomic excitation temperatures of gadolinium atom were calculated for the model with five low lying states. Calculation results were compared with the experiments obtained by laser absorption spectroscopy. Two types of DSMC simulations which were different in inelastic collision procedure were carried out. It was concluded that the energy transfer was forbidden unless the total energy of the colliding atoms exceeds a threshold value. (author)

  15. Numerical simulation of direct methanol fuel cells using lattice Boltzmann method

    Energy Technology Data Exchange (ETDEWEB)

    Delavar, Mojtaba Aghajani; Farhadi, Mousa; Sedighi, Kurosh [Faculty of Mechanical Engineering, Babol University of Technology, Babol, P.O. Box 484 (Iran)

    2010-09-15

    In this study Lattice Boltzmann Method (LBM) as an alternative of conventional computational fluid dynamics method is used to simulate Direct Methanol Fuel Cell (DMFC). A two dimensional lattice Boltzmann model with 9 velocities, D2Q9, is used to solve the problem. The computational domain includes all seven parts of DMFC: anode channel, catalyst and diffusion layers, membrane and cathode channel, catalyst and diffusion layers. The model has been used to predict the flow pattern and concentration fields of different species in both clear and porous channels to investigate cell performance. The results have been compared well with results in literature for flow in porous and clear channels and cell polarization curves of the DMFC at different flow speeds and feed methanol concentrations. (author)

  16. A piecewise-integration method for simulating the influence of external forcing on climate

    Institute of Scientific and Technical Information of China (English)

    Zhifu Zhang; Chongjian Qiu; Chenghai Wang

    2008-01-01

    Climate drift occurs in most general circulation models (GCMs) as a result of incomplete physical and numerical representation of the complex climate system, which may cause large uncertainty in sensitivity experiments evaluating climate response to changes in external forcing. To solve this problem, we propose a piecewise-integration method to reduce the systematic error in climate sensitivity studies. The observations are firstly assimilated into a numerical model by using the dynamic relaxation technique to relax to the current state of atmosphere, and then the assimilated fields are continuously used to reinitialize the simulation to reduce the error of climate simulation. When the numerical model is integrated with changed external forcing, the results can be split into two parts, background and perturbation fields, and the background is the state before the external forcing is changed. The piecewise-integration method is used to continuously reinitialize the model with the assimilated field, instead of the background. Therefore, the simulation error of the model with the external forcing can be reduced. In this way, the accuracy of climate sensitivity experiments is greatly improved. Tests with a simple low-order spectral model show that this approach can significantly reduce the uncertainty of climate sensitivity experiments.

  17. Utilisation of simulation in industrial design and resulting business opportunities (SISU) - MASIT18

    Energy Technology Data Exchange (ETDEWEB)

    Olin, M.; Leppaevuori, J.; Manninen, J. (VTT Technical Research Centre of Finland, Espoo (Finland)); Valli, A.; Hasari, H.; Koistinen, A.; Leppaenen, S. (Helsinki Polytechnic Stadia, City of Helsinki, Helsinki (Finland)); Lahti, S. (EVTEK University of Applied Sciences, Vantaa (Finland))

    2008-07-01

    In the SISU project, over 10 case studies are carried out in many different fields and applications. Results and experience of developing simulation applications have started to accumulate. One of the most important results this far is that there are many common features, both good and bad, between our test cases. Simulation is a fast, reliable, and often low risk method of studying different systems and processes. On the other hand, many applications need very expensive licences, plenty of parametric data and highly specialised knowledge in order to produce really valuable results. Industrial partners are acting like real customers in the case studies. We hope that this methodology will help us to answer our main question: how do we create a value chain from model development via model application for end users? The best thing to happen will be if partners learn to apply simulation productively. Other scientists and companies will follow, and new value chains will mushroom. In the case study of Mamec and EVTEK - Mixing model - the aim is to develop a fluid mechanical model for a mixing chamber. This study is similar to the preceding case of Watrec. In this study, the main problems have been in material properties area, because of non-Newtonian fluids and multiphase flows. Material property parameters of the non-Newtonian power law have been defined and flow field simulations have started. In the case study of Fortum and EVTEK - MDR - Measurement data reconciliation - the aim is to apply MDR in a power plant environment and study the possibility of developing a commercial additional tool for power plant simulation through the well-proven MDR technique based on linear filtering theory. The MDR method has been applied, for example, to energy and chemical processes. MDR is closely connected with system maintenance, simulation pre-processing and process diagnostics. Experimental work has proceeded from simple unit processes to large and complicated process systems. One

  18. Test of Shi et al. Method to Infer the Magnetic Reconnection Geometry from Spacecraft Data: MHD Simulation with Guide Field and Antiparallel Kinetic Simulation

    Science.gov (United States)

    Denton, R.; Sonnerup, B. U. O.; Swisdak, M.; Birn, J.; Drake, J. F.; Heese, M.

    2012-01-01

    When analyzing data from an array of spacecraft (such as Cluster or MMS) crossing a site of magnetic reconnection, it is desirable to be able to accurately determine the orientation of the reconnection site. If the reconnection is quasi-two dimensional, there are three key directions: the direction of maximum inhomogeneity (the direction across the reconnection site), the direction of the reconnecting component of the magnetic field, and the direction of rough invariance (the "out of plane" direction). Using simulated spacecraft observations of magnetic reconnection in the geomagnetic tail, we extend our previous tests of the direction-finding method developed by Shi et al. (2005) and the method to determine the structure velocity relative to the spacecraft Vstr. These methods require data from four proximate spacecraft. We add artificial noise and calibration errors to the simulation fields, and then use the perturbed gradient of the magnetic field B and perturbed time derivative dB/dt, as described by Denton et al. (2010). Three new simulations are examined: a weakly three-dimensional, i.e., quasi-two-dimensional, MHD simulation without a guide field, a quasi-two-dimensional MHD simulation with a guide field, and a two-dimensional full dynamics kinetic simulation with inherent noise so that the apparent minimum gradient was not exactly zero, even without added artificial errors. We also examined variations of the spacecraft trajectory for the kinetic simulation. The accuracy of the directions found varied depending on the simulation and spacecraft trajectory, but all the directions could be found to within about 10° for all cases. Various aspects of the method were examined, including how to choose averaging intervals and the best intervals for determining the directions and velocity. For the kinetic simulation, we also investigated in detail how the errors in the inferred gradient directions from the unmodified Shi et al. method (using the unperturbed gradient

  19. Applying Simulation Method in Formulation of Gluten-Free Cookies

    Directory of Open Access Journals (Sweden)

    Nikitina Marina

    2017-01-01

    Full Text Available At present, a priority direction in the development of new food products is the development of technologies for special-purpose products. Such products include gluten-free confectionery products, intended for people with celiac disease. Gluten-free products are in demand among consumers, and there is a need to expand the assortment and improve quality indicators. This article presents the results of studies on the development of pastry products based on amaranth flour that do not contain gluten. The study is based on a method of simulating recipes of gluten-free confectionery with a functional orientation in order to optimize their chemical composition. The resulting products will make it possible to diversify and supplement with necessary nutrients the diet of people with gluten intolerance, as well as of those who follow a gluten-free diet.

  20. Comparison between the performance of some KEK-klystrons and simulation results

    Energy Technology Data Exchange (ETDEWEB)

    Fukuda, Shigeki [National Lab. for High Energy Physics, Tsukuba, Ibaraki (Japan)

    1997-04-01

    Recent developments of various klystron simulation codes have enabled us to realistically design klystrons. This paper presents various simulation results using the FCI code and the performance of tubes manufactured based on this code. Upgrading a 30-MW S-band klystron and developing a 50-MW S-band klystron for the KEKB project are successful examples based on FCI-code predictions. Mass-production of these tubes has already started. On the other hand, a discrepancy has been found between the FCI simulation results and the performance of real tubes. In some cases, the simulation predicts high efficiency, while the manufactured tubes show the usual value, or a lower value, of the efficiency. One possible cause may be a data mismatch between the electron-gun simulation and the input data set of the FCI code for the gun region. This kind of discrepancy has been observed in 30-MW S-band pulsed tubes, sub-booster pulsed tubes and L-band high-duty pulsed klystrons. Sometimes, JPNDSK (a one-dimensional disk-model code) gives similar results. Some examples using the FCI code are given in this article. An Arsenal-MSU code could be applied to the 50-MW klystron under a collaboration with Moscow State University; good agreement has been found between the prediction of the code and the performance. (author)

  1. SEMICONDUCTOR INTEGRATED CIRCUITS: A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    Science.gov (United States)

    Jizhi, Liu; Xingbi, Chen

    2009-12-01

    A new quasi-three-dimensional (quasi-3D) numeric simulation method for a high-voltage level-shifting circuit structure is proposed. The performances of the 3D structure are analyzed by combining some 2D device structures; the 2D devices are in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D simulation method can give results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases with advantages such as saving computing time, making no demands on the high-end computer terminals, and being easy to operate.

  2. First results from simulations of supersymmetric lattices

    Science.gov (United States)

    Catterall, Simon

    2009-01-01

    We conduct the first numerical simulations of lattice theories with exact supersymmetry arising from the orbifold constructions of Cohen et al. (2003) and Kaplan et al. (2005). We consider the Q = 4 theory in D = 0,2 dimensions and the Q = 16 theory in D = 0,2,4 dimensions. We show that the U(N) theories do not possess vacua which are stable non-perturbatively, but that this problem can be circumvented after truncation to SU(N). We measure the distribution of scalar field eigenvalues, the spectrum of the fermion operator and the phase of the Pfaffian arising after integration over the fermions. We monitor supersymmetry breaking effects by measuring a simple Ward identity. Our results indicate that simulations of N = 4 super Yang-Mills may be achievable in the near future.

  3. Study on the Growth of Holes in Cold Spraying via Numerical Simulation and Experimental Methods

    Directory of Open Access Journals (Sweden)

    Guosheng Huang

    2016-12-01

    Full Text Available Cold spraying is a promising method for rapid prototyping due to its high deposition efficiency and high-quality bonding characteristic. However, many researchers have noticed that holes cannot be replenished and will grow larger and larger once formed, which will significantly decrease the deposition efficiency. No work has yet been done on this problem. In this paper, a computational simulation method was used to investigate the origins of these holes and the reasons for their growth. A thick copper coating was deposited around the pre-drilled, micro-size holes using a cold spraying method on copper substrate to verify the simulation results. The results indicate that the deposition efficiency inside the hole decreases as the hole become deeper and narrower. The repellant force between the particles perpendicular to the impaction direction will lead to porosity if the particles are too close. There is a much lower flattening ratio for successive particles if they are too close at the same location, because the momentum energy contributes to the former particle’s deformation. There is a high probability that the above two phenomena, resulting from high powder-feeding rate, will form the original hole, which will grow larger and larger once it is formed. It is very important to control the powder feeding rate, but the upper limit is yet to be determined by further simulation and experimental investigation.

  4. A hybrid transport-diffusion Monte Carlo method for frequency-dependent radiative-transfer simulations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Thompson, Kelly G.; Urbatsch, Todd J.

    2012-01-01

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations in optically thick media. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many smaller Monte Carlo steps, thus improving the efficiency of the simulation. In this paper, we present an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold, as optical thickness is typically a decreasing function of frequency. Above this threshold we employ standard Monte Carlo, which results in a hybrid transport-diffusion scheme. With a set of frequency-dependent test problems, we confirm the accuracy and increased efficiency of our new DDMC method.

  5. Adaptive mesh refinement and adjoint methods in geophysics simulations

    Science.gov (United States)

    Burstedde, Carsten

    2013-04-01

    required by human intervention and analysis. Specifying an objective functional that quantifies the misfit between the simulation outcome and known constraints and then minimizing it through numerical optimization can serve as an automated technique for parameter identification. As suggested by the similarity in formulation, the numerical algorithm is closely related to the one used for goal-oriented error estimation. One common point is that the so-called adjoint equation needs to be solved numerically. We will outline the derivation and implementation of these methods and discuss some of their pros and cons, supported by numerical results.

  6. METRIC CHARACTERISTICS OF VARIOUS METHODS FOR NUMERICAL DENSITY ESTIMATION IN TRANSMISSION LIGHT MICROSCOPY – A COMPUTER SIMULATION

    Directory of Open Access Journals (Sweden)

    Miroslav Kališnik

    2011-05-01

    Full Text Available In the introduction the evolution of methods for the numerical density estimation of particles is briefly presented. Three pairs of methods have been analysed and compared: (1) classical methods for particle counting in thin and thick sections, (2) original and modified differential counting methods and (3) physical and optical disector methods. Metric characteristics such as accuracy, efficiency, robustness, and feasibility of the methods have been estimated and compared. Logical, geometrical and mathematical analysis as well as computer simulations have been applied. In the computer simulations a model of randomly distributed equal spheres with maximal contrast against the surroundings was used. According to our computer simulation all methods give accurate results provided that the sample is representative and sufficiently large. However, there are differences in their efficiency, robustness and feasibility. Efficiency and robustness increase with increasing slice thickness in all three pairs of methods. Robustness is superior in both differential and both disector methods compared to both classical methods. Feasibility can be judged according to the additional equipment as well as the histotechnical and counting procedures necessary for performing the individual counting methods. However, it is evident that not all practical problems can be efficiently solved with models.

  7. A Multi-Stage Method for Connecting Participatory Sensing and Noise Simulations

    Directory of Open Access Journals (Sweden)

    Mingyuan Hu

    2015-01-01

    Full Text Available Most simulation-based noise maps are important for official noise assessment but lack local noise characteristics. The main reasons for this lack of information are that official noise simulations only provide information about expected noise levels, which is limited by the use of large-scale monitoring of noise sources, and are updated infrequently. With the emergence of smart cities and ubiquitous sensing, the possible improvements enabled by sensing technologies provide the possibility to resolve this problem. This study proposed an integrated methodology to propel participatory sensing from its current random and distributed sampling origins to professional noise simulation. The aims of this study were to effectively organize the participatory noise data, to dynamically refine the granularity of the noise features on road segments (e.g., different portions of a road segment), and then to provide a reasonable spatio-temporal data foundation to support noise simulations, which can be of help to researchers in understanding how participatory sensing can play a role in smart cities. This study first discusses the potential limitations of the current participatory sensing and simulation-based official noise maps. Next, we explain how participatory noise data can contribute to a simulation-based noise map by providing (1) spatial matching of the participatory noise data to the virtual partitions at a more microscopic level of road networks; (2) multi-temporal scale noise estimations at the spatial level of virtual partitions; and (3) dynamic aggregation of virtual partitions by comparing the noise values at the relevant temporal scale to form a dynamic segmentation of each road segment to support multiple spatio-temporal noise simulations. In this case study, we demonstrate how this method could play a significant role in a simulation-based noise map. Together, these results demonstrate the potential benefits of participatory noise data as dynamic

  8. Qualitative Simulation of Photon Transport in Free Space Based on Monte Carlo Method and Its Parallel Implementation

    Directory of Open Access Journals (Sweden)

    Xueli Chen

    2010-01-01

    Full Text Available During the past decade, Monte Carlo method has obtained wide applications in optical imaging to simulate photon transport process inside tissues. However, this method has not been effectively extended to the simulation of free-space photon transport at present. In this paper, a uniform framework for noncontact optical imaging is proposed based on Monte Carlo method, which consists of the simulation of photon transport both in tissues and in free space. Specifically, the simplification theory of lens system is utilized to model the camera lens equipped in the optical imaging system, and Monte Carlo method is employed to describe the energy transformation from the tissue surface to the CCD camera. Also, the focusing effect of camera lens is considered to establish the relationship of corresponding points between tissue surface and CCD camera. Furthermore, a parallel version of the framework is realized, making the simulation much more convenient and effective. The feasibility of the uniform framework and the effectiveness of the parallel version are demonstrated with a cylindrical phantom based on real experimental results.

  9. IMPROVEMENT OF RECOGNITION QUALITY IN DEEP LEARNING NETWORKS BY SIMULATED ANNEALING METHOD

    Directory of Open Access Journals (Sweden)

    A. S. Potapov

    2014-09-01

    Full Text Available The subject of this research is deep learning methods, in which feature transforms are constructed automatically for pattern recognition tasks. Multilayer autoencoders are taken as the considered type of deep learning network. The autoencoders perform a nonlinear feature transform with logistic regression as an upper classification layer. In order to verify the hypothesis that the recognition rate of deep learning networks, which are traditionally trained layer-by-layer by gradient descent, can be improved by global optimization of their parameters, a new method has been designed and implemented. The method applies simulated annealing for tuning the connection weights of the autoencoders while the regression layer is simultaneously trained by stochastic gradient descent. Experiments on the standard MNIST handwritten digit database have shown a decrease of the recognition error rate by a factor of 1.1 to 1.5 for the modified method compared to the traditional method, which is based on local optimization. Thus, no overfitting effect appears, and the possibility of improving deep learning networks by global optimization methods (in terms of increased recognition probability) is confirmed. The research results can be applied to improve the probability of pattern recognition in fields which require automatic construction of nonlinear feature transforms, in particular in image recognition. Keywords: pattern recognition, deep learning, autoencoder, logistic regression, simulated annealing.
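
    A generic simulated-annealing loop of the kind used here to perturb connection weights, shown on a toy multimodal minimization problem; the autoencoder, the MNIST setup and the simultaneous stochastic gradient descent on the regression layer are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulated_annealing(loss, x0, t0=1.0, cooling=0.995, step=0.1, n_iter=5000):
    """Minimize `loss` by accepting worse moves with probability exp(-delta / T)."""
    x = np.array(x0, dtype=float)
    best, fx = x.copy(), loss(x)
    fbest, T = fx, t0
    for _ in range(n_iter):
        cand = x + rng.normal(0.0, step, size=x.shape)   # random perturbation of the parameters
        fcand = loss(cand)
        if fcand < fx or rng.uniform() < np.exp(-(fcand - fx) / T):
            x, fx = cand, fcand                          # accept (always if better)
            if fx < fbest:
                best, fbest = x.copy(), fx
        T *= cooling                                     # geometric cooling schedule
    return best, fbest

# toy objective: Rastrigin-like landscape with many local minima
loss = lambda x: np.sum(x**2 - 10.0 * np.cos(2 * np.pi * x) + 10.0)
print(simulated_annealing(loss, x0=np.full(4, 3.0)))
```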

  10. Method and Tool for Design Process Navigation and Automatic Generation of Simulation Models for Manufacturing Systems

    Science.gov (United States)

    Nakano, Masaru; Kubota, Fumiko; Inamori, Yutaka; Mitsuyuki, Keiji

    Manufacturing system designers should concentrate on designing and planning manufacturing systems instead of spending their efforts on creating the simulation models to verify the design. This paper proposes a method and its tool to navigate the designers through the engineering process and generate the simulation model automatically from the design results. The design agent also supports collaborative design projects among different companies or divisions with distributed engineering and distributed simulation techniques. The idea was implemented and applied to a factory planning process.

  11. Viscoelastic Earthquake Cycle Simulation with Memory Variable Method

    Science.gov (United States)

    Hirahara, K.; Ohtani, M.

    2017-12-01

    There have so far been no EQ (earthquake) cycle simulations, based on RSF (rate and state friction) laws, in viscoelastic media, except for Kato (2002), who simulated cycles on a 2-D vertical strike-slip fault and showed nearly the same cycles as those in elastic cases. The viscoelasticity could, however, have a larger effect on large dip-slip EQ cycles. In a boundary element approach, stress is calculated using a hereditary integral of the stress relaxation function and the slip deficit rate, where we need the past slip rates, leading to huge computational costs. This is one reason why there have been almost no simulations in viscoelastic media. We have investigated the memory variable method utilized in the numerical computation of wave propagation in dissipative media (e.g., Moczo and Kristek, 2005). In this method, introducing memory variables satisfying 1st order differential equations, we need no hereditary integrals in the stress calculation and the computational costs are of the same order as those in elastic cases. Further, Hirahara et al. (2012) developed the iterative memory variable method, referring to Taylor et al. (1970), in EQ cycle simulations in linear viscoelastic media. In this presentation, first, we introduce our method in EQ cycle simulations and show the effect of linear viscoelasticity on stick-slip cycles in a 1-DOF block-SLS (standard linear solid) model, where the elastic spring of the traditional block-spring model is replaced by an SLS element and we pull, at a constant rate, the block obeying the RSF law. In this model, the memory variable stands for the displacement of the dash-pot in the SLS element. The use of smaller viscosity reduces the recurrence time to a minimum value. The smaller viscosity means the smaller relaxation time, which makes the stress recovery quicker, leading to the smaller recurrence time. Second, we show EQ cycles on a 2-D dip-slip fault with a dip angle of 20 degrees in an elastic layer with thickness of 40 km overriding a Maxwell viscoelastic half
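
    A sketch of the memory-variable idea for the standard linear solid element mentioned above: the dash-pot displacement obeys a first-order ODE, so the stress can be updated step by step without a hereditary integral. The parameter values and the relaxation test are illustrative and do not reproduce the paper's block-SLS-RSF model.

```python
import numpy as np

def sls_stress(u_of_t, dt, k1, k2, eta):
    """Stress history of a standard linear solid (spring k1 in parallel with a
    Maxwell branch k2-eta), using the memory variable zeta = dash-pot displacement.

    The hereditary integral is replaced by the ODE  eta * dzeta/dt = k2 * (u - zeta),
    integrated here with an exponential update that is exact for piecewise-constant u.
    """
    tau = eta / k2
    zeta = 0.0
    sigma = np.empty_like(u_of_t)
    for i, u in enumerate(u_of_t):
        zeta = u + (zeta - u) * np.exp(-dt / tau)   # memory variable update over one step
        sigma[i] = k1 * u + k2 * (u - zeta)         # total stress of the two branches
    return sigma

# relaxation test: a held step displacement; stress decays from ~(k1+k2)*u0 toward k1*u0
t = np.arange(0.0, 10.0, 0.01)
sigma = sls_stress(np.ones_like(t), dt=0.01, k1=1.0, k2=2.0, eta=1.0)
print(sigma[0], sigma[-1])   # ~3.0 decaying toward ~1.0
```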

  12. Comparison of estimation and simulation methods for modeling block 1 of anomaly no.3 in Narigan Uranium mineral deposit

    International Nuclear Information System (INIS)

    Jamali Esfahlan, D.; Madani, H.

    2011-01-01

    Geostatistical methods are applied for modeling mineral deposits at the final stage of detailed exploration. By applying the results of these models, the technical and economic feasibility studies are conducted for the deposits. The geostatistical modeling methods usually consist of estimation and simulation methods. Estimation techniques, such as kriging, construct a spatial relation (geological continuity model) between data by providing the best unique guesses for unknown features. However, when applying this technique to a grid of drill-holes over a deposit, an obvious discrepancy exists between the real geological features and the kriging estimation map. Because of the limited number of sampled data used for kriging, the map cannot appear the same as the real features. Also, the spatial continuity shown by the kriging maps is smoother than that of the real unknown features. On the other hand, the objective of simulation is to provide functions or sets of variable values that are compatible with the existing information. This means that the simulated values have an average and a variance similar to the raw data and may even be the same as the measurements. We studied Anomaly No.3 of the Narigan uranium mineral deposit, located in the central Iran region, and applied the kriging estimation and sequential Gaussian simulation methods; by comparing the results we concluded that the kriging estimation method is more reliable for long-term planning of a mine. Because they reconstruct random structures, the results of the simulation methods indicate that they could also be applied for short-term planning in mine exploitation.

  13. Fusing Simulation Results From Multifidelity Aero-servo-elastic Simulators - Application To Extreme Loads On Wind Turbine

    DEFF Research Database (Denmark)

    Abdallah, Imad; Sudret, Bruno; Lataniotis, Christos

    2015-01-01

    Fusing predictions from multiple simulators in the early stages of the conceptual design of a wind turbine results in reduction in model uncertainty and risk mitigation. Aero-servo-elastic is a term that refers to the coupling of wind inflow, aerodynamics, structural dynamics and controls. Fusing...... the response data from multiple aero-servo-elastic simulators could provide better predictive ability than using any single simulator. The co-Kriging approach to fuse information from multifidelity aero-servo-elastic simulators is presented. We illustrate the co-Kriging approach to fuse the extreme flapwise...... bending moment at the blade root of a large wind turbine as a function of wind speed, turbulence and shear exponent in the presence of model uncertainty and non-stationary noise in the output. The extreme responses are obtained by two widely accepted numerical aero-servo-elastic simulators, FAST...

  14. To improve training methods in an engine room simulator-based training

    OpenAIRE

    Lin, Chingshin

    2016-01-01

    Simulator-based training is widely used in both industry and school education to reduce accidents. This study aims to suggest improved training methods to increase the effectiveness of engine room simulator training. The effectiveness of the training in the engine room is assessed by performance indicators and self-evaluation by the participants. In the first phase of observation, the aim is to find out the possible shortcomings of current training methods based on train...

  15. Theoretical simulation of the dual-heat-flux method in deep body temperature measurements.

    Science.gov (United States)

    Huang, Ming; Chen, Wenxi

    2010-01-01

    Deep body temperature reveals individual physiological states, and is important in patient monitoring and chronobiological studies. An innovative dual-heat-flux method has been shown experimentally to be competitive with the conventional zero-heat-flow method in its performance, in terms of measurement accuracy and step response to changes in the deep temperature. We have utilized a finite element method to model and simulate the dynamic process of a dual-heat-flux probe in deep body temperature measurements to validate the fundamental principles of the dual-heat-flux method theoretically, and to acquire a detailed quantitative description of the thermal profile of the dual-heat-flux probe. The simulation results show that the estimated deep body temperature is influenced by the ambient temperature (linearly, at a maximum rate of 0.03 °C/°C) and the blood perfusion rate. The corresponding depth of the estimated temperature in the skin and subcutaneous tissue layer is consistent when using the dual-heat-flux probe. Insights in improving the performance of the dual-heat-flux method were discussed for further studies of dual-heat-flux probes, taking into account structural and geometric considerations.

  16. Cathodic protection simulation of above ground storage tank bottom: Experimental and numerical results

    Energy Technology Data Exchange (ETDEWEB)

    Schultz, Marcelo [Inspection Department, Rio de Janeiro Refinery - REDUC, Petrobras, Rio de Janeiro (Brazil); Brasil, Simone L.D.C. [Chemistry School, Federal University of Rio de Janeiro, UFRJ, Rio de Janeiro (Brazil); Baptista, Walmar [Corrosion Department, Research Centre - CENPES, Petrobras (Brazil); Miranda, Luiz de [Materials and Metallurgical Engineering Program, COPPE, UFRJ, Rio de Janeiro (Brazil); Brito, Rosane F. [Corrosion Department, Research Centre, CENPES, Petrobras, Rio de Janeiro (Brazil)

    2004-07-01

    The deterioration history of above-ground storage tanks (AST) at Petrobras' refineries shows that most of the corrosion of AST bottoms occurs on the external side. This compromises the availability of tanks for storing crude oil and other final products. At this refinery, all ASTs are built over a concrete base supported by many piles that distribute the load homogeneously, which makes it very difficult to apply cathodic protection as an anti-corrosive method to each of these tanks. This work presents an alternative cathodic protection system for the external side of the tank bottom using a new metallic bottom placed at various distances from the original one. The space between the two bottoms was filled with one of two kinds of soil, sand or clay, both more conductive than the concrete. Using a prototype tank, the potential distribution over the new tank bottom was studied for different system parameters, such as soil resistivity and the number and position of the anodes located on the old bottom. These experimental results were compared to numerical simulations carried out with software based on the Boundary Element Method. The computer simulation validates this protection method and proves to be a very useful tool for defining the optimized cathodic protection system configuration. (authors)

  17. Application of the Hybrid Simulation Method for the Full-Scale Precast Reinforced Concrete Shear Wall Structure

    Directory of Open Access Journals (Sweden)

    Zaixian Chen

    2018-02-01

    Full Text Available The hybrid simulation (HS) testing method combines physical testing and numerical simulation, and provides a viable alternative for evaluating structural seismic performance. Most studies have focused on the accuracy, stability and reliability of the HS method in small-scale tests. It is a challenge to evaluate the seismic performance of a twelve-story pre-cast reinforced concrete shear-wall structure using this HS method, which takes the full-scale bottom three-story structural model as the physical substructure and an elastic non-linear model as the numerical substructure. This paper employs an equivalent force control (EFC) method with an implicit integration algorithm to deal with the numerical integration of the equation of motion (EOM) and the control of the loading device. Because of the arrangement of the test model, an elastic non-linear numerical model is used to simulate the numerical substructure, and a non-subdivision strategy for the displacement inflection point of the numerical substructure is used to simplify its simulation and thus reduce the measurement error. The parameters of the EFC method are calculated based on analytical and numerical studies and applied in the actual full-scale HS test. Finally, the accuracy and feasibility of the EFC-based HS method are verified experimentally through the substructure HS tests of the pre-cast reinforced concrete shear-wall structure model, and the test results of the descending stage can be conveniently obtained with the EFC-based HS method.

  18. A heuristic method for simulating open-data of arbitrary complexity that can be used to compare and evaluate machine learning methods.

    Science.gov (United States)

    Moore, Jason H; Shestov, Maksim; Schmitt, Peter; Olson, Randal S

    2018-01-01

    A central challenge in developing and evaluating artificial intelligence and machine learning methods for regression and classification is access to data that illuminates the strengths and weaknesses of different methods. Open data plays an important role in this process by making it easy for computational researchers to access real data for this purpose. Genomics has in some instances taken a leading role in the open data effort, starting with DNA microarrays. While real data from experimental and observational studies is necessary for developing computational methods, it is not sufficient, because it is not possible to know what the ground truth is in real data. It must be accompanied by simulated data in which the balance between signal and noise is known and can be directly evaluated. Unfortunately, there is a lack of methods and software for simulating data with the kind of complexity found in real biological and biomedical systems. We present here the Heuristic Identification of Biological Architectures for simulating Complex Hierarchical Interactions (HIBACHI) method and prototype software for simulating complex biological and biomedical data. Further, we introduce new methods for developing simulation models that generate data that specifically allows discrimination between different machine learning methods.

  19. Meshfree simulation of avalanches with the Finite Pointset Method (FPM)

    Science.gov (United States)

    Michel, Isabel; Kuhnert, Jörg; Kolymbas, Dimitrios

    2017-04-01

    Meshfree methods are the numerical method of choice in case of applications which are characterized by strong deformations in conjunction with free surfaces or phase boundaries. In the past the meshfree Finite Pointset Method (FPM) developed by Fraunhofer ITWM (Kaiserslautern, Germany) has been successfully applied to problems in computational fluid dynamics such as water crossing of cars, water turbines, and hydraulic valves. Most recently the simulation of granular flows, e.g. soil interaction with cars (rollover), has also been tackled. This advancement is the basis for the simulation of avalanches. Due to the generalized finite difference formulation in FPM, the implementation of different material models is quite simple. We will demonstrate 3D simulations of avalanches based on the Drucker-Prager yield criterion as well as the nonlinear barodesy model. The barodesy model (Division of Geotechnical and Tunnel Engineering, University of Innsbruck, Austria) describes the mechanical behavior of soil by an evolution equation for the stress tensor. The key feature of successful and realistic simulations of avalanches - apart from the numerical approximation of the occurring differential operators - is the choice of the boundary conditions (slip, no-slip, friction) between the different phases of the flow as well as the geometry. We will discuss their influences for simplified one- and two-phase flow examples. This research is funded by the German Research Foundation (DFG) and the FWF Austrian Science Fund.

  20. Simple Method to Estimate Mean Heart Dose From Hodgkin Lymphoma Radiation Therapy According to Simulation X-Rays

    Energy Technology Data Exchange (ETDEWEB)

    Nimwegen, Frederika A. van [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Cutter, David J. [Clinical Trial Service Unit, University of Oxford, Oxford (United Kingdom); Oxford Cancer Centre, Oxford University Hospitals NHS Trust, Oxford (United Kingdom); Schaapveld, Michael [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Rutten, Annemarieke [Department of Radiology, The Netherlands Cancer Institute, Amsterdam (Netherlands); Kooijman, Karen [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Krol, Augustinus D.G. [Department of Radiation Oncology, Leiden University Medical Center, Leiden (Netherlands); Janus, Cécile P.M. [Department of Radiation Oncology, Erasmus MC Cancer Center, Rotterdam (Netherlands); Darby, Sarah C. [Clinical Trial Service Unit, University of Oxford, Oxford (United Kingdom); Leeuwen, Flora E. van [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Aleman, Berthe M.P., E-mail: b.aleman@nki.nl [Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam (Netherlands)

    2015-05-01

    Purpose: To describe a new method to estimate the mean heart dose for Hodgkin lymphoma patients treated several decades ago, using delineation of the heart on radiation therapy simulation X-rays. Mean heart dose is an important predictor for late cardiovascular complications after Hodgkin lymphoma (HL) treatment. For patients treated before the era of computed tomography (CT)-based radiotherapy planning, retrospective estimation of radiation dose to the heart can be labor intensive. Methods and Materials: Patients for whom cardiac radiation doses had previously been estimated by reconstruction of individual treatments on representative CT data sets were selected at random from a case–control study of 5-year Hodgkin lymphoma survivors (n=289). For 42 patients, cardiac contours were outlined on each patient's simulation X-ray by 4 different raters, and the mean heart dose was estimated as the percentage of the cardiac contour within the radiation field multiplied by the prescribed mediastinal dose and divided by a correction factor obtained by comparison with individual CT-based dosimetry. Results: According to the simulation X-ray method, the medians of the mean heart doses obtained from the cardiac contours outlined by the 4 raters were 30 Gy, 30 Gy, 31 Gy, and 31 Gy, respectively, following prescribed mediastinal doses of 25-42 Gy. The absolute-agreement intraclass correlation coefficient was 0.93 (95% confidence interval 0.85-0.97), indicating excellent agreement. Mean heart dose was 30.4 Gy with the simulation X-ray method, versus 30.2 Gy with the representative CT-based dosimetry, and the between-method absolute-agreement intraclass correlation coefficient was 0.87 (95% confidence interval 0.80-0.95), indicating good agreement between the two methods. Conclusion: Estimating mean heart dose from radiation therapy simulation X-rays is reproducible and fast, takes individual anatomy into account, and yields results comparable to the labor

  1. Spectrally constrained NIR tomography for breast imaging: simulations and clinical results

    Science.gov (United States)

    Srinivasan, Subhadra; Pogue, Brian W.; Jiang, Shudong; Dehghani, Hamid; Paulsen, Keith D.

    2005-04-01

    A multi-spectral direct chromophore and scattering reconstruction for frequency domain NIR tomography has been implemented using constraints from the known molar spectra of the chromophores and a Mie theory approximation for scattering. This was tested in a tumor-simulating phantom containing an inclusion with higher hemoglobin, lower oxygenation and contrast in scatter. The recovered images were quantitatively accurate, showed substantial improvement over existing methods and, in addition, remained robust when tested with up to 5% noise in the amplitude and phase measurements. When the method was applied to a clinical subject with fibrocystic disease, the tumor was visible in hemoglobin and water, but no decrease in oxygenation was observed, making oxygen saturation a potential diagnostic indicator.

  2. Rainout assessment: the ACRA system and summaries of simulation results

    International Nuclear Information System (INIS)

    Watson, C.W.; Barr, S.; Allenson, R.E.

    1977-09-01

    A generalized, three-dimensional, integrated computer code system was developed to estimate collateral-damage threats from precipitation-scavenging (rainout) of airborne debris-clouds from defensive tactical nuclear engagements. This code system, called ACRA for Atmospheric-Contaminant Rainout Assessment, is based on Monte Carlo statistical simulation methods that allow realistic, unbiased simulations of probabilistic storm, wind, and precipitation fields that determine actual magnitudes and probabilities of rainout threats. Detailed models (or data bases) are included for synoptic-scale storm and wind fields; debris transport and dispersal (with the roles of complex flow fields, time-dependent diffusion, and multidimensional shear effects accounted for automatically); microscopic debris-precipitation interactions and scavenging probabilities; air-to-ground debris transport; local demographic features, for assessing actual threats to populations; and nonlinear effects accumulations from multishot scenarios. We simulated several hundred representative shots for West European scenarios and climates to study single-shot and multishot sensitivities of rainout effects to variations in pertinent physical variables

  3. Simulation methods to estimate design power: an overview for applied research.

    Science.gov (United States)

    Arnold, Benjamin F; Hogan, Daniel R; Colford, John M; Hubbard, Alan E

    2011-06-20

    Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research.
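
    A minimal sketch of the simulation approach for a design that a power formula also covers: simulate a two-arm trial many times under an assumed effect size, analyse each replicate with a t-test, and take the rejection rate as the power estimate. All parameter values below are illustrative, not taken from the article (whose own example code is provided in R and Stata).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulated_power(n_per_arm=60, effect=0.5, sd=1.0, alpha=0.05, n_sims=2000):
    """Estimate the power of a two-arm trial analysed with a two-sample t-test.

    All numbers here are illustrative assumptions, not values from the paper.
    """
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, sd, n_per_arm)
        treated = rng.normal(effect, sd, n_per_arm)
        _, p = stats.ttest_ind(treated, control)
        hits += p < alpha
    return hits / n_sims

# For this simple design the analytic power is roughly 0.78, so the simulated
# estimate should land nearby; more complex designs have no such formula.
print("simulated power:", simulated_power())
```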

  4. Viscosity of dilute suspensions of rodlike particles: A numerical simulation method

    Science.gov (United States)

    Yamamoto, Satoru; Matsuoka, Takaaki

    1994-02-01

    The recently developed simulation method, named the particle simulation method (PSM), is extended to predict the viscosity of dilute suspensions of rodlike particles. In this method a rodlike particle is modeled by bonded spheres. Each bond has three types of springs, for stretching, bending, and twisting deformation. The rod model can therefore deform by changing the bond distance, bond angle, and torsion angle between paired spheres, and it can represent a range of rigidities by modifying the bond parameters related to Young's modulus and the shear modulus of the real particle. The time evolution of each constituent sphere of the rod model is followed by a molecular-dynamics-type approach. The intrinsic viscosity of a suspension of rodlike particles is derived by calculating the increased energy dissipation for each sphere of the rod model in a viscous fluid. With and without deformation of the particle, the motion of the rodlike particle was numerically simulated in a three-dimensional simple shear flow at a low particle Reynolds number and without Brownian motion of the particles. The intrinsic viscosity of the suspension of rodlike particles was investigated with respect to the orientation angle, rotation orbit, deformation, and aspect ratio of the particle. For the rigid rodlike particle, the simulated rotation orbit compared extremely well with the theoretical one obtained for a rigid ellipsoidal particle from Jeffery's equation. The simulated dependence of the intrinsic viscosity on the various factors was also identical with that of theories for suspensions of rigid rodlike particles. For the flexible rodlike particle, the rotation orbit could be obtained by the particle simulation method, and it was also shown that the intrinsic viscosity decreased as recoverable flow-induced deformation of the rodlike particle occurred.

  5. A coupling method for a cardiovascular simulation model which includes the Kalman filter.

    Science.gov (United States)

    Hasegawa, Yuki; Shimayoshi, Takao; Amano, Akira; Matsuda, Tetsuya

    2012-01-01

    Multi-scale models of the cardiovascular system provide new insight that was unavailable with in vivo and in vitro experiments. For the cardiovascular system, multi-scale simulations provide a valuable perspective in analyzing the interaction of three phenomena occurring at different spatial scales: circulatory hemodynamics, ventricular structural dynamics, and myocardial excitation-contraction. In order to simulate these interactions, multi-scale cardiovascular simulation systems couple models that simulate the different phenomena. However, coupling methods require a significant amount of calculation, since a system of non-linear equations must be solved at each timestep. Therefore, we propose a coupling method which decreases the amount of calculation by using the Kalman filter. In our method, the Kalman filter calculates approximations of the solution to the system of non-linear equations at each timestep, and these approximations are then used as initial values for solving the system. The proposed method decreases the number of iterations required by 94.0% compared to the conventional strong coupling method. When compared with a smoothing spline predictor, the proposed method required 49.4% fewer iterations.
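
    The idea of using a cheap prediction as the Newton starting guess can be sketched on a scalar stand-in problem. The snippet below is not the authors' cardiovascular model or their Kalman filter; it applies implicit Euler to a toy ODE and compares the total Newton iteration count when the initial guess is the previous state versus a simple linear extrapolation standing in for the filter's prediction.

```python
import numpy as np

def newton(g, dg, x0, tol=1e-12, max_iter=50):
    """Scalar Newton iteration; returns the root and the iteration count."""
    x = x0
    for k in range(1, max_iter + 1):
        step = g(x) / dg(x)
        x -= step
        if abs(step) < tol:
            return x, k
    return x, max_iter

def run(dt=0.05, t_end=5.0, predictor=False):
    """Implicit Euler for y' = -y**3 + sin(t), a stand-in for the coupled system."""
    t, y = 0.0, 1.0
    y_prev = y
    total_iters = 0
    while t < t_end:
        t_new = t + dt
        g  = lambda z: z - y - dt * (-z**3 + np.sin(t_new))
        dg = lambda z: 1.0 + dt * 3.0 * z**2
        # Initial guess: either the previous state, or a cheap linear
        # extrapolation standing in for the Kalman-filter prediction.
        x0 = (2.0 * y - y_prev) if predictor else y
        y_new, iters = newton(g, dg, x0)
        total_iters += iters
        y_prev, y, t = y, y_new, t_new
    return total_iters

print("Newton iterations, previous-state guess :", run(predictor=False))
print("Newton iterations, predicted initial val:", run(predictor=True))
```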

  6. Determinant method and quantum simulations of many-body effects in a single impurity Anderson model

    International Nuclear Information System (INIS)

    Gubernatis, J.E.; Olson, T.; Scalapino, D.J.; Sugar, R.L.

    1985-01-01

    A short description is presented of a quantum Monte Carlo technique, often referred to as the determinant method, that has proved useful for simulating many-body effects in systems of interacting fermions at finite temperatures. Preliminary results using this technique on a single impurity Anderson model are reported. Examples of such many-body effects as local moment formation, Kondo behavior, and mixed valence phenomena found in the simulations are shown. 10 refs., 3 figs

  7. Efficient simulation and likelihood methods for non-neutral multi-allele models.

    Science.gov (United States)

    Joyce, Paul; Genz, Alan; Buzbas, Erkan Ozge

    2012-06-01

    Throughout the 1980s, Simon Tavaré made numerous significant contributions to population genetics theory. As genetic data, in particular DNA sequence, became more readily available, a need to connect population-genetic models to data became the central issue. The seminal work of Griffiths and Tavaré (1994a, 1994b, 1994c) was among the first to develop a likelihood method to estimate the population-genetic parameters using full DNA sequences. Now, we are in the genomics era where methods need to scale up to handle massive data sets, and Tavaré has led the way to new approaches. However, performing statistical inference under non-neutral models has proved elusive. In tribute to Simon Tavaré, we present an article in the spirit of his work that provides a computationally tractable method for simulating and analyzing data under a class of non-neutral population-genetic models. Computational methods for approximating likelihood functions and generating samples under a class of allele-frequency based non-neutral parent-independent mutation models were proposed by Donnelly, Nordborg, and Joyce (DNJ) (Donnelly et al., 2001). DNJ (2001) simulated samples of allele frequencies from non-neutral models using neutral models as the auxiliary distribution in a rejection algorithm. However, patterns of allele frequencies produced by neutral models are dissimilar to patterns of allele frequencies produced by non-neutral models, making the rejection method inefficient. For example, in some cases the methods in DNJ (2001) require 10^9 rejections before a sample from the non-neutral model is accepted. Our method simulates samples directly from the distribution of non-neutral models, making simulation methods a practical tool to study the behavior of the likelihood and to perform inference on the strength of selection.
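
    The inefficiency of rejection sampling with a poorly matched proposal, which motivates the direct simulation approach, can be seen in a toy example. The sketch below is not the DNJ algorithm or the authors' method; it draws from a strongly skewed Beta target using a flat proposal and reports the acceptance rate, which drops as the target concentrates.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Illustrative only: a strongly skewed "non-neutral" allele-frequency target
# (Beta(50, 2)) sampled by rejection from a flat "neutral-like" proposal.
target = stats.beta(50, 2)

# Envelope constant: the maximum of the target density over (0, 1).
grid = np.linspace(1e-6, 1 - 1e-6, 10001)
M = target.pdf(grid).max()

n_trials = 200_000
proposals = rng.uniform(0.0, 1.0, n_trials)
accept = rng.uniform(0.0, M, n_trials) < target.pdf(proposals)

print(f"acceptance rate: {accept.mean():.3f}  (roughly 1/M = {1/M:.3f})")
print("accepted sample mean:", proposals[accept].mean().round(3),
      " target mean:", round(target.mean(), 3))
```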

  8. A simple analytical scaling method for a scaled-down test facility simulating SB-LOCAs in a passive PWR

    International Nuclear Information System (INIS)

    Lee, Sang Il

    1992-02-01

    A simple analytical scaling method is developed for a scaled-down test facility simulating SB-LOCAs in a passive PWR. The whole scenario of an SB-LOCA is divided into two phases on the basis of the pressure trend: the depressurization phase and the pot-boiling phase. The pressure and the core mixture level are selected as the most critical parameters to be preserved between the prototype and the scaled-down model. In each phase the highly important phenomena influencing the critical parameters are identified, and the scaling parameters governing these phenomena are generated by the present method. To validate the models used, the Marviken CFT and a 336-rod bundle experiment are simulated. The models overpredict both the pressure and the two-phase mixture level, but show at least qualitative agreement with the experimental results. In order to examine whether the scaled-down model represents the important phenomena well, we simulate the nondimensional pressure response of a cold-leg 4-inch break transient for AP-600 and for the scaled-down model. The results of the present method are in excellent agreement with those of AP-600. It can be concluded that the present method is suitable for scaling a test facility simulating SB-LOCAs in a passive PWR

  9. Simulating Social Networks of Online Communities: Simulation as a Method for Sociability Design

    Science.gov (United States)

    Ang, Chee Siang; Zaphiris, Panayiotis

    We propose the use of social simulations to study and support the design of online communities. In this paper, we developed an Agent-Based Model (ABM) to simulate and study the formation of social networks in a Massively Multiplayer Online Role Playing Game (MMORPG) guild community. We first analyzed the activities and the social network (who-interacts-with-whom) of an existing guild community to identify its interaction patterns and characteristics. Then, based on the empirical results, we derived and formalized the interaction rules, which were implemented in our simulation. Using the simulation, we reproduced the observed social network of the guild community as a means of validation. The simulation was then used to examine how various parameters of the community (e.g. the level of activity, the number of neighbors of each agent, etc.) could potentially influence the characteristics of the social networks.
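
    A toy version of such an agent-based interaction model is sketched below. The activity-driven, preferential-partner rule is invented for illustration and is not the empirically derived rule set of the guild study; the point is only to show how repeated simulated interactions build up a who-interacts-with-whom network whose degree statistics can then be compared with observation.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(3)

N_AGENTS = 100      # guild members (toy number)
N_STEPS  = 5_000    # interaction events to simulate

# Each agent gets an "activity" level; the rule below (active agents initiate
# more interactions and prefer already-popular partners) is a made-up toy rule,
# not the rule set derived from the empirical guild data.
activity = rng.lognormal(mean=0.0, sigma=1.0, size=N_AGENTS)
p_initiate = activity / activity.sum()

edges = defaultdict(int)           # who-interacts-with-whom, with counts
partner_count = np.ones(N_AGENTS)  # +1 smoothing so everyone can be picked

for _ in range(N_STEPS):
    a = rng.choice(N_AGENTS, p=p_initiate)
    # Preferential partner choice: weight by how often agents were contacted.
    w = partner_count.copy()
    w[a] = 0.0
    b = rng.choice(N_AGENTS, p=w / w.sum())
    edges[(min(a, b), max(a, b))] += 1
    partner_count[b] += 1

degree = np.zeros(N_AGENTS, int)
for (a, b) in edges:
    degree[a] += 1
    degree[b] += 1

print("number of distinct ties:", len(edges))
print("mean / max degree      :", degree.mean().round(1), "/", degree.max())
```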

  10. GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method

    International Nuclear Information System (INIS)

    Gong Chunye; Liu Jie; Chi Lihua; Huang Haowei; Fang Jingyue; Gong Zhenghu

    2011-01-01

    The Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides great capability for solving scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution to the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU accelerated simulation of one-energy-group, time-independent, deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with vacuum boundary conditions. The relative advantages and disadvantages of the GPU implementation, the simulation on multiple GPUs, the programming effort and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip without flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.

  11. GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method

    Science.gov (United States)

    Gong, Chunye; Liu, Jie; Chi, Lihua; Huang, Haowei; Fang, Jingyue; Gong, Zhenghu

    2011-07-01

    The Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides great capability for solving scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution to the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU accelerated simulation of one-energy-group, time-independent, deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with vacuum boundary conditions. The relative advantages and disadvantages of the GPU implementation, the simulation on multiple GPUs, the programming effort and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip without flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.

  12. Simulation of neutral gas flow in a tokamak divertor using the Direct Simulation Monte Carlo method

    International Nuclear Information System (INIS)

    Gleason-González, Cristian; Varoutis, Stylianos; Hauer, Volker; Day, Christian

    2014-01-01

    Highlights: • Sub-divertor gas flow calculations in tokamaks by coupling the B2-EIRENE code and the DSMC method. • The results include pressure, temperature, bulk velocity and particle fluxes in the sub-divertor. • A gas recirculation effect towards the plasma chamber through the vertical targets is found. • Comparison between DSMC and the ITERVAC code reveals very good agreement. - Abstract: This paper presents an innovative scientific and engineering approach for describing sub-divertor gas flows of fusion devices by coupling the B2-EIRENE (SOLPS) code and the Direct Simulation Monte Carlo (DSMC) method. The present study exemplifies this with a computational investigation of neutral gas flow in ITER's sub-divertor region. The numerical results include the flow fields and contours of the overall quantities of practical interest, such as the pressure, the temperature and the bulk velocity, assuming helium as the model gas. Moreover, the study unravels the gas recirculation effect located behind the vertical targets, viz. neutral particles flowing towards the plasma chamber. Comparison between calculations performed by the DSMC method and the ITERVAC code reveals very good agreement along the main sub-divertor ducts.

  13. Simulation of acoustic streaming by means of the finite-difference time-domain method

    DEFF Research Database (Denmark)

    Santillan, Arturo Orozco

    2012-01-01

    Numerical simulations of acoustic streaming generated by a standing wave in a narrow twodimensional cavity are presented. In this case, acoustic streaming arises from the viscous boundary layers set up at the surfaces of the walls. It is known that streaming vortices inside the boundary layer have...... directions of rotation that are opposite to those of the outer streaming vortices (Rayleigh streaming). The general objective of the work described in this paper has been to study the extent to which it is possible to simulate both the outer streaming vortices and the inner boundary layer vortices using...... the finite-difference time-domain method. To simplify the problem, thermal effects are not considered. The motivation of the described investigation has been the possibility of using the numerical method to study acoustic streaming, particularly under non-steady conditions. Results are discussed for channels...

  14. A method of simulating and visualizing nuclear reactions

    International Nuclear Information System (INIS)

    Atwood, C.H.; Paul, K.M.

    1994-01-01

    Teaching nuclear reactions to students is difficult because the mechanisms are complex and directly visualizing them is impossible. As a teaching tool, the authors have developed a method of simulating nuclear reactions using colliding water droplets. Videotaping of the collisions, taken with a high shutter speed camera and run frame-by-frame, shows details of the collisions that are analogous to nuclear reactions. The method for colliding the water drops and videotaping the collisions are shown

  15. Lattice Boltzmann method used to simulate particle motion in a conduit

    Directory of Open Access Journals (Sweden)

    Dolanský Jindřich

    2017-06-01

    Full Text Available A three-dimensional numerical simulation of particle motion in a pipe with a rough bed is presented. The simulation, based on the Lattice Boltzmann Method (LBM), employs the hybrid diffuse bounce-back approach to model moving boundaries. The bed of the pipe is formed by stationary spherical particles of the same size as the moving particles. Particle movements are induced by gravitational and hydrodynamic forces. To evaluate the hydrodynamic forces, the Momentum Exchange Algorithm is used. The LBM unified computational frame makes it possible to simulate both the particle motion and the fluid flow and to study the mutual interactions of the carrier liquid flow and particles and the particle–bed and particle–particle collisions. The trajectories of simulated and experimental particles are compared. The Particle Tracking method is used to track particle motion. The correctness of the applied approach is assessed.

  16. Numerical simulation methods for wave propagation through optical waveguides

    International Nuclear Information System (INIS)

    Sharma, A.

    1993-01-01

    The simulation of the field propagation through waveguides requires numerical solutions of the Helmholtz equation. For this purpose a method based on the principle of orthogonal collocation was recently developed. The method is also applicable to nonlinear pulse propagation through optical fibers. Some of the salient features of this method and its application to both linear and nonlinear wave propagation through optical waveguides are discussed in this report. 51 refs, 8 figs, 2 tabs

  17. A Novel Simulation Technician Laboratory Design: Results of a Survey-Based Study.

    Science.gov (United States)

    Ahmed, Rami; Hughes, Patrick G; Friedl, Ed; Ortiz Figueroa, Fabiana; Cepeda Brito, Jose R; Frey, Jennifer; Birmingham, Lauren E; Atkinson, Steven Scott

    2016-03-16

    OBJECTIVE : The purpose of this study was to elicit feedback from simulation technicians prior to developing the first simulation technician-specific simulation laboratory in Akron, OH. Simulation technicians serve a vital role in simulation centers within hospitals/health centers around the world. The first simulation technician degree program in the US has been approved in Akron, OH. To satisfy the requirements of this program and to meet the needs of this special audience of learners, a customized simulation lab is essential. A web-based survey was circulated to simulation technicians prior to completion of the lab for the new program. The survey consisted of questions aimed at identifying structural and functional design elements of a novel simulation center for the training of simulation technicians. Quantitative methods were utilized to analyze data. Over 90% of technicians (n=65) think that a lab designed explicitly for the training of technicians is novel and beneficial. Approximately 75% of respondents think that the space provided appropriate audiovisual (AV) infrastructure and space to evaluate the ability of technicians to be independent. The respondents think that the lab needed more storage space, visualization space for a large number of students, and more space in the technical/repair area. CONCLUSIONS : A space designed for the training of simulation technicians was considered to be beneficial. This laboratory requires distinct space for technical repair, adequate bench space for the maintenance and repair of simulators, an appropriate AV infrastructure, and space to evaluate the ability of technicians to be independent.

  18. A novel method for energy harvesting simulation based on scenario generation

    Science.gov (United States)

    Wang, Zhe; Li, Taoshen; Xiao, Nan; Ye, Jin; Wu, Min

    2018-06-01

    An energy harvesting network (EHN) is a new form of computer network. It converts ambient energy into usable electric energy and supplies the electrical energy as a primary or secondary power source to the communication devices. However, most EHN studies use an analytical probability distribution function to describe the energy harvesting process, which cannot accurately capture the actual situation because it lacks authenticity. In this paper we propose an EHN simulation method based on scenario generation. Firstly, instead of setting a probability distribution in advance, it uses optimal scenario reduction technology to generate representative single-period scenarios based on the historical data of the harvested energy. Secondly, it uses a homogeneous simulated annealing algorithm to generate optimal daily energy harvesting scenario sequences, giving a more accurate simulation of the random characteristics of the energy harvesting network. Then, taking actual wind power data as an example, the accuracy and stability of the method are verified by comparison with the real data. Finally, we present an instance of optimizing the network throughput, whose optimal solution and data analysis indicate the feasibility and effectiveness of the proposed method in energy harvesting simulation.

  19. AUTOMATIC INTERPRETATION OF HIGH RESOLUTION SAR IMAGES: FIRST RESULTS OF SAR IMAGE SIMULATION FOR SINGLE BUILDINGS

    Directory of Open Access Journals (Sweden)

    J. Tao

    2012-09-01

    Full Text Available Due to its all-weather data acquisition capability, high resolution spaceborne Synthetic Aperture Radar (SAR) plays an important role in remote sensing applications like change detection. However, because of the complex geometric mapping of buildings in urban areas, SAR images are often hard to interpret. SAR simulation techniques ease the visual interpretation of SAR images, while fully automatic interpretation is still a challenge. This paper presents a method for supporting the interpretation of high resolution SAR images with simulated radar images using a LiDAR digital surface model (DSM). Line features are extracted from the simulated and real SAR images and used for matching. A single building model is generated from the DSM and used for building recognition in the SAR image. An application of the concept is presented for the city centre of Munich, where the comparison of the simulation to the TerraSAR-X data shows good similarity. Based on the result of simulation and matching, special features (e.g. double bounce lines, shadow areas, etc.) can be automatically indicated in the SAR image.

  20. The hybridized Discontinuous Galerkin method for Implicit Large-Eddy Simulation of transitional turbulent flows

    Science.gov (United States)

    Fernandez, P.; Nguyen, N. C.; Peraire, J.

    2017-05-01

    We present a high-order Implicit Large-Eddy Simulation (ILES) approach for transitional aerodynamic flows. The approach encompasses a hybridized Discontinuous Galerkin (DG) method for the discretization of the Navier-Stokes (NS) equations, and a parallel preconditioned Newton-GMRES solver for the resulting nonlinear system of equations. The combination of hybridized DG methods with an efficient solution procedure leads to a high-order accurate NS solver that is competitive to alternative approaches, such as finite volume and finite difference codes, in terms of computational cost. The proposed approach is applied to transitional flows over the NACA 65-(18)10 compressor cascade and the Eppler 387 wing at Reynolds numbers up to 460,000. Grid convergence studies are presented and the required resolution to capture transition at different Reynolds numbers is investigated. Numerical results show rapid convergence and excellent agreement with experimental data. In short, this work aims to demonstrate the potential of high-order ILES for simulating transitional aerodynamic flows. This is illustrated through numerical results and supported by theoretical considerations.

  1. An at-site flood estimation method in the context of nonstationarity I. A simulation study

    Science.gov (United States)

    Gado, Tamer A.; Nguyen, Van-Thanh-Van

    2016-04-01

    The stationarity of annual flood peak records is the traditional assumption of flood frequency analysis. In some cases, however, as a result of land-use and/or climate change, this assumption is no longer valid. Therefore, new statistical models are needed to capture dynamically the change of probability density functions over time, in order to obtain reliable flood estimation. In this study, an innovative method for nonstationary flood frequency analysis was presented. Here, the new method is based on detrending the flood series and applying the L-moments along with the GEV distribution to the transformed "stationary" series (hereafter, this is called the LM-NS). The LM-NS method was assessed through a comparative study with the maximum likelihood (ML) method for the nonstationary GEV model, as well as with the stationary (S) GEV model. The comparative study, based on Monte Carlo simulations, was carried out for three nonstationary GEV models: a linear dependence of the mean on time (GEV1), a quadratic dependence of the mean on time (GEV2), and linear dependence in both the mean and log standard deviation on time (GEV11). The simulation results indicated that the LM-NS method performs better than the ML method for most of the cases studied, whereas the stationary method provides the least accurate results. An additional advantage of the LM-NS method is to avoid the numerical problems (e.g., convergence problems) that may occur with the ML method when estimating parameters for small data samples.
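
    The detrend-then-fit idea behind the LM-NS method can be sketched on synthetic data. The snippet below generates an artificial GEV1-type series, removes a least-squares linear trend, and fits a stationary GEV to the transformed series; scipy's maximum-likelihood GEV fit is used here simply because it is readily available, whereas the paper pairs the transformation with L-moment estimation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic annual flood peaks with a linear trend in the location parameter
# (illustrative data, not the series analysed in the paper).
years = np.arange(1960, 2016)
t = years - years[0]
true_loc = 100.0 + 0.8 * t                      # GEV1-type nonstationarity
peaks = stats.genextreme.rvs(c=-0.1, loc=true_loc, scale=20.0,
                             size=t.size, random_state=rng)

# Step 1: estimate and remove the trend (here a simple least-squares line).
slope, intercept = np.polyfit(t, peaks, 1)
detrended = peaks - slope * t

# Step 2: fit a stationary GEV to the transformed series. The paper pairs this
# with L-moment estimation; scipy's maximum-likelihood fit is a stand-in.
c_hat, loc_hat, scale_hat = stats.genextreme.fit(detrended)

# Step 3: a quantile for a given year adds the trend back in.
def flood_quantile(year, return_period=100.0):
    q = stats.genextreme.ppf(1.0 - 1.0 / return_period, c_hat,
                             loc=loc_hat, scale=scale_hat)
    return q + slope * (year - years[0])

print("fitted shape/scale:", round(c_hat, 3), round(scale_hat, 1))
print("100-year flood estimate for 2015:", round(flood_quantile(2015), 1))
```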

  2. Assessment of high-resolution methods for numerical simulations of compressible turbulence with shock waves

    International Nuclear Information System (INIS)

    Johnsen, Eric; Larsson, Johan; Bhagatwala, Ankit V.; Cabot, William H.; Moin, Parviz; Olson, Britton J.; Rawat, Pradeep S.; Shankar, Santhosh K.; Sjoegreen, Bjoern; Yee, H.C.; Zhong Xiaolin; Lele, Sanjiva K.

    2010-01-01

    Flows in which shock waves and turbulence are present and interact dynamically occur in a wide range of applications, including inertial confinement fusion, supernovae explosion, and scramjet propulsion. Accurate simulations of such problems are challenging because of the contradictory requirements of numerical methods used to simulate turbulence, which must minimize any numerical dissipation that would otherwise overwhelm the small scales, and shock-capturing schemes, which introduce numerical dissipation to stabilize the solution. The objective of the present work is to evaluate the performance of several numerical methods capable of simultaneously handling turbulence and shock waves. A comprehensive range of high-resolution methods (WENO, hybrid WENO/central difference, artificial diffusivity, adaptive characteristic-based filter, and shock fitting) and suite of test cases (Taylor-Green vortex, Shu-Osher problem, shock-vorticity/entropy wave interaction, Noh problem, compressible isotropic turbulence) relevant to problems with shocks and turbulence are considered. The results indicate that the WENO methods provide sharp shock profiles, but overwhelm the physical dissipation. The hybrid method is minimally dissipative and leads to sharp shocks and well-resolved broadband turbulence, but relies on an appropriate shock sensor. Artificial diffusivity methods in which the artificial bulk viscosity is based on the magnitude of the strain-rate tensor resolve vortical structures well but damp dilatational modes in compressible turbulence; dilatation-based artificial bulk viscosity methods significantly improve this behavior. For well-defined shocks, the shock fitting approach yields good results.

  3. Simulation of therapeutic electron beam tracking through a non-uniform magnetic field using finite element method.

    Science.gov (United States)

    Tahmasebibirgani, Mohammad Javad; Maskani, Reza; Behrooz, Mohammad Ali; Zabihzadeh, Mansour; Shahbazian, Hojatollah; Fatahiasl, Jafar; Chegeni, Nahid

    2017-04-01

    In radiotherapy, megaelectron volt (MeV) electrons are employed for the treatment of superficial cancers. Magnetic fields can be used for deflection and deformation of the electron flow; here the field is produced by non-uniform permanent magnets. The primary electrons are neither mono-energetic nor completely parallel, so calculation of the electron beam deflection requires complex mathematical methods. In this study, a device was made to apply a magnetic field to an electron beam, and the path of the electrons through the magnetic field was simulated using the finite element method. A mini-applicator equipped with two neodymium permanent magnets was designed that enables tuning of the distance between the magnets. This device was placed in a standard applicator of a Varian 2100 CD linear accelerator. The mini-applicator was simulated in the CST Studio finite element software, and the deflection angle and displacement of the electron beam after passing through the magnetic field were calculated. By setting the distance between the two poles to between 2 and 5 cm, various intensities of transverse magnetic field were created. The accelerator head was turned so that the deflected electrons became perpendicular to the water surface. To measure the displacement of the electron beam, EBT2 GafChromic films were employed. After being exposed, the films were scanned using an HP G3010 reflection scanner and their optical density was extracted by a program written in the MATLAB environment. The measured displacement of the electron beam was compared with the simulation results after applying the magnetic field. The simulation results of the magnetic field showed good agreement with the measured values. The maximum deflection angle for a 12 MeV beam was 32.9° and the minimum deflection for 15 MeV was 12.1°. Measurement with the film confirmed the precision of the simulation in predicting the displacement of the electron beam. A magnetic mini-applicator was made and simulated using the finite element method, and the deflection angle and displacement of the electron beam were calculated. With
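
    A back-of-the-envelope companion to the finite element model: for a uniform transverse field the deflection follows from the relativistic gyroradius, theta = arcsin(L / r) with r = p / (eB). The field strength and pole length below are assumed values chosen only for illustration; the real applicator field is non-uniform, so this indicates no more than the order of magnitude of the bending.

```python
import numpy as np

# Physical constants (SI)
E0_MEV = 0.511          # electron rest energy [MeV]
E_CHARGE = 1.602e-19    # elementary charge [C]
C_LIGHT = 2.998e8       # speed of light [m/s]

def deflection_angle_deg(kinetic_mev, b_tesla, region_len_m):
    """Deflection of an electron crossing a uniform transverse field region.

    This is a rough check, not the paper's finite element model: the actual
    applicator field is non-uniform, so only the order of magnitude is meant.
    """
    total_mev = kinetic_mev + E0_MEV
    p_mev_c = np.sqrt(total_mev**2 - E0_MEV**2)        # momentum in MeV/c
    p_si = p_mev_c * 1e6 * E_CHARGE / C_LIGHT          # momentum in kg m/s
    radius = p_si / (E_CHARGE * b_tesla)               # gyroradius [m]
    return np.degrees(np.arcsin(region_len_m / radius))

# Assumed field strength and pole length, chosen only for illustration.
for energy in (12.0, 15.0):
    theta = deflection_angle_deg(energy, b_tesla=0.3, region_len_m=0.05)
    print(f"{energy:>4.1f} MeV beam, 0.3 T over 5 cm -> {theta:.1f} deg")
```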

  4. Acquiring molecular interference functions of X-ray coherent scattering for breast tissues by combination of simulation and experimental methods

    International Nuclear Information System (INIS)

    Chaparian, A.; Oghabian, M. A.; Changizi, V.

    2009-01-01

    Recently, it has been indicated that X-ray coherent scatter from biological tissues can be used to obtain a signature of the tissue. Some scientists are interested in studying this effect for early detection of breast cancer. Since experimental methods for optimization are time consuming and expensive, some scientists suggest using simulation. Monte Carlo codes are the best option for radiation simulation; however, one permanent defect of Monte Carlo codes has been the lack of a sufficient physical model for coherent (Rayleigh) scattering, including molecular interference effects. Materials and Methods: It was decided to obtain the molecular interference functions of coherent X-ray scattering for normal breast tissues by a combination of modeling and experimental methods. A Monte Carlo simulation program was written to simulate the angular distribution of scattered photons for the normal breast tissue samples. Moreover, experimental diffraction patterns of these tissues were measured by means of the energy dispersive X-ray diffraction method. The simulation and experimental data were used to obtain a tabulation of molecular interference functions for breast tissues. Results: With this study, a tabulation of molecular interference functions for normal breast tissues was prepared to facilitate the simulation of diffraction patterns of the tissues without any experiment. Conclusion: The method may lead to the design of new systems for early detection of breast cancer.

  5. Rapid simulation of spatial epidemics: a spectral method.

    Science.gov (United States)

    Brand, Samuel P C; Tildesley, Michael J; Keeling, Matthew J

    2015-04-07

    Spatial structure and hence the spatial position of host populations plays a vital role in the spread of infection. In the majority of situations, it is only possible to predict the spatial spread of infection using simulation models, which can be computationally demanding especially for large population sizes. Here we develop an approximation method that vastly reduces this computational burden. We assume that the transmission rates between individuals or sub-populations are determined by a spatial transmission kernel. This kernel is assumed to be isotropic, such that the transmission rate is simply a function of the distance between susceptible and infectious individuals; as such this provides the ideal mechanism for modelling localised transmission in a spatial environment. We show that the spatial force of infection acting on all susceptibles can be represented as a spatial convolution between the transmission kernel and a spatially extended 'image' of the infection state. This representation allows the rapid calculation of stochastic rates of infection using fast-Fourier transform (FFT) routines, which greatly improves the computational efficiency of spatial simulations. We demonstrate the efficiency and accuracy of this fast spectral rate recalculation (FSR) method with two examples: an idealised scenario simulating an SIR-type epidemic outbreak amongst N habitats distributed across a two-dimensional plane; the spread of infection between US cattle farms, illustrating that the FSR method makes continental-scale outbreak forecasting feasible with desktop processing power. The latter model demonstrates which areas of the US are at consistently high risk for cattle-infections, although predictions of epidemic size are highly dependent on assumptions about the tail of the transmission kernel. Copyright © 2015 Elsevier Ltd. All rights reserved.
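
    The core of the FSR trick, evaluating the spatial force of infection as an FFT-based convolution of the transmission kernel with the infection image, can be sketched in a few lines. The grid size, kernel width and transmission coefficient below are arbitrary illustrative values, and the sketch assumes periodic boundaries, which the circular FFT convolution implies.

```python
import numpy as np

rng = np.random.default_rng(5)

# A toy landscape: habitats on a grid; 'infectious' marks the current state.
GRID = 128
infectious = np.zeros((GRID, GRID))
infectious[rng.integers(0, GRID, 20), rng.integers(0, GRID, 20)] = 1.0

# Isotropic transmission kernel on the same grid (an assumed Gaussian here;
# the paper stresses that the kernel tail matters a great deal).
x = np.arange(GRID)
dx = np.minimum(x, GRID - x)                  # periodic distances for the FFT
dist2 = dx[:, None]**2 + dx[None, :]**2
kernel = np.exp(-dist2 / (2.0 * 3.0**2))

# Spatial force of infection on every cell = kernel convolved with the
# infection image, evaluated with FFTs instead of an O(N^2) pairwise sum.
foi = np.real(np.fft.ifft2(np.fft.fft2(kernel) * np.fft.fft2(infectious)))

# Per-timestep infection probability for susceptible cells (beta assumed).
beta = 0.01
p_infect = 1.0 - np.exp(-beta * foi)
print("max force of infection:", foi.max().round(2))
print("cells at >1% risk this step:", int((p_infect > 0.01).sum()))
```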

  6. Simulation of Micro-Channel and Micro-Orifice Flow Using Lattice Boltzmann Method with Langmuir Slip Model

    Directory of Open Access Journals (Sweden)

    A. R. Rahmati

    2016-12-01

    Full Text Available Because of its kinetic nature and computational advantages, the Lattice Boltzmann method (LBM) has been well accepted as a useful tool to simulate micro-scale flows. The slip boundary model plays a crucial role in the accuracy of solutions for micro-channel flow simulations. The most widely used slip boundary condition is the Maxwell slip model. The results of the Maxwell slip model are significantly affected by the accommodation coefficient, but there is no explicit relationship between the properties at the wall and the accommodation coefficient. In the present work, the Langmuir slip model is used alongside the LBM to simulate micro-channel and micro-orifice flows. Slip velocity and nonlinear pressure drop profiles are presented as the two major effects in such flows. The results are in good agreement with existing results in the literature.

  7. A new simulation method for turbines in wake - Applied to extreme response during operation

    DEFF Research Database (Denmark)

    Thomsen, K.; Aagaard Madsen, H.

    2005-01-01

    The work focuses on prediction of the load response for wind turbines operating in wind farms using a newly developed aeroelastic simulation method. The traditionally used concept is to adjust the free flow turbulence intensity to account for increased loads in wind farms - a methodology that might......, the resulting extremes might be erroneous. For blade loads the traditionally used simplified approach works better than for integrated rotor loads - where the instantaneous load gradient across the rotor disc is causing the extreme loads. In the article the new wake simulation approach is illustrated...

  8. A direct simulation method for flows with suspended paramagnetic particles

    NARCIS (Netherlands)

    Kang, T.G.; Hulsen, M.A.; Toonder, den J.M.J.; Anderson, P.D.; Meijer, H.E.H.

    2008-01-01

    A direct numerical simulation method based on the Maxwell stress tensor and a fictitious domain method has been developed to solve flows with suspended paramagnetic particles. The numerical scheme enables us to take into account both hydrodynamic and magnetic interactions between particles in a

  9. New Results on the Simulation of Particulate Flows

    Energy Technology Data Exchange (ETDEWEB)

    Uhlmann, M.

    2004-07-01

    We propose a new immersed boundary method for the simulation of particulate flows. The fluid-solid interaction force is formulated in a direct manner, without resorting to a feed-back mechanism and thereby avoiding the introduction of additional free parameters. The regularized delta function of Peskin (Acta Numerica, 2002) is used to pass variables between Lagrangian and Eulerian representations, providing for a smooth variation of the hydrodynamic forces while particles are in motion relative to the fixed grid. The application of this scheme to several benchmark problems in two space dimensions demonstrates its feasibility and efficiency. (Author) 9 refs.
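
    The role of the regularized delta function can be illustrated in one dimension. The sketch below uses the widely used 4-point kernel associated with Peskin's immersed boundary framework (the paper may use a different variant) to spread a Lagrangian point force onto a grid and to interpolate a grid field back to the particle position; the grid spacing and particle position are arbitrary.

```python
import numpy as np

def phi4(r):
    """4-point regularized delta kernel of the Peskin type (one common choice;
    the paper's exact variant may differ)."""
    r = np.abs(r)
    out = np.zeros_like(r)
    inner = r <= 1.0
    outer = (r > 1.0) & (r < 2.0)
    out[inner] = (3.0 - 2.0*r[inner] + np.sqrt(1.0 + 4.0*r[inner] - 4.0*r[inner]**2)) / 8.0
    out[outer] = (5.0 - 2.0*r[outer] - np.sqrt(-7.0 + 12.0*r[outer] - 4.0*r[outer]**2)) / 8.0
    return out

# 1D illustration: spread a Lagrangian point force onto an Eulerian grid and
# interpolate a grid velocity back to the particle position.
h = 0.1                              # grid spacing (assumed)
x_grid = np.arange(0.0, 10.0, h)
x_particle, f_particle = 4.63, 1.0   # arbitrary position and force

weights = phi4((x_grid - x_particle) / h) / h    # discrete delta, integrates to 1
f_grid = f_particle * weights                    # spreading (force per unit length)

u_grid = np.sin(x_grid)                          # some Eulerian velocity field
u_particle = np.sum(u_grid * weights) * h        # interpolation back

print("sum of delta weights * h:", (weights.sum() * h).round(6))   # ~1
print("total spread force * h  :", (f_grid.sum() * h).round(6))    # ~f_particle
print("interpolated velocity   :", u_particle.round(4),
      " exact sin(x_p):", np.sin(x_particle).round(4))
```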

  10. New Results on the Simulation of Particulate Flows

    International Nuclear Information System (INIS)

    Uhlmann, M.

    2004-01-01

    We propose a new immersed boundary method for the simulation of particulate flows. The fluid-solid interaction force is formulated in a direct manner, without resorting to a feed-back mechanism and thereby avoiding the introduction of additional free parameters. The regularized delta function of Peskin (Acta Numerica, 2002) is used to pass variables between Lagrangian and Eulerian representations, providing for a smooth variation of the hydrodynamic forces while particles are in motion relative to the fixed grid. The application of this scheme to several benchmark problems in two space dimensions demonstrates its feasibility and efficiency. (Author) 9 refs

  11. Simulating colloid hydrodynamics with lattice Boltzmann methods

    International Nuclear Information System (INIS)

    Cates, M E; Stratford, K; Adhikari, R; Stansell, P; Desplat, J-C; Pagonabarraga, I; Wagner, A J

    2004-01-01

    We present a progress report on our work on lattice Boltzmann methods for colloidal suspensions. We focus on the treatment of colloidal particles in binary solvents and on the inclusion of thermal noise. For a benchmark problem of colloids sedimenting and becoming trapped by capillary forces at a horizontal interface between two fluids, we discuss the criteria for parameter selection, and address the inevitable compromise between computational resources and simulation accuracy

  12. Simulation of transients with space-dependent feedback by coarse mesh flux expansion method

    International Nuclear Information System (INIS)

    Langenbuch, S.; Maurer, W.; Werner, W.

    1975-01-01

    For the simulation of the time-dependent behaviour of large LWR-cores, even the most efficient Finite-Difference (FD) methods require a prohibitive amount of computing time in order to achieve results of acceptable accuracy. Static CM-solutions computed with a mesh-size corresponding to the fuel element structure (about 20 cm) are at least as accurate as FD-solutions computed with about 5 cm mesh-size. For 3d-calculations this results in a reduction of storage requirements by a factor 60 and of computing costs by a factor 40, relative to FD-methods. These results have been obtained for pure neutronic calculations, where feedback is not taken into account. In this paper it is demonstrated that the method retains its accuracy also in kinetic calculations, even in the presence of strong space dependent feedback. (orig./RW) [de

  13. Analysis of large electromagnetic pulse simulators using the electric field integral equation method in time domain

    International Nuclear Information System (INIS)

    Jamali, J.; Aghajafari, R.; Moini, R.; Sadeghi, H.

    2002-01-01

    A time-domain approach is presented to calculate electromagnetic fields inside a large Electromagnetic Pulse (EMP) simulator. This type of EMP simulator is used for studying the effect of electromagnetic pulses on electrical apparatus in various structures such as vehicles, aeroplanes, etc. The simulator consists of three planar transmission lines. To solve the problem, we first model the metallic structure of the simulator as a grid of conducting wires. The numerical solution of the governing electric field integral equation is then obtained using the method of moments in the time domain. To demonstrate the accuracy of the model, we consider a typical EMP simulator. The comparison of our results with those obtained experimentally in the literature validates the model introduced in this paper.

  14. Advanced scientific computational methods and their applications to nuclear technologies. (4) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (4)

    International Nuclear Information System (INIS)

    Sekimura, Naoto; Okita, Taira

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the different realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has therefore been prepared in serial form. This is the fourth issue, giving an overview of scientific computational methods with an introduction to continuum simulation methods and their applications. Simulation methods for physical radiation effects on materials are reviewed according to the underlying process, such as the binary collision approximation, molecular dynamics, the kinetic Monte Carlo method, the reaction rate method and dislocation dynamics. (T. Tanaka)

  15. Simulation As a Method To Support Complex Organizational Transformations in Healthcare

    NARCIS (Netherlands)

    Rothengatter, D.C.F.; Katsma, Christiaan; van Hillegersberg, Jos

    2010-01-01

    In this paper we study the application of simulation as a method to support information system and process design in complex organizational transitions. We apply a combined use of a collaborative workshop approach with the use of a detailed and accurate graphical simulation model in a hospital that

  16. Analysis of optimisation method for a two-stroke piston ring using the Finite Element Method and the Simulated Annealing Method

    Science.gov (United States)

    Kaliszewski, M.; Mazuro, P.

    2016-09-01

    The Simulated Annealing method of optimisation of the sealing piston ring geometry is tested. The aim of the optimisation is to develop a ring geometry which exerts the demanded pressure on the cylinder just by being bent to fit the cylinder. A method of FEM analysis of an arbitrary piston ring geometry is applied in the ANSYS software. The demanded pressure function (based on formulae presented by A. Iskra) as well as the objective function are introduced. A geometry definition constructed from polynomials in a radial coordinate system is delivered and discussed. The possible application of the Simulated Annealing method to a piston ring optimisation task is proposed and visualised. Difficulties leading to a possible lack of convergence of the optimisation are presented. An example of an unsuccessful optimisation performed in APDL is discussed. A possible line of further improvement of the optimisation is proposed.
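
    A generic simulated annealing loop of the kind applied here is easy to sketch. The objective below is a toy multimodal function standing in for the ring-pressure misfit (the real objective evaluates FEM contact pressure in ANSYS against the demanded pressure function); the neighbourhood move, start temperature and geometric cooling schedule are assumed choices.

```python
import math
import random

random.seed(6)

def objective(x):
    """Toy multimodal objective standing in for the ring-pressure misfit;
    the real objective is evaluated via an FEM contact analysis."""
    return x**2 + 10.0 * math.sin(3.0 * x) + 10.0

def simulated_annealing(x0=5.0, t0=10.0, cooling=0.995, n_steps=5000):
    x, fx = x0, objective(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(n_steps):
        cand = x + random.gauss(0.0, 0.5)          # random neighbourhood move
        fc = objective(cand)
        # Accept improvements always; accept worse moves with Boltzmann prob.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                                # geometric cooling schedule
    return best_x, best_f

x_opt, f_opt = simulated_annealing()
print(f"best x = {x_opt:.3f}, objective = {f_opt:.3f}")
```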

  17. The afforestation problem: a heuristic method based on simulated annealing

    DEFF Research Database (Denmark)

    Vidal, Rene Victor Valqui

    1992-01-01

    This paper presents the afforestation problem, that is the location and design of new forest compartments to be planted in a given area. This optimization problem is solved by a two-step heuristic method based on simulated annealing. Tests and experiences with this method are also presented....

  18. Hamiltonian and potentials in derivative pricing models: exact results and lattice simulations

    Science.gov (United States)

    Baaquie, Belal E.; Corianò, Claudio; Srikant, Marakani

    2004-03-01

    The pricing of options, warrants and other derivative securities is one of the great successes of financial economics. These financial products can be modeled and simulated using quantum mechanical instruments based on a Hamiltonian formulation. We show here some applications of these methods for various potentials, which we have simulated via lattice Langevin and Monte Carlo algorithms, to the pricing of options. We focus on barrier or path-dependent options, showing in some detail the computational strategies involved.
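    As a point of reference only, path-dependent (barrier) payoffs of the kind discussed above can also be priced with a plain risk-neutral Monte Carlo under geometric Brownian motion. The Python sketch below illustrates that standard textbook approach, not the Hamiltonian/lattice Langevin formulation of the paper; all parameter values are invented for the example.

        # Standard Monte Carlo pricer for a down-and-out barrier call under GBM
        # (illustrative baseline, not the authors' quantum-mechanical method).
        import math
        import random

        def down_and_out_call(s0, strike, barrier, r, sigma, T,
                              n_steps=252, n_paths=20000):
            dt = T / n_steps
            drift = (r - 0.5 * sigma ** 2) * dt
            vol = sigma * math.sqrt(dt)
            payoff_sum = 0.0
            for _ in range(n_paths):
                s, knocked_out = s0, False
                for _ in range(n_steps):
                    s *= math.exp(drift + vol * random.gauss(0.0, 1.0))
                    if s <= barrier:            # path-dependent feature: knock-out
                        knocked_out = True
                        break
                if not knocked_out:
                    payoff_sum += max(s - strike, 0.0)
            return math.exp(-r * T) * payoff_sum / n_paths

        print(down_and_out_call(s0=100.0, strike=100.0, barrier=80.0,
                                r=0.05, sigma=0.2, T=1.0))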

  19. An efficient parallel stochastic simulation method for analysis of nonviral gene delivery systems

    KAUST Repository

    Kuwahara, Hiroyuki

    2011-01-01

    Gene therapy has a great potential to become an effective treatment for a wide variety of diseases. One of the main challenges to make gene therapy practical in clinical settings is the development of efficient and safe mechanisms to deliver foreign DNA molecules into the nucleus of target cells. Several computational and experimental studies have shown that the design process of synthetic gene transfer vectors can be greatly enhanced by computational modeling and simulation. This paper proposes a novel, effective parallelization of the stochastic simulation algorithm (SSA) for pharmacokinetic models that characterize the rate-limiting, multi-step processes of intracellular gene delivery. While efficient parallelizations of the SSA are still an open problem in a general setting, the proposed parallel simulation method is able to substantially accelerate the next-reaction selection scheme and the reaction update scheme in the SSA by exploiting and decomposing the structures of stochastic gene delivery models. This thus makes computationally intensive analyses, such as parameter optimization and gene dosage control for specific cell types, gene vectors, and transgene expression stability, substantially more practical than would otherwise be possible with the standard SSA. Here, we translated the nonviral gene delivery model based on mass-action kinetics by Varga et al. [Molecular Therapy, 4(5), 2001] into a more realistic model that captures intracellular fluctuations based on stochastic chemical kinetics, and as a case study we applied our parallel simulation to this stochastic model. Our results show that our simulation method is able to increase the efficiency of statistical analysis by at least 50% in various settings. © 2011 ACM.
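    For orientation, the two steps that the paper parallelizes (next-reaction selection and reaction update) are shown below in a serial Gillespie direct-method SSA on a toy birth/death system. This Python sketch is only the baseline algorithm with invented rate constants; it is not the authors' decomposition of the gene delivery model.

        # Serial Gillespie direct-method SSA on a toy birth/death system.
        import math
        import random

        def ssa(x0=0, k_birth=1.0, k_death=0.1, t_end=50.0):
            t, x = 0.0, x0
            trajectory = [(t, x)]
            while t < t_end:
                a1 = k_birth            # propensity of X -> X + 1
                a2 = k_death * x        # propensity of X -> X - 1
                a0 = a1 + a2
                if a0 == 0.0:
                    break
                # exponential waiting time until the next reaction
                t += -math.log(1.0 - random.random()) / a0
                # next-reaction selection: channel chosen proportionally to its propensity
                if random.random() * a0 < a1:
                    x += 1
                else:
                    x -= 1
                trajectory.append((t, x))
            return trajectory

        print(ssa()[-1])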

  20. Numerical and experimental validation of a particle Galerkin method for metal grinding simulation

    Science.gov (United States)

    Wu, C. T.; Bui, Tinh Quoc; Wu, Youcai; Luo, Tzui-Liang; Wang, Morris; Liao, Chien-Chih; Chen, Pei-Yin; Lai, Yu-Sheng

    2018-03-01

    In this paper, a numerical approach with an experimental validation is introduced for modelling high-speed metal grinding processes in 6061-T6 aluminum alloys. The derivation of the present numerical method starts with an establishment of a stabilized particle Galerkin approximation. A non-residual penalty term from strain smoothing is introduced as a means of stabilizing the particle Galerkin method. Additionally, second-order strain gradients are introduced to the penalized functional for the regularization of damage-induced strain localization problem. To handle the severe deformation in metal grinding simulation, an adaptive anisotropic Lagrangian kernel is employed. Finally, the formulation incorporates a bond-based failure criterion to bypass the prospective spurious damage growth issues in material failure and cutting debris simulation. A three-dimensional metal grinding problem is analyzed and compared with the experimental results to demonstrate the effectiveness and accuracy of the proposed numerical approach.

  1. Set simulation of a turbulent arc by Monte-Carlo method

    International Nuclear Information System (INIS)

    Zhukov, M.F.; Devyatov, B.N.; Nazaruk, V.I.

    1982-01-01

    A method of simulation of turbulent arc fluctuations is suggested which is based on a probabilistic set description of the displacements of the conducting channel over the nodes of a plane net, taking into account the turbulent eddies that cause non-uniformity of the field of displacements. The problem is treated in terms of random set theory. Methods to control the displacements by varying the local displacement sets are described. A local-set approach to turbulent arc simulation is used for a statistical study of the evolution of the arc form in a turbulent gas flow. The method implies the performance of numerical experiments on a computer. Various ways to solve the problem of controlling the geometric form of an arc column on a model are described. Also considered is the organization of physical experiments to obtain the information required for the identification of the local sets. The suggested method of applying mathematical experiments is associated with the principles of an operational game. (author)

  2. Stable water isotope simulation by current land-surface schemes:Results of IPILPS phase 1

    Energy Technology Data Exchange (ETDEWEB)

    Henderson-Sellers, A.; Fischer, M.; Aleinov, I.; McGuffie, K.; Riley, W.J.; Schmidt, G.A.; Sturm, K.; Yoshimura, K.; Irannejad, P.

    2005-10-31

    Phase 1 of isotopes in the Project for Intercomparison of Land-surface Parameterization Schemes (iPILPS) compares the simulation of two stable water isotopologues (¹H₂¹⁸O and ¹H²H¹⁶O) at the land-atmosphere interface. The simulations are off-line, with forcing from an isotopically enabled regional model for three locations selected to offer contrasting climates and ecotypes: an evergreen tropical forest, a sclerophyll eucalypt forest and a mixed deciduous wood. Here we report on the experimental framework, the quality control undertaken on the simulation results and the method of intercomparisons employed. The small number of available isotopically-enabled land-surface schemes (ILSSs) limits the drawing of strong conclusions but, despite this, there is shown to be benefit in undertaking this type of isotopic intercomparison. Although validation of isotopic simulations at the land surface must await more, and much more complete, observational campaigns, we find that the empirically-based Craig-Gordon parameterization (of isotopic fractionation during evaporation) gives adequately realistic isotopic simulations when incorporated in a wide range of land-surface codes. By introducing two new tools for understanding isotopic variability from the land surface, the Isotope Transfer Function and the iPILPS plot, we show that different hydrological parameterizations cause very different isotopic responses. We show that ILSS-simulated isotopic equilibrium is independent of the total water and energy budget (with respect to both equilibration time and state), but interestingly the partitioning of available energy and water is a function of the models' complexity.

  3. Evaluation of null-point detection methods on simulation data

    Science.gov (United States)

    Olshevsky, Vyacheslav; Fu, Huishan; Vaivads, Andris; Khotyaintsev, Yuri; Lapenta, Giovanni; Markidis, Stefano

    2014-05-01

    We model the measurements of artificial spacecraft that resemble the configuration of CLUSTER propagating in a particle-in-cell simulation of turbulent magnetic reconnection. The simulation domain contains multiple isolated X-type null-points, but the majority are O-type null-points. Simulations show that current pinches surrounded by twisted fields, analogous to laboratory pinches, are formed along the sequences of O-type nulls. In the simulation, the magnetic reconnection is mainly driven by the kinking of the pinches, at spatial scales of several ion inertial lengths. We compute the locations of magnetic null-points and detect their type. When the satellites are separated by fractions of an ion inertial length, as they are for CLUSTER, they are able to locate both the isolated null-points and the pinches. We apply the method to real CLUSTER data and speculate on how common pinches are in the magnetosphere, and whether they play a dominant role in the dissipation of magnetic energy.

  4. Study of Particle Rotation Effect in Gas-Solid Flows using Direct Numerical Simulation with a Lattice Boltzmann Method

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Kyung [Tuskegee Univ., Tuskegee, AL (United States); Fan, Liang-Shih [The Ohio State Univ., Columbus, OH (United States); Zhou, Qiang [The Ohio State Univ., Columbus, OH (United States); Yang, Hui [The Ohio State Univ., Columbus, OH (United States)

    2014-09-30

    fluid is accelerated from rest by a constant average pressure gradient toward a steady Stokes flow. The simulation results agree well with the theories for the short- and long-time behavior of the drag force. Flows through non-rotational and rotational spheres in simple cubic arrays and random arrays are simulated over the entire range of packing fractions, and at both low and moderate particle Reynolds numbers, to compare the simulated results with the literature results and to develop a new drag force formula, a new lift force formula, and a new torque formula. Random arrays of solid particles in fluids are generated with a Monte Carlo procedure and Zinchenko's method to avoid crystallization of solid particles at high solid volume fractions. A new drag force formula was developed from extensive simulated results so as to be closely applicable to real processes over the entire range of packing fractions and at both low and moderate particle Reynolds numbers. The simulation results indicate that the drag force is barely affected by rotational Reynolds numbers. The drag force is essentially unchanged as the angle of the rotating axis varies.

  5. Precision of a FDTD method to simulate cold magnetized plasmas

    International Nuclear Information System (INIS)

    Pavlenko, I.V.; Melnyk, D.A.; Prokaieva, A.O.; Girka, I.O.

    2014-01-01

    The finite difference time domain (FDTD) method is applied to describe the propagation of transverse electromagnetic waves through magnetized plasmas. The numerical dispersion relation is obtained in a cold plasma approximation. The accuracy of the numerical dispersion is calculated as a function of the frequency of the launched wave and the time step of the numerical grid. It is shown that the numerical method does not reproduce the analytical results near the plasma resonances for any chosen value of the time step if there is no dissipation mechanism in the system. This means that the FDTD method cannot be applied straightforwardly to simulate problems where the plasma resonances play a key role (for example, mode conversion problems). However, the accuracy of the numerical scheme can be improved by introducing some artificial damping of the plasma currents. Although part of the wave power is lost in the system in this case, the numerical scheme then describes the wave processes in agreement with analytical predictions.
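    To make the role of the artificial current damping concrete, the following is a minimal one-dimensional FDTD sketch in Python for a cold, unmagnetized plasma slab, in which the plasma current J obeys dJ/dt = eps0*wp^2*E - nu*J and nu is an artificial collision (damping) frequency of the kind mentioned above. The grid size, plasma frequency, source frequency and nu are illustrative choices, not values from the paper.

        # 1-D FDTD with a cold-plasma current and artificial damping nu.
        import numpy as np

        c0, eps0, mu0 = 3e8, 8.854e-12, 4e-7 * np.pi
        nx, dx = 400, 1e-3
        dt = 0.5 * dx / c0                     # CFL-stable time step
        omega_p = 2 * np.pi * 20e9             # plasma frequency (illustrative)
        nu = 0.02 * omega_p                    # artificial damping of the plasma current

        ez = np.zeros(nx)
        hy = np.zeros(nx - 1)
        jz = np.zeros(nx)

        for n in range(2000):
            hy += dt / (mu0 * dx) * (ez[1:] - ez[:-1])
            jz += dt * (eps0 * omega_p ** 2 * ez - nu * jz)
            ez[1:-1] += dt / eps0 * ((hy[1:] - hy[:-1]) / dx - jz[1:-1])
            ez[20] += np.sin(2 * np.pi * 30e9 * n * dt)   # soft source above the plasma frequency
        print(float(np.abs(ez).max()))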

  6. The nonlinear Galerkin method: A multi-scale method applied to the simulation of homogeneous turbulent flows

    Science.gov (United States)

    Debussche, A.; Dubois, T.; Temam, R.

    1993-01-01

    Using results of Direct Numerical Simulation (DNS) in the case of two-dimensional homogeneous isotropic flows, the behavior of the small and large scales of Kolmogorov-like flows at moderate Reynolds numbers is first analyzed in detail. Several estimates of the time variations of the small eddies and the nonlinear interaction terms were derived; those terms play the role of the Reynolds stress tensor in the case of LES. Since the time step of a numerical scheme is determined as a function of the energy-containing eddies of the flow, the variations of the small scales and of the nonlinear interaction terms over one iteration can become negligible by comparison with the accuracy of the computation. Based on this remark, a multilevel scheme which treats the small and the large eddies differently was proposed. Using mathematical developments, estimates of all the parameters involved in the algorithm were derived, making it a completely self-adaptive procedure. Finally, realistic simulations of (Kolmogorov-like) flows over several eddy-turnover times were performed. The results are analyzed in detail and a parametric study of the nonlinear Galerkin method is performed.

  7. A method of estimating conceptus doses resulting from multidetector CT examinations during all stages of gestation

    International Nuclear Information System (INIS)

    Damilakis, John; Tzedakis, Antonis; Perisinakis, Kostas; Papadakis, Antonios E.

    2010-01-01

    Purpose: Current methods for the estimation of conceptus dose from multidetector CT (MDCT) examinations performed on the mother provide dose data for typical protocols with a fixed scan length. However, modified low-dose imaging protocols are frequently used during pregnancy. The purpose of the current study was to develop a method for the estimation of conceptus dose from any MDCT examination of the trunk performed during all stages of gestation. Methods: The Monte Carlo N-Particle (MCNP) radiation transport code was employed in this study to model the Siemens Sensation 16 and Sensation 64 MDCT scanners. Four mathematical phantoms were used, simulating women at 0, 3, 6, and 9 months of gestation. The contribution to the conceptus dose from single simulated scans was obtained at various positions across the phantoms. To investigate the effect of maternal body size and conceptus depth on conceptus dose, phantoms of different sizes were produced by adding layers of adipose tissue around the trunk of the mathematical phantoms. To verify MCNP results, conceptus dose measurements were carried out by means of three physical anthropomorphic phantoms, simulating pregnancy at 0, 3, and 6 months of gestation and thermoluminescence dosimetry (TLD) crystals. Results: The results consist of Monte Carlo-generated normalized conceptus dose coefficients for single scans across the four mathematical phantoms. These coefficients were defined as the conceptus dose contribution from a single scan divided by the CTDI free-in-air measured with identical scanning parameters. Data have been produced to take into account the effect of maternal body size and conceptus position variations on conceptus dose. Conceptus doses measured with TLD crystals showed a difference of up to 19% compared to those estimated by mathematical simulations. Conclusions: Estimation of conceptus doses from MDCT examinations of the trunk performed on pregnant patients during all stages of gestation can be made
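    Written compactly (our notation, not the authors'), the normalized coefficient defined above is

        \[ \mathrm{NCD} \;=\; \frac{D_{\mathrm{conceptus}}^{\mathrm{single\ scan}}}{\mathrm{CTDI}_{\mathrm{free\text{-}in\text{-}air}}} \]

    so that, presumably, the conceptus dose for an arbitrary scan range follows by summing the coefficients of the irradiated scan positions and multiplying by the CTDI free-in-air measured for the protocol actually used.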

  8. Hybrid finite-volume/transported PDF method for the simulation of turbulent reactive flows

    Science.gov (United States)

    Raman, Venkatramanan

    A novel computational scheme is formulated for simulating turbulent reactive flows in complex geometries with detailed chemical kinetics. A Probability Density Function (PDF) based method that handles the scalar transport equation is coupled with an existing Finite Volume (FV) Reynolds-Averaged Navier-Stokes (RANS) flow solver. The PDF formulation leads to closed chemical source terms and facilitates the use of detailed chemical mechanisms without approximations. The particle-based PDF scheme is modified to handle complex geometries and grid structures. Grid-independent particle evolution schemes that scale linearly with the problem size are implemented in the Monte-Carlo PDF solver. A novel algorithm, in situ adaptive tabulation (ISAT), is employed to ensure tractability of complex chemistry involving a multitude of species. Several non-reacting test cases are performed to ascertain the efficiency and accuracy of the method. Simulation results from a turbulent jet-diffusion flame case are compared against experimental data. The effects of the micromixing model, turbulence model and reaction scheme on flame predictions are discussed extensively. Finally, the method is used to analyze the Dow Chlorination Reactor. Detailed kinetics involving 37 species and 158 reactions as well as a reduced form with 16 species and 21 reactions are used. The effect of inlet configuration on reactor behavior and product distribution is analyzed. Plant-scale reactors exhibit quenching phenomena that cannot be reproduced by conventional simulation methods. The FV-PDF method predicts quenching accurately and provides insight into the dynamics of the reactor near extinction. The accuracy of the fractional time-stepping technique is discussed in the context of apparent multiple steady states observed in a non-premixed feed configuration of the chlorination reactor.

  9. Classification Method to Define Synchronization Capability Limits of Line-Start Permanent-Magnet Motor Using Mesh-Based Magnetic Equivalent Circuit Computation Results

    Directory of Open Access Journals (Sweden)

    Bart Wymeersch

    2018-04-01

    Line-start permanent-magnet synchronous motors (LS-PMSM) are energy-efficient synchronous motors that can start asynchronously due to a squirrel cage in the rotor. The drawback with this motor type, however, is the chance of failure to synchronize after start-up. To identify the problem, and the stable operation limits, the synchronization at various parameter combinations is investigated. For accurate knowledge of the operation limits to assure synchronization with the utility grid, an accurate classification of parameter combinations is needed. As many simulations have to be executed for this, a rapid evaluation method is indispensable. Several modeling methods exist to simulate the dynamic behavior in the time domain, and these are discussed in this paper. In order to include spatial factors and magnetic nonlinearities on the one hand, and to restrict the computation time on the other hand, a magnetic equivalent circuit (MEC) modeling method is developed. In order to accelerate numerical convergence, a mesh-based analysis method is applied. The novelty in this paper is the implementation of a support vector machine (SVM) to classify the results of simulations at various parameter combinations into successful or unsuccessful synchronization, in order to define the synchronization capability limits. It is explained how these techniques can benefit the simulation time and the evaluation process. The results of the MEC modeling correspond to those obtained with finite element analysis (FEA), despite the reduced computation time. In addition, simulation results obtained with MEC modeling are experimentally validated.
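    The classification step can be pictured with a minimal scikit-learn sketch: each row is one parameter combination, and the label records whether the simulation synchronized. The two features and the labels below are random placeholders, not outputs of the MEC model.

        # SVM classification of simulated synchronization outcomes (toy data).
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.uniform(0.0, 1.0, size=(200, 2))           # parameter combinations (placeholder)
        y = (X[:, 0] + 0.5 * X[:, 1] < 0.9).astype(int)    # stand-in "synchronized" label

        clf = SVC(kernel="rbf", C=10.0, gamma="scale")     # RBF kernel gives a nonlinear boundary
        clf.fit(X, y)

        # The fitted decision boundary approximates the synchronization capability limit.
        print(clf.predict([[0.2, 0.3], [0.9, 0.9]]))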

  10. Fretting wear simulation of press-fitted shaft with finite element analysis and influence function method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dong Hyong; Kwon, Seok Jin [Korea Railroad Research Institute, Uiwang (Korea, Republic of); Choi, Jae Boong; Kim, Young Jin [Sungkyunkwan University, Suwon (Korea, Republic of)

    2008-01-15

    In this paper the fretting wear of press-fitted specimens subjected to a cyclic bending load was simulated using finite element analysis and a numerical method. The amount of microslip and the contact variables under press-fit and bending load conditions in a press-fitted shaft were analysed by applying the finite element method. With the finite element analysis results, a numerical approach was applied to predict fretting wear based on a modified Archard equation, updating the change of contact pressure caused by local wear with the influence function method. The predicted wear profiles of press-fitted specimens at the contact edge were compared with the experimental results obtained by rotating bending fatigue tests. It is shown that the depth of fretting wear caused by repeated slip between shaft and boss reaches its maximum value at the contact edge. The initial surface profile is continuously changed by the wear at the contact edge, and the corresponding contact variables are then redistributed. The work establishes a basis for numerical simulation of fretting wear on press fits.
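    The wear update itself can be sketched in a few lines: per node and load cycle the Archard increment is dh = k*p*ds, after which the contact pressure is redistributed. In the Python sketch below the redistribution is a crude exponential relaxation with the total load held constant, standing in for the influence-function update of the paper; the wear coefficient, pressure and slip values are illustrative.

        # Incremental fretting-wear sketch based on Archard's equation.
        import numpy as np

        n_nodes = 50
        k_wear = 1.0e-7                          # wear coefficient (illustrative)
        h_ref = 5.0e-3                           # depth scale for the crude pressure relaxation
        p0 = np.full(n_nodes, 100.0)             # initial contact pressure, peaked at the edge
        p0[:5] *= 2.0
        slip = np.linspace(5e-3, 0.0, n_nodes)   # relative slip per cycle, largest at the edge
        total_load = p0.sum()

        p = p0.copy()
        depth = np.zeros(n_nodes)
        for cycle in range(10000):
            depth += k_wear * p * slip           # Archard increment: dh = k * p * ds
            p = p0 * np.exp(-depth / h_ref)      # crude relaxation where material is removed
            p *= total_load / p.sum()            # keep the resultant contact load constant

        print(float(depth.max()), float(depth[0]))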

  11. Fretting wear simulation of press-fitted shaft with finite element analysis and influence function method

    International Nuclear Information System (INIS)

    Lee, Dong Hyong; Kwon, Seok Jin; Choi, Jae Boong; Kim, Young Jin

    2008-01-01

    In this paper the fretting wear of press-fitted specimens subjected to a cyclic bending load was simulated using finite element analysis and a numerical method. The amount of microslip and the contact variables under press-fit and bending load conditions in a press-fitted shaft were analysed by applying the finite element method. With the finite element analysis results, a numerical approach was applied to predict fretting wear based on a modified Archard equation, updating the change of contact pressure caused by local wear with the influence function method. The predicted wear profiles of press-fitted specimens at the contact edge were compared with the experimental results obtained by rotating bending fatigue tests. It is shown that the depth of fretting wear caused by repeated slip between shaft and boss reaches its maximum value at the contact edge. The initial surface profile is continuously changed by the wear at the contact edge, and the corresponding contact variables are then redistributed. The work establishes a basis for numerical simulation of fretting wear on press fits.

  12. A simple method for potential flow simulation of cascades

    Indian Academy of Sciences (India)

    vortex panel method to simulate potential flow in cascades is presented. The cascade ... The fluid loading on the blades, such as the normal force and pitching moment, may ... of such discrete infinite array singularities along the blade surface.

  13. Application of a Perturbation Method for Realistic Dynamic Simulation of Industrial Robots

    International Nuclear Information System (INIS)

    Waiboer, R. R.; Aarts, R. G. K. M.; Jonker, J. B.

    2005-01-01

    This paper presents the application of a perturbation method for the closed-loop dynamic simulation of a rigid-link manipulator with joint friction. In this method the perturbed motion of the manipulator is modelled as a first-order perturbation of the nominal manipulator motion. A non-linear finite element method is used to formulate the dynamic equations of the manipulator mechanism. In a closed-loop simulation the driving torques are generated by the control system. Friction torques at the actuator joints are introduced at the stage of perturbed dynamics. For a mathematical model of the friction torques we implemented the LuGre friction model that accounts for both the sliding and pre-sliding regimes. To illustrate the method, the motion of a six-axis industrial Staeubli robot is simulated. The manipulation task involves transferring a laser spot along a straight line with a trapezoidal velocity profile. The computed trajectory tracking errors are compared with measured values, where in both cases the tip position is computed from the joint angles using a nominal kinematic robot model. It is found that a closed-loop simulation using a non-linear finite element model of this robot is very time-consuming due to the small time step of the discrete controller. Using the perturbation method with the linearised model, a substantial reduction of the computer time is achieved without loss of accuracy.

  14. Large eddy simulations of coal jet flame ignition using the direct quadrature method of moments

    Science.gov (United States)

    Pedel, Julien

    The Direct Quadrature Method of Moments (DQMOM) was implemented in the Large Eddy Simulation (LES) tool ARCHES to model coal particles. LES coupled with DQMOM was first applied to nonreacting particle-laden turbulent jets. Simulation results were compared to experimental data and accurately modeled a wide range of particle behaviors, such as particle jet waviness, spreading, break up, particle clustering and segregation, in different configurations. Simulations also accurately predicted the mean axial velocity along the centerline for both the gas phase and the solid phase, thus demonstrating the validity of the approach to model particles in turbulent flows. LES was then applied to the prediction of pulverized coal flame ignition. The stability of an oxy-coal flame as a function of changing primary gas composition (CO2 and O2) was first investigated. Flame stability was measured using optical measurements of the flame standoff distance in a 40 kW pilot facility. Large Eddy Simulations (LES) of the facility provided valuable insight into the experimentally observed data and the importance of factors such as heterogeneous reactions, radiation or wall temperature. The effects of three parameters on the flame stand-off distance were studied and simulation predictions were compared to experimental data using the data collaboration method. An additional validation study of the ARCHES LES tool was then performed on an air-fired pulverized coal jet flame ignited by a preheated gas flow. The simulation results were compared qualitatively and quantitatively to experimental observations for different inlet stoichiometric ratios. LES simulations were able to capture the various combustion regimes observed during flame ignition and to accurately model the flame stand-off distance sensitivity to the stoichiometric ratio. Gas temperature and coal burnout predictions were also examined and showed good agreement with experimental data. Overall, this research shows that high

  15. A computer method for simulating the decay of radon daughters

    International Nuclear Information System (INIS)

    Hartley, B.M.

    1988-01-01

    The analytical equations representing the decay of a series of radioactive atoms through a number of daughter products are well known. These equations are for an idealized case in which the expectation value of the number of atoms which decay in a certain time can be represented by a smooth curve. The real curve of the total number of disintegrations from a radioactive species consists of a series of Heaviside step functions, with the steps occurring at the times of the disintegrations. The disintegration of radioactive atoms is said to be random, but this random behaviour is such that, for a single species, the ensemble of disintegration times follows a geometric distribution. Numbers which have a geometric distribution can be generated by computer and can be used to simulate the decay of one or more radioactive species. A computer method is described for simulating such decay of radioactive atoms, and this method is applied specifically to the decay of the short half-life daughters of radon-222 and the emission of alpha particles from polonium-218 and polonium-214. Repeating the simulation of the decay a number of times provides a method for investigating the statistical uncertainty inherent in methods for the measurement of exposure to radon daughters. This statistical uncertainty is difficult to investigate analytically, since the time of decay of an atom of polonium-218 is not independent of the time of decay of the subsequent polonium-214. The method is currently being used to investigate the statistical uncertainties of a number of commonly used methods for the counting of alpha particles from radon daughters and the calculation of exposure.
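    A minimal version of such a simulation can be written directly from the decay chain Po-218 -> Pb-214 -> Bi-214 -> Po-214, drawing a random lifetime for each nuclide and recording the alpha-emission times. The Python sketch below uses exponential lifetimes (the continuous-time analogue of the geometric step distribution described above) and approximate textbook half-lives; it is an illustration, not the author's program.

        # Follow individual radon-222 daughter atoms and record alpha-emission times.
        import math
        import random

        HALF_LIFE = {"Po218": 3.05 * 60, "Pb214": 26.8 * 60,
                     "Bi214": 19.9 * 60, "Po214": 164e-6}     # seconds (approximate)
        CHAIN = ["Po218", "Pb214", "Bi214", "Po214"]          # chain effectively stops at Pb-210
        ALPHA = {"Po218", "Po214"}

        def decay_atom(start="Po218"):
            """Return the absolute times of the alpha emissions of one atom."""
            t, alphas = 0.0, []
            for nuclide in CHAIN[CHAIN.index(start):]:
                tau = HALF_LIFE[nuclide] / math.log(2.0)      # mean life
                t += random.expovariate(1.0 / tau)            # random lifetime of this nuclide
                if nuclide in ALPHA:
                    alphas.append(t)
            return alphas

        # Repeat for many atoms to study the counting statistics of an alpha detector.
        counts_in_first_10_min = sum(
            sum(1 for t in decay_atom() if t < 600.0) for _ in range(10000))
        print(counts_in_first_10_min)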

  16. Electron-cloud updated simulation results for the PSR, and recent results for the SNS

    International Nuclear Information System (INIS)

    Pivi, M.; Furman, M.A.

    2002-01-01

    Recent simulation results for the main features of the electron cloud in the storage ring of the Spallation Neutron Source (SNS) at Oak Ridge, and updated results for the Proton Storage Ring (PSR) at Los Alamos, are presented in this paper. A refined model for the secondary emission process, including the so-called true secondary, rediffused and backscattered electrons, has recently been included in the electron-cloud code.

  17. Experiment vs simulation RT WFNDEC 2014 benchmark: CIVA results

    International Nuclear Information System (INIS)

    Tisseur, D.; Costin, M.; Rattoni, B.; Vienne, C.; Vabre, A.; Cattiaux, G.; Sollier, T.

    2015-01-01

    The French Alternative Energies and Atomic Energy Commission (CEA) has developed for years the CIVA software dedicated to the simulation of NDE techniques such as Radiographic Testing (RT). RT modelling is achieved in CIVA using a combination of a deterministic approach based on ray tracing for the transmission beam simulation and a Monte Carlo model for the scattered beam computation. Furthermore, CIVA includes various detector models, in particular common x-ray films and photostimulable phosphor plates. This communication presents the results obtained with the configurations proposed in the World Federation of NDEC 2014 RT modelling benchmark with the RT models implemented in the CIVA software.

  18. Experiment vs simulation RT WFNDEC 2014 benchmark: CIVA results

    Energy Technology Data Exchange (ETDEWEB)

    Tisseur, D., E-mail: david.tisseur@cea.fr; Costin, M.; Rattoni, B.; Vienne, C.; Vabre, A.; Cattiaux, G. [CEA LIST, CEA Saclay 91191 Gif sur Yvette Cedex (France)]; Sollier, T. [Institut de Radioprotection et de Sûreté Nucléaire, B.P.17 92262 Fontenay-Aux-Roses (France)]

    2015-03-31

    The French Alternative Energies and Atomic Energy Commission (CEA) has developed for years the CIVA software dedicated to the simulation of NDE techniques such as Radiographic Testing (RT). RT modelling is achieved in CIVA using a combination of a deterministic approach based on ray tracing for the transmission beam simulation and a Monte Carlo model for the scattered beam computation. Furthermore, CIVA includes various detector models, in particular common x-ray films and photostimulable phosphor plates. This communication presents the results obtained with the configurations proposed in the World Federation of NDEC 2014 RT modelling benchmark with the RT models implemented in the CIVA software.

  19. Study of Monte Carlo Simulation Method for Methane Phase Diagram Prediction using Two Different Potential Models

    KAUST Repository

    Kadoura, Ahmad

    2011-06-06

    Lennard-Jones (L-J) and Buckingham exponential-6 (exp-6) potential models were used to produce isotherms for methane at temperatures below and above the critical temperature. A molecular simulation approach, specifically Monte Carlo simulation, was employed to create these isotherms, working with both the canonical and Gibbs ensembles. Experiments in the canonical ensemble with each model were conducted to estimate pressures at a range of temperatures above the methane critical temperature. Results were collected and compared to experimental data existing in the literature; both models showed close agreement with the experimental data. In parallel, experiments below the critical temperature were run in the Gibbs ensemble using the L-J model only. Upon comparing the results with the experimental ones, a good fit was obtained with small deviations. The work was further developed by adding some statistical studies in order to achieve a better understanding and interpretation of the quantities estimated by the simulation. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be; hence further applications to more complicated systems are considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determination of elemental sulfur solubility conditions helps avoid problems caused by its dissolution in gas production and transportation processes. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate elemental sulfur phase behavior in sour natural gas mixtures.
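    The two pair potentials and the Metropolis acceptance rule at the core of such a canonical-ensemble run can be sketched as follows, in reduced units. This is only an illustration of the functional forms; a full NVT or Gibbs-ensemble code also needs periodic boundaries, tail corrections and, for the Gibbs ensemble, volume and particle exchange moves, and the parameter values are placeholders.

        # Lennard-Jones and Buckingham exp-6 pair potentials plus a Metropolis test.
        import math
        import random

        def lj(r, epsilon=1.0, sigma=1.0):
            x = (sigma / r) ** 6
            return 4.0 * epsilon * (x * x - x)

        def exp6(r, epsilon=1.0, r_min=1.0, alpha=14.0):
            # Buckingham exponential-6 form; alpha controls the repulsion steepness.
            pref = epsilon / (1.0 - 6.0 / alpha)
            return pref * ((6.0 / alpha) * math.exp(alpha * (1.0 - r / r_min))
                           - (r_min / r) ** 6)

        def metropolis_accept(delta_u, beta):
            # Accept a trial move with probability min(1, exp(-beta * dU)).
            return delta_u <= 0.0 or random.random() < math.exp(-beta * delta_u)

        print(lj(1.2), exp6(1.2), metropolis_accept(0.5, beta=1.0))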

  20. Comparative performance of different stochastic methods to simulate drug exposure and variability in a population.

    Science.gov (United States)

    Tam, Vincent H; Kabbara, Samer

    2006-10-01

    Monte Carlo simulations (MCSs) are increasingly being used to predict the pharmacokinetic variability of antimicrobials in a population. However, various MCS approaches may differ in the accuracy of their predictions. We compared the performance of 3 different MCS approaches using a data set with known parameter values and dispersion. Ten concentration-time profiles were randomly generated and used to determine the best-fit parameter estimates. Three MCS methods were subsequently used to simulate the AUC(0-infinity) of the population, using the central tendency and dispersion of the following in the subject sample: 1) K and V; 2) clearance and V; 3) AUC(0-infinity). In each scenario, 10000 subject simulations were performed. Compared to the true AUC(0-infinity) of the population, the mean biases of the three methods were 1) 58.4, 2) 380.7, and 3) 12.5 mg h L(-1), respectively. Our results suggest that the most realistic MCS approach appears to be the one based on the variability of AUC(0-infinity) in the subject sample.
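    For a one-compartment IV-bolus model, where AUC(0-infinity) = Dose/CL = Dose/(K*V), the three parameterizations compared above can be sketched as follows. The log-normal choice and all means and SDs are placeholders, not the study data; the point is only how the sampled quantities enter the AUC calculation.

        # Three ways of simulating AUC(0-inf) for a one-compartment IV-bolus model.
        import numpy as np

        rng = np.random.default_rng(1)
        n, dose = 10000, 1000.0                        # mg (placeholder dose)

        def lognormal(mean, sd, size):
            # Parameterize a log-normal by its arithmetic mean and SD.
            var = sd ** 2
            mu = np.log(mean ** 2 / np.sqrt(var + mean ** 2))
            sigma = np.sqrt(np.log(1.0 + var / mean ** 2))
            return rng.lognormal(mu, sigma, size)

        k = lognormal(0.2, 0.05, n)        # elimination rate constant, 1/h
        v = lognormal(30.0, 8.0, n)        # volume of distribution, L
        cl = lognormal(6.0, 2.0, n)        # clearance, L/h
        auc_obs = lognormal(170.0, 60.0, n)

        auc_1 = dose / (k * v)             # method 1: simulate K and V
        auc_2 = dose / cl                  # method 2: simulate CL (and V)
        auc_3 = auc_obs                    # method 3: simulate AUC directly
        print([round(float(a.mean()), 1) for a in (auc_1, auc_2, auc_3)])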

  1. Three-dimensional Finite Elements Method simulation of Total Ionizing Dose in 22 nm bulk nFinFETs

    Energy Technology Data Exchange (ETDEWEB)

    Chatzikyriakou, Eleni, E-mail: ec3g12@soton.ac.uk; Potter, Kenneth; Redman-White, William; De Groot, C.H.

    2017-02-15

    Highlights: • Simulation of Total Ionizing Dose using the Finite Elements Method. • Carrier generation, transport and trapping in the oxide. • Application in three-dimensional bulk FinFET model of 22 nm node. • Examination of trapped charge in the Shallow Trench Isolation. • Trapped charge dependency of parasitic transistor current. - Abstract: Finite Elements Method simulation of Total Ionizing Dose effects on 22 nm bulk Fin Field Effect Transistor (FinFET) devices using the commercial software Synopsys Sentaurus TCAD is presented. The simulation parameters are extracted by calibrating the charge trapping model to experimental results on 400 nm SiO₂ capacitors irradiated under zero bias. The FinFET device characteristics are calibrated to the Intel 22 nm bulk technology. Irradiation simulations of the transistor performed with all terminals unbiased reveal increased hardness up to a total dose of 1 MRad(SiO₂).

  2. Preliminary Groundwater Simulations To Compare Different Reconstruction Methods of 3-d Alluvial Heterogeneity

    Science.gov (United States)

    Teles, V.; de Marsily, G.; Delay, F.; Perrier, E.

    Alluvial floodplains are extremely heterogeneous aquifers, whose three-dimensional structures are quite difficult to model. In general, when representing such structures, the medium heterogeneity is modeled with classical geostatistical or Boolean methods. Another approach, still in its infancy, is called the genetic method because it simulates the generation of the medium by reproducing sedimentary processes. We developed a new genetic model to obtain a realistic three-dimensional image of alluvial media. It does not simulate the hydrodynamics of sedimentation but uses semi-empirical and statistical rules to roughly reproduce fluvial deposition and erosion. The main processes, either at the stream scale or at the plain scale, are modeled by simple rules applied to "sediment" entities or to conceptual "erosion" entities. The model was applied to a several kilometer long portion of the Aube River floodplain (France) and reproduced the deposition and erosion cycles that occurred during the inferred climate periods (15 000 BP to present). A three-dimensional image of the aquifer was generated by extrapolating the two-dimensional information collected on a cross-section of the floodplain. Unlike geostatistical methods, this extrapolation does not use a statistical spatial analysis of the data, but a genetic analysis, which leads to a more realistic structure. Groundwater flow and transport simulations in the alluvium were carried out with a three-dimensional flow code or simulator (MODFLOW), using different representations of the alluvial reservoir of the Aube River floodplain: first an equivalent homogeneous medium, and then different heterogeneous media built either with the traditional geostatistical approach simulating the permeability distribution, or with the new genetic model presented here simulating sediment facies. In the latter case, each deposited entity of a given lithology was assigned a constant hydraulic conductivity value. Results of these

  3. A variable hard sphere-based phenomenological inelastic collision model for rarefied gas flow simulations by the direct simulation Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Prasanth, P S; Kakkassery, Jose K; Vijayakumar, R, E-mail: y3df07@nitc.ac.in, E-mail: josekkakkassery@nitc.ac.in, E-mail: vijay@nitc.ac.in [Department of Mechanical Engineering, National Institute of Technology Calicut, Kozhikode - 673 601, Kerala (India)

    2012-04-01

    A modified phenomenological model is constructed for the simulation of rarefied flows of polyatomic non-polar gas molecules by the direct simulation Monte Carlo (DSMC) method. This variable hard sphere-based model employs a constant rotational collision number, but all its collisions are inelastic in nature and at the same time the correct macroscopic relaxation rate is maintained. In equilibrium conditions, there is equi-partition of energy between the rotational and translational modes and it satisfies the principle of reciprocity or detailed balancing. The present model is applicable for moderate temperatures at which the molecules are in their vibrational ground state. For verification, the model is applied to the DSMC simulations of the translational and rotational energy distributions in nitrogen gas at equilibrium and the results are compared with their corresponding Maxwellian distributions. Next, the Couette flow, the temperature jump and the Rayleigh flow are simulated; the viscosity and thermal conductivity coefficients of nitrogen are numerically estimated and compared with experimentally measured values. The model is further applied to the simulation of the rotational relaxation of nitrogen through low- and high-Mach-number normal shock waves in a novel way. In all cases, the results are found to be in good agreement with theoretically expected and experimentally observed values. It is concluded that the inelastic collision of polyatomic molecules can be predicted well by employing the constructed variable hard sphere (VHS)-based collision model.

  4. Turbulent flow and temperature noise simulation by a multiparticle Monte Carlo method

    International Nuclear Information System (INIS)

    Hughes, G.; Overton, R.S.

    1980-10-01

    A statistical method of simulating real-time temperature fluctuations in liquid sodium pipe flow, for potential application to the estimation of temperature signals generated by subassembly blockages in LMFBRs is described. The method is based on the empirical characterisation of the flow by turbulence intensity and macroscale, radial velocity correlations and spectral form. These are used to produce realisations of the correlated motion of successive batches of representative 'marker particles' released at discrete time intervals into the flow. Temperature noise is generated by the radial mixing of the particles as they move downstream from an assumed mean temperature profile, where they acquire defined temperatures. By employing multi-particle batches, it is possible to perform radial heat transfer calculations, resulting in axial dissipation of the temperature noise levels. A simulated temperature-time signal is built up by recording the temperature at a given point in the flow as each batch of particles reaches the radial measurement plane. This is an advantage over conventional techniques which can usually only predict time-averaged parameters. (U.K.)

  5. An analytical method to simulate the H I 21-cm visibility signal for intensity mapping experiments

    Science.gov (United States)

    Sarkar, Anjan Kumar; Bharadwaj, Somnath; Marthi, Visweshwar Ram

    2018-01-01

    Simulations play a vital role in testing and validating H I 21-cm power spectrum estimation techniques. Conventional methods use techniques like N-body simulations to simulate the sky signal which is then passed through a model of the instrument. This makes it necessary to simulate the H I distribution in a large cosmological volume, and incorporate both the light-cone effect and the telescope's chromatic response. The computational requirements may be particularly large if one wishes to simulate many realizations of the signal. In this paper, we present an analytical method to simulate the H I visibility signal. This is particularly efficient if one wishes to simulate a large number of realizations of the signal. Our method is based on theoretical predictions of the visibility correlation which incorporate both the light-cone effect and the telescope's chromatic response. We have demonstrated this method by applying it to simulate the H I visibility signal for the upcoming Ooty Wide Field Array Phase I.
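    The core of such an analytic simulation is to draw the visibilities as correlated Gaussian random variables from a model visibility covariance matrix. The Python sketch below shows that single step for a toy exponential covariance on a set of baselines; the covariance model, baseline range and correlation length are placeholders, not the H I visibility correlation of the paper.

        # Gaussian realization of complex visibilities from a model covariance.
        import numpy as np

        rng = np.random.default_rng(2)
        n_baselines = 64
        u = np.sort(rng.uniform(20.0, 500.0, n_baselines))   # baseline lengths (wavelengths)

        # Placeholder covariance: signals at nearby baselines are correlated.
        cov = np.exp(-np.abs(u[:, None] - u[None, :]) / 50.0)
        L = np.linalg.cholesky(cov + 1e-10 * np.eye(n_baselines))  # jitter for safety

        def realization():
            # Independent real and imaginary parts, each with the desired covariance.
            re = L @ rng.standard_normal(n_baselines)
            im = L @ rng.standard_normal(n_baselines)
            return re + 1j * im

        vis = realization()      # one realization; repeat cheaply for many realizations
        print(np.abs(vis[:3]))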

  6. Optimal Protection of Reactor Hall Under Nuclear Fuel Container Drop Using Simulation Methods

    Directory of Open Access Journals (Sweden)

    Králik Juraj

    2014-12-01

    This paper presents the optimal design of damping devices for the reactor hall cover under the impact of a nuclear fuel container drop (container type TK C30). A finite element idealization of the nuclear power plant structure is used in the ANSYS software. A steel pipe damper system is proposed for dissipation of the kinetic energy of the container free fall, and is compared with experimental results. Probabilistic and sensitivity analyses of the damping devices were carried out on the basis of simulation methods in the AntHill program using the Monte Carlo method.

  7. The effect of carrier gas flow rate and source cell temperature on low pressure organic vapor phase deposition simulation by direct simulation Monte Carlo method

    Science.gov (United States)

    Wada, Takao; Ueda, Noriaki

    2013-01-01

    The process of low pressure organic vapor phase deposition (LP-OVPD) controls the growth of amorphous organic thin films, where the source gases (Alq3 molecules, etc.) are introduced into a hot wall reactor via an injection barrel using an inert carrier gas (N2 molecules). It is possible to control well substrate properties such as the dopant concentration, deposition rate, and thickness uniformity of the thin film. In this paper, we present LP-OVPD simulation results using direct simulation Monte Carlo-Neutrals (Particle-PLUS neutral module), which is commercial software adopting the direct simulation Monte Carlo method. By properly estimating the evaporation rate from experimental vaporization enthalpies, the calculated deposition rates on the substrate agree well with the experimental results, which depend on carrier gas flow rate and source cell temperature. PMID:23674843

  8. The effect of carrier gas flow rate and source cell temperature on low pressure organic vapor phase deposition simulation by direct simulation Monte Carlo method

    Science.gov (United States)

    Wada, Takao; Ueda, Noriaki

    2013-04-01

    The process of low pressure organic vapor phase deposition (LP-OVPD) controls the growth of amorphous organic thin films, where the source gases (Alq3 molecules, etc.) are introduced into a hot wall reactor via an injection barrel using an inert carrier gas (N2 molecules). It is possible to control well substrate properties such as the dopant concentration, deposition rate, and thickness uniformity of the thin film. In this paper, we present LP-OVPD simulation results using direct simulation Monte Carlo-Neutrals (Particle-PLUS neutral module), which is commercial software adopting the direct simulation Monte Carlo method. By properly estimating the evaporation rate from experimental vaporization enthalpies, the calculated deposition rates on the substrate agree well with the experimental results, which depend on carrier gas flow rate and source cell temperature.

  9. Comparisons of the simulation results using different codes for ADS spallation target

    International Nuclear Information System (INIS)

    Yu Hongwei; Fan Sheng; Shen Qingbiao; Zhao Zhixiang; Wan Junsheng

    2002-01-01

    Calculations for a standard thick target were made using different codes. The simulation of a thick Pb target, 60 cm long and 20 cm in diameter, bombarded with 800, 1000, 1500 and 2000 MeV proton beams was carried out. The yields and the spectra of the emitted neutrons were studied. The spallation target was simulated with the SNSP, SHIELD, DCM/CEM (Dubna Cascade Model/Cascade Evaporation Model) and LAHET codes. The simulation results were compared with experiments. The comparisons show good agreement between the experiments and the SNSP-simulated leakage neutron yield. The SHIELD-simulated leakage neutron spectra are in good agreement with the LAHET- and DCM/CEM-simulated leakage neutron spectra.

  10. A spectral element method with adaptive segmentation for accurately simulating extracellular electrical stimulation of neurons.

    Science.gov (United States)

    Eiber, Calvin D; Dokos, Socrates; Lovell, Nigel H; Suaning, Gregg J

    2017-05-01

    The capacity to quickly and accurately simulate extracellular stimulation of neurons is essential to the design of next-generation neural prostheses. Existing platforms for simulating neurons are largely based on finite-difference techniques; due to the complex geometries involved, the more powerful spectral or differential quadrature techniques cannot be applied directly. This paper presents a mathematical basis for the application of a spectral element method to the problem of simulating the extracellular stimulation of retinal neurons, which is readily extensible to neural fibers of any kind. The activating function formalism is extended to arbitrary neuron geometries, and a segmentation method to guarantee an appropriate choice of collocation points is presented. Differential quadrature may then be applied to efficiently solve the resulting cable equations. The capacity for this model to simulate action potentials propagating through branching structures and to predict minimum extracellular stimulation thresholds for individual neurons is demonstrated. The presented model is validated against published values for extracellular stimulation threshold and conduction velocity for realistic physiological parameter values. This model suggests that convoluted axon geometries are more readily activated by extracellular stimulation than linear axon geometries, which may have ramifications for the design of neural prostheses.
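    For a reader unfamiliar with the activating-function formalism extended above, its classical form for a straight fibre is easy to compute: the driving term on compartment n is proportional to the second spatial difference of the extracellular potential along the fibre. The Python sketch below evaluates it for a straight axon under a point current source in a homogeneous medium; the electrode current, conductivity and geometry are illustrative, and this is the textbook version, not the paper's generalization to arbitrary neuron geometries.

        # Classical activating function for a straight axon under a point source.
        import numpy as np

        sigma = 0.3          # extracellular conductivity, S/m
        i_stim = -100e-6     # electrode current, A (cathodic)
        z_e = 1e-3           # electrode height above the axon, m
        dx = 50e-6           # compartment length, m

        x = np.arange(-2e-3, 2e-3, dx)               # compartment centres
        r = np.sqrt(x ** 2 + z_e ** 2)
        v_e = i_stim / (4.0 * np.pi * sigma * r)     # point-source extracellular potential

        # Activating function: second spatial difference of V_e along the fibre.
        f = (v_e[:-2] - 2.0 * v_e[1:-1] + v_e[2:]) / dx ** 2
        print(float(f.max()), float(x[1:-1][np.argmax(f)]))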

  11. A Simulation Modeling Approach Method Focused on the Refrigerated Warehouses Using Design of Experiment

    Science.gov (United States)

    Cho, G. S.

    2017-09-01

    For performance optimization of refrigerated warehouses, design parameters are selected based on physical parameters, such as the number of pieces of equipment and aisles and the forklift speeds, for ease of modification. This paper provides a comprehensive framework for the system design of refrigerated warehouses. We propose a modeling approach which aims at simulation optimization so as to meet the required design specifications using Design of Experiments (DOE), and analyze a simulation model using an integrated aspect-oriented modeling approach (i-AOMA). As a result, the suggested method can evaluate the performance of a variety of refrigerated warehouse operations.
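    The DOE step can be pictured as a full-factorial enumeration of design-parameter levels, with one simulation run per treatment. In the Python sketch below the factor names, levels and the throughput function are invented placeholders, not the warehouse model of the paper.

        # Full-factorial design: one (placeholder) simulation run per treatment.
        from itertools import product

        factors = {
            "n_forklifts": [2, 4, 6],
            "n_aisles":    [6, 10],
            "speed_mps":   [1.0, 1.5, 2.0],
        }

        def run_simulation(n_forklifts, n_aisles, speed_mps):
            # Stand-in for the discrete-event simulation of the warehouse.
            return n_forklifts * speed_mps * 100.0 / n_aisles

        results = []
        for levels in product(*factors.values()):
            treatment = dict(zip(factors.keys(), levels))
            results.append((treatment, run_simulation(**treatment)))

        best = max(results, key=lambda r: r[1])
        print(best)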

  12. Selecting a dynamic simulation modeling method for health care delivery research-part 2: report of the ISPOR Dynamic Simulation Modeling Emerging Good Practices Task Force.

    Science.gov (United States)

    Marshall, Deborah A; Burgos-Liz, Lina; IJzerman, Maarten J; Crown, William; Padula, William V; Wong, Peter K; Pasupathy, Kalyan S; Higashi, Mitchell K; Osgood, Nathaniel D

    2015-03-01

    In a previous report, the ISPOR Task Force on Dynamic Simulation Modeling Applications in Health Care Delivery Research Emerging Good Practices introduced the fundamentals of dynamic simulation modeling and identified the types of health care delivery problems for which dynamic simulation modeling can be used more effectively than other modeling methods. The hierarchical relationship between the health care delivery system, providers, patients, and other stakeholders exhibits a level of complexity that ought to be captured using dynamic simulation modeling methods. As a tool to help researchers decide whether dynamic simulation modeling is an appropriate method for modeling the effects of an intervention on a health care system, we presented the System, Interactions, Multilevel, Understanding, Loops, Agents, Time, Emergence (SIMULATE) checklist consisting of eight elements. This report builds on the previous work, systematically comparing each of the three most commonly used dynamic simulation modeling methods-system dynamics, discrete-event simulation, and agent-based modeling. We review criteria for selecting the most suitable method depending on 1) the purpose-type of problem and research questions being investigated, 2) the object-scope of the model, and 3) the method to model the object to achieve the purpose. Finally, we provide guidance for emerging good practices for dynamic simulation modeling in the health sector, covering all aspects, from the engagement of decision makers in the model design through model maintenance and upkeep. We conclude by providing some recommendations about the application of these methods to add value to informed decision making, with an emphasis on stakeholder engagement, starting with the problem definition. Finally, we identify areas in which further methodological development will likely occur given the growing "volume, velocity and variety" and availability of "big data" to provide empirical evidence and techniques

  13. Fish passage through hydropower turbines: Simulating blade strike using the discrete element method

    International Nuclear Information System (INIS)

    Richmond, M C; Romero-Gomez, P

    2014-01-01

    Among the hazardous hydraulic conditions affecting anadromous and resident fish during their passage through hydro-turbines, two common physical processes can lead to injury and mortality: collisions/blade-strike and rapid decompression. Several methods are currently available to evaluate these stressors in installed turbines, e.g. using live fish or autonomous sensor devices, and in reduced-scale physical models, e.g. registering collisions from plastic beads. However, a priori estimates with computational modeling approaches applied early in the process of turbine design can facilitate the development of fish-friendly turbines. In the present study, we evaluated the frequency of blade strike and rapid pressure change by modeling potential fish trajectories with the Discrete Element Method (DEM) applied to fish-like composite particles. In the DEM approach, particles are subjected to realistic hydraulic conditions simulated with computational fluid dynamics (CFD), and particle-structure interactions (representing fish collisions with turbine components such as blades) are explicitly recorded and accounted for in the calculation of particle trajectories. We conducted transient CFD simulations by setting the runner in motion and allowing for unsteady turbulence using detached eddy simulation (DES), as compared to the conventional practice of simulating the system in steady state (which was also done here for comparison). While both schemes yielded comparable bulk hydraulic performance values, transient conditions exhibited an improvement in describing flow temporal and spatial variability. We released streamtraces (in the steady flow solution) and DEM particles (transient solution) at the same locations where sensor fish (SF) were released in previous field studies of the advanced turbine unit. The streamtrace-based results showed a better agreement with SF data than the DEM-based nadir pressures did because the former accounted for the turbulent dispersion at the

  14. Improvement of Simulation Method in Validation of Software of the Coordinate Measuring Systems

    Science.gov (United States)

    Nieciąg, Halina

    2015-10-01

    Software is used in order to accomplish various tasks at each stage of the functioning of modern measuring systems. Before metrological confirmation of measuring equipment, the system has to be validated. This paper discusses a method for conducting validation studies of a software fragment that calculates the values of measurands. Due to the number and nature of the variables affecting the coordinate measurement results, and the complex character and multi-dimensionality of measurands, the study used the Monte Carlo method of numerical simulation. The article presents an attempt to improve the results obtained with classic Monte Carlo tools. The LHS (Latin Hypercube Sampling) algorithm was implemented as an alternative to the simple random sampling scheme of the classic algorithm.
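    The idea behind LHS is simple enough to sketch: each input is split into n equal-probability strata, one draw is taken per stratum, and the strata are permuted independently for each variable so that every row forms one sample point. The Python sketch below, including the toy measurand, is only an illustration and not the implementation used in the paper.

        # Minimal Latin Hypercube Sampling on the unit hypercube.
        import numpy as np

        def lhs(n, d, rng=None):
            rng = rng if rng is not None else np.random.default_rng()
            u = np.empty((n, d))
            for j in range(d):
                perm = rng.permutation(n)                   # shuffle the strata of variable j
                u[:, j] = (perm + rng.uniform(size=n)) / n  # one draw per stratum
            return u                                        # LHS sample on [0, 1)^d

        # Example: propagate two inputs through a toy measurand y = x1 + 0.5 * x2**2.
        samples = lhs(1000, 2)
        x1 = 10.0 + 2.0 * samples[:, 0]                     # map to the input ranges
        x2 = -1.0 + 2.0 * samples[:, 1]
        y = x1 + 0.5 * x2 ** 2
        print(float(y.mean()), float(y.std()))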

  15. Efficient kinetic method for fluid simulation beyond the Navier-Stokes equation.

    Science.gov (United States)

    Zhang, Raoyang; Shan, Xiaowen; Chen, Hudong

    2006-10-01

    We present a further theoretical extension to the kinetic-theory-based formulation of the lattice Boltzmann method of Shan [J. Fluid Mech. 550, 413 (2006)]. In addition to the higher-order projection of the equilibrium distribution function and a sufficiently accurate Gauss-Hermite quadrature in the original formulation, a regularization procedure is introduced in this paper. This procedure ensures a consistent order of accuracy control over the nonequilibrium contributions in the Galerkin sense. Using this formulation, we construct a specific lattice Boltzmann model that accurately incorporates up to third-order hydrodynamic moments. Numerical evidence demonstrates that the extended model overcomes some major defects existing in conventionally known lattice Boltzmann models, so that fluid flows at finite Knudsen number Kn can be more quantitatively simulated. Results from force-driven Poiseuille flow simulations predict the Knudsen's minimum and the asymptotic behavior of flow flux at large Kn.

  16. A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions

    Science.gov (United States)

    Liang, Yihao; Xing, Xiangjun; Li, Yaohang

    2017-06-01

    In this work we present an efficient implementation of Canonical Monte Carlo simulation for Coulomb many body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architectures, and adopts the sequential updating scheme of Metropolis algorithm. It makes no approximation in the computation of energy, and reaches a remarkable 440-fold speedup, compared with the serial implementation on CPU. We further use this method to simulate primitive model electrolytes, and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.

  17. Evaluation of the constant potential method in simulating electric double-layer capacitors

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Zhenxing; Laird, Brian B., E-mail: blaird@ku.edu [Department of Chemistry, University of Kansas, Lawrence, Kansas 66045 (United States); Yang, Yang; Olmsted, David L.; Asta, Mark [Department of Materials Science and Engineering, University of California, Berkeley, California 94720 (United States)

    2014-11-14

    A major challenge in the molecular simulation of electric double layer capacitors (EDLCs) is the choice of an appropriate model for the electrode. Typically, in such simulations the electrode surface is modeled using a uniform fixed charge on each of the electrode atoms, which ignores the electrode response to local charge fluctuations in the electrolyte solution. In this work, we evaluate and compare this Fixed Charge Method (FCM) with the more realistic Constant Potential Method (CPM), [S. K. Reed et al., J. Chem. Phys. 126, 084704 (2007)], in which the electrode charges fluctuate in order to maintain constant electric potential in each electrode. For this comparison, we utilize a simplified LiClO4-acetonitrile/graphite EDLC. At low potential difference (ΔΨ ⩽ 2 V), the two methods yield essentially identical results for ion and solvent density profiles; however, significant differences appear at higher ΔΨ. At ΔΨ ⩾ 4 V, the CPM ion density profiles show significant enhancement (over FCM) of “inner-sphere adsorbed” Li+ ions very close to the electrode surface. The ability of the CPM electrode to respond to local charge fluctuations in the electrolyte is seen to significantly lower the energy (and barrier) for the approach of Li+ ions to the electrode surface.

  18. Evaluation of the constant potential method in simulating electric double-layer capacitors

    International Nuclear Information System (INIS)

    Wang, Zhenxing; Laird, Brian B.; Yang, Yang; Olmsted, David L.; Asta, Mark

    2014-01-01

    A major challenge in the molecular simulation of electric double layer capacitors (EDLCs) is the choice of an appropriate model for the electrode. Typically, in such simulations the electrode surface is modeled using a uniform fixed charge on each of the electrode atoms, which ignores the electrode response to local charge fluctuations in the electrolyte solution. In this work, we evaluate and compare this Fixed Charge Method (FCM) with the more realistic Constant Potential Method (CPM), [S. K. Reed et al., J. Chem. Phys. 126, 084704 (2007)], in which the electrode charges fluctuate in order to maintain constant electric potential in each electrode. For this comparison, we utilize a simplified LiClO4-acetonitrile/graphite EDLC. At low potential difference (ΔΨ ⩽ 2 V), the two methods yield essentially identical results for ion and solvent density profiles; however, significant differences appear at higher ΔΨ. At ΔΨ ⩾ 4 V, the CPM ion density profiles show significant enhancement (over FCM) of “inner-sphere adsorbed” Li+ ions very close to the electrode surface. The ability of the CPM electrode to respond to local charge fluctuations in the electrolyte is seen to significantly lower the energy (and barrier) for the approach of Li+ ions to the electrode surface

  19. Steam generator tube rupture simulation using extended finite element method

    Energy Technology Data Exchange (ETDEWEB)

    Mohanty, Subhasish, E-mail: smohanty@anl.gov; Majumdar, Saurin; Natesan, Ken

    2016-08-15

    Highlights: • Extended finite element method used for modeling the steam generator tube rupture. • Crack propagation is modeled in an arbitrary solution dependent path. • The FE model is used for estimating the rupture pressure of steam generator tubes. • Crack coalescence modeling is also demonstrated. • The method can be used for crack modeling of tubes under severe accident condition. - Abstract: A steam generator (SG) is an important component of any pressurized water reactor. Steam generator tubes represent a primary pressure boundary whose integrity is vital to the safe operation of the reactor. SG tubes may rupture due to propagation of a crack created by mechanisms such as stress corrosion cracking, fatigue, etc. It is thus important to estimate the rupture pressures of cracked tubes for structural integrity evaluation of SGs. The objective of the present paper is to demonstrate the use of extended finite element method capability of commercially available ABAQUS software, to model SG tubes with preexisting flaws and to estimate their rupture pressures. For the purpose, elastic–plastic finite element models were developed for different SG tubes made from Alloy 600 material. The simulation results were compared with experimental results available from the steam generator tube integrity program (SGTIP) sponsored by the United States Nuclear Regulatory Commission (NRC) and conducted at Argonne National Laboratory (ANL). A reasonable correlation was found between extended finite element model results and experimental results.

  20. Steam generator tube rupture simulation using extended finite element method

    International Nuclear Information System (INIS)

    Mohanty, Subhasish; Majumdar, Saurin; Natesan, Ken

    2016-01-01

    Highlights: • Extended finite element method used for modeling the steam generator tube rupture. • Crack propagation is modeled in an arbitrary solution dependent path. • The FE model is used for estimating the rupture pressure of steam generator tubes. • Crack coalescence modeling is also demonstrated. • The method can be used for crack modeling of tubes under severe accident condition. - Abstract: A steam generator (SG) is an important component of any pressurized water reactor. Steam generator tubes represent a primary pressure boundary whose integrity is vital to the safe operation of the reactor. SG tubes may rupture due to propagation of a crack created by mechanisms such as stress corrosion cracking, fatigue, etc. It is thus important to estimate the rupture pressures of cracked tubes for structural integrity evaluation of SGs. The objective of the present paper is to demonstrate the use of extended finite element method capability of commercially available ABAQUS software, to model SG tubes with preexisting flaws and to estimate their rupture pressures. For the purpose, elastic–plastic finite element models were developed for different SG tubes made from Alloy 600 material. The simulation results were compared with experimental results available from the steam generator tube integrity program (SGTIP) sponsored by the United States Nuclear Regulatory Commission (NRC) and conducted at Argonne National Laboratory (ANL). A reasonable correlation was found between extended finite element model results and experimental results.

  1. Limitations in simulator time-based human reliability analysis methods

    International Nuclear Information System (INIS)

    Wreathall, J.

    1989-01-01

    Developments in human reliability analysis (HRA) methods have evolved slowly. Current methods are little changed from those of almost a decade ago, particularly in the use of time-reliability relationships. While these methods were suitable as an interim step, the time (and the need) has come to specify the next evolution of HRA methods. As with any performance-oriented data source, power plant simulator data have no direct connection to HRA models. Errors reported in data are normal deficiencies observed in human performance; failures are events modeled in probabilistic risk assessments (PRAs). Not all errors cause failures; not all failures are caused by errors. Second, the times at which actions are taken provide no measure of the likelihood of failures to act correctly within an accident scenario. Inferences can be made about human reliability, but they must be made with great care. Specific limitations are discussed. Simulator performance data are useful in providing qualitative evidence of the variety of error types and their potential influences on operating systems. More work is required to combine recent developments in the psychology of error with the qualitative data collected at simulators. Until data become openly available, however, such an advance will not be practical

  2. High-order dynamic lattice method for seismic simulation in anisotropic media

    Science.gov (United States)

    Hu, Xiaolin; Jia, Xiaofeng

    2018-03-01

    The discrete particle-based dynamic lattice method (DLM) offers an approach to simulate elastic wave propagation in anisotropic media by calculating the anisotropic micromechanical interactions between particles based on the directions of the bonds that connect them in the lattice. To build such a lattice, the media are discretized into particles. This discretization inevitably leads to numerical dispersion. The basic lattice unit used in the original DLM only includes interactions between the central particle and its nearest neighbours; therefore, it represents the first-order form of a particle lattice. The first-order lattice suffers from numerical dispersion compared with other numerical methods, such as high-order finite-difference methods, in terms of seismic wave simulation. Due to its unique way of discretizing the media, the particle-based DLM no longer solves elastic wave equations; this means that one cannot build a high-order DLM by simply creating a high-order discrete operator to better approximate a partial derivative operator. To build a high-order DLM, we carry out a thorough dispersion analysis of the method and discover that by adding more neighbouring particles into the lattice unit, the DLM yields different orders of spatial accuracy. According to the dispersion analysis, the high-order DLM presented here can be adapted to the spatial accuracy required for seismic wave simulations. For any given spatial accuracy, we can design a corresponding high-order lattice unit to satisfy the accuracy requirement. Numerical tests show that the high-order DLM improves the accuracy of elastic wave simulation in anisotropic media.
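
    The dispersion analysis mentioned above is analogous to the textbook exercise of comparing how well discrete stencils of different order reproduce the exact phase velocity. The short sketch below performs that exercise for second- and fourth-order central differences applied to the 1D wave equation; it is a generic illustration of a dispersion analysis, not the lattice construction used in the DLM paper.

        import numpy as np

        def phase_velocity_ratio(kh, order):
            # Numerical / exact phase velocity for the semi-discrete 1D wave equation
            # u_tt = c^2 u_xx with central differences of the given order in space.
            if order == 2:
                keff2 = 2.0 - 2.0 * np.cos(kh)
            elif order == 4:
                keff2 = (30.0 - 32.0 * np.cos(kh) + 2.0 * np.cos(2.0 * kh)) / 12.0
            else:
                raise ValueError("only orders 2 and 4 are implemented in this sketch")
            return np.sqrt(keff2) / kh

        kh = np.linspace(0.05, np.pi, 200)       # non-dimensional wavenumber k*h
        for order in (2, 4):
            err = np.abs(1.0 - phase_velocity_ratio(kh, order))
            usable = kh[err < 0.01]              # wavenumbers resolved to 1% error
            print(f"order {order}: needs >= {2.0 * np.pi / usable.max():.1f} "
                  "points per wavelength for 1% phase-velocity error")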

  3. Method of transport simulation for electrons between 10eV and 30keV

    International Nuclear Information System (INIS)

    Terrissol, Michel.

    1978-01-01

    A transport simulation of low-energy electrons in matter, using a Monte Carlo method and treating all the interactions of the electrons with atoms, molecules or assemblies of them, is described. Elastic scattering, ionization, excitation, plasmon creation, reorganization following inner-shell ionization, electron-hole pair creation, etc. are simulated individually by sampling confirmed experimental or theoretical cross sections. Atomic and molecular gases, metals such as aluminium, and liquid water have thus been studied. The simulation follows the electrons until their energy reaches the atomic or molecular ionization potential of the irradiated matter. The entire trajectories of the primary electron and of all secondaries set in motion are reproduced exactly. Several applications to multiple scattering, radiobiology, microdosimetry and electron microscopy are presented, and some results are compared directly with experimental ones [fr
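
    The individual sampling of interaction events described above typically reduces to two draws per step: an exponential free path from the total cross section, and a choice of interaction channel weighted by the partial cross sections. The sketch below illustrates those two draws with entirely hypothetical cross-section values; it is not the cross-section set or transport code of the thesis.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical macroscopic cross sections (1/nm) for a few interaction channels
        channels = {"elastic": 0.08, "ionization": 0.03, "excitation": 0.02, "plasmon": 0.01}
        sigma_total = sum(channels.values())
        names = list(channels)
        probs = np.array([channels[name] for name in names]) / sigma_total

        def sample_free_path():
            # Distance to the next interaction: exponential with mean 1/sigma_total
            return -np.log(rng.random()) / sigma_total

        def sample_interaction():
            # Interaction type chosen with probability proportional to its cross section
            return rng.choice(names, p=probs)

        # Follow one electron for a few collisions (energy-loss bookkeeping omitted)
        for _ in range(5):
            print(f"free path {sample_free_path():5.2f} nm -> {sample_interaction()}")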

  4. Assessment of statistical education in Indonesia: Preliminary results and initiation to simulation-based inference

    Science.gov (United States)

    Saputra, K. V. I.; Cahyadi, L.; Sembiring, U. A.

    2018-01-01

    In this paper, we assess our traditional elementary statistics education and also introduce elementary statistics with simulation-based inference. To assess our statistics class, we adapt the well-known CAOS (Comprehensive Assessment of Outcomes in Statistics) test, which serves as an external measure of students' basic statistical literacy and is generally accepted as such a measure. We also introduce a new teaching method in the elementary statistics class. Different from the traditional elementary statistics course, we introduce a simulation-based inference method to conduct hypothesis testing. The literature has shown that this new teaching method works very well in increasing students' understanding of statistics.

  5. Stability and accuracy of 3D neutron transport simulations using the 2D/1D method in MPACT

    International Nuclear Information System (INIS)

    Collins, Benjamin; Stimpson, Shane; Kelley, Blake W.; Young, Mitchell T.H.; Kochunas, Brendan; Graham, Aaron; Larsen, Edward W.; Downar, Thomas; Godfrey, Andrew

    2016-01-01

    A consistent “2D/1D” neutron transport method is derived from the 3D Boltzmann transport equation, to calculate fuel-pin-resolved neutron fluxes for realistic full-core Pressurized Water Reactor (PWR) problems. The 2D/1D method employs the Method of Characteristics to discretize the radial variables and a lower order transport solution to discretize the axial variable. This paper describes the theory of the 2D/1D method and its implementation in the MPACT code, which has become the whole-core deterministic neutron transport solver for the Consortium for Advanced Simulations of Light Water Reactors (CASL) core simulator VERA-CS. Several applications have been performed on both leadership-class and industry-class computing clusters. Results are presented for whole-core solutions of the Watts Bar Nuclear Power Station Unit 1 and compared to both continuous-energy Monte Carlo results and plant data.

  6. Stability and accuracy of 3D neutron transport simulations using the 2D/1D method in MPACT

    Energy Technology Data Exchange (ETDEWEB)

    Collins, Benjamin, E-mail: collinsbs@ornl.gov [Oak Ridge National Laboratory, One Bethel Valley Rd., Oak Ridge, TN 37831 (United States); Stimpson, Shane, E-mail: stimpsonsg@ornl.gov [Oak Ridge National Laboratory, One Bethel Valley Rd., Oak Ridge, TN 37831 (United States); Kelley, Blake W., E-mail: kelleybl@umich.edu [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI 48109 (United States); Young, Mitchell T.H., E-mail: youngmit@umich.edu [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI 48109 (United States); Kochunas, Brendan, E-mail: bkochuna@umich.edu [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI 48109 (United States); Graham, Aaron, E-mail: aarograh@umich.edu [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI 48109 (United States); Larsen, Edward W., E-mail: edlarsen@umich.edu [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI 48109 (United States); Downar, Thomas, E-mail: downar@umich.edu [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI 48109 (United States); Godfrey, Andrew, E-mail: godfreyat@ornl.gov [Oak Ridge National Laboratory, One Bethel Valley Rd., Oak Ridge, TN 37831 (United States)

    2016-12-01

    A consistent “2D/1D” neutron transport method is derived from the 3D Boltzmann transport equation, to calculate fuel-pin-resolved neutron fluxes for realistic full-core Pressurized Water Reactor (PWR) problems. The 2D/1D method employs the Method of Characteristics to discretize the radial variables and a lower order transport solution to discretize the axial variable. This paper describes the theory of the 2D/1D method and its implementation in the MPACT code, which has become the whole-core deterministic neutron transport solver for the Consortium for Advanced Simulations of Light Water Reactors (CASL) core simulator VERA-CS. Several applications have been performed on both leadership-class and industry-class computing clusters. Results are presented for whole-core solutions of the Watts Bar Nuclear Power Station Unit 1 and compared to both continuous-energy Monte Carlo results and plant data.

  7. Screening of groundwater remedial alternatives for brownfield sites: a comprehensive method integrated MCDA with numerical simulation.

    Science.gov (United States)

    Li, Wei; Zhang, Min; Wang, Mingyu; Han, Zhantao; Liu, Jiankai; Chen, Zhezhou; Liu, Bo; Yan, Yan; Liu, Zhu

    2018-06-01

    Brownfield site pollution and remediation is an urgent environmental issue worldwide. The screening and assessment of remedial alternatives is especially complex owing to the multiple criteria involved, covering technique, economy, and policy. To help decision-makers select remedial alternatives efficiently, the criteria framework developed by the U.S. EPA is improved and a comprehensive method that integrates multiple criteria decision analysis (MCDA) with numerical simulation is presented in this paper. The criteria framework is modified and classified into three categories: qualitative, semi-quantitative, and quantitative criteria. The MCDA method AHP-PROMETHEE (analytical hierarchy process-preference ranking organization method for enrichment evaluation) is used to determine the priority ranking of the remedial alternatives, and a solute transport simulation is conducted to assess remedial efficiency. A case study of a brownfield site in Cangzhou, northern China, is presented to demonstrate the screening method. The results show that the systematic method provides a reliable way to quantify the priority of the remedial alternatives.
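
    The AHP part of the AHP-PROMETHEE workflow derives criteria weights from a pairwise comparison matrix via its principal eigenvector and checks a consistency ratio. A minimal sketch of that weighting step is given below; the 3x3 comparison matrix for technique, economy and policy is hypothetical, and the PROMETHEE outranking step is not shown.

        import numpy as np

        # Hypothetical pairwise comparison matrix for three criteria
        # (technique, economy, policy) on Saaty's 1-9 scale.
        A = np.array([[1.0,  3.0, 5.0],
                      [1/3., 1.0, 2.0],
                      [1/5., 1/2., 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)                 # principal eigenvalue
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                                # normalized criteria weights

        n = A.shape[0]
        ci = (eigvals.real[k] - n) / (n - 1)        # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]         # Saaty's random index
        print("weights:", np.round(w, 3), " consistency ratio:", round(ci / ri, 3))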

  8. Interactive knowledge discovery from marketing questionnaire using simulated breeding and inductive learning methods

    Energy Technology Data Exchange (ETDEWEB)

    Terano, Takao [Univ. of Tsukuba, Tokyo (Japan); Ishino, Yoko [Univ. of Tokyo (Japan)

    1996-12-31

    This paper describes a novel method to acquire efficient decision rules from questionnaire data using both simulated breeding and inductive learning techniques. The basic ideas of the method are that simulated breeding is used to extract the effective features from the questionnaire data and that inductive learning is used to acquire simple decision rules from the data. Simulated breeding is one of the Genetic Algorithm (GA) based techniques used to subjectively or interactively evaluate the qualities of offspring generated by genetic operations. In this paper, we show a basic interactive version of the method and two variations: one with semi-automated GA phases and one with a relative evaluation phase via the Analytic Hierarchy Process (AHP). The proposed method has been qualitatively and quantitatively validated by a case study on consumer product questionnaire data.
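
    The simulated-breeding idea can be sketched as a genetic algorithm over binary feature masks in which the fitness evaluation would normally be interactive (a human scores each offspring). In the toy sketch below the interactive judgement is replaced by a placeholder scoring function, and all population sizes and rates are hypothetical; it is not the authors' system.

        import numpy as np

        rng = np.random.default_rng(2)
        n_features, pop_size, n_generations = 12, 20, 15

        def fitness(mask):
            # Placeholder for the interactive (human) evaluation of simulated breeding:
            # here we simply reward small, non-empty feature subsets.
            k = int(mask.sum())
            return 0.0 if k == 0 else 1.0 / k

        pop = rng.integers(0, 2, (pop_size, n_features))
        for _ in range(n_generations):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                cut = rng.integers(1, n_features)                      # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                flip = rng.random(n_features) < 0.05                   # bit-flip mutation
                children.append(np.where(flip, 1 - child, child))
            pop = np.vstack([parents] + children)

        best = pop[np.argmax([fitness(ind) for ind in pop])]
        print("selected feature indices:", np.flatnonzero(best))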

  9. Simulation of Thermal Flow Problems via a Hybrid Immersed Boundary-Lattice Boltzmann Method

    Directory of Open Access Journals (Sweden)

    J. Wu

    2012-01-01

    Full Text Available A hybrid immersed boundary-lattice Boltzmann method (IB-LBM) is presented in this work to simulate thermal flow problems. In the current approach, the flow field is resolved by using our recently developed boundary-condition-enforced IB-LBM (Wu and Shu, 2009). The no-slip boundary condition on the solid boundary is enforced in the simulation. At the same time, to capture the temperature development, the conventional energy equation is solved. To model the effect of the immersed boundary on the temperature field, a heat source term is introduced. Different from previous studies, the heat source term is set as an unknown rather than predetermined. Inspired by the idea of Wu and Shu (2009), the unknown is calculated in such a way that the temperature at the boundary, interpolated from the corrected temperature field, accurately satisfies the thermal boundary condition. In addition, based on the resolved temperature correction, an efficient way to compute the local and average Nusselt numbers is also proposed in this work. Compared with the traditional implementation, no approximation of temperature gradients is required. To validate the present method, numerical simulations of forced convection are carried out. The obtained results show good agreement with data in the literature.

  10. A simulation based engineering method to support HAZOP studies

    DEFF Research Database (Denmark)

    Enemark-Rasmussen, Rasmus; Cameron, David; Angelo, Per Bagge

    2012-01-01

    the conventional HAZOP procedure. The method systematically generates failure scenarios by considering process equipment deviations with pre-defined failure modes. The effect of the failure scenarios is then evaluated using dynamic simulations; in this study, the K-Spice® software was used. The consequences of each failure...

  11. Overview of DOS attacks on wireless sensor networks and experimental results for simulation of interference attacks

    Directory of Open Access Journals (Sweden)

    Željko Gavrić

    2018-01-01

    Full Text Available Wireless sensor networks are now used in various fields. The information transmitted in wireless sensor networks is very sensitive, so security is a very important issue. DOS (denial of service) attacks are a fundamental threat to the functioning of wireless sensor networks. This paper describes some of the most common DOS attacks and potential methods of protection against them. The case study covers one of the most frequent attacks on wireless sensor networks, the interference attack. In the introduction of this paper the authors assume that an interference attack can cause significant obstruction of wireless sensor networks. This assumption is proved in the case study through a simulation scenario and simulation results.
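
    A very reduced version of such an interference case study is to estimate the packet delivery ratio (PDR) under a jammer that is active for a given fraction of the time. The sketch below does this with a toy retry model and hypothetical parameters; it is not the simulator or scenario used by the authors.

        import random

        def delivery_ratio(n_packets, jam_duty_cycle, retries=3, seed=0):
            # Each transmission fails if it coincides with a jamming burst; a packet
            # is lost once the retry budget is exhausted (toy model, made-up numbers).
            rng = random.Random(seed)
            delivered = 0
            for _ in range(n_packets):
                for _attempt in range(retries + 1):
                    if rng.random() >= jam_duty_cycle:     # transmission not jammed
                        delivered += 1
                        break
            return delivered / n_packets

        for duty in (0.0, 0.3, 0.6, 0.9):
            print(f"jamming duty cycle {duty:.1f} -> PDR {delivery_ratio(10000, duty):.3f}")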

  12. The Simulation and Analysis of the Closed Die Hot Forging Process by A Computer Simulation Method

    Directory of Open Access Journals (Sweden)

    Dipakkumar Gohil

    2012-06-01

    Full Text Available The objective of this research work is to study the variation of various parameters such as stress, strain, temperature, force, etc. during the closed die hot forging process. A computer simulation modeling approach has been adopted to transform the theoretical aspects into a computer algorithm used to simulate and analyze the closed die hot forging process. For the purpose of process study, the entire deformation process has been divided into a finite number of steps and the output values have been computed at each deformation step. The results of the simulation have been represented graphically, and suitable corrective measures are recommended if the simulation results do not agree with the theoretical values. This computer simulation approach would significantly improve the productivity and reduce the energy consumption of the overall process for components manufactured by closed die forging, contributing towards efforts to reduce global warming.
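
    The stepwise idea of dividing the deformation into increments and evaluating the process variables at each step can be illustrated with a much simpler open-die upsetting analogue: a power-law flow stress and volume constancy give the stress and the ideal (frictionless) press force at every height increment. The material constants and geometry below are hypothetical, and real closed die forging additionally involves friction, die filling and temperature effects.

        import numpy as np

        # Hypothetical hot-working power-law flow stress: sigma = K * strain^n  (MPa)
        K, n_exp = 150.0, 0.15
        h0, hf = 60.0, 30.0          # initial and final billet height (mm)
        d0 = 40.0                    # initial billet diameter (mm)
        steps = 20

        for h in np.linspace(h0, hf, steps + 1)[1:]:
            strain = np.log(h0 / h)                       # true compressive strain
            sigma = K * strain ** n_exp                   # flow stress (MPa)
            d = d0 * np.sqrt(h0 / h)                      # diameter from volume constancy
            area = np.pi * d**2 / 4.0                     # current contact area (mm^2)
            force_kN = sigma * area / 1000.0              # ideal (frictionless) press force
            print(f"h = {h:5.1f} mm  strain = {strain:4.2f}  "
                  f"sigma = {sigma:6.1f} MPa  force = {force_kN:7.1f} kN")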

  13. Rare event simulation using Monte Carlo methods

    CERN Document Server

    Rubino, Gerardo

    2009-01-01

    In a probabilistic model, a rare event is an event with a very small probability of occurrence. The forecasting of rare events is a formidable task but is important in many areas. Examples include a catastrophic failure in a transport system or in a nuclear power plant, or the failure of an information processing system in a bank or in the communication network of a group of banks, leading to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, i.e. the simulation of the corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...
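
    Importance sampling, one of the techniques covered in the book, can be illustrated with the classic example of estimating a Gaussian tail probability: crude Monte Carlo almost never hits the rare region, whereas sampling from a shifted proposal and reweighting by the likelihood ratio recovers the probability with far fewer samples. The sketch below is a generic textbook example, not taken from the book.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(3)
        a, n = 5.0, 100_000                 # estimate P(X > 5) for X ~ N(0, 1)

        # Crude Monte Carlo: almost no samples fall in the rare region.
        x = rng.standard_normal(n)
        crude = np.mean(x > a)

        # Importance sampling: draw from N(a, 1) and reweight by the likelihood ratio.
        y = rng.normal(a, 1.0, n)
        weights = norm.pdf(y) / norm.pdf(y, loc=a)
        is_est = np.mean((y > a) * weights)

        print(f"exact     {norm.sf(a):.3e}")
        print(f"crude MC  {crude:.3e}")
        print(f"IS        {is_est:.3e}")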

  14. Dynamical electron diffraction simulation for non-orthogonal crystal system by a revised real space method.

    Science.gov (United States)

    Lv, C L; Liu, Q B; Cai, C Y; Huang, J; Zhou, G W; Wang, Y G

    2015-01-01

    In transmission electron microscopy, a revised real space (RRS) method has been confirmed to be a more accurate dynamical electron diffraction simulation method for low-energy electron diffraction than the conventional multislice method (CMS). However, the RRS method could previously only be used to calculate the dynamical electron diffraction of orthogonal crystal systems. In this work, the expression of the RRS method for non-orthogonal crystal systems is derived. Taking Na2Ti3O7 and Si as examples, the correctness of the derived RRS formula for non-orthogonal crystal systems is confirmed by testing the coincidence of the numerical results of both sides of the Schrödinger equation; moreover, the difference between the RRS method and the CMS method for non-orthogonal crystal systems is compared over the accelerating voltage range from 40 to 10 kV. Our results show that the CMS method is almost the same as the RRS method for accelerating voltages above 40 kV. However, when the accelerating voltage is lowered to 20 kV or below, the CMS method introduces significant errors, not only for the higher-order Laue zone diffractions, but also for the zero-order Laue zone. This indicates that the RRS method for non-orthogonal crystal systems is necessary for more accurate dynamical simulation when the accelerating voltage is low. Furthermore, the reason why the differences between the diffraction patterns calculated by the RRS and CMS methods increase as the accelerating voltage decreases is discussed. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  15. Training simulator for nuclear power plant reactor control model and method

    International Nuclear Information System (INIS)

    Czerbuejewski, F.R.

    1975-01-01

    A description is given of a method and system for the real-time dynamic simulation of a nuclear power plant for training purposes, wherein a control console has a plurality of manual and automatic remote control devices for operating simulated control rods and has indicating devices for monitoring the physical operation of a simulated reactor. Digital computer means are connected to the control console to calculate data values for operating the monitoring devices in accordance with the control devices. The simulation of the reactor control rod mechanism is disclosed whereby the digital computer means operates the rod position monitoring devices in a real-time that is a fraction of the computer time steps and simulates the quick response of a control rod remote control lever together with the delayed response upon a change of direction

  16. Simulation study on unfolding methods for diagnostic X-rays and mixed gamma rays

    International Nuclear Information System (INIS)

    Hashimoto, Makoto; Ohtaka, Masahiko; Ara, Kuniaki; Kanno, Ikuo; Imamura, Ryo; Mikami, Kenta; Nomiya, Seiichiro; Onabe, Hideaki

    2009-01-01

    A photon detector operating in current mode that can sense X-ray energy distribution has been reported. This detector consists of a row of several segment detectors. The energy distribution is derived using an unfolding technique. In this paper, comparisons of the unfolding techniques among error reduction, spectrum surveillance, and neural network methods are discussed through simulation studies on the detection of diagnostic X-rays and gamma rays emitted by a mixture of 137Cs and 60Co. For diagnostic X-ray measurement, the spectrum surveillance and neural network methods appeared promising, while the error reduction method yielded poor results. However, in the case of measuring mixtures of gamma rays, the error reduction method was both sufficient and effective. (author)
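
    One simple flavour of iterative unfolding is a multiplicative (ML-EM-like) correction loop that repeatedly adjusts a trial spectrum by the ratio of measured to predicted detector responses. The sketch below illustrates that generic scheme; the response matrix, spectrum and noise level are invented for illustration and do not correspond to the segmented detector or the specific error reduction method of the paper.

        import numpy as np

        rng = np.random.default_rng(4)

        # Hypothetical 4-segment detector response: R[i, j] = response of segment i
        # to photons in energy bin j (columns normalized for this toy example).
        R = np.array([[0.60, 0.25, 0.10, 0.05],
                      [0.25, 0.45, 0.20, 0.10],
                      [0.10, 0.20, 0.45, 0.25],
                      [0.05, 0.10, 0.25, 0.60]])
        true_spectrum = np.array([5.0, 3.0, 1.5, 0.5])
        measured = R @ true_spectrum + rng.normal(0.0, 0.02, 4)   # noisy segment signals

        # Multiplicative iterative unfolding
        phi = np.full(4, measured.sum() / 4.0)                    # flat initial guess
        for _ in range(200):
            predicted = R @ phi
            phi *= (R.T @ (measured / predicted)) / R.sum(axis=0)

        print("true     :", true_spectrum)
        print("unfolded :", np.round(phi, 2))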

  17. 2D Quantum Simulation of MOSFET Using the Non Equilibrium Green's Function Method

    Science.gov (United States)

    Svizhenko, Alexel; Anantram, M. P.; Govindan, T. R.; Yan, Jerry (Technical Monitor)

    2000-01-01

    The objectives summarized in this viewgraph presentation include: (1) the development of a quantum mechanical simulator for ultra-short-channel MOSFET simulation, including theory, physical approximations, and computer code; (2) exploring physics that is not accessible by semiclassical methods; (3) benchmarking of semiclassical and classical methods; and (4) the study of other two-dimensional devices and molecular structures, from discretized Hamiltonians to tight-binding Hamiltonians.

  18. Validation and application of an high-order spectral difference method for flow induced noise simulation

    KAUST Repository

    Parsani, Matteo

    2011-09-01

    The main goal of this paper is to develop an efficient numerical algorithm to compute the radiated far-field noise produced by an unsteady flow field around bodies in arbitrary motion. The method computes the turbulent flow field in the near field using a high-order spectral difference method coupled with a large-eddy simulation approach. The unsteady equations are solved by advancing in time using a second-order backward difference formula scheme. The nonlinear algebraic system arising from the time discretization is solved with the nonlinear lower-upper symmetric Gauss-Seidel algorithm. In the second step, the method calculates the far-field sound pressure based on the acoustic source information provided by the first-step simulation. The method is based on the Ffowcs Williams-Hawkings approach, which provides noise contributions for monopole, dipole and quadrupole acoustic sources. This paper focuses on the validation and assessment of this hybrid approach using different test cases. The test cases used are: a laminar flow over a two-dimensional (2D) open cavity at Re = 1.5 × 10³ and M = 0.15, and a laminar flow past a 2D square cylinder at Re = 200 and M = 0.5. In order to show the application of the numerical method to industrial cases and to assess its capability for sound field simulation, a three-dimensional turbulent flow in a muffler at Re = 4.665 × 10⁴ and M = 0.05 has been chosen as a third test case. The flow results show good agreement with numerical and experimental reference solutions. Comparison of the computed noise results with those of reference solutions also shows that the numerical approach predicts noise accurately. © 2011 IMACS.

  19. Impact of dynamic specimen shape evolution on the atom probe tomography results of doped epitaxial oxide multilayers: Comparison of experiment and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Madaan, Nitesh; Nandasiri, Manjula; Devaraj, Arun, E-mail: arun.devaraj@pnnl.gov [Environmental Molecular Sciences Laboratory, Pacific Northwest National Laboratory, 3335 Innovation Boulevard, Richland, Washington 99354 (United States); Bao, Jie [Energy and Environment Directorate, Pacific Northwest National Laboratory, 902 Battelle Boulevard, Richland, Washington 99354 (United States); Xu, Zhijie [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, 902 Battelle Boulevard, Richland, Washington 99354 (United States); Thevuthasan, Suntharampillai [Environmental Molecular Sciences Laboratory, Pacific Northwest National Laboratory, 3335 Innovation Boulevard, Richland, Washington 99354 (United States); Qatar Environment and Energy Research Institute, Qatar Foundation, PO Box 5825, Doha (Qatar)

    2015-08-31

    The experimental atom probe tomography (APT) results from two different specimen orientations (top-down and sideways) of a high oxygen-ion-conducting Samaria-doped-ceria/Scandia-stabilized-zirconia multilayer thin film solid oxide fuel cell electrolyte were compared with level-set-method-based field evaporation simulations for the same specimen orientations. This experiment-simulation comparison explains the dynamic specimen shape evolution and ion trajectory aberrations that can induce density artifacts in the final reconstruction, leading to inaccurate estimation of interfacial intermixing. This study highlights the importance of comparing experimental results with field evaporation simulations when using APT to study oxide heterostructure interfaces.

  20. Solar Potential Analysis and Integration of the Time-Dependent Simulation Results for Semantic 3D City Models Using Dynamizers

    Science.gov (United States)

    Chaturvedi, K.; Willenborg, B.; Sindram, M.; Kolbe, T. H.

    2017-10-01

    Semantic 3D city models play an important role in solving complex real-world problems and are being adopted by many cities around the world. A wide range of application and simulation scenarios directly benefit from the adoption of international standards such as CityGML. However, most simulations involve properties whose values vary with respect to time, and the current generation of semantic 3D city models does not support time-dependent properties explicitly. In this paper, the details of solar potential simulations operating on the CityGML standard are provided, assessing and estimating solar energy production for the roofs and facades of 3D building objects in different ways. Furthermore, the paper demonstrates how the time-dependent simulation results are better represented inline within 3D city models utilizing the so-called Dynamizer concept. This concept not only allows representing the simulation results in standardized ways, but also delivers a method to enhance static city models with such dynamic property values, making the city models truly dynamic. The Dynamizer concept has been implemented as an Application Domain Extension of the CityGML standard within the OGC Future City Pilot Phase 1. The results are given in this paper.

  1. A fast exact simulation method for a class of Markov jump processes.

    Science.gov (United States)

    Li, Yao; Hu, Lili

    2015-11-14

    A new stochastic simulation algorithm (SSA), named the Hashing-Leaping method (HLM), for exact simulations of a class of Markov jump processes is presented in this paper. The HLM has a conditionally constant computational cost per event, which is independent of the number of exponential clocks in the Markov process. The main idea of the HLM is to repeatedly implement a hash-table-like bucket sort algorithm for all times of occurrence covered by a time step with length τ. This paper serves as an introduction to this new SSA method. We introduce the method, demonstrate its implementation, analyze its properties, and compare its performance with three other commonly used SSA methods in four examples. Our performance tests and CPU operation statistics show certain advantages of the HLM for large-scale problems.
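
    For reference, the baseline against which methods such as the HLM are compared is the direct-method Gillespie SSA, in which one samples the time to the next event from the total rate and then picks which event fired. The sketch below applies it to a toy birth-death process; it is a generic SSA illustration, not the Hashing-Leaping algorithm itself.

        import numpy as np

        rng = np.random.default_rng(5)

        def gillespie_birth_death(x0, birth, death, t_end):
            # Direct-method SSA for a birth-death process: 0 -> X (rate birth),
            # X -> 0 (rate death * x).  Returns event times and copy numbers.
            t, x = 0.0, x0
            times, states = [t], [x]
            while t < t_end:
                rates = np.array([birth, death * x])
                total = rates.sum()
                if total == 0.0:
                    break
                t += rng.exponential(1.0 / total)          # time to the next event
                if rng.random() < rates[0] / total:        # pick which event fired
                    x += 1
                else:
                    x -= 1
                times.append(t)
                states.append(x)
            return np.array(times), np.array(states)

        times, states = gillespie_birth_death(x0=0, birth=10.0, death=0.5, t_end=50.0)
        print("mean copy number (t > 10):", states[times > 10].mean())   # ~ birth/death = 20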

  2. Simulation of the acoustic wave propagation using a meshless method

    Directory of Open Access Journals (Sweden)

    Bajko J.

    2017-01-01

    Full Text Available This paper presents numerical simulations of the acoustic wave propagation phenomenon modelled via the Linearized Euler equations. A meshless method based on collocation of the strong form of the equation system is adopted. Moreover, the weighted least squares method is used for local approximation of derivatives, as well as for a stabilization technique in the form of spatial filtering. The accuracy and robustness of the method are examined on several benchmark problems.
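
    The weighted-least-squares approximation of derivatives at a collocation point can be sketched in one dimension: fit a local quadratic to scattered neighbouring nodes with a distance-based weight and read the derivative off the linear coefficient. The Gaussian weight, node distribution and test function below are illustrative choices, not the scheme of the paper.

        import numpy as np

        rng = np.random.default_rng(6)

        def wls_derivative(x0, x_nodes, f_nodes, h):
            # Fit f(x) ~ a0 + a1*(x - x0) + a2*(x - x0)^2 by weighted least squares
            # with a Gaussian weight of width h; a1 approximates f'(x0).
            dx = x_nodes - x0
            w = np.exp(-(dx / h) ** 2)
            A = np.column_stack([np.ones_like(dx), dx, dx**2])
            AtW = A.T * w                              # A^T W
            coeffs = np.linalg.solve(AtW @ A, AtW @ f_nodes)
            return coeffs[1]

        x_nodes = rng.uniform(0.0, 1.0, 30)            # scattered (meshless) nodes
        f_nodes = np.sin(2.0 * np.pi * x_nodes)
        x0 = 0.37
        approx = wls_derivative(x0, x_nodes, f_nodes, h=0.1)
        exact = 2.0 * np.pi * np.cos(2.0 * np.pi * x0)
        print(f"WLS derivative {approx:.4f}  exact {exact:.4f}")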

  3. Application of Monte Carlo method in forward simulation of azimuthal gamma imaging while drilling

    International Nuclear Information System (INIS)

    Yuan Chao; Zhou Cancan; Zhang Feng; Chen Zhi

    2014-01-01

    Monte Carlo simulation is one of the most important numerical simulation methods in nuclear logging. Formation models can be conveniently built with the MCNP code, which provides a simple and effective approach for fundamental studies of nuclear logging. The Monte Carlo method is employed to set up formation models under logging-while-drilling conditions, and the characteristics of azimuthal gamma imaging are simulated. The results show that the azimuthal gamma imaging exhibits a sinusoidal curve feature. The imaging can be used to accurately calculate the relative dip angle of the borehole and the thickness of the radioactive formation. A larger relative dip angle of the borehole and a thicker radioactive formation lead to a larger height of the sinusoidal curve in the imaging. The borehole size has no effect on the calculation of the relative dip angle, but largely affects the determination of formation thickness. The standoff of the logging tool has a great influence on the calculation of the relative dip angle and formation thickness. If the gamma ray counts meet the demands of counting statistics in nuclear logging, the effect of borehole fluid on the imaging can be ignored. (authors)
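
    The statement that the azimuthal image traces a sinusoid suggests a simple processing step: fit offset + amplitude*sin(phi + phase) to the sector counts by linear least squares and read off the amplitude and phase, from which dip-related quantities can then be derived for a given tool geometry. The sketch below performs this fit on synthetic counts; the numbers are invented and the geometric conversion to dip angle and thickness is omitted.

        import numpy as np

        rng = np.random.default_rng(7)

        # Synthetic azimuthal gamma counts at 16 tool-face sectors (hypothetical values)
        phi = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
        counts = 120.0 + 35.0 * np.sin(phi + 0.8) + rng.normal(0.0, 4.0, phi.size)

        # Fit counts ~ c0 + c1*sin(phi) + c2*cos(phi) by linear least squares
        G = np.column_stack([np.ones_like(phi), np.sin(phi), np.cos(phi)])
        c0, c1, c2 = np.linalg.lstsq(G, counts, rcond=None)[0]

        amplitude = np.hypot(c1, c2)
        phase = np.arctan2(c2, c1)           # phase of the fitted sinusoid
        print(f"offset {c0:.1f}  amplitude {amplitude:.1f}  phase {np.degrees(phase):.1f} deg")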

  4. Diffusion approximation-based simulation of stochastic ion channels: which method to use?

    Directory of Open Access Journals (Sweden)

    Danilo ePezo

    2014-11-01

    Full Text Available To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie’s method for Markov Chains (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of high channel numbers. Many recent works aim to speed up simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties – such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Dangerfield et al., 2012; Linaro et al., 2011; Huang et al., 2013a; Orio and Soudry, 2012; Schmandt and Galán, 2012; Goldwyn et al., 2011; Güler, 2013), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: the original Hodgkin and Huxley model, a model with faster sodium channels, and a multi-compartmental model inspired by granular cells. We conclude that for low channel numbers (usually below 1000 per simulated compartment) one should use MC – which is both the most accurate and the fastest method. For higher channel numbers, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modelling may be the best method for detailed multicompartment neuron models – in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels.

  5. Diffusion approximation-based simulation of stochastic ion channels: which method to use?

    Science.gov (United States)

    Pezo, Danilo; Soudry, Daniel; Orio, Patricio

    2014-01-01

    To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chains (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of a high number of channels. Many recent works aim to speed simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties—such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Goldwyn et al., 2011; Linaro et al., 2011; Dangerfield et al., 2012; Orio and Soudry, 2012; Schmandt and Galán, 2012; Güler, 2013; Huang et al., 2013a), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: (1) the original Hodgkin and Huxley model, (2) a model with faster sodium channels, and (3) a multi-compartmental model inspired in granular cells. We conclude that for a low number of channels (usually below 1000 per simulated compartment) one should use MC—which is the fastest and most accurate method. For a high number of channels, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modeling may be the best method for detailed multicompartment neuron models—in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels. PMID:25404914
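
    The comparison discussed in the two records above is, at its core, between an exact (or near-exact) Markov description of the channel population and a Langevin diffusion approximation of the open fraction. The sketch below contrasts a discrete-time binomial update of a two-state channel population with a bounded Langevin update; the rates and channel count are hypothetical, it is not one of the reviewed implementations, and the crude truncation to [0,1] is exactly the kind of numerical choice on which those implementations differ.

        import numpy as np

        rng = np.random.default_rng(8)
        alpha, beta = 2.0, 1.0        # opening / closing rates (1/ms), hypothetical
        N, dt, T = 500, 0.01, 50.0    # channels, time step (ms), duration (ms)
        steps = int(T / dt)

        # Discrete-time binomial (Markov chain) update of the number of open channels
        n_open = 0
        for _ in range(steps):
            opened = rng.binomial(N - n_open, 1.0 - np.exp(-alpha * dt))
            closed = rng.binomial(n_open, 1.0 - np.exp(-beta * dt))
            n_open += opened - closed

        # Langevin diffusion approximation for the open fraction x
        x = 0.0
        for _ in range(steps):
            drift = alpha * (1.0 - x) - beta * x
            diff = np.sqrt(max(alpha * (1.0 - x) + beta * x, 0.0) / N)
            x += drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
            x = min(max(x, 0.0), 1.0)                 # crude bounding to [0, 1]

        print(f"steady-state open fraction: exact {alpha/(alpha+beta):.2f}, "
              f"MC {n_open/N:.2f}, DA {x:.2f}")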

  6. Assessing methane emission estimation methods based on atmospheric measurements from oil and gas production using LES simulations

    Science.gov (United States)

    Saide, P. E.; Steinhoff, D.; Kosovic, B.; Weil, J.; Smith, N.; Blewitt, D.; Delle Monache, L.

    2017-12-01

    There are a wide variety of methods that have been proposed and used to estimate methane emissions from oil and gas production by using air composition and meteorology observations in conjunction with dispersion models. Although there has been some verification of these methodologies using controlled releases and concurrent atmospheric measurements, it is difficult to assess the accuracy of these methods for more realistic scenarios considering factors such as terrain, emissions from multiple components within a well pad, and time-varying emissions representative of typical operations. In this work we use a large-eddy simulation (LES) to generate controlled but realistic synthetic observations, which can be used to test multiple source term estimation methods, also known as an Observing System Simulation Experiment (OSSE). The LES is based on idealized simulations of the Weather Research & Forecasting (WRF) model at 10 m horizontal grid-spacing covering an 8 km by 7 km domain with terrain representative of a region located in the Barnett shale. Well pads are setup in the domain following a realistic distribution and emissions are prescribed every second for the components of each well pad (e.g., chemical injection pump, pneumatics, compressor, tanks, and dehydrator) using a simulator driven by oil and gas production volume, composition and realistic operational conditions. The system is setup to allow assessments under different scenarios such as normal operations, during liquids unloading events, or during other prescribed operational upset events. Methane and meteorology model output are sampled following the specifications of the emission estimation methodologies and considering typical instrument uncertainties, resulting in realistic observations (see Figure 1). We will show the evaluation of several emission estimation methods including the EPA Other Test Method 33A and estimates using the EPA AERMOD regulatory model. We will also show source estimation

  7. Direct numerical simulation of turbulent pipe flow using the lattice Boltzmann method

    Science.gov (United States)

    Peng, Cheng; Geneva, Nicholas; Guo, Zhaoli; Wang, Lian-Ping

    2018-03-01

    In this paper, we present a first direct numerical simulation (DNS) of a turbulent pipe flow using the mesoscopic lattice Boltzmann method (LBM) on both a D3Q19 lattice grid and a D3Q27 lattice grid. DNS of turbulent pipe flows using LBM has never been reported previously, perhaps due to inaccuracy and numerical stability associated with the previous implementations of LBM in the presence of a curved solid surface. In fact, it was even speculated that the D3Q19 lattice might be inappropriate as a DNS tool for turbulent pipe flows. In this paper, we show, through careful implementation, accurate turbulent statistics can be obtained using both D3Q19 and D3Q27 lattice grids. In the simulation with D3Q19 lattice, a few problems related to the numerical stability of the simulation are exposed. Discussions and solutions for those problems are provided. The simulation with D3Q27 lattice, on the other hand, is found to be more stable than its D3Q19 counterpart. The resulting turbulent flow statistics at a friction Reynolds number of Reτ = 180 are compared systematically with both published experimental and other DNS results based on solving the Navier-Stokes equations. The comparisons cover the mean-flow profile, the r.m.s. velocity and vorticity profiles, the mean and r.m.s. pressure profiles, the velocity skewness and flatness, and spatial correlations and energy spectra of velocity and vorticity. Overall, we conclude that both D3Q19 and D3Q27 simulations yield accurate turbulent flow statistics. The use of the D3Q27 lattice is shown to suppress the weak secondary flow pattern in the mean flow due to numerical artifacts.

  8. A Finite Element Method for Simulation of Compressible Cavitating Flows

    Science.gov (United States)

    Shams, Ehsan; Yang, Fan; Zhang, Yu; Sahni, Onkar; Shephard, Mark; Oberai, Assad

    2016-11-01

    This work focuses on a novel approach for finite element simulations of multi-phase flows which involve an evolving interface with phase change. Modeling problems such as cavitation requires addressing multiple challenges, including compressibility of the vapor phase and interface physics governed by mass, momentum and energy fluxes. We have developed a mathematically consistent and robust computational approach to address these problems. We use stabilized finite element methods on unstructured meshes to solve the compressible Navier-Stokes equations. An Arbitrary Lagrangian-Eulerian formulation is used to handle the interface motion. Our method uses a mesh adaptation strategy to preserve the quality of the volumetric mesh, while the interface mesh moves along with the interface. The interface jump conditions are accurately represented using a discontinuous Galerkin method on the conservation laws. Condensation and evaporation rates at the interface are modeled thermodynamically to determine the interface velocity. We will present initial results on bubble cavitation and the behavior of an attached cavitation zone in a separated boundary layer. We acknowledge the support from Army Research Office (ARO) under ARO Grant W911NF-14-1-0301.

  9. Optimal Spatial Subdivision method for improving geometry navigation performance in Monte Carlo particle transport simulation

    International Nuclear Information System (INIS)

    Chen, Zhenping; Song, Jing; Zheng, Huaqing; Wu, Bin; Hu, Liqin

    2015-01-01

    Highlights: • The subdivision combines both advantages of uniform and non-uniform schemes. • The grid models were proved to be more efficient than traditional CSG models. • Monte Carlo simulation performance was enhanced by Optimal Spatial Subdivision. • Efficiency gains were obtained for realistic whole reactor core models. - Abstract: Geometry navigation is one of the key aspects of dominating Monte Carlo particle transport simulation performance for large-scale whole reactor models. In such cases, spatial subdivision is an easily-established and high-potential method to improve the run-time performance. In this study, a dedicated method, named Optimal Spatial Subdivision, is proposed for generating numerically optimal spatial grid models, which are demonstrated to be more efficient for geometry navigation than traditional Constructive Solid Geometry (CSG) models. The method uses a recursive subdivision algorithm to subdivide a CSG model into non-overlapping grids, which are labeled as totally or partially occupied, or not occupied at all, by CSG objects. The most important point is that, at each stage of subdivision, a conception of quality factor based on a cost estimation function is derived to evaluate the qualities of the subdivision schemes. Only the scheme with optimal quality factor will be chosen as the final subdivision strategy for generating the grid model. Eventually, the model built with the optimal quality factor will be efficient for Monte Carlo particle transport simulation. The method has been implemented and integrated into the Super Monte Carlo program SuperMC developed by FDS Team. Testing cases were used to highlight the performance gains that could be achieved. Results showed that Monte Carlo simulation runtime could be reduced significantly when using the new method, even as cases reached whole reactor core model sizes
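
    The core of such a cost-guided subdivision can be sketched as a recursive split that is accepted only when an estimated traversal cost drops by enough, which plays the role of the quality factor. The 1D intervals and cost model below are stand-ins for illustration and are not the SuperMC implementation.

        from dataclasses import dataclass

        @dataclass
        class Node:
            lo: float
            hi: float
            objects: list          # 1D object intervals (lo, hi) overlapping this node
            children: tuple = ()

        def cost(node):
            # Stand-in cost estimate: traversal cost grows with the number of objects
            # a particle track would have to test inside this region.
            return (node.hi - node.lo) * (1 + len(node.objects))

        def subdivide(node, max_depth=6, min_gain=0.05):
            if max_depth == 0 or len(node.objects) <= 1:
                return node
            mid = 0.5 * (node.lo + node.hi)
            left = Node(node.lo, mid, [o for o in node.objects if o[0] < mid])
            right = Node(mid, node.hi, [o for o in node.objects if o[1] > mid])
            # Keep the split only if the estimated cost drops enough ("quality factor")
            if cost(left) + cost(right) < (1.0 - min_gain) * cost(node):
                node.children = (subdivide(left, max_depth - 1),
                                 subdivide(right, max_depth - 1))
            return node

        objects = [(0.1, 0.2), (0.15, 0.3), (0.7, 0.75), (0.8, 0.95)]
        root = subdivide(Node(0.0, 1.0, objects))
        print("root split:", bool(root.children))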

  10. Simulation of three-dimensional, time-dependent, incompressible flows by a finite element method

    International Nuclear Information System (INIS)

    Chan, S.T.; Gresho, P.M.; Lee, R.L.; Upson, C.D.

    1981-01-01

    A finite element model has been developed for simulating the dynamics of problems encountered in atmospheric pollution and safety assessment studies. The model is based on solving the set of three-dimensional, time-dependent, conservation equations governing incompressible flows. Spatial discretization is performed via a modified Galerkin finite element method, and time integration is carried out via the forward Euler method (pressure is computed implicitly, however). Several cost-effective techniques (including subcycling, mass lumping, and reduced Gauss-Legendre quadrature) which have been implemented are discussed. Numerical results are presented to demonstrate the applicability of the model

  11. Use of simulated data sets to evaluate the fidelity of metagenomic processing methods

    Energy Technology Data Exchange (ETDEWEB)

    Mavromatis, Konstantinos; Ivanova, Natalia; Barry, Kerri; Shapiro, Harris; Goltsman, Eugene; McHardy, Alice C.; Rigoutsos, Isidore; Salamov, Asaf; Korzeniewski, Frank; Land, Miriam; Lapidus, Alla; Grigoriev, Igor; Richardson, Paul; Hugenholtz, Philip; Kyrpides, Nikos C.

    2006-12-01

    Metagenomics is a rapidly emerging field of research for studying microbial communities. To evaluate methods presently used to process metagenomic sequences, we constructed three simulated data sets of varying complexity by combining sequencing reads randomly selected from 113 isolate genomes. These data sets were designed to model real metagenomes in terms of complexity and phylogenetic composition. We assembled sampled reads using three commonly used genome assemblers (Phrap, Arachne and JAZZ), and predicted genes using two popular gene finding pipelines (fgenesb and CRITICA/GLIMMER). The phylogenetic origins of the assembled contigs were predicted using one sequence similarity-based (BLAST hit distribution) and two sequence composition-based (PhyloPythia, oligonucleotide frequencies) binning methods. We explored the effects of the simulated community structure and method combinations on the fidelity of each processing step by comparison to the corresponding isolate genomes. The simulated data sets are available online to facilitate standardized benchmarking of tools for metagenomic analysis.
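
    The construction step of such benchmark data sets amounts to drawing reads from isolate genomes in proportion to chosen abundances while recording the true origin of every read. The sketch below does this with toy random "genomes"; the sequences, abundances and read length are placeholders, not the 113-genome data sets of the paper.

        import random

        random.seed(9)

        # Toy "isolate genomes" and relative abundances (hypothetical stand-ins)
        genomes = {
            "org_A": "".join(random.choice("ACGT") for _ in range(5000)),
            "org_B": "".join(random.choice("ACGT") for _ in range(8000)),
            "org_C": "".join(random.choice("ACGT") for _ in range(3000)),
        }
        abundances = {"org_A": 0.6, "org_B": 0.3, "org_C": 0.1}

        def sample_reads(n_reads, read_len=100):
            # Draw reads from genomes in proportion to their abundance, recording
            # the true origin so downstream binning/assembly can be benchmarked.
            names = list(genomes)
            weights = [abundances[name] for name in names]
            reads = []
            for i in range(n_reads):
                org = random.choices(names, weights=weights)[0]
                start = random.randrange(len(genomes[org]) - read_len)
                reads.append((f"read_{i}|{org}", genomes[org][start:start + read_len]))
            return reads

        simulated = sample_reads(1000)
        print(simulated[0][0], simulated[0][1][:40], "...")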

  12. Improvement of Simulation Method in Validation of Software of the Coordinate Measuring Systems

    Directory of Open Access Journals (Sweden)

    Nieciąg Halina

    2015-10-01

    Full Text Available Software is used to accomplish various tasks at each stage of the functioning of modern measuring systems. Before metrological confirmation of measuring equipment, the system has to be validated. This paper discusses a method for conducting validation studies of a fragment of software used to calculate the values of measurands. Due to the number and nature of the variables affecting the coordinate measurement results and the complex, multi-dimensional character of the measurands, the study used the Monte Carlo method of numerical simulation. The article presents an attempt at improving the results obtained with classic Monte Carlo tools: the LHS (Latin Hypercube Sampling) algorithm was implemented as an alternative to the simple sampling scheme of the classic algorithm.
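
    The difference between simple random sampling and Latin Hypercube Sampling can be sketched by propagating both sample designs through a placeholder measurand model: LHS stratifies each input dimension so that the marginal distributions are covered evenly at the same sample size. The model function and sample sizes below are illustrative assumptions, not the coordinate-measurement software under validation.

        import numpy as np

        rng = np.random.default_rng(10)

        def latin_hypercube(n_samples, n_dims):
            # One point per stratum in every dimension, with independent permutations
            u = np.empty((n_samples, n_dims))
            for j in range(n_dims):
                strata = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
                u[:, j] = rng.permutation(strata)
            return u

        def measurand(u):
            # Placeholder model of a measurand as a function of standardized inputs
            x = -1.0 + 2.0 * u                   # map U(0,1) inputs to U(-1,1)
            return x[:, 0] ** 2 + 0.5 * x[:, 1] * x[:, 2]

        n, d = 200, 3
        for label, u in (("simple random  ", rng.random((n, d))),
                         ("Latin hypercube", latin_hypercube(n, d))):
            y = measurand(u)
            print(f"{label}: estimated mean of the measurand = {y.mean():.4f} "
                  "(exact value 1/3)")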

  13. Methods uncovering usability issues in medication-related alerting functions: results from a systematic review.

    Science.gov (United States)

    Marcilly, Romaric; Vasseur, Francis; Ammenwerth, Elske; Beuscart-Zephir, Marie-Catherine

    2014-01-01

    This paper aims at listing the methods used to evaluate the usability of medication-related alerting functions and at determining what types of usability issues those methods can detect. A sub-analysis of data from this systematic review was performed. The methods applied in the included papers were collected. The included papers were then sorted into four types of evaluation: "expert evaluation", "user-testing/simulation", "on site observation" and "impact studies". The types of usability issues (usability flaws, usage problems and negative outcomes) uncovered by those evaluations were analyzed. The results show that a large set of methods is used. The largest proportion of papers uses "on site observation" evaluation; this is the only evaluation type for which every kind of usability flaw, usage problem and outcome is detected. It is somewhat surprising that, in a usability systematic review, most of the included papers use a method that is not often presented as a usability method. The results are discussed with respect to the opportunity of feeding usability information, collected after the implementation of the technology, back into the design process, i.e. before implementation.

  14. Numerical simulation of bubble growth and departure during flow boiling period by lattice Boltzmann method

    International Nuclear Information System (INIS)

    Sun, Tao; Li, Weizhong; Yang, Shuai

    2013-01-01

    Highlights: • The bubble departure diameter is proportional to g^−0.425 in quiescent fluid. • The bubble release frequency is proportional to g^0.678 in quiescent fluid. • The simulation result supports the transient micro-convection model. • The bubble departure diameter has an exponential relation with inlet velocity. • The bubble release frequency has a linear relation with inlet velocity. -- Abstract: Nucleate boiling flows on a horizontal plate are studied in this paper by a hybrid lattice Boltzmann method, where both quiescent and slowly flowing ambient conditions are considered. The process of a single bubble growing on and departing from the superheated wall is simulated. The simulation result supports the transient micro-convection model. The bubble departure diameter and the release frequency are investigated from the simulation results. It is found that the bubble departure diameter and the release frequency are proportional to g^−0.425 and g^0.678, respectively, in quiescent fluid, where g is the gravitational acceleration. Nucleate boiling in a slowly flowing ambient is also calculated, taking forced convection into consideration. It is shown that, in slowly flowing fluid, the bubble departure diameter and the release frequency have an exponential and a linear relationship with inlet velocity, respectively

  15. A novel frequent probability pattern mining algorithm based on circuit simulation method in uncertain biological networks

    Science.gov (United States)

    2014-01-01

    Background: Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, due to inevitable experimental error and noisy data, biological network data represented with a probability model better reflect authenticity and biological significance; therefore, it is more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible world model and has a relatively high computational complexity. Methods: In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, a partition-based efficient search is applied to non-tree-like subgraph mining, where the probability of occurrence in random networks is small. Then, an algorithm for probability isomorphism based on circuit simulation is proposed. The probability isomorphism combines the analysis of circuit topology structure with the related physical properties of voltage in order to evaluate the probability isomorphism between probability subgraphs. Circuit-simulation-based probability isomorphism avoids using the traditional possible world model. Finally, based on the algorithm of probability subgraph isomorphism, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. Results: The experimental results on data sets of Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method can efficiently discover the frequent probability subgraphs. The discovered subgraphs in our study contain all probability motifs reported in the experiments published in other related papers. Conclusions: The algorithm of probability graph isomorphism

  16. Wave fields simulation in difficult terrain using numerical grid method; Hyoko henka no aru chiiki deno suchi koshi wo mochiita hado simulation

    Energy Technology Data Exchange (ETDEWEB)

    Jung, W; Ogawa, T [Yokohama National University, Yokohama (Japan); Tamagawa, T; Matsuoka, T [Japan Petroleum Exploration Corp., Tokyo (Japan)

    1997-10-22

    This paper describes how a high-accuracy simulation of seismic exploration can be performed using the numerical grid method. When a finite-difference wave field simulation is applied to an area under seismic exploration, the question arises of how to treat boundaries of the velocity structure, including the ground surface. Simply fitting a grid to a continuously varying boundary degrades the accuracy of the simulation. The finite-difference calculation on a numerical grid solves this problem by mapping the region of interest onto a rectangular region through a change of variables, which allows the boundary condition to be imposed more accurately. The wave field simulation was carried out on a simple two-layer inclined structure and a two-layer undulating structure. It was found that, when the numerical grid method is not used, the amplitudes of the direct and reflected waves are disturbed, and the reflected-wave amplitudes are more scattered than those obtained with the numerical grid method. 7 refs., 10 figs.
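
    The following Python sketch illustrates the variable-conversion idea described above: a rectangular computational grid is stretched column by column so that its upper grid line follows an assumed surface elevation h(x). The elevation profile, grid sizes, and depth are made-up values for illustration; they are not the velocity structures simulated in the paper.

```python
# Minimal sketch of the variable-conversion idea behind the numerical grid
# method: a physical domain whose upper boundary follows a varying surface
# elevation h(x) is mapped onto a rectangular computational domain, so the
# free-surface boundary condition can be imposed along a grid line.
# The elevation profile h(x) below is an assumed example.

import numpy as np

def surface_elevation(x):
    """Assumed topography: a gentle hill on an otherwise flat surface."""
    return 0.2 * np.exp(-((x - 0.5) ** 2) / 0.02)

def physical_grid(nx=41, nz=21, depth=1.0):
    """Map the rectangular computational grid (xi, eta) in [0,1]^2 to
    physical coordinates (x, z), stretching each column so that eta = 1
    coincides with the ground surface z = h(x)."""
    xi = np.linspace(0.0, 1.0, nx)
    eta = np.linspace(0.0, 1.0, nz)
    XI, ETA = np.meshgrid(xi, eta, indexing="ij")
    X = XI
    # eta = 0 -> bottom of the model (z = -depth), eta = 1 -> surface z = h(x)
    Z = -depth + ETA * (surface_elevation(XI) + depth)
    return X, Z

if __name__ == "__main__":
    X, Z = physical_grid()
    print("grid shape:", X.shape)
    print("top grid line follows h(x):",
          np.allclose(Z[:, -1], surface_elevation(X[:, -1])))
```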

  17. A Three-Dimensional, Immersed Boundary, Finite Volume Method for the Simulation of Incompressible Heat Transfer Flows around Complex Geometries

    Directory of Open Access Journals (Sweden)

    Hassan Badreddine

    2017-01-01

    Full Text Available The current work focuses on the development and application of a new finite volume immersed boundary method (IBM) to simulate three-dimensional fluid flows and heat transfer around complex geometries. First, the discretization of the governing equations based on the second-order finite volume method on a Cartesian, structured, staggered grid is outlined, followed by a description of the modifications which have to be applied to the discretized system once a body is immersed into the grid. To validate the new approach, the heat conduction equation with a source term is solved inside a cavity with an immersed body. The approach is then tested for a natural convection flow in a square cavity with and without a circular cylinder for different Rayleigh numbers. The results computed with the present approach compare very well with the benchmark solutions. As a next step in the validation procedure, the method is tested for Direct Numerical Simulation (DNS) of a turbulent flow around a surface-mounted matrix of cubes. The results computed with the present method compare very well with Laser Doppler Anemometry (LDA) measurements of the same case, showing that the method can be used for scale-resolving simulations of turbulence as well.
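
    The sketch below illustrates the basic immersed-boundary idea behind the heat conduction validation case: the Cartesian cells covered by an immersed circular body are marked and held at the body temperature while a simple Jacobi iteration relaxes the temperature field elsewhere. This is a minimal direct-forcing illustration with assumed geometry, temperatures, and tolerances; it is not the second-order finite volume scheme of the paper.

```python
# Minimal immersed-boundary sketch for a heat conduction test: solve a steady
# 2D Laplace problem on a Cartesian grid, mark the cells covered by an
# immersed circular body, and impose the body temperature there at every
# iteration (a simple direct-forcing treatment).  Geometry, temperatures and
# tolerances are assumed values.

import numpy as np

n = 64                     # grid cells per direction
T_wall, T_body = 0.0, 1.0  # cavity wall and immersed body temperatures
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
inside_body = (X - 0.5) ** 2 + (Y - 0.5) ** 2 < 0.2 ** 2  # immersed circle

T = np.full((n, n), T_wall)
for _ in range(5000):                       # Jacobi iterations
    T_new = T.copy()
    T_new[1:-1, 1:-1] = 0.25 * (T[2:, 1:-1] + T[:-2, 1:-1]
                                + T[1:-1, 2:] + T[1:-1, :-2])
    T_new[inside_body] = T_body             # immersed-boundary forcing
    T_new[0, :] = T_new[-1, :] = T_wall     # cavity walls (Dirichlet)
    T_new[:, 0] = T_new[:, -1] = T_wall
    if np.max(np.abs(T_new - T)) < 1e-6:
        T = T_new
        break
    T = T_new

print("max temperature outside the body:", T[~inside_body].max())
```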

  18. Numerical simulations of microwave heating of liquids: enhancements using Krylov subspace methods

    Science.gov (United States)

    Lollchund, M. R.; Dookhitram, K.; Sunhaloo, M. S.; Boojhawon, R.

    2013-04-01

    In this paper, we compare the performances of three iterative solvers for large sparse linear systems arising in the numerical computations of incompressible Navier-Stokes (NS) equations. These equations are employed mainly in the simulation of microwave heating of liquids. The emphasis of this work is on the application of Krylov projection techniques such as Generalized Minimal Residual (GMRES) to solve the Pressure Poisson Equations that result from discretisation of the NS equations. The performance of the GMRES method is compared with the traditional Gauss-Seidel (GS) and point successive over relaxation (PSOR) techniques through their application to simulate the dynamics of water housed inside a vertical cylindrical vessel which is subjected to microwave radiation. It is found that as the mesh size increases, GMRES gives the fastest convergence rate in terms of computational times and number of iterations.

  19. Numerical simulations of microwave heating of liquids: enhancements using Krylov subspace methods

    International Nuclear Information System (INIS)

    Lollchund, M R; Dookhitram, K; Sunhaloo, M S; Boojhawon, R

    2013-01-01

    In this paper, we compare the performances of three iterative solvers for large sparse linear systems arising in the numerical computations of incompressible Navier-Stokes (NS) equations. These equations are employed mainly in the simulation of microwave heating of liquids. The emphasis of this work is on the application of Krylov projection techniques such as Generalized Minimal Residual (GMRES) to solve the Pressure Poisson Equations that result from discretisation of the NS equations. The performance of the GMRES method is compared with the traditional Gauss-Seidel (GS) and point successive over relaxation (PSOR) techniques through their application to simulate the dynamics of water housed inside a vertical cylindrical vessel which is subjected to microwave radiation. It is found that as the mesh size increases, GMRES gives the fastest convergence rate in terms of computational times and number of iterations.
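
    The solver comparison described above can be reproduced qualitatively on a model problem. The sketch below builds a standard 5-point 2D Poisson matrix (a stand-in for the pressure Poisson equation), solves it with SciPy's GMRES, and with a hand-written Gauss-Seidel sweep; the problem size, right-hand side, and tolerances are assumed values rather than those used in the paper.

```python
# Generic comparison of GMRES and Gauss-Seidel on a model 2D Poisson problem
# (a stand-in for the pressure Poisson equation).

import time
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson_2d(n):
    """Standard 5-point 2D Poisson matrix on an n x n interior grid."""
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    I = sp.identity(n)
    return (sp.kron(I, T) + sp.kron(T, I)).tocsr()

def gauss_seidel(A, b, tol=1e-6, max_iter=5000):
    """Plain Gauss-Seidel sweeps on a dense copy of A (illustration only)."""
    A = A.toarray()
    x = np.zeros_like(b)
    for k in range(max_iter):
        for i in range(A.shape[0]):
            sigma = A[i] @ x - A[i, i] * x[i]
            x[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
            return x, k + 1
    return x, max_iter

n = 16
A = poisson_2d(n)
b = np.ones(A.shape[0])

t0 = time.time()
x_gmres, info = spla.gmres(A, b)
t_gmres = time.time() - t0

t0 = time.time()
x_gs, iters = gauss_seidel(A, b)
t_gs = time.time() - t0

print(f"GMRES:        info={info}, time={t_gmres:.3f} s")
print(f"Gauss-Seidel: sweeps={iters}, time={t_gs:.3f} s")
```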

  20. Numerical simulation of bubble deformation in magnetic fluids by finite volume method

    International Nuclear Information System (INIS)

    Yamasaki, Haruhiko; Yamaguchi, Hiroshi

    2017-01-01

    Bubble deformation in magnetic fluids under a magnetic field is investigated numerically by an interface capturing method. The numerical method consists of a coupled level-set and VOF (Volume of Fluid) method, combined with a conservative CIP (Constrained Interpolation Profile) method with a self-correcting procedure. In the present study, which considers the actual physical properties of the magnetic fluid, bubble deformation under a given uniform magnetic field is analyzed, accounting for the internal magnetic field passing through the interface between the magnetic gaseous and liquid phases. The numerical results explain the mechanism of bubble deformation in the presence of the applied magnetic field. - Highlights: • A magnetic field analysis is developed to simulate the bubble dynamics in a magnetic fluid with a two-phase interface. • The elongation of the bubble increases with increasing magnetic flux intensity owing to the strong magnetic normal force. • The proposed technique explains the bubble dynamics, taking into account the continuity of the magnetic flux density.
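
    As a generic illustration of the interface-capturing ingredients mentioned above (not the coupled level-set/VOF/CIP scheme of the paper), the sketch below builds a signed-distance level-set field for a circular bubble and evaluates the interface curvature kappa = div(grad(phi)/|grad(phi)|) with finite differences; near the interface the computed curvature should approach 1/R. Domain size and bubble radius are assumed values.

```python
# Generic interface-capturing sketch: signed-distance level-set field for a
# circular bubble and its interface curvature computed by finite differences.
# Domain size and bubble radius are assumed values.

import numpy as np

n = 128
L = 1.0
x = np.linspace(0.0, L, n)
X, Y = np.meshgrid(x, x, indexing="ij")
h = x[1] - x[0]

# Signed distance to a circular bubble of radius R centred in the domain
R = 0.2
phi = np.sqrt((X - 0.5 * L) ** 2 + (Y - 0.5 * L) ** 2) - R

# Interface curvature kappa = div( grad(phi) / |grad(phi)| )
phix, phiy = np.gradient(phi, h, edge_order=2)
norm = np.sqrt(phix ** 2 + phiy ** 2) + 1e-12
nx_x, _ = np.gradient(phix / norm, h, edge_order=2)
_, ny_y = np.gradient(phiy / norm, h, edge_order=2)
kappa = nx_x + ny_y

# Near the interface (phi ~ 0) the curvature should approach 1/R for a circle
near_interface = np.abs(phi) < h
print("mean curvature near the interface:", kappa[near_interface].mean())
print("expected 1/R:", 1.0 / R)
```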