Directory of Open Access Journals (Sweden)
Kaushikbhai C. Parmar
2017-04-01
Full Text Available Simulation gives different results when different methods are used for the same model. Autodesk Moldflow Simulation software provides two different facilities for creating the mold for simulation of the injection molding process: the mold can be created inside Moldflow, or it can be imported as a CAD file. The aim of this paper is to study the differences in simulation results, such as mold temperature, part temperature, deflection in different directions, simulation time and coolant temperature, between these two methods.
A method for data handling numerical results in parallel OpenFOAM simulations
International Nuclear Information System (INIS)
Anton, Alin (Faculty of Automatic Control and Computing, Politehnica University of Timişoara, 2nd Vasile Pârvan Ave., 300223, TM Timişoara, Romania, alin.anton@cs.upt.ro (Romania)); Muntean, Sebastian (Center for Advanced Research in Engineering Science, Romanian Academy – Timişoara Branch, 24th Mihai Viteazu Ave., 300221, TM Timişoara (Romania))
2015-01-01
Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 GB of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.
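The region-of-interest recovery idea above can be illustrated with a toy filter that keeps only cells inside a user-configured box. All names and values here are invented for illustration; the paper's actual mechanism (replaying interprocessor traffic) is not reproduced.

```python
# Toy region-of-interest (ROI) filter: keep only field values whose cell
# centres fall inside a user-configured axis-aligned box. Illustrative
# only -- not the traffic-replay method of the paper.

def in_box(point, lo, hi):
    """True if point lies inside the box [lo, hi] in every coordinate."""
    return all(l <= c <= h for c, l, h in zip(point, lo, hi))

def extract_roi(centres, values, lo, hi):
    """Return (centre, value) pairs for cells inside the ROI box."""
    return [(c, v) for c, v in zip(centres, values) if in_box(c, lo, hi)]

# Three hypothetical cell centres with one scalar value each
centres = [(0.1, 0.1, 0.0), (0.5, 0.5, 0.0), (0.9, 0.9, 0.0)]
values = [1.0, 2.0, 3.0]

roi = extract_roi(centres, values, lo=(0.0, 0.0, 0.0), hi=(0.6, 0.6, 0.1))
# Only the first two cells lie inside the box
```

Storing only the ROI subset instead of the full field is what produces the space savings on large meshes.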
A method for data handling numerical results in parallel OpenFOAM simulations
Energy Technology Data Exchange (ETDEWEB)
Anton, Alin [Faculty of Automatic Control and Computing, Politehnica University of Timişoara, 2nd Vasile Pârvan Ave., 300223, TM Timişoara, Romania, alin.anton@cs.upt.ro (Romania); Muntean, Sebastian [Center for Advanced Research in Engineering Science, Romanian Academy – Timişoara Branch, 24th Mihai Viteazu Ave., 300221, TM Timişoara (Romania)
2015-12-31
Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 GB of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.
International Nuclear Information System (INIS)
BEEBE-WANG, J.; LUCCIO, A.U.; D'IMPERIO, N.; MACHIDA, S.
2002-01-01
Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problems, the most effective way of investigating its effects is by computer simulation. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are reviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed.
Energy Technology Data Exchange (ETDEWEB)
BEEBE-WANG, J.; LUCCIO, A.U.; D'IMPERIO, N.; MACHIDA, S.
2002-06-03
Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problems, the most effective way of investigating its effects is by computer simulation. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are reviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed.
Comparison of multiple-criteria decision-making methods - results of simulation study
Directory of Open Access Journals (Sweden)
Michał Adamczak
2016-12-01
Full Text Available Background: Today, both researchers and practitioners have many methods for supporting the decision-making process. Due to the conditions in which supply chains function, the most interesting are multi-criteria methods. The use of sophisticated methods for supporting decisions requires parameterization and calculations that are often complex. So is it efficient to use sophisticated methods? Methods: The authors compared two popular multi-criteria decision-making methods: the Weighted Sum Model (WSM) and the Analytic Hierarchy Process (AHP). A simulation study reflects these two decision-making methods. Input data for this study was a set of criteria weights and the value of each alternative in terms of each criterion. Results: The iGrafx Process for Six Sigma simulation software recreated how both multiple-criteria decision-making methods (WSM and AHP) function. The result of the simulation was a numerical value defining the preference of each of the alternatives according to the WSM and AHP methods. The alternative producing a result of higher numerical value was considered preferred, according to the selected method. In the analysis of the results, the relationship between the values of the parameters and the difference in the results presented by both methods was investigated. Statistical methods, including hypothesis testing, were used for this purpose. Conclusions: The simulation study findings prove that the results obtained with the two multiple-criteria decision-making methods are very similar. Differences occurred more frequently with lower-value parameters from the "value of each alternative" group and higher-value parameters from the "weight of criteria" group.
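The Weighted Sum Model compared in the study above can be sketched in a few lines. The criteria weights and alternative values below are hypothetical, not the study's simulated input data; AHP differs in that the weights and values are derived from pairwise comparison matrices rather than given directly.

```python
# Weighted Sum Model (WSM): preference score of an alternative is the
# weighted sum of its per-criterion values. Weights and values here are
# made up for illustration.

def wsm_score(weights, values):
    """Sum of criterion weight times the alternative's value."""
    return sum(w * v for w, v in zip(weights, values))

weights = [0.5, 0.3, 0.2]   # criteria weights, summing to 1
alt_a = [0.8, 0.6, 0.4]     # alternative A's value on each criterion
alt_b = [0.6, 0.9, 0.5]     # alternative B's value on each criterion

score_a = wsm_score(weights, alt_a)  # 0.5*0.8 + 0.3*0.6 + 0.2*0.4 = 0.66
score_b = wsm_score(weights, alt_b)  # 0.5*0.6 + 0.3*0.9 + 0.2*0.5 = 0.67
preferred = "A" if score_a > score_b else "B"
```

In the study's terms, the simulation repeats this comparison over many randomized weight/value sets and checks how often WSM and AHP disagree on `preferred`.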
Experimental Results and Numerical Simulation of the Target RCS using Gaussian Beam Summation Method
Directory of Open Access Journals (Sweden)
Ghanmi Helmi
2018-05-01
Full Text Available This paper presents a numerical and experimental study of the Radar Cross Section (RCS) of radar targets using the Gaussian Beam Summation (GBS) method. The GBS method has several advantages over ray methods, mainly regarding the caustic problem. To evaluate the performance of the chosen method, we started the analysis of the RCS using Gaussian Beam Summation (GBS) and Gaussian Beam Launching (GBL), the asymptotic models Physical Optics (PO) and the Geometrical Theory of Diffraction (GTD), and the rigorous Method of Moments (MoM). Then, we showed the experimental validation of the numerical results using measurements executed in the anechoic chamber of Lab-STICC at ENSTA Bretagne. The numerical and experimental results of the RCS are studied and given as a function of various parameters: polarization type, target size, number of Gaussian beams and Gaussian beam width.
International Nuclear Information System (INIS)
Suzuki, Yoshio; Kawakami, Yoshiaki; Nakajima, Norihiro
2017-01-01
The method to estimate errors included in observational data and the method to compare numerical results with observational results are investigated toward the verification and validation (V&V) of a seismic simulation. For the method to estimate errors, 144 publications from the past 5 years (2010 to 2014) in the structural engineering and earthquake engineering fields, where descriptions of acceleration data are frequent, are surveyed. As a result, it is found that processes to remove components regarded as errors from observational data are used in about 30% of those publications. Errors are caused by the resolution, the linearity, the temperature coefficient for sensitivity, the temperature coefficient for zero shift, the transverse sensitivity, the seismometer properties, aliasing, and so on. Those processes can be exploited to estimate errors individually. For the method to compare numerical results with observational results, public materials of the ASME V&V Symposium 2012-2015, their references, and the above 144 publications are surveyed. As a result, it is found that six methods have been mainly proposed in existing research. Evaluating those methods against nine items, the advantages and disadvantages of each are set out. No method is yet well established, so it is necessary to employ the existing methods while compensating for their disadvantages and/or to search for a novel method. (author)
International Nuclear Information System (INIS)
Boonekamp, Piet G.M.
2006-01-01
Starting from the conditions for a successful implementation of saving options, a general framework was developed to investigate possible interaction effects in sets of energy policy measures. Interaction here means the influence of one measure on the energy saving effect of another measure. The method delivers a matrix for all combinations of measures, with each cell containing qualitative information on the strength and type of interaction: overlapping, reinforcing, or independent of each other. Results are presented for the set of policy measures on household energy efficiency in the Netherlands for 1990-2003. The second part concerns a quantitative analysis of the interaction effects between three major measures: a regulatory energy tax, investment subsidies and regulation of gas use for space heating. Using a detailed bottom-up model, household energy use in the period 1990-2000 was simulated with and without these measures. The results indicate that combinations of two or three policy measures yield 13-30% less effect than the sum of the effects of the separate measures.
GEM simulation methods development
International Nuclear Information System (INIS)
Tikhonov, V.; Veenhof, R.
2002-01-01
A review of methods used in the simulation of processes in gas electron multipliers (GEMs) and in the accurate calculation of detector characteristics is presented. Detector characteristics such as effective gas gain, transparency, charge collection and losses have been calculated and optimized for a number of GEM geometries and compared with experiment. A method and a new special program for calculating detector macro-characteristics, such as the signal response in a real detector readout structure and the spatial and time resolution of the detector, have been developed and used for detector optimization. A detailed treatment of signal induction on the readout electrodes and of the electronics characteristics is included in the new program. A method for the simulation of charging-up effects in GEM detectors is described. All methods show good agreement with experiment.
Directory of Open Access Journals (Sweden)
P. V. Kriksin
2017-01-01
Full Text Available The article presents the results of the development of new methods aimed at a more accurate interval estimate of the experimental values of the voltages that occur on substation grounding devices and in control-cable circuits when lightning strikes a lightning rod; this estimate made it possible to increase the accuracy of the results of the study of lightning interference by 28%. The more accurate interval estimate was achieved by developing a measurement model that takes into account, along with the measured values, the different measurement errors, and that includes special processing of the measurement results. As a result, the interval containing the true value of the sought voltage is determined with a confidence of 95%. The methods can be applied to the IK-1 and IKP-1 measurement complexes, consisting of an aperiodic pulse generator and a high-frequency pulse generator, respectively, together with selective voltmeters. To evaluate the effectiveness of the developed methods, a series of experimental voltage assessments of the grounding devices of ten active high-voltage substations was carried out in accordance with both the developed methods and traditional techniques. The evaluation results confirmed that the true values of voltage can lie within a wide range, which ought to be considered in the technical diagnostics of substation lightning protection, both when analyzing measurement results and when developing measures to reduce the effects of lightning. Also, a comparative analysis of measurements made in accordance with the developed methods and with traditional techniques demonstrated that the true value of the sought voltage may exceed the measured value by 28% on average, which ought to be considered in the further analysis of the lightning protection parameters at the facility and in the development of corrective actions. The developed methods have been
Methods of channeling simulation
International Nuclear Information System (INIS)
Barrett, J.H.
1989-06-01
Many computer simulation programs have been used to interpret experiments almost since the first channeling measurements were made. Certain aspects of these programs are important to how accurately they simulate ions in crystals; among these are the manner in which the structure of the crystal is incorporated, how any quantity of interest is computed, what ion-atom potential is used, how deflections are computed from the potential, the incorporation of thermal vibrations of the lattice atoms, correlations of thermal vibrations, and the form of the stopping power. Other aspects of the programs are included to improve speed; among these are table lookup, importance sampling, and the multiparameter method. It is desirable for programs to facilitate incorporation of special features of interest in particular situations; examples are relaxations and enhanced vibrations of surface atoms, easy substitution of an alternate potential for comparison, change of row directions from layer to layer in strained-layer lattices, and different vibration amplitudes for substitutional solute or impurity atoms. Ways of implementing all of these aspects and features, and their consequences, are discussed. 30 refs., 3 figs
Energy Technology Data Exchange (ETDEWEB)
Cester, Francesco; Deitenbeck, Helmuth; Kuentzel, Matthias; Scheuer, Josef; Voggenberger, Thomas
2015-04-15
The overall objective of the project is to develop a general simulation environment for the program systems used in reactor safety analysis. The simulation environment provides methods for graphical modeling and evaluation of results for the simulation models. The terms graphical modeling and evaluation of results summarize computerized pre- and postprocessing methods for the simulation models, which can assist the user in the execution of the simulation steps. The methods comprise CAD (Computer Aided Design) based input tools, interactive user interfaces for the execution of the simulation, and the graphical representation and visualization of the simulation results. A particular focus was set on the requirements of the system code ATHLET. A CAD tool was developed that allows the specification of the 3D geometry of the plant components and its discretization with a simulation grid. The system provides interfaces to generate the input data of the codes and to export the data for the visualization software. The CAD system was applied to the modeling of a cooling circuit and the reactor pressure vessel of a PWR. For the modeling of complex systems with many components, a general-purpose graphical network editor was adapted and expanded. The editor is able to represent networks with complex topology graphically by suitable building blocks. The network editor has been enhanced and adapted to the modeling of balance-of-plant and thermal fluid systems in ATHLET. For the visual display of the simulation results in the local context of the 3D geometry and the simulation grid, the open source program ParaView is applied, which is widely used for 3D visualization of field data, offering multiple options for displaying and analyzing the data. New methods were developed that allow the necessary conversion of the results of the reactor safety codes and the data of the CAD models. The transformed data may then be imported into ParaView and visualized. The
Reddy, M Rami; Erion, Mark D
2009-12-01
Molecular dynamics (MD) simulations in conjunction with a thermodynamic perturbation approach were used to calculate relative solvation free energies of five pairs of small molecules, namely: (1) methanol to ethane, (2) acetone to acetamide, (3) phenol to benzene, (4) 1,1,1-trichloroethane to ethane, and (5) phenylalanine to isoleucine. Two studies were performed to evaluate the dependence of the convergence of these calculations on MD simulation length and starting configuration. In the first study, each transformation started from the same well-equilibrated configuration and the simulation length was varied from 230 to 2,540 ps. The results indicated that for transformations involving small structural changes, a simulation length of 860 ps is sufficient to obtain satisfactory convergence. In contrast, transformations involving relatively large structural changes, such as phenylalanine to isoleucine, require a significantly longer simulation length (>2,540 ps) to obtain satisfactory convergence. In the second study, the transformation was completed starting from three different configurations, using in each case 860 ps of MD simulation. The results from this study suggest that performing one long simulation may be better than averaging the results of three shorter simulations started from different configurations.
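The thermodynamic perturbation approach rests on the Zwanzig relation, ΔA = -kT ln⟨exp(-ΔU/kT)⟩₀, where ΔU = U₁ - U₀ is sampled on configurations of state 0. A minimal sketch of this estimator on synthetic Gaussian energy differences (all numbers invented; this is not the authors' MD protocol):

```python
import math
import random

# Zwanzig free-energy perturbation (FEP) estimator applied to synthetic
# samples of dU = U1 - U0 (units: kcal/mol, values made up).

def fep_delta_a(delta_u, kT):
    """Free-energy difference from samples of U1 - U0 on state-0 configs."""
    avg = sum(math.exp(-du / kT) for du in delta_u) / len(delta_u)
    return -kT * math.log(avg)

random.seed(0)
kT = 0.596                                        # kcal/mol near 300 K
samples = [random.gauss(1.0, 0.5) for _ in range(10_000)]  # synthetic dU
dA = fep_delta_a(samples, kT)
# For Gaussian dU, theory gives dA = mean - var/(2*kT), here about 0.79
```

The convergence issue the abstract discusses shows up directly here: the exponential average is dominated by rare low-energy samples, so larger structural changes (wider dU distributions) need far more sampling, i.e. longer simulations.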
DEFF Research Database (Denmark)
Deroba, J. J.; Butterworth, D. S.; Methot, R. D.
2015-01-01
The World Conference on Stock Assessment Methods (July 2013) included a workshop on testing assessment methods through simulations. The exercise was made up of two steps applied to datasets from 14 representative fish stocks from around the world. Step 1 involved applying stock assessments to dat...
Energy Technology Data Exchange (ETDEWEB)
Lee, Sang Heon; Lee, Eun Joong; Kim, Chan Kyu; Cho, Gyu Seong [Dept. of Nuclear and Quantum Engineering, KAIST, Daejeon (Korea, Republic of); Hur, Sam Suk [Sam Yong Inspection Engineering Co., Ltd., Seoul (Korea, Republic of)
2016-11-15
Radiation generating devices must be properly shielded for their safe application. Although institutes such as the US National Bureau of Standards and the National Council on Radiation Protection and Measurements (NCRP) have provided guidelines for shielding X-ray tubes of various purposes, industry practitioners tend to rely on the 'Half Value Layer (HVL) method', which requires relatively simple calculation compared to those guidelines. The method is based on the fact that the intensity, dose, and air kerma of a narrow beam incident on a shielding wall decrease by about half as the beam penetrates one HVL thickness of the wall. One can adjust the shielding wall thickness to satisfy outside-wall dose or air kerma requirements with this calculation. However, this may not always hold, because 1) the strict definition of HVL deals only with intensity, and 2) the situation is different when the beam is not 'narrow': the beam quality inside the wall is distorted, and related changes in the outside-wall dose or air kerma, such as the buildup effect, occur. Therefore, sometimes more careful research should be done in order to verify the shielding of a specific radiation generating device. High energy X-ray tubes operated at voltages above 400 kV, which are used for 'heavy' nondestructive inspection, are an example. People have less experience in running and shielding such devices than in the case of the widely used low energy X-ray tubes operated at voltages below 300 kV. In this study, the weekly air kerma outside concrete shielding walls of various thicknesses surrounding a 450 kVp X-ray tube was calculated using MCNP simulation with the aid of the geometry splitting method, a well-known variance reduction technique. The comparison between the simulated result, the HVL method result, and the NCRP Report 147 safety goal of 0.02 mGy/wk on air kerma for places where the public are free to pass showed that a concrete wall of thickness 80 cm is needed to achieve the
International Nuclear Information System (INIS)
Lee, Sang Heon; Lee, Eun Joong; Kim, Chan Kyu; Cho, Gyu Seong; Hur, Sam Suk
2016-01-01
Radiation generating devices must be properly shielded for their safe application. Although institutes such as the US National Bureau of Standards and the National Council on Radiation Protection and Measurements (NCRP) have provided guidelines for shielding X-ray tubes of various purposes, industry practitioners tend to rely on the 'Half Value Layer (HVL) method', which requires relatively simple calculation compared to those guidelines. The method is based on the fact that the intensity, dose, and air kerma of a narrow beam incident on a shielding wall decrease by about half as the beam penetrates one HVL thickness of the wall. One can adjust the shielding wall thickness to satisfy outside-wall dose or air kerma requirements with this calculation. However, this may not always hold, because 1) the strict definition of HVL deals only with intensity, and 2) the situation is different when the beam is not 'narrow': the beam quality inside the wall is distorted, and related changes in the outside-wall dose or air kerma, such as the buildup effect, occur. Therefore, sometimes more careful research should be done in order to verify the shielding of a specific radiation generating device. High energy X-ray tubes operated at voltages above 400 kV, which are used for 'heavy' nondestructive inspection, are an example. People have less experience in running and shielding such devices than in the case of the widely used low energy X-ray tubes operated at voltages below 300 kV. In this study, the weekly air kerma outside concrete shielding walls of various thicknesses surrounding a 450 kVp X-ray tube was calculated using MCNP simulation with the aid of the geometry splitting method, a well-known variance reduction technique. The comparison between the simulated result, the HVL method result, and the NCRP Report 147 safety goal of 0.02 mGy/wk on air kerma for places where the public are free to pass showed that a concrete wall of thickness 80 cm is needed to achieve the safety goal.
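The narrow-beam HVL calculation the abstract describes can be sketched directly: each HVL of wall halves the transmitted quantity, so the required thickness is HVL · log₂(K₀/K_goal). The unshielded kerma and concrete HVL below are hypothetical, and this simple model is exactly what breaks down for broad beams (no buildup correction), which is the abstract's point.

```python
import math

# Narrow-beam half-value-layer (HVL) shielding estimate. Each HVL of
# wall halves the transmitted air kerma; numbers are illustrative, not
# the paper's MCNP results.

def transmitted(k0, thickness_cm, hvl_cm):
    """Air kerma behind a wall of given thickness (narrow-beam model)."""
    return k0 * 0.5 ** (thickness_cm / hvl_cm)

def required_thickness(k0, k_goal, hvl_cm):
    """Wall thickness needed to attenuate k0 down to k_goal."""
    return hvl_cm * math.log2(k0 / k_goal)

k0 = 10.0     # hypothetical unshielded weekly air kerma, mGy/wk
goal = 0.02   # NCRP Report 147 public-area goal, mGy/wk
hvl = 3.2     # hypothetical HVL of concrete for a hard beam, cm

t = required_thickness(k0, goal, hvl)   # about 28.7 cm in this toy case
```

A Monte Carlo calculation such as the paper's MCNP study can give a substantially thicker requirement than this formula because broad-beam buildup and spectral hardening inside the wall are ignored here.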
New methods in plasma simulation
International Nuclear Information System (INIS)
Mason, R.J.
1990-01-01
The development of implicit methods of particle-in-cell (PIC) computer simulation in recent years, and their merger with older hybrid methods, has created a new arsenal of simulation techniques for the treatment of complex practical problems in plasma physics. The new implicit hybrid codes are aimed at transitional problems that lie somewhere between the long time scale, high density regime associated with MHD modeling and the short time scale, low density regime appropriate to standard PIC techniques. This transitional regime arises in ICF coronal plasmas, in pulsed power plasma switches, in Z-pinches, and in foil implosions. Here, we outline how such a merger of implicit and hybrid methods has been carried out, specifically in the ANTHEM computer code, and demonstrate the utility of implicit hybrid simulation in applications. 25 refs., 5 figs
Milestone M4900: Simulant Mixing Analytical Results
Energy Technology Data Exchange (ETDEWEB)
Kaplan, D.I.
2001-07-26
This report addresses Milestone M4900, "Simulant Mixing Sample Analysis Results," and contains the data generated during the "Mixing of Process Heels, Process Solutions, and Recycle Streams: Small-Scale Simulant" task. The Task Technical and Quality Assurance Plan for this task is BNF-003-98-0079A. A report with a narrative description and discussion of the data will be issued separately.
2-d Simulations of Test Methods
DEFF Research Database (Denmark)
Thrane, Lars Nyholm
2004-01-01
One of the main obstacles to the further development of self-compacting concrete is relating the fresh concrete properties to the form filling ability. Simulation of the form filling ability will therefore provide a powerful tool towards this goal. In this paper, a continuum mechanical approach is presented by showing initial results from 2-d simulations of the empirical test methods slump flow and L-box. This approach assumes a homogeneous material, which is expected to correspond to particle suspensions, e.g. concrete, when the material remains stable. The simulations have been carried out using both a Newtonian and a Bingham model for characterisation of the rheological properties of the concrete. From the results, it is expected that both the slump flow and the L-box can be simulated quite accurately when the model is extended to 3-d and the concrete is characterised according to the Bingham...
Summarizing Simulation Results using Causally-relevant States
Parikh, Nidhi; Marathe, Madhav; Swarup, Samarth
2016-01-01
As increasingly large-scale multiagent simulations are being implemented, new methods are becoming necessary to make sense of the results of these simulations. Even concisely summarizing the results of a given simulation run is a challenge. Here we pose this as the problem of simulation summarization: how to extract the causally relevant descriptions of the trajectories of the agents in the simulation. We present a simple algorithm to compress agent trajectories through state space by identifying the state transitions which are relevant to determining the distribution of outcomes at the end of the simulation. We present a toy example to illustrate the working of the algorithm, and then apply it to a complex simulation of a major disaster in an urban area. PMID:28042620
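The core idea, keeping only the state transitions that shift the distribution of final outcomes, can be illustrated with a toy version. The states, outcomes, and relevance test below are invented simplifications, not the paper's exact algorithm.

```python
# Toy simulation summarization: a transition is kept as "causally
# relevant" when the outcome distribution differs between trajectories
# that contain it and trajectories that do not. Data is invented.

trajectories = [                     # (state sequence, final outcome)
    (["home", "work", "shelter"], "safe"),
    (["home", "work", "road"], "injured"),
    (["home", "shelter"], "safe"),
    (["home", "road"], "injured"),
]

def transitions(states):
    """Consecutive state pairs of one trajectory."""
    return list(zip(states, states[1:]))

def is_relevant(trajs, transition):
    """True if containing this transition changes the 'safe' rate."""
    with_t = [o for s, o in trajs if transition in transitions(s)]
    without = [o for s, o in trajs if transition not in transitions(s)]
    if not with_t or not without:
        return False
    rate = lambda outcomes: outcomes.count("safe") / len(outcomes)
    return rate(with_t) != rate(without)

all_transitions = {t for s, _ in trajectories for t in transitions(s)}
relevant = sorted(t for t in all_transitions if is_relevant(trajectories, t))
# ("home", "work") is dropped: it occurs with both outcomes equally often
```

Compressing each trajectory down to its relevant transitions yields a short causal summary of why each agent ended up where it did.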
Titan's organic chemistry: Results of simulation experiments
Sagan, Carl; Thompson, W. Reid; Khare, Bishun N.
1992-01-01
Recent low-pressure continuous plasma discharge simulations of the auroral-electron-driven organic chemistry in Titan's mesosphere are reviewed. These simulations yielded results in good accord with Voyager observations of gas phase organic species. Optical constants of the brownish solid tholins produced in similar experiments are in good accord with Voyager observations of the Titan haze. Titan tholins are rich in prebiotic organic constituents; the Huygens entry probe may shed light on some of the processes that led to the origin of life on Earth.
Simulation Results of Double Forward Converter
Directory of Open Access Journals (Sweden)
P. Vijaya KUMAR
2009-12-01
Full Text Available This work aims to find a better forward converter for DC to DC conversion. Simulation of a double forward converter in an SMPS system is discussed in this paper. A forward converter with RCD snubber to synchronous rectifier and/or to current doubler is also discussed. The evolution of the forward converter is first reviewed in a tutorial fashion. Performance parameters are discussed, including operating principle, voltage conversion ratio, efficiency, device stress, small-signal dynamics, noise and EMI. The circuit operation and performance characteristics of the forward converter with RCD snubber and of the double forward converter are described, and the simulation results are presented.
DEFF Research Database (Denmark)
Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G
2016-01-01
... a homeostatic set point that follows a normal (Gaussian) distribution. This set point (or baseline in steady state) should be estimated from a set of previous samples but, in practice, decisions based on the reference change value are often based on only two consecutive results. The original reference change value ... -positive results. The aim of this study was to investigate false-positive results using five different published methods for calculation of the reference change value. METHODS: The five reference change value methods were examined using normally and ln-normally distributed simulated data. RESULTS: One method performed best in approaching the theoretical false-positive percentages on normally distributed data and another method performed best on ln-normally distributed data. The commonly used reference change value method based on two results (without use of an estimated set point) performed worst both on normally...
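The commonly used two-result rule discussed above is based on the classical formula RCV = √2 · z · √(CV_A² + CV_I²). A sketch that simulates its false-positive rate on a stable (unchanged) patient; the CV values are illustrative, and the five published variants compared in the study are not reproduced here.

```python
import math
import random

# Classical reference change value (RCV) and a simulation of its
# false-positive rate for two consecutive results from a steady-state
# patient. CVs are illustrative.

def rcv(cv_analytical, cv_within, z=1.96):
    """Two-sided RCV, as a fraction of the result."""
    return math.sqrt(2) * z * math.sqrt(cv_analytical**2 + cv_within**2)

random.seed(1)
cva, cvi = 0.03, 0.05              # 3% analytical, 5% within-subject CV
limit = rcv(cva, cvi)              # about 0.162: a 16.2% change is "real"
total_cv = math.sqrt(cva**2 + cvi**2)

trials, false_pos = 100_000, 0
for _ in range(trials):
    r1 = random.gauss(1.0, total_cv)   # set point fixed at 1.0 (no change)
    r2 = random.gauss(1.0, total_cv)
    if abs(r2 - r1) > limit:
        false_pos += 1
rate = false_pos / trials
# By construction of z = 1.96, roughly 5% of stable pairs are flagged
```

This reproduces the theoretical false-positive percentage only for normally distributed data; on ln-normal data the simple rule drifts, which is the behavior the study quantifies.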
Matrix method for acoustic levitation simulation.
Andrade, Marco A B; Perez, Nicolas; Buiochi, Flavio; Adamowski, Julio C
2011-08-01
A matrix method is presented for simulating acoustic levitators. A typical acoustic levitator consists of an ultrasonic transducer and a reflector. The matrix method is used to determine the potential for acoustic radiation force that acts on a small sphere in the standing wave field produced by the levitator. The method is based on the Rayleigh integral and it takes into account the multiple reflections that occur between the transducer and the reflector. The potential for acoustic radiation force obtained by the matrix method is validated by comparing the matrix method results with those obtained by the finite element method when using an axisymmetric model of a single-axis acoustic levitator. After validation, the method is applied in the simulation of a noncontact manipulation system consisting of two 37.9-kHz Langevin-type transducers and a plane reflector. The manipulation system allows control of the horizontal position of a small levitated sphere from -6 mm to 6 mm, which is done by changing the phase difference between the two transducers. The horizontal position of the sphere predicted by the matrix method agrees with the horizontal positions measured experimentally with a charge-coupled device camera. The main advantage of the matrix method is that it allows simulation of non-symmetric acoustic levitators without requiring much computational effort.
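The Rayleigh-integral building block that the matrix method iterates can be sketched as a discrete sum over small radiating surface elements. The piston geometry, mesh density, and frequency below are illustrative, and the multiple transducer-reflector reflections (the matrix part of the method) are not included.

```python
import cmath
import math

# Discretized Rayleigh integral: complex pressure (up to a constant
# factor) at a field point as a sum of element contributions
# exp(-j*k*r)/r from a meshed plane piston. Parameters are illustrative.

def rayleigh_pressure(field_point, elements, k):
    """Sum element contributions at one field point."""
    p = 0j
    for (ex, ey, ez, area) in elements:
        r = math.dist(field_point, (ex, ey, ez))
        p += area * cmath.exp(-1j * k * r) / r
    return p

# 10 mm square piston at z = 0, meshed into 20 x 20 equal elements
n, half = 20, 0.005
d = 2 * half / n
elements = [(-half + (i + 0.5) * d, -half + (j + 0.5) * d, 0.0, d * d)
            for i in range(n) for j in range(n)]

k = 2 * math.pi * 37_900 / 343.0   # wavenumber of 37.9 kHz sound in air
p_axis = rayleigh_pressure((0.0, 0.0, 0.05), elements, k)
amplitude = abs(p_axis)
```

In the matrix method, the field radiated onto the reflector is fed back as a new source distribution, and iterating this transfer captures the multiple reflections that build the standing wave.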
Numerical methods used in simulation
International Nuclear Information System (INIS)
Caseau, Paul; Perrin, Michel; Planchard, Jacques
1978-01-01
The fundamental numerical problem posed by simulation is the stability of the resolution scheme. The system of equations most used is defined, since there is a family of models of increasing complexity with 3, 4 or 5 equations, although only the models with 3 and 4 equations have been used extensively. After defining what is meant by explicit or implicit, the best-established stability results are given, first for one-dimensional problems and then for two-dimensional problems. It is shown that two types of discretisation may be defined: four- and eight-point schemes (in one or two dimensions) and six- and ten-point schemes (in one or two dimensions). Finally, some results are given on problems that are not usually treated much, i.e. non-asymptotic stability and the stability of schemes based on finite elements. [fr]
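The explicit/implicit stability distinction can be made concrete with the 1-D diffusion equation: the forward-Euler/central-difference update is stable only for r = αΔt/Δx² ≤ 1/2, while an implicit scheme has no such limit. The grid and coefficients below are made up for illustration.

```python
# Stability of the explicit 1-D diffusion scheme: the update
# u_i <- u_i + r*(u_{i+1} - 2*u_i + u_{i-1}) is stable only for
# r <= 0.5. Grid size and r values are illustrative.

def explicit_step(u, r):
    """One forward-Euler / central-difference step, fixed boundary values."""
    return ([u[0]] +
            [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
             for i in range(1, len(u) - 1)] +
            [u[-1]])

def max_abs(u):
    return max(abs(v) for v in u)

u0 = [0.0] * 21
u0[10] = 1.0                      # a spike that should diffuse away

stable, unstable = u0, u0
for _ in range(50):
    stable = explicit_step(stable, 0.4)     # r <= 0.5: spike decays
    unstable = explicit_step(unstable, 0.6) # r > 0.5: zigzag mode grows
```

The unstable run blows up because the highest spatial mode is amplified by |1 − 4r| > 1 at each step, which is exactly the kind of result the stability analysis in the paper establishes for each scheme.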
First results from simulations of supersymmetric lattices
Catterall, Simon
2009-01-01
We conduct the first numerical simulations of lattice theories with exact supersymmetry arising from the orbifold constructions of [Cohen:2003xe, Cohen:2003qw, Kaplan:2005ta]. We consider the Q = 4 theory in D = 0,2 dimensions and the Q = 16 theory in D = 0,2,4 dimensions. We show that the U(N) theories do not possess vacua which are stable non-perturbatively, but that this problem can be circumvented after truncation to SU(N). We measure the distribution of scalar field eigenvalues, the spectrum of the fermion operator and the phase of the Pfaffian arising after integration over the fermions. We monitor supersymmetry breaking effects by measuring a simple Ward identity. Our results indicate that simulations of N = 4 super Yang-Mills may be achievable in the near future.
International Nuclear Information System (INIS)
Kiviranta, Sauli; Saarinen, Hannu; Maekinen, Harri; Krassi, Boris
2011-01-01
A full-scale physical test facility, DTP2 (Divertor Test Platform 2), has been established in Finland for demonstrating and refining the Remote Handling (RH) equipment designs for ITER. The first prototype RH equipment at DTP2 is the Cassette Multifunctional Mover (CMM) equipped with the Second Cassette End Effector (SCEE), delivered to DTP2 in October 2008. The purpose is to prove that the CMM/SCEE prototype can be used successfully for the 2nd cassette RH operations. At the end of the F4E grant 'DTP2 test facility operation and upgrade preparation', the RH operations of the 2nd cassette were successfully demonstrated to the representatives of Fusion For Energy (F4E). Due to its design, the CMM/SCEE robot has relatively large mechanical flexibilities when it carries the nine-ton 2nd cassette on the 3.6-m-long lever. This leads to poor absolute accuracy and to a situation where the 3D model used in the control system does not reflect the actual deformed state of the CMM/SCEE robot. To improve the accuracy, a new method has been developed to handle the flexibilities within the control system's virtual environment. The effect of the load on the CMM/SCEE has been measured and minimized in a load compensation model implemented in the control system software. The proposed method accounts for the structural deformations of the robot in the control system through 3D model morphing, using finite element method (FEM) analysis to generate the morph targets. This resulted in a considerable improvement of the CMM/SCEE absolute accuracy and of the adequacy of the 3D model, which is crucially important in RH applications, where visual information about the controlled device in its surrounding environment is limited.
Reconstructing the ideal results of a perturbed analog quantum simulator
Schwenk, Iris; Reiner, Jan-Michael; Zanker, Sebastian; Tian, Lin; Leppäkangas, Juha; Marthaler, Michael
2018-04-01
Well-controlled quantum systems can potentially be used as quantum simulators. However, a quantum simulator is inevitably perturbed by coupling to additional degrees of freedom. This constitutes a major roadblock to useful quantum simulations. So far there are only limited means to understand the effect of perturbation on the results of quantum simulation. Here we present a method which, in certain circumstances, allows for the reconstruction of the ideal result from measurements on a perturbed quantum simulator. We consider extracting the value of the correlator ⟨Ôi(t)Ôj(0)⟩ from the simulated system, where the Ôi are the operators which couple the system to its environment. The ideal correlator can be straightforwardly reconstructed by using statistical knowledge of the environment, if any n-time correlator of operators Ôi of the ideal system can be written as products of two-time correlators. We give an approach to verify the validity of this assumption experimentally by additional measurements on the perturbed quantum simulator. The proposed method can allow for reliable quantum simulations with systems subjected to environmental noise without adding an overhead to the quantum system.
An Efficient Simulation Method for Rare Events
Rached, Nadhir B.
2015-01-07
Estimating the probability that a sum of random variables (RVs) exceeds a given threshold is a well-known challenging problem. Closed-form expressions for the sum distribution do not generally exist, which has led to an increasing interest in simulation approaches. A crude Monte Carlo (MC) simulation is the standard technique for the estimation of this type of probability. However, this approach is computationally expensive, especially when dealing with rare events. Variance reduction techniques are alternative approaches that can improve the computational efficiency of naive MC simulations. We propose an Importance Sampling (IS) simulation technique based on the well-known hazard rate twisting approach, which has the advantage of being asymptotically optimal for arbitrary RVs. The wide scope of applicability of the proposed method is mainly due to our particular way of selecting the twisting parameter. This feature is rarely offered by variance reduction algorithms, whose performance is typically proven only under restrictive assumptions. The method is also efficient, as illustrated by selected simulation results comparing its performance with that of an algorithm based on a conditional MC technique.
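For exponential random variables the hazard-rate twisting idea reduces to classical exponential tilting, which makes a compact illustration possible. The sketch below is not the authors' algorithm — the distribution, the tilting-parameter heuristic and all constants are assumptions for illustration; it only shows why importance sampling succeeds where crude MC would almost surely return zero for a rare event.

```python
import numpy as np

rng = np.random.default_rng(0)

# P(X1 + ... + Xn > gamma) for i.i.d. Exp(lam) terms; the true value here
# is about 4.7e-10, far out of reach of crude MC with m = 1e5 samples
lam, n, gamma, m = 1.0, 4, 30.0, 100_000

def twisted_is(theta):
    # sample each term from the tilted density Exp(lam - theta) ...
    x = rng.exponential(1.0 / (lam - theta), size=(m, n))
    s = x.sum(axis=1)
    # ... and undo the change of measure with the likelihood ratio
    w = (lam / (lam - theta)) ** n * np.exp(-theta * s)
    return float(((s > gamma) * w).mean())

theta = lam - n / gamma   # tilt so the sampling mean of the sum equals gamma
est = twisted_is(theta)   # unbiased estimate concentrated around ~4.7e-10
```

Shifting the sampling distribution so the threshold crossing becomes typical, then reweighting, is the common mechanism behind hazard-rate twisting and its exponential-tilting special case.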
An improved method for simulating radiographs
International Nuclear Information System (INIS)
Laguna, G.W.
1986-01-01
The parameters involved in generating actual radiographs and what can and cannot be modeled are examined in this report. Using the spectral distribution of the radiation source and the mass absorption curve for the material comprising the part to be modeled, the actual amount of radiation that would pass through the part and reach the film is determined. This method increases confidence in the results of the simulation and enables the modeling of parts made of multiple materials
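The computation described — combining the source's spectral distribution with the mass-absorption curve of each material to predict the radiation reaching the film — amounts to a spectrally weighted Beer-Lambert law. A minimal sketch with assumed, illustrative spectra and attenuation values (not real tabulated data):

```python
import numpy as np

# assumed 3-bin source spectrum: energies (keV) and relative intensities
energies = np.array([60.0, 100.0, 150.0])
intensity = np.array([0.5, 0.3, 0.2])

# assumed mass attenuation coefficients mu/rho [cm^2/g], one per energy bin
mu_rho = {
    "steel":    np.array([1.20, 0.37, 0.20]),
    "aluminum": np.array([0.28, 0.17, 0.14]),
}
density = {"steel": 7.87, "aluminum": 2.70}   # g/cm^3

def transmitted(path):
    """Fraction of source intensity reaching the film through a stack of
    (material, thickness_cm) layers, applied bin by bin (Beer-Lambert)."""
    atten = np.zeros_like(energies)
    for material, t in path:
        atten += mu_rho[material] * density[material] * t
    return float((intensity * np.exp(-atten)).sum() / intensity.sum())

# a part made of multiple materials, as in the report
frac = transmitted([("steel", 0.5), ("aluminum", 2.0)])
```

Summing the exponential attenuation over spectral bins is what lets a simulation handle both a polychromatic source and parts made of several materials.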
Trojan Horse Method: Recent Results
International Nuclear Information System (INIS)
Pizzone, R. G.; Spitaleri, C.
2008-01-01
Owing to the presence of the Coulomb barrier at astrophysically relevant kinetic energies, it is very difficult, or sometimes impossible, to measure astrophysical reaction rates in the laboratory. This is why different indirect techniques are used along with direct measurements. The THM is a unique indirect technique that allows one to measure astrophysical rearrangement reactions down to astrophysically relevant energies. The basic principle and a review of the main applications of the Trojan Horse Method are presented. The applications aimed at extracting the bare S_b(E) astrophysical factor and the electron screening potentials U_e for several two-body processes are discussed
Medical Simulation Practices 2010 Survey Results
McCrindle, Jeffrey J.
2011-01-01
Medical Simulation Centers are an essential component of our learning infrastructure to prepare doctors and nurses for their careers. Unlike the military and aerospace simulation industry, very little has been published regarding the best practices currently in use within medical simulation centers. This survey attempts to provide insight into the current simulation practices at medical schools, hospitals, university nursing programs and community college nursing programs. Students within the MBA program at Saint Joseph's University conducted a survey of medical simulation practices during the summer 2010 semester. A total of 115 institutions responded to the survey. The survey results discuss the overall effectiveness of current simulation centers as well as the tools and techniques used to conduct the simulation activity
Ozcan, Aydin; Perego, Claudio; Salvalaglio, Matteo; Parrinello, Michele
2017-01-01
In this study, we introduce a new non-equilibrium molecular dynamics simulation method to perform simulations of concentration driven membrane permeation processes. The methodology is based on the application of a non-conservative bias force controlling the concentration of species at the inlet and outlet of a membrane. We demonstrate our method for pure methane, ethane and ethylene permeation and for ethane/ethylene separation through a flexible ZIF-8 membrane. Results show that a stationary concentration gradient is maintained across the membrane, realistically simulating an out-of-equilibrium diffusive process, and the computed permeabilities and selectivity are in good agreement with experimental results.
Presenting simulation results in a nested loop plot.
Rücker, Gerta; Schwarzer, Guido
2014-12-12
Statisticians investigate new methods in simulations to evaluate their properties for future real data applications. Results are often presented in a number of figures, e.g., Trellis plots. We conducted a simulation study on six statistical methods for estimating the treatment effect in binary outcome meta-analyses, where selection bias (e.g., publication bias) was suspected because of apparent funnel plot asymmetry. We varied five simulation parameters: true treatment effect, extent of selection, event proportion in the control group, heterogeneity parameter, and number of studies in the meta-analysis. In combination, this yielded a total of 768 scenarios. To present all results using Trellis plots, 12 figures were needed. Choosing bias as the criterion of interest, we present a 'nested loop plot', a diagram type that aims to show all simulation results in one plot. The idea is to bring all scenarios into a lexicographical order and arrange them consecutively on the horizontal axis of a plot, while the treatment effect estimate is presented on the vertical axis. The plot illustrates how the parameters simultaneously influenced the estimate. It can be combined with a Trellis plot in a so-called hybrid plot. Nested loop plots may also be applied to other criteria, such as the variance of estimation. Similar to a time series graph, the nested loop plot summarizes all information about the results of a simulation study with respect to a chosen criterion in one picture and provides a suitable alternative or an addition to Trellis plots.
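The scenario ordering at the heart of the nested loop plot is a lexicographic (nested-loop) enumeration of the parameter grid, with each scenario occupying one consecutive slot on the horizontal axis. A small sketch with illustrative parameter values (not those of the cited study, which had 768 scenarios):

```python
import itertools

# the five varied parameters, outermost loop first; the values are
# illustrative placeholders, not those of the meta-analysis study
params = {
    "true_effect":   [0.0, 0.5, 1.0],
    "selection":     ["none", "moderate", "strong"],
    "event_prop":    [0.1, 0.3],
    "heterogeneity": [0.0, 0.2],
    "n_studies":     [5, 20],
}

# lexicographic order = nested loops, with the innermost (last) parameter
# varying fastest; each tuple is one scenario
scenarios = list(itertools.product(*params.values()))
x_positions = list(range(len(scenarios)))  # one x-axis slot per scenario

n_scenarios = len(scenarios)               # 3 * 3 * 2 * 2 * 2 = 72 here
```

Plotting the criterion (e.g., bias) against `x_positions`, with step lines above the panel indicating each parameter's current value, reproduces the nested loop layout.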
Saltstone Matrix Characterization And Stadium Simulation Results
International Nuclear Information System (INIS)
Langton, C.
2009-01-01
SIMCO Technologies, Inc. was contracted to evaluate the durability of the saltstone matrix material and to measure saltstone transport properties. This information will be used to: (1) parameterize the STADIUM® service life code, (2) predict the leach rate (degradation rate) for the saltstone matrix over 10,000 years using the STADIUM® concrete service life code, and (3) validate the modeled results by conducting leaching (water immersion) tests. Saltstone durability for this evaluation is limited to changes in the matrix itself and does not include changes in the chemical speciation of the contaminants in the saltstone. This report summarizes the results obtained to date, which include characterization data for saltstone cured up to 365 days and characterization of saltstone cured for 137 days and immersed in water for 31 days. Chemicals for preparing the simulated non-radioactive salt solution were obtained from chemical suppliers. The saltstone slurry was mixed according to directions provided by SRNL. However, SIMCO Technologies Inc. personnel made a mistake in the premix proportions: instead of the reference mix proportions of 45 wt% slag, 45 wt% fly ash, and 10 wt% cement, they used 21 wt% slag, 65 wt% fly ash, and 14 wt% cement. The mistake was acknowledged, and new mixes have been prepared and are curing. The results presented in this report are expected to be conservative, since the samples prepared were deficient in slag, which is very reactive in the caustic salt solution, and contained excess fly ash. The hydraulic reactivity of slag is about four times that of fly ash, so the amount of hydrated binder formed per unit volume in the SIMCO saltstone samples is correspondingly lower.
Methods for Monte Carlo simulations of biomacromolecules.
Vitalis, Andreas; Pappu, Rohit V
2009-01-01
The state of the art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse-graining strategies.
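The elementary building block shared by the reviewed methodologies is the Metropolis acceptance step in the canonical ensemble. A generic, self-contained sketch on a toy one-dimensional potential — a stand-in for a molecular energy function, not a biomacromolecular moveset:

```python
import math, random

random.seed(1)

def metropolis(energy, x0, step, beta, n_moves):
    """Generic Metropolis sampling in the canonical ensemble: propose a
    local move, accept with probability min(1, exp(-beta * dE))."""
    x, e = x0, energy(x0)
    samples = []
    for _ in range(n_moves):
        x_new = x + random.uniform(-step, step)
        e_new = energy(x_new)
        if e_new <= e or random.random() < math.exp(-beta * (e_new - e)):
            x, e = x_new, e_new   # accept the move
        samples.append(x)         # rejected moves re-count the old state
    return samples

# toy harmonic potential standing in for a molecular energy function
samples = metropolis(lambda x: 0.5 * x * x, 0.0, 1.0, beta=1.0, n_moves=100_000)
mean_sq = sum(s * s for s in samples) / len(samples)   # approaches kT = 1
```

For biomacromolecules the interesting questions reviewed above are precisely what replaces the uniform proposal: internal-coordinate moves, concerted rotations, and movesets tuned for acceptance efficiency.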
Methods for simulating turbulent phase screen
International Nuclear Information System (INIS)
Zhang Jianzhu; Zhang Feizhou; Wu Yi
2012-01-01
Some methods for simulating turbulent phase screens are summarized, and their characteristics are analyzed by calculating the phase structure function, decomposing the phase screens into Zernike polynomials, and simulating laser propagation in the atmosphere. The analysis shows that phase screens simulated by the FFT method contain the turbulent high-frequency components well but contain little of the low-frequency components, whereas screens simulated by the Zernike method contain the low-frequency components well but not enough of the high-frequency ones. The high-frequency content can be improved by increasing the order of the Zernike polynomials, but it then lies mainly in the edge area. Compared with these two methods, the fractal method is a better way to simulate turbulent phase screens. Judged by the radius of the focal spot and the variance of the focal-spot jitter, all methods except the fractal method show limitations. Combining the FFT and Zernike methods, or combining the FFT method with self-similar theory, is an effective and appropriate way to simulate turbulent phase screens. In general, the fractal method is probably the best choice.
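The FFT method discussed above colours complex white noise with the square root of the turbulence power spectrum and inverse-transforms it. A minimal sketch assuming a Kolmogorov phase PSD (the 0.023 r0^(-5/3) f^(-11/3) form) and one common scaling convention; refinements such as subharmonic compensation for the missing low frequencies are omitted, which is exactly the low-frequency deficiency the abstract describes:

```python
import numpy as np

rng = np.random.default_rng(0)

def fft_phase_screen(n, delta, r0):
    """Kolmogorov phase screen by the FFT (spectral) method: colour
    complex white noise with sqrt(PSD), then inverse-transform."""
    df = 1.0 / (n * delta)                 # frequency-grid spacing
    fx = np.fft.fftfreq(n, delta)
    f = np.hypot(*np.meshgrid(fx, fx, indexing="ij"))
    # Kolmogorov phase PSD; the piston (f = 0) term is undefined, so zero it
    psd = 0.023 * r0 ** (-5.0 / 3.0) * np.where(f > 0, f, np.inf) ** (-11.0 / 3.0)
    cn = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    cn *= np.sqrt(psd) * df
    return np.fft.ifft2(cn).real * n * n   # undo ifft2's 1/n^2 factor

screen = fft_phase_screen(128, 0.01, 0.1)  # assumed grid spacing and r0
```

Because the lowest spatial frequencies are represented by only a few grid points, screens generated this way underrepresent tilt and other low-order aberrations, motivating the Zernike, fractal, and hybrid approaches compared in the paper.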
Detector Simulation: Data Treatment and Analysis Methods
Apostolakis, J
2011-01-01
Detector Simulation in 'Data Treatment and Analysis Methods', part of 'Landolt-Börnstein - Group I Elementary Particles, Nuclei and Atoms: Numerical Data and Functional Relationships in Science and Technology, Volume 21B1: Detectors for Particles and Radiation. Part 1: Principles and Methods'. This document is part of Part 1 'Principles and Methods' of Subvolume B 'Detectors for Particles and Radiation' of Volume 21 'Elementary Particles' of Landolt-Börnstein - Group I 'Elementary Particles, Nuclei and Atoms'. It contains the Section '4.1 Detector Simulation' of Chapter '4 Data Treatment and Analysis Methods' with the content: 4.1 Detector Simulation 4.1.1 Overview of simulation 4.1.1.1 Uses of detector simulation 4.1.2 Stages and types of simulation 4.1.2.1 Tools for event generation and detector simulation 4.1.2.2 Level of simulation and computation time 4.1.2.3 Radiation effects and background studies 4.1.3 Components of detector simulation 4.1.3.1 Geometry modeling 4.1.3.2 External fields 4.1.3.3 Intro...
Isogeometric methods for numerical simulation
Bordas, Stéphane
2015-01-01
The book presents the state of the art in isogeometric modeling and shows how the method has advanced. First, an introduction to geometric modeling with NURBS and T-splines is given, followed by the implementation in computer software. The implementation in both the FEM and the BEM is discussed.
Meshless Method for Simulation of Compressible Flow
Nabizadeh Shahrebabak, Ebrahim
In the present age, rapid development in computing technology and high-speed supercomputers has made numerical analysis and computational simulation more practical than ever before for large and complex cases. Numerical simulations have also become an essential means of analyzing engineering problems and cases where experimental analysis is not practical. Many sophisticated and accurate numerical schemes exist to perform these simulations. The finite difference method (FDM) has been used to solve differential equation systems for decades. Additional numerical methods based on finite volume and finite element techniques are widely used in solving problems with complex geometry. All of these are mesh-based techniques, and mesh generation is an essential preprocessing step to discretize the computational domain. However, for complex geometries these conventional mesh-based techniques can become troublesome, difficult to implement, and prone to inaccuracies. In this study, a more robust yet simple numerical approach is used to treat even complex problems in an easier manner. The meshless, or meshfree, method is one such development, and it has become the focus of much research in recent years. The biggest advantage of meshfree methods is that they circumvent mesh generation. Many algorithms have now been developed to help make this method more popular and understandable, and they have been employed over a wide range of problems in computational analysis with various levels of success. Since there is no connectivity between the nodes in this method, the challenge is considerable. The most fundamental issue is the lack of conservation, which can be a source of unpredictable errors in the solution process. This problem is particularly evident in the presence of steep-gradient regions and discontinuities, such as the shocks that frequently occur in high-speed compressible flow.
Collaborative simulation method with spatiotemporal synchronization process control
Zou, Yisheng; Ding, Guofu; Zhang, Weihua; Zhang, Jian; Qin, Shengfeng; Tan, John Kian
2016-10-01
When designing a complex mechatronic system, such as a high-speed train, it is relatively difficult to effectively simulate the entire system's dynamic behavior because it involves multi-disciplinary subsystems. Currently, the most practical approach to multi-disciplinary simulation is the interface-based coupling simulation method, but it faces a twofold challenge: spatial and temporal desynchronization among the multi-directional coupled simulations of the subsystems. A new collaborative simulation method with spatiotemporal synchronization process control is proposed for coupled simulation of a given complex mechatronic system across multiple subsystems on different platforms. The method consists of (1) a coupler-based coupling mechanism that defines the interfacing and interaction among subsystems, and (2) a simulation process control algorithm that realizes the coupled simulation in a spatiotemporally synchronized manner. The test results from a case study show that the proposed method can simulate the subsystem interactions under different simulation conditions in an engineering system, and that it effectively supports multi-directional coupled simulation among multi-disciplinary subsystems. The method has been successfully applied in the design and development of China's high-speed trains, demonstrating that it can be applied to a wide range of engineering systems with improved efficiency and effectiveness.
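The synchronization idea — all subsystems advancing from interface values frozen at a common synchronization point — can be sketched with two toy ODE subsystems. The equations and coupling gains below are invented for illustration; the actual method coordinates heterogeneous simulation platforms rather than two functions in one process.

```python
# two toy subsystem solvers advancing by explicit Euler; at each macro step
# they exchange interface values frozen at the same synchronization point

def sub_a(a, b_interface, dt):
    # toy dynamics: da/dt = -a + b_interface
    return a + dt * (-a + b_interface)

def sub_b(b, a_interface, dt):
    # toy dynamics: db/dt = -0.5*b + 0.2*a_interface
    return b + dt * (-0.5 * b + 0.2 * a_interface)

dt, steps = 0.01, 2000
a, b = 1.0, 0.0
for _ in range(steps):
    a_if, b_if = a, b      # synchronize: both read the same interface state
    a = sub_a(a, b_if, dt) # subsystems then advance independently,
    b = sub_b(b, a_if, dt) # possibly on different platforms, to the next sync
```

Freezing the interface state before either subsystem advances is what prevents the spatial and temporal desynchronization the paper identifies; the real process-control algorithm additionally manages differing solver step sizes across platforms.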
Comparison of validation methods for forming simulations
Schug, Alexander; Kapphan, Gabriel; Bardl, Georg; Hinterhölzl, Roland; Drechsler, Klaus
2018-05-01
The forming simulation of fibre reinforced thermoplastics could reduce the development time and improve the forming results. But to take advantage of the full potential of the simulations it has to be ensured that the predictions for material behaviour are correct. For that reason, a thorough validation of the material model has to be conducted after characterising the material. Relevant aspects for the validation of the simulation are for example the outer contour, the occurrence of defects and the fibre paths. To measure these features various methods are available. Most relevant and also most difficult to measure are the emerging fibre orientations. For that reason, the focus of this study was on measuring this feature. The aim was to give an overview of the properties of different measuring systems and select the most promising systems for a comparison survey. Selected were an optical, an eddy current and a computer-assisted tomography system with the focus on measuring the fibre orientations. Different formed 3D parts made of unidirectional glass fibre and carbon fibre reinforced thermoplastics were measured. Advantages and disadvantages of the tested systems were revealed. Optical measurement systems are easy to use, but are limited to the surface plies. With an eddy current system also lower plies can be measured, but it is only suitable for carbon fibres. Using a computer-assisted tomography system all plies can be measured, but the system is limited to small parts and challenging to evaluate.
Recent simulation results of the magnetic induction tomography forward problem
Directory of Open Access Journals (Sweden)
Stawicki Krzysztof
2016-06-01
In this paper we present the results of simulations of the Magnetic Induction Tomography (MIT) forward problem. Two complementary calculation techniques have been implemented and coupled: the finite element method (applied in the commercial software Comsol Multiphysics) and algebraic manipulations on the basic relationships of electromagnetism in Matlab. The developed combination saves a considerable amount of time and makes better use of the available computer resources.
Simulation of tunneling construction methods of the Cisumdawu toll road
Abduh, Muhamad; Sukardi, Sapto Nugroho; Ola, Muhammad Rusdian La; Ariesty, Anita; Wirahadikusumah, Reini D.
2017-11-01
Simulation can be used as a tool for planning and analyzing a construction method. Using simulation techniques, a contractor can optimally design the resources associated with a construction method and compare it to other methods based on several criteria, such as productivity, waste, and cost. This paper discusses the use of simulation of the Norwegian Method of Tunneling (NMT) for a 472-meter tunneling work in the Cisumdawu Toll Road project. Primary and secondary data were collected to provide useful information for the simulation as well as the problems that may be faced by the contractor. The method was modelled using CYCLONE and then simulated using WebCYCLONE. The simulation gives the duration of the project from the duration models of the individual work tasks, which are based on literature review, machine productivity, and several assumptions. The simulation also gives the total cost of the project, modeled from published construction and building unit costs and the online websites of local and international suppliers. The advantages and disadvantages of the method were analyzed based on its productivity, waste, and cost. The simulation put the total cost of the operation at about Rp. 900,437,004,599 and the total duration of the tunneling operation at 653 days. The results of the simulation will be used as a recommendation to the contractor before the implementation of the already selected tunneling operation.
Spectral Methods in Numerical Plasma Simulation
DEFF Research Database (Denmark)
Coutsias, E.A.; Hansen, F.R.; Huld, T.
1989-01-01
An introduction is given to the use of spectral methods in numerical plasma simulation. As examples of the use of spectral methods, solutions to the two-dimensional Euler equations in both a simple, doubly periodic region, and on an annulus will be shown. In the first case, the solution is expanded...
Evaluation of structural reliability using simulation methods
Directory of Open Access Journals (Sweden)
Baballëku Markel
2015-01-01
Eurocode describes the 'index of reliability' as a measure of structural reliability, related to the 'probability of failure'. This paper is focused on the assessment of this index for a reinforced concrete bridge pier. It is rare to explicitly use reliability concepts in the design of structures, but the problems of structural engineering are better understood through them. Some of the main methods for estimating the probability of failure are exact analytical integration, numerical integration, approximate analytical methods and simulation methods. Monte Carlo Simulation is used in this paper because it offers a very good tool for estimating probabilities in multivariate functions. Complicated probability and statistics problems are solved through computer-aided simulation of a large number of tests. The procedure of structural reliability assessment for the bridge pier and the comparison with the partial factor method of the Eurocodes are demonstrated in this paper.
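For a linear limit state with normal variables, the Monte Carlo estimate of the probability of failure can be checked against the closed-form reliability index. A minimal sketch with assumed resistance and load statistics, not the bridge-pier model of the paper:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(42)

# assumed limit state g = R - S: resistance R and load effect S, both normal
mu_r, sig_r = 30.0, 3.0
mu_s, sig_s = 20.0, 2.0

# Monte Carlo: simulate many realizations and count failures (g < 0)
n = 1_000_000
g = rng.normal(mu_r, sig_r, n) - rng.normal(mu_s, sig_s, n)
pf = np.mean(g < 0)                # estimated probability of failure

# in this linear-normal case the reliability index has a closed form,
# which gives an exact value to validate the simulation against
beta_exact = (mu_r - mu_s) / sqrt(sig_r**2 + sig_s**2)
pf_exact = 0.5 * (1 - erf(beta_exact / sqrt(2)))   # Phi(-beta)
```

For realistic, nonlinear limit states no closed form exists, which is precisely where the simulation approach described above earns its keep.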
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
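The simulation logic can be sketched compactly: generate a random event timeline, then score it with two of the interval methods and compare against the true cumulative duration. The event-generation parameters below are illustrative, not those of the study:

```python
import random

random.seed(7)

def score_session(obs_len, interval, events):
    """Score one observation session with momentary time sampling (MTS)
    and partial-interval recording (PIR); events are (start, end) seconds."""
    def active(t):
        return any(s <= t < e for s, e in events)
    n_int = obs_len // interval
    # MTS: look only at the final instant of each interval
    mts = sum(active((i + 1) * interval) for i in range(n_int)) / n_int
    # PIR: score an interval if the behavior occurred at any point within it
    pir = sum(any(active(t) for t in range(i * interval, (i + 1) * interval))
              for i in range(n_int)) / n_int
    true_prop = sum(e - s for s, e in events) / obs_len
    return mts, pir, true_prop

# scatter brief events at random over a 600-s observation period
events, t = [], 0
while True:
    t += random.randint(5, 30)                 # inter-event gap
    if t >= 600:
        break
    end = min(t + random.randint(1, 10), 600)  # event duration 1-10 s
    events.append((t, end))
    t = end

mts, pir, true_prop = score_session(600, 10, events)
```

A single run already illustrates the well-known bias of partial-interval recording, which by construction can only overestimate the true proportion of time the behavior occurred.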
Novel Methods for Electromagnetic Simulation and Design
2016-08-03
We developed new methods that provide the basis for high-fidelity modeling software that can handle complicated, electrically large objects in a manner that is sufficiently fast to allow design by simulation. We also developed new methods for scattering from cavities.
Real-time hybrid simulation using the convolution integral method
International Nuclear Information System (INIS)
Kim, Sung Jig; Christenson, Richard E; Wojtkiewicz, Steven F; Johnson, Erik A
2011-01-01
This paper proposes a real-time hybrid simulation method that will allow complex systems to be tested within the hybrid test framework by employing the convolution integral (CI) method. The proposed CI method is potentially transformative for real-time hybrid simulation. The CI method can allow real-time hybrid simulation to be conducted regardless of the size and complexity of the numerical model and for numerical stability to be ensured in the presence of high frequency responses in the simulation. This paper presents the general theory behind the proposed CI method and provides experimental verification of the proposed method by comparing the CI method to the current integration time-stepping (ITS) method. Real-time hybrid simulation is conducted in the Advanced Hazard Mitigation Laboratory at the University of Connecticut. A seismically excited two-story shear frame building with a magneto-rheological (MR) fluid damper is selected as the test structure to experimentally validate the proposed method. The building structure is numerically modeled and simulated, while the MR damper is physically tested. Real-time hybrid simulation using the proposed CI method is shown to provide accurate results
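The essence of the CI method is that the numerical substructure's response is obtained by convolving its precomputed impulse response with the measured force history, instead of stepping an integrator during the test. A minimal sketch for an assumed linear SDOF substructure and a stand-in force signal (the real method uses the measured MR-damper force in real time):

```python
import numpy as np

# assumed linear SDOF numerical substructure: mass, stiffness, damping ratio
m, k, zeta = 1.0, 100.0, 0.02
wn = np.sqrt(k / m)
wd = wn * np.sqrt(1.0 - zeta**2)

dt = 0.001
t = np.arange(0.0, 5.0, dt)
# analytic unit-impulse response of the substructure
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)

# force history standing in for the measured restoring force of the
# physical specimen (e.g., the MR damper) during the hybrid test
f = np.sin(2 * np.pi * 1.0 * t)

# convolution-integral (Duhamel) evaluation of the numerical response:
# x(t_i) = sum_j h(t_i - t_j) * f(t_j) * dt
x = np.convolve(f, h)[: len(t)] * dt
```

Because the impulse response is computed once offline, the per-step cost during the real-time test no longer depends on the size of the numerical model, which is the scalability advantage the paper claims for the CI method over integration time-stepping.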
Factorization method for simulating QCD at finite density
International Nuclear Information System (INIS)
Nishimura, Jun
2003-01-01
We propose a new method for simulating QCD at finite density. The method is based on a general factorization property of distribution functions of observables, and it is therefore applicable to any system with a complex action. The so-called overlap problem is completely eliminated by the use of constrained simulations. We test this method in a Random Matrix Theory for finite density QCD, where we are able to reproduce the exact results for the quark number density.
A physiological production model for cacao : results of model simulations
Zuidema, P.A.; Leffelaar, P.A.
2002-01-01
CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.
ANOVA parameters influence in LCF experimental data and simulation results
Directory of Open Access Journals (Sweden)
Vercelli A.
2010-06-01
The virtual design of components undergoing thermo-mechanical fatigue (TMF) and plastic strains is usually run in several phases. The numerical finite element method gives a useful instrument which becomes increasingly effective as the geometrical and numerical modelling becomes more accurate. The constitutive model definition plays an important role in the effectiveness of the numerical simulation [1, 2], as shown, for example, in Figure 1, which illustrates how a good cyclic plasticity constitutive model can simulate a cyclic load experiment. Component life estimation is the subsequent phase and requires complex damage and life estimation models [3-5] which take into account the several parameters and phenomena contributing to damage and life duration. The calibration of these constitutive and damage models requires an accurate testing activity. The main topic of the present research activity is to investigate whether the parameters that prove influential in the experimental activity also influence the numerical simulations, thus defining the effectiveness of the models in accounting for all the phenomena actually influencing the life of the component. To this end, a procedure to tune the parameters needed to estimate the life of mechanical components undergoing TMF and plastic strains is presented for a commercial steel. The procedure aims to be simple and to allow calibrating both the material constitutive model (for the numerical structural simulation) and the damage and life model (for life assessment). The procedure has been applied to specimens. The experimental activity was developed on three sets of tests run at several temperatures: static tests, high cycle fatigue (HCF) tests, and low cycle fatigue (LCF) tests. The numerical structural FEM simulations were run on a commercial non-linear solver, ABAQUS® 6.8, and replicated the experimental tests. The stress, strain, thermal results from the thermo
New method of fast simulation for a hadron calorimeter response
International Nuclear Information System (INIS)
Kul'chitskij, Yu.; Sutiak, J.; Tokar, S.; Zenis, T.
2003-01-01
In this work we present a new method for the fast Monte Carlo simulation of a hadron calorimeter response. It is based on a three-dimensional parameterization of the hadronic shower obtained from the ATLAS TILECAL test beam data and GEANT simulations. A new approach to including the longitudinal fluctuations of the hadronic shower is described. The results of the fast simulation are in good agreement with the TILECAL experimental data
Growth Kinetics of the Homogeneously Nucleated Water Droplets: Simulation Results
International Nuclear Information System (INIS)
Mokshin, Anatolii V; Galimzyanov, Bulat N
2012-01-01
The growth of homogeneously nucleated droplets in water vapor at the fixed temperatures T = 273, 283, 293, 303, 313, 323, 333, 343, 353, 363 and 373 K (at the pressure p = 1 atm) is investigated on the basis of coarse-grained molecular dynamics simulation data with the mW model. The treatment of the simulation results is performed by means of the statistical method within the mean-first-passage-time approach, where the reaction coordinate is associated with the largest droplet size. It is found that the water droplet growth is characterized by the following features: (i) the rescaled growth law is unified at all the considered temperatures, and (ii) the droplet growth evolves with acceleration and follows a power law.
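The mean-first-passage-time analysis described above can be sketched in a few lines: given an ensemble of trajectories of the largest-droplet size, the MFPT for each threshold is the average first time at which the reaction coordinate reaches that threshold. The trajectories below are synthetic stand-ins (a noisy monotone growth process), not output of the mW-model simulations.

```python
import random

def mean_first_passage_time(trajectories, sizes):
    """For each threshold size, average the first time index at which
    the largest-droplet size n(t) reaches that threshold."""
    taus = []
    for n_star in sizes:
        times = []
        for traj in trajectories:
            for t, n in enumerate(traj):
                if n >= n_star:
                    times.append(t)
                    break
        taus.append(sum(times) / len(times))
    return taus

# Synthetic stand-in for simulation data: monotonically growing droplet
# size with small random fluctuations (hypothetical parameters).
random.seed(1)
trajs = []
for _ in range(200):
    n, traj = 0.0, []
    for t in range(500):
        n += max(0.0, random.gauss(1.0, 0.3))  # non-negative growth step
        traj.append(n)
    trajs.append(traj)

tau = mean_first_passage_time(trajs, sizes=[50, 100, 200])
```

Plotting the threshold size against tau (size vs. mean first-passage time) is what reveals the growth law in this kind of analysis.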
Simulation teaching method in Engineering Optics
Lu, Qieni; Wang, Yi; Li, Hongbin
2017-08-01
We here introduce a pedagogical method of theoretical simulation as one major means of the teaching process of "Engineering Optics" in the course quality improvement action plan (Qc) in our school. Students, in groups of three to five, complete simulations of interference, diffraction, electromagnetism and polarization of light; each student is evaluated and scored in light of his performance in interviews between the teacher and the student, and each student can opt to be interviewed several times until he is satisfied with his score and learning. After three years of Qc practice, a remarkable teaching and learning effect has been obtained. Such theoretical simulation experiments are a very valuable teaching method for physical optics, which is highly theoretical and abstruse. This teaching methodology works well in training students in how to ask questions and how to solve problems, and it can also stimulate their interest in research learning and their initiative to develop self-confidence and a sense of innovation.
Hybrid Method Simulation of Slender Marine Structures
DEFF Research Database (Denmark)
Christiansen, Niels Hørbye
The present thesis consists of an extended summary and five appended papers concerning various aspects of the implementation of a hybrid method which combines classical simulation methods and artificial neural networks. The thesis covers three main topics. Common for all these topics...... only recognize patterns similar to those comprised in the data used to train the network. Fatigue life evaluation of marine structures often considers simulations of more than a hundred different sea states. Hence, in order for this method to be useful, the training data must be arranged so...... that a single neural network can cover all relevant sea states. The applicability and performance of the present hybrid method is demonstrated on a numerical model of a mooring line attached to a floating offshore platform. The second part of the thesis demonstrates how sequential neural networks can be used......
A Simulation Method Measuring Psychomotor Nursing Skills.
McBride, Helena; And Others
1981-01-01
The development of a simulation technique to evaluate performance of psychomotor skills in an undergraduate nursing program is described. This method is used as one admission requirement to an alternate route nursing program. With modifications, any health profession could use this technique where psychomotor skills performance is important.…
Comparing three methods for participatory simulation of hospital work systems
DEFF Research Database (Denmark)
Broberg, Ole; Andersen, Simone Nyholm
Summative Statement: This study compared three participatory simulation methods using different simulation objects: a low-resolution table-top setup using Lego figures, full-scale mock-ups, and blueprints using Lego figures. It was concluded that the three objects differ in fidelity and affordance...... scenarios using the objects. Results: Full-scale mock-ups significantly addressed the local space and technology/tool elements of a work system. In contrast, the table-top simulation object addressed the organizational issues of the future work system. The blueprint-based simulation addressed......
Simulation methods for nuclear production scheduling
International Nuclear Information System (INIS)
Miles, W.T.; Markel, L.C.
1975-01-01
Recent developments and applications of simulation methods for use in nuclear production scheduling and fuel management are reviewed. The unique characteristics of the nuclear fuel cycle as they relate to the overall optimization of a mixed nuclear-fossil system in both the short- and mid-range time frames are described. Emphasis is placed on the various formulations and approaches to the mid-range planning problem, whose objective is the determination of an optimal (least cost) system operation strategy over a multi-year planning horizon. The decomposition of the mid-range problem into power system simulation, reactor core simulation and nuclear fuel management optimization, and system integration models is discussed. Present utility practices, requirements, and research trends are described. 37 references
A particle-based method for granular flow simulation
Chang, Yuanzhang; Bao, Kai; Zhu, Jian; Wu, Enhua
2012-01-01
We present a new particle-based method for granular flow simulation. In the method, a new elastic stress term, derived from a modified form of Hooke's law, is included in the momentum governing equation to handle the friction of granular materials. A viscosity force is also added to simulate dynamic friction, smoothing the velocity field and further maintaining simulation stability. Benefiting from the Lagrangian nature of the SPH method, large flow deformations can be handled easily and naturally. In addition, a signed distance field is employed to enforce the solid boundary condition. The experimental results show that the proposed method is effective and efficient for handling the flow of granular materials, and different kinds of granular behaviors can be simulated by adjusting just one parameter. © 2012 Science China Press and Springer-Verlag Berlin Heidelberg.
A particle-based method for granular flow simulation
Chang, Yuanzhang
2012-03-16
We present a new particle-based method for granular flow simulation. In the method, a new elastic stress term, derived from a modified form of Hooke's law, is included in the momentum governing equation to handle the friction of granular materials. A viscosity force is also added to simulate dynamic friction, smoothing the velocity field and further maintaining simulation stability. Benefiting from the Lagrangian nature of the SPH method, large flow deformations can be handled easily and naturally. In addition, a signed distance field is employed to enforce the solid boundary condition. The experimental results show that the proposed method is effective and efficient for handling the flow of granular materials, and different kinds of granular behaviors can be simulated by adjusting just one parameter. © 2012 Science China Press and Springer-Verlag Berlin Heidelberg.
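As a rough illustration of the SPH machinery that the two records above build on, the sketch below evaluates particle densities with a 1D cubic-spline kernel, the standard SPH summation rho_i = sum_j m_j W(x_i - x_j, h). The paper's elastic stress and viscosity terms are omitted, and all numbers are hypothetical.

```python
import math

def kernel(r, h):
    """1D cubic-spline smoothing kernel, normalized so it integrates to 1."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def sph_density(positions, mass, h):
    """Density at each particle by the standard SPH summation."""
    return [sum(mass * kernel(xi - xj, h) for xj in positions)
            for xi in positions]

# Uniformly spaced particles: interior density should be near mass/spacing.
dx = 0.1
xs = [i * dx for i in range(50)]
rho = sph_density(xs, mass=1.0, h=1.2 * dx)
```

For unit mass and spacing 0.1 the interior density should come out near 10, with the usual deficiency at the free ends where neighbors are missing.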
Lagrangian numerical methods for ocean biogeochemical simulations
Paparella, Francesco; Popolizio, Marina
2018-05-01
We propose two closely-related Lagrangian numerical methods for the simulation of physical processes involving advection, reaction and diffusion. The methods are intended for settings where the flow is nearly incompressible and the Péclet numbers are so high that resolving all the scales of motion is unfeasible, as is commonplace in ocean flows. Our methods consist of augmenting the method of characteristics, which is suitable for advection-reaction problems, with couplings among nearby particles, producing fluxes that mimic diffusion or unresolved small-scale transport. The methods conserve mass, obey the maximum principle, and allow tuning the strength of the diffusive terms down to zero, while avoiding unwanted numerical dissipation effects.
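The idea of mimicking diffusion through couplings among nearby particles can be sketched as an antisymmetric pairwise exchange: every flux added to one particle is subtracted from its partner, so mass is conserved exactly, and for a small enough rate the update coefficients stay non-negative, which preserves the maximum principle. This is a minimal 1D sketch with assumed parameters, not the authors' scheme.

```python
def diffuse_pairwise(conc, positions, radius, rate):
    """One explicit step of particle-pairwise diffusion: each nearby pair
    exchanges an antisymmetric flux, so total mass is conserved exactly."""
    n = len(conc)
    flux = [0.0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if abs(positions[i] - positions[j]) <= radius:
                f = rate * (conc[j] - conc[i])
                flux[i] += f
                flux[j] -= f
    return [c + f for c, f in zip(conc, flux)]

# Point release in the middle of a row of fixed particles (hypothetical).
xs = [0.1 * i for i in range(101)]
c = [0.0] * 101
c[50] = 1.0
for _ in range(200):
    c = diffuse_pairwise(c, xs, radius=0.15, rate=0.2)
```

After 200 steps the unit mass has spread into a smooth bump: the total is still exactly 1, no concentration has gone negative, and the peak has decayed well below its initial value.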
Simulation and the Monte Carlo method
Rubinstein, Reuven Y
2016-01-01
Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...
A simulation method for lightning surge response of switching power
International Nuclear Information System (INIS)
Wei, Ming; Chen, Xiang
2013-01-01
In order to meet the need for protection design against lightning surges, a prediction method for the lightning electromagnetic pulse (LEMP) response, based on system identification, is presented. Surge injection experiments on a switching power supply were conducted, and the input and output data were sampled, de-noised and de-trended. In addition, the model of the energy coupling transfer function was obtained by the system identification method. Simulation results show that the system identification method can predict the surge response of a linear circuit well. The method proposed in the paper provides a convenient and effective technique for simulation of lightning effects.
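A generic least-squares ARX fit is one simple form of the input/output identification step described above. The sketch below recovers the coefficients of a known synthetic discrete-time system from sampled data; the system, model orders, and data are hypothetical, not the paper's surge measurements.

```python
import random

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of y[t] = sum a_k y[t-k] + sum b_k u[t-k]
    via the normal equations, solved with Gaussian elimination."""
    rows, rhs = [], []
    start = max(na, nb)
    for t in range(start, len(y)):
        rows.append([y[t - k] for k in range(1, na + 1)] +
                    [u[t - k] for k in range(1, nb + 1)])
        rhs.append(y[t])
    m = na + nb
    # Normal equations: (A^T A) theta = A^T b
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
    atb = [sum(r[i] * b for r, b in zip(rows, rhs)) for i in range(m)]
    # Gaussian elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atb[col], atb[piv] = atb[piv], atb[col]
        for r in range(col + 1, m):
            f = ata[r][col] / ata[col][col]
            for c in range(col, m):
                ata[r][c] -= f * ata[col][c]
            atb[r] -= f * atb[col]
    theta = [0.0] * m
    for r in range(m - 1, -1, -1):
        s = atb[r] - sum(ata[r][c] * theta[c] for c in range(r + 1, m))
        theta[r] = s / ata[r][r]
    return theta

# Hypothetical system: y[t] = 0.5 y[t-1] - 0.2 y[t-2] + 1.0 u[t-1] + 0.3 u[t-2]
random.seed(0)
u = [random.uniform(-1.0, 1.0) for _ in range(400)]
y = [0.0, 0.0]
for t in range(2, 400):
    y.append(0.5 * y[t - 1] - 0.2 * y[t - 2] + 1.0 * u[t - 1] + 0.3 * u[t - 2])
theta = fit_arx(u, y)
```

With noise-free data and a matched model order the least-squares fit recovers the true coefficients essentially exactly; the de-noising and de-trending steps mentioned in the abstract matter precisely because measured data are not this clean.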
Simulating colloid hydrodynamics with lattice Boltzmann methods
International Nuclear Information System (INIS)
Cates, M E; Stratford, K; Adhikari, R; Stansell, P; Desplat, J-C; Pagonabarraga, I; Wagner, A J
2004-01-01
We present a progress report on our work on lattice Boltzmann methods for colloidal suspensions. We focus on the treatment of colloidal particles in binary solvents and on the inclusion of thermal noise. For a benchmark problem of colloids sedimenting and becoming trapped by capillary forces at a horizontal interface between two fluids, we discuss the criteria for parameter selection, and address the inevitable compromise between computational resources and simulation accuracy
Motion simulation of hydraulic driven safety rod using FSI method
International Nuclear Information System (INIS)
Jung, Jaeho; Kim, Sanghaun; Yoo, Yeonsik; Cho, Yeonggarp; Kim, Jong In
2013-01-01
A hydraulic driven safety rod, one of the reactivity control mechanisms, is being developed by the Division for Reactor Mechanical Engineering, KAERI. In this paper the motion of this rod is simulated by the fluid structure interaction (FSI) method before manufacturing, for design verification and pump sizing. The simulation is done in a CFD domain with a user-defined function (UDF). The pressure drop changes only slightly with flow rate, which means that the pressure drop is mainly determined by the weight of the moving part. The simulated velocity of the piston is linearly proportional to the flow rate, so the pump can be sized easily according to the rise and drop time requirements of the safety rod using the simulation results
Nonequilibrium relaxation method – An alternative simulation strategy
Indian Academy of Sciences (India)
One well-established simulation strategy to study the thermal phases and transitions of a given microscopic model system is the so-called equilibrium method, in which one first realizes the equilibrium ensemble of a finite system and then extrapolates the results to the infinite system. This equilibrium method traces over the ...
LOMEGA: a low frequency, field implicit method for plasma simulation
International Nuclear Information System (INIS)
Barnes, D.C.; Kamimura, T.
1982-04-01
Field implicit methods for low frequency plasma simulation by the LOMEGA (Low OMEGA) codes are described. These implicit field methods may be combined with particle pushing algorithms using either Lorentz force or guiding center force models to study two-dimensional, magnetized, electrostatic plasmas. Numerical results for ω_e Δt >> 1 are described. (author)
Spectral methods in numerical plasma simulation
International Nuclear Information System (INIS)
Coutsias, E.A.; Hansen, F.R.; Huld, T.; Knorr, G.; Lynov, J.P.
1989-01-01
An introduction is given to the use of spectral methods in numerical plasma simulation. As examples of the use of spectral methods, solutions to the two-dimensional Euler equations in both a simple, doubly periodic region, and on an annulus will be shown. In the first case, the solution is expanded in a two-dimensional Fourier series, while a Chebyshev-Fourier expansion is employed in the second case. A new, efficient algorithm for the solution of Poisson's equation on an annulus is introduced. Problems connected to aliasing and to short wavelength noise generated by gradient steepening are discussed. (orig.)
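The flavor of a spectral solve can be shown with a tiny Fourier-based Poisson solver on a periodic interval: transform the source, divide each mode by -k^2, and transform back. A naive O(N^2) DFT is used here for self-containment; a production code would use an FFT. The test problem and grid size are illustrative only.

```python
import cmath
import math

def dft(a):
    n = len(a)
    return [sum(a[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(A):
    n = len(A)
    return [sum(A[k] * cmath.exp(2j * math.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

def poisson_periodic(rho, length):
    """Solve phi'' = rho on a periodic interval spectrally:
    phi_k = -rho_k / k^2 per mode (k = 0 dropped, giving the
    zero-mean solution)."""
    n = len(rho)
    R = dft(rho)
    P = [0.0 + 0.0j] * n
    for k in range(1, n):
        kk = k if k <= n // 2 else k - n      # signed wavenumber index
        k_phys = 2.0 * math.pi * kk / length
        P[k] = -R[k] / (k_phys ** 2)
    return [z.real for z in idft(P)]

n, L = 64, 2.0 * math.pi
xs = [L * i / n for i in range(n)]
rho = [-math.sin(3 * x) for x in xs]          # phi'' = -sin(3x)
phi = poisson_periodic(rho, L)
exact = [math.sin(3 * x) / 9.0 for x in xs]   # analytic solution
```

Because the source is a single Fourier mode, the spectral solution matches the analytic one to machine precision; this spectral accuracy on smooth fields is the main attraction of the methods surveyed above.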
Electromagnetic simulation using the FDTD method
Sullivan, Dennis M
2013-01-01
A straightforward, easy-to-read introduction to the finite-difference time-domain (FDTD) method Finite-difference time-domain (FDTD) is one of the primary computational electrodynamics modeling techniques available. Since it is a time-domain method, FDTD solutions can cover a wide frequency range with a single simulation run and treat nonlinear material properties in a natural way. Written in a tutorial fashion, starting with the simplest programs and guiding the reader up from one-dimensional to the more complex, three-dimensional programs, this book provides a simple, yet comp
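A minimal 1D free-space FDTD loop, in the spirit of the book's introductory programs (normalized units, Courant number 0.5, additive "soft" Gaussian source; the grid size and source parameters are arbitrary):

```python
import math

def fdtd_1d(steps, size, src_pos):
    """Bare-bones 1D free-space FDTD update: interleaved E and H sweeps
    with coupling coefficient 0.5 (Courant number 0.5, normalized units)
    and a soft Gaussian source added to the E field."""
    ex = [0.0] * size
    hy = [0.0] * size
    for t in range(steps):
        for k in range(1, size):
            ex[k] += 0.5 * (hy[k - 1] - hy[k])
        ex[src_pos] += math.exp(-0.5 * ((t - 30) / 8.0) ** 2)  # source pulse
        for k in range(size - 1):
            hy[k] += 0.5 * (ex[k] - ex[k + 1])
    return ex

ex = fdtd_1d(steps=100, size=200, src_pos=100)
```

After 100 steps the pulse has split and traveled roughly 35 cells in each direction from the source, while the grid boundaries are still essentially undisturbed (no absorbing boundary is needed yet at this short run time).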
Computational fluid dynamics simulations and validations of results
CSIR Research Space (South Africa)
Sitek, MA
2013-09-01
Full Text Available Wind flow influence on a high-rise building is analyzed. The research covers full-scale tests, wind-tunnel experiments and numerical simulations. In the present paper computational model used in simulations is described and the results, which were...
The WOMBAT Attack Attribution Method: Some Results
Dacier, Marc; Pham, Van-Hau; Thonnard, Olivier
In this paper, we present a new attack attribution method that has been developed within the WOMBAT project. We illustrate the method with some real-world results obtained when applying it to almost two years of attack traces collected by low interaction honeypots. This analytical method aims at identifying large scale attack phenomena composed of IP sources that are linked to the same root cause. All malicious sources involved in a same phenomenon constitute what we call a Misbehaving Cloud (MC). The paper offers an overview of the various steps the method goes through to identify these clouds, providing pointers to external references for more detailed information. Four instances of misbehaving clouds are then described in some more depth to demonstrate the meaningfulness of the concept.
Method of simulating dose reduction for digital radiographic systems
International Nuclear Information System (INIS)
Baath, M.; Haakansson, M.; Tingberg, A.; Maansson, L. G.
2005-01-01
The optimisation of image quality vs. radiation dose is an important task in medical imaging. To obtain maximum validity of the optimisation, it must be based on clinical images. Images at different dose levels can then be obtained either by collecting patient images at each dose level under investigation - which requires additional exposures and permission from an ethical committee - or by manipulating images to simulate different dose levels. The aim of the present work was to develop a method of simulating dose reduction for digital radiographic systems. The method uses information about the detective quantum efficiency and noise power spectrum at the original and simulated dose levels to create an image containing filtered noise. When added to the original image, this results in an image whose noise, in terms of frequency content, agrees with the noise present in an image collected at the simulated dose level. To increase validity, the method takes local dose variations in the original image into account. The method was tested on a computed radiography system and was shown to produce images with noise behaviour similar to that of images actually collected at the simulated dose levels. The method can, therefore, be used to modify an image collected at one dose level so that it simulates an image of the same object collected at any lower dose level. (authors)
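The core idea, stripped of the NPS/DQE noise shaping the paper adds, can be sketched as follows: to simulate a fraction f of the original dose under a white, quantum-limited noise assumption, add zero-mean Gaussian noise with variance sigma^2 * (1/f - 1) so that the total variance matches the lower dose. All numbers below are hypothetical.

```python
import math
import random

def simulate_dose_reduction(image, sigma_orig, dose_fraction, seed=0):
    """Add zero-mean Gaussian noise so the result has the variance expected
    at dose_fraction of the original exposure, assuming white, quantum-
    limited noise with std sigma_orig at the original dose. (The published
    method additionally filters the noise with the system NPS/DQE and
    accounts for local dose variations; both steps are omitted here.)"""
    rng = random.Random(seed)
    extra_sigma = sigma_orig * math.sqrt(1.0 / dose_fraction - 1.0)
    return [p + rng.gauss(0.0, extra_sigma) for p in image]

# Hypothetical flat-field image: mean 100, noise std 2 at the full dose.
rng = random.Random(1)
img = [100.0 + rng.gauss(0.0, 2.0) for _ in range(20000)]
# Simulate a quarter-dose image: expected noise std = 2 / sqrt(0.25) = 4.
low = simulate_dose_reduction(img, sigma_orig=2.0, dose_fraction=0.25)
```

The added variance is 2^2 * (1/0.25 - 1) = 12, which on top of the original variance of 4 gives the quarter-dose variance of 16, while the mean signal is unchanged.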
Activity coefficients from molecular simulations using the OPAS method
Kohns, Maximilian; Horsch, Martin; Hasse, Hans
2017-10-01
A method for determining activity coefficients by molecular dynamics simulations is presented. It is an extension of the OPAS (osmotic pressure for the activity of the solvent) method in previous work for studying the solvent activity in electrolyte solutions. That method is extended here to study activities of all components in mixtures of molecular species. As an example, activity coefficients in liquid mixtures of water and methanol are calculated for 298.15 K and 323.15 K at 1 bar using molecular models from the literature. These dense and strongly interacting mixtures pose a significant challenge to existing methods for determining activity coefficients by molecular simulation. It is shown that the new method yields accurate results for the activity coefficients which are in agreement with results obtained with a thermodynamic integration technique. As the partial molar volumes are needed in the proposed method, the molar excess volume of the system water + methanol is also investigated.
Reliability analysis of neutron transport simulation using Monte Carlo method
International Nuclear Information System (INIS)
Souza, Bismarck A. de; Borges, Jose C.
1995-01-01
This work presents a statistical and reliability analysis covering data obtained by computer simulation of the neutron transport process, using the Monte Carlo method. A general description of the method and its applications is presented. Several simulations, corresponding to slowing-down and shielding problems, have been accomplished. The influence of the physical dimensions of the materials and of the sample size on the reliability level of the results was investigated. The objective was to optimize the sample size, in order to obtain reliable results while optimizing computation time. (author). 5 refs, 8 figs
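The sample-size effect discussed above can be illustrated with a minimal Monte Carlo estimate of uncollided transmission through a slab (a textbook stand-in, not the authors' slowing-down or shielding setup): sample exponential free paths and count the fraction exceeding the slab thickness. The statistical error shrinks as the sample grows, which is exactly the reliability-vs-cost trade-off the abstract studies.

```python
import math
import random

def transmission_mc(mu, thickness, n_samples, seed=0):
    """Monte Carlo estimate of uncollided transmission through a slab:
    sample free paths s = -ln(xi) / mu and count particles with
    s > thickness. The analytic answer is exp(-mu * thickness)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if -math.log(rng.random()) / mu > thickness)
    return hits / n_samples

exact = math.exp(-2.0)                      # mu = 1, thickness = 2
small = transmission_mc(1.0, 2.0, 100)      # noisy estimate
large = transmission_mc(1.0, 2.0, 100000)   # tight estimate
```

For a binomial estimator the standard error scales as 1/sqrt(n), so the 100,000-sample run is expected to land within about 0.001 of the analytic value, while the 100-sample run can easily be off by several percent.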
A new method for simulating human emotions
Institute of Scientific and Technical Information of China (English)
Anonymous
2003-01-01
How to make machines express emotions would be instrumental in establishing a completely new paradigm for man-machine interaction. A new method for simulating and assessing artificial psychology has been developed for research on emotion robots. Human psychological activity is regarded as a Markov process, and an emotion space and psychology model is constructed based on the Markov process. The concept of emotion entropy is presented to assess artificial emotion complexity. The simulation results agree well with human psychological activity. This model can also be applied to consumer-friendly human-computer interfaces, interactive video, etc.
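A minimal sketch of a Markov-process emotion model: a hypothetical three-state transition matrix, its stationary distribution found by power iteration, and a Shannon-entropy score playing the role of the abstract's "emotion entropy". The states and probabilities are illustrative only, not the paper's model.

```python
import math

def stationary(P, iters=200):
    """Stationary distribution of a row-stochastic matrix by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def emotion_entropy(pi):
    """Shannon entropy (bits) of the emotion-state distribution; higher
    entropy means a more complex, less predictable emotional profile."""
    return -sum(p * math.log2(p) for p in pi if p > 0)

# Hypothetical 3-state emotion chain: calm, happy, upset.
P = [[0.8, 0.15, 0.05],
     [0.3, 0.60, 0.10],
     [0.4, 0.10, 0.50]]
pi = stationary(P)
H = emotion_entropy(pi)
```

The entropy is bounded by log2(3) bits for three states; a chain that strongly favors one emotion scores low, a chain that wanders evenly scores near the bound.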
Performance evaluation of sea surface simulation methods for target detection
Xia, Renjie; Wu, Xin; Yang, Chen; Han, Yiping; Zhang, Jianqi
2017-11-01
With the fast development of sea surface target detection by optoelectronic sensors, machine learning has been adopted to improve detection performance. Many features can be learned from training images automatically. However, field images of sea surface targets are not sufficient as training data. 3D scene simulation is a promising method to address this problem. For ocean scene simulation, sea surface height field generation is the key to achieving high fidelity. In this paper, two spectrum-based height field generation methods are evaluated. A comparison between the linear superposition and linear filter methods is made quantitatively with a statistical model. 3D ocean scene simulation results show the different features of the methods, which can provide a reference for synthesizing sea surface target images under different ocean conditions.
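The linear superposition method mentioned above can be sketched directly: sum cosine components with random phases and amplitudes a_k = sqrt(2 S(k) dk) drawn from a target wave spectrum. The power-law spectrum below is a hypothetical stand-in for a real ocean spectrum such as Pierson-Moskowitz, and the sketch is 1D for brevity.

```python
import math
import random

def sea_surface_superposition(xs, spectrum, ks, seed=0):
    """Linear-superposition height field: sum spectral components with
    random phases, amplitude a_k = sqrt(2 * S(k) * dk) per component."""
    rng = random.Random(seed)
    dk = ks[1] - ks[0]
    comps = [(math.sqrt(2.0 * spectrum(k) * dk), k, rng.uniform(0.0, 2.0 * math.pi))
             for k in ks]
    return [sum(a * math.cos(k * x + ph) for a, k, ph in comps) for x in xs]

# Hypothetical power-law spectrum (illustrative stand-in only).
S = lambda k: 0.01 / k ** 3
ks = [0.1 * i for i in range(1, 101)]     # wavenumbers 0.1 .. 10
xs = [0.5 * i for i in range(400)]        # sample points along the surface
h = sea_surface_superposition(xs, S, ks)
```

A useful sanity check, and the basis of the statistical comparison the paper performs, is that the variance of the generated height field should approximate the integral of the spectrum, sum S(k) dk.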
Electron-cloud simulation results for the PSR and SNS
International Nuclear Information System (INIS)
Pivi, M.; Furman, M.A.
2002-01-01
We present recent simulation results for the main features of the electron cloud in the storage ring of the Spallation Neutron Source (SNS) at Oak Ridge, and updated results for the Proton Storage Ring (PSR) at Los Alamos. In particular, a complete refined model for the secondary emission process, including the so-called true secondary, rediffused and backscattered electrons, has been included in the simulation code
Electron-cloud simulation results for the SPS and recent results for the LHC
International Nuclear Information System (INIS)
Furman, M.A.; Pivi, M.T.F.
2002-01-01
We present an update of computer simulation results for some features of the electron cloud at the Large Hadron Collider (LHC) and recent simulation results for the Super Proton Synchrotron (SPS). We focus on the sensitivity of the power deposition on the LHC beam screen to the emitted electron spectrum, which we study by means of a refined secondary electron (SE) emission model recently included in our simulation code
A nondissipative simulation method for the drift kinetic equation
International Nuclear Information System (INIS)
Watanabe, Tomo-Hiko; Sugama, Hideo; Sato, Tetsuya
2001-07-01
With the aim of studying the ion temperature gradient (ITG) driven turbulence, a nondissipative kinetic simulation scheme is developed and comprehensively benchmarked. The new simulation method, preserving the time-reversibility of the basic kinetic equations, can successfully reproduce the analytical solutions of the asymmetric three-mode ITG equations, which are extended to provide a more general reference for benchmarking than the previous work [T.-H. Watanabe, H. Sugama, and T. Sato: Phys. Plasmas 7 (2000) 984]. It is also applied to a dissipative three-mode system, and shows good agreement with the analytical solution. The nondissipative simulation result of the ITG turbulence accurately satisfies the entropy balance equation. The usefulness of the nondissipative method for drift kinetic simulations is confirmed in comparisons with other dissipative schemes. (author)
Hospital Registration Process Reengineering Using Simulation Method
Directory of Open Access Journals (Sweden)
Qiang Su
2010-01-01
Full Text Available With increasing competition, many healthcare organizations have undergone tremendous reform in the last decade, aiming to increase efficiency, decrease waste, and reshape the way that care is delivered. This study focuses on improving the operational efficiency of the hospital registration process. The factors related to operational efficiency, including the service process, queue strategy, and queue parameters, were explored systematically and illustrated with a case study. Guided by the principle of business process reengineering (BPR), a simulation approach was employed for process redesign and performance optimization. As a result, the queue strategy is changed from multiple queues with multiple servers to a single queue with multiple servers and a prepare queue. Furthermore, through a series of simulation experiments, the length of the prepare queue and the corresponding registration process efficiency were quantitatively evaluated and optimized.
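The benefit of the redesigned queue strategy can be reproduced with a toy discrete simulation: under identical arrival and service samples, one shared queue feeding several registration windows yields a shorter mean wait than letting each patient pick a per-window queue at random. The arrival and service rates below are made up for illustration, not the case-study data.

```python
import heapq
import random

def mean_wait_single_queue(arrivals, service, servers):
    """Mean wait when all patients join one queue feeding several windows."""
    free = [0.0] * servers            # times at which each window frees up
    heapq.heapify(free)
    total = 0.0
    for t, s in zip(arrivals, service):
        free_at = heapq.heappop(free)  # earliest available window
        start = max(t, free_at)
        total += start - t
        heapq.heappush(free, start + s)
    return total / len(arrivals)

def mean_wait_per_window_queues(arrivals, service, servers, seed=0):
    """Mean wait when each patient picks a window queue at random
    (no switching between queues afterwards)."""
    rng = random.Random(seed)
    free = [0.0] * servers
    total = 0.0
    for t, s in zip(arrivals, service):
        q = rng.randrange(servers)
        start = max(t, free[q])
        total += start - t
        free[q] = start + s
    return total / len(arrivals)

# Hypothetical load: Poisson arrivals (rate 2.5), exponential service
# (mean 1), three registration windows -> about 83% utilization.
rng = random.Random(42)
n = 20000
t, arrivals = 0.0, []
for _ in range(n):
    t += rng.expovariate(2.5)
    arrivals.append(t)
service = [rng.expovariate(1.0) for _ in range(n)]
w_single = mean_wait_single_queue(arrivals, service, 3)
w_multi = mean_wait_per_window_queues(arrivals, service, 3)
```

The shared queue wins because it never leaves a window idle while a patient waits elsewhere, which is the classic pooling argument behind the redesigned process.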
Adaptive implicit method for thermal compositional reservoir simulation
Energy Technology Data Exchange (ETDEWEB)
Agarwal, A.; Tchelepi, H.A. [Society of Petroleum Engineers, Richardson, TX (United States)]|[Stanford Univ., Palo Alto (United States)
2008-10-15
As the global demand for oil increases, thermal enhanced oil recovery techniques are becoming increasingly important. Numerical reservoir simulation of thermal methods such as steam assisted gravity drainage (SAGD) is complex and requires a solution of nonlinear mass and energy conservation equations on a fine reservoir grid. The most commonly used technique for solving these equations is the fully IMplicit (FIM) method, which is unconditionally stable, allowing for large timesteps in simulation. However, it is computationally expensive. On the other hand, the method known as IMplicit pressure explicit saturations, temperature and compositions (IMPEST) is computationally inexpensive, but it is only conditionally stable and restricts the timestep size. To improve the balance between the timestep size and computational cost, the thermal adaptive IMplicit (TAIM) method uses stability criteria and a switching algorithm, where some simulation variables such as pressure, saturations, temperature and compositions are treated implicitly while others are treated with explicit schemes. This presentation described ongoing research on TAIM with particular reference to thermal displacement processes: the stability criteria that dictate the maximum allowed timestep size for simulation, based on the von Neumann linear stability analysis method; the switching algorithm that adapts the labeling of reservoir variables as implicit or explicit as a function of space and time; and complex physical behaviors such as heat and fluid convection, thermal conduction and compressibility. Key numerical results obtained by enhancing Stanford's General Purpose Research Simulator (GPRS) were also presented along with a list of research challenges. 14 refs., 2 tabs., 11 figs., 1 appendix.
A calculation method for RF couplers design based on numerical simulation by microwave studio
International Nuclear Information System (INIS)
Wang Rong; Pei Yuanji; Jin Kai
2006-01-01
A numerical simulation method for coupler design is proposed. It is based on the matching procedure for the 2π/3 structure given by Dr. R.L. Kyhl. The Microwave Studio EigenMode Solver is used for the numerical simulation. The simulation of a coupler has been completed with this method, and the simulation data are compared with experimental measurements. The results show that this numerical simulation method is feasible for coupler design. (authors)
A mixed finite element method for particle simulation in lasertron
International Nuclear Information System (INIS)
Le Meur, G.
1987-03-01
A particle simulation code is being developed with the aim of treating the motion of charged particles in electromagnetic devices, such as the Lasertron. The paper describes the use of mixed finite element methods in computing the field components, without deriving them from scalar or vector potentials. Graphical results are shown
A mixed finite element method for particle simulation in Lasertron
International Nuclear Information System (INIS)
Le Meur, G.
1987-01-01
A particle simulation code is being developed with the aim of treating the motion of charged particles in electromagnetic devices, such as the Lasertron. The paper describes the use of mixed finite element methods in computing the field components, without deriving them from scalar or vector potentials. Graphical results are shown
Dynamical simulation of heavy ion collisions; VUU and QMD method
International Nuclear Information System (INIS)
Niita, Koji
1992-01-01
We review two simulation methods based on the Vlasov-Uehling-Uhlenbeck (VUU) equation and Quantum Molecular Dynamics (QMD), which are the most widely accepted theoretical frameworks for the description of intermediate-energy heavy-ion reactions. We show some results of the calculations and compare them with the experimental data. (author)
Simulating water hammer with corrective smoothed particle method
Hou, Q.; Kruisbrink, A.C.H.; Tijsseling, A.S.; Keramat, A.
2012-01-01
The corrective smoothed particle method (CSPM) is used to simulate water hammer. The spatial derivatives in the water-hammer equations are approximated by a corrective kernel estimate. For the temporal derivatives, the Euler-forward time integration algorithm is employed. The CSPM results are in
Rare event simulation using Monte Carlo methods
Rubino, Gerardo
2009-01-01
In a probabilistic model, a rare event is an event with a very small probability of occurrence. The forecasting of rare events is a formidable task but is important in many areas: for instance, a catastrophic failure in a transport system or in a nuclear power plant, or the failure of an information processing system in a bank or in the communication network of a group of banks, leading to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, the simulation of the corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields, ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...
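Importance sampling, one of the two techniques the book presents, can be shown in a few lines: to estimate a Gaussian tail probability P(Z > 4), sample from a proposal shifted into the rare region and weight each sample by the likelihood ratio of the target to the proposal density. The threshold and sample counts are arbitrary choices for illustration.

```python
import math
import random

def rare_prob_naive(threshold, n, seed=0):
    """Crude Monte Carlo: almost no samples land in the rare region,
    so the estimate is typically 0 or wildly noisy."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n) if rng.gauss(0.0, 1.0) > threshold) / n

def rare_prob_importance(threshold, n, seed=0):
    """Importance sampling with proposal N(threshold, 1); each sample is
    weighted by the likelihood ratio
    phi(x) / phi(x - threshold) = exp(-threshold*x + threshold^2/2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)
        if x > threshold:
            total += math.exp(-threshold * x + 0.5 * threshold ** 2)
    return total / n

exact = 3.1671e-05                      # P(Z > 4) for a standard normal
naive_est = rare_prob_naive(4.0, 20000)
is_est = rare_prob_importance(4.0, 20000)
```

With 20,000 samples the naive estimator expects fewer than one hit, while the importance-sampling estimator lands within a few percent of the true tail probability, illustrating the variance reduction that makes rare-event simulation tractable.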
Computational Simulations and the Scientific Method
Kleb, Bil; Wood, Bill
2005-01-01
As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.
Efficient method for transport simulations in quantum cascade lasers
Directory of Open Access Journals (Sweden)
Maczka Mariusz
2017-01-01
Full Text Available An efficient method for simulating quantum transport in quantum cascade lasers is presented. The calculations are performed within a simple approximation inspired by Büttiker probes and based on a finite model for semiconductor superlattices. The formalism of non-equilibrium Green's functions is applied to determine selected transport parameters in a typical structure of a terahertz laser. Results are compared with those obtained for an infinite model as well as with other methods described in the literature.
The frontal method in hydrodynamics simulations
Walters, R.A.
1980-01-01
The frontal solution method has proven to be an effective means of solving the matrix equations resulting from the application of the finite element method to a variety of problems. In this study, several versions of the frontal method were compared in efficiency for several hydrodynamics problems. Three basic modifications were shown to be of value: 1. Elimination of equations with boundary conditions beforehand, 2. Modification of the pivoting procedures to allow dynamic management of the equation size, and 3. Storage of the eliminated equations in a vector. These modifications are sufficiently general to be applied to other classes of problems. © 1980.
Energy Technology Data Exchange (ETDEWEB)
HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.; FANG,HUEI ELIOT; RINTOUL,MARK DANIEL; VEDULA,VENKATA R.; GLASS,S. JILL; KNOROVSKY,GERALD A.; NEILSEN,MICHAEL K.; WELLMAN,GERALD W.; SULSKY,DEBORAH; SHEN,YU-LIN; SCHREYER,H. BUCK
2000-04-01
Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.
German precursor study: methods and results
International Nuclear Information System (INIS)
Hoertner, H.; Frey, W.; von Linden, J.; Reichart, G.
1985-01-01
This study has been prepared by the GRS by contract of the Federal Minister of Interior. The purpose of the study is to show how the application of system-analytic tools and especially of probabilistic methods on the Licensee Event Reports (LERs) and on other operating experience can support a deeper understanding of the safety-related importance of the events reported in reactor operation, the identification of possible weak points, and further conclusions to be drawn from the events. Additionally, the study aimed at a comparison of its results for the severe core damage frequency with those of the German Risk Study as far as this is possible and useful. The German Precursor Study is a plant-specific study. The reference plant is Biblis NPP with its very similar Units A and B, whereby the latter was also the reference plant for the German Risk Study
Mechanics of Nanostructures: Methods and Results
Ruoff, Rod
2003-03-01
We continue to develop and use new tools to measure the mechanics and electromechanics of nanostructures. Here we discuss: (a) methods for making nanoclamps and the resulting nanoclamp geometry, chemical composition and type of chemical bonding, and nanoclamp strength (effectiveness as a nanoclamp for the mechanics measurements to be made); (b) mechanics of carbon nanocoils. We have received carbon nanocoils from colleagues in Japan [1], measured their spring constants, and have observed extensions exceeding 100% relative to the unloaded length, using our scanning electron microscope nanomanipulator tool; (c) several new devices that are essentially MEMS-based, which allow for improved measurements of the mechanics of pseudo-1D and planar nanostructures. [1] Zhang M., Nakayama Y., Pan L., Japanese J. Appl. Phys. 39, L1242-L1244 (2000).
MONTE CARLO METHOD AND APPLICATION IN @RISK SIMULATION SYSTEM
Directory of Open Access Journals (Sweden)
Gabriela Ižaríková
2015-12-01
Full Text Available The article is an example of using the @Risk simulation software, designed for simulation in a Microsoft Excel spreadsheet, to demonstrate a universal method of solving problems. Simulation means experimenting with computer models based on a real production process in order to optimize the production processes or the system. A simulation model allows performing a number of experiments, analysing them, evaluating, optimizing and afterwards applying the results to the real system. A simulation model in general represents the modelled system by means of mathematical formulations and logical relations. In the model it is possible to distinguish controlled inputs (for instance investment costs) and random inputs (for instance demand), which the model transforms into outputs (for instance the mean value of profit). In a simulation experiment the controlled inputs are chosen at the beginning and the random (stochastic) inputs are generated randomly. Simulations belong to the quantitative tools which can be used as a support for decision making.
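The controlled-input/random-input logic described above can be sketched in a few lines of Python; the cost, price and demand figures below are hypothetical illustrations, not values from the article:

```python
import random

def mean_profit(n_trials, investment_cost, unit_price=4.0, seed=0):
    """Monte Carlo estimate of the mean profit.

    investment_cost is the controlled input; demand is the random
    (stochastic) input; profit is the output. All numbers are illustrative.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        demand = max(rng.gauss(1000.0, 150.0), 0.0)   # random input
        total += unit_price * demand - investment_cost  # output per trial
    return total / n_trials

estimate = mean_profit(10_000, investment_cost=2500.0)
```

Each trial draws one demand scenario and transforms it into a profit; averaging over many trials approximates the mean value of the output distribution, exactly the experiment-repetition idea the abstract describes.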
Simulation of Rossi-α method with analog Monte-Carlo method
International Nuclear Information System (INIS)
Lu Yuzhao; Xie Qilin; Song Lingli; Liu Hangang
2012-01-01
An analog Monte-Carlo code for simulating the Rossi-α method, based on Geant4, was developed. The prompt neutron decay constant α of six metal uranium configurations at Oak Ridge National Laboratory was calculated. α was also calculated by the burst-neutron method, and the result was consistent with that of the Rossi-α method. There are differences between the results of the analog Monte-Carlo simulation and the experiment; the reason for the differences is the gaps between uranium layers. The influence of the gaps decreases as the sub-criticality deepens. The relative difference between the results of the analog Monte-Carlo simulation and the experiment ranges from 19% to 0.19%. (authors)
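For readers unfamiliar with the Rossi-α technique referenced above: the correlated-count histogram follows p(t) = A + B·exp(-αt), and α can be recovered from the exponential tail. The sketch below fits synthetic, noise-free data with made-up parameter values; it is an illustration of the fitting idea, not the paper's analysis code:

```python
import math

def fit_rossi_alpha(times, counts, background):
    """Recover the prompt-neutron decay constant alpha from a Rossi-alpha
    histogram p(t) = A + B*exp(-alpha*t) by linear regression of
    log(p - A) against t (A is the uncorrelated background level)."""
    xs, ys = [], []
    for t, c in zip(times, counts):
        excess = c - background          # correlated part B*exp(-alpha*t)
        if excess > 0:
            xs.append(t)
            ys.append(math.log(excess))  # log-linear in t
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope

alpha_true = 250.0  # s^-1, illustrative value only
ts = [i * 1e-4 for i in range(1, 50)]
cs = [10.0 + 40.0 * math.exp(-alpha_true * t) for t in ts]
alpha_est = fit_rossi_alpha(ts, cs, background=10.0)
```

With noise-free data the regression recovers the decay constant essentially exactly; real histograms would require counting-statistics weights.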
Computerized simulation methods for dose reduction, in radiodiagnosis
International Nuclear Information System (INIS)
Brochi, M.A.C.
1990-01-01
The present work presents computational methods that allow the simulation of any situation encountered in diagnostic radiology. Parameters of radiographic techniques that yield a previously chosen standard radiographic image are studied, so that the radiation dose absorbed by the patient can be compared. Initially the method was tested on a simple system composed of 5.0 cm of water and 1.0 mm of aluminium and, after verifying its validity experimentally, it was applied to breast and arm-fracture radiographs. It was observed that the choice of the filter material is not an important factor, because aluminium, iron, copper, gadolinium and other filters presented analogous behaviours. A method of comparison of materials based on spectral matching is shown. Both the results given by this simulation method and the experimental measurements indicate an equivalence of brass and copper, both more efficient than aluminium in terms of exposure time, but not of dose. (author)
Simulation methods with extended stability for stiff biochemical Kinetics
Directory of Open Access Journals (Sweden)
Rué Pau
2010-08-01
Full Text Available Abstract Background With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, τ, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where τ can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called τ-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as τ grows. Results In this paper we extend Poisson τ-leap methods to a general class of Runge-Kutta (RK) τ-leap methods. We show that with the proper selection of the coefficients, the variance of the extended τ-leap can be well-behaved, leading to significantly larger step sizes. Conclusions The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original τ-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
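As a concrete illustration of the plain Poisson τ-leap that the abstract builds on, the sketch below advances a single decay reaction X → ∅ with a fixed step τ; the rate constant and population are arbitrary illustrative values, not taken from the paper:

```python
import math
import random

def sample_poisson(rng, lam):
    """Knuth's multiplicative algorithm; adequate for the small means here."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def tau_leap_decay(x0, rate, tau, t_end, seed=1):
    """Poisson tau-leap for X -> 0: each leap fires Poisson(a(x)*tau)
    reaction events at once instead of simulating them one by one."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < t_end and x > 0:
        propensity = rate * x
        fired = min(x, sample_poisson(rng, propensity * tau))  # keep x >= 0
        x -= fired
        t += tau
    return x

remaining = tau_leap_decay(x0=1000, rate=0.1, tau=0.1, t_end=10.0)
```

One Poisson draw per leap replaces many individual SSA events; the RK extension the paper proposes keeps this one-set-per-step cost while taming the variance growth.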
High viscosity fluid simulation using particle-based method
Chang, Yuanzhang
2011-03-01
We present a new particle-based method for high viscosity fluid simulation. In the method, a new elastic stress term, derived from a modified form of Hooke's law, is included in the traditional Navier-Stokes equation to simulate the movements of high viscosity fluids. Benefiting from the Lagrangian nature of the Smoothed Particle Hydrodynamics method, large flow deformation can be handled easily and naturally. In addition, in order to eliminate the particle deficiency problem near the boundary, ghost particles are employed to enforce the solid boundary condition. Compared with finite element methods, which require complicated and time-consuming remeshing operations, our method is much more straightforward to implement. Moreover, our method doesn't need to store and compare to an initial rest state. The experimental results show that the proposed method is effective and efficient in handling the movements of highly viscous flows, and a large variety of different kinds of fluid behaviors can be well simulated by adjusting just one parameter. © 2011 IEEE.
Experiment vs simulation RT WFNDEC 2014 benchmark: CIVA results
International Nuclear Information System (INIS)
Tisseur, D.; Costin, M.; Rattoni, B.; Vienne, C.; Vabre, A.; Cattiaux, G.; Sollier, T.
2015-01-01
The French Alternative Energies and Atomic Energy Commission (CEA) has for many years developed the CIVA software dedicated to the simulation of NDE techniques such as Radiographic Testing (RT). RT modelling is achieved in CIVA using a combination of a deterministic approach based on ray tracing for transmission beam simulation and a Monte Carlo model for the scattered beam computation. Furthermore, CIVA includes various detector models, in particular common x-ray films and photostimulable phosphor plates. This communication presents the results obtained with the configurations proposed in the World Federation of NDEC 2014 RT modelling benchmark with the RT models implemented in the CIVA software
Experiment vs simulation RT WFNDEC 2014 benchmark: CIVA results
Energy Technology Data Exchange (ETDEWEB)
Tisseur, D., E-mail: david.tisseur@cea.fr; Costin, M.; Rattoni, B.; Vienne, C.; Vabre, A.; Cattiaux, G. [CEA LIST, CEA Saclay 91191 Gif sur Yvette Cedex (France); Sollier, T. [Institut de Radioprotection et de Sûreté Nucléaire, B.P.17 92262 Fontenay-Aux-Roses (France)
2015-03-31
The French Alternative Energies and Atomic Energy Commission (CEA) has for many years developed the CIVA software dedicated to the simulation of NDE techniques such as Radiographic Testing (RT). RT modelling is achieved in CIVA using a combination of a deterministic approach based on ray tracing for transmission beam simulation and a Monte Carlo model for the scattered beam computation. Furthermore, CIVA includes various detector models, in particular common x-ray films and photostimulable phosphor plates. This communication presents the results obtained with the configurations proposed in the World Federation of NDEC 2014 RT modelling benchmark with the RT models implemented in the CIVA software.
Multiband discrete ordinates method: formalism and results
International Nuclear Information System (INIS)
Luneville, L.
1998-06-01
The multigroup discrete ordinates method is a classical way to solve the (Boltzmann) transport equation for neutral particles. Self-shielding effects are not correctly treated due to large variations of cross sections within a group (in the resonance range). To treat the resonance domain, the multiband method is introduced. The main idea is to divide the cross section domain into bands. We obtain the multiband parameters using the moment method; the code CALENDF provides probability tables for these parameters. We present our implementation in an existing discrete ordinates code: SN1D. We study deep penetration benchmarks and show the improvement of the method in the treatment of self-shielding effects. (author)
Discrete Particle Method for Simulating Hypervelocity Impact Phenomena
Directory of Open Access Journals (Sweden)
Erkai Watson
2017-04-01
Full Text Available In this paper, we introduce a computational model for the simulation of hypervelocity impact (HVI) phenomena which is based on the Discrete Element Method (DEM). Our paper constitutes the first application of DEM to the modeling and simulation of impact events at velocities beyond 5 km/s. We present here the results of a systematic numerical study on HVI of solids. For modeling the solids, we use discrete spherical particles that interact with each other via potentials. In our numerical investigations we are particularly interested in the dynamics of material fragmentation upon impact. We model a typical HVI experiment configuration where a sphere strikes a thin plate and investigate the properties of the resulting debris cloud. We provide a quantitative computational analysis of the resulting debris cloud caused by impact and a comprehensive parameter study by varying key parameters of our model. We compare our findings from the simulations with recent HVI experiments performed at our institute. Our findings are that the DEM method leads to very stable, energy-conserving simulations of HVI scenarios that map the experimental setup where a sphere strikes a thin plate at hypervelocity speed. Our chosen interaction model works particularly well in the velocity range where the local stresses caused by impact shock waves markedly exceed the ultimate material strength.
Experiences using DAKOTA stochastic expansion methods in computational simulations.
Energy Technology Data Exchange (ETDEWEB)
Templeton, Jeremy Alan; Ruthruff, Joseph R.
2012-01-01
Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experimental data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results on the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels for the methodologies that may be needed to achieve convergence.
Thermal shale fracturing simulation using the Cohesive Zone Method (CZM)
Enayatpour, Saeid; van Oort, Eric; Patzek, Tadeusz
2018-01-01
Extensive research has been conducted over the past two decades to improve hydraulic fracturing methods used for hydrocarbon recovery from tight reservoir rocks such as shales. Our focus in this paper is on thermal fracturing of such tight rocks to enhance hydraulic fracturing efficiency. Thermal fracturing is effective in generating small fractures in the near-wellbore zone - or in the vicinity of natural or induced fractures - that may act as initiation points for larger fractures. Previous analytical and numerical results indicate that thermal fracturing in tight rock significantly enhances rock permeability, thereby enhancing hydrocarbon recovery. Here, we present a more powerful way of simulating the initiation and propagation of thermally induced fractures in tight formations using the Cohesive Zone Method (CZM). The advantages of CZM are: 1) CZM simulation is fast compared to similar models which are based on the spring-mass particle method or Discrete Element Method (DEM); 2) unlike DEM, rock material complexities such as scale-dependent failure behavior can be incorporated in a CZM simulation; 3) CZM is capable of predicting the extent of fracture propagation in rock, which is more difficult to determine in a classic finite element approach. We demonstrate that CZM delivers results for the challenging fracture propagation problem of similar accuracy to the eXtended Finite Element Method (XFEM) while reducing complexity and computational effort. Simulation results for thermal fracturing in the near-wellbore zone show the effect of stress anisotropy in fracture propagation in the direction of the maximum horizontal stress. It is shown that CZM can be used to readily obtain the extent and the pattern of induced thermal fractures.
Thermal shale fracturing simulation using the Cohesive Zone Method (CZM)
Enayatpour, Saeid
2018-05-17
Extensive research has been conducted over the past two decades to improve hydraulic fracturing methods used for hydrocarbon recovery from tight reservoir rocks such as shales. Our focus in this paper is on thermal fracturing of such tight rocks to enhance hydraulic fracturing efficiency. Thermal fracturing is effective in generating small fractures in the near-wellbore zone - or in the vicinity of natural or induced fractures - that may act as initiation points for larger fractures. Previous analytical and numerical results indicate that thermal fracturing in tight rock significantly enhances rock permeability, thereby enhancing hydrocarbon recovery. Here, we present a more powerful way of simulating the initiation and propagation of thermally induced fractures in tight formations using the Cohesive Zone Method (CZM). The advantages of CZM are: 1) CZM simulation is fast compared to similar models which are based on the spring-mass particle method or Discrete Element Method (DEM); 2) unlike DEM, rock material complexities such as scale-dependent failure behavior can be incorporated in a CZM simulation; 3) CZM is capable of predicting the extent of fracture propagation in rock, which is more difficult to determine in a classic finite element approach. We demonstrate that CZM delivers results for the challenging fracture propagation problem of similar accuracy to the eXtended Finite Element Method (XFEM) while reducing complexity and computational effort. Simulation results for thermal fracturing in the near-wellbore zone show the effect of stress anisotropy in fracture propagation in the direction of the maximum horizontal stress. It is shown that CZM can be used to readily obtain the extent and the pattern of induced thermal fractures.
Goudarzi, Shervin; Amrollahi, R.; Niknam Sharak, M.
2014-06-01
In this paper the results of the numerical simulation for the Amirkabir Mather-type plasma focus facility (16 kV, 36 μF and 115 nH) in several experiments with argon as working gas at different working conditions (different discharge voltages and gas pressures) are presented and compared with the experimental results. Two different models have been used for the simulation: the five-phase model of Lee and the lumped parameter model of Gonzalez. It is seen that the results (optimum pressures and current signals) of the Lee model at different working conditions show better agreement with experimental values than the lumped parameter model.
International Nuclear Information System (INIS)
Goudarzi, Shervin; Amrollahi, R; Sharak, M Niknam
2014-01-01
In this paper the results of the numerical simulation for the Amirkabir Mather-type plasma focus facility (16 kV, 36 μF and 115 nH) in several experiments with argon as working gas at different working conditions (different discharge voltages and gas pressures) are presented and compared with the experimental results. Two different models have been used for the simulation: the five-phase model of Lee and the lumped parameter model of Gonzalez. It is seen that the results (optimum pressures and current signals) of the Lee model at different working conditions show better agreement with experimental values than the lumped parameter model.
Numerical methods in simulation of resistance welding
DEFF Research Database (Denmark)
Nielsen, Chris Valentin; Martins, Paulo A.F.; Zhang, Wenqi
2015-01-01
Finite element simulation of resistance welding requires coupling between mechanical, thermal and electrical models. This paper presents the numerical models and their couplings that are utilized in the computer program SORPAS. A mechanical model based on the irreducible flow formulation is utilized... From a resistance welding point of view, the most essential coupling between the above mentioned models is the heat generation by electrical current due to Joule heating. The interaction between multiple objects is another critical feature of the numerical simulation of resistance welding because it influences... the contact area and the distribution of contact pressure. The numerical simulation of resistance welding is illustrated by a spot welding example that includes subsequent tensile shear testing...
Virtual Crowds Methods, Simulation, and Control
Pelechano, Nuria; Allbeck, Jan
2008-01-01
There are many applications of computer animation and simulation where it is necessary to model virtual crowds of autonomous agents. Some of these applications include site planning, education, entertainment, training, and human factors analysis for building evacuation. Other applications include simulations of scenarios where masses of people gather, flow, and disperse, such as transportation centers, sporting events, and concerts. Most crowd simulations include only basic locomotive behaviors possibly coupled with a few stochastic actions. Our goal in this survey is to establish a baseline o
New Results on the Simulation of Particulate Flows
Energy Technology Data Exchange (ETDEWEB)
Uhlmann, M.
2004-07-01
We propose a new immersed boundary method for the simulation of particulate flows. The fluid-solid interaction force is formulated in a direct manner, without resorting to a feed-back mechanism and thereby avoiding the introduction of additional free parameters. The regularized delta function of Peskin (Acta Numerica, 2002) is used to pass variables between Lagrangian and Eulerian representations, providing for a smooth variation of the hydrodynamic forces while particles are in motion relative to the fixed grid. The application of this scheme to several benchmark problems in two space dimensions demonstrates its feasibility and efficiency. (Author) 9 refs.
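The regularized delta function mentioned above can be written down directly; the sketch below implements the standard 4-point kernel from Peskin's Acta Numerica (2002) review with unit grid spacing, as an illustration rather than the paper's own code:

```python
import math

def peskin_delta(r):
    """Peskin's 4-point regularized delta (one dimension, grid spacing h = 1).

    It is used to spread Lagrangian particle forces onto the Eulerian grid
    and to interpolate grid velocities back onto the particles.
    """
    a = abs(r)
    if a <= 1.0:
        return (3.0 - 2.0 * a + math.sqrt(1.0 + 4.0 * a - 4.0 * a * a)) / 8.0
    if a <= 2.0:
        return (5.0 - 2.0 * a - math.sqrt(-7.0 + 12.0 * a - 4.0 * a * a)) / 8.0
    return 0.0

# The weights over the four neighbouring grid nodes sum to one, which is
# what makes the force spreading conservative.
x = 0.3  # particle position between grid nodes
weights = [peskin_delta(x - node) for node in (-1, 0, 1, 2)]
```

The kernel's compact support (two cells on either side) and exact partition of unity are what give the smooth force variation as particles move across the fixed grid.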
New Results on the Simulation of Particulate Flows
International Nuclear Information System (INIS)
Uhlmann, M.
2004-01-01
We propose a new immersed boundary method for the simulation of particulate flows. The fluid-solid interaction force is formulated in a direct manner, without resorting to a feed-back mechanism and thereby avoiding the introduction of additional free parameters. The regularized delta function of Peskin (Acta Numerica, 2002) is used to pass variables between Lagrangian and Eulerian representations, providing for a smooth variation of the hydrodynamic forces while particles are in motion relative to the fixed grid. The application of this scheme to several benchmark problems in two space dimensions demonstrates its feasibility and efficiency. (Author) 9 refs.
Rainout assessment: the ACRA system and summaries of simulation results
International Nuclear Information System (INIS)
Watson, C.W.; Barr, S.; Allenson, R.E.
1977-09-01
A generalized, three-dimensional, integrated computer code system was developed to estimate collateral-damage threats from precipitation-scavenging (rainout) of airborne debris-clouds from defensive tactical nuclear engagements. This code system, called ACRA for Atmospheric-Contaminant Rainout Assessment, is based on Monte Carlo statistical simulation methods that allow realistic, unbiased simulations of probabilistic storm, wind, and precipitation fields that determine actual magnitudes and probabilities of rainout threats. Detailed models (or data bases) are included for synoptic-scale storm and wind fields; debris transport and dispersal (with the roles of complex flow fields, time-dependent diffusion, and multidimensional shear effects accounted for automatically); microscopic debris-precipitation interactions and scavenging probabilities; air-to-ground debris transport; local demographic features, for assessing actual threats to populations; and nonlinear effects accumulations from multishot scenarios. We simulated several hundred representative shots for West European scenarios and climates to study single-shot and multishot sensitivities of rainout effects to variations in pertinent physical variables
Atmosphere Re-Entry Simulation Using Direct Simulation Monte Carlo (DSMC Method
Directory of Open Access Journals (Sweden)
Francesco Pellicani
2016-05-01
Full Text Available Aerothermodynamic investigations of hypersonic re-entry vehicles provide fundamental information to other important disciplines such as materials and structures, assisting the development of efficient, low-weight thermal protection systems (TPS). In the transitional flow regime, where thermal and chemical equilibrium is almost absent, a new numerical method for such studies has been introduced: the direct simulation Monte Carlo (DSMC) technique. The acceptance and applicability of the DSMC method have increased significantly in the 50 years since its invention thanks to the increase in computer speed and to parallel computing. However, further verification and validation efforts are needed to lead to its greater acceptance. In this study, the Monte Carlo simulators OpenFOAM and SPARTA have been studied and benchmarked against numerical and theoretical data for inert and chemically reactive flows, and the same will be done against experimental data in the near future. The results show the validity of the data found with the DSMC. The best settings of the fundamental parameters used by a DSMC simulator are presented for each software package and compared with the guidelines deriving from the theory behind the Monte Carlo method. In particular, the number of particles per cell was found to be the most relevant parameter for achieving valid and optimized results. It is shown how a simulation with a mean value of one particle per cell gives sufficiently good results with very low computational resources. This achievement aims to reconsider the correct investigation method in the transitional regime, where both the direct simulation Monte Carlo (DSMC) and computational fluid dynamics (CFD) can work, but with a different computational effort.
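The particles-per-cell tuning highlighted above comes down to choosing the DSMC weighting factor (the number of real molecules each simulated particle represents). The helper below is a back-of-the-envelope sketch under assumed density and cell-size values, not part of either simulator's API:

```python
def dsmc_weighting_factor(number_density, cell_volume, target_per_cell):
    """Weighting factor (often called F_num): real molecules represented by
    one simulated particle, chosen so each cell holds about
    target_per_cell simulated particles on average."""
    real_molecules_per_cell = number_density * cell_volume
    return real_molecules_per_cell / target_per_cell

# e.g. n = 1e20 m^-3 in a (1 mm)^3 cell, aiming at one particle per cell
f_num = dsmc_weighting_factor(1e20, 1e-9, target_per_cell=1.0)
```

Raising the target particles-per-cell lowers the weighting factor and the statistical noise, at a proportional cost in memory and runtime, which is the trade-off the study quantifies.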
Same Content, Different Methods: Comparing Lecture, Engaged Classroom, and Simulation.
Raleigh, Meghan F; Wilson, Garland Anthony; Moss, David Alan; Reineke-Piper, Kristen A; Walden, Jeffrey; Fisher, Daniel J; Williams, Tracy; Alexander, Christienne; Niceler, Brock; Viera, Anthony J; Zakrajsek, Todd
2018-02-01
There is a push to use classroom technology and active teaching methods to replace didactic lectures as the most prevalent format for resident education. This multisite collaborative cohort study involving nine residency programs across the United States compared a standard slide-based didactic lecture, a facilitated group discussion via an engaged classroom, and a high-fidelity, hands-on simulation scenario for teaching the topic of acute dyspnea. The primary outcome was knowledge retention at 2 to 4 weeks. Each teaching method was assigned to three different residency programs in the collaborative according to local resources. Learning objectives were determined by faculty. Pre- and posttest questions were validated and utilized as a measurement of knowledge retention. Each site administered the pretest, taught the topic of acute dyspnea utilizing their assigned method, and administered a posttest 2 to 4 weeks later. Differences between the groups were compared using paired t-tests. A total of 146 residents completed the posttest, and scores increased from baseline across all groups. The average score increased 6% in the standard lecture group (n=47), 11% in the engaged classroom (n=53), and 9% in the simulation group (n=56). The differences in improvement between engaged classroom and simulation were not statistically significant. Compared to standard lecture, both engaged classroom and high-fidelity simulation were associated with a statistically significant improvement in knowledge retention. Knowledge retention after engaged classroom and high-fidelity simulation did not significantly differ. More research is necessary to determine if different teaching methods result in different levels of comfort and skill with actual patient care.
Project Oriented Immersion Learning: Method and Results
DEFF Research Database (Denmark)
Icaza, José I.; Heredia, Yolanda; Borch, Ole M.
2005-01-01
A pedagogical approach called “project oriented immersion learning” is presented and tested on a graduate online course. The approach combines the Project Oriented Learning method with immersion learning in a virtual enterprise. Students assumed the role of authors hired by a fictitious publishing house that develops digital products including e-books, tutorials, web sites and so on. The students defined the problem that their product was to solve, chose the type of product and the content, and built the product following a strict project methodology. A wiki server was used as a platform to hold...
Learning phacoemulsification. Results of different teaching methods.
Directory of Open Access Journals (Sweden)
Hennig Albrecht
2004-01-01
Full Text Available We report the learning curves of three eye surgeons converting from sutureless extracapsular cataract extraction to phacoemulsification using different teaching methods. Posterior capsule rupture (PCR) as a per-operative complication and the visual outcome of the first 100 operations were analysed. The PCR rate was 4% and 15% in supervised and unsupervised surgery respectively. Likewise, an uncorrected visual acuity of ≥ 6/18 on the first postoperative day was seen in 62 patients (62%) and in 22 (22%) in supervised and unsupervised surgery respectively.
BWR Full Integral Simulation Test (FIST). Phase I test results
International Nuclear Information System (INIS)
Hwang, W.S.; Alamgir, M.; Sutherland, W.A.
1984-09-01
A new full height BWR system simulator has been built under the Full-Integral-Simulation-Test (FIST) program to investigate system responses to various transients. The test program consists of two test phases. This report provides a summary, discussion, highlights and conclusions of the FIST Phase I tests. Eight matrix tests were conducted in FIST Phase I. These tests investigated large break, small break and steamline break LOCAs, as well as natural circulation and power transients. The results and governing phenomena of each test are evaluated and discussed in detail in this report. One of the FIST program objectives is to assess the TRAC code by comparison with test data. Two pretest predictions made with TRACB02 are presented and compared with test data in this report.
Quantum control with NMR methods: Application to quantum simulations
International Nuclear Information System (INIS)
Negrevergne, Camille
2002-01-01
Manipulating information according to quantum laws allows improvements in the efficiency with which we treat certain problems. Liquid state Nuclear Magnetic Resonance methods allow us to initialize, manipulate and read the quantum state of a system of coupled spins. These methods have been used to realize a small experimental Quantum Information Processor (QIP) able to process information through around one hundred elementary operations. One of the main themes of this work was to design, optimize and validate reliable RF-pulse sequences used to 'program' the QIP. Such techniques have been used to run a quantum simulation algorithm for fermionic systems. Experimental results have been obtained on the determination of eigenenergies and correlation functions for a toy problem consisting of fermions on a lattice, showing an experimental proof of principle for such quantum simulations. (author) [fr
Math-Based Simulation Tools and Methods
National Research Council Canada - National Science Library
Arepally, Sudhakar
2007-01-01
.... The following methods are reviewed: matrix operations, ordinary and partial differential system of equations, Lagrangian operations, Fourier transforms, Taylor Series, Finite Difference Methods, implicit and explicit finite element...
International Nuclear Information System (INIS)
Berthiau, G.
1995-10-01
The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multiobjective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints can also be specified. A similar problem consists in fitting component models. In that case, the optimization variables are the model parameters and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is the simulated annealing method. This method, which comes from the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been adjusted with analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. We propose, for high-dimensional problems, a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we have adapted three other methods coming from the combinatorial optimization domain (the threshold method, a genetic algorithm and the Tabu search method). The tests have been performed on the same set of test functions and the results allow a first comparison between these methods applied to continuous optimization variables. Finally, our simulated annealing program
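The core of the approach described above is a Metropolis-type acceptance rule with a cooling schedule, applied to continuous variables constrained to a hyper-rectangular domain. A minimal Python sketch follows; it is not the author's SPICE-coupled implementation, and the cooling schedule, step size and test function are illustrative assumptions:

```python
import math
import random

def simulated_annealing(cost, lower, upper, n_iter=20000, t0=10.0, seed=0):
    """Minimize `cost` over the hyper-rectangle [lower, upper]^n."""
    rng = random.Random(seed)
    n = len(lower)
    x = [rng.uniform(lower[i], upper[i]) for i in range(n)]
    fx = cost(x)
    best, fbest = list(x), fx
    for k in range(n_iter):
        t = t0 / (1.0 + k)                       # simple cooling schedule
        # perturb one coordinate, clipped to the domain
        i = rng.randrange(n)
        y = list(x)
        step = 0.1 * (upper[i] - lower[i])
        y[i] = min(upper[i], max(lower[i], x[i] + rng.gauss(0.0, step)))
        fy = cost(y)
        # Metropolis acceptance rule: always accept improvements,
        # accept uphill moves with probability exp(-delta/t)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = list(x), fx
    return best, fbest

# Toy use: minimize a 2-D quadratic with known minimum at (1, -2)
best, fbest = simulated_annealing(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                                  lower=[-5, -5], upper=[5, 5])
```

The uphill acceptance is what distinguishes this from plain hill climbing and lets the search escape local minima early on, while the decreasing temperature makes the late phase effectively greedy.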
A Ten-Step Design Method for Simulation Games in Logistics Management
Fumarola, M.; Van Staalduinen, J.P.; Verbraeck, A.
2011-01-01
Simulation games have often been found useful as a method of inquiry to gain insight into complex system behavior and as aids for design, engineering simulation and visualization, and education. Designing simulation games is the result of creative thinking and planning, but often not the result of a
Leiner, Claude; Nemitz, Wolfgang; Schweitzer, Susanne; Kuna, Ladislav; Wenzl, Franz P; Hartmann, Paul; Satzinger, Valentin; Sommer, Christian
2016-03-20
We show that with an appropriate combination of two optical simulation techniques (classical ray tracing and the finite-difference time-domain method), an optical device containing multiple diffractive and refractive optical elements can be accurately simulated in an iterative simulation approach. We compare the simulation results with experimental measurements of the device to discuss the applicability and accuracy of our iterative simulation procedure.
Modeling results for a linear simulator of a divertor
International Nuclear Information System (INIS)
Hooper, E.B.; Brown, M.D.; Byers, J.A.; Casper, T.A.; Cohen, B.I.; Cohen, R.H.; Jackson, M.C.; Kaiser, T.B.; Molvik, A.W.; Nevins, W.M.; Nilson, D.G.; Pearlstein, L.D.; Rognlien, T.D.
1993-01-01
A divertor simulator, IDEAL, has been proposed by S. Cohen to study the difficult power-handling requirements of the tokamak program in general and the ITER program in particular. Projections of the power density in the ITER divertor reach ~1 GW/m^2 along the magnetic fieldlines and >10 MW/m^2 on a surface inclined at a shallow angle to the fieldlines. These power densities are substantially greater than can be handled reliably on the surface, so new techniques are required to reduce the power density to a reasonable level. Although the divertor physics must be demonstrated in tokamaks, a linear device could contribute to the development because of its flexibility, the easy access to the plasma and to tested components, and long-pulse operation (essentially cw). However, a decision to build a simulator requires not just the recognition of its programmatic value, but also confidence that it can meet the required parameters at an affordable cost. Accordingly, as reported here, it was decided to examine the physics of the proposed device, including kinetic effects resulting from the intense heating required to reach the plasma parameters, and to conduct an independent cost estimate. The detailed role of the simulator in a divertor program is not explored in this report
RESULTS OF THE QUESTIONNAIRE: ANALYSIS METHODS
Staff Association
2014-01-01
Five-yearly review of employment conditions Article S V 1.02 of our Staff Rules states that the CERN “Council shall periodically review and determine the financial and social conditions of the members of the personnel. These periodic reviews shall consist of a five-yearly general review of financial and social conditions;” […] “following methods […] specified in § I of Annex A 1”. Then, turning to the relevant part in Annex A 1, we read that “The purpose of the five-yearly review is to ensure that the financial and social conditions offered by the Organization allow it to recruit and retain the staff members required for the execution of its mission from all its Member States. […] these staff members must be of the highest competence and integrity.” And for the menu of such a review we have: “The five-yearly review must include basic salaries and may include any other financial or soc...
A method for ensemble wildland fire simulation
Mark A. Finney; Isaac C. Grenfell; Charles W. McHugh; Robert C. Seli; Diane Trethewey; Richard D. Stratton; Stuart Brittain
2011-01-01
An ensemble simulation system that accounts for uncertainty in long-range weather conditions and two-dimensional wildland fire spread is described. Fuel moisture is expressed based on the energy release component, a US fire danger rating index, and its variation throughout the fire season is modeled using time series analysis of historical weather data. This analysis...
Daylighting simulation: methods, algorithms, and resources
Energy Technology Data Exchange (ETDEWEB)
Carroll, William L.
1999-12-01
This document presents work conducted as part of Subtask C, "Daylighting Design Tools", Subgroup C2, "New Daylight Algorithms", of the IEA SHC Task 21 and the ECBCS Program Annex 29 "Daylight in Buildings". The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, there is a wide range of needs for different types of methods to produce a complete analysis tool. These include: geometry; light modeling; characterization of the natural illumination resource; materials and components properties and representations; and usability issues (interfaces, interoperability, representation of analysis results, etc.). Second, very advantageously, there have been rapid advances in many basic methods in these areas, due to other forces. They are in part driven by: the commercial computer graphics community (commerce, entertainment); the lighting industry; architectural rendering and visualization for projects; and academia (course materials, research). This has led to a very rich set of information resources that have direct applicability to the small daylighting analysis community. Furthermore, much of this information is in fact available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic, and are used to produce a printed form only secondarily. The electronic forms include both online WWW pages and a downloadable PDF file with the same appearance and content. Both electronic forms include live primary and indirect links to actual information sources on the WWW. In most cases, little additional commentary is provided regarding the information links or citations that are provided. This in turn allows the report to be very concise. The links are expected to speak for themselves. The report consists of only about 10+ pages, with about 100+ primary links, but
Hybrid numerical methods for multiscale simulations of subsurface biogeochemical processes
International Nuclear Information System (INIS)
Scheibe, T D; Tartakovsky, A M; Tartakovsky, D M; Redden, G D; Meakin, P
2007-01-01
Many subsurface flow and transport problems of importance today involve coupled non-linear flow, transport, and reaction in media exhibiting complex heterogeneity. In particular, problems involving biological mediation of reactions fall into this class of problems. Recent experimental research has revealed important details about the physical, chemical, and biological mechanisms involved in these processes at a variety of scales ranging from molecular to laboratory scales. However, it has not been practical or possible to translate detailed knowledge at small scales into reliable predictions of field-scale phenomena important for environmental management applications. A large assortment of numerical simulation tools has been developed, each with its own characteristic scale. Important examples include 1. molecular simulations (e.g., molecular dynamics); 2. simulation of microbial processes at the cell level (e.g., cellular automata or particle individual-based models); 3. pore-scale simulations (e.g., lattice-Boltzmann, pore network models, and discrete particle methods such as smoothed particle hydrodynamics); and 4. macroscopic continuum-scale simulations (e.g., traditional partial differential equations solved by finite difference or finite element methods). While many problems can be effectively addressed by one of these models at a single scale, some problems may require explicit integration of models across multiple scales. We are developing a hybrid multi-scale subsurface reactive transport modeling framework that integrates models with diverse representations of physics, chemistry and biology at different scales (sub-pore, pore and continuum). The modeling framework is being designed to take advantage of advanced computational technologies including parallel code components using the Common Component Architecture, parallel solvers, gridding, data and workflow management, and visualization. This paper describes the specific methods/codes being used at each
Electron-cloud updated simulation results for the PSR, and recent results for the SNS
International Nuclear Information System (INIS)
Pivi, M.; Furman, M.A.
2002-01-01
Recent simulation results for the main features of the electron cloud in the storage ring of the Spallation Neutron Source (SNS) at Oak Ridge, and updated results for the Proton Storage Ring (PSR) at Los Alamos, are presented in this paper. A refined model for the secondary emission process, including the so-called true secondary, rediffused and backscattered electrons, has recently been included in the electron-cloud code.
Interactive methods for exploring particle simulation data
Energy Technology Data Exchange (ETDEWEB)
Co, Christopher S.; Friedman, Alex; Grote, David P.; Vay, Jean-Luc; Bethel, E. Wes; Joy, Kenneth I.
2004-05-01
In this work, we visualize high-dimensional particle simulation data using a suite of scatter plot-based visualizations coupled with interactive selection tools. We use traditional 2D and 3D projection scatter plots as well as a novel oriented disk rendering style to convey various information about the data. Interactive selection tools allow physicists to manually classify "interesting" sets of particles that are highlighted across multiple, linked views of the data. The power of our application is the ability to correspond new visual representations of the simulation data with traditional, well understood visualizations. This approach supports the interactive exploration of the high-dimensional space while promoting discovery of new particle behavior.
Numerical simulation methods for electron and ion optics
International Nuclear Information System (INIS)
Munro, Eric
2011-01-01
This paper summarizes currently used techniques for simulation and computer-aided design in electron and ion beam optics. Topics covered include: field computation, methods for computing optical properties (including Paraxial Rays and Aberration Integrals, Differential Algebra and Direct Ray Tracing), simulation of Coulomb interactions, space charge effects in electron and ion sources, tolerancing, wave optical simulations and optimization. Simulation examples are presented for multipole aberration correctors, Wien filter monochromators, imaging energy filters, magnetic prisms, general curved axis systems and electron mirrors.
Application of subset simulation methods to dynamic fault tree analysis
International Nuclear Information System (INIS)
Liu Mengyun; Liu Jingquan; She Ding
2015-01-01
Although fault tree analysis has been implemented in the nuclear safety field over the past few decades, it has recently been criticized for its inability to model time-dependent behaviors. Several methods have been proposed to overcome this disadvantage, and the dynamic fault tree (DFT) has become one of the research highlights. By introducing additional dynamic gates, DFT is able to describe dynamic behaviors such as the replacement of spare components or the priority of failure events. Using the Monte Carlo simulation (MCS) approach to solve DFT has received rising attention, because it can model the authentic behaviors of systems and avoid the limitations of the analytical method. This paper provides an overview of MCS for DFT analysis, including the sampling of basic events and the propagation rule for logic gates. When calculating rare-event probabilities, a large number of simulations is required in standard MCS. To address this weakness, the subset simulation (SS) approach is applied. Using the concept of conditional probability and the Markov Chain Monte Carlo (MCMC) technique, the SS method is able to accelerate the exploration of the failure region. Two cases are tested to illustrate the performance of the SS approach, and the numerical results suggest that it gives high efficiency when calculating complicated systems with small failure probabilities. (author)
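The SS idea — expressing a rare failure probability as a product of larger conditional probabilities, each estimated from MCMC samples constrained above an intermediate level — can be sketched generically. This is a toy standard-normal version, not the authors' DFT implementation; the level fraction p0, chain step size and test case are illustrative assumptions:

```python
import math
import random

def subset_simulation(g, dim, threshold, n=1000, p0=0.1, seed=1):
    """Estimate P(g(X) >= threshold) for X ~ N(0, I) by subset simulation."""
    rng = random.Random(seed)
    nc = int(n * p0)                               # seeds kept per level
    samples = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n)]
    prob = 1.0
    for _ in range(50):                            # cap on the number of levels
        samples.sort(key=g, reverse=True)
        if g(samples[nc - 1]) >= threshold:        # failure region reached
            frac = sum(1 for x in samples if g(x) >= threshold) / n
            return prob * frac
        level = g(samples[nc - 1])                 # intermediate threshold
        prob *= p0
        seeds, samples = samples[:nc], []
        for s in seeds:                            # conditional sampling by MCMC
            x = list(s)
            for _ in range(n // nc):
                y = [xi + rng.gauss(0.0, 0.5) for xi in x]
                # Metropolis ratio for the standard normal density,
                # restricted to the region {g >= level}
                w = math.exp(0.5 * (sum(xi * xi for xi in x)
                                    - sum(yi * yi for yi in y)))
                if g(y) >= level and rng.random() < w:
                    x = y
                samples.append(list(x))
    return prob

# Toy rare event: P(X > 3) for a standard normal, exact value ~1.35e-3
est = subset_simulation(lambda x: x[0], dim=1, threshold=3.0)
```

With p0 = 0.1, an event of probability 1e-3 is reached in about three levels of roughly 1000 samples each, instead of the millions of samples plain MCS would need for a comparable relative error.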
Two different hematocrit detection methods: Different methods, different results?
Directory of Open Access Journals (Sweden)
Schuepbach Reto A
2010-03-01
Full Text Available Abstract Background Little is known about the influence of hematocrit detection methodology on transfusion triggers. Therefore, the aim of the present study was to compare two different hematocrit-assessing methods. In a total of 50 critically ill patients, hematocrit was analyzed using (1) a blood gas analyzer (ABLflex 800) and (2) the central laboratory method (ADVIA® 2120), and the results were compared. Findings Bland-Altman analysis for repeated measurements showed a good correlation with a bias of +1.39% and 2 SD of ±3.12%. The 24% hematocrit group showed a correlation of r2 = 0.87. With a kappa of 0.56, 22.7% of the cases would have been transfused differently. In the 28% hematocrit group, with a similar correlation (r2 = 0.8) and a kappa of 0.58, 21% of the cases would have been transfused differently. Conclusions Despite a good agreement between the two methods used to determine hematocrit in clinical routine, the calculated difference of 1.4% might substantially influence transfusion triggers depending on the employed method.
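The Bland-Altman agreement statistics reported above (a bias and limits of agreement of roughly bias ± 2 SD of the paired differences) are straightforward to compute. A minimal sketch with made-up hematocrit pairs — the numbers are illustrative, not the study's data:

```python
def bland_altman(a, b):
    """Bias and approximate 95% limits of agreement (bias +/- 2 SD)
    between paired measurements from two methods."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    # sample standard deviation of the differences
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, (bias - 2.0 * sd, bias + 2.0 * sd)

# Hypothetical paired hematocrit readings (%) from two analyzers
bga = [25.0, 28.0, 30.0, 24.0, 27.0]   # e.g. blood gas analyzer
lab = [24.0, 26.0, 29.0, 23.0, 26.0]   # e.g. central laboratory
bias, (lo, hi) = bland_altman(bga, lab)
```

Unlike a correlation coefficient, this directly quantifies how far apart the two methods can be for an individual patient, which is what matters for a transfusion trigger.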
Simulation and Verification of Flow in Test Methods
DEFF Research Database (Denmark)
Thrane, Lars Nyholm; Szabo, Peter; Geiker, Mette Rica
2005-01-01
Simulations and experimental results of L-box and slump flow test of a self-compacting mortar and a self-compacting concrete are compared. The simulations are based on a single fluid approach and assume an ideal Bingham behavior. It is possible to simulate the experimental results of both tests...
Verification of results of core physics on-line simulation by NGFM code
International Nuclear Information System (INIS)
Zhao Yu; Cao Xinrong; Zhao Qiang
2008-01-01
The Nodal Green's Function Method program NGFM/TNGFM has been ported to the Windows system. 2-D and 3-D benchmarks have been checked with this program, and the program has been used to check the results of the QINSHAN-II reactor simulation. It is shown that the NGFM/TNGFM program is applicable to a reactor core physics on-line simulation system. (authors)
Cooperation as a Service in VANET: Implementation and Simulation Results
Directory of Open Access Journals (Sweden)
Hajar Mousannif
2012-01-01
Full Text Available The past decade has witnessed the emergence of Vehicular Ad-hoc Networks (VANET), specializing from the well-known Mobile Ad Hoc Networks (MANET) to Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) wireless communications. While the original motivation for Vehicular Networks was to promote traffic safety, recently it has become increasingly obvious that Vehicular Networks open new vistas for Internet access, providing weather or road condition reports, parking availability, distributed gaming, and advertisement. In previous papers [27,28], we introduced Cooperation as a Service (CaaS), a new service-oriented solution which enables improved and new services for road users and an optimized use of the road network through vehicle cooperation and vehicle-to-vehicle communications. The current paper is an extension of the earlier ones; it describes an improved version of CaaS and provides its full implementation details and simulation results. CaaS structures the network into clusters, and uses Content Based Routing (CBR) for intra-cluster communications and DTN (Delay- and Disruption-Tolerant Network) routing for inter-cluster communications. To show the feasibility of our approach, we implemented and tested CaaS using the OPNET Modeler software package. Simulation results prove the correctness of our protocol and indicate that CaaS achieves higher performance as compared to an Epidemic approach.
Separation of electron ion ring components (computational simulation and experimental results)
International Nuclear Information System (INIS)
Aleksandrov, V.S.; Dolbilov, G.V.; Kazarinov, N.Yu.; Mironov, V.I.; Novikov, V.G.; Perel'shtejn, Eh.A.; Sarantsev, V.P.; Shevtsov, V.F.
1978-01-01
The problems of the available polarization value of electron-ion rings in the regime of acceleration and separation of its components at the final stage of acceleration are studied. The results of computational simulation by use of the macroparticle method and experiments on the ring acceleration and separation are given. The comparison of calculation results with experiment is presented
Method for numerical simulation of two-term exponentially correlated colored noise
International Nuclear Information System (INIS)
Yilmaz, B.; Ayik, S.; Abe, Y.; Gokalp, A.; Yilmaz, O.
2006-01-01
A method for the numerical simulation of two-term exponentially correlated colored noise is proposed. The method is an extension of the traditional method for one-term exponentially correlated colored noise. The validity of the algorithm is tested by comparing numerical simulations with analytical results in two physical applications
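One standard way to realize a two-term exponential correlation function is to sum two independent Ornstein-Uhlenbeck processes, each updated exactly over a time step. The abstract does not spell out the authors' algorithm, so the sketch below is a hedged illustration of that general construction, with illustrative parameter values:

```python
import math
import random

def ou_colored_noise(dt, n, d1, tau1, d2, tau2, seed=0):
    """Noise whose autocorrelation is
       (d1/tau1) exp(-t/tau1) + (d2/tau2) exp(-t/tau2),
    built as the sum of two independent Ornstein-Uhlenbeck processes."""
    rng = random.Random(seed)
    e1, e2 = math.exp(-dt / tau1), math.exp(-dt / tau2)
    # noise amplitudes for the exact one-step OU update
    s1 = math.sqrt(d1 / tau1 * (1.0 - e1 * e1))
    s2 = math.sqrt(d2 / tau2 * (1.0 - e2 * e2))
    # start each process in its stationary state
    x1 = rng.gauss(0.0, math.sqrt(d1 / tau1))
    x2 = rng.gauss(0.0, math.sqrt(d2 / tau2))
    out = []
    for _ in range(n):
        out.append(x1 + x2)
        x1 = e1 * x1 + s1 * rng.gauss(0.0, 1.0)  # exact OU update over dt
        x2 = e2 * x2 + s2 * rng.gauss(0.0, 1.0)
    return out

# Illustrative parameters: stationary variance should be d1/tau1 + d2/tau2 = 3.5
xs = ou_colored_noise(dt=0.1, n=50000, d1=1.0, tau1=1.0, d2=0.5, tau2=0.2)
```

Because the update uses the exact transition density of the OU process, the correlation structure holds for any time step, not only for dt much smaller than the correlation times.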
Benchmarking HRA methods against different NPP simulator data
International Nuclear Information System (INIS)
Petkov, Gueorgui; Filipov, Kalin; Velev, Vladimir; Grigorov, Alexander; Popov, Dimiter; Lazarov, Lazar; Stoichev, Kosta
2008-01-01
The paper presents both international and Bulgarian experience in assessing HRA methods, and the underlying models and approaches for their validation and verification, by benchmarking HRA methods against different NPP simulator data. The organization, status, methodology and outlooks of the studies are described
A particle finite element method for machining simulations
Sabel, Matthias; Sator, Christian; Müller, Ralf
2014-07-01
The particle finite element method (PFEM) appears to be a convenient technique for machining simulations, since the geometry and topology of the problem can undergo severe changes. In this work, a short outline of the PFEM algorithm is given, which is followed by a detailed description of the involved operations. The α-shape method, which is used to track the topology, is explained and tested by a simple example. Also the kinematics and a suitable finite element formulation are introduced. To validate the method, simple settings without topological changes are considered and compared to the standard finite element method for large deformations. To examine the performance of the method when dealing with separating material, a tensile loading is applied to a notched plate. This investigation includes a numerical analysis of the different meshing parameters, and the numerical convergence is studied. With regard to the cutting simulation it is found that only a sufficiently large number of particles (and thus a rather fine finite element discretisation) leads to converged results of process parameters, such as the cutting force.
Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F
2018-01-01
Mathematical models simulating different representative engineering problems (atomic dry friction, moving-front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear, coupled or uncoupled differential equations. For different values of the parameters that influence the solution, the problems are numerically solved by the network method, which provides all the variables of the problems. Although the models are extremely sensitive to these parameters, no assumptions are made as regards the linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data, published in the scientific literature, to show the reliability of the models.
'Odontologic dosimetric card' experiments and simulations using Monte Carlo methods
International Nuclear Information System (INIS)
Menezes, C.J.M.; Lima, R. de A.; Peixoto, J.E.; Vieira, J.W.
2008-01-01
Data processing techniques, combined with the development of fast and more powerful computers, make the Monte Carlo method one of the most widely used tools in radiation transport simulation. For applications in diagnostic radiology, this method generally uses anthropomorphic phantoms to evaluate the absorbed dose to patients during exposure. In this paper, Monte Carlo techniques were used to simulate a testing device designed for intra-oral X-ray equipment performance evaluation, called the Odontologic Dosimetric Card (CDO, from 'Cartao Dosimetrico Odontologico' in Portuguese), for different thermoluminescent detectors. This paper used two computational models of exposition, RXD/EGS4 and CDO/EGS4. In the first model, the simulation results are compared with experimental data obtained under similar conditions. The second model presents the same characteristics as the testing device studied (CDO). For the irradiations, the X-ray spectra were generated with the IPEM Report 78 spectrum processor. The attenuated spectrum was obtained for IEC 61267 qualities and various additional filters for a Pantak 320 X-ray industrial unit. The results obtained for the study of the copper filters used in the determination of the kVp were compared with experimental data, validating the model proposed for the characterization of the CDO. The CDO will be utilized in quality assurance programs in order to guarantee that the equipment fulfills the requirements of Norm SVS No. 453/98 MS (Brazil), 'Directives of Radiation Protection in Medical and Dental Radiodiagnostic'. We conclude that EGS4 is a suitable Monte Carlo code to simulate thermoluminescent dosimeters and the experimental procedures employed in the routine of a quality control laboratory in diagnostic radiology. (author)
Math-Based Simulation Tools and Methods
National Research Council Canada - National Science Library
Arepally, Sudhakar
2007-01-01
...: HMMWV 30-mph Rollover Test, Soldier Gear Effects, Occupant Performance in Blast Effects, Anthropomorphic Test Device, Human Models, Rigid Body Modeling, Finite Element Methods, Injury Criteria...
Particle-transport simulation with the Monte Carlo method
International Nuclear Information System (INIS)
Carter, L.L.; Cashwell, E.D.
1975-01-01
Attention is focused on the application of the Monte Carlo method to particle transport problems, with emphasis on neutron and photon transport. Topics covered include sampling methods, mathematical prescriptions for simulating particle transport, mechanics of simulating particle transport, neutron transport, and photon transport. A literature survey of 204 references is included. (GMT)
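The sampling prescriptions surveyed above boil down to two draws per flight: a free path from the exponential distribution, then a choice between absorption and scattering. A minimal analog Monte Carlo sketch for a 1-D slab; the cross-section values and the isotropic-scattering model are illustrative assumptions, not taken from the survey:

```python
import math
import random

def transmitted_fraction(sigma_t, sigma_a, thickness, n=100000, seed=0):
    """Fraction of particles transmitted through a 1-D slab by analog
    Monte Carlo: exponential free paths, then absorb or scatter."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n):
        x, mu = 0.0, 1.0                       # position and direction cosine
        while True:
            # distance to next collision: s = -ln(1 - xi) / sigma_t
            s = -math.log(1.0 - rng.random()) / sigma_t
            x += mu * s
            if x >= thickness:
                transmitted += 1               # leaked through the far face
                break
            if x < 0.0:
                break                          # escaped backwards
            if rng.random() < sigma_a / sigma_t:
                break                          # absorbed at the collision
            mu = rng.uniform(-1.0, 1.0)        # isotropic scatter in 1-D
    return transmitted / n

# Pure absorber (sigma_a == sigma_t): transmission reduces to exp(-sigma_t * L)
frac = transmitted_fraction(sigma_t=1.0, sigma_a=1.0, thickness=2.0)
```

The pure-absorber case has the closed-form answer exp(-sigma_t * L), which makes it a convenient correctness check before adding scattering physics.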
Extended post processing for simulation results of FEM synthesized UHF-RFID transponder antennas
Directory of Open Access Journals (Sweden)
R. Herschmann
2007-06-01
Full Text Available The computer aided design process of sophisticated UHF-RFID transponder antennas requires the application of reliable simulation software. This paper describes a Matlab implemented extension of the post processor capabilities of the commercially available three dimensional field simulation programme Ansoft HFSS to compute an accurate solution of the antenna's surface current distribution. The accuracy of the simulated surface currents, which are physically related to the impedance at the feeding point of the antenna, depends on the convergence of the electromagnetic fields inside the simulation volume. The introduced method estimates the overall quality of the simulation results by combining the surface currents with the electromagnetic fields extracted from the field solution of Ansoft HFSS.
Magnetic Compression Experiment at General Fusion with Simulation Results
Dunlea, Carl; Khalzov, Ivan; Hirose, Akira; Xiao, Chijin; Fusion Team, General
2017-10-01
The magnetic compression experiment at GF was a repetitive non-destructive test to study plasma physics applicable to Magnetized Target Fusion compression. A spheromak compact torus (CT) is formed with a co-axial gun into a containment region with an hourglass-shaped inner flux conserver and an insulating outer wall. External coil currents keep the CT off the outer wall (levitation) and then rapidly compress it inwards. The optimal external coil configuration greatly improved both the levitated CT lifetime and the rate of shots with good compressional flux conservation. As confirmed by spectrometer data, the improved levitation field profile reduced plasma impurity levels by suppressing the interaction between the plasma and the insulating outer wall during the formation process. We developed an energy- and toroidal-flux-conserving finite element axisymmetric MHD code to study CT formation and compression. The Braginskii MHD equations with anisotropic heat conduction were implemented. To simulate the plasma/insulating wall interaction, we couple the vacuum field solution in the insulating region to the full MHD solution in the remainder of the domain. We see good agreement between simulation and experiment results. Partly funded by NSERC and MITACS Accelerate.
Some results on ethnic conflicts based on evolutionary game simulation
Qin, Jun; Yi, Yunfei; Wu, Hongrun; Liu, Yuhang; Tong, Xiaonian; Zheng, Bojin
2014-07-01
The force of ethnic separatism, essentially originating from the negative effect of ethnic identity, is damaging the stability and harmony of multiethnic countries. In order to eliminate the foundation of ethnic separatism and set up a harmonious ethnic relationship, some scholars have proposed a viewpoint: ethnic harmony could be promoted by popularizing civic identity. However, this viewpoint has been discussed only from a philosophical perspective and still lacks the support of scientific evidence. Because ethnic groups and ethnic identity are products of evolution, and ethnic identity is a parochialism strategy from the perspective of game theory, this paper proposes an evolutionary game simulation model to study the relationship between civic identity and ethnic conflict, based on evolutionary game theory. The simulation results indicate that: (1) the ratio of individuals with civic identity has a negative association with the frequency of ethnic conflicts; (2) ethnic conflict will not die out by killing all ethnic members once and for all, and it also cannot be reduced by forcible pressure, i.e., increasing the ratio of individuals with civic identity; (3) the average frequency of conflicts can stay at a low level by promoting civic identity periodically and persistently.
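Result (1) — the negative association between the civic-identity ratio and conflict frequency — can be reproduced qualitatively with a deliberately simplified toy model of random pairwise encounters between two ethnic groups. This is an illustrative sketch, not the paper's evolutionary game; all parameters and the conflict rule are assumptions:

```python
import random

def conflict_frequency(civic_ratio, n_agents=2000, n_rounds=20000, seed=0):
    """Toy pairwise-interaction model: each agent holds either a civic or an
    ethnic identity and belongs to one of two ethnic groups; a conflict occurs
    when two ethnic-identity agents from different groups meet."""
    rng = random.Random(seed)
    # agent = (has_civic_identity, ethnic_group)
    agents = [(rng.random() < civic_ratio, rng.randrange(2))
              for _ in range(n_agents)]
    conflicts = 0
    for _ in range(n_rounds):
        a, b = rng.sample(agents, 2)           # random encounter
        if not a[0] and not b[0] and a[1] != b[1]:
            conflicts += 1
    return conflicts / n_rounds

# Higher civic-identity ratio should yield fewer conflicts per encounter
f_low = conflict_frequency(0.2)
f_high = conflict_frequency(0.8)
```

In this toy setting the expected conflict rate scales roughly as (1 - r)^2 / 2 for civic ratio r, which is the qualitative shape of the paper's first finding; the paper's evolutionary dynamics (strategy updating over generations) are not modeled here.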
Simulating condensation on microstructured surfaces using Lattice Boltzmann Method
Alexeev, Alexander; Vasyliv, Yaroslav
2017-11-01
We simulate a single component fluid condensing on 2D structured surfaces with different wettability. To simulate the two phase fluid, we use the athermal Lattice Boltzmann Method (LBM) driven by a pseudopotential force. The pseudopotential force results in a non-ideal equation of state (EOS) which permits liquid-vapor phase change. To account for thermal effects, the athermal LBM is coupled to a finite volume discretization of the temperature evolution equation obtained using a thermal energy rate balance for the specific internal energy. We use the developed model to probe the effect of surface structure and surface wettability on the condensation rate in order to identify microstructure topographies promoting condensation. Financial support is acknowledged from Kimberly-Clark.
Multigrid Methods for Fully Implicit Oil Reservoir Simulation
Molenaar, J.
1996-01-01
In this paper we consider the simultaneous flow of oil and water in reservoir rock. This displacement process is modeled by two basic equations: the material balance or continuity equations and the equation of motion (Darcy's law). For the numerical solution of this system of nonlinear partial differential equations there are two approaches: the fully implicit or simultaneous solution method and the sequential solution method. In the sequential solution method the system of partial differential equations is manipulated to give an elliptic pressure equation and a hyperbolic (or parabolic) saturation equation. In the IMPES (implicit pressure, explicit saturation) approach the pressure equation is first solved, using values for the saturation from the previous time level. Next the saturations are updated by some explicit time stepping method; this implies that the method is only conditionally stable. For the numerical solution of the linear, elliptic pressure equation multigrid methods have become an accepted technique. On the other hand, the fully implicit method is unconditionally stable, but it has the disadvantage that in every time step a large system of nonlinear algebraic equations has to be solved. The most time-consuming part of any fully implicit reservoir simulator is the solution of this large system of equations. Usually this is done by Newton's method. The resulting systems of linear equations are then either solved by a direct method or by some conjugate gradient type method. In this paper we consider the possibility of applying multigrid methods for the iterative solution of the systems of nonlinear equations. There are two ways of using multigrid for this job: either we use a nonlinear multigrid method or we use a linear multigrid method to deal with the linear systems that arise in Newton's method. So far only a few authors have reported on the use of multigrid methods for fully implicit simulations. A two-level FAS algorithm is presented for the black-oil equations, and linear multigrid for
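In the fully implicit method described above, each time step requires solving a nonlinear system F(x) = 0 by Newton's method, with one linear solve per iteration. A minimal sketch on a toy 2x2 system with a dense Jacobian solved by Cramer's rule; an actual simulator works with large sparse Jacobians and iterative (multigrid or conjugate-gradient) linear solvers:

```python
def newton(f, jac, x0, tol=1e-10, max_iter=50):
    """Newton's method for a 2-unknown system F(x) = 0.
    `jac` returns the row-major 2x2 Jacobian (a, b, c, d)."""
    x = list(x0)
    for _ in range(max_iter):
        fx = f(x)
        if max(abs(v) for v in fx) < tol:      # residual small enough
            return x
        a, b, c, d = jac(x)
        det = a * d - b * c
        # solve J * dx = F(x) by Cramer's rule, then update x <- x - dx
        dx0 = (d * fx[0] - b * fx[1]) / det
        dx1 = (a * fx[1] - c * fx[0]) / det
        x = [x[0] - dx0, x[1] - dx1]
    return x

# Toy system: x^2 + y^2 = 4 and x = y, with solution (sqrt(2), sqrt(2))
root = newton(lambda v: [v[0] ** 2 + v[1] ** 2 - 4.0, v[0] - v[1]],
              lambda v: (2.0 * v[0], 2.0 * v[1], 1.0, -1.0),
              [1.0, 0.5])
```

The quadratic convergence near the root is why a handful of Newton iterations per time step usually suffices; the cost per iteration is dominated by the linear solve, which is exactly where the multigrid methods discussed in the paper enter.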
Virtual simulation. First clinical results in patients with prostate cancer
International Nuclear Information System (INIS)
Buchali, A.; Dinges, S.; Koswig, S.; Rosenthal, P.; Salk, S.; Harder, C.; Schlenger, L.; Budach, V.
1998-01-01
Investigation of the options of virtual simulation in patients with localized prostate cancer. Twenty-four patients suffering from prostate cancer were virtually simulated. The clinical target volume was contoured and the planning target volume was defined after the CT scan. The isocenter of the planning target volume was determined and marked on the patient's skin. The precision of the patient marking was checked by conventional simulation after physical radiation treatment planning. The mean differences of the patient's marks between the two simulations were around 1 mm in all room axes. The organs at risk were visualized in the digitally reconstructed radiographs. The precise marking of the isocenter by virtual simulation makes it possible to skip the conventional simulation. The visualization of the organs at risk makes the application of contrast medium unnecessary and further relieves the patient. The personnel requirement of virtual simulation is no higher than that of conventional CT-based radiation treatment planning. (orig./MG) [de
Multiple time-scale methods in particle simulations of plasmas
International Nuclear Information System (INIS)
Cohen, B.I.
1985-01-01
This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit moment-equation method, the direct implicit method, orbit averaging, and subcycling
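Of the schemes surveyed, subcycling is the simplest to sketch: fast degrees of freedom are advanced with several small substeps inside each large outer timestep. A toy illustration on a single fast harmonic oscillator (the oscillator and its parameters are illustrative stand-ins for fast particle dynamics, not one of the paper's plasma models):

```python
import math

def subcycled_step(x, v, omega, dt, n_sub):
    """Advance a fast oscillator (frequency omega) over one large outer
    timestep dt by taking n_sub leapfrog (kick-drift-kick) substeps."""
    h = dt / n_sub
    for _ in range(n_sub):
        v -= 0.5 * h * omega**2 * x   # half kick
        x += h * v                    # drift
        v -= 0.5 * h * omega**2 * x   # half kick
    return x, v

# One full oscillation period, resolved entirely by substeps
# inside a single outer step of the slow timescale.
omega = 10.0
x, v = subcycled_step(1.0, 0.0, omega, 2 * math.pi / omega, 1000)
```

After exactly one period the oscillator should return to its initial state (x ≈ 1, v ≈ 0), even though the outer step is far larger than the fast period would normally allow.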
Amyloid oligomer structure characterization from simulations: A general method
Energy Technology Data Exchange (ETDEWEB)
Nguyen, Phuong H., E-mail: phuong.nguyen@ibpc.fr [Laboratoire de Biochimie Théorique, UPR 9080, CNRS Université Denis Diderot, Sorbonne Paris Cité IBPC, 13 rue Pierre et Marie Curie, 75005 Paris (France); Li, Mai Suan [Institute of Physics, Polish Academy of Sciences, Al. Lotnikow 32/46, 02-668 Warsaw (Poland); Derreumaux, Philippe, E-mail: philippe.derreumaux@ibpc.fr [Laboratoire de Biochimie Théorique, UPR 9080, CNRS Université Denis Diderot, Sorbonne Paris Cité IBPC, 13 rue Pierre et Marie Curie, 75005 Paris (France); Institut Universitaire de France, 103 Bvd Saint-Germain, 75005 Paris (France)
2014-03-07
Amyloid oligomers and plaques are composed of multiple chemically identical proteins. Therefore, one of the first fundamental problems in the characterization of structures from simulations is the treatment of the degeneracy, i.e., the permutation of the molecules. Second, the intramolecular and intermolecular degrees of freedom of the various molecules must be taken into account. Currently, the well-known dihedral principal component analysis method only considers the intramolecular degrees of freedom, and other methods employing collective variables can only describe intermolecular degrees of freedom at the global level. With this in mind, we propose a general method that identifies all the structures accurately. The basic idea is that the intramolecular and intermolecular states are described in terms of combinations of single-molecule and double-molecule states, respectively, and the overall structures of oligomers are the product basis of the intramolecular and intermolecular states. This way, the degeneracy is automatically avoided. The method is illustrated on the conformational ensemble of the tetramer of the Alzheimer's peptide Aβ{sub 9−40}, resulting from two atomistic molecular dynamics simulations in explicit solvent, each of 200 ns, starting from two distinct structures.
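The product-basis idea can be caricatured with a toy enumeration: label each molecule by an intramolecular state and each molecule pair by an intermolecular state, and remove permutation degeneracy by sorting the labels. The discrete state labels and the independent sorting of the two label sets are illustrative simplifications, not the authors' actual structural descriptors:

```python
from itertools import product

# Hypothetical discrete state labels for illustration only
intra_states = ['a', 'b']          # single-molecule (intramolecular) states
inter_states = ['near', 'far']     # double-molecule (intermolecular) states

def oligomer_states(n_mol, intra, inter):
    """Enumerate overall oligomer structures as the product basis of
    intramolecular states (one label per molecule) and intermolecular
    states (one label per molecule pair). Sorting each label tuple makes
    permuted copies of the same structure identical, so the permutation
    degeneracy disappears from the count."""
    n_pairs = n_mol * (n_mol - 1) // 2
    intra_combos = {tuple(sorted(c)) for c in product(intra, repeat=n_mol)}
    inter_combos = {tuple(sorted(c)) for c in product(inter, repeat=n_pairs)}
    return {(i, j) for i in intra_combos for j in inter_combos}

# For a dimer: 3 distinct intramolecular combinations x 2 pair states
states = oligomer_states(2, intra_states, inter_states)
```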
Numerical Simulation of Plasma Antenna with FDTD Method
International Nuclear Information System (INIS)
Chao, Liang; Yue-Min, Xu; Zhi-Jiang, Wang
2008-01-01
We adopt a cylindrical-coordinate FDTD algorithm to simulate and analyse a 0.4-m-long column configuration plasma antenna. The FDTD method is useful for solving electromagnetic problems, especially when wave characteristics and plasma properties are self-consistently related to each other. Focusing on the frequency range from 75 MHz to 400 MHz, the input impedance and radiation efficiency of plasma antennas are computed. Numerical results show that, unlike a copper antenna, the characteristics of the plasma antenna vary simultaneously with the plasma frequency and the collision frequency. This property can be used to construct dynamically reconfigurable antennas. The investigation is meaningful and instructive for the optimization of plasma antenna design
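The paper uses a cylindrical-coordinate FDTD scheme coupled self-consistently to the plasma, but the core staggered update pattern can be sketched in one dimension with normalized units (vacuum, magic Courant number S = 1; the grid size and Gaussian source below are arbitrary choices, not the paper's setup):

```python
import numpy as np

def fdtd_1d(n_cells=200, n_steps=150, src=50):
    """Minimal 1-D FDTD (Yee) loop in normalized units: leapfrogged
    E and H updates on staggered grids, plus a soft Gaussian source."""
    ez = np.zeros(n_cells)
    hy = np.zeros(n_cells)
    for t in range(n_steps):
        hy[:-1] += ez[1:] - ez[:-1]          # update H from the curl of E
        ez[1:] += hy[1:] - hy[:-1]           # update E from the curl of H
        ez[src] += np.exp(-((t - 30.0) / 10.0) ** 2)  # soft Gaussian source
    return ez

ez = fdtd_1d()
```

A real antenna model adds the plasma current equation, absorbing boundaries, and the cylindrical metric terms on top of this skeleton.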
Natural tracer test simulation by stochastic particle tracking method
International Nuclear Information System (INIS)
Ackerer, P.; Mose, R.; Semra, K.
1990-01-01
Stochastic particle tracking methods are well adapted to 3D transport simulations where the discretization requirements of other methods usually cannot be satisfied. They do, however, need a very accurate approximation of the velocity field. The described code is based on the mixed hybrid finite element method (MHFEM) to calculate the piezometric and velocity fields. The random-walk method is used to simulate mass transport. The main advantages of the MHFEM over FD or FE are the simultaneous calculation of pressure and velocity, which are both treated as unknowns; the possibility of interpolating velocities everywhere; and the continuity of the normal component of the velocity vector from one element to another. For these reasons, the MHFEM is well adapted to particle tracking methods. After a general description of the numerical methods, the model is used to simulate the observations made during the Twin Lake Tracer Test in 1983. A good match is found between observed and simulated heads and concentrations. (Author) (12 refs., 4 figs.)
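The random-walk step itself can be sketched independently of the MHFEM velocity field: each particle is advected by the local velocity and perturbed by a Gaussian jump whose variance 2·D·dt matches the dispersion coefficient. The uniform velocity and all parameter values below are illustrative, not those of the Twin Lake test:

```python
import numpy as np

def random_walk_transport(n_particles=10000, n_steps=100, v=1.0e-5,
                          D=1.0e-6, dt=1000.0, seed=0):
    """1-D random-walk mass transport: deterministic advection v*dt plus
    a diffusive Gaussian jump with variance 2*D*dt per step."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_particles)
    for _ in range(n_steps):
        x += v * dt + rng.normal(0.0, np.sqrt(2.0 * D * dt), n_particles)
    return x

x = random_walk_transport()
```

After n steps the particle cloud should be centered at v·dt·n with standard deviation sqrt(2·D·dt·n), which is the analytical solution of the advection-dispersion equation for a point injection.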
Simulation of bubble motion under gravity by lattice Boltzmann method
International Nuclear Information System (INIS)
Takada, Naoki; Misawa, Masaki; Tomiyama, Akio; Hosokawa, Shigeo
2001-01-01
We describe numerical simulation results of bubble motion under gravity by the lattice Boltzmann method (LBM), which assumes that a fluid consists of mesoscopic fluid particles repeatedly colliding and translating, and that a multiphase interface is reproduced in a self-organizing way by repulsive interaction between different kinds of particles. The purposes of this study are to examine the applicability of the LBM to the numerical analysis of bubble motions, and to develop a three-dimensional version of the binary fluid model that introduces a free energy function. We included buoyancy terms due to the density difference in the lattice Boltzmann equations, and simulated single- and two-bubble motions, setting flow conditions according to the Eötvös and Morton numbers. The two-dimensional results by the LBM agree with those by the Volume of Fluid method based on the Navier-Stokes equations. The three-dimensional model possesses a surface tension satisfying Laplace's law, and reproduces the motion of a single bubble as well as the two-bubble interaction of approach and coalescence in a circular tube. These results prove that the buoyancy terms and the 3D model proposed here are suitable, and that the LBM is useful for the numerical analysis of bubble motion under gravity. (author)
Research methods of simulate digital compensators and autonomous control systems
Directory of Open Access Journals (Sweden)
V. S. Kudryashov
2016-01-01
Full Text Available A peculiarity of the present stage of production development is the need to control and regulate a large number of process parameters that mutually influence each other; when single-loop systems are used, this significantly reduces the quality of the transient response, resulting in significant costs of raw materials and energy and in reduced product quality. Using an autonomous digital control system eliminates the coupling of technological parameters, gives the system the desired dynamic and static properties, and improves the quality of regulation. However, the complexity of the configuration and implementation procedures (modeling the compensators of autonomous systems of this type), associated with the need to perform a significant amount of complex analytic transformations, significantly limits the scope of their application. In this regard, an approach based on decomposition is proposed for the calculation and simulation (realization) methods, consisting in representing the elements of the autonomous part of the digital control system as a series-parallel connection. The theoretical study is carried out in a general way for systems of any dimension. The results of computational experiments obtained during the simulation of four autonomous control systems are given, together with a comparative analysis and conclusions on the effectiveness of each of the methods. The results obtained can be used in the development of multi-dimensional process control systems.
The Simulation of the Recharging Method Based on Solar Radiation for an Implantable Biosensor.
Li, Yun; Song, Yong; Kong, Xianyue; Li, Maoyuan; Zhao, Yufei; Hao, Qun; Gao, Tianxin
2016-09-10
A method of recharging implantable biosensors based on solar radiation is proposed. Firstly, the models of the proposed method are developed. Secondly, the recharging processes based on solar radiation are simulated using the Monte Carlo (MC) method, and the energy distributions of sunlight within the different layers of human skin are obtained and discussed. Finally, the simulation results are verified experimentally, which indicates that the proposed method will contribute to achieving a low-cost, convenient and safe way of recharging implantable biosensors.
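A toy version of such a Monte Carlo photon tally, using Beer-Lambert survival through plane layers, illustrates how per-layer energy deposition can be estimated. The layer structure, the absorption coefficients, and the neglect of scattering are all illustrative assumptions, not the authors' skin model:

```python
import math
import random

def mc_skin_absorption(mu, d, n_photons=50000, seed=1):
    """Tally which layer absorbs each photon, for normally incident
    photons crossing plane layers with absorption coefficients mu (1/mm)
    and thicknesses d (mm). Scattering is ignored, so survival through
    layer i follows Beer-Lambert: S_i = S_{i-1} * exp(-mu_i * d_i)."""
    random.seed(seed)
    counts = [0] * len(mu)
    transmitted = 0
    for _ in range(n_photons):
        u = random.random()
        surv = 1.0
        hit = None
        for i in range(len(mu)):
            surv_next = surv * math.exp(-mu[i] * d[i])
            if u > surv_next:     # absorbed in layer i (S_i < u <= S_{i-1})
                hit = i
                break
            surv = surv_next
        if hit is None:
            transmitted += 1
        else:
            counts[hit] += 1
    return counts, transmitted

# Two hypothetical layers, e.g. "epidermis" and "dermis"
counts, transmitted = mc_skin_absorption([1.0, 2.0], [0.5, 0.5])
```

For the first layer the absorbed fraction should approach 1 − exp(−μ₁d₁) ≈ 0.3935 as the photon count grows.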
Some results of simulation on radiation effects in crystals
International Nuclear Information System (INIS)
Baier, T.; AN SSSR, Novosibirsk
1993-05-01
Simulations of radiation in oriented silicon and tungsten crystals of different thicknesses are developed. The conditions are those of experiments done at Kharkov (Ukraine) and Tomsk (Russia) with electron beams in the 1 GeV range. Systematic comparisons between experimental and simulated spectra, covering the real spectrum, the radiation energy and the angular distribution of the photons, are presented. The ability of the simulation program to describe crystal effects in the considered energy range is analysed. (author) 11 refs.; 8 figs
Evaluation of full-scope simulator testing methods
Energy Technology Data Exchange (ETDEWEB)
Feher, M P; Moray, N; Senders, J W; Biron, K [Human Factors North Inc., Toronto, ON (Canada)
1995-03-01
This report discusses the use of full scope nuclear power plant simulators in licensing examinations for Unit First Operators of CANDU reactors. The existing literature is reviewed, and an annotated bibliography of the more important sources is provided. Since existing methods are judged inadequate, conceptual bases for designing a system for licensing are discussed, and a method is proposed which would make use of objective scoring methods based on data collection in full-scope simulators. A field trial of such a method is described. The practicality of such a method is critically discussed, and possible advantages of subjective methods of evaluation are considered. (author). 32 refs., 1 tab., 4 figs.
Constraint methods that accelerate free-energy simulations of biomolecules.
Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A
2015-12-28
Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.
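Spring-like restraints of the kind mentioned above are often implemented as flat-bottom harmonic penalties: zero cost inside a tolerance around a target value, quadratic growth outside it, so that noisy or uncertain structural knowledge does not over-constrain the simulation. This is a generic sketch of that penalty form, not the specific restraint machinery of the paper:

```python
import numpy as np

def flat_bottom_restraint(r, r0, k, tol):
    """Flat-bottom harmonic restraint energy: zero when the coordinate r
    is within tol of the target r0, and 0.5*k*(|r - r0| - tol)^2 outside.
    The flat bottom tolerates errors in the imposed knowledge; the spring
    constant k sets how strongly violations are penalized."""
    excess = np.maximum(np.abs(r - r0) - tol, 0.0)
    return 0.5 * k * excess ** 2
```

In use, such a term is simply added to the physical potential, and its gradient contributes a restoring force only when the restraint is violated by more than the tolerance.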
Adaptive mesh refinement and adjoint methods in geophysics simulations
Burstedde, Carsten
2013-04-01
required by human intervention and analysis. Specifying an objective functional that quantifies the misfit between the simulation outcome and known constraints and then minimizing it through numerical optimization can serve as an automated technique for parameter identification. As suggested by the similarity in formulation, the numerical algorithm is closely related to the one used for goal-oriented error estimation. One common point is that the so-called adjoint equation needs to be solved numerically. We will outline the derivation and implementation of these methods and discuss some of their pros and cons, supported by numerical results.
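The parameter-identification loop described here can be caricatured in a few lines: define a misfit functional between simulation outcome and observation, then drive it down by numerical optimization. A central finite difference stands in for the adjoint-computed gradient, and the quadratic forward model is purely illustrative:

```python
def identify_parameter(forward, observed, m0, lr=0.05, n_iter=500, eps=1e-6):
    """Minimize the misfit J(m) = (forward(m) - observed)^2 by gradient
    descent. In adjoint-based solvers the gradient of J comes from one
    extra (adjoint) solve; here a finite difference plays that role."""
    def J(m):
        return (forward(m) - observed) ** 2
    m = float(m0)
    for _ in range(n_iter):
        g = (J(m + eps) - J(m - eps)) / (2 * eps)  # stand-in gradient
        m -= lr * g
    return m

# Hypothetical scalar forward model d = m^2 with observed datum d = 4,
# so the recovered parameter should approach m = 2.
m_est = identify_parameter(lambda m: m * m, observed=4.0, m0=1.0)
```

The appeal of the adjoint approach is that its cost is independent of the number of parameters, which a finite-difference gradient cannot match in high dimensions.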
Three-dimensional discrete element method simulation of core disking
Wu, Shunchuan; Wu, Haoyan; Kemeny, John
2018-04-01
The phenomenon of core disking is commonly seen in deep drilling of highly stressed regions in the Earth's crust. Given its close relationship with the in situ stress state, the presence and features of core disking can be used to interpret the stresses when traditional in situ stress measurement techniques are not available. The core disking process was simulated in this paper using the three-dimensional discrete element method software PFC3D (particle flow code). In particular, PFC3D is used to examine the evolution of fracture initiation, propagation and coalescence associated with core disking under various stress states. Four unresolved problems concerning core disking are investigated with a series of numerical simulations, which also provide some verification of existing results by other researchers: (1) Core disking occurs when the maximum principal stress is about 6.5 times the tensile strength. (2) For most stress situations, core disking initiates from the outer surface, except in the thrust faulting stress regime, where the fractures were found to initiate from the inner part. (3) The anisotropy of the two horizontal principal stresses has an effect on the core disking morphology. (4) The thickness of the core disks has a positive relationship with the radial stress and a negative relationship with the axial stress.
Contribution of the ultrasonic simulation to the testing methods qualification process
International Nuclear Information System (INIS)
Le Ber, L.; Calmon, P.; Abittan, E.
2001-01-01
The CEA and EDF have started a study concerning the value of simulation in the qualification of ultrasonic testing methods for nuclear components. In this framework, the CEA simulation tools, such as CIVA, have been tested against real inspections. The method and the results obtained on some examples are presented. (A.L.B.)
Computational bone remodelling simulations and comparisons with DEXA results.
Turner, A W L; Gillies, R M; Sekel, R; Morris, P; Bruce, W; Walsh, W R
2005-07-01
Femoral periprosthetic bone loss following total hip replacement is often associated with stress shielding. Extensive bone resorption may lead to implant or bone failure and complicate revision surgery. In this study, an existing strain-adaptive bone remodelling theory was modified and combined with anatomic three-dimensional finite element models to predict alterations in periprosthetic apparent density. The theory incorporated an equivalent strain stimulus and joint and muscle forces from 45% of the gait cycle. Remodelling was simulated for three femoral components with different design philosophies: cobalt-chrome alloy, two-thirds proximally coated; titanium alloy, one-third proximally coated; and a composite of cobalt-chrome surrounded by polyaryletherketone, fully coated. Theoretical bone density changes correlated significantly with clinical densitometry measurements (DEXA) after 2 years across the Gruen zones (R2>0.67, p<0.02), with average differences of less than 5.4%. The results suggest that a large proportion of adaptive bone remodelling changes seen clinically with these implants may be explained by a consistent theory incorporating a purely mechanical stimulus. This theory could be applied to pre-clinical testing of new implants, investigation of design modifications, and patient-specific implant selection.
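Strain-adaptive remodelling theories of the kind modified here are commonly written as a site-wise density update with a "lazy zone" around a reference stimulus: density only changes when the mechanical stimulus deviates sufficiently from the reference. The functional form, rate constant, dead-zone width, and density bounds below are generic illustrations, not the calibrated theory of the paper:

```python
def remodel_step(rho, stimulus, ref, B=1.0, dt=1.0, dead_zone=0.2,
                 rho_min=0.01, rho_max=1.73):
    """One explicit step of a strain-adaptive remodelling rule: apparent
    density rho changes in proportion to the deviation of the mechanical
    stimulus from the reference ref, with no change inside the lazy zone
    of half-width dead_zone*ref, and clamping to physiological bounds."""
    err = stimulus - ref
    if abs(err) <= dead_zone * ref:
        drho = 0.0                          # inside the lazy zone
    elif err > 0:
        drho = B * (err - dead_zone * ref)  # overloading: apposition
    else:
        drho = B * (err + dead_zone * ref)  # stress shielding: resorption
    return min(max(rho + dt * drho, rho_min), rho_max)
```

Applied element-by-element over a finite element mesh, iterating this rule to convergence yields the predicted periprosthetic density changes that are compared against DEXA measurements.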
LANGMUIR WAVE DECAY IN INHOMOGENEOUS SOLAR WIND PLASMAS: SIMULATION RESULTS
Energy Technology Data Exchange (ETDEWEB)
Krafft, C. [Laboratoire de Physique des Plasmas, Ecole Polytechnique, F-91128 Palaiseau Cedex (France); Volokitin, A. S. [IZMIRAN, Troitsk, 142190, Moscow (Russian Federation); Krasnoselskikh, V. V., E-mail: catherine.krafft@u-psud.fr [Laboratoire de Physique et Chimie de l’Environnement et de l’Espace, 3A Av. de la Recherche Scientifique, F-45071 Orléans Cedex 2 (France)
2015-08-20
Langmuir turbulence excited by electron flows in solar wind plasmas is studied on the basis of numerical simulations. In particular, nonlinear wave decay processes involving ion-sound (IS) waves are considered in order to understand their dependence on external long-wavelength plasma density fluctuations. In the presence of inhomogeneities, it is shown that the decay processes are localized in space and, due to the differences between the group velocities of Langmuir and IS waves, their duration is limited so that a full nonlinear saturation cannot be achieved. The reflection and the scattering of Langmuir wave packets on the ambient and randomly varying density fluctuations lead to crucial effects impacting the development of the IS wave spectrum. Notably, beatings between forward propagating Langmuir waves and reflected ones result in the parametric generation of waves of noticeable amplitudes and in the amplification of IS waves. These processes, repeated at different space locations, form a series of cascades of wave energy transfer, similar to those studied in the frame of weak turbulence theory. The dynamics of such a cascading mechanism and its influence on the acceleration of the most energetic part of the electron beam are studied. Finally, the role of the decay processes in the shaping of the profiles of the Langmuir wave packets is discussed, and the waveforms calculated are compared with those observed recently on board the spacecraft Solar TErrestrial RElations Observatory and WIND.
Research on Monte Carlo simulation method of industry CT system
International Nuclear Information System (INIS)
Li Junli; Zeng Zhi; Qui Rui; Wu Zhen; Li Chunyan
2010-01-01
There are a series of radiation physics problems in the design and production of industry CT systems (ICTS), including limit quality index analysis and the effects of scattering, detector efficiency and crosstalk on the system. Usually the Monte Carlo (MC) method is applied to resolve these problems. Most of the events involved have very low probability, so direct simulation is very inefficient, and existing MC methods and programs cannot meet the needs. To resolve these difficulties, particle flux point auto-important sampling (PFPAIS) is introduced on the basis of auto-important sampling. Then, on the basis of PFPAIS, a dedicated ICTS simulation method, MCCT, is realized. Compared with existing MC methods, MCCT is shown to simulate the ICTS more exactly and effectively. Furthermore, the effects of various disturbances of the ICTS are simulated and analyzed by MCCT. To some extent, MCCT can guide the research of the radiation physics problems in ICTS. (author)
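Importance-sampling schemes like PFPAIS rest on the standard identity behind all variance reduction: sample rare events with a boosted probability and reweight the scores so the mean is preserved. A toy scalar version (PFPAIS itself is far more elaborate; `p_event` and the boosted probability `q` here are arbitrary illustrative values):

```python
import random

def mc_analog(p_event, n, seed=2):
    """Analog MC: score 1 whenever the rare event occurs. For small
    p_event, most histories score nothing and the estimate is noisy."""
    random.seed(seed)
    return sum(random.random() < p_event for _ in range(n)) / n

def mc_importance(p_event, q, n, seed=2):
    """Importance sampling: draw the event with boosted probability q
    (q >> p_event) and score the weight p_event/q on each hit. The mean
    is unchanged, but far more histories contribute, cutting variance."""
    random.seed(seed)
    w = p_event / q
    return sum(w for _ in range(n) if random.random() < q) / n

est = mc_importance(1e-4, 0.1, 100000)
```

With p = 10⁻⁴ and 10⁵ histories, the analog estimator expects only about 10 scoring events, while the importance-sampled one expects about 10⁴, which is why biased sampling is essential for low-probability detector responses.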
A simple method for potential flow simulation of cascades
Indian Academy of Sciences (India)
vortex panel method to simulate potential flow in cascades is presented. The cascade ... The fluid loading on the blades, such as the normal force and pitching moment, may ... of such discrete infinite array singularities along the blade surface.
Forest canopy BRDF simulation using Monte Carlo method
Huang, J.; Wu, B.; Zeng, Y.; Tian, Y.
2006-01-01
The Monte Carlo method is a stochastic statistical method which has been widely used to simulate the Bidirectional Reflectance Distribution Function (BRDF) of vegetation canopies in the field of visible remote sensing. The random interaction process between photons and the forest canopy was modeled using the Monte Carlo method.
Steam generator tube rupture simulation using extended finite element method
Energy Technology Data Exchange (ETDEWEB)
Mohanty, Subhasish, E-mail: smohanty@anl.gov; Majumdar, Saurin; Natesan, Ken
2016-08-15
Highlights: • Extended finite element method used for modeling the steam generator tube rupture. • Crack propagation is modeled in an arbitrary solution dependent path. • The FE model is used for estimating the rupture pressure of steam generator tubes. • Crack coalescence modeling is also demonstrated. • The method can be used for crack modeling of tubes under severe accident condition. - Abstract: A steam generator (SG) is an important component of any pressurized water reactor. Steam generator tubes represent a primary pressure boundary whose integrity is vital to the safe operation of the reactor. SG tubes may rupture due to propagation of a crack created by mechanisms such as stress corrosion cracking, fatigue, etc. It is thus important to estimate the rupture pressures of cracked tubes for structural integrity evaluation of SGs. The objective of the present paper is to demonstrate the use of extended finite element method capability of commercially available ABAQUS software, to model SG tubes with preexisting flaws and to estimate their rupture pressures. For the purpose, elastic–plastic finite element models were developed for different SG tubes made from Alloy 600 material. The simulation results were compared with experimental results available from the steam generator tube integrity program (SGTIP) sponsored by the United States Nuclear Regulatory Commission (NRC) and conducted at Argonne National Laboratory (ANL). A reasonable correlation was found between extended finite element model results and experimental results.
International Nuclear Information System (INIS)
Liang, Hongbo; Fan, Man; You, Shijun; Zheng, Wandong; Zhang, Huan; Ye, Tianzhen; Zheng, Xuejing
2017-01-01
Highlights: •Four optical models for parabolic trough solar collectors were compared in detail. •Characteristics of Monte Carlo Method and Finite Volume Method were discussed. •A novel method was presented combining advantages of different models. •The method was suited to optical analysis of collectors with different geometries. •A new kind of cavity receiver was simulated depending on the novel method. -- Abstract: The PTC (parabolic trough solar collector) is widely used for space heating, heat-driven refrigeration, solar power, etc. The concentrated solar radiation is the only energy source for a PTC, thus its optical performance significantly affects the collector efficiency. In this study, four different optical models were constructed, validated and compared in detail. On this basis, a novel coupled method was presented by combining the advantages of these models, suited to carrying out a large number of optical simulations of collectors with different geometrical parameters rapidly and accurately. Based on these simulation results, the optimal configuration of a collector with the highest efficiency can be determined; thus, this method is useful for collector optimization and design. In the four models, MCM (Monte Carlo Method) and FVM (Finite Volume Method) were used to initialize the photon distribution, while CPEM (Change Photon Energy Method) and MCM were adopted to describe the processes of reflection, transmission and absorption. For simulating reflection, transmission and absorption, CPEM was more efficient than MCM, so it was utilized in the coupled method. For photon distribution initialization, FVM saved running time and computational effort, whereas it needed a suitable grid configuration. MCM only required a total number of rays for simulation, whereas it needed higher computing cost and its results fluctuated in multiple runs. In the novel coupled method, the grid configuration for FVM was optimized according to the “true values” from MCM of
Simulation methods of nuclear electromagnetic pulse effects in integrated circuits
International Nuclear Information System (INIS)
Cheng Jili; Liu Yuan; En Yunfei; Fang Wenxiao; Wei Aixiang; Yang Yuanzhen
2013-01-01
In this paper, methods to compute the response of a transmission line (TL) illuminated by an electromagnetic pulse (EMP) are first introduced, including the finite-difference time-domain (FDTD) method and the transmission line matrix (TLM) method; then the feasibility of electromagnetic topology (EMT) for simulating nuclear electromagnetic pulse (NEMP) effects in ICs is discussed; finally, combined with the methods for computing the TL response, a new method to simulate a transmission line in an IC illuminated by NEMP is put forward. (authors)
An introduction to computer simulation methods applications to physical systems
Gould, Harvey; Christian, Wolfgang
2007-01-01
Now in its third edition, this book teaches physical concepts using computer simulations. The text incorporates object-oriented programming techniques and encourages readers to develop good programming habits in the context of doing physics. Designed for readers at all levels, An Introduction to Computer Simulation Methods uses Java, currently the most popular programming language. Introduction, Tools for Doing Simulations, Simulating Particle Motion, Oscillatory Systems, Few-Body Problems: The Motion of the Planets, The Chaotic Motion of Dynamical Systems, Random Processes, The Dynamics of Many Particle Systems, Normal Modes and Waves, Electrodynamics, Numerical and Monte Carlo Methods, Percolation, Fractals and Kinetic Growth Models, Complex Systems, Monte Carlo Simulations of Thermal Systems, Quantum Systems, Visualization and Rigid Body Dynamics, Seeing in Special and General Relativity, Epilogue: The Unity of Physics For all readers interested in developing programming habits in the context of doing phy...
High accuracy mantle convection simulation through modern numerical methods
Kronbichler, Martin
2012-08-21
Numerical simulation of the processes in the Earth's mantle is a key piece in understanding its dynamics, composition, history and interaction with the lithosphere and the Earth's core. However, doing so presents many practical difficulties related to the numerical methods that can accurately represent these processes at relevant scales. This paper presents an overview of the state of the art in algorithms for high-Rayleigh number flows such as those in the Earth's mantle, and discusses their implementation in the Open Source code Aspect (Advanced Solver for Problems in Earth's ConvecTion). Specifically, we show how an interconnected set of methods for adaptive mesh refinement (AMR), higher order spatial and temporal discretizations, advection stabilization and efficient linear solvers can provide high accuracy at a numerical cost unachievable with traditional methods, and how these methods can be designed in a way so that they scale to large numbers of processors on compute clusters. Aspect relies on the numerical software packages deal.II and Trilinos, enabling us to focus on high level code and keeping our implementation compact. We present results from validation tests using widely used benchmarks for our code, as well as scaling results from parallel runs. © 2012 The Authors Geophysical Journal International © 2012 RAS.
Applying Simulation Method in Formulation of Gluten-Free Cookies
Directory of Open Access Journals (Sweden)
Nikitina Marina
2017-01-01
Full Text Available At present, a priority direction in the development of new food products is the development of technologies for special-purpose products. Among these are gluten-free confectionery products, intended for people with celiac disease. Gluten-free products are in demand among consumers; their assortment needs to be expanded and their quality indicators improved. This article presents the results of studies on the development of pastry products based on amaranth flour that do not contain gluten. The study is based on a method of simulating recipes of gluten-free confectionery with a functional orientation in order to optimize their chemical composition. The resulting products will make it possible to diversify and supplement the diet with the nutrients necessary for people with gluten intolerance, as well as for those who follow a gluten-free diet.
Real time simulation method for fast breeder reactors dynamics
International Nuclear Information System (INIS)
Miki, Tetsushi; Mineo, Yoshiyuki; Ogino, Takamichi; Kishida, Koji; Furuichi, Kenji.
1985-01-01
Multi-purpose real-time simulator models with suitable plant dynamics were developed; these models can be used not only for training operators but also for designing control systems, operation sequences and many other items which must be studied in the development of new types of reactors. The prototype fast breeder reactor ''Monju'' is taken as an example. Various factors affecting the accuracy and computational load of its dynamic simulation are analyzed. A method is presented which determines the optimum number of nodes in distributed systems and the optimum time steps. Oscillations due to numerical instability are observed in the dynamic simulation of evaporators with a small number of nodes, and a method to cancel these oscillations is proposed. It has been verified through the development of plant dynamics simulation codes that these methods can provide efficient real-time dynamics models of fast breeder reactors. (author)
Simulation of plume dynamics by the Lattice Boltzmann Method
Mora, Peter; Yuen, David A.
2017-09-01
The Lattice Boltzmann Method (LBM) is a semi-microscopic method that simulates fluid mechanics by modelling distributions of particles moving and colliding on a lattice. We present 2-D simulations using the LBM of a fluid in a rectangular box heated from below and cooled from above, with a Rayleigh number of Ra = 10^8, similar to current estimates for the Earth's mantle, and a Prandtl number of 5000. At this Prandtl number the flow is found to be in the non-inertial regime, where the inertial terms, denoted I, satisfy I ≪ 1. Hence, the simulations presented lie within the regime of relevance for geodynamical problems. We obtain narrow upwelling plumes with mushroom heads and chutes of downwelling fluid, as expected of a flow in the non-inertial regime. The method demonstrates that the LBM has great potential for simulating thermal convection and plume dynamics relevant to geodynamics, albeit with some limitations.
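The stream-and-collide cycle that underlies the LBM can be sketched in a few lines. The following is a minimal D2Q9 BGK example in Python — not the authors' thermal convection code; the lattice, relaxation time and the absence of a buoyancy/temperature coupling are illustrative assumptions:

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities with the standard weights
c = np.array([(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Maxwell-Boltzmann equilibrium truncated to second order in u."""
    cu = np.einsum('ia,xya->ixy', c, u)            # c_i . u at each node
    usq = np.einsum('xya,xya->xy', u, u)           # |u|^2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau=0.8):
    """One stream-and-collide cycle of the BGK lattice Boltzmann method."""
    rho = f.sum(axis=0)                            # macroscopic density
    u = np.einsum('ia,ixy->xya', c, f) / rho[..., None]   # macroscopic velocity
    f = f + (equilibrium(rho, u) - f) / tau        # BGK collision (relaxation)
    for i, (cx, cy) in enumerate(c):               # streaming, periodic box
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f
```

A thermal convection code would add a second distribution for temperature and a buoyancy force term; the step above only shows the core relaxation-streaming structure, which conserves mass exactly.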
A tool for simulating parallel branch-and-bound methods
Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail
2016-01-01
The Branch-and-Bound method is known as one of the most powerful but very resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution; the design and study of load balancing algorithms is therefore a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, sizes of the search tree, and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The process of resolving the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.
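The core idea — replacing subproblem resolution with a stochastic branching process and studying load balancing on top of it — can be illustrated with a toy simulator. This is a sketch under stated assumptions (binary subcritical branching, a simple "steal half from the busiest worker" policy), not the tool described in the paper:

```python
import random

def simulate_bnb(workers=4, p_branch=0.49, seed=1):
    """Toy parallel B&B simulator: each popped node spawns two children with
    probability p_branch (subcritical, so the search tree is finite), and an
    idle worker steals half the queue of the most loaded worker."""
    rng = random.Random(seed)
    queues = [[None]] + [[] for _ in range(workers - 1)]  # root on worker 0
    processed = idle = 0
    while any(queues):
        for q in queues:
            if q:
                q.pop()
                processed += 1
                if rng.random() < p_branch:        # branch into 2 subproblems
                    q.extend([None, None])
            else:
                donor = max(queues, key=len)       # steal from busiest worker
                for _ in range(len(donor) // 2):
                    q.append(donor.pop())
                if not q:
                    idle += 1                      # nothing available to steal
    return processed, idle
```

Varying `workers`, `p_branch` and the stealing rule lets one compare load distribution strategies without solving any actual optimization problem, which is the point of such a simulator.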
Comparative Study on Two Melting Simulation Methods: Melting Curve of Gold
International Nuclear Information System (INIS)
Liu Zhong-Li; Li Rui; Sun Jun-Sheng; Zhang Xiu-Lu; Cai Ling-Cang
2016-01-01
Melting simulation methods are of crucial importance for determining the melting temperature of materials efficiently. A high-efficiency melting simulation method saves much simulation time and computational resources. To compare the efficiency of our newly developed shock melting (SM) method with that of the well-established two-phase (TP) method, we calculate the high-pressure melting curve of Au using the two methods based on optimally selected interatomic potentials. Although we use only 640 atoms to determine the melting temperature of Au in the SM method, the resulting melting curve accords very well with the results from the TP method using many more atoms. This shows that a much smaller system size in the SM method can still achieve a fully converged melting curve compared with the TP method, implying the robustness and efficiency of the SM method. (paper)
Simulation of the 2-dimensional Drude’s model using molecular dynamics method
Energy Technology Data Exchange (ETDEWEB)
Naa, Christian Fredy; Amin, Aisyah; Ramli,; Suprijadi,; Djamal, Mitra [Theoretical High Energy Physics and Instrumentation Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia); Wahyoedi, Seramika Ari; Viridi, Sparisoma, E-mail: viridi@cphys.fi.itb.ac.id [Nuclear and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia)
2015-04-16
In this paper, we report the results of a simulation of electronic conduction in solids. The simulation is based on the Drude model and applies the molecular dynamics (MD) method, using the fifth-order predictor-corrector algorithm. A formula for the electrical conductivity as a function of lattice length and ion diameter, τ(L, d), can be obtained empirically from the simulation results.
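The Drude picture behind such simulations — free acceleration in a field interrupted by randomizing collisions — can be checked with a zero-dimensional caricature rather than a full 2-D MD run. In this sketch (all units, rates and the zero-mean collision reset are assumptions, not the authors' setup), the time-averaged drift velocity should approach the Drude value qEτ/m:

```python
import random

def drude_drift_velocity(e_field=1.0, tau=2.0, dt=0.01, steps=200000,
                         q=1.0, m=1.0, seed=0):
    """Toy 1-D Drude model: an electron accelerates in a uniform field and
    suffers velocity-randomizing collisions at rate 1/tau; the time-averaged
    drift velocity approaches q*E*tau/m (arbitrary units)."""
    rng = random.Random(seed)
    v, v_sum = 0.0, 0.0
    for _ in range(steps):
        if rng.random() < dt / tau:   # collision: velocity reset
            v = 0.0                   # (zero-mean reset for simplicity)
        v += q * e_field / m * dt     # free acceleration between collisions
        v_sum += v
    return v_sum / steps
```

With the defaults the Drude prediction is qEτ/m = 2.0, and the Monte Carlo average lands close to it; the conductivity then follows as σ = n q v_drift / E.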
Application of the maximum entropy method to dynamical fermion simulations
Clowser, Jonathan
This thesis presents results for spectral functions extracted from imaginary-time correlation functions obtained from Monte Carlo simulations using the Maximum Entropy Method (MEM). The advantages of this method are that (i) no a priori assumptions or parametrisations of the spectral function are needed, (ii) a unique solution exists, and (iii) the statistical significance of the resulting image can be quantitatively analysed. The Gross-Neveu model in d = 3 spacetime dimensions (GNM3) is a particularly interesting model to study with the MEM because at T = 0 it has a broken phase with a rich spectrum of mesonic bound states and a symmetric phase where there are resonances. Results for the elementary fermion, the Goldstone boson (pion), the sigma, the massive pseudoscalar meson and the symmetric-phase resonances are presented. UKQCD Nf = 2 dynamical QCD data are also studied with the MEM. Results are compared to those found in the quenched approximation, where the effects of quark loops in the QCD vacuum are neglected, in order to search for sea-quark effects in the extracted spectral functions. Information has been extracted from the difficult axial spatial and scalar channels as well as the pseudoscalar, vector and axial temporal channels. An estimate for the non-singlet scalar mass in the chiral limit is given, in agreement with the experimental value of M_a0 = 985 MeV.
FINAL SIMULATION RESULTS FOR DEMONSTRATION CASE 1 AND 2
Energy Technology Data Exchange (ETDEWEB)
David Sloan; Woodrow Fiveland
2003-10-15
The goal of this DOE Vision-21 project work scope was to develop an integrated suite of software tools that could be used to simulate and visualize advanced plant concepts. Existing process simulation software did not meet the DOE's objective of "virtual simulation", which was needed to evaluate complex cycles. The overall intent of the DOE was to improve predictive tools for cycle analysis, and to improve the component models that are used in turn to simulate equipment in the cycle. Advanced component models are available; however, a generic coupling capability that would link the advanced component models to the cycle simulation software remained to be developed. In the current project, the coupling of the cycle analysis and cycle component simulation software was based on an existing suite of programs. The challenge was to develop a general-purpose software and communications link between the cycle analysis software Aspen Plus® (marketed by Aspen Technology, Inc.) and specialized component modeling packages, as exemplified by industrial proprietary codes (utilized by ALSTOM Power Inc.) and the FLUENT® computational fluid dynamics (CFD) code (provided by Fluent Inc.). A software interface and controller, based on the open CAPE-OPEN standard, has been developed and extensively tested. Various test runs and demonstration cases have been utilized to confirm the viability and reliability of the software. ALSTOM Power was tasked with selecting and running two demonstration cases to test the software: (1) a conventional steam cycle (designated as Demonstration Case 1), and (2) a combined cycle test case (designated as Demonstration Case 2). Demonstration Case 1 is a 30 MWe coal-fired power plant for municipal electricity generation, while Demonstration Case 2 is a 270 MWe, natural gas-fired, combined cycle power plant. Sufficient data was available from the operation of both power plants to complete the cycle
Plasma simulations using the Car-Parrinello method
International Nuclear Information System (INIS)
Clerouin, J.; Zerah, G.; Benisti, D.; Hansen, J.P.
1990-01-01
A simplified version of the Car-Parrinello method, based on the Thomas-Fermi (local density) functional for the electrons, is adapted to the simulation of the ionic dynamics in dense plasmas. The method is illustrated by an explicit application to a degenerate one-dimensional hydrogen plasma
A direct simulation method for flows with suspended paramagnetic particles
Kang, T.G.; Hulsen, M.A.; Toonder, den J.M.J.; Anderson, P.D.; Meijer, H.E.H.
2008-01-01
A direct numerical simulation method based on the Maxwell stress tensor and a fictitious domain method has been developed to solve flows with suspended paramagnetic particles. The numerical scheme enables us to take into account both hydrodynamic and magnetic interactions between particles in a
DRK methods for time-domain oscillator simulation
Sevat, M.F.; Houben, S.H.M.J.; Maten, ter E.J.W.; Di Bucchianico, A.; Mattheij, R.M.M.; Peletier, M.A.
2006-01-01
This paper presents a new Runge-Kutta type integration method that is well-suited for time-domain simulation of oscillators. A unique property of the new method is that its damping characteristics can be controlled by a continuous parameter.
The afforestation problem: a heuristic method based on simulated annealing
DEFF Research Database (Denmark)
Vidal, Rene Victor Valqui
1992-01-01
This paper presents the afforestation problem, that is, the location and design of new forest compartments to be planted in a given area. This optimization problem is solved by a two-step heuristic method based on simulated annealing. Tests and experiences with this method are also presented.
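The simulated annealing core of such a heuristic follows a standard template: accept worse moves with probability exp(−Δ/T) and cool the temperature geometrically. The following generic sketch (with a toy 1-D landscape, not the paper's afforestation objective or its two-step structure) shows that template:

```python
import math, random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Generic simulated annealing: greedy on improving moves, and worse
    moves accepted with probability exp(-delta/T) under geometric cooling."""
    rng = random.Random(seed)
    x, fx, t = x0, cost(x0), t0
    best, fbest = x, fx
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                    # geometric cooling schedule
    return best, fbest

# Toy instance: a rough 1-D landscape with many local minima.
f = lambda x: (x - 2.0) ** 2 + 0.5 * math.sin(20 * x)
step = lambda x, rng: x + rng.uniform(-0.3, 0.3)
```

In the afforestation setting the state would instead encode candidate compartment locations and shapes, with a neighbor move perturbing one compartment.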
Multilevel panel method for wind turbine rotor flow simulations
van Garrel, Arne
2016-01-01
Simulation methods of wind turbine aerodynamics currently in use mainly fall into two categories: the first is the group of traditional low-fidelity engineering models and the second is the group of computationally expensive CFD methods based on the Navier-Stokes equations. For an engineering
Clinical simulation as an evaluation method in health informatics
DEFF Research Database (Denmark)
Jensen, Sanne
2016-01-01
Safe work processes and information systems are vital in health care. Methods for the design of health IT focusing on patient safety are one of many initiatives trying to prevent adverse events. Possible patient safety hazards need to be investigated before health IT is integrated with local clinical work practice, including other technology and organizational structure. Clinical simulation is ideal for proactive evaluation of new technology for clinical work practice. Clinical simulations involve real end-users as they simulate the use of technology in realistic environments performing realistic tasks. A clinical simulation study assesses effects on clinical workflow and enables identification and evaluation of patient safety hazards before implementation at a hospital. Clinical simulation also offers an opportunity to create a space in which healthcare professionals working in different...
Jia, Shouqing; La, Dongsheng; Ma, Xuelian
2018-04-01
The finite-difference time-domain (FDTD) algorithm and a Green function algorithm are implemented for the numerical simulation of electromagnetic waves in Schwarzschild space-time. The FDTD method in curved space-time is developed by filling the flat space-time with an equivalent medium. The Green function in curved space-time is obtained by solving transport equations. Simulation results validate both the FDTD code and the Green function code. The methods developed in this paper offer a tool for solving electromagnetic scattering problems.
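For readers unfamiliar with the FDTD building block, a minimal flat-space 1-D vacuum leapfrog update looks as follows (normalized units c = 1; the grid size, pulse shape and Courant number are illustrative assumptions). In the equivalent-medium approach described above, the curved-space metric would enter through spatially varying coefficients in these same two update lines:

```python
import numpy as np

def fdtd_1d(n=400, steps=300, courant=0.5):
    """Minimal 1-D vacuum FDTD (Yee leapfrog, c = 1): an initial Gaussian
    Ez pulse splits into two counter-propagating pulses on a PEC-bounded grid."""
    x = np.arange(n)
    ez = np.exp(-((x - n // 4) / 8.0) ** 2)     # initial Gaussian pulse
    hy = np.zeros(n - 1)                        # H lives on the staggered grid
    for _ in range(steps):
        hy += courant * (ez[1:] - ez[:-1])            # Faraday's law update
        ez[1:-1] += courant * (hy[1:] - hy[:-1])      # Ampere's law update
    return ez
```

With a Courant number below 1 the scheme is stable: the fields stay bounded while the two half-amplitude pulses propagate and reflect from the boundaries.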
Optimized Design of Spacer in Electrodialyzer Using CFD Simulation Method
Jia, Yuxiang; Yan, Chunsheng; Chen, Lijun; Hu, Yangdong
2018-06-01
In this study, the effects of the length-width ratio and the diversion trench of the spacer on fluid flow behavior in an electrodialyzer have been investigated through a CFD simulation method. The relevant information, including the pressure drop, velocity vector distribution and shear stress distribution, demonstrates the importance of optimized spacer design in an electrodialysis process. The results show that the width of the diversion trench has a greater effect on the fluid flow than its length. Increasing the diversion trench width strengthens the fluid flow, but also increases the pressure drop. Secondly, the dead zone of the fluid flow decreases with increasing length-width ratio of the spacer, but the pressure drop increases with it. The length-width ratio of the spacer should therefore be moderate.
Study of Flapping Flight Using Discrete Vortex Method Based Simulations
Devranjan, S.; Jalikop, Shreyas V.; Sreenivas, K. R.
2013-12-01
In recent times, research in the area of flapping flight has attracted renewed interest with an endeavor to use this mechanism in Micro Air Vehicles (MAVs). For a sustained, high-endurance flight with a larger payload-carrying capacity, we need to identify simple and efficient flapping kinematics. In this paper, we have used flow visualizations and Discrete Vortex Method (DVM) based simulations for the study of flapping flight. Our results highlight that simple flapping kinematics with a down-stroke period (tD) shorter than the up-stroke period (tU) can produce sustained lift. We have identified the optimal asymmetry ratio (Ar = tD/tU) for which flapping wings produce maximum lift, and find that introducing optimal wing flexibility further enhances the lift.
Simulation of galvanic corrosion using boundary element method
International Nuclear Information System (INIS)
Zaifol Samsu; Muhamad Daud; Siti Radiah Mohd Kamaruddin; Nur Ubaidah Saidin; Abdul Aziz Mohamed; Mohd Saari Ripin; Rusni Rejab; Mohd Shariff Sattar
2011-01-01
The boundary element method (BEM) is a numerical technique used for modeling infinite domains, as is the case in galvanic corrosion analysis. The use of the boundary element analysis system (BEASY) has allowed cathodic protection (CP) interference to be assessed in terms of the normal current density, which is directly proportional to the corrosion rate. This paper presents an analysis of the galvanic corrosion between aluminium and carbon steel in natural sea water. The experimental results were validated against computer simulation with the BEASY program. It can be concluded that the BEASY software is a very helpful tool for future planning before installing any structure, as it predicts the possible CP interference on any nearby unprotected metallic structure. (Author)
Architecture oriented modeling and simulation method for combat mission profile
Directory of Open Access Journals (Sweden)
CHEN Xia
2017-05-01
Full Text Available In order to effectively analyze the system behavior and system performance of a combat mission profile, an architecture-oriented modeling and simulation method is proposed. Starting from architecture modeling, this paper describes the mission profile based on the definitions from the National Military Standard of China and the US Department of Defense Architecture Framework (DoDAF) model, and constructs the architecture model of the mission profile. Then the transformation relationship between the architecture model and the agent simulation model is proposed to form an executable mission profile model. Finally, taking an air-defense mission profile as an example, the agent simulation model is established based on the architecture model, and the input and output relations of the simulation model are analyzed. This provides method guidance for combat mission profile design.
Optimal Results and Numerical Simulations for Flow Shop Scheduling Problems
Directory of Open Access Journals (Sweden)
Tao Ren
2012-01-01
Full Text Available This paper considers the m-machine flow shop problem with two objectives: makespan with release dates and total quadratic completion time, respectively. For Fm|rj|Cmax, we prove the asymptotic optimality of any dense scheduling when the problem scale is large enough. For Fm||ΣCj², an improvement strategy with local search is presented to promote the performance of the classical SPT heuristic. At the end of the paper, simulations show the effectiveness of the improvement strategy.
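The SPT heuristic mentioned above simply sequences jobs by total work; evaluating it only needs the standard flow shop recurrence C(j,k) = max(C(j−1,k), C(j,k−1)) + p(j,k). A small sketch (the 3-job instance is invented for illustration):

```python
def completion_times(jobs, order):
    """Completion time of each job on the last machine of an m-machine
    permutation flow shop; jobs[j] lists the processing times per machine."""
    m = len(jobs[0])
    finish = [0.0] * m                  # finish[k]: last completion on machine k
    out = []
    for j in order:
        for k in range(m):
            # job j starts on machine k when both the machine and the job's
            # previous operation are done
            start = max(finish[k], finish[k - 1] if k else 0.0)
            finish[k] = start + jobs[j][k]
        out.append(finish[-1])
    return out

def spt_quadratic(jobs):
    """SPT heuristic for Fm||sum Cj^2: sequence jobs by total processing time."""
    order = sorted(range(len(jobs)), key=lambda j: sum(jobs[j]))
    return sum(c * c for c in completion_times(jobs, order))

jobs = [[3, 2], [1, 1], [2, 4]]   # 3 jobs on 2 machines (toy instance)
```

The paper's improvement strategy would then apply local search (e.g. pairwise swaps) starting from this SPT sequence.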
RF feedback simulation results for PEP-II
International Nuclear Information System (INIS)
Tighe, R.; Corredoura, P.
1995-06-01
A model of the RF feedback system for PEP-II has been developed to provide time-domain simulation and frequency-domain analysis of the complete system. The model includes the longitudinal beam dynamics, cavity fundamental resonance, feedback loops, and the nonlinear klystron operating near saturation. Transients from an ion clearing gap and a reference phase modulation from the longitudinal feedback system are also studied. Growth rates are predicted and overall system stability examined
Cutting Method of the CAD model of the Nuclear facility for Dismantling Simulation
Energy Technology Data Exchange (ETDEWEB)
Kim, Ikjune; Choi, ByungSeon; Hyun, Dongjun; Jeong, KwanSeong; Kim, GeunHo; Lee, Jonghwan [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2015-05-15
Current methods for process simulation cannot simulate cutting operations flexibly. As-is, to simulate a cutting operation the user needs to prepare the result models of the operation in advance, based on a predefined cutting path, depth and thickness with respect to a dismantling scenario, and those preparations must be rebuilt whenever the scenario changes. To-be, the user should be able to change parameters and scenarios dynamically within the simulation configuration process, saving time and effort in simulating cutting operations. This study presents a cutting operation methodology which can be applied to the whole procedure of simulating the dismantling of nuclear facilities. We developed a cutting simulation module for cutting operations in the dismantling of nuclear facilities based on the proposed cutting methodology. We defined the requirements of the model cutting methodology based on the requirements of the dismantling of nuclear facilities, and we implemented the cutting simulation module based on the API of a commercial CAD system.
Simulation of the acoustic wave propagation using a meshless method
Directory of Open Access Journals (Sweden)
Bajko J.
2017-01-01
Full Text Available This paper presents numerical simulations of the acoustic wave propagation phenomenon modelled via the Linearized Euler equations. A meshless method based on collocation of the strong form of the equation system is adopted. Moreover, the weighted least squares method is used for local approximation of derivatives, as well as for a stabilization technique in the form of spatial filtering. The accuracy and robustness of the method are examined on several benchmark problems.
Numerical simulation methods for wave propagation through optical waveguides
International Nuclear Information System (INIS)
Sharma, A.
1993-01-01
The simulation of the field propagation through waveguides requires numerical solutions of the Helmholtz equation. For this purpose a method based on the principle of orthogonal collocation was recently developed. The method is also applicable to nonlinear pulse propagation through optical fibers. Some of the salient features of this method and its application to both linear and nonlinear wave propagation through optical waveguides are discussed in this report. 51 refs, 8 figs, 2 tabs
A new method to estimate heat source parameters in gas metal arc welding simulation process
International Nuclear Information System (INIS)
Jia, Xiaolei; Xu, Jie; Liu, Zhaoheng; Huang, Shaojie; Fan, Yu; Sun, Zhi
2014-01-01
Highlights: •A new method for accurate estimation of heat source parameters is presented. •Partial least-squares regression analysis is recommended in the method. •Welding experiment results verify the accuracy of the proposed method. -- Abstract: Heat source parameters are usually chosen by experience in the welding simulation process, which introduces error into the simulation results (e.g. temperature distribution and residual stress). In this paper, a new method was developed to accurately estimate heat source parameters in welding simulation. In order to reduce the simulation complexity, a sensitivity analysis of the heat source parameters was carried out. The relationships between heat source parameters and weld pool characteristics (fusion width (W), penetration depth (D) and peak temperature (Tp)) were obtained with both multiple regression analysis (MRA) and partial least-squares regression analysis (PLSRA), with different regression models employed in each method, and the two methods were compared. A welding experiment was carried out to verify the method. The results showed that both MRA and PLSRA are feasible and accurate for the prediction of heat source parameters in welding simulation. However, PLSRA is recommended for its advantage of requiring less simulation data.
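The inverse-regression idea — run a campaign of simulations, then regress heat source parameters on the resulting pool characteristics — can be sketched with ordinary least squares on synthetic data. Everything here is assumed for illustration: the two parameters (a power P and a radius r), the linear response matrix, and the noiseless campaign are hypothetical stand-ins, not the paper's Goldak-type parameters or its PLSRA model:

```python
import numpy as np

# Hypothetical "simulation campaign": parameters (P, r) mapped to weld pool
# characteristics (W, D, Tp) by an assumed linear response.
rng = np.random.default_rng(0)
params = rng.uniform([1000.0, 2.0], [3000.0, 6.0], size=(40, 2))   # (P, r)
response = np.array([[0.004, 0.002, 0.30],    # effect of P on (W, D, Tp)
                     [0.500, -0.300, 20.0]])  # effect of r on (W, D, Tp)
pool = params @ response                      # simulated (W, D, Tp) per run

# Inverse regression: predict heat source parameters from pool characteristics
# (multiple regression with an intercept column).
X = np.c_[pool, np.ones(len(pool))]
coef, *_ = np.linalg.lstsq(X, params, rcond=None)

target_pool = np.array([2000.0, 4.0]) @ response   # pool of a "new" weld
estimate = np.r_[target_pool, 1.0] @ coef          # recovered (P, r)
```

PLSRA would replace the `lstsq` step with regression on latent components, which is what makes it more robust when the pool characteristics are strongly collinear or the campaign is small.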
A Finite Element Method for Simulation of Compressible Cavitating Flows
Shams, Ehsan; Yang, Fan; Zhang, Yu; Sahni, Onkar; Shephard, Mark; Oberai, Assad
2016-11-01
This work focuses on a novel approach for finite element simulations of multi-phase flows which involve an evolving interface with phase change. Modeling problems such as cavitation requires addressing multiple challenges, including the compressibility of the vapor phase and interface physics driven by mass, momentum and energy fluxes. We have developed a mathematically consistent and robust computational approach to address these problems. We use stabilized finite element methods on unstructured meshes to solve the compressible Navier-Stokes equations. An arbitrary Lagrangian-Eulerian formulation is used to handle the interface motion. Our method uses a mesh adaptation strategy to preserve the quality of the volumetric mesh, while the interface mesh moves along with the interface. The interface jump conditions are accurately represented using a discontinuous Galerkin method on the conservation laws. Condensation and evaporation rates at the interface are modeled thermodynamically to determine the interface velocity. We will present initial results on bubble cavitation and the behavior of an attached cavitation zone in a separated boundary layer. We acknowledge support from the Army Research Office (ARO) under ARO Grant W911NF-14-1-0301.
Precision of a FDTD method to simulate cold magnetized plasmas
International Nuclear Information System (INIS)
Pavlenko, I.V.; Melnyk, D.A.; Prokaieva, A.O.; Girka, I.O.
2014-01-01
The finite-difference time-domain (FDTD) method is applied to describe the propagation of transverse electromagnetic waves through magnetized plasmas. The numerical dispersion relation is obtained in the cold plasma approximation. The accuracy of the numerical dispersion is calculated as a function of the frequency of the launched wave and the time step of the numerical grid. It is shown that the numerical method does not reproduce the analytical results near the plasma resonances for any chosen value of the time step if there is no dissipation mechanism in the system. This means that the FDTD method cannot be applied straightforwardly to simulate problems where the plasma resonances play a key role (for example, mode conversion problems). The accuracy of the numerical scheme can, however, be improved by introducing some artificial damping of the plasma currents. Although part of the wave power is then lost in the system, the numerical scheme describes the wave processes in agreement with analytical predictions.
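How the numerical dispersion error depends on the grid spacing and time step can be seen already in the vacuum limit of the scheme, where the 1-D FDTD dispersion relation sin(ωΔt/2)/(cΔt) = sin(kΔx/2)/Δx can be solved for the numerical wavenumber and compared with the analytic k = ω/c. The Courant number and frequencies below are illustrative choices, not the paper's magnetized-plasma case:

```python
import math

def numerical_wavenumber(omega, dt, dx, c=1.0):
    """1-D vacuum FDTD dispersion relation,
    sin(omega*dt/2)/(c*dt) = sin(k*dx/2)/dx, solved for the numerical k."""
    s = dx * math.sin(omega * dt / 2) / (c * dt)
    return 2 * math.asin(s) / dx

# Wavenumber error shrinks second-order as the grid is refined
# (Courant number fixed at 0.5, analytic k = omega/c = 1).
errs = []
for dx in (0.1, 0.05, 0.025):
    k = numerical_wavenumber(omega=1.0, dt=0.5 * dx, dx=dx)
    errs.append(abs(k - 1.0))
```

In the magnetized-plasma case the same exercise, carried out near a resonance, exhibits the breakdown described in the abstract unless damping is added.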
Advance in research on aerosol deposition simulation methods
International Nuclear Information System (INIS)
Liu Keyang; Li Jingsong
2011-01-01
A comprehensive analysis of the health effects of inhaled toxic aerosols requires exact data on airway deposition. Knowledge of the effect of inhaled drugs is essential to the optimization of aerosol drug delivery. Sophisticated analytical deposition models can be used for the computation of total, regional and generation-specific deposition efficiencies. Continuously improving computers allow us to study particle transport and deposition in more and more realistic airway geometries with the help of computational fluid dynamics (CFD) simulation. In this article, trends in aerosol deposition models and lung models, and the methods for achieving deposition simulations, are reviewed. (authors)
Finite element method for simulation of the semiconductor devices
International Nuclear Information System (INIS)
Zikatanov, L.T.; Kaschiev, M.S.
1991-01-01
An iterative method for solving the system of nonlinear equations of the drift-diffusion representation for the simulation of semiconductor devices is worked out. The Petrov-Galerkin method is used for the discretization of these equations with bilinear finite elements. It is shown that the numerical scheme is monotone and there are no oscillations of the solutions in the region of the p-n junction. Numerical calculations for the simulation of one semiconductor device are presented. 13 refs.; 3 figs
Nakano, Masaru; Kubota, Fumiko; Inamori, Yutaka; Mitsuyuki, Keiji
Manufacturing system designers should concentrate on designing and planning manufacturing systems instead of spending their effort on creating the simulation models needed to verify the design. This paper proposes a method and a tool to navigate designers through the engineering process and generate the simulation model automatically from the design results. The design agent also supports collaborative design projects among different companies or divisions using distributed engineering and distributed simulation techniques. The idea was implemented and applied to a factory planning process.
Simulation of neutral gas flow in a tokamak divertor using the Direct Simulation Monte Carlo method
International Nuclear Information System (INIS)
Gleason-González, Cristian; Varoutis, Stylianos; Hauer, Volker; Day, Christian
2014-01-01
Highlights: • Sub-divertor gas flow calculations in tokamaks by coupling the B2-EIRENE and DSMC methods. • The results include pressure, temperature, bulk velocity and particle fluxes in the sub-divertor. • A gas recirculation effect towards the plasma chamber through the vertical targets is found. • Comparison between the DSMC method and the ITERVAC code reveals very good agreement. - Abstract: This paper presents a new innovative scientific and engineering approach for describing sub-divertor gas flows of fusion devices by coupling the B2-EIRENE (SOLPS) code and the Direct Simulation Monte Carlo (DSMC) method. The present study exemplifies this with a computational investigation of neutral gas flow in ITER's sub-divertor region. The numerical results include the flow fields and contours of the overall quantities of practical interest, such as the pressure, the temperature and the bulk velocity, assuming helium as the model gas. Moreover, the study unravels the gas recirculation effect located behind the vertical targets, viz. neutral particles flowing towards the plasma chamber. Comparison between calculations performed with the DSMC method and the ITERVAC code reveals very good agreement along the main sub-divertor ducts
Study on simulation methods of atrium building cooling load in hot and humid regions
Energy Technology Data Exchange (ETDEWEB)
Pan, Yiqun; Li, Yuming; Huang, Zhizhong [Institute of Building Performance and Technology, Sino-German College of Applied Sciences, Tongji University, 1239 Siping Road, Shanghai 200092 (China); Wu, Gang [Weldtech Technology (Shanghai) Co. Ltd. (China)
2010-10-15
In recent years, highly glazed atria have become popular because of their architectural aesthetics and the advantage of introducing daylight inside. However, cooling load estimation for such atrium buildings is difficult due to the complex thermal phenomena that occur in the atrium space. This study aims to find a simplified method of estimating cooling loads through simulations for various types of atria in hot and humid regions. Atrium buildings are divided into different types, and for every type both CFD and energy models are developed. A standard method and a simplified one are proposed to simulate the cooling load of atria in EnergyPlus, based on the different room air temperature patterns resulting from CFD simulation. The standard method incorporates CFD results as input into non-dimensional-height room air models in EnergyPlus, and its simulation results are defined as the baseline against which the results from the simplified method are compared for every category of atrium buildings. To further validate the simplified method, an actual atrium office building was tested on site on a typical summer day, and the measured results were compared with simulation results using the simplified method. Finally, appropriate methods for simulating different types of atrium buildings are proposed. (author)
Direct drive: Simulations and results from the National Ignition Facility
Energy Technology Data Exchange (ETDEWEB)
Radha, P. B., E-mail: rbah@lle.rochester.edu; Hohenberger, M.; Edgell, D. H.; Marozas, J. A.; Marshall, F. J.; Michel, D. T.; Rosenberg, M. J.; Seka, W.; Shvydky, A.; Boehly, T. R.; Collins, T. J. B.; Campbell, E. M.; Craxton, R. S.; Delettrez, J. A.; Froula, D. H.; Goncharov, V. N.; Hu, S. X.; Knauer, J. P.; McCrory, R. L.; McKenty, P. W. [Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14623 (United States); and others
2016-05-15
Direct-drive implosion physics is being investigated at the National Ignition Facility. The primary goal of the experiments is twofold: to validate modeling related to implosion velocity and to estimate the magnitude of hot-electron preheat. Implosion experiments indicate that the energetics is well-modeled when cross-beam energy transfer (CBET) is included in the simulation and an overall multiplier to the CBET gain factor is employed; time-resolved scattered light and scattered-light spectra display the correct trends. Trajectories from backlit images are well modeled, although those from measured self-emission images indicate increased shell thickness and reduced shell density relative to simulations. Sensitivity analyses indicate that the most likely cause for the density reduction is nonuniformity growth seeded by laser imprint and not laser-energy coupling. Hot-electron preheat is at tolerable levels in the ongoing experiments, although it is expected to increase after the mitigation of CBET. Future work will include continued model validation, imprint measurements, and mitigation of CBET and hot-electron preheat.
Partial Variance of Increments Method in Solar Wind Observations and Plasma Simulations
Greco, A.; Matthaeus, W. H.; Perri, S.; Osman, K. T.; Servidio, S.; Wan, M.; Dmitruk, P.
2018-02-01
The method called "PVI" (Partial Variance of Increments) has been increasingly used in the analysis of spacecraft and numerical simulation data since its inception in 2008. The purpose of the method is to study the kinematics and formation of coherent structures in space plasmas, a topic that has gained considerable attention, leading to the development of identification methods, observations, and associated theoretical research based on numerical simulations. This review summarizes the key features of the method and provides a synopsis of the main results obtained by various groups using it. This will enable new users, or those considering methods of this type, to find the details and background collected in one place.
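The normalization at the core of the method is compact: PVI(t, τ) = |Δb(t, τ)| / sqrt(⟨|Δb(t, τ)|²⟩), where Δb(t, τ) = b(t + τ) − b(t) is the vector increment of the field at lag τ and ⟨·⟩ is a time average. A minimal Python sketch on synthetic data (the field, the jump location, and the peak inspection are illustrative, not from the paper):

```python
import numpy as np

def pvi_series(b, tau):
    """PVI(t, tau) = |db(t, tau)| / sqrt(<|db|^2>), with the vector increment
    db(t, tau) = b(t + tau) - b(t) and <...> a time average over the interval."""
    db = b[tau:] - b[:-tau]                  # vector increments at lag tau
    mag = np.linalg.norm(db, axis=1)         # |db(t, tau)|
    return mag / np.sqrt(np.mean(mag**2))    # normalized increment series

# Synthetic field: smooth rotation plus one sharp jump (a "current sheet").
t = np.linspace(0.0, 1.0, 1000)
b = np.stack([np.sin(2*np.pi*t), np.cos(2*np.pi*t), np.zeros_like(t)], axis=1)
b[500:, 2] += 2.0
pvi = pvi_series(b, tau=1)
print(pvi.argmax(), pvi.max())   # the PVI peak flags the discontinuity
```

Events are usually selected by thresholding the series (e.g. PVI above a few); here the peak coincides with the inserted discontinuity.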
Integrated visualization of simulation results and experimental devices in virtual-reality space
International Nuclear Information System (INIS)
Ohtani, Hiroaki; Ishiguro, Seiji; Shohji, Mamoru; Kageyama, Akira; Tamura, Yuichi
2011-01-01
We succeeded in integrating the visualization of simulation results and experimental device data in virtual-reality (VR) space using a CAVE system. Simulation results are shown using Virtual LHD software, which can display magnetic field lines, particle trajectories, and isosurfaces of plasma pressure of the Large Helical Device (LHD) based on data from magnetohydrodynamic equilibrium simulations. A three-dimensional mouse, or wand, interactively sets the initial position and pitch angle of a drift particle, or the starting point of a magnetic field line, in the VR space. The trajectory of the particle and the streamline of the magnetic field are then calculated using the Runge-Kutta-Huta integration method once the initial condition has been specified. The LHD vessel is visualized based on CAD data. Combining these results and data, the simulated LHD plasma can be drawn interactively within the objective description of the LHD experimental vessel. Through this integrated visualization it is possible to grasp the three-dimensional relationship between the positions of the device and the plasma in the VR space, opening a new path for future research. (author)
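Field-line tracing of this kind reduces to integrating dx/ds = B/|B| from the selected starting point. A sketch with classical 4th-order Runge-Kutta on a hypothetical analytic field (the actual software integrates the LHD equilibrium field with the related Runge-Kutta-Huta scheme):

```python
import numpy as np

def b_field(x):
    """Hypothetical analytic field standing in for the LHD equilibrium data:
    purely azimuthal, so its field lines are circles around the z-axis."""
    return np.array([-x[1], x[0], 0.0])

def trace_field_line(x0, ds=1e-3, n_steps=1000):
    """Integrate dx/ds = B/|B| with classical 4th-order Runge-Kutta."""
    def f(x):
        b = b_field(x)
        return b / np.linalg.norm(b)         # unit tangent along the field
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        k1 = f(x)
        k2 = f(x + 0.5*ds*k1)
        k3 = f(x + 0.5*ds*k2)
        k4 = f(x + ds*k3)
        x = x + ds*(k1 + 2*k2 + 2*k3 + k4)/6.0
        path.append(x.copy())
    return np.array(path)

line = trace_field_line([1.0, 0.0, 0.0])
radii = np.linalg.norm(line[:, :2], axis=1)
print(radii.min(), radii.max())   # the traced line stays on the r = 1 circle
```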
Adaptive and dynamic meshing methods for numerical simulations
Acikgoz, Nazmiye
-hoc application of the simulated annealing technique, which improves the likelihood of removing poor elements from the grid. Moreover, a local implementation of the simulated annealing is proposed to reduce the computational cost. Many challenging multi-physics and multi-field problems that are unsteady in nature are characterized by moving boundaries and/or interfaces. When the boundary displacements are large, which typically occurs when implicit time marching procedures are used, degenerate elements are easily formed in the grid such that frequent remeshing is required. To deal with this problem, in the second part of this work, we propose a new r-adaptation methodology. The new technique is valid for both simplicial (e.g., triangular, tet) and non-simplicial (e.g., quadrilateral, hex) deforming grids that undergo large imposed displacements at their boundaries. A two- or three-dimensional grid is deformed using a network of linear springs composed of edge springs and a set of virtual springs. The virtual springs are constructed in such a way as to oppose element collapsing. This is accomplished by confining each vertex to its ball through springs that are attached to the vertex and its projection on the ball entities. The resulting linear problem is solved using a preconditioned conjugate gradient method. The new method is compared with the classical spring analogy technique in two- and three-dimensional examples, highlighting the performance improvements achieved by the new method. Meshes are an important part of numerical simulations. Depending on the geometry and flow conditions, the most suitable mesh for each particular problem is different. Meshes are usually generated by either using a suitable software package or solving a PDE. In both cases, engineering intuition plays a significant role in deciding where clusterings should take place. 
In addition, for unsteady problems, the gradients vary for each time step, which requires frequent remeshing during simulations
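The edge-spring network described in the second part can be sketched on a toy grid: each edge is a linear spring with stiffness inversely proportional to its length, and free vertices relax to the equilibrium of the network (the virtual anti-collapse springs and the preconditioned conjugate gradient solver of the dissertation are omitted here for brevity; a plain fixed-point sweep is used instead):

```python
import numpy as np

def spring_deform(nodes, edges, fixed, disp, iters=200):
    """Edge-spring analogy for mesh deformation: boundary vertices are moved
    by a prescribed displacement, interior vertices settle at the
    stiffness-weighted average of their neighbors."""
    edges = [tuple(sorted(e)) for e in edges]
    stiff = {e: 1.0/np.linalg.norm(nodes[e[0]] - nodes[e[1]]) for e in edges}
    nbrs = {i: [] for i in range(len(nodes))}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    x = nodes.astype(float).copy()
    for i, d in zip(fixed, disp):
        x[i] = nodes[i] + d                      # impose boundary motion
    free = [i for i in range(len(nodes)) if i not in set(fixed)]
    for _ in range(iters):                       # relax the spring network
        for i in free:
            w = np.array([stiff[tuple(sorted((i, j)))] for j in nbrs[i]])
            x[i] = (w[:, None] * x[nbrs[i]]).sum(axis=0) / w.sum()
    return x

# 3x3 unit grid; every vertex but the center is fixed, the top row is
# lifted by 0.5, and the single free (center) vertex relaxes in between.
nodes = np.array([[i, j] for j in range(3) for i in range(3)], dtype=float)
edges = [(j*3+i, j*3+i+1) for j in range(3) for i in range(2)] + \
        [(j*3+i, (j+1)*3+i) for j in range(2) for i in range(3)]
fixed = [k for k in range(9) if k != 4]
disp = [np.array([0.0, 0.5]) if k >= 6 else np.zeros(2) for k in fixed]
new = spring_deform(nodes, edges, fixed, disp)
print(new[4])   # center equilibrates at the stiffness-weighted average
```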
3D simulation of friction stir welding based on movable cellular automaton method
Eremina, Galina M.
2017-12-01
The paper is devoted to 3D computer simulation of the peculiarities of material flow in friction stir welding (FSW). The simulation was performed by the movable cellular automaton (MCA) method, a representative of particle methods in mechanics. Commonly, the flow of material in FSW is simulated using computational fluid mechanics, treating the material as a continuum and ignoring its structure. The MCA method instead considers a material as an ensemble of bonded particles. The rupture of interparticle bonds and the formation of new bonds enable simulations of crack nucleation and healing as well as mass mixing and microwelding. The simulation results showed that using pins of simple shape (cylinder, cone, and pyramid) without a shoulder results in small displacements of the plasticized material in the workpiece thickness direction. Nevertheless, an optimal ratio of longitudinal velocity to rotational speed makes it possible to transport the welded material around the pin several times and to produce a joint of good quality.
Biasing transition rate method based on direct MC simulation for probabilistic safety assessment
Institute of Scientific and Technical Information of China (English)
Xiao-Lei Pan; Jia-Qun Wang; Run Yuan; Fang Wang; Han-Qing Lin; Li-Qin Hu; Jin Wang
2017-01-01
Direct Monte Carlo (MC) simulation is a powerful probabilistic safety assessment method for accounting for the dynamics of a system, but it is not efficient at simulating rare events. A biasing transition rate method based on direct MC simulation is proposed in this paper to solve the problem. The method biases the transition rates of the components by adding virtual components to them in series, increasing the occurrence probability of the rare event and hence decreasing the variance of the MC estimator. Several cases are used to benchmark the method. The results show that it is effective at modeling system failure and is more efficient at collecting evidence of rare events than direct MC simulation; the performance is greatly improved by the biasing transition rate method.
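The underlying idea is importance sampling on transition times: sample at an inflated rate and reweight each sample with the likelihood ratio so that the estimator stays unbiased for the true rate. A sketch on a single exponential transition (rates and horizon are made-up numbers; the paper's construction via virtual series components is not reproduced):

```python
import math
import random

def prob_failure_before(t, lam, lam_biased, n=200000, seed=1):
    """Estimate P(T < t) for an exponential failure time with true rate lam
    by sampling at a biased (inflated) rate lam_biased and reweighting each
    sample with the likelihood ratio
        w = (lam/lam_biased) * exp(-(lam - lam_biased)*T),
    which keeps the estimator unbiased for the true rate."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        T = rng.expovariate(lam_biased)        # draw from the biased density
        if T < t:
            total += (lam/lam_biased) * math.exp(-(lam - lam_biased)*T)
    return total / n

est = prob_failure_before(1.0, lam=1e-4, lam_biased=1.0)
exact = 1.0 - math.exp(-1e-4)
print(est, exact)   # the reweighted estimate matches the rare-event probability
```

With the unbiased rate, only about one sample in ten thousand would contribute; the biased rate makes most samples contribute, and the weights restore the correct expectation.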
Image restoration by the method of convex projections: part 2 applications and numerical results.
Sezan, M I; Stark, H
1982-01-01
The image restoration theory discussed in a previous paper by Youla and Webb [1] is applied to a simulated image, and the results are compared with those of the well-known Gerchberg-Papoulis algorithm. The results show that the method of image restoration by projection onto convex sets, by providing a convenient technique for utilizing a priori information, performs significantly better than the Gerchberg-Papoulis method.
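The Gerchberg-Papoulis algorithm used as the baseline is itself a special case of projections onto convex sets, alternating between two convex constraint sets: band-limited signals and signals matching the measured samples. A small sketch on a synthetic 1D signal (sizes and band are arbitrary; convergence of this extrapolation is known to be slow):

```python
import numpy as np

def gerchberg_papoulis(known, mask, band, iters=1000):
    """Alternating projections: band-limit the signal in Fourier space,
    then re-impose the measured samples in signal space.  Both constraint
    sets are convex, so this is a special case of POCS."""
    x = np.where(mask, known, 0.0)
    for _ in range(iters):
        X = np.fft.fft(x)
        X[~band] = 0.0               # project onto the band-limited subspace
        x = np.fft.ifft(X).real
        x[mask] = known[mask]        # project onto the measured-sample set
    return x

# Band-limited test signal observed on only half of its 64 samples.
n = 64
t = np.arange(n)
sig = np.cos(2*np.pi*3*t/n) + 0.5*np.sin(2*np.pi*5*t/n)
mask = np.zeros(n, bool); mask[:32] = True        # measured samples
band = np.zeros(n, bool); band[:8] = True; band[-7:] = True
rec = gerchberg_papoulis(sig, mask, band)
err0 = np.linalg.norm(np.where(mask, sig, 0.0) - sig)
print(np.linalg.norm(rec - sig) / err0)   # error shrinks with iterations
```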
Directory of Open Access Journals (Sweden)
J. Tao
2012-09-01
Full Text Available Due to their all-weather data acquisition capability, high resolution spaceborne Synthetic Aperture Radar (SAR) sensors play an important role in remote sensing applications such as change detection. However, because of the complex geometric mapping of buildings in urban areas, SAR images are often hard to interpret. SAR simulation techniques ease the visual interpretation of SAR images, while fully automatic interpretation is still a challenge. This paper presents a method for supporting the interpretation of high resolution SAR images with simulated radar images derived from a LiDAR digital surface model (DSM). Line features are extracted from the simulated and real SAR images and used for matching. A single building model is generated from the DSM and used for building recognition in the SAR image. The concept is demonstrated for the city centre of Munich, where the comparison of the simulation to the TerraSAR-X data shows good similarity. Based on the result of simulation and matching, special features (e.g. double-bounce lines, shadow areas, etc.) can be automatically indicated in the SAR image.
The Simulation and Analysis of the Closed Die Hot Forging Process by A Computer Simulation Method
Directory of Open Access Journals (Sweden)
Dipakkumar Gohil
2012-06-01
Full Text Available The objective of this research work is to study the variation of various parameters such as stress, strain, temperature, and force during the closed die hot forging process. A computer simulation modeling approach has been adopted to transform the theoretical aspects into a computer algorithm used to simulate and analyze the closed die hot forging process. For the purpose of process study, the entire deformation process has been divided into a finite number of steps, and the output values have been computed at each deformation step. The results of the simulation have been represented graphically, and suitable corrective measures are recommended where the simulation results do not agree with the theoretical values. This computer simulation approach would significantly improve productivity and reduce the energy consumption of the overall process for components manufactured by closed die forging, contributing to efforts to reduce global warming.
How does the rigid-lid assumption affect LES simulation results in high Reynolds number flows?
Khosronejad, Ali; Farhadzadeh, Ali; SBU Collaboration
2017-11-01
This research is motivated by the work of Kara et al. (JHE, 2015), who employed LES to model flow around an abutment model at a Re number of 27,000. They showed that first-order turbulence characteristics obtained with the rigid-lid (RL) assumption compare fairly well with those of the level-set (LS) method. Concerning the second-order statistics, however, their simulation results showed a significant dependence on the method used to describe the free surface. This finding can have important implications for open channel flow modeling, since the Reynolds number of typical open channel flows can be much larger than that of Kara et al.'s test case. Herein, we replicate the reported study, augmenting the geometric and hydraulic scales to reach a Re number one order of magnitude larger (~200,000). The Virtual Flow Simulator (VFS-Geophysics) model in its LES mode is used to simulate the test case using both the RL and LS methods. The computational results are validated using measured flow and free-surface data from our laboratory experiments. Our goal is to investigate the effects of the RL assumption on both first-order and second-order statistics at the high Reynolds numbers that occur in natural waterways. Acknowledgment: computational resources are provided by the Center of Excellence in Wireless & Information Technology (CEWIT) of Stony Brook University.
COSMIC EVOLUTION OF DUST IN GALAXIES: METHODS AND PRELIMINARY RESULTS
International Nuclear Information System (INIS)
Bekki, Kenji
2015-01-01
We investigate the redshift (z) evolution of dust mass and abundance, their dependences on the initial conditions of galaxy formation, and the physical correlations between dust, gas, and stellar contents at different z based on our original chemodynamical simulations of galaxy formation with dust growth and destruction. In this preliminary investigation, we first determine reasonable ranges for the two most important parameters for dust evolution, i.e., the timescales of dust growth and destruction, by comparing the observed and simulated dust masses and abundances and the molecular hydrogen (H2) content of the Galaxy. We then investigate the z-evolution of dust-to-gas ratios (D), H2 gas fractions (f_H2), and gas-phase chemical abundances (e.g., A_O = 12 + log(O/H)) in the simulated disk and dwarf galaxies. The principal results are as follows. Both D and f_H2 can rapidly increase during the early dissipative formation of galactic disks (z ∼ 2-3), and their z-evolution depends on the initial mass densities, spin parameters, and masses of the galaxies. The observed A_O-D relation can be qualitatively reproduced, but the simulated dispersion of D at a given A_O is smaller. The simulated galaxies with larger total dust masses show larger H2 and stellar masses and higher f_H2. Disk galaxies show negative radial gradients of D, and the gradients are steeper for more massive galaxies. The observed evolution of dust masses and dust-to-stellar-mass ratios between z = 0 and 0.4 cannot be reproduced so well by the simulated disks. Very extended dusty gaseous halos can be formed during the hierarchical buildup of disk galaxies. Dust-to-metal ratios (i.e., dust-depletion levels) differ within a single galaxy and between different galaxies at different z.
Hybrid statistics-simulations based method for atom-counting from ADF STEM images
Energy Technology Data Exchange (ETDEWEB)
De wael, Annelies, E-mail: annelies.dewael@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); De Backer, Annick [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Jones, Lewys; Nellist, Peter D. [Department of Materials, University of Oxford, Parks Road, OX1 3PH Oxford (United Kingdom); Van Aert, Sandra, E-mail: sandra.vanaert@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium)
2017-06-15
A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. - Highlights: • A hybrid method for atom-counting from ADF STEM images is introduced. • Image simulations are incorporated into a statistical framework in a reliable manner. • Limits of the existing methods for atom-counting are far exceeded. • Reliable counting results from an experimental low dose image are obtained. • Progress towards reliable quantitative analysis of beam-sensitive materials is made.
On Partitioned Simulation of Electrical Circuits using Dynamic Iteration Methods
Ebert, Falk
2008-01-01
This thesis investigates the partitioned simulation of electrical circuits: a technique in which different parts of a circuit are treated numerically in different ways in order to obtain a simulation of the complete circuit. Particular attention is paid to two points. First, all analytical results should admit a graph-theoretical interpretation. This requirement stems from the fact that circuit equations ...
Simulation of quantum systems by the tomography Monte Carlo method
International Nuclear Information System (INIS)
Bogdanov, Yu I
2007-01-01
A new method of statistical simulation of quantum systems is presented, which is based on the generation of data by the Monte Carlo method and their purposeful tomography with energy minimisation. The numerical solution of the problem is based on the optimisation of a target functional providing a compromise between maximisation of the statistical likelihood function and energy minimisation. The method does not involve complicated and ill-posed multidimensional computational procedures and can be used to calculate the wave functions and energies of the ground and excited stationary states of complex quantum systems. Applications of the method are illustrated. (Fifth seminar in memory of D.N. Klyshko)
Vectorization of a particle simulation method for hypersonic rarefied flow
Mcdonald, Jeffrey D.; Baganoff, Donald
1988-01-01
An efficient particle simulation technique for hypersonic rarefied flows is presented at an algorithmic and implementation level. The implementation is for a vector computer architecture, specifically the Cray-2. The method models an ideal diatomic Maxwell molecule with three translational and two rotational degrees of freedom. Algorithms are designed specifically for compatibility with fine grain parallelism by reducing the number of data dependencies in the computation. By insisting on this compatibility, the method is capable of performing simulation on a much larger scale than previously possible. A two-dimensional simulation of supersonic flow over a wedge is carried out for the near-continuum limit where the gas is in equilibrium and the ideal solution can be used as a check on the accuracy of the gas model employed in the method. Also, a three-dimensional, Mach 8, rarefied flow about a finite-span flat plate at a 45 degree angle of attack was simulated. It utilized over 10^7 particles carried through 400 discrete time steps in less than one hour of Cray-2 CPU time. This problem was chosen to exhibit the capability of the method in handling a large number of particles and a true three-dimensional geometry.
A multiscale quantum mechanics/electromagnetics method for device simulations.
Yam, ChiYung; Meng, Lingyi; Zhang, Yu; Chen, GuanHua
2015-04-07
Multiscale modeling has become a popular tool for research applying to different areas including materials science, microelectronics, biology, chemistry, etc. In this tutorial review, we describe a newly developed multiscale computational method, incorporating quantum mechanics into electronic device modeling with the electromagnetic environment included through classical electrodynamics. In the quantum mechanics/electromagnetics (QM/EM) method, the regions of the system where active electron scattering processes take place are treated quantum mechanically, while the surroundings are described by Maxwell's equations and a semiclassical drift-diffusion model. The QM model and the EM model are solved, respectively, in different regions of the system in a self-consistent manner. Potential distributions and current densities at the interface between QM and EM regions are employed as the boundary conditions for the quantum mechanical and electromagnetic simulations, respectively. The method is illustrated in the simulation of several realistic systems. In the case of junctionless field-effect transistors, transfer characteristics are obtained and a good agreement between experiments and simulations is achieved. Optical properties of a tandem photovoltaic cell are studied and the simulations demonstrate that multiple QM regions are coupled through the classical EM model. Finally, the study of a carbon nanotube-based molecular device shows the accuracy and efficiency of the QM/EM method.
A simulation based engineering method to support HAZOP studies
DEFF Research Database (Denmark)
Enemark-Rasmussen, Rasmus; Cameron, David; Angelo, Per Bagge
2012-01-01
the conventional HAZOP procedure. The method systematically generates failure scenarios by considering process equipment deviations with pre-defined failure modes. The effect of the failure scenarios is then evaluated using dynamic simulations; in this study, the K-Spice® software was used. The consequences of each failure...
Correction of measured multiplicity distributions by the simulated annealing method
International Nuclear Information System (INIS)
Hafidouni, M.
1993-01-01
Simulated annealing is a method used to solve combinatorial optimization problems. It is used here for the correction of the observed multiplicity distribution from S-Pb collisions at 200 GeV/c per nucleon. (author) 11 refs., 2 figs
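A generic sketch of this kind of correction: find a distribution whose smeared version matches the measurement, with simulated annealing doing the minimization (toy 4-bin response matrix and cooling schedule; the paper's response and implementation will differ):

```python
import math
import random

def anneal_unfold(measured, response, steps=20000, t0=1.0, seed=7):
    """Recover a 'true' distribution p such that the smeared spectrum R.p
    matches the measured one: random single-bin perturbations accepted with
    the Metropolis rule under a geometric cooling schedule."""
    rng = random.Random(seed)
    n = len(measured)

    def cost(q):
        return sum((sum(response[i][j]*q[j] for j in range(n)) - measured[i])**2
                   for i in range(n))

    p = list(measured)                     # start from the measured spectrum
    c = cost(p)
    for k in range(steps):
        T = t0 * 0.999**k                  # geometric cooling
        i = rng.randrange(n)
        q = list(p)
        q[i] = max(0.0, q[i] + rng.uniform(-0.05, 0.05))
        cq = cost(q)
        if cq < c or rng.random() < math.exp((c - cq)/T):
            p, c = q, cq                   # Metropolis acceptance
    return p, c

# Toy 4-bin response matrix (columns sum to 1) and its smeared spectrum.
true = [0.1, 0.4, 0.3, 0.2]
R = [[0.8, 0.1, 0.0, 0.0],
     [0.2, 0.8, 0.1, 0.0],
     [0.0, 0.1, 0.8, 0.2],
     [0.0, 0.0, 0.1, 0.8]]
measured = [sum(R[i][j]*true[j] for j in range(4)) for i in range(4)]
p, c = anneal_unfold(measured, R)
print(c)   # residual cost driven close to zero
```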
Kinematics and simulation methods to determine the target thickness
International Nuclear Information System (INIS)
Rosales, P.; Aguilar, E.F.; Martinez Q, E.
2001-01-01
Making use of kinematics and of the energy loss of particles, two methods for calculating the thickness of a target are described: one through a computer program and the other through simulation, in which experimentally obtained parameters are used. Several values for the thickness of a 12 C target were obtained. A comparison of the values obtained with each of the programs used is presented. (Author)
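For the energy-loss route, a thin-foil estimate is simply the measured energy loss divided by the stopping power; the numbers below are hypothetical, not the paper's measured values:

```python
def target_thickness(e_in, e_out, stopping_power):
    """Thin-target estimate: areal thickness t ~ dE / (dE/dx), i.e. the
    measured energy loss divided by the stopping power (assumed constant
    across a thin foil)."""
    return (e_in - e_out) / stopping_power

# Hypothetical numbers: a 12.00 MeV beam exits a 12C foil with 11.75 MeV,
# with a stopping power of 500 MeV/(g/cm^2) at that energy.
t = target_thickness(12.00, 11.75, 500.0)       # areal thickness in g/cm^2
print(round(t * 1e6, 3))                        # -> 500.0 (micrograms/cm^2)
```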
STUDY ON SIMULATION METHOD OF AVALANCHE : FLOW ANALYSIS OF AVALANCHE USING PARTICLE METHOD
塩澤, 孝哉
2015-01-01
In this paper, modeling for the simulation of avalanches by a particle method is discussed. There are two kinds of snow avalanches: the surface avalanche, which shows a smoke-like flow, and the total-layer avalanche, which flows like a Bingham fluid. In the simulation of the surface avalanche, a particle method incorporating a rotation-resistance model is used. A particle method with a Bingham-fluid model is used in the simulation of the total-layer avalanche. At t...
A method of simulating and visualizing nuclear reactions
International Nuclear Information System (INIS)
Atwood, C.H.; Paul, K.M.
1994-01-01
Teaching nuclear reactions to students is difficult because the mechanisms are complex and directly visualizing them is impossible. As a teaching tool, the authors have developed a method of simulating nuclear reactions using colliding water droplets. Videotape of the collisions, taken with a high-shutter-speed camera and run frame by frame, shows details of the collisions that are analogous to nuclear reactions. The method for colliding the water drops and videotaping the collisions is shown.
Ozcan, Aydin; Perego, Claudio; Salvalaglio, Matteo; Parrinello, Michele; Yazaydin, Ozgur
2017-05-01
In this study, we introduce a new non-equilibrium molecular dynamics simulation method to perform simulations of concentration driven membrane permeation processes. The methodology is based on the application of a non-conservative bias force controlling the concentration of species at the inlet and outlet of a membrane. We demonstrate our method for pure methane, ethane and ethylene permeation and for ethane/ethylene separation through a flexible ZIF-8 membrane. Results show that a stationary concentration gradient is maintained across the membrane, realistically simulating an out-of-equilibrium diffusive process, and the computed permeabilities and selectivity are in good agreement with experimental results.
Hybrid vortex simulations of wind turbines using a three-dimensional viscous-inviscid panel method
DEFF Research Database (Denmark)
Ramos García, Néstor; Hejlesen, Mads Mølholm; Sørensen, Jens Nørkær
2017-01-01
A hybrid filament-mesh vortex method is proposed and validated to predict the aerodynamic performance of wind turbine rotors and to simulate the resulting wake. Its novelty consists of using a hybrid method to accurately simulate the wake downstream of the wind turbine while reducing ... a direct calculation, whereas the contribution from the large downstream wake is calculated using a mesh-based method. The hybrid method is first validated in detail against the well-known MEXICO experiment, using the direct filament method as a comparison. The second part of the validation includes a study ...
Some recent developments of the immersed interface method for flow simulation
Xu, Sheng
2017-11-01
The immersed interface method is a general methodology for solving PDEs subject to interfaces. In this talk, I will give an overview of some recent developments of the method toward the enhancement of its robustness for flow simulation. In particular, I will present with numerical results how to capture boundary conditions on immersed rigid objects, how to adopt interface triangulation in the method, and how to parallelize the method for flow with moving objects. With these developments, the immersed interface method can achieve accurate and efficient simulation of a flow involving multiple moving complex objects. Thanks to NSF for the support of this work under Grant NSF DMS 1320317.
Polymers undergoing inhomogeneous adsorption: exact results and Monte Carlo simulations
Energy Technology Data Exchange (ETDEWEB)
Iliev, G K [Department of Mathematics, University of Melbourne, Parkville, Victoria (Australia); Orlandini, E [Dipartimento di Fisica, CNISM, Universita di Padova, Via Marzolo 8, 35131 Padova (Italy); Whittington, S G, E-mail: giliev@yorku.ca [Department of Chemistry, University of Toronto, Toronto (Canada)
2011-10-07
We consider several types of inhomogeneous polymer adsorption. In each case, the inhomogeneity is regular and resides in the surface, in the polymer or in both. We consider two different polymer models: a directed walk model that can be solved exactly and a self-avoiding walk model which we investigate using Monte Carlo methods. In each case, we compute the phase diagram. We compare and contrast the phase diagrams and give qualitative arguments about their forms. (paper)
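The exactly solvable directed walk model can be illustrated with a transfer-matrix-style recursion: weight each visit to the wall by κ and propagate the partition function over heights (one common convention for the homogeneous case; for this directed model the adsorption transition is at κ_c = 2):

```python
import math

def partition(n, kappa):
    """Z for an n-step directed walk (height steps +-1, constrained h >= 0):
    each step landing on the wall (h = 0) carries a Boltzmann weight kappa.
    Exact via the transfer recursion over heights."""
    Z = {0: 1.0}
    for _ in range(n):
        Znew = {}
        for h, w in Z.items():
            for h2 in (h - 1, h + 1):
                if h2 >= 0:
                    Znew[h2] = Znew.get(h2, 0.0) + w*(kappa if h2 == 0 else 1.0)
        Z = Znew
    return sum(Z.values())

# Mean fraction of wall contacts, kappa * d(ln Z)/d(kappa) / n, below and
# above the adsorption transition.
n, dk = 200, 1e-4
for kappa in (1.0, 3.0):
    frac = kappa*(math.log(partition(n, kappa + dk)) -
                  math.log(partition(n, kappa - dk)))/(2*dk)/n
    print(kappa, round(frac, 3))   # small when desorbed, finite when adsorbed
```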
Comparison of Two Methods for Speeding Up Flash Calculations in Compositional Simulations
DEFF Research Database (Denmark)
Belkadi, Abdelkrim; Yan, Wei; Michelsen, Michael Locht
2011-01-01
Flash calculation is the most time-consuming part of compositional reservoir simulations, and several approaches have been proposed to speed it up. Two recent approaches proposed in the literature are the shadow region method and the Compositional Space Adaptive Tabulation (CSAT) method. The shadow region method reduces the computation time mainly by skipping stability analysis for a large portion of compositions in the single-phase region. In the two-phase region, a highly efficient Newton-Raphson algorithm can be employed with initial estimates from the previous step. The CSAT method saves ... and the tolerance set for accepting the feed composition are the key parameters in this method, since they influence the simulation speed and the accuracy of the simulation results. Inspired by CSAT, we propose a Tieline Distance Based Approximation (TDBA) method to get approximate flash results in the two-phase ...
SiO2-Ta2O5 sputtering yields: simulated and experimental results
International Nuclear Information System (INIS)
Vireton, E.; Ganau, P.; Mackowski, J.M.; Michel, C.; Pinard, L.; Remillieux, A.
1994-09-01
To improve mirror coatings, we have modeled the sputtering of binary oxide targets using the TRIM code. First, we propose a method to calculate the TRIM input parameters using a thermodynamic cycle on the one hand and Malherbe's results on the other. Second, an iterative procedure provides the steady-state composition of oxide targets under ionic bombardment. Third, we present a model to obtain experimental sputtering yields. Fourth, for the (Ar - SiO2) pair, we determined that the steady-state target is silica, and good agreement between simulated and experimental yields versus ion incidence angle was found. For the (Ar - Ta2O5) pair, we had to introduce the concept of preferential sputtering to explain the discrepancy between simulation and experiment; in this case, the steady-state target is tantalum monoxide. For the (Ar - Ta(+O2)) pair, tantalum sputtered by argon ions in a reactive oxygen atmosphere, we had to take into account the new concept of ion-beam-stimulated oxidation. We supposed that the tantalum target becomes a Ta2O5 target in the reactive oxygen atmosphere; the subsequent mechanism is then similar to that of the previous pair, and we again obtained a steady-state target of tantalum monoxide. Comparison between simulated and experimental sputtering yields versus ion incidence angle gives very good agreement. By simulation, we found that the tantalum monoxide target is at least 15 angstroms thick. These results are compatible with those of Malherbe and Taglauer. (authors)
Simulating Social Networks of Online Communities: Simulation as a Method for Sociability Design
Ang, Chee Siang; Zaphiris, Panayiotis
We propose the use of social simulations to study and support the design of online communities. In this paper, we developed an Agent-Based Model (ABM) to simulate and study the formation of social networks in a Massively Multiplayer Online Role Playing Game (MMORPG) guild community. We first analyzed the activities and the social network (who-interacts-with-whom) of an existing guild community to identify its interaction patterns and characteristics. Then, based on the empirical results, we derived and formalized the interaction rules, which were implemented in our simulation. Using the simulation, we reproduced the observed social network of the guild community as a means of validation. The simulation was then used to examine how various parameters of the community (e.g. the level of activity, the number of neighbors of each agent) could potentially influence the characteristics of the social networks.
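The rule-based approach described above can be sketched in a few lines of Python. The toy model below is a hedged illustration only, not the authors' MMORPG model: the reinforcement rule and all parameter values are hypothetical. Agents interact either with a random stranger or, more often, with a previous partner, so a who-interacts-with-whom network emerges from the formalized rule.

```python
import random

def simulate_guild(n_agents=60, steps=2000, p_new=0.1, seed=7):
    """Toy agent-based model of who-interacts-with-whom: each step one
    agent acts, re-interacting with a past partner with probability
    1 - p_new (a simple reinforcement rule), else meeting a stranger."""
    rng = random.Random(seed)
    partners = {i: [] for i in range(n_agents)}  # past partners (with repeats)
    edges = set()                                # distinct ties observed
    for _ in range(steps):
        a = rng.randrange(n_agents)
        if partners[a] and rng.random() > p_new:
            b = rng.choice(partners[a])          # reinforce an existing tie
        else:
            b = rng.randrange(n_agents)
            while b == a:
                b = rng.randrange(n_agents)
        partners[a].append(b)
        partners[b].append(a)
        edges.add(frozenset((a, b)))
    degree = {i: 0 for i in range(n_agents)}
    for e in edges:
        for node in e:
            degree[node] += 1
    return edges, degree

edges, degree = simulate_guild()
```

Validation in the spirit of the paper would then compare the simulated degree distribution and tie structure against those measured in the real guild community.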
Meshfree simulation of avalanches with the Finite Pointset Method (FPM)
Michel, Isabel; Kuhnert, Jörg; Kolymbas, Dimitrios
2017-04-01
Meshfree methods are the numerical method of choice in case of applications which are characterized by strong deformations in conjunction with free surfaces or phase boundaries. In the past the meshfree Finite Pointset Method (FPM) developed by Fraunhofer ITWM (Kaiserslautern, Germany) has been successfully applied to problems in computational fluid dynamics such as water crossing of cars, water turbines, and hydraulic valves. Most recently the simulation of granular flows, e.g. soil interaction with cars (rollover), has also been tackled. This advancement is the basis for the simulation of avalanches. Due to the generalized finite difference formulation in FPM, the implementation of different material models is quite simple. We will demonstrate 3D simulations of avalanches based on the Drucker-Prager yield criterion as well as the nonlinear barodesy model. The barodesy model (Division of Geotechnical and Tunnel Engineering, University of Innsbruck, Austria) describes the mechanical behavior of soil by an evolution equation for the stress tensor. The key feature of successful and realistic simulations of avalanches - apart from the numerical approximation of the occurring differential operators - is the choice of the boundary conditions (slip, no-slip, friction) between the different phases of the flow as well as the geometry. We will discuss their influences for simplified one- and two-phase flow examples. This research is funded by the German Research Foundation (DFG) and the FWF Austrian Science Fund.
Gradient augmented level set method for phase change simulations
Anumolu, Lakshman; Trujillo, Mario F.
2018-01-01
A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ (t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ (t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ (t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ (t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS over the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall the additional computational costs associated with GALS are almost the same as those using the standard level set technique.
Quantitative evaluation for training results of nuclear plant operator on BWR simulator
International Nuclear Information System (INIS)
Sato, Takao; Sato, Tatsuaki; Onishi, Hiroshi; Miyakita, Kohji; Mizuno, Toshiyuki
1985-01-01
Recently, the reliability of nuclear power plants has risen considerably, and abnormal phenomena in actual plants are rarely encountered. Therefore, training using simulators becomes more and more important. In the BWR Operator Training Center Corp., the training of operators of BWR power plants has been conducted for about ten years using a simulator having nearly the same functions as the actual plants. The recent high capacity factor of nuclear power plants has been largely supported by the excellent operators trained in this way. Taking the opportunity of the start of operation of the No. 2 simulator, effort has been exerted to quantitatively grasp the effect of training and to raise the quality of training. The outline of seven training courses is shown. The technical ability required of operators, the items for quantifying the effect of training, that is, operational errors and the time required for operations, the method of quantifying, the method of collecting the data, and the results of application to actual training are described. It was found that this method is suitable for quantifying the effect of training. (Kako, I.)
Techniques and results of tokamak-edge simulation
International Nuclear Information System (INIS)
Smith, G.R.; Brown, P.N.; Campbell, R.B.; Knoll, D.A.; McHugh, P.R.; Rensink, M.E.; Rognlien, T.D.
1995-01-01
This paper describes recent development of the UEDGE code in three important areas. (1) Non-orthogonal grids allow accurate treatment of experimental geometries in which divertor plates intersect flux surfaces at oblique angles. (2) Radiating impurities are included by means of one or more continuity equations that describe transport and sources and sinks due to ionization and recombination processes. (3) Advanced iterative methods that reduce storage and execution time allow us to find fully converged solutions of larger problems (i.e., finer grids). Sample calculations are presented to illustrate these developments. ((orig.))
Techniques and results of tokamak-edge simulation
International Nuclear Information System (INIS)
Smith, G.R.; Brown, P.N.; Rensink, M.E.; Rognlien, T.D.; Campbell, R.B.; Knoll, D.A.; McHugh, P.R.
1994-01-01
This paper describes recent development of the UEDGE code in three important areas. (1) Non-orthogonal grids allow accurate treatment of experimental geometries in which divertor plates intersect flux surfaces at oblique angles. (2) Radiating impurities are included by means of one or more continuity equations that describe transport and sources and sinks due to ionization and recombination processes. (3) Advanced iterative methods that reduce storage and execution time allow us to find fully converged solutions of larger problems (i.e., finer grids). Sample calculations are presented to illustrate these developments.
Reliability Assessment of Active Distribution System Using Monte Carlo Simulation Method
Directory of Open Access Journals (Sweden)
Shaoyun Ge
2014-01-01
Full Text Available In this paper we treat the reliability assessment problem of an active distribution system at low and high DG penetration levels using the Monte Carlo simulation method. The problem is formulated as a two-case program: a low-penetration simulation and a high-penetration simulation. The load shedding strategy and the simulation process are introduced in detail for each FMEA process. Results indicate that the integration of DG can improve the reliability of the system if the system is operated actively.
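The sampling core of a Monte Carlo reliability assessment can be illustrated with a minimal sketch. This is not the paper's FMEA-based procedure, just a hedged example assuming a hypothetical series system whose component unavailabilities are known, so the estimate can be checked against the exact value:

```python
import random

def mc_unavailability(component_unavail, n_samples=100_000, seed=1):
    """Estimate the unavailability of a series system by Monte Carlo:
    the system fails if any component is down in a sampled state."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        if any(rng.random() < q for q in component_unavail):
            failures += 1
    return failures / n_samples

# Hypothetical series system of three components with known unavailabilities.
q = [0.01, 0.02, 0.005]
est = mc_unavailability(q)
exact = 1 - (1 - 0.01) * (1 - 0.02) * (1 - 0.005)
```

Sequential Monte Carlo tools for distribution systems elaborate the same idea by sampling times to failure and times to repair for each component and accumulating interruption indices over simulated years.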
A virtual source method for Monte Carlo simulation of Gamma Knife Model C
Energy Technology Data Exchange (ETDEWEB)
Kim, Tae Hoon; Kim, Yong Kyun [Hanyang University, Seoul (Korea, Republic of); Chung, Hyun Tai [Seoul National University College of Medicine, Seoul (Korea, Republic of)
2016-05-15
The Monte Carlo simulation method has been used for dosimetry in radiation treatment. Monte Carlo simulation determines particle paths and dose deposition using random numbers. Recently, owing to the fast processing ability of computers, it has become possible to treat a patient more precisely. However, long simulation times are needed to reduce the statistical uncertainty of the results. When generating particles from the cobalt source in a simulation, many particles are cut off, so accurate simulation takes a long time. For efficiency, we generated a virtual source with the phase-space distribution acquired from a single Gamma Knife channel. We performed simulations using the virtual sources on all 201 channels and compared measurements with simulations using the virtual and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with the virtual source executed about 50 times faster than with the original source code, and there was no statistically significant difference in the simulated results.
A virtual source method for Monte Carlo simulation of Gamma Knife Model C
International Nuclear Information System (INIS)
Kim, Tae Hoon; Kim, Yong Kyun; Chung, Hyun Tai
2016-01-01
The Monte Carlo simulation method has been used for dosimetry in radiation treatment. Monte Carlo simulation determines particle paths and dose deposition using random numbers. Recently, owing to the fast processing ability of computers, it has become possible to treat a patient more precisely. However, long simulation times are needed to reduce the statistical uncertainty of the results. When generating particles from the cobalt source in a simulation, many particles are cut off, so accurate simulation takes a long time. For efficiency, we generated a virtual source with the phase-space distribution acquired from a single Gamma Knife channel. We performed simulations using the virtual sources on all 201 channels and compared measurements with simulations using the virtual and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with the virtual source executed about 50 times faster than with the original source code, and there was no statistically significant difference in the simulated results.
LDRD Final Report: Adaptive Methods for Laser Plasma Simulation
International Nuclear Information System (INIS)
Dorr, M R; Garaizar, F X; Hittinger, J A
2003-01-01
The goal of this project was to investigate the utility of parallel adaptive mesh refinement (AMR) in the simulation of laser plasma interaction (LPI). The scope of work included the development of new numerical methods and parallel implementation strategies. The primary deliverables were (1) parallel adaptive algorithms to solve a system of equations combining plasma fluid and light propagation models, (2) a research code implementing these algorithms, and (3) an analysis of the performance of parallel AMR on LPI problems. The project accomplished these objectives. New algorithms were developed for the solution of a system of equations describing LPI. These algorithms were implemented in a new research code named ALPS (Adaptive Laser Plasma Simulator) that was used to test the effectiveness of the AMR algorithms on the Laboratory's large-scale computer platforms. The details of the algorithm and the results of the numerical tests were documented in an article published in the Journal of Computational Physics [2]. A principal conclusion of this investigation is that AMR is most effective for LPI systems that are ''hydrodynamically large'', i.e., problems requiring the simulation of a large plasma volume relative to the volume occupied by the laser light. Since the plasma-only regions require less resolution than the laser light, AMR enables the use of efficient meshes for such problems. In contrast, AMR is less effective for, say, a single highly filamented beam propagating through a phase plate, since the resulting speckle pattern may be too dense to adequately separate scales with a locally refined mesh. Ultimately, the gain to be expected from the use of AMR is highly problem-dependent. One class of problems investigated in this project involved a pair of laser beams crossing in a plasma flow. Under certain conditions, energy can be transferred from one beam to the other via a resonant interaction with an ion acoustic wave in the crossing region. AMR provides an
Using relational databases to collect and store discrete-event simulation results
DEFF Research Database (Denmark)
Poderys, Justas; Soler, José
2016-01-01
, export the results to a data carrier file and then process the results stored in a file using the data processing software. In this work, we propose to save the simulation results directly from a simulation tool to a computer database. We implemented a link between the discrete-event simulation tool...... and the database and performed performance evaluation of 3 different open-source database systems. We show that with the right choice of a database system, simulation results can be collected and exported up to 2.67 times faster, and use 1.78 times less disk space when compared to using simulation software built......
Stable water isotope simulation by current land-surface schemes:Results of IPILPS phase 1
Energy Technology Data Exchange (ETDEWEB)
Henderson-Sellers, A.; Fischer, M.; Aleinov, I.; McGuffie, K.; Riley, W.J.; Schmidt, G.A.; Sturm, K.; Yoshimura, K.; Irannejad, P.
2005-10-31
Phase 1 of isotopes in the Project for Intercomparison of Land-surface Parameterization Schemes (iPILPS) compares the simulation of two stable water isotopologues (¹H₂¹⁸O and ¹H²H¹⁶O) at the land-atmosphere interface. The simulations are off-line, with forcing from an isotopically enabled regional model for three locations selected to offer contrasting climates and ecotypes: an evergreen tropical forest, a sclerophyll eucalypt forest and a mixed deciduous wood. Here we report on the experimental framework, the quality control undertaken on the simulation results and the method of intercomparisons employed. The small number of available isotopically-enabled land-surface schemes (ILSSs) limits the drawing of strong conclusions but, despite this, there is shown to be benefit in undertaking this type of isotopic intercomparison. Although validation of isotopic simulations at the land surface must await more, and much more complete, observational campaigns, we find that the empirically-based Craig-Gordon parameterization (of isotopic fractionation during evaporation) gives adequately realistic isotopic simulations when incorporated in a wide range of land-surface codes. By introducing two new tools for understanding isotopic variability from the land surface, the Isotope Transfer Function and the iPILPS plot, we show that different hydrological parameterizations cause very different isotopic responses. We show that ILSS-simulated isotopic equilibrium is independent of the total water and energy budget (with respect to both equilibration time and state), but interestingly the partitioning of available energy and water is a function of the models' complexity.
The estimation of the measurement results with using statistical methods
International Nuclear Information System (INIS)
Velychko, O.; Gordiyenko, T.
2015-01-01
A number of international standards and guides describe various statistical methods that can be applied to the management, control, and improvement of processes for the analysis of technical measurement results. The analysis of international standards and guides on statistical methods for the estimation of measurement results, with recommendations for their application in laboratories, is described. For this analysis, cause-and-effect Ishikawa diagrams concerning the application of statistical methods to the estimation of measurement results were constructed.
The estimation of the measurement results with using statistical methods
Velychko, O.; Gordiyenko, T.
2015-02-01
A number of international standards and guides describe various statistical methods that can be applied to the management, control, and improvement of processes for the analysis of technical measurement results. The analysis of international standards and guides on statistical methods for the estimation of measurement results, with recommendations for their application in laboratories, is described. For this analysis, cause-and-effect Ishikawa diagrams concerning the application of statistical methods to the estimation of measurement results were constructed.
Modified network simulation model with token method of bus access
Directory of Open Access Journals (Sweden)
L.V. Stribulevich
2013-08-01
Full Text Available Purpose. To study the characteristics of a local network with the token method of bus access, a modified simulation model of the network was developed. Methodology. The network characteristics are determined using the developed simulation model, which is based on a state diagram of a network station with a priority-processing mechanism, both in the steady state and during the control procedures: initiation of the logical ring, and the entrance and exit of a station to and from the logical ring. Findings. A simulation model was developed from which one can obtain the dependence of the maximum waiting time in the queue for different access classes, the reaction time, and the usable bandwidth on the data rate, the number of network stations, the request generation rate, the number of frames transmitted per token holding time, and the frame length. Originality. A network simulation technique was proposed that reflects the network's operation in the steady state and during the control procedures, including the priority ranking and handling mechanism. Practical value. The developed simulation model allows network characteristics to be determined for real-time systems in railway transport.
Estimation of functional failure probability of passive systems based on subset simulation method
International Nuclear Information System (INIS)
Wang Dongqing; Wang Baosheng; Zhang Jianmin; Jiang Jing
2012-01-01
In order to solve the problem of multi-dimensional epistemic uncertainties and the small functional failure probability of passive systems, an innovative reliability analysis algorithm called subset simulation based on Markov chain Monte Carlo was presented. The method is founded on the idea that a small failure probability can be expressed as a product of larger conditional failure probabilities by introducing a proper choice of intermediate failure events. Markov chain Monte Carlo simulation was implemented to efficiently generate conditional samples for estimating the conditional failure probabilities. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters were considered in this paper. The probability of functional failure was then estimated with the subset simulation method. The numerical results demonstrate that the subset simulation method has high computing efficiency and excellent computing accuracy compared with traditional probability analysis methods. (authors)
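A minimal sketch of subset simulation in the sense described, estimating a rare-event probability as a product of larger conditional probabilities, can be written as follows. It assumes a one-dimensional standard normal input and a simple random-walk modified Metropolis kernel; it is an illustration of the technique, not the authors' implementation:

```python
import random, math

def subset_simulation(g, g_fail, n=1000, p0=0.1, seed=2, max_levels=8):
    """Estimate P(g(x) >= g_fail) for x ~ N(0,1) via subset simulation.
    Intermediate failure levels are chosen adaptively as the (1-p0)
    sample quantile, so each conditional probability is about p0."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    p = 1.0
    for _ in range(max_levels):
        pairs = sorted((g(x), x) for x in xs)
        n_keep = int(p0 * n)
        seeds = pairs[-n_keep:]          # samples closest to failure
        level = seeds[0][0]              # adaptive intermediate threshold
        if level >= g_fail:              # failure region reached: finish
            n_fail = sum(1 for gx, _ in pairs if gx >= g_fail)
            return p * n_fail / n
        p *= p0
        # Modified Metropolis: grow each seed chain back to n samples,
        # targeting the standard normal conditioned on g(x) >= level.
        xs = []
        for _, x in seeds:
            for _ in range(n // n_keep):
                cand = x + rng.gauss(0.0, 1.0)
                ratio = math.exp((x * x - cand * cand) / 2.0)
                if rng.random() < min(1.0, ratio) and g(cand) >= level:
                    x = cand
                xs.append(x)
    return p

# Rare event {x >= 3.5} for x ~ N(0,1); exact probability is about 2.3e-4.
p_est = subset_simulation(lambda x: x, 3.5)
```

For this event, direct Monte Carlo with 1000 samples would usually observe no failures at all, while the subset estimator resolves the probability through three or four intermediate levels.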
Directory of Open Access Journals (Sweden)
Shengwei Li
2017-01-01
Full Text Available To study the micro/mesomechanical behaviors of heterogeneous geomaterials, a multiscale simulation method that combines molecular simulation at the microscale, a mesoscale analysis of polished slices, and finite element numerical simulation is proposed. By processing the mesostructure images obtained from analyzing the polished slices of heterogeneous geomaterials and mapping them onto finite element meshes, a numerical model that more accurately reflects the mesostructures of heterogeneous geomaterials was established by combining the results with the microscale mechanical properties of geomaterials obtained from the molecular simulation. This model was then used to analyze the mechanical behaviors of heterogeneous materials. Because kernstone is a typical heterogeneous material that comprises many types of mineral crystals, it was used for the micro/mesoscale mechanical behavior analysis in this paper using the proposed method. The results suggest that the proposed method can be used to accurately and effectively study the mechanical behaviors of heterogeneous geomaterials at the micro/mesoscales.
Climate Action Gaming Experiment: Methods and Example Results
Directory of Open Access Journals (Sweden)
Clifford Singer
2015-09-01
Full Text Available An exercise has been prepared and executed to simulate international interactions on policies related to greenhouse gases and global albedo management. Simulation participants are each assigned one of six regions that together contain all of the countries in the world. Participants make quinquennial policy decisions on greenhouse gas emissions, recapture of CO2 from the atmosphere, and/or modification of the global albedo. Costs of climate change and of implementing policy decisions impact each region’s gross domestic product. Participants are tasked with maximizing economic benefits to their region while nearly stabilizing atmospheric CO2 concentrations by the end of the simulation in Julian year 2195. Results are shown where regions most adversely affected by effects of greenhouse gas emissions resort to increases in the earth’s albedo to reduce net solar insolation. These actions induce temperate region countries to reduce net greenhouse gas emissions. An example outcome is a trajectory to the year 2195 of atmospheric greenhouse emissions and concentrations, sea level, and global average temperature.
Holistic simulation of geotechnical installation processes theoretical results and applications
2017-01-01
This book provides recent developments and improvements in the modeling as well as application examples and is a complementary work to the previous Lecture Notes Vols. 77 and 80. It summarizes the fundamental work from scientists dealing with the development of constitutive models for soils, especially cyclic loading with special attention to the numerical implementation. In this volume the neo-hypoplasticity and the ISA (intergranular strain anisotropy) model in their extended version are presented. Furthermore, new contact elements with non-linear constitutive material laws and examples for their applications are given. Comparisons between the experimental and the numerical results show the effectiveness and the drawbacks and provide a useful and comprehensive pool for all the constitutive model developers and scientists in geotechnical engineering, who like to prove the soundness of new approaches.
Numerical Simulation of Tubular Pumping Systems with Different Regulation Methods
Zhu, Honggeng; Zhang, Rentian; Deng, Dongsheng; Feng, Xusong; Yao, Linbi
2010-06-01
Since the flow in tubular pumping systems is basically axial and passes symmetrically through the impeller, largely satisfying the basic hypotheses of impeller design and giving a higher pumping system efficiency than vertical pumping systems, they are widely applied in low-head pumping engineering. In a pumping station, fluctuation of the water levels in the sump and discharge pool is common, and most of the time the pumping system runs under off-design conditions. Hence, the operation of the pump has to be flexibly regulated to meet the required flow rates, and the selection of the regulation method is as important as the selection of the pump for reducing operation costs and achieving economic operation. In this paper, the three-dimensional time-averaged Navier-Stokes equations are closed by the RNG κ-ε turbulence model, and two tubular pumping systems with different regulation methods, equipped with the same pump model but with different system structures, are numerically simulated to predict the pumping system performances, analyze the influence of the regulation device, and help designers make the final decision in the selection of design schemes. The computed results indicate that the pumping system with a blade-adjusting device needs a longer suction box, and the increased hydraulic loss lowers the pumping system efficiency by about 1.5%. The pumping system with a permanent magnet motor, by means of variable-speed regulation, obtains a higher system efficiency, partly because of the shorter suction box and partly because of the different structural design. Nowadays, variable-speed regulation is realized by a variable-frequency device, whose energy consumption is about 3-4% of the motor output power. Hence, when the efficiency of the variable-frequency device is considered, the total pumping system efficiency will probably be lower.
International Nuclear Information System (INIS)
Zhao Yi; Small, Michael; Coward, David; Howell, Eric; Zhao Chunnong; Ju Li; Blair, David
2006-01-01
We describe the application of complexity estimation and the surrogate data method to identify deterministic dynamics in simulated gravitational wave (GW) data contaminated with white and coloured noise. The surrogate method uses algorithmic complexity as a discriminating statistic to decide if noisy data contain a statistically significant level of deterministic dynamics (the GW signal). The results illustrate that the complexity method is sensitive to a small-amplitude simulated GW background (SNR down to 0.08 for white noise and 0.05 for coloured noise) and is also more robust than commonly used linear methods (autocorrelation or Fourier analysis).
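The surrogate-data logic can be sketched compactly. The example below is a hedged stand-in, not the authors' pipeline: it uses a chaotic logistic-map series instead of GW data, simple shuffle surrogates (which test the weaker null hypothesis of i.i.d. noise rather than linear stochastic dynamics), and a nearest-neighbour prediction error instead of algorithmic complexity as the discriminating statistic:

```python
import random

def logistic_series(n, x0=0.3):
    """Deterministic chaotic series from the logistic map x -> 4x(1-x)."""
    xs, x = [], x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        xs.append(x)
    return xs

def prediction_error(x):
    """One-step nearest-neighbour prediction error: deterministic series
    give small errors because close states have close successors."""
    n = len(x)
    total = 0.0
    for i in range(n - 1):
        j = min((k for k in range(n - 1) if k != i),
                key=lambda k: abs(x[k] - x[i]))
        total += (x[j + 1] - x[i + 1]) ** 2
    return total / (n - 1)

rng = random.Random(0)
data = logistic_series(300)
stat_data = prediction_error(data)

# Shuffle surrogates share the data's amplitude distribution but destroy
# any deterministic temporal structure (the i.i.d. null hypothesis).
stats_surr = []
for _ in range(19):
    surr = data[:]
    rng.shuffle(surr)
    stats_surr.append(prediction_error(surr))
```

With 19 surrogates, finding the data statistic below every surrogate statistic rejects the null at the 5% level; phase-randomized surrogates, which preserve the power spectrum, would test the stronger linear-noise null used in studies like the one above.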
The Application of Simulation Method in Isothermal Elastic Natural Gas Pipeline
Xing, Chunlei; Guan, Shiming; Zhao, Yue; Cao, Jinggang; Chu, Yanji
2018-02-01
The elastic pipeline mathematical model is of crucial importance in natural gas pipeline simulation because of its agreement with practical industrial cases. The numerical model of an elastic pipeline introduces non-linear complexity into the discretized equations, so the Newton-Raphson method cannot achieve fast convergence for this kind of problem. Therefore, a new Newton-based method with the Powell-Wolfe condition for simulating isothermal elastic pipeline flow is presented. The results obtained by the new method are given for the defined boundary conditions. It is shown that the method converges in all cases and significantly reduces the computational cost.
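The flavour of such a globalized Newton iteration can be sketched as follows. This uses a backtracking Armijo-type sufficient-decrease test on the residual as a simplified stand-in for the Powell-Wolfe conditions, and a toy scalar equation (arctan(x) = 0) rather than the discretized pipeline equations:

```python
import math

def damped_newton(f, df, x0, tol=1e-10, max_iter=100):
    """Newton's method with a backtracking line search on |f|: the full
    Newton step is shortened until the residual actually decreases
    (an Armijo-type condition, a simplification of Powell-Wolfe)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        step = -fx / df(x)
        t = 1.0
        # backtrack until the residual is sufficiently reduced
        while abs(f(x + t * step)) > (1.0 - 0.5 * t) * abs(fx):
            t *= 0.5
            if t < 1e-12:
                break
        x += t * step
    return x

# arctan(x) = 0: undamped Newton diverges from x0 = 3, while the
# damped iteration converges to the root x = 0.
root = damped_newton(math.atan, lambda x: 1.0 / (1.0 + x * x), 3.0)
```

Undamped Newton fails here because the full step overshoots the root; shortening the step until the residual decreases restores global convergence while keeping the fast local rate, the same motivation as in the pipeline solver above.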
Directory of Open Access Journals (Sweden)
Željko Gavrić
2018-01-01
Full Text Available Wireless sensor networks are now used in various fields. The information transmitted in wireless sensor networks is very sensitive, so the security issue is very important. DoS (denial-of-service) attacks are a fundamental threat to the functioning of wireless sensor networks. This paper describes some of the most common DoS attacks and potential methods of protection against them. The case study shows one of the most frequent attacks on wireless sensor networks, the interference attack. In the introduction of this paper the authors assume that an interference attack can cause significant disruption of wireless sensor networks. This assumption is proved in the case study through a simulation scenario and simulation results.
From fuel cells to batteries: Synergies, scales and simulation methods
Bessler, Wolfgang G.
2011-01-01
The recent years have shown a dynamic growth of battery research and development activities both in academia and industry, supported by large governmental funding initiatives throughout the world. A particular focus is being put on lithium-based battery technologies. This situation provides a stimulating environment for the fuel cell modeling community, as there are considerable synergies in the modeling and simulation methods for fuel cells and batteries. At the same time, batter...
A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure
International Nuclear Information System (INIS)
Liu Jizhi; Chen Xingbi
2009-01-01
A new quasi-three-dimensional (quasi-3D) numeric simulation method for a high-voltage level-shifting circuit structure is proposed. The performances of the 3D structure are analyzed by combining some 2D device structures; the 2D devices are in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D simulation method can give results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases with advantages such as saving computing time, making no demands on the high-end computer terminals, and being easy to operate. (semiconductor integrated circuits)
A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure
Energy Technology Data Exchange (ETDEWEB)
Liu Jizhi; Chen Xingbi, E-mail: jzhliu@uestc.edu.c [State Key Laboratory of Electronic Thin Films and Integrated Devices, University of Electronic Science and Technology of China, Chengdu 610054 (China)
2009-12-15
A new quasi-three-dimensional (quasi-3D) numeric simulation method for a high-voltage level-shifting circuit structure is proposed. The performances of the 3D structure are analyzed by combining some 2D device structures; the 2D devices are in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D simulation method can give results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases with advantages such as saving computing time, making no demands on the high-end computer terminals, and being easy to operate. (semiconductor integrated circuits)
EQUITY SHARES EQUATING THE RESULTS OF FCFF AND FCFE METHODS
Directory of Open Access Journals (Sweden)
Bartłomiej Cegłowski
2012-06-01
Full Text Available The aim of the article is to present a method of establishing the equity share in the weighted average cost of capital (WACC) in which the value of debt capital results from the fixed assumptions accepted in the financial plan (for example, a schedule of loan repayments) and equity is valued by means of a discount method. The described method ensures that, regardless of whether cash flows are calculated as FCFF or FCFE, the result of the company valuation will be identical.
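The circularity the article addresses, that the WACC weights need the equity value while the equity value needs the WACC, can be resolved numerically by fixed-point iteration. The sketch below uses hypothetical perpetuity figures (constant FCFF, constant debt, made-up rates), a simplification of the article's schedule-based setting, and checks that the FCFF and FCFE valuations of equity then coincide:

```python
def equity_value_fcff(fcff, debt, ke, kd, tax, tol=1e-10):
    """Resolve the WACC circularity by fixed-point iteration: the equity
    weight needs the equity value, which needs the WACC (perpetuity case)."""
    equity = fcff / ke  # initial guess
    for _ in range(1000):
        wacc = (ke * equity + kd * (1 - tax) * debt) / (equity + debt)
        new_equity = fcff / wacc - debt
        if abs(new_equity - equity) < tol:
            break
        equity = new_equity
    return equity

# Hypothetical inputs: FCFF 120, debt 400, ke 12%, kd 6%, tax 25%.
fcff, debt, ke, kd, tax = 120.0, 400.0, 0.12, 0.06, 0.25
e_fcff = equity_value_fcff(fcff, debt, ke, kd, tax)

# Direct FCFE valuation: FCFE = FCFF - after-tax interest, discounted at ke.
fcfe = fcff - kd * (1 - tax) * debt
e_fcfe = fcfe / ke
```

In the perpetuity case the identity FCFF = ke*E + kd*(1-tax)*D makes the equivalence exact; with a varying debt schedule, the same consistency must hold period by period for the two methods to agree.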
Modeling and Simulation of DC Power Electronics Systems Using Harmonic State Space (HSS) Method
DEFF Research Database (Denmark)
Kwon, Jun Bum; Wang, Xiongfei; Bak, Claus Leth
2015-01-01
For the efficiency and simplicity of electric systems, dc-based power electronics systems are widely used in a variety of applications such as electric vehicles, ships, aircraft and also in homes. In these systems, there can be a number of dynamic interactions between loads and other dc-dc..... Models based on state-space averaging and generalized averaging also have limitations in reproducing the same results as non-linear time-domain simulations. This paper presents a modeling and simulation method for a large dc power electronic system using Harmonic State Space (HSS) modeling...... Through this method, the required computation time and CPU memory for large dc power electronics systems can be reduced. Moreover, the achieved results agree with the non-linear time-domain simulation but with a faster simulation time, which is beneficial in a large network.
A computer method for simulating the decay of radon daughters
International Nuclear Information System (INIS)
Hartley, B.M.
1988-01-01
The analytical equations representing the decay of a series of radioactive atoms through a number of daughter products are well known. These equations are for an idealized case in which the expectation value of the number of atoms which decay in a certain time can be represented by a smooth curve. The real curve of the total number of disintegrations from a radioactive species consists of a series of Heaviside step functions, with the steps occurring at the times of the disintegrations. The disintegration of radioactive atoms is said to be random, but this random behaviour is such that a single species forms an ensemble whose disintegration times follow a geometric distribution. Numbers which have a geometric distribution can be generated by computer and can be used to simulate the decay of one or more radioactive species. A computer method is described for simulating such decay of radioactive atoms, and this method is applied specifically to the decay of the short half-life daughters of radon-222 and the emission of alpha particles from polonium-218 and polonium-214. Repeating the simulation of the decay a number of times provides a method for investigating the statistical uncertainty inherent in methods for measurement of exposure to radon daughters. This statistical uncertainty is difficult to investigate analytically, since the time of decay of an atom of polonium-218 is not independent of the time of decay of the subsequent polonium-214. The method is currently being used to investigate the statistical uncertainties of a number of commonly used methods for the counting of alpha particles from radon daughters and the calculation of exposure.
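The simulation idea described above can be sketched as follows. This is a hedged illustration, not the author's code: it uses textbook half-lives for the short-lived radon-222 daughters and samples exponential waiting times (the continuous analogue of the geometric distribution) for each decay in the chain. The correlation between the polonium-218 and polonium-214 decay times of the same atom is preserved automatically, since each atom walks the whole chain:

```python
import math
import random
import statistics

# Approximate textbook half-lives, in seconds (assumed values for illustration)
HALF_LIFE = {"Po218": 3.05 * 60, "Pb214": 26.8 * 60,
             "Bi214": 19.7 * 60, "Po214": 164e-6}
CHAIN = ["Po218", "Pb214", "Bi214", "Po214"]   # decay order after Rn-222
ALPHA = {"Po218", "Po214"}                     # alpha emitters in the chain

def decay_time(nuclide):
    """Sample one exponential waiting time for the given nuclide."""
    lam = math.log(2) / HALF_LIFE[nuclide]
    return random.expovariate(lam)

def alpha_counts(n_atoms, t_count):
    """Count alpha particles emitted within [0, t_count] by n_atoms of Po-218."""
    count = 0
    for _ in range(n_atoms):
        t = 0.0
        for nuclide in CHAIN:
            t += decay_time(nuclide)           # time of this disintegration
            if t > t_count:
                break                          # rest of the chain decays later
            if nuclide in ALPHA:
                count += 1
    return count

# Repeat the simulated count to estimate its statistical spread
runs = [alpha_counts(500, 3600) for _ in range(20)]
mean, sd = statistics.mean(runs), statistics.stdev(runs)
```

Repeating `alpha_counts` many times, as in `runs`, gives exactly the kind of empirical uncertainty estimate the abstract describes.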
Analysis of Monte Carlo methods for the simulation of photon transport
International Nuclear Information System (INIS)
Carlsson, G.A.; Kusoffsky, L.
1975-01-01
In connection with the transport of low-energy photons (30 - 140 keV) through layers of water of different thicknesses, various aspects of Monte Carlo methods are examined in order to improve their effectiveness (to produce statistically more reliable results in shorter computing times) and to bridge the gap between more physical methods and more mathematical ones. The calculations are compared with results of experiments involving the simulation of photon transport, using direct methods and collision density methods. (J.S.)
New method of scoliosis assessment: preliminary results using computerized photogrammetry.
Aroeira, Rozilene Maria Cota; Leal, Jefferson Soares; de Melo Pertence, Antônio Eustáquio
2011-09-01
A new method for nonradiographic evaluation of scoliosis was independently compared with the Cobb radiographic method for the quantification of scoliotic curvature. The aims were to develop a protocol for computerized photogrammetry, as a nonradiographic method, for the quantification of scoliosis, and to mathematically relate this proposed method to the Cobb radiographic method. Repeated exposure of children to radiation can be harmful to their health. Nevertheless, no nonradiographic method proposed until now has gained popularity as a routine evaluation method, mainly due to a low correspondence with the Cobb radiographic method. Patients undergoing standing posteroanterior full-length spine radiographs, who were willing to participate in this study, were submitted to dorsal digital photography in the orthostatic position with special surface markers over the spinous processes, specifically of the vertebrae C7 to L5. The radiographic and photographic images were sent separately for independent analysis to two examiners, each trained in quantification of scoliosis for the type of image received. The scoliosis curvature angles obtained through computerized photogrammetry (the new method) were compared to those obtained through the Cobb radiographic method. Sixteen individuals were evaluated (14 female and 2 male). All presented idiopathic scoliosis, were aged 21.4 ± 6.1 years, weighed 52.9 ± 5.8 kg, and were 1.63 ± 0.05 m tall, with a body mass index of 19.8 ± 0.2. There was no statistically significant difference between the scoliosis angle measurements obtained in the comparative analysis of the two methods, and a mathematical relationship between the methods was formulated. The preliminary results demonstrate equivalence between the two methods. More studies are needed to firmly assess the potential of this new method as a coadjuvant tool in the routine follow-up of scoliosis treatment.
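Since the abstract does not publish the computation itself, the following is only a hypothetical sketch of how a Cobb-like angle could be derived from digitized surface-marker coordinates: each limb of the curve is represented by a pair of markers, and the angle between the two most-tilted segments is taken. All coordinates and the marker-pair choice are invented for illustration:

```python
import math

def segment_angle(p1, p2):
    """Tilt of the line p1 -> p2 with respect to the vertical axis, in degrees."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.degrees(math.atan2(dx, dy))

def cobb_like_angle(upper, lower):
    """Angle between the upper and lower marker segments (Cobb-like measure)."""
    return abs(segment_angle(*upper) - segment_angle(*lower))

# Marker coordinates (x, y) in image space -- hypothetical values
upper = ((10.0, 100.0), (14.0, 90.0))   # e.g. two upper spinous-process markers
lower = ((12.0, 40.0), (8.0, 30.0))     # e.g. two lower spinous-process markers
angle = cobb_like_angle(upper, lower)
```

The study's actual protocol and its fitted mathematical relationship to the Cobb angle would replace this naive two-segment construction.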
Radon movement simulation in overburden by the 'Scattered Packet Method'
International Nuclear Information System (INIS)
Marah, H.; Sabir, A.; Hlou, L.; Tayebi, M.
1998-01-01
The analysis of radon (222Rn) movement in overburden requires solving the general equation of transport in a porous medium, involving diffusion and convection. Generally this equation has been derived and solved analytically. The 'Scattered Packet Method' is a recent mathematical method of resolution, initially developed for studying electron movement in semiconductors. In this paper, we adapt this method to simulate radon emanation in a porous medium. The key parameters are the radon concentration at the source, the diffusion coefficient, and the geometry. To show the efficiency of this method, several cases of increasing complexity are considered. The model makes it possible to follow, in time and space, the migration of the radon produced as a function of the characteristics of the studied site. Forty soil radon measurements were taken along a North Moroccan fault. Forward modeling of the radon anomalies produces satisfactory fits to the observed data and allows determination of the overburden thickness. (author)
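The transport equation mentioned above, with diffusion, convection and radioactive decay, can be illustrated with a simple explicit finite-difference solver. This is a generic sketch, not the Scattered Packet Method itself, and the coefficients and geometry below are hypothetical placeholders:

```python
import math

def radon_profile(depth=5.0, nx=51, D=5e-6, v=1e-6,
                  lam=math.log(2) / (3.82 * 86400), c0=1.0,
                  t_end=30 * 86400, dt=200.0):
    """Explicit finite differences for dC/dt = D*C'' - v*C' - lam*C on [0, depth],
    with C = c0 at the source (x = 0) and C = 0 at the surface (x = depth).
    D: diffusion coefficient [m^2/s], v: convection velocity [m/s],
    lam: Rn-222 decay constant [1/s]. All parameter values are illustrative."""
    dx = depth / (nx - 1)
    c = [0.0] * nx
    c[0] = c0                                   # fixed source concentration
    for _ in range(int(t_end / dt)):
        new = c[:]
        for i in range(1, nx - 1):
            diff = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx ** 2
            adv = -v * (c[i + 1] - c[i - 1]) / (2 * dx)
            new[i] = c[i] + dt * (diff + adv - lam * c[i])
        c = new
    return c
```

After roughly a month of simulated time the profile approaches the steady-state balance between diffusion, convection and decay, decaying away from the source.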
Evaluation of null-point detection methods on simulation data
Olshevsky, Vyacheslav; Fu, Huishan; Vaivads, Andris; Khotyaintsev, Yuri; Lapenta, Giovanni; Markidis, Stefano
2014-05-01
We model the measurements of artificial spacecraft that resemble the configuration of CLUSTER propagating in a particle-in-cell simulation of turbulent magnetic reconnection. The simulation domain contains multiple isolated X-type null-points, but the majority are O-type null-points. Simulations show that current pinches surrounded by twisted fields, analogous to laboratory pinches, are formed along the sequences of O-type nulls. In the simulation, the magnetic reconnection is mainly driven by the kinking of the pinches, at spatial scales of several ion inertial lengths. We compute the locations of magnetic null-points and detect their type. When the satellites are separated by fractions of an ion inertial length, as they are for CLUSTER, they are able to locate both the isolated null-points and the pinches. We apply the method to the real CLUSTER data and speculate on how common pinches are in the magnetosphere, and whether they play a dominant role in the dissipation of magnetic energy.
MODFLOW equipped with a new method for the accurate simulation of axisymmetric flow
Samani, N.; Kompani-Zare, M.; Barry, D. A.
2004-01-01
Axisymmetric flow to a well is an important topic of groundwater hydraulics, the simulation of which depends on accurate computation of head gradients. Groundwater numerical models with conventional rectilinear grid geometry such as MODFLOW (in contrast to analytical models) generally have not been used to simulate aquifer test results at a pumping well because they are not designed or expected to closely simulate the head gradient near the well. A scaling method is proposed based on mapping the governing flow equation from cylindrical to Cartesian coordinates, and vice versa. A set of relationships and scales is derived to implement the conversion. The proposed scaling method is then embedded in MODFLOW 2000. To verify the accuracy of the method, steady and unsteady flows in confined and unconfined aquifers with fully or partially penetrating pumping wells are simulated and compared with the corresponding analytical solutions. In all cases a high degree of accuracy is achieved.
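One common way to realize such a cylindrical-to-Cartesian mapping, sketched below as an illustration of the idea rather than the paper's exact set of relationships, is to scale the hydraulic properties of each grid column by 2*pi*r, so that a 2-D Cartesian model reproduces radial flow toward the well:

```python
import math

def scaled_properties(r_nodes, k, ss):
    """Scale hydraulic conductivity k and specific storage ss by 2*pi*r so a
    2-D Cartesian grid mimics axisymmetric (radial) flow to a well.
    This is a sketch of the coordinate-mapping idea; the paper derives the
    full set of relationships embedded in MODFLOW 2000."""
    return ([2 * math.pi * r * k for r in r_nodes],
            [2 * math.pi * r * ss for r in r_nodes])

# Logarithmically expanding columns away from the well (common practice,
# assumed here; spacing values are illustrative)
r_nodes = [0.1 * 1.5 ** i for i in range(10)]
k_scaled, ss_scaled = scaled_properties(r_nodes, k=1e-4, ss=1e-5)
```

The geometric column expansion concentrates resolution near the well, where the head gradient is steepest.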
Scalable Methods for Eulerian-Lagrangian Simulation Applied to Compressible Multiphase Flows
Zwick, David; Hackl, Jason; Balachandar, S.
2017-11-01
Multiphase flows can be found in countless areas of physics and engineering. Many of these flows can be classified as dispersed two-phase flows, meaning that there are solid particles dispersed in a continuous fluid phase. A common technique for simulating such flow is the Eulerian-Lagrangian method. While useful, this method can suffer from scaling issues on larger problem sizes that are typical of many realistic geometries. Here we present scalable techniques for Eulerian-Lagrangian simulations and apply them to the simulation of a particle bed subjected to expansion waves in a shock tube. The results show that the methods presented here are viable for simulation of larger problems on modern supercomputers. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315138. This work was supported in part by the U.S. Department of Energy under Contract No. DE-NA0002378.
Numerical simulation of electromagnetic wave propagation using time domain meshless method
International Nuclear Information System (INIS)
Ikuno, Soichiro; Fujita, Yoshihisa; Itoh, Taku; Nakata, Susumu; Nakamura, Hiroaki; Kamitani, Atsushi
2012-01-01
The electromagnetic wave propagation in variously shaped waveguides is simulated using the meshless time domain method (MTDM). Generally, the Finite-Difference Time-Domain (FDTD) method is applied for electromagnetic wave propagation simulation. However, the numerical domain must be divided into rectangular meshes if the FDTD method is applied. On the other hand, the node disposition of MTDM can easily describe the structure of an arbitrarily shaped waveguide. This is a large advantage of the meshless time domain method. The results of the computations show that the damping rate is stably calculated in cases with R < 0.03, where R denotes the support radius of the weight function for the shape function. The results also indicate that the support radius R of the weight function should be selected small, and that monomials must be used for calculating the shape functions. (author)
Investigation of Compton scattering correction methods in cardiac SPECT by Monte Carlo simulations
International Nuclear Information System (INIS)
Silva, A.M. Marques da; Furlan, A.M.; Robilotta, C.C.
2001-01-01
The goal of this work was to use Monte Carlo simulations to investigate the effects of two scattering correction methods, dual energy window (DEW) and dual photopeak window (DPW), in quantitative cardiac SPECT reconstruction. The MCAT torso-cardiac phantom, with 99mTc and a non-uniform attenuation map, was simulated. Two different photopeak windows were evaluated in the DEW method: 15% and 20%. Two 10%-wide subwindows centered symmetrically within the photopeak were used in the DPW method. Iterative ML-EM reconstruction with a modified projector-backprojector for attenuation correction was applied. Results indicated that the choice of the scattering and photopeak windows determines the correction accuracy. For the 15% window, a fitted scatter fraction gives better results than k = 0.5. For the 20% window, DPW is the best method, but it requires parameter estimation using Monte Carlo simulations. (author)
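The DEW correction itself is simple to state: the scatter contribution inside the photopeak window is estimated as a fraction k of the counts recorded in a lower energy window and subtracted pixel by pixel, with k = 0.5 as the classic value the abstract refers to. A minimal sketch with hypothetical count values:

```python
def dew_correct(photopeak, scatter, k=0.5):
    """Dual-energy-window correction: subtract k times the counts acquired in
    a lower 'scatter' window from the photopeak-window counts, clamping the
    result at zero so no pixel goes negative."""
    return [max(p - k * s, 0.0) for p, s in zip(photopeak, scatter)]

photopeak = [100.0, 250.0, 80.0]   # counts per pixel, photopeak window
scatter = [40.0, 90.0, 30.0]       # counts per pixel, lower scatter window
primary = dew_correct(photopeak, scatter)
```

As the abstract notes, replacing the fixed k = 0.5 by a fitted scatter fraction can improve the accuracy for a 15% photopeak window.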
Multiscale Lattice Boltzmann method for flow simulations in highly heterogenous porous media
Li, Jun
2013-01-01
A lattice Boltzmann method (LBM) for flow simulations in highly heterogeneous porous media at both pore and Darcy scales is proposed in the paper. In the pore-scale simulations, flows of two phases (e.g., oil and gas) or of two immiscible fluids (e.g., water and oil) are modeled using cohesive or repulsive forces, respectively. The relative permeability can be computed using pore-scale simulations and seamlessly applied to intermediate- and Darcy-scale simulations. A multiscale LBM that can reduce the computational complexity of the existing LBM and transfer information between different scales is implemented. The results of coarse-grid, reduced-order simulations agree very well with the averaged results obtained using a fine grid.
DEFF Research Database (Denmark)
Abdallah, Imad; Sudret, Bruno; Lataniotis, Christos
2015-01-01
Fusing predictions from multiple simulators in the early stages of the conceptual design of a wind turbine results in a reduction in model uncertainty and risk mitigation. Aero-servo-elastic is a term that refers to the coupling of wind inflow, aerodynamics, structural dynamics and controls. Fusing the response data from multiple aero-servo-elastic simulators could provide better predictive ability than using any single simulator. The co-Kriging approach to fuse information from multifidelity aero-servo-elastic simulators is presented. We illustrate the co-Kriging approach to fuse the extreme flapwise bending moment at the blade root of a large wind turbine as a function of wind speed, turbulence and shear exponent in the presence of model uncertainty and non-stationary noise in the output. The extreme responses are obtained by two widely accepted numerical aero-servo-elastic simulators, FAST...
Utilisation of simulation in industrial design and resulting business opportunities (SISU) - MASIT18
Energy Technology Data Exchange (ETDEWEB)
Olin, M.; Leppaevuori, J.; Manninen, J. (VTT Technical Research Centre of Finland, Espoo (Finland)); Valli, A.; Hasari, H.; Koistinen, A.; Leppaenen, S. (Helsinki Polytechnic Stadia, City of Helsinki, Helsinki (Finland)); Lahti, S. (EVTEK University of Applied Sciences, Vantaa (Finland))
2008-07-01
In the SISU project, over 10 case studies are carried out in many different fields and applications. Results and experience from developing simulation applications have started to accumulate. One of the most important results thus far is that there are many common features, both good and bad, between our test cases. Simulation is a fast, reliable, and often low-risk method of studying different systems and processes. On the other hand, many applications need very expensive licences, plenty of parametric data and highly specialised knowledge in order to produce really valuable results. Industrial partners act as real customers in the case studies. We hope that this methodology will help us to answer our main question: how do we create a value chain from model development via model application to end users? The best outcome will be if partners learn to apply simulation productively. Other scientists and companies will follow, and new value chains will mushroom. In the case study of Mamec and EVTEK - Mixing model - the aim is to develop a fluid mechanical model for a mixing chamber. This study is similar to the preceding case of Watrec. In this study, the main problems have been in the material-properties area, because of non-Newtonian fluids and multiphase flows. Material property parameters of the non-Newtonian power law have been defined and flow field simulations have started. In the case study of Fortum and EVTEK - MDR - Measurement data reconciliation - the aim is to apply MDR in a power plant environment and to study the possibility of developing a commercial add-on tool for power plant simulation through the well-proven MDR technique based on linear filtering theory. The MDR method has been applied, for example, to energy and chemical processes. MDR is closely connected with system maintenance, simulation pre-processing and process diagnostics. Experimental work has proceeded from simple unit processes to large and complicated process systems. One
Reliability Verification of DBE Environment Simulation Test Facility by using Statistics Method
International Nuclear Information System (INIS)
Jang, Kyung Nam; Kim, Jong Soeg; Jeong, Sun Chul; Kyung Heum
2011-01-01
In nuclear power plants, all safety-related equipment, including cables, exposed to the harsh environment must undergo equipment qualification (EQ) according to IEEE Std 323. There are three qualification methods: type testing, operating experience and analysis. In order to environmentally qualify safety-related equipment using the type-testing method, rather than analysis or operating experience, a representative sample of the equipment, including interfaces, should be subjected to a series of tests. Among these tests, the Design Basis Event (DBE) environment simulation test is the most important. The DBE simulation test is performed in a DBE simulation test chamber according to the postulated DBE conditions, including the specified high-energy line break (HELB), loss of coolant accident (LOCA), main steam line break (MSLB), etc., after thermal and radiation aging. Because most DBE conditions involve 100% humidity, high-temperature steam must be used to trace the temperature and pressure of the DBE condition. During the DBE simulation test, if high-temperature steam under high pressure is injected into the DBE test chamber, the temperature and pressure in the test chamber rapidly rise above the target temperature. Therefore, the temperature and pressure in the test chamber keep fluctuating around the targets during the DBE simulation test. We should ensure the fairness and accuracy of the test results by confirming the performance of the DBE environment simulation test facility. In this paper, in order to verify the reliability of the DBE environment simulation test facility, a statistical method is used
DEFF Research Database (Denmark)
Cherchi, Elisabetta; Guevara, Cristian
2012-01-01
The random coefficients logit model allows a more realistic representation of agents' behavior. However, the estimation of that model may involve simulation, which may become impractical with many random coefficients because of the curse of dimensionality. In this paper, the traditional maximum simulated likelihood (MSL) method is compared with the alternative expectation-maximization (EM) method, which does not require simulation. Previous literature had shown that for cross-sectional data, MSL outperforms the EM method in the ability to recover the true parameters and estimation time...... with cross-sectional or with panel data, and (d) EM systematically attained more efficient estimators than the MSL method. The results imply that if the purpose of the estimation is only to determine the ratios of the model parameters (e.g., the value of time), the EM method should be preferred. For all......
A fast mollified impulse method for biomolecular atomistic simulations
Energy Technology Data Exchange (ETDEWEB)
Fath, L., E-mail: lukas.fath@kit.edu [Institute for App. and Num. Mathematics, Karlsruhe Institute of Technology (Germany); Hochbruck, M., E-mail: marlis.hochbruck@kit.edu [Institute for App. and Num. Mathematics, Karlsruhe Institute of Technology (Germany); Singh, C.V., E-mail: chandraveer.singh@utoronto.ca [Department of Materials Science & Engineering, University of Toronto (Canada)
2017-03-15
Classical integration methods for molecular dynamics are inherently limited by resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious to implement, since they require either analytical Hessians or the solution of nonlinear systems arising from constraints. In this work we follow a different approach based on corotation for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and its ease of implementation in standard software, without Hessians or constraint solving. By simulating multiple realistic examples such as peptide, protein, ice equilibrium and ice–ice friction, the new filter is shown to speed up the computations of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift below 1% over a 50 ps simulation.
Numerical method for IR background and clutter simulation
Quaranta, Carlo; Daniele, Gina; Balzarotti, Giorgio
1997-06-01
The paper describes a fast and accurate algorithm for generating IR background noise and clutter in scene simulations. The process is based on the hypothesis that the background can be modeled as a statistical process in which the signal amplitude obeys a Gaussian distribution and zones of the same scene satisfy a correlation function of exponential form. The algorithm provides an accurate mathematical approximation of the model and also excellent fidelity to reality, as appears from a comparison with images from IR sensors. The proposed method shows advantages with respect to methods based on the filtering of white noise in the time or frequency domain, as it requires a limited number of computations; furthermore, it is more accurate than quasi-random processes. The background generation starts from a reticule of a few points, and by means of growing rules the process is extended to the whole scene at the required dimension and resolution. The statistical properties of the model are properly maintained in the simulation process. The paper gives specific attention to the mathematical aspects of the algorithm and provides a number of simulations and comparisons with real scenes.
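A one-dimensional sketch of such a process (hypothetical parameters, not the authors' 2-D growing-rule algorithm): a Gaussian sequence with exponential autocorrelation exp(-d / corr_len) can be generated as an AR(1) recursion, each new sample being conditioned on its neighbour, which is the simplest instance of extending a field point by point:

```python
import math
import random

def correlated_background(n, sigma=1.0, corr_len=5.0, seed=1):
    """1-D Gaussian background with exponential autocorrelation
    exp(-d / corr_len), generated as an AR(1) process: each new sample is
    rho times its neighbour plus independent Gaussian innovation, scaled so
    the stationary standard deviation stays equal to sigma."""
    rng = random.Random(seed)
    rho = math.exp(-1.0 / corr_len)                 # lag-1 correlation
    x = [rng.gauss(0.0, sigma)]
    for _ in range(n - 1):
        innovation = rng.gauss(0.0, sigma * math.sqrt(1.0 - rho ** 2))
        x.append(rho * x[-1] + innovation)
    return x

field = correlated_background(2000)
```

The same conditioning idea generalizes to two dimensions, where each new reticule point is drawn conditionally on its already-generated neighbours.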
Viscoelastic Earthquake Cycle Simulation with Memory Variable Method
Hirahara, K.; Ohtani, M.
2017-12-01
There have so far been no EQ (earthquake) cycle simulations, based on RSF (rate and state friction) laws, in viscoelastic media, except for Kato (2002), who simulated cycles on a 2-D vertical strike-slip fault and showed nearly the same cycles as those in elastic cases. The viscoelasticity could, however, have a greater effect on large dip-slip EQ cycles. In a boundary element approach, stress is calculated using a hereditary integral of the stress relaxation function and slip deficit rate, where we need the past slip rates, leading to huge computational costs. This is one reason why almost no simulations have been performed in viscoelastic media. We have investigated the memory variable method utilized in the numerical computation of wave propagation in dissipative media (e.g., Moczo and Kristek, 2005). In this method, by introducing memory variables satisfying 1st-order differential equations, we need no hereditary integrals in the stress calculation, and the computational costs are of the same order as those in elastic cases. Further, Hirahara et al. (2012) developed the iterative memory variable method, referring to Taylor et al. (1970), in EQ cycle simulations in linear viscoelastic media. In this presentation, first, we introduce our method in EQ cycle simulations and show the effect of linear viscoelasticity on stick-slip cycles in a 1-DOF block-SLS (standard linear solid) model, where the elastic spring of the traditional block-spring model is replaced by an SLS element and we pull, at a constant rate, the block obeying the RSF law. In this model, the memory variable stands for the displacement of the dash-pot in the SLS element. The use of smaller viscosity reduces the recurrence time to a minimum value. The smaller viscosity means the smaller relaxation time, which makes the stress recovery quicker, leading to the smaller recurrence time. Second, we show EQ cycles on a 2-D dip-slip fault with a dip angle of 20 degrees in an elastic layer with a thickness of 40 km overriding a Maxwell viscoelastic half
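The memory variable idea for an SLS element can be sketched in a few lines; this is a hedged illustration with arbitrary moduli and relaxation time, not the presenters' code. The stress follows from the current strain and a single memory variable m satisfying a first-order ODE, so no hereditary (convolution) integral over past history is needed:

```python
def sls_stress(strain, dt, e_u=2.0, e_r=1.0, tau=0.5):
    """Stress of a standard linear solid via one memory variable m:
        sigma = e_u * eps - m,   dm/dt = ((e_u - e_r) * eps - m) / tau
    e_u: unrelaxed modulus, e_r: relaxed modulus, tau: relaxation time
    (all values illustrative). Only the current m is stored, so the cost
    per step is the same order as in the purely elastic case."""
    m, out = 0.0, []
    for eps in strain:
        m += dt * ((e_u - e_r) * eps - m) / tau   # forward-Euler ODE update
        out.append(e_u * eps - m)
    return out

# Step strain: stress relaxes from the unrelaxed (e_u) toward the relaxed
# (e_r) modulus, the defining behaviour of an SLS element
stress = sls_stress([1.0] * 1000, dt=0.01)
```

A shorter tau makes the relaxation (stress recovery) faster, mirroring the abstract's point that smaller viscosity shortens the recurrence time.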
Simulation for light extraction in light emitting diode using finite domain time difference method
International Nuclear Information System (INIS)
Hong, Jun Hee; Park, Si Hyun
2008-01-01
InGaN-based LEDs are indispensable in traffic lights, full-color displays, back lights in liquid crystal displays, and general lighting. The demand for high-efficiency LEDs is on the increase. Recently we reported an improvement of the light extraction efficiency of InGaN-based LEDs. In this paper we present a suitable three-dimensional (3D) FDTD simulation method for LED simulation, and we apply our FDTD simulation to our PNS LED structures, comparing the simulation results with the experimental results. For a real FDTD simulation, we first must consider the spatial and temporal grid sizes. In order to obtain an accurate result, the spatial grid size must be small enough that the features of the field can be resolved. We computed the field power at each time step on a surface 0.3 mm away from the interface between GaN and air and integrated it over the surface. The calculations were conducted for the PNS LEDs employing different heights of SiO_2 columns, that is, h = 160 nm, h = 350 nm, h = 550 nm, h = 750 nm, and h = 950 nm. Simulation results for the different heights are shown in Fig. 1(a, b). All simulation curves follow the rough trend of increasing with column height, reaching a maximum at about 600 nm, and then decreasing with height. This is consistent with the trend from our experiments. Our FDTD simulation offers a possibility for the design of LED structures with high extraction efficiency.
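For readers unfamiliar with the method, a minimal one-dimensional FDTD loop (a generic textbook sketch, not the paper's 3D LED model) illustrates the leapfrog E/H update and the Courant-limited time step that the grid-size discussion above refers to:

```python
import math

def fdtd_1d(nsteps=300, nx=200, src=50):
    """Minimal 1-D FDTD (Yee) loop in free space, normalized units, with a
    soft Gaussian source. The 0.5 factor is the Courant number c*dt/dx,
    chosen below the 1-D stability limit of 1."""
    ez = [0.0] * nx          # electric field at integer grid points
    hy = [0.0] * nx          # magnetic field at half-integer grid points
    for n in range(nsteps):
        for i in range(nx - 1):           # update H from the curl of E
            hy[i] += 0.5 * (ez[i + 1] - ez[i])
        for i in range(1, nx):            # update E from the curl of H
            ez[i] += 0.5 * (hy[i] - hy[i - 1])
        ez[src] += math.exp(-((n - 30) ** 2) / 100.0)   # soft Gaussian source
    return ez

ez = fdtd_1d()
```

A 3D LED simulation adds the remaining field components, material models, and the power integration over an extraction surface, but the bookkeeping per step is the same.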
Limitations in simulator time-based human reliability analysis methods
International Nuclear Information System (INIS)
Wreathall, J.
1989-01-01
Developments in human reliability analysis (HRA) methods have evolved slowly. Current methods are little changed from those of almost a decade ago, particularly in the use of time-reliability relationships. While these methods were suitable as an interim step, the time (and the need) has come to specify the next evolution of HRA methods. As with any performance-oriented data source, power plant simulator data have no direct connection to HRA models. Errors reported in data are normal deficiencies observed in human performance; failures are events modeled in probabilistic risk assessments (PRAs). Not all errors cause failures; not all failures are caused by errors. Second, the times at which actions are taken provide no measure of the likelihood of failure to act correctly within an accident scenario. Inferences can be made about human reliability, but they must be made with great care. Specific limitations are discussed. Simulator performance data are useful in providing qualitative evidence of the variety of error types and their potential influences on operating systems. More work is required to combine recent developments in the psychology of error with the qualitative data collected at simulators. Until data become openly available, however, such an advance will not be practical
Convergence results for a class of abstract continuous descent methods
Directory of Open Access Journals (Sweden)
Sergiu Aizicovici
2004-03-01
Full Text Available We study continuous descent methods for the minimization of Lipschitzian functions defined on a general Banach space. We establish convergence theorems for those methods which are generated by approximate solutions to evolution equations governed by regular vector fields. Since the complement of the set of regular vector fields is $\sigma$-porous, we conclude that our results apply to most vector fields in the sense of Baire's categories.
Hardware-in-the-loop grid simulator system and method
Fox, John Curtiss; Collins, Edward Randolph; Rigas, Nikolaos
2017-05-16
A hardware-in-the-loop (HIL) electrical grid simulation system and method that combines a reactive divider with a variable frequency converter to better mimic and control expected and unexpected parameters in an electrical grid. The invention provides grid simulation in a manner to allow improved testing of variable power generators, such as wind turbines, and their operation once interconnected with an electrical grid in multiple countries. The system further comprises an improved variable fault reactance (reactive divider) capable of providing a variable fault reactance power output to control a voltage profile, therein creating an arbitrary recovery voltage. The system further comprises an improved isolation transformer designed to isolate zero-sequence current from either a primary or secondary winding in a transformer or pass the zero-sequence current from a primary to a secondary winding.
Visual Display of Scientific Studies, Methods, and Results
Saltus, R. W.; Fedi, M.
2015-12-01
The need for efficient and effective communication of scientific ideas becomes more urgent each year. A growing number of societal and economic issues are tied to matters of science - e.g., climate change, natural resource availability, and public health. Societal and political debate should be grounded in a general understanding of scientific work in relevant fields. It is difficult for many participants in these debates to access science directly because the formal method for scientific documentation and dissemination is the journal paper, generally written for a highly technical and specialized audience. Journal papers are very effective and important for the documentation of scientific results and are essential to the requirement of science to produce citable and repeatable results. However, journal papers are not effective at providing a quick and intuitive summary useful for public debate. Just as quantitative data are generally best viewed in graphic form, we propose that scientific studies can also benefit from visual summary and display. We explore the use of existing methods for diagramming logical connections and dependencies, such as Venn diagrams, mind maps, flow charts, etc., for rapidly and intuitively communicating the methods and results of scientific studies. We also discuss a method, specifically tailored to summarizing scientific papers, that we introduced last year at AGU. Our method diagrams the relative importance of and connections between data, methods/models, results/ideas, and implications/importance using a single-page format with connected elements in these four categories. Within each category (e.g., data) the spatial location of individual elements (e.g., seismic, topographic, gravity) indicates relative novelty (e.g., are these new data?) and importance (e.g., how critical are these data to the results of the paper?). The goal is to find ways to rapidly and intuitively share both the results and the process of science, both for communication
Simulation methods for multiperiodic and aperiodic nanostructured dielectric waveguides
DEFF Research Database (Denmark)
Paulsen, Moritz; Neustock, Lars Thorben; Jahns, Sabrina
2017-01-01
on Rudin–Shapiro, Fibonacci, and Thue–Morse binary sequences. The near-field and far-field properties are computed employing the finite-element method (FEM), the finite-difference time-domain (FDTD) method as well as a rigorous coupled wave algorithm (RCWA). The results show that all three methods...
Energy Technology Data Exchange (ETDEWEB)
Schultz, Marcelo [Inspection Department, Rio de Janeiro Refinery - REDUC, Petrobras, Rio de Janeiro (Brazil); Brasil, Simone L.D.C. [Chemistry School, Federal University of Rio de Janeiro, UFRJ, Rio de Janeiro (Brazil); Baptista, Walmar [Corrosion Department, Research Centre - CENPES, Petrobras (Brazil); Miranda, Luiz de [Materials and Metallurgical Engineering Program, COPPE, UFRJ, Rio de Janeiro (Brazil); Brito, Rosane F. [Corrosion Department, Research Centre, CENPES, Petrobras, Rio de Janeiro (Brazil)
2004-07-01
The deterioration history of above-ground storage tanks (AST) at Petrobras' refineries shows that the greatest incidence of corrosion at AST bottoms is on the external side. This limits the availability of tanks for storing crude oil and other final products. At this refinery, all ASTs are built over a concrete base with many piles to support the structure and distribute the load homogeneously. Because of this, it is very difficult to apply cathodic protection as an anti-corrosive method to each of these tanks. This work presents an alternative cathodic protection system that protects the external side of the tank bottom using a new metallic bottom placed at a given distance from the original one. The space between the two bottoms was filled with one of two kinds of soil, sand or clay, both more conductive than the concrete. Using a prototype tank, the potential distribution over the new tank bottom was studied for different system parameters, such as soil resistivity and the number and position of anodes located on the old bottom. These experimental results were compared with numerical simulations carried out using software based on the Boundary Element Method. The computer simulation validates this protection method and confirms simulation to be a very useful tool for defining the optimized cathodic protection system configuration. (authors)
Directory of Open Access Journals (Sweden)
Borgert Jörn
2011-06-01
Full Text Available Abstract Background Magnetic Particle Imaging is a novel method for medical imaging. It can be used to measure the local concentration of a tracer material based on iron oxide nanoparticles. While the resulting images show the distribution of the tracer material in phantoms or anatomic structures of subjects under examination, no information about the tissue itself is acquired. To expand Magnetic Particle Imaging into the detection of soft tissue properties, a new method is proposed which detects acoustic emissions caused by magnetization changes in superparamagnetic iron oxide. Methods Starting from an introduction to the theory of acoustically detected Magnetic Particle Imaging, a comparison to magnetically detected Magnetic Particle Imaging is presented. Furthermore, an experimental setup for the detection of acoustic emissions is described, which consists of the necessary field-generating components, i.e. coils and permanent magnets, as well as a calibrated microphone to perform the detection. Results The estimated detection limit of acoustic Magnetic Particle Imaging is comparable to the detection limit of magnetic resonance imaging for iron oxide nanoparticles, whereas both are inferior to the theoretical detection limit for magnetically detected Magnetic Particle Imaging. Sufficient data was acquired to perform a comparison to the simulated data. The experimental results are in agreement with the simulations, and the remaining differences can be well explained. Conclusions It was possible to demonstrate the detection of acoustic emissions of magnetic tracer materials in Magnetic Particle Imaging. The processing of acoustic emissions in addition to the tracer distribution acquired by magnetic detection might allow for the extraction of mechanical tissue parameters. Parameters such as the velocity of sound and the attenuation caused by the tissue might also be used to support and improve ultrasound imaging. However, the method
Life cycle analysis of electricity systems: Methods and results
International Nuclear Information System (INIS)
Friedrich, R.; Marheineke, T.
1996-01-01
The two methods for full energy chain analysis, process analysis and input/output analysis, are discussed. A combination of these two methods provides the most accurate results. Such a hybrid analysis of the full energy chains of six different power plants is presented and discussed. The results of such analyses depend on the time, site and technique of each process step, and therefore have no general validity. For renewable energy systems the emissions from the generation of a back-up system should be added. (author). 7 figs, 1 fig
Solar panel thermal cycling testing by solar simulation and infrared radiation methods
Nuss, H. E.
1980-01-01
For the solar panels of the European Space Agency (ESA) satellites OTS/MAROTS and ECS/MARECS the thermal cycling tests were performed by using solar simulation methods. The performance data of the two different solar simulators used and the thermal test results are described. The solar simulation thermal cycling tests for the ECS/MARECS solar panels were carried out with the aid of a rotatable multipanel test rig, by which simultaneous testing of three solar panels was possible. As an alternative thermal test method, the capability of an infrared radiation method was studied, and infrared simulation tests for the ultralight panel and the INTELSAT 5 solar panels were performed. The setup and characteristics of the infrared radiation unit, using a quartz lamp array of approx. 15 sq and an LN2-cooled shutter, and the thermal test results are presented. The irradiation uniformity, the solar panel temperature distribution, and the temperature change rates for the two test methods are compared. Results indicate that infrared simulation is an effective solar panel thermal testing method.
Determinant method and quantum simulations of many-body effects in a single impurity Anderson model
International Nuclear Information System (INIS)
Gubernatis, J.E.; Olson, T.; Scalapino, D.J.; Sugar, R.L.
1985-01-01
A short description is presented of a quantum Monte Carlo technique, often referred to as the determinant method, that has proved useful for simulating many-body effects in systems of interacting fermions at finite temperatures. Preliminary results using this technique on a single impurity Anderson model are reported. Examples of such many-body effects as local moment formation, Kondo behavior, and mixed valence phenomena found in the simulations are shown. 10 refs., 3 figs
Reduction Methods for Real-time Simulations in Hybrid Testing
DEFF Research Database (Denmark)
Andersen, Sebastian
2016-01-01
Hybrid testing constitutes a cost-effective experimental full-scale testing method. The method was introduced in the 1960s by Japanese researchers as an alternative to conventional full-scale testing and small-scale material testing, such as shake table tests. The principle of the method is to divide a structure into a physical substructure and a numerical substructure, and couple these in a test. If the test is conducted in real time it is referred to as real-time hybrid testing. The hybrid testing concept has developed significantly since its introduction in the 1960s, both with respect... A test is performed on a glass fibre reinforced polymer composite box girder, which serves as a pilot test for prospective real-time tests on a wind turbine blade. The Taylor basis is implemented in the test and used to perform the numerical simulations. Despite a number of introduced errors in the real...
Numerical simulation of GEW equation using RBF collocation method
Directory of Open Access Journals (Sweden)
Hamid Panahipour
2012-08-01
Full Text Available The generalized equal width (GEW) equation is solved numerically by a meshless method based on global collocation with standard types of radial basis functions (RBFs). Test problems including propagation of single solitons, interaction of two and three solitons, development of Maxwellian initial condition pulses, wave undulation and wave generation are used to indicate the efficiency and accuracy of the method. Comparisons are made between the results of the proposed method and some other published numerical methods.
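The core of such a global RBF collocation scheme is a dense linear system that matches an RBF expansion to values at scattered nodes. The following minimal sketch (not the paper's GEW solver; the multiquadric basis, shape parameter and test function are illustrative choices) shows that mechanic for a one-dimensional problem:

```python
import numpy as np

# Collocation nodes and values of a target function
x = np.linspace(-1.0, 1.0, 25)
f = np.tanh(3 * x)

# Multiquadric RBF phi(r) = sqrt(r^2 + c^2); c is a shape parameter
c = 0.3
r = np.abs(x[:, None] - x[None, :])
A = np.sqrt(r**2 + c**2)

# Solve the dense collocation system for the RBF weights
w = np.linalg.solve(A, f)

# Evaluate the meshless representation on a finer grid
xe = np.linspace(-1.0, 1.0, 101)
re = np.abs(xe[:, None] - x[None, :])
fe = np.sqrt(re**2 + c**2) @ w

print(np.max(np.abs(fe - np.tanh(3 * xe))))  # small off-node error
```

In a time-dependent solver such as the one described, the same dense system machinery is applied to spatial derivatives of the basis functions at each time step.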
Energy Technology Data Exchange (ETDEWEB)
Ortiz-Ramírez, Pablo, E-mail: rapeitor@ug.uchile.cl; Ruiz, Andrés [Departamento de Física, Facultad de Ciencias, Universidad de Chile (Chile)
2016-07-07
Monte Carlo simulation of gamma spectroscopy systems is common practice these days. The most popular software packages for this purpose are the MCNP and Geant4 codes. The intrinsic spatial efficiency method is a general and absolute method for determining the absolute efficiency of a spectroscopy system for any extended source, but it had only been demonstrated experimentally for cylindrical sources. Given the difficulty of preparing sources of arbitrary shape, the simplest way to proceed is to simulate the spectroscopy system and the source. In this work we present the validation of the intrinsic spatial efficiency method for sources with different geometries and for photons with an energy of 661.65 keV. In the simulation, matrix effects (self-attenuation) are not considered; therefore these results are only preliminary. The MC simulation is carried out using the FLUKA code, and the absolute efficiency of the detector is determined using two methods: the statistical count of the Full Energy Peak (FEP) area (the traditional method) and the intrinsic spatial efficiency method. The obtained results show full agreement between the absolute efficiencies determined by the traditional method and the intrinsic spatial efficiency method. The relative bias is less than 1% in all cases.
Bellos, Vasilis; Tsakiris, George
2016-09-01
The study presents a new hybrid method for the simulation of flood events in small catchments. It combines a physically-based two-dimensional hydrodynamic model and the hydrological unit hydrograph theory. Unit hydrographs are derived using the FLOW-R2D model, which is based on the full form of the two-dimensional Shallow Water Equations, solved by a modified McCormack numerical scheme. The method is tested on a small catchment in a suburb of Athens, Greece, for a storm event which occurred in February 2013. The catchment is divided into three friction zones, and unit hydrographs of 15 and 30 min are produced. The infiltration process is simulated by the empirical Kostiakov equation and the Green-Ampt model. The results from the implementation of the proposed hybrid method are compared with recorded data at the hydrometric station at the outlet of the catchment and with the results derived from the fully hydrodynamic model FLOW-R2D. It is concluded that for the case studied, the proposed hybrid method produces results close to those of the fully hydrodynamic simulation at substantially shorter computational time. This finding, if further verified in a variety of case studies, can be useful in devising effective hybrid tools for two-dimensional flood simulation that lead to accurate and considerably faster results than those achieved by fully hydrodynamic simulations.
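The unit hydrograph half of such a hybrid scheme reduces, in discrete form, to a convolution of the rainfall excess series with the unit hydrograph ordinates. A minimal sketch follows; the ordinates and rainfall depths are invented for illustration and are not taken from the study:

```python
import numpy as np

# Hypothetical 15-min unit hydrograph ordinates (m^3/s per mm of excess rain)
uh = np.array([0.0, 0.8, 2.0, 1.4, 0.6, 0.2, 0.0])

# Effective (post-infiltration) rainfall depths per 15-min interval, in mm
excess = np.array([0.0, 3.0, 5.0, 1.0])

# Linearity assumption of unit hydrograph theory: the outflow hydrograph is
# the convolution of rainfall excess with the unit hydrograph
q = np.convolve(excess, uh)

print(q)  # outflow hydrograph ordinates
```

The linearity assumption is what makes the method fast: the expensive two-dimensional hydrodynamic run is needed only once per zone, to derive `uh`.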
Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P
2011-05-19
There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, current use is limited, and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). The characterisation is presented in matrix form to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce, let alone money and time. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular, it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to be obtained from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on methods comparison and selection.
International Nuclear Information System (INIS)
Fay, P.J.; Ray, J.R.; Wolf, R.J.
1994-01-01
We present a new, nondestructive method for determining chemical potentials in Monte Carlo and molecular dynamics simulations. The method estimates a value of the chemical potential such that fictitious successful creation and destruction trials balance, with the Monte Carlo method used to determine the success or failure of the creation/destruction attempts; we thus call it a detailed balance method. The method allows one to obtain estimates of the chemical potential for a given species in any closed-ensemble simulation; the closed ensemble is paired with a ''natural'' open ensemble for the purpose of obtaining creation and destruction probabilities. We present results for the Lennard-Jones system and for an embedded atom model of liquid palladium, and compare with previous results in the literature for these two systems. We are able to obtain an accurate estimate of the chemical potential for the Lennard-Jones system at higher densities than reported in the literature
Energy Technology Data Exchange (ETDEWEB)
Berthiau, G
1995-10-01
The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints may also be specified. A similar problem consists in fitting component models; there, the optimization variables are the model parameters, and the aim is to minimize a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is the simulated annealing method. This method, which comes from the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been adjusted on analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. For high-dimensional problems, we proposed a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we adapted three other methods from the combinatorial optimization domain: the threshold method, a genetic algorithm and the Tabu search method. The tests have been performed on the same set of test functions, and the results allow a first comparison between these methods applied to continuous optimization variables. (Abstract Truncated)
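The simulated annealing core described above, Metropolis acceptance combined with a cooling schedule over continuous variables, can be sketched as follows. This is a generic toy version on a test function, not the author's SPICE-PAC-coupled implementation; the step size, cooling rate and iteration count are illustrative choices:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000):
    """Minimise f over continuous variables by simulated annealing."""
    random.seed(42)                      # fixed seed for reproducibility
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        # Propose a random move around the current point
        cand = [xi + random.uniform(-step, step) for xi in x]
        fc = f(cand)
        # Metropolis rule: always accept improvements, accept uphill moves
        # with probability exp(-delta / T)
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling                     # geometric cooling schedule
    return best, fbest

# Sphere test function: global minimum 0 at the origin
xmin, fmin = simulated_annealing(lambda v: sum(vi * vi for vi in v), [3.0, -2.0])
print(fmin)
```

As the temperature decays, the acceptance rule degenerates into pure hill-climbing, which is why the stopping criteria and cooling schedule mentioned in the abstract matter so much in practice.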
Absolute efficiency calibration of HPGe detector by simulation method
International Nuclear Information System (INIS)
Narayani, K.; Pant, Amar D.; Verma, Amit K.; Bhosale, N.A.; Anilkumar, S.
2018-01-01
High-resolution gamma-ray spectrometry with HPGe detectors is a powerful radio-analytical technique for estimating the activity of various radionuclides. In the present work, absolute efficiency calibration of the HPGe detector was carried out using a Monte Carlo simulation technique, and the results are compared with those obtained by experiment using the standard radionuclides 152Eu and 133Ba. The coincidence summing correction factors for the measurement of these nuclides were also calculated
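The geometric part of such a Monte Carlo efficiency calculation can be illustrated in a few lines: sample isotropic emission directions and count those that intercept the detector face. This is only the solid-angle factor for an idealised point-source/disc geometry with invented dimensions, not a full photon-transport model of an HPGe detector:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy geometry: point source on the axis of a circular detector face of
# radius R at distance d; every photon that hits the face is counted, so
# the result is the geometric efficiency only (no interactions modelled).
R, d = 3.0, 5.0
n = 1_000_000

# Isotropic emission: cos(theta) uniform over [-1, 1]
cos_t = rng.uniform(-1.0, 1.0, n)
hits = cos_t > d / np.sqrt(d * d + R * R)   # within the subtended cone

mc_eff = hits.mean()
exact = 0.5 * (1.0 - d / np.sqrt(d * d + R * R))  # solid angle / (4*pi)
print(mc_eff, exact)
```

A full simulation multiplies this geometric factor by the intrinsic interaction probability, which is what codes like FLUKA, MCNP and Geant4 compute by transporting each photon through the detector material.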
Quench simulation results for a 12-T twin-aperture dipole magnet
Cheng, Da; Salmi, Tiina; Xu, Qingjin; Peng, Quanling; Wang, Chengtao; Wang, Yingzhe; Kong, Ershuai; Zhang, Kai
2018-06-01
A 12-T twin-aperture subscale dipole magnet is being developed for the SPPC pre-study at the Institute of High Energy Physics (IHEP). The magnet comprises 6 double-pancake coils, including 2 Nb3Sn coils and 4 NbTi coils. As the stored energy of the magnet is 0.452 MJ and the operating margin is only about 20% at 4.2 K, a quick and effective quench protection system is necessary during the test of this high-field magnet. In the design of the quench protection system, attention was paid not only to the hotspot temperature and terminal voltage, but also to the temperature gradient during the quench process, owing to the poor mechanical characteristics of the Nb3Sn cables. Based on adiabatic analysis, numerical simulation and finite element simulation, an optimized protection method combining a dump resistor and quench heaters is adopted. In this paper, the results of the adiabatic analysis and quench simulation, such as current decay, hot-spot temperature and terminal voltage, are presented in detail.
Rapid simulation of spatial epidemics: a spectral method.
Brand, Samuel P C; Tildesley, Michael J; Keeling, Matthew J
2015-04-07
Spatial structure and hence the spatial position of host populations plays a vital role in the spread of infection. In the majority of situations, it is only possible to predict the spatial spread of infection using simulation models, which can be computationally demanding especially for large population sizes. Here we develop an approximation method that vastly reduces this computational burden. We assume that the transmission rates between individuals or sub-populations are determined by a spatial transmission kernel. This kernel is assumed to be isotropic, such that the transmission rate is simply a function of the distance between susceptible and infectious individuals; as such this provides the ideal mechanism for modelling localised transmission in a spatial environment. We show that the spatial force of infection acting on all susceptibles can be represented as a spatial convolution between the transmission kernel and a spatially extended 'image' of the infection state. This representation allows the rapid calculation of stochastic rates of infection using fast-Fourier transform (FFT) routines, which greatly improves the computational efficiency of spatial simulations. We demonstrate the efficiency and accuracy of this fast spectral rate recalculation (FSR) method with two examples: an idealised scenario simulating an SIR-type epidemic outbreak amongst N habitats distributed across a two-dimensional plane; the spread of infection between US cattle farms, illustrating that the FSR method makes continental-scale outbreak forecasting feasible with desktop processing power. The latter model demonstrates which areas of the US are at consistently high risk for cattle-infections, although predictions of epidemic size are highly dependent on assumptions about the tail of the transmission kernel. Copyright © 2015 Elsevier Ltd. All rights reserved.
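The key step of the FSR method described above, evaluating the force of infection as a spatial convolution of the transmission kernel with the infection "image" via FFTs, can be sketched on a toy lattice. The FFT implies periodic boundaries (real applications would zero-pad), and the Gaussian kernel and lattice parameters here are illustrative choices only:

```python
import numpy as np

# Toy 64x64 lattice of habitats: 1 where infected, 0 elsewhere
rng = np.random.default_rng(1)
n = 64
infected = (rng.random((n, n)) < 0.02).astype(float)

# Isotropic transmission kernel: rate decays with distance (Gaussian here;
# the abstract notes that the kernel tail strongly affects predictions)
y, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
dist = np.sqrt((x - n // 2) ** 2 + (y - n // 2) ** 2)
kernel = np.exp(-(dist**2) / (2 * 3.0**2))
kernel = np.fft.ifftshift(kernel)  # move the kernel centre to index (0, 0)

# Spatial force of infection on every site: kernel convolved with the
# infection image, in O(n^2 log n) via FFT instead of O(n^4) pairwise sums
foi = np.real(np.fft.ifft2(np.fft.fft2(infected) * np.fft.fft2(kernel)))

print(foi.shape)
```

In a stochastic simulation, `foi` would then set the per-susceptible infection rates for the next event or time step, and only the FFTs need recomputing as the infection image changes.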
Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results
Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)
2013-01-01
Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.
Petascale molecular dynamics simulation using the fast multipole method on K computer
Ohno, Yousuke; Yokota, Rio; Koyama, Hiroshi; Morimoto, Gentaro; Hasegawa, Aki; Masumoto, Gen; Okimoto, Noriaki; Hirano, Yoshinori; Ibeid, Huda; Narumi, Tetsu; Taiji, Makoto
2014-01-01
In this paper, we report all-atom simulations of molecular crowding - a result from the full node simulation on the "K computer", which is a 10-PFLOPS supercomputer in Japan. The capability of this machine enables us to perform simulation of crowded cellular environments, which are more realistic compared to conventional MD simulations where proteins are simulated in isolation. Living cells are "crowded" because macromolecules comprise ∼30% of their molecular weight. Recently, the effects of crowded cellular environments on protein stability have been revealed through in-cell NMR spectroscopy. To measure the performance of the "K computer", we performed all-atom classical molecular dynamics simulations of two systems: target proteins in a solvent, and target proteins in an environment of molecular crowders that mimic the conditions of a living cell. Using the full system, we achieved 4.4 PFLOPS during a 520 million-atom simulation with cutoff of 28 Å. Furthermore, we discuss the performance and scaling of fast multipole methods for molecular dynamics simulations on the "K computer", as well as comparisons with Ewald summation methods. © 2014 Elsevier B.V. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Le Ber, L.; Calmon, P. [CEA/Saclay, STA, 91 - Gif-sur-Yvette (France); Abittan, E. [Electricite de France (EDF-GDL), 93 - Saint-Denis (France)
2001-07-01
The CEA and EDF have started a study of the value of simulation in the qualification of ultrasonic inspection of nuclear components. In this framework, the simulation tools of the CEA, such as CIVA, have been tested against real inspections. The method and the results obtained on some examples are presented. (A.L.B.)
Discrete vortex method simulations of aerodynamic admittance in bridge aerodynamics
DEFF Research Database (Denmark)
Rasmussen, Johannes Tophøj; Hejlesen, Mads Mølholm; Larsen, Allan
The meshless and remeshed Discrete Vortex Method (DVM) has been widely used in academia and by the industry to model two-dimensional flow around bluff bodies. The implementation "DVMFLOW" [1] is used by the bridge design company COWI to determine and visualise the flow field around bridge sections, and to determine aerodynamic forces and the corresponding flutter limit. A simulation of the three-dimensional bridge response to turbulent wind is carried out by quasi-steady theory by modelling the bridge girder as a line-like structure [2], applying the aerodynamic load coefficients found from the current version...
Numerical simulation of compressible two-phase flow using a diffuse interface method
International Nuclear Information System (INIS)
Ansari, M.R.; Daramizadeh, A.
2013-01-01
Highlights: ► Compressible two-phase gas–gas and gas–liquid flows simulation are conducted. ► Interface conditions contain shock wave and cavitations. ► A high-resolution diffuse interface method is investigated. ► The numerical results exhibit very good agreement with experimental results. -- Abstract: In this article, a high-resolution diffuse interface method is investigated for simulation of compressible two-phase gas–gas and gas–liquid flows, both in the presence of shock wave and in flows with strong rarefaction waves similar to cavitations. A Godunov method and HLLC Riemann solver is used for discretization of the Kapila five-equation model and a modified Schmidt equation of state (EOS) is used to simulate the cavitation regions. This method is applied successfully to some one- and two-dimensional compressible two-phase flows with interface conditions that contain shock wave and cavitations. The numerical results obtained in this attempt exhibit very good agreement with experimental results, as well as previous numerical results presented by other researchers based on other numerical methods. In particular, the algorithm can capture the complex flow features of transient shocks, such as the material discontinuities and interfacial instabilities, without any oscillation and additional diffusion. Numerical examples show that the results of the method presented here compare well with other sophisticated modeling methods like adaptive mesh refinement (AMR) and local mesh refinement (LMR) for one- and two-dimensional problems
Directory of Open Access Journals (Sweden)
Cristina Portalés
2017-06-01
Full Text Available The geometric calibration of projectors is a demanding task, particularly for the industry of virtual reality simulators. Different methods have been developed during the last decades to retrieve the intrinsic and extrinsic parameters of projectors, most of them being based on planar homographies and some requiring an extended calibration process. The aim of our research work is to design a fast and user-friendly method to provide multi-projector calibration on analytically defined screens, where a sample is shown for a virtual reality Formula 1 simulator that has a cylindrical screen. The proposed method results from the combination of surveying, photogrammetry and image processing approaches, and has been designed by considering the spatial restrictions of virtual reality simulators. The method has been validated from a mathematical point of view, and the complete system—which is currently installed in a shopping mall in Spain—has been tested by different users.
A New Method to Simulate Free Surface Flows for Viscoelastic Fluid
Directory of Open Access Journals (Sweden)
Yu Cao
2015-01-01
Full Text Available Free surface flows arise in a variety of engineering applications. To predict the dynamic characteristics of such problems, specific numerical methods are required to accurately capture the shape of the free surface. This paper proposes a new method which combines the Arbitrary Lagrangian-Eulerian (ALE) technique with the Finite Volume Method (FVM) to simulate time-dependent viscoelastic free surface flows. Based on the open source CFD toolbox OpenFOAM, we designed an ALE-FVM free surface simulation platform. The die-swell flow was then investigated with the proposed platform to provide further analysis of the free surface phenomenon. The results validate the correctness and effectiveness of the proposed method for free surface simulation in both Newtonian and viscoelastic fluids.
International Nuclear Information System (INIS)
Jamali, J.; Aghajafari, R.; Moini, R.; Sadeghi, H.
2002-01-01
A time-domain approach is presented for calculating electromagnetic fields inside a large Electromagnetic Pulse (EMP) simulator. This type of EMP simulator is used for studying the effect of electromagnetic pulses on electrical apparatus in various structures such as vehicles, aeroplanes, etc. The simulator consists of three planar transmission lines. To solve the problem, we first model the metallic structure of the simulator as a grid of conducting wires. The numerical solution of the governing electric field integral equation is then obtained using the method of moments in the time domain. To demonstrate the accuracy of the model, we consider a typical EMP simulator. The comparison of our results with those obtained experimentally in the literature validates the model introduced in this paper
Finite element method for one-dimensional rill erosion simulation on a curved slope
Directory of Open Access Journals (Sweden)
Lijuan Yan
2015-03-01
Full Text Available Rill erosion models are important for hillslope soil erosion prediction and land use planning, and their development and use have become of increasing concern. The purpose of this research was to develop mathematical models with computer simulation procedures to simulate and predict rill erosion. The finite element method is known as an efficient tool in many applications other than rill erosion. In this study, the hydrodynamic and sediment continuity model equations for a rill erosion system were solved by the Galerkin finite element method and Visual C++ procedures. The simulated results are compared with spatially and temporally measured data for rill erosion processes under different conditions. The results indicate that the one-dimensional linear finite element method produced excellent predictions of rill erosion processes. This study therefore supplies a tool for further development of a dynamic soil erosion prediction model.
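The Galerkin finite element machinery with linear elements can be illustrated on a stand-in one-dimensional model problem. The sketch below solves -u'' = f with homogeneous boundary conditions; it is not the paper's coupled hydrodynamic/sediment system, and the mesh size and manufactured source term are illustrative choices:

```python
import numpy as np

# 1D Galerkin FEM, linear elements, for -u'' = f on (0,1), u(0) = u(1) = 0
ne = 64                                  # number of elements
h = 1.0 / ne
nodes = np.linspace(0.0, 1.0, ne + 1)
f = lambda x: np.pi**2 * np.sin(np.pi * x)   # manufactured source term

K = np.zeros((ne + 1, ne + 1))           # global stiffness matrix
F = np.zeros(ne + 1)                     # global load vector
ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h   # linear element stiffness
for e in range(ne):
    idx = [e, e + 1]
    K[np.ix_(idx, idx)] += ke            # assemble element into global system
    xm = 0.5 * (nodes[e] + nodes[e + 1])
    F[idx] += 0.5 * h * f(xm)            # midpoint quadrature for the load

# Homogeneous Dirichlet conditions: solve for interior nodes only
u = np.zeros(ne + 1)
u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])

err = np.max(np.abs(u - np.sin(np.pi * nodes)))
print(err)  # nodal error against the exact solution sin(pi x)
```

The rill erosion model follows the same assemble-and-solve pattern at each time step, with element matrices derived from the hydrodynamic and sediment continuity equations instead of the Laplacian.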
A Fuzzy Logic Based Method for Analysing Test Results
Directory of Open Access Journals (Sweden)
Le Xuan Vinh
2017-11-01
Full Text Available Network operators must perform many tasks to ensure smooth operation of the network, such as planning, monitoring, etc. Among those tasks, regular testing of network performance and network errors and troubleshooting is very important. Meaningful test results will allow the operators to evaluate network performance, identify any shortcomings and better plan for network upgrades. Due to the diverse and mainly unquantifiable nature of network testing results, there is a need to develop a method for systematically and rigorously analysing these results. In this paper, we present STAM (System Test-result Analysis Method), which employs a bottom-up hierarchical processing approach using fuzzy logic. STAM is capable of combining all test results into a quantitative description of the network performance in terms of network stability, the significance of various network errors, and the performance of each function block within the network. The validity of this method has been successfully demonstrated in assisting the testing of a VoIP system at the Research Institute of Posts and Telecoms in Vietnam. The paper is organized as follows. The first section gives an overview of fuzzy logic theory, the concepts of which will be used in the development of STAM. The next section describes STAM. The last section, demonstrating STAM's capability, presents a success story in which STAM is successfully applied.
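The fuzzification step at the bottom of such a hierarchy maps a raw, hard-to-quantify test metric onto graded linguistic labels via membership functions. A minimal sketch follows; the metric, grade names and triangular boundaries are invented for illustration and STAM's actual rule base is richer:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical metric: call-setup success rate of a VoIP test, in percent
rate = 97.2

# Fuzzify the metric into linguistic grades (boundaries are made up here)
grades = {
    "poor": tri(rate, 0, 50, 90),
    "acceptable": tri(rate, 85, 93, 99),
    "good": tri(rate, 95, 100, 105),
}

# Bottom-up aggregation: report the grade with the strongest membership
verdict = max(grades, key=grades.get)
print(verdict, grades[verdict])
```

Higher levels of the hierarchy then combine such graded outputs from many function blocks (typically with fuzzy AND/OR operators, i.e. min/max) into an overall assessment of network stability and performance.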
Results of Aging Tests of Vendor-Produced Blended Feed Simulant
International Nuclear Information System (INIS)
Russell, Renee L.; Buchmiller, William C.; Cantrell, Kirk J.; Peterson, Reid A.; Rinehart, Donald E.
2009-01-01
The Hanford Tank Waste Treatment and Immobilization Plant (WTP) is procuring through Pacific Northwest National Laboratory (PNNL) a minimum of five 3,500-gallon batches of waste simulant for Phase 1 testing in the Pretreatment Engineering Platform (PEP). To ensure acceptable simulant quality, the production method was scaled up from laboratory-prepared simulant through 15-gallon and 250-gallon vendor-prepared batches before embarking on the production of the 3500-gallon simulant batch by the vendor. The 3500-gallon PEP simulant batches were packaged in 250-gallon high molecular weight polyethylene totes at NOAH Technologies. The simulant was stored under controlled environmental conditions in the NOAH Technologies warehouse before blending or shipping. The 15-gallon, 250-gallon, and 3500-gallon batch 0 simulants were shipped in ambient-temperature trucks, with shipment requiring nominally 3 days; the 3500-gallon batch 1 traveled in a 70-75 F temperature-controlled truck. Typically the simulant was transferred into a PEP receiving tank within 24 hours of receipt; the first transfer took longer, with the simulant stored outside in the interim. Physical and chemical characterization of the 250-gallon batch was necessary to determine the effect of aging on the simulant in transit from the vendor and in storage before its use in the PEP. Therefore, aging tests were conducted on the 250-gallon batch of the vendor-produced PEP blended feed simulant to identify and determine any changes to the physical characteristics of the simulant in storage. The supernate was also chemically characterized. Four aging scenarios for the vendor-produced blended simulant were studied: (1) stored outside in a 250-gallon tote, (2) stored inside in a gallon plastic bottle, (3) stored inside in a well-mixed 5-L tank, and (4) subjected to extended temperature cycling under summer temperature conditions in a gallon plastic bottle. The following
Time-domain hybrid method for simulating large amplitude motions of ships advancing in waves
Directory of Open Access Journals (Sweden)
Shukui Liu
2011-03-01
Full Text Available Typical results obtained by a newly developed, nonlinear time-domain hybrid method for simulating large amplitude motions of ships advancing with constant forward speed in waves are presented. The method is hybrid in that it combines a time-domain transient Green function method with a Rankine source method. The present approach employs a simple double-integration algorithm with respect to time to satisfy the free-surface boundary condition. During the simulation, the diffraction and radiation forces are computed by pressure integration over the mean wetted surface, whereas the incident wave and hydrostatic restoring forces/moments are calculated on the instantaneously wetted surface of the hull. Typical numerical results of applying the method to the seakeeping performance of a standard containership, namely the ITTC S175, are presented herein. Comparisons have been made between the results of the present method, the frequency-domain 3D panel method (NEWDRIFT) of NTUA-SDL, and available experimental data, and good agreement has been observed for all studied cases.
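A "simple double integration with respect to time" can be illustrated on a generic second-order ODE. This is a toy oscillator marched with semi-implicit Euler, not the free-surface equations of the paper; the time step and duration are arbitrary assumptions.

```python
def double_integrate(accel, x0, v0, dt, steps):
    """March x'' = accel(x) forward by integrating twice in time:
    acceleration -> velocity, then velocity -> position (semi-implicit Euler)."""
    x, v = x0, v0
    xs = [x]
    for _ in range(steps):
        v += accel(x) * dt   # first integration: acceleration to velocity
        x += v * dt          # second integration: velocity to position
        xs.append(x)
    return xs

# Harmonic test problem: x'' = -x with x(0) = 1, v(0) = 0.
path = double_integrate(lambda x: -x, 1.0, 0.0, 0.01, 628)
```

Semi-implicit (symplectic) Euler is chosen here because, unlike the naive explicit variant, it keeps the oscillation amplitude bounded over long simulations.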
Energy Technology Data Exchange (ETDEWEB)
Morillon, B.
1996-12-31
With most traditional and contemporary techniques, it is still impossible to solve the transport equation for a fully detailed geometry while precisely modelling the interactions between particles and matter. Only the Monte Carlo method offers such a possibility. However, with significant attenuation, analogue simulation remains inefficient: it becomes necessary to use biasing techniques, for which the solution of the adjoint transport equation is essential. The Monte Carlo code Tripoli has used such techniques successfully for a long time with different approximate adjoint solutions; these methods require the user to find suitable parameters, and if the parameters are not optimal or nearly optimal, the biased simulations may yield small figures of merit. This paper presents a description of the most important biasing techniques of the Monte Carlo code Tripoli; we then show how to calculate the importance function for general geometry in multigroup cases. We present a completely automatic biasing technique in which the parameters of the biased simulation are deduced from the solution of the adjoint transport equation calculated by collision probabilities. In this study we estimate the importance function through the collision probabilities method and evaluate its capabilities by means of Monte Carlo calculations. We compare different biased simulations with the importance function calculated by collision probabilities for one-group and multigroup problems. We have run simulations with the new biasing method for one-group transport problems with isotropic scattering and for multigroup problems with anisotropic scattering. The results show that for one-group, homogeneous-geometry transport problems the method is nearly optimal without splitting and Russian roulette, but for multigroup, heterogeneous X-Y geometry problems the figures of merit are higher if splitting and Russian roulette are added.
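The splitting and Russian roulette techniques mentioned above can be sketched as a weight-window routine applied to particle statistical weights. This is a generic textbook illustration, not Tripoli's implementation; the window bounds are arbitrary assumptions.

```python
import random

def roulette_or_split(weight, w_low=0.2, w_high=2.0, rng=random):
    """Apply Russian roulette below w_low and splitting above w_high.
    Returns the list of surviving particle weights (empty if killed).
    The expected total weight is preserved, so the estimator stays unbiased."""
    if weight < w_low:
        # Russian roulette: survive with probability weight / w_low,
        # restoring the survivor's weight to w_low.
        return [w_low] if rng.random() < weight / w_low else []
    if weight > w_high:
        # Splitting: replace one heavy particle by n lighter copies.
        n = int(weight / w_high) + 1
        return [weight / n] * n
    return [weight]
```

In a biased simulation the window (`w_low`, `w_high`) would be derived from the importance function; here it is fixed by hand purely for illustration.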
Energy Technology Data Exchange (ETDEWEB)
Chen, Zaigao; Wang, Jianguo [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an, Shaanxi 710049 (China); Northwest Institute of Nuclear Technology, P.O. Box 69-12, Xi' an, Shaanxi 710024 (China); Wang, Yue; Qiao, Hailiang; Zhang, Dianhui [Northwest Institute of Nuclear Technology, P.O. Box 69-12, Xi' an, Shaanxi 710024 (China); Guo, Weijie [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an, Shaanxi 710049 (China)
2013-11-15
An optimal design method for high-power microwave sources using particle simulation and parallel genetic algorithms is presented in this paper. The output power of the high-power microwave device, simulated by the fully electromagnetic particle simulation code UNIPIC, is taken as the fitness function, and float-encoded genetic algorithms are used to optimize the high-power microwave devices. Using this method, we encode the heights of the non-uniform slow wave structure in a relativistic backward wave oscillator (RBWO) and optimize the parameters on massively parallel processors. Simulation results demonstrate that the optimal parameters of the non-uniform slow wave structure in the RBWO can be obtained, and the output microwave power is enhanced by 52.6% after the device is optimized.
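A float-encoded genetic algorithm of the kind described can be sketched as follows. The objective below is a cheap hypothetical stand-in for the UNIPIC-computed output power, and the operator choices (elitist selection, blend crossover, Gaussian mutation) are illustrative assumptions rather than the authors' exact operators.

```python
import random

def genetic_optimize(fitness, bounds, pop_size=30, generations=60,
                     mutation=0.1, rng=random.Random(42)):
    """Minimal float-encoded genetic algorithm maximizing `fitness`.
    Each individual is a list of real-valued genes within `bounds`."""
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]            # blend crossover
            child = [min(max(g + rng.gauss(0, mutation * (hi - lo)), lo), hi)
                     for g, (lo, hi) in zip(child, bounds)]        # Gaussian mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Hypothetical stand-in objective with a known optimum at (1, -2):
best = genetic_optimize(lambda p: -(p[0] - 1.0) ** 2 - (p[1] + 2.0) ** 2,
                        bounds=[(-5, 5), (-5, 5)])
```

In the paper's setting each fitness evaluation is a full particle-in-cell run, which is why the evaluations are distributed over massively parallel processors.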
Innovative teaching methods in the professional training of nurses – simulation education
Directory of Open Access Journals (Sweden)
Michaela Miertová
2013-12-01
Full Text Available Introduction: The article aims to highlight the use of innovative teaching methods within simulation education in the professional training of nurses abroad, and to present our experience from an intensive study programme at the School of Nursing, Midwifery and Social Work, University of Salford (United Kingdom) within the Intensive EU Lifelong Learning Programme (LLP) Erasmus EU RADAR 2013. Methods: Implementation of simulation methods such as role-play, case studies, simulation scenarios, practical workshops and clinical skills workstations within the structured ABCDE approach (AIM© Assessment and Management Tool) was aimed at promoting the development of the theoretical knowledge and skills needed to recognize and manage acutely deteriorating patients. The structured SBAR approach (Acute SBAR Communication Tool) was used for training communication and information sharing among the members of the multidisciplinary health care team. The OSCE approach (Objective Structured Clinical Examination) was used for students' individual formative assessment. Results: Simulation education has proved to have many benefits in the professional training of nurses. It is held in safe, controlled and realistic conditions (in simulation laboratories reflecting real hospital and community care environments) with no risk of harming real patients, and is accompanied by debriefing, discussion and analysis of all the activities students have performed within the simulated scenario. Such a learning environment is supportive, challenging, constructive, motivating, engaging, flexible, inspiring and respectful. Simulation education is thus an effective, interactive, interesting, efficient and modern way of nursing education. Conclusion: Critical thinking and the clinical competences of nurses are crucial for early recognition of and appropriate response to acute deterioration of a patient's condition. These competences are important to ensure the provision of high-quality nursing care. Methods of
Evaluating rehabilitation methods - some practical results from Rum Jungle
International Nuclear Information System (INIS)
Ryan, P.
1987-01-01
Research and analysis of the following aspects of rehabilitation have been conducted at the Rum Jungle mine site over the past three years: drainage structure stability; rock batter stability; soil fauna; tree growth in compacted soils; and rehabilitation costs. The results show that, for future rehabilitation projects adopting refined methods, attention to final construction detail and biospheric influences is most important. The mine site offers a unique opportunity to evaluate the success of a variety of rehabilitation methods, to the benefit of the industry in Australia and overseas. It is intended that practical, economic research will continue for some considerable time.
Innovative Calibration Method for System Level Simulation Models of Internal Combustion Engines
Directory of Open Access Journals (Sweden)
Ivo Prah
2016-09-01
Full Text Available The paper outlines a procedure for the computer-controlled calibration of a combined zero-dimensional (0D) and one-dimensional (1D) thermodynamic simulation model of a turbocharged internal combustion engine (ICE). The main purpose of the calibration is to determine the input parameters of the simulation model in such a way as to achieve the smallest difference between the results of the measurements and the results of the numerical simulations with minimum consumption of computing time. The innovative calibration methodology is based on a novel interaction between optimization methods and physically based methods for the selected ICE sub-systems. The physically based methods are used to steer the division of the integral ICE model into several sub-models and to determine the parameters of selected components from their governing equations. This innovative multistage interaction between optimization methods and physically based methods, unlike well-established methods that rely only on optimization techniques, allows a large number of input parameters to be calibrated successfully with low time consumption. The proposed method is therefore suitable for efficient calibration of simulation models of advanced ICEs.
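The core idea of minimizing the measurement-simulation discrepancy can be sketched with a generic derivative-free loop. This is not the paper's multistage methodology; the "engine sub-model" below is a hypothetical linear stand-in, and coordinate descent with step halving is one simple optimizer choice among many.

```python
def calibrate(model, measured, params, step=0.5, iters=200):
    """Coordinate-descent calibration sketch: tune `params` so that
    model(params) matches `measured` in the least-squares sense."""
    def sse(p):  # sum of squared errors between measurement and simulation
        return sum((m - s) ** 2 for m, s in zip(measured, model(p)))
    best = list(params)
    for _ in range(iters):
        improved = False
        for i in range(len(best)):
            for delta in (step, -step):
                trial = list(best)
                trial[i] += delta
                if sse(trial) < sse(best):
                    best, improved = trial, True
        if not improved:
            step *= 0.5  # refine the search once no move helps
    return best

# Hypothetical 2-parameter sub-model y = a * x + b evaluated at three loads,
# with "measurements" generated from a = 2, b = 1.
measured = [3.0, 5.0, 7.0]
model = lambda p: [p[0] * x + p[1] for x in (1, 2, 3)]
fitted = calibrate(model, measured, [0.0, 0.0])
```

In the paper, the physically based methods would supply good starting values and fix some parameters analytically, shrinking the space the optimizer has to search.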
Evaluation of an improved method of simulating lung nodules in chest tomosynthesis
International Nuclear Information System (INIS)
Svalkvist, Angelica; Allansdotter Johnsson, Aase; Vikgren, Jenny
2012-01-01
Background: Simulated pathology is a valuable complement to clinical images in studies aiming at evaluating an imaging technique. For a study using simulated pathology to be valid, it is important that the simulated pathology realistically reflects the characteristics of real pathology. Purpose: To perform a thorough evaluation of a nodule simulation method for chest tomosynthesis, comparing the detection rate and appearance of the artificial nodules with those of real nodules in an observer performance experiment. Material and Methods: A cohort of 64 patients, 38 with a total of 129 identified pulmonary nodules and 26 without identified pulmonary nodules, was used in the study. Simulated nodules, matching the real clinically found pulmonary nodules in size, attenuation, and location, were created and randomly inserted into the tomosynthesis section images of the patients. Three thoracic radiologists and one radiology resident reviewed the images in an observer performance study divided into two parts. The first part included nodule detection and the second part included rating of the visual appearance of the nodules. The results were evaluated using a modified receiver operating characteristic (ROC) analysis. Results: The sensitivities for real and simulated nodules were comparable, as the area under the modified ROC curve (AUC) was close to 0.5 for all observers (range, 0.43-0.55). Even though the ratings of visual appearance for real and simulated nodules overlapped considerably, the statistical analysis revealed that the observers were able to separate simulated nodules from real nodules (AUC range, 0.70-0.74). Conclusion: The simulation method can be used to create artificial lung nodules with similar detectability to real nodules in chest tomosynthesis, although experienced thoracic radiologists may be able to distinguish them from real nodules.
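The separability figures quoted above reduce to an area under the ROC curve, which for rating data can be computed as the Mann-Whitney probability that a random item from one class outranks a random item from the other. The ratings below are made-up toy data, not the study's results.

```python
def auc(pos, neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random `pos` rating exceeds a random `neg`
    rating, with ties counted as one half."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy appearance ratings for real vs. simulated nodules (hypothetical):
real = [4, 5, 3, 4, 5]
simulated = [3, 2, 4, 3, 2]
separability = auc(real, simulated)
```

An AUC near 0.5 means the two classes are indistinguishable (as for the detection task in the study), while values around 0.7 indicate partial separability (as for the appearance ratings).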
External individual monitoring: experiments and simulations using Monte Carlo Method
International Nuclear Information System (INIS)
Guimaraes, Carla da Costa
2005-01-01
In this work, we have evaluated the possibility of applying the Monte Carlo simulation technique to photon dosimetry for external individual monitoring. The GEANT4 toolkit was employed to simulate experiments with radiation monitors containing TLD-100 and CaF2:NaCl thermoluminescent detectors. As a first step, X-ray spectra were generated by impinging electrons on a tungsten target. The produced photon beam was then filtered through a beryllium window and additional filters to obtain radiation with the desired qualities. This procedure, used to simulate the radiation fields produced by an X-ray tube, was validated by comparing characteristics such as the half-value layer, which was also experimentally measured, the mean photon energy, and the spectral resolution of the simulated spectra with those of reference spectra established by international standards. In the modelling of the thermoluminescent dosimeter, two improvements were introduced. The first was the inclusion of 6% of air in the composition of the CaF2:NaCl detector, owing to the difference between measured and calculated values of its density. Comparison between simulated and experimental results also showed that the self-attenuation of the emitted light in the readout process of the fluorite dosimeter must be taken into account; in the second improvement, therefore, a light attenuation coefficient for the CaF2:NaCl compound, estimated by simulation to be 2.20(25) mm-1, was introduced. Conversion coefficients Cp from air kerma to personal dose equivalent were calculated using a slab water phantom with polymethyl methacrylate (PMMA) walls, for the reference narrow and wide X-ray spectrum series [ISO 4037-1], and also for the wide spectra implemented and used routinely at the Laboratorio de Dosimetria. Simulations of radiation backscattered by the PMMA-walled slab water phantom and by a slab phantom of ICRU tissue-equivalent material produced very similar results. Therefore, the PMMA slab water phantom, which can be easily constructed with low
Simulation of ecological processes using response functions method
International Nuclear Information System (INIS)
Malkina-Pykh, I.G.; Pykh, Yu. A.
1998-01-01
The article describes further development and applications of the already well-known method of response functions (MRF). The method is used as a basis for developing mathematical models of a wide set of ecological processes. A model of radioactive contamination of ecosystems is chosen as an example. The mathematical model was elaborated to describe 90Sr dynamics in the elementary ecosystems of various geographical zones. The model includes blocks corresponding to the main units of any elementary ecosystem: lower atmosphere, soil, vegetation, and surface water. The parameters were evaluated on a wide set of experimental data. A set of computer simulations was performed with the model to demonstrate its suitability for ecological forecasting.
Sultan, A. Z.; Hamzah, N.; Rusdi, M.
2018-01-01
The concept attainment method based on simulation was implemented to increase students' interest in the subject Engineering Mechanics in the second semester of academic year 2016/2017 in the Manufacturing Engineering Program, Department of Mechanical Engineering, PNUP. The results of implementing this learning method show an increase in the students' interest in the lecture material, which is summarized in the form of interactive simulation CDs and teaching materials in the form of printed and electronic books. With the implementation of this simulation-based concept attainment method, there was a significant increase in student participation in presentations and discussions, as well as in the submission of individual assignments. Average student participation reached 89%, whereas before the application of this learning method it averaged only 76%. Moreover, under the previous learning method, fewer than 5% of students achieved an A grade on the exam and more than 8% received a D grade; after the implementation of the new learning method (the simulation-based concept attainment method), A-grade achievement reached more than 30% and D grades fell below 1%.
Multiple predictor smoothing methods for sensitivity analysis: Example results
International Nuclear Information System (INIS)
Storlie, Curtis B.; Helton, Jon C.
2008-01-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
Quantifying the measurement uncertainty of results from environmental analytical methods.
Moser, J; Wegscheider, W; Sperka-Gottlieb, C
2001-07-01
The Eurachem-CITAC Guide Quantifying Uncertainty in Analytical Measurement was put into practice in a public laboratory devoted to environmental analytical measurements. In doing so due regard was given to the provisions of ISO 17025 and an attempt was made to base the entire estimation of measurement uncertainty on available data from the literature or from previously performed validation studies. Most environmental analytical procedures laid down in national or international standards are the result of cooperative efforts and put into effect as part of a compromise between all parties involved, public and private, that also encompasses environmental standards and statutory limits. Central to many procedures is the focus on the measurement of environmental effects rather than on individual chemical species. In this situation it is particularly important to understand the measurement process well enough to produce a realistic uncertainty statement. Environmental analytical methods will be examined as far as necessary, but reference will also be made to analytical methods in general and to physical measurement methods where appropriate. This paper describes ways and means of quantifying uncertainty for frequently practised methods of environmental analysis. It will be shown that operationally defined measurands are no obstacle to the estimation process as described in the Eurachem/CITAC Guide if it is accepted that the dominating component of uncertainty comes from the actual practice of the method as a reproducibility standard deviation.
Numerical simulation of stratified shear flow using a higher order Taylor series expansion method
Energy Technology Data Exchange (ETDEWEB)
Iwashige, Kengo; Ikeda, Takashi [Hitachi, Ltd. (Japan)
1995-09-01
A higher-order Taylor series expansion method is applied to the two-dimensional numerical simulation of stratified shear flow. In the present study, a central-difference-like scheme is adopted for even expansion orders and an upwind-difference-like scheme for odd orders, with the expansion order variable. To evaluate the effects of expansion order on the numerical results, a stratified shear flow test in a rectangular channel (Reynolds number = 1.7x10^4) is carried out, and the numerical velocity and temperature fields are compared with experimental results measured by laser Doppler velocimetry and thermocouples. The results confirm that the higher, odd-order methods can simulate mean velocity distributions, root-mean-square velocity fluctuations, Reynolds stress, temperature distributions, and root-mean-square temperature fluctuations.
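The even/odd-order distinction mirrors the classic choice between central and upwind spatial differences. A first/second-order analogue (not the paper's higher-order Taylor scheme) can be sketched for 1D linear advection on a periodic grid:

```python
def advect(u, c, dx, dt, steps, scheme="upwind"):
    """Advance u_t + c u_x = 0 on a periodic 1D grid (c > 0 assumed).
    'upwind' uses a first-order (odd, dissipative) difference;
    'central' uses a second-order (even, dispersive) difference."""
    u = list(u)
    n = len(u)
    for _ in range(steps):
        new = [0.0] * n
        for i in range(n):
            if scheme == "upwind":
                dudx = (u[i] - u[i - 1]) / dx          # biased into the wind
            else:
                dudx = (u[(i + 1) % n] - u[i - 1]) / (2 * dx)  # symmetric
            new[i] = u[i] - c * dt * dudx
        u = new
    return u
```

Both variants conserve the discrete integral of u exactly on a periodic grid; the odd-order (upwind) variant additionally keeps the solution bounded, which is the kind of robustness the paper attributes to odd expansion orders.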
Towards numerical simulations of supersonic liquid jets using ghost fluid method
International Nuclear Information System (INIS)
Majidi, Sahand; Afshari, Asghar
2015-01-01
Highlights: • A ghost fluid method based solver is developed for numerical simulation of compressible multiphase flows. • The performance of the numerical tool is validated via several benchmark problems. • The emergence of supersonic liquid jets in a quiescent gaseous environment is simulated using the ghost fluid method for the first time. • Bow-shock formation ahead of the liquid jet is clearly observed in the obtained numerical results. • The radiation of Mach waves from the phase interface witnessed experimentally is evidently captured in our numerical simulations. - Abstract: A computational tool based on the ghost fluid method (GFM) is developed to study supersonic liquid jets involving strong shocks and contact discontinuities with high density ratios. The solver utilizes the constrained reinitialization method and is capable of switching between exact and approximate Riemann solvers to increase robustness. The numerical methodology is validated through several benchmark test problems, including the one-dimensional multiphase shock tube problem, shock-bubble interaction, air cavity collapse in water, and underwater explosion. A comparison between our results and numerical and experimental observations indicates that the developed solver performs well on these problems. The code is then used to simulate the emergence of a supersonic liquid jet into a quiescent gaseous medium, studied here with a ghost fluid method for the very first time. The results of the simulations are in good agreement with the experimental investigations. Some well-known flow characteristics, such as the propagation of pressure waves from the liquid jet interface and the dependence of the Mach cone structure on the inlet Mach number, are also reproduced numerically. The numerical simulations conducted here suggest that the ghost fluid method is an affordable and reliable scheme for studying complicated interfacial evolutions in complex multiphase systems such as supersonic liquid
A simple mass-conserved level set method for simulation of multiphase flows
Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.
2018-04-01
In this paper, a modified level set method is proposed for the simulation of multiphase flows with large density ratios and high Reynolds numbers. The present method simply introduces a source or sink term into the level set equation to compensate for mass loss or offset mass gain. The source or sink term is derived analytically by applying the mass conservation principle to the level set equation together with the continuity equation of the flow field. Since only a source term is introduced, the present method is as simple to apply as the original level set method, but it guarantees overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, demonstrating that the modified level set method is capable of accurately capturing the interface while conserving mass. The proposed method is then further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with a high density ratio, and the Rayleigh-Taylor instability at a high Reynolds number. Numerical results show that mass is well conserved by the present method.
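In one dimension the correction idea can be illustrated by shifting the level set function by a constant chosen so that the phi > 0 phase recovers its initial mass. The paper derives its source term analytically from the continuity equation; this sketch instead recovers the shift numerically by bisection, and the smoothed Heaviside width is an arbitrary assumption.

```python
import math

def heaviside(phi, eps=0.5):
    """Smoothed indicator of the phi > 0 phase (standard level set form)."""
    if phi > eps:
        return 1.0
    if phi < -eps:
        return 0.0
    return 0.5 * (1 + phi / eps + math.sin(math.pi * phi / eps) / math.pi)

def mass(phi, dx):
    """Mass of the phi > 0 phase on a uniform 1D grid."""
    return sum(heaviside(p) for p in phi) * dx

def correct_mass(phi, m0, dx):
    """Add a constant to phi (a 1D analogue of a uniform source term in the
    level set equation) so the phi > 0 phase recovers its target mass m0.
    Bisection works because mass is monotone in the shift."""
    lo, hi = -1.0, 1.0
    while hi - lo > 1e-12:
        mid = 0.5 * (lo + hi)
        if mass([p + mid for p in phi], dx) < m0:
            lo = mid    # not enough mass yet: shift the interface outward
        else:
            hi = mid
    return [p + 0.5 * (lo + hi) for p in phi]

# A tent-shaped level set that has artificially "lost" mass, then the fix:
phi0 = [1.0 - 0.2 * abs(i - 10) for i in range(21)]
m0 = mass(phi0, 1.0)
drifted = [p - 0.1 for p in phi0]   # toy stand-in for numerical mass loss
fixed = correct_mass(drifted, m0, 1.0)
```

The analytically derived source term in the paper plays the same role as the bisection shift here, but without any extra search cost per time step.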
Directory of Open Access Journals (Sweden)
C J Sanjay
2009-01-01
Full Text Available Objective: To evaluate and compare the efficacy of conventional and digital radiographic methods in the detection of simulated external root resorption cavities, and to evaluate whether detectability was influenced by resorption cavity size. Methods: Thirty-two teeth selected from human dentate mandibles were radiographed in orthoradial, mesioradial and distoradial aspects using conventional film (Insight Kodak F-speed; Eastman Kodak, Rochester, NY) and a digital sensor (Trophy RVG advanced imaging system), with 0.7 mm and 1.0 mm deep cavities prepared on their vestibular, mesial and distal surfaces at the cervical, middle and apical thirds. Three dental professionals, an endodontist, a radiologist and a general practitioner, evaluated the images twice with a one-week interval. Results: No statistically significant difference was seen in the first observation between the conventional and digital radiographic methods in the detection of simulated external root resorptions, or between small and medium cavities, but a statistical difference was noted in the second observation (P < 0.001) for both methods. Conclusion: Considering the methodology and the overall results, the conventional radiographic method (F-speed) performed slightly better than the digital radiographic method in the detection of simulated external root resorption, but better consistency was seen with the digital system. The size of the resorption cavity had no influence on the performance of either method, which suggests that an initial external root resorption lesion is not as well appreciated with either method as an advanced lesion.
A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses
Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria
2013-01-01
Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is
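The surrogate idea, learn once from expensive Monte Carlo runs and then predict cheaply, can be sketched in miniature: fit a parametric curve of the open-receptor fraction versus time and reuse it as the predictor. The exponential form and the toy "training data" below are assumptions for illustration, not the paper's five-stage machine-learning pipeline.

```python
import math

def fit_decay(times, fractions):
    """Least-squares fit of f(t) = A * exp(-t / tau) in log space,
    a minimal stand-in for a learned surrogate model."""
    n = len(times)
    ys = [math.log(f) for f in fractions]
    t_mean = sum(times) / n
    y_mean = sum(ys) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(times, ys))
             / sum((t - t_mean) ** 2 for t in times))
    A = math.exp(y_mean - slope * t_mean)
    tau = -1.0 / slope
    return A, tau

# "Training data": open-receptor fractions from hypothetical MC runs
# (noiseless toy curve with A = 0.8, tau = 1.2).
times = [0.5, 1.0, 1.5, 2.0, 2.5]
fractions = [0.8 * math.exp(-t / 1.2) for t in times]
A, tau = fit_decay(times, fractions)
```

Once fitted, evaluating `A * exp(-t / tau)` at any time t replaces a full stochastic simulation, which is the cost saving the paper's (far richer) learned model provides at scale.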
Directory of Open Access Journals (Sweden)
Xuechun Liu
2015-01-01
Full Text Available To address the shortcomings of traditional construction simulation methods for suspended dome structures, the cable tension preslack method, based on friction elements, node coupling technology, and local cooling, is proposed in this paper; it is suitable for whole-process construction simulation of a suspended dome. The method was used to simulate the construction process of a large-span suspended dome as a case study. The effects on the simulation results of the location deviation of joints, construction temperature, temporary construction supports, and friction of the cable-support joints were analyzed. The cable tension preslack method was validated by comparing the data from the construction simulation with measured results, providing the control cable tension and the control standards for construction acceptance. The analysis demonstrated that the position deviation of the joints has little effect on the control value, while the construction temperature and the friction of the cable-support joints significantly affect the control cable tension. The construction temperature, the temporary construction supports, and the friction of the cable-support joints all affect the internal force and deflection in the tensioned state but do not significantly affect the structural bearing characteristics in the load state. The forces should be controlled primarily in tensioned construction, with the deflections controlled secondarily.
A numerical simulation method and analysis of a complete thermoacoustic-Stirling engine.
Ling, Hong; Luo, Ercang; Dai, Wei
2006-12-22
Thermoacoustic prime movers can generate pressure oscillations without any moving parts through the self-excited thermoacoustic effect. The details of the numerical simulation methodology for thermoacoustic engines are presented in this paper. First, a four-port network method is used to build a transcendental equation in the complex frequency, which serves as a criterion to judge whether the temperature distribution of the whole thermoacoustic system is correct for a given heating power. Then, the numerical simulation of a thermoacoustic-Stirling heat engine is carried out. It is shown that the numerical simulation code runs robustly and outputs the quantities of interest. Finally, the calculated results are compared with experiments on the thermoacoustic-Stirling heat engine (TASHE), showing that the numerical simulation agrees with the experimental results with acceptable accuracy.
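The first step above, locating the complex frequency at which a transcendental characteristic equation vanishes, is typically done with an iterative root finder. The sketch below runs a secant iteration in the complex plane on a toy transcendental equation (cos ω = ω); the toy equation is an assumption for illustration, not the paper's actual four-port network determinant.

```python
import numpy as np

def secant_complex(f, z0, z1, tol=1e-12, maxit=100):
    """Secant iteration in the complex plane; suitable for transcendental f."""
    for _ in range(maxit):
        f0, f1 = f(z0), f(z1)
        if f1 == f0:
            break
        z2 = z1 - f1 * (z1 - z0) / (f1 - f0)
        if abs(z2 - z1) < tol:
            return z2
        z0, z1 = z1, z2
    return z1

# Toy transcendental characteristic equation (stand-in for a network
# determinant): f(omega) = cos(omega) - omega = 0.
f = lambda z: np.cos(z) - z
root = secant_complex(f, 0.5 + 0.1j, 1.0 + 0.1j)
print("root:", root)
```

Starting from complex guesses, the iteration settles on the real root near 0.7391; for a real engine model, a root with non-zero imaginary part indicates growth or decay of the acoustic oscillation.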
Feitosa, V P; Gotti, V B; Grohmann, C V; Abuná, G; Correr-Sobrinho, L; Sinhoreti, M A C; Correr, A B
2014-09-01
To evaluate the effects of two methods to simulate physiological pulpal pressure on the dentine bonding performance of two all-in-one adhesives and a two-step self-etch silorane-based adhesive by means of microtensile bond strength (μTBS) and nanoleakage surveys. The self-etch adhesives [G-Bond Plus (GB), Adper Easy Bond (EB) and silorane adhesive (SIL)] were applied to flat deep dentine surfaces from extracted human molars. The restorations were constructed using resin composites Filtek Silorane or Filtek Z350 (3M ESPE). After 24 h using the two methods of simulated pulpal pressure or no pulpal pressure (control groups), the bonded teeth were cut into specimens and submitted to μTBS and silver uptake examination. Results were analysed with two-way anova and Tukey's test (P adhesives. No difference between control and pulpal pressure groups was found for SIL and GB. EB led significant drop (P = 0.002) in bond strength under pulpal pressure. Silver impregnation was increased after both methods of simulated pulpal pressure for all adhesives, and it was similar between the simulated pulpal pressure methods. The innovative method to simulate pulpal pressure behaved similarly to the classic one and could be used as an alternative. The HEMA-free one-step and the two-step self-etch adhesives had acceptable resistance against pulpal pressure, unlike the HEMA-rich adhesive. © 2013 International Endodontic Journal. Published by John Wiley & Sons Ltd.
Polarimetric Emission of Rain Events: Simulation and Experimental Results at X-Band
Directory of Open Access Journals (Sweden)
Nuria Duffo
2009-06-01
Full Text Available Accurate models are used today for infrared and microwave satellite radiance simulations of the first two Stokes elements in the physical retrieval, data assimilation, etc., of surface and atmospheric parameters. Although in the past a number of theoretical and experimental works have studied the polarimetric emission of some natural surfaces, especially the sea surface roughened by the wind (WindSat mission), very limited studies have been conducted on the polarimetric emission of rain cells or other natural surfaces. In this work, the polarimetric emission (four Stokes elements) of a rain cell is computed using the polarimetric radiative transfer equation, assuming that raindrops are described by Pruppacher-Pitter shapes and that their size distribution follows the Laws-Parsons law. The Boundary Element Method (BEM) is used to compute the exact bistatic scattering coefficients for each raindrop shape and different canting angles. Numerical results are compared to the Rayleigh and Mie scattering coefficients, and to those of Oguchi, showing that above a 1-2 mm raindrop size the exact formulation is required to model the scattering properly. Simulation results using BEM are then compared to experimental data gathered with an X-band polarimetric radiometer. It is found that the depolarization of the radiation caused by the scattering of non-spherical raindrops induces a non-zero third Stokes parameter, and the differential phase of the scattering coefficients induces a non-zero fourth Stokes parameter.
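The 1-2 mm threshold quoted above can be motivated by the size parameter x = 2πr/λ: the Rayleigh approximation requires both x << 1 and |m|x << 1, where m is the drop's complex refractive index. The quick check below assumes a 10 GHz X-band frequency and |m| ≈ 8 for water at X-band; both values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

c = 3.0e8                 # speed of light, m/s
freq = 10e9               # assumed X-band frequency, Hz
lam = c / freq            # wavelength = 3 cm
m_abs = 8.0               # assumed |refractive index| of water at X-band

radii_mm = np.array([0.5, 1.0, 2.0, 3.0])
x = 2 * np.pi * radii_mm * 1e-3 / lam        # size parameter
for r, xv in zip(radii_mm, x):
    print(f"r = {r:.1f} mm: x = {xv:.3f}, |m|x = {m_abs * xv:.2f}")
```

Already at r = 1 mm, |m|x exceeds unity even though x itself is still small, so the Rayleigh criterion fails there, consistent with the 1-2 mm threshold at which the exact (BEM) formulation becomes necessary.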
RESULTS OF COPPER CATALYZED PEROXIDE OXIDATION (CCPO) OF TANK 48H SIMULANTS
Energy Technology Data Exchange (ETDEWEB)
Peters, T.; Pareizs, J.; Newell, J.; Fondeur, F.; Nash, C.; White, T.; Fink, S.
2012-08-14
Savannah River National Laboratory (SRNL) performed a series of laboratory-scale experiments that examined copper-catalyzed hydrogen peroxide (H₂O₂) aided destruction of organic components, most notably tetraphenylborate (TPB), in Tank 48H simulant slurries. The experiments were designed with an expectation of conducting the process within existing vessels of Building 241-96H with minimal modifications to the existing equipment. Results of the experiments indicate that TPB destruction levels exceeding 99.9% are achievable, dependent on the reaction conditions. The following observations were made with respect to the major processing variables investigated. A lower reaction pH provides faster reaction rates (pH 7 > pH 9 > pH 11); however, pH 9 reactions provide the least quantity of organic residual compounds within the limits of species analyzed. Higher temperatures lead to faster reaction rates and smaller quantities of organic residual compounds. Higher concentrations of the copper catalyst provide faster reaction rates, but the highest copper concentration (500 mg/L) also resulted in the second highest quantity of organic residual compounds. Faster rates of H₂O₂ addition lead to faster reaction rates and lower quantities of organic residual compounds. Testing with simulated slurries continues. Current testing is examining lower copper concentrations, refined peroxide addition rates, and alternate acidification methods. A revision of this report will provide updated findings with emphasis on defining recommended conditions for similar tests with actual waste samples.
Simulation of anisotropic diffusion by means of a diffusion velocity method
Beaudoin, A; Rivoalen, E
2003-01-01
An alternative to the Particle Strength Exchange (PSE) method for solving the advection-diffusion equation in the general case of anisotropic and non-uniform diffusion is proposed. This method is an extension of the diffusion velocity method. It is shown that this extension is quite straightforward due to the explicit use of the diffusion flux in the expression of the diffusion velocity. This approach is used to simulate pollutant transport in groundwater, and the results are compared to those of the PSE method presented in an earlier study by Zimmermann et al.
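The core idea, advecting particles with a deterministic "diffusion velocity" u_d = -D ∇c / c instead of adding random walks, can be checked in one dimension. For a Gaussian particle cloud the ratio ∇c/c is linear in x, so the sketch below (an illustrative isotropic, constant-D case, much simpler than the anisotropic setting of the paper) estimates it from the sample mean and variance and verifies that the cloud's variance grows at the exact diffusive rate 2Dt.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D diffusion velocity method: particles move with u_d = -D * grad(c)/c.
# For a Gaussian cloud of variance s2 this reduces to u_d(x) = D*(x - mean)/s2.
D, dt, steps = 0.5, 1e-3, 1000
x = rng.normal(0.0, 1.0, 5000)          # initial particle cloud
s2_0 = x.var()

for _ in range(steps):
    s2 = x.var()
    u = D * (x - x.mean()) / s2         # diffusion velocity (Gaussian closure)
    x += dt * u                         # purely deterministic advection

expected = s2_0 + 2 * D * steps * dt    # exact solution: s2(t) = s2(0) + 2*D*t
print("final variance:", x.var(), "expected:", expected)
```

Because the motion is deterministic, the method avoids the statistical noise of random-walk particle schemes; the PSE comparison in the paper exploits the same property.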
A modular method to handle multiple time-dependent quantities in Monte Carlo simulations
International Nuclear Information System (INIS)
Shin, J; Faddegon, B A; Perl, J; Schümann, J; Paganetti, H
2012-01-01
A general method for handling time-dependent quantities in Monte Carlo simulations was developed to make such simulations more accessible to the medical community for a wide range of applications in radiotherapy, including fluence and dose calculation. To describe time-dependent changes in the most general way, we developed a grammar of functions that we call ‘Time Features’. When a simulation quantity, such as the position of a geometrical object, an angle, a magnetic field, a current, etc, takes its value from a Time Feature, that quantity varies over time. The operation of time-dependent simulation was separated into distinct parts: the Sequence samples time values either sequentially at equal increments or randomly from a uniform distribution (allowing quantities to vary continuously in time), and then each time-dependent quantity is calculated according to its Time Feature. Due to this modular structure, time-dependent simulations, even in the presence of multiple time-dependent quantities, can be efficiently performed in a single simulation with any given time resolution. This approach has been implemented in TOPAS (TOol for PArticle Simulation), designed to make Monte Carlo simulations with Geant4 more accessible to both clinical and research physicists. To demonstrate the method, three clinical situations were simulated: a variable water column used to verify constancy of the Bragg peak of the Crocker Lab eye treatment facility of the University of California, the double-scattering treatment mode of the passive beam scattering system at Massachusetts General Hospital (MGH), where a spinning range modulator wheel accompanied by beam current modulation produces a spread-out Bragg peak, and the scanning mode at MGH, where time-dependent pulse shape, energy distribution and magnetic fields control Bragg peak positions. Results confirm the clinical applicability of the method. (paper)
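The idea of a small grammar of time-dependent functions driving otherwise static simulation quantities can be sketched as follows. The class and function names below are illustrative stand-ins, not the actual TOPAS Time Feature API; the wheel angle and beam current are chosen to echo the range-modulator example in the abstract.

```python
import math
import random

# A minimal sketch of a "Time Feature" grammar: each feature maps a time
# value to a quantity value.
class Constant:
    def __init__(self, value): self.value = value
    def __call__(self, t): return self.value

class Linear:
    def __init__(self, start, rate): self.start, self.rate = start, rate
    def __call__(self, t): return self.start + self.rate * t

class Sinusoid:
    def __init__(self, amp, period, offset=0.0):
        self.amp, self.period, self.offset = amp, period, offset
    def __call__(self, t):
        return self.offset + self.amp * math.sin(2 * math.pi * t / self.period)

def sequence(t_end, n, mode="sequential", seed=0):
    """Sample times sequentially at equal increments or uniformly at random."""
    if mode == "sequential":
        return [i * t_end / n for i in range(n)]
    rng = random.Random(seed)
    return [rng.uniform(0.0, t_end) for _ in range(n)]

# Two time-dependent quantities driven by one time sequence, e.g. a spinning
# modulator wheel angle and a modulated beam current.
wheel_angle = Linear(start=0.0, rate=360.0)        # degrees per second
beam_current = Sinusoid(amp=0.5, period=0.1, offset=1.0)

for t in sequence(t_end=0.2, n=4):
    print(t, wheel_angle(t), beam_current(t))
```

The separation mirrors the abstract: the Sequence owns the time sampling, while each quantity evaluates its own Time Feature, so any number of time-dependent quantities can coexist in one simulation at any time resolution.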
Merrikh-Bayat, Farshad
2011-04-01
One main approach for time-domain simulation of the linear output-feedback systems containing fractional-order controllers is to approximate the transfer function of the controller with an integer-order transfer function and then perform the simulation. In general, this approach suffers from two main disadvantages: first, the internal stability of the resulting feedback system is not guaranteed, and second, the amount of error caused by this approximation is not exactly known. The aim of this paper is to propose an efficient method for time-domain simulation of such systems without facing the above mentioned drawbacks. For this purpose, the fractional-order controller is approximated with an integer-order transfer function (possibly in combination with the delay term) such that the internal stability of the closed-loop system is guaranteed, and then the simulation is performed. It is also shown that the resulting approximate controller can effectively be realized by using the proposed method. Some formulas for estimating and correcting the simulation error, when the feedback system under consideration is subjected to the unit step command or the unit step disturbance, are also presented. Finally, three numerical examples are studied and the results are compared with the Oustaloup continuous approximation method. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
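The integer-order approximation step can be illustrated with the classic Oustaloup recursive filter, which the paper uses as its comparison baseline: s^r on a band [ωb, ωh] is replaced by a gain and 2N+1 stable pole/zero pairs. The sketch below is the generic textbook construction, not the paper's proposed stability-preserving method.

```python
import numpy as np

def oustaloup(r, wb=1e-2, wh=1e2, N=4):
    """Zeros, poles, and gain of the Oustaloup approximation of s**r on
    [wb, wh]; r in (-1, 1), 2N+1 pole/zero pairs."""
    k = np.arange(-N, N + 1)
    zeros = wb * (wh / wb) ** ((k + N + 0.5 * (1 - r)) / (2 * N + 1))
    poles = wb * (wh / wb) ** ((k + N + 0.5 * (1 + r)) / (2 * N + 1))
    gain = wh ** r
    return zeros, poles, gain

def freq_response(zeros, poles, gain, w):
    s = 1j * w
    return gain * np.prod((s + zeros) / (s + poles))

z, p, kg = oustaloup(0.5)
G = freq_response(z, p, kg, 1.0)   # evaluate at the geometric mean of the band
print(abs(G), np.degrees(np.angle(G)))
```

At ω = 1, the geometric mean of the band, the response should be close to (jω)^0.5, i.e. unit magnitude and a 45° phase; all poles lie in the left half-plane, so the approximant itself is stable, which is the property the paper's method extends to the closed loop.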
Directory of Open Access Journals (Sweden)
Fan Yuxin
2014-12-01
Full Text Available A fluid–structure interaction method combining a nonlinear finite element algorithm with a preconditioning finite volume method is proposed in this paper to simulate parachute transient dynamics. This method uses a three-dimensional membrane–cable fabric model to represent a parachute system in a highly folded configuration. The large shape change during parachute inflation is computed by nonlinear Newton–Raphson iteration, and the linear system of equations is solved by the generalized minimal residual (GMRES) method. A membrane wrinkling algorithm is also utilized to evaluate the special uniaxial tension state of membrane elements on the parachute canopy. To avoid the large time expense of the structural nonlinear iteration, the implicit Hilber–Hughes–Taylor (HHT) time integration method is employed. For the fluid dynamic simulations, the Roe and HLLC (Harten–Lax–van Leer contact) scheme has been modified and extended to compute flow problems at all speeds. The lower–upper symmetric Gauss–Seidel (LU-SGS) approximate factorization is applied to accelerate numerical convergence. Finally, the test model of a highly folded C-9 parachute is simulated at a prescribed speed, and the results show characteristics similar to experimental results and previous literature.
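The structural solve described above is an outer Newton–Raphson loop with an inner linear solver. The sketch below shows that pattern on a deliberately tiny nonlinear "structure" (a two-degree-of-freedom system with cubic stiffening springs, purely illustrative); the dense numpy solve stands in for the GMRES solver the paper uses, which is preferred when the tangent matrix is large and sparse.

```python
import numpy as np

def newton_raphson(residual, jacobian, u0, tol=1e-10, maxit=50,
                   linsolve=np.linalg.solve):
    """Newton-Raphson iteration; `linsolve` is pluggable (GMRES for large
    sparse structural systems, dense solve for this small demo)."""
    u = u0.astype(float)
    for _ in range(maxit):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        du = linsolve(jacobian(u), -r)  # solve J du = -r
        u = u + du
    return u

# Toy nonlinear "structure": r(u) = K0 @ u + alpha * u**3 - f = 0.
K0 = np.array([[4.0, -1.0], [-1.0, 3.0]])
alpha = 0.5
f = np.array([1.0, 2.0])

residual = lambda u: K0 @ u + alpha * u**3 - f
jacobian = lambda u: K0 + np.diag(3 * alpha * u**2)

u = newton_raphson(residual, jacobian, np.zeros(2))
print("solution:", u, "residual norm:", np.linalg.norm(residual(u)))
```

In the paper's setting, each Newton step of the inflating canopy assembles a new tangent matrix, which is why an iterative Krylov solver such as GMRES (with preconditioning) is the practical choice.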
International Nuclear Information System (INIS)
Mueller-Sievers, K.; Kober, B.
1997-01-01
Background: Since 1990 we have followed a quality assurance program with periodic tests of the functional performance values of a 16-year-old simulator. Material and Method: For this purpose we adopted and modified German standards for quality assurance on linear accelerators and international standards elaborated for simulators (International Electrotechnical Commission). The tests are subdivided into daily visual checks (light field indication, optical distance indicator, isocentre-indicating devices, indication of gantry and collimator angles) and monthly and annual tests of relevant simulator parameters. Some important examples demonstrate the small variation of parameters over 6 years: position of the light field centre when rotating the collimator, diameter of the isocentre circle when rotating the gantry, accuracy of the isocentre indication device, and coincidence of the light field and the simulated radiation field. Results: As an important result we can state that these rigid periodic tests made it possible to detect and compensate for deterioration of the simulator's quality rapidly. Conclusions: Technical improvements and specific calling-in of maintenance personnel whenever appropriate provided performance characteristics of our old simulator which are required by international recommendations as a basis for modern radiotherapy. (orig.)
International Nuclear Information System (INIS)
Ganjaei, A. A.; Nourazar, S. S.
2009-01-01
A new algorithm, the modified direct simulation Monte Carlo (MDSMC) method, is developed for the simulation of the Couette-Taylor gas flow problem. The Taylor series expansion is used to obtain the modified equation of the first-order time discretization of the collision equation, and the new algorithm, MDSMC, is implemented to simulate the collision equation in the Boltzmann equation. The new algorithm contains an extra term that takes into account the effect of second-order collisions, which enhances the appearance of the first Taylor instabilities of the vortex streamlines. It also contains a term of second order in the time step in the probabilistic coefficients, which yields simulations of higher accuracy than the previous DSMC algorithm. Using the MDSMC algorithm, the first Taylor instabilities of the vortex streamlines at different ratios of ω/ν (experimental data of Taylor) appeared at a smaller time step than with the DSMC algorithm. The torque developed on the stationary cylinder computed with the MDSMC algorithm shows better agreement with the experimental data of Kuhlthau than that computed with the DSMC algorithm.
Computer Simulation of Nonuniform MTLs via Implicit Wendroff and State-Variable Methods
Directory of Open Access Journals (Sweden)
L. Brancik
2011-04-01
Full Text Available The paper deals with techniques for computer simulation of nonuniform multiconductor transmission lines (MTLs) based on the implicit Wendroff and the state-variable methods. The techniques fall into the class of finite-difference time-domain (FDTD) methods useful for solving various electromagnetic systems. Their basic variants are extended and modified to enable solving both voltage and current distributions along nonuniform MTL wires and their sensitivities with respect to lumped and distributed parameters. An experimental error analysis is performed based on the Thomson cable, whose analytical solutions are known, and some examples of the simulation of both uniform and nonuniform MTLs are presented. Based on Matlab implementations, CPU times are analyzed to compare the efficiency of the methods. Some results for nonlinear MTL simulation are presented as well.
Simulation of isothermal multi-phase fuel-coolant interaction using MPS method with GPU acceleration
Energy Technology Data Exchange (ETDEWEB)
Gou, W.; Zhang, S.; Zheng, Y. [Zhejiang Univ., Hangzhou (China). Center for Engineering and Scientific Computation
2016-07-15
The energetic fuel-coolant interaction (FCI) has been one of the primary safety concerns in nuclear power plants. A graphics processing unit (GPU) implementation of the moving particle semi-implicit (MPS) method is presented and used to simulate the fuel-coolant interaction problem. The governing equations are discretized with the particle interaction model of MPS. A detailed implementation on a single GPU is introduced. The three-dimensional broken dam problem is simulated to verify the developed GPU-accelerated MPS method. The proposed GPU acceleration algorithm and developed code are then used to simulate the FCI problem. The developed GPU-MPS method showed good agreement with the experimental observation and theoretical prediction.
Achieving better cooling of turbine blades using numerical simulation methods
Inozemtsev, A. A.; Tikhonov, A. S.; Sendyurev, C. I.; Samokhvalov, N. Yu.
2013-02-01
A new design of the first-stage nozzle vane for the turbine of a prospective gas-turbine engine is considered. The blade's thermal state is numerically simulated in a conjugate statement using the ANSYS CFX 13.0 software package. Critical locations in the blade design are determined from the distribution of heat fluxes, and measures aimed at achieving more efficient cooling are analyzed. A substantially lower (by 50-100°C) maximum metal temperature was achieved as a result of this work.
Modelling and simulation of diffusive processes methods and applications
Basu, SK
2014-01-01
This book addresses the key issues in the modeling and simulation of diffusive processes from a wide spectrum of different applications across a broad range of disciplines. Features: discusses diffusion and molecular transport in living cells and suspended sediment in open channels; examines the modeling of peristaltic transport of nanofluids, and isotachophoretic separation of ionic samples in microfluidics; reviews thermal characterization of non-homogeneous media and scale-dependent porous dispersion resulting from velocity fluctuations; describes the modeling of nitrogen fate and transport
A survey of modelling methods for high-fidelity wind farm simulations using large eddy simulation
DEFF Research Database (Denmark)
Breton, Simon-Philippe; Sumner, J.; Sørensen, Jens Nørkær
2017-01-01
Large eddy simulations (LES) of wind farms have the capability to provide valuable and detailed information about the dynamics of wind turbine wakes. For this reason, their use within the wind energy research community is on the rise, spurring the development of new models and methods. This review surveys the most common schemes available to model the rotor, atmospheric conditions and terrain effects within current state-of-the-art LES codes, of which an overview is provided. A summary of the experimental research data available for validation of LES codes within the context of single and multiple wakes is also included.
[Adverse events management. Methods and results of a development project].
Rabøl, Louise Isager; Jensen, Elisabeth Brøgger; Hellebek, Annemarie H; Pedersen, Beth Lilja
2006-11-27
This article describes the methods and results of a project in the Copenhagen Hospital Corporation (H:S) on preventing adverse events. The aim of the project was to raise awareness about patient safety, test a reporting system for adverse events, develop and test methods of analysing events, and propagate ideas about how to prevent adverse events. H:S developed an action plan and a reporting system for adverse events, founded an organization and developed an educational program on theories and methods of learning from adverse events for both leaders and employees. During the three-year period from 1 January 2002 to 31 December 2004, the H:S staff reported 6011 adverse events. In the same period, the organization completed 92 root cause analyses. More than half of these dealt with events that had been optional to report; the other half dealt with events that had been mandatory to report. The number of reports and the front-line staff's attitude towards reporting show that the H:S succeeded in founding a safety culture. Future work should be centred on developing and testing methods that will prevent adverse events from happening. The objective is to suggest and complete preventive initiatives which will help increase patient safety.
A Novel Simulation Technician Laboratory Design: Results of a Survey-Based Study.
Ahmed, Rami; Hughes, Patrick G; Friedl, Ed; Ortiz Figueroa, Fabiana; Cepeda Brito, Jose R; Frey, Jennifer; Birmingham, Lauren E; Atkinson, Steven Scott
2016-03-16
OBJECTIVE: The purpose of this study was to elicit feedback from simulation technicians prior to developing the first simulation technician-specific simulation laboratory in Akron, OH. Simulation technicians serve a vital role in simulation centers within hospitals/health centers around the world. The first simulation technician degree program in the US has been approved in Akron, OH. To satisfy the requirements of this program and to meet the needs of this special audience of learners, a customized simulation lab is essential. A web-based survey was circulated to simulation technicians prior to completion of the lab for the new program. The survey consisted of questions aimed at identifying structural and functional design elements of a novel simulation center for the training of simulation technicians. Quantitative methods were utilized to analyze data. Over 90% of technicians (n=65) think that a lab designed explicitly for the training of technicians is novel and beneficial. Approximately 75% of respondents think that the space provided appropriate audiovisual (AV) infrastructure and space to evaluate the ability of technicians to be independent. The respondents think that the lab needed more storage space, visualization space for a large number of students, and more space in the technical/repair area. CONCLUSIONS: A space designed for the training of simulation technicians was considered to be beneficial. This laboratory requires distinct space for technical repair, adequate bench space for the maintenance and repair of simulators, an appropriate AV infrastructure, and space to evaluate the ability of technicians to be independent.
A hybrid multiscale kinetic Monte Carlo method for simulation of copper electrodeposition
International Nuclear Information System (INIS)
Zheng Zheming; Stephens, Ryan M.; Braatz, Richard D.; Alkire, Richard C.; Petzold, Linda R.
2008-01-01
A hybrid multiscale kinetic Monte Carlo (HMKMC) method for speeding up the simulation of copper electrodeposition is presented. The fast diffusion events are simulated deterministically with a heterogeneous diffusion model which considers the site-blocking effects of additives. Chemical reactions are simulated by an accelerated (tau-leaping) method for discrete stochastic simulation which adaptively selects exact discrete stochastic simulation for the appropriate reactions whenever necessary. The HMKMC method is seen to be accurate and highly efficient.
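The tau-leaping idea, firing each reaction channel a Poisson-distributed number of times over a fixed leap instead of simulating every individual event, can be sketched for the simplest possible channel. The decay reaction A → B below is purely illustrative, not the electrodeposition chemistry of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Tau-leaping for the single reaction A -> B with rate constant c: over a
# leap of length tau, the number of firings is Poisson(c * A * tau).
def tau_leap(A0, c, tau, t_end):
    A, t = A0, 0.0
    while t < t_end:
        k = rng.poisson(c * A * tau)   # firings during this leap
        A = max(A - k, 0)              # never let the population go negative
        t += tau
    return A

c, A0, t_end = 1.0, 10000, 1.0
samples = [tau_leap(A0, c, tau=0.01, t_end=t_end) for _ in range(200)]
mean_A = np.mean(samples)
print("simulated mean:", mean_A, "analytic mean:", A0 * np.exp(-c * t_end))
```

Choosing tau small enough that the propensity c·A changes little within a leap keeps the bias small; an adaptive scheme, as in the paper, falls back to exact stochastic simulation when populations become too small for leaping to be safe.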
Comparisons of the simulation results using different codes for ADS spallation target
International Nuclear Information System (INIS)
Yu Hongwei; Fan Sheng; Shen Qingbiao; Zhao Zhixiang; Wan Junsheng
2002-01-01
Calculations for a standard thick target were made using different codes. The simulation of a thick Pb target, 60 cm in length and 20 cm in diameter, bombarded with 800, 1000, 1500 and 2000 MeV proton beams was carried out. The yields and the spectra of emitted neutrons were studied. The spallation target was simulated by the SNSP, SHIELD, DCM/CEM (Dubna Cascade Model/Cascade Evaporation Model) and LAHET codes. The simulation results were compared with experiments. The comparisons show good agreement between the experiments and the SNSP-simulated leakage neutron yield. The SHIELD-simulated leakage neutron spectra are in good agreement with the LAHET- and DCM/CEM-simulated leakage neutron spectra.
Simulations of Micro Gas Flows by the DS-BGK Method
Li, Jun
2011-01-01
For gas flows in micro devices, the molecular mean free path is of the same order as the characteristic scale, making the Navier-Stokes equation invalid. Recently, some micro gas flows have been simulated by the DS-BGK method, which is convergent to the BGK equation and very efficient for low-velocity cases. As molecular reflection on the boundary is the dominant effect compared to intermolecular collisions in micro gas flows, a more realistic boundary condition, namely the CLL reflection model, is employed in the DS-BGK simulation, and the influence of the accommodation coefficients used in the molecular reflection model on the results is discussed. The simulation results are verified by comparison with those of the DSMC method as criteria. Copyright © 2011 by ASME.
Processing method and results of meteor shower radar observations
International Nuclear Information System (INIS)
Belkovich, O.I.; Suleimanov, N.I.; Tokhtasjev, V.S.
1987-01-01
Studies of meteor showers permit the solving of several principal problems of meteor astronomy: to obtain the structure of a stream in cross section and along its orbit; to retrace the evolution of particle orbits of the stream, taking into account gravitational and nongravitational forces, and to discover the orbital elements of its parent body; to find the total mass of solid particles ejected from the parent body, taking into account the physical and chemical evolution of meteor bodies; and to use meteor streams as natural probes for investigating the average characteristics of the meteor complex in the solar system. A simple and effective method of determining the flux density and the mass exponent parameter was worked out. This method and its results are discussed.
Method of vacuum correlation functions: Results and prospects
International Nuclear Information System (INIS)
Badalian, A. M.; Simonov, Yu. A.; Shevchenko, V. I.
2006-01-01
Basic results obtained within the QCD method of vacuum correlation functions over the past 20 years in the context of investigations into strong-interaction physics at the Institute of Theoretical and Experimental Physics (ITEP, Moscow) are formulated. Emphasis is placed primarily on the prospects of the general theory developed within QCD by employing both nonperturbative and perturbative methods. On the basis of ab initio arguments, it is shown that the lowest two field correlation functions play a dominant role in QCD dynamics. A quantitative theory of confinement and deconfinement, as well as of the spectra of light and heavy quarkonia, glueballs, and hybrids, is given in terms of these two correlation functions. Perturbation theory in a nonperturbative vacuum (background perturbation theory) plays a significant role, not possessing the drawbacks of conventional perturbation theory and leading to the infrared freezing of the coupling constant α_s
Application of NUREG-1150 methods and results to accident management
International Nuclear Information System (INIS)
Dingman, S.; Sype, T.; Camp, A.; Maloney, K.
1991-01-01
The use of NUREG-1150 and similar probabilistic risk assessments in the Nuclear Regulatory Commission (NRC) and industry risk management programs is discussed. Risk management is more comprehensive than the commonly used term accident management. Accident management includes strategies to prevent vessel breach, mitigate radionuclide releases from the reactor coolant system, and mitigate radionuclide releases to the environment. Risk management also addresses prevention of accident initiators, prevention of core damage, and implementation of effective emergency response procedures. The methods and results produced in NUREG-1150 provide a framework within which current risk management strategies can be evaluated, and future risk management programs can be developed and assessed. Examples of the use of the NUREG-1150 framework for identifying and evaluating risk management options are presented. All phases of risk management are discussed, with particular attention given to the early phases of accidents. Plans and methods for evaluating accident management strategies that have been identified in the NRC accident management program are discussed
Application of NUREG-1150 methods and results to accident management
International Nuclear Information System (INIS)
Dingman, S.; Sype, T.; Camp, A.; Maloney, K.
1990-01-01
The use of NUREG-1150 and similar Probabilistic Risk Assessments in NRC and industry risk management programs is discussed. "Risk management" is more comprehensive than the commonly used term "accident management." Accident management includes strategies to prevent vessel breach, mitigate radionuclide releases from the reactor coolant system, and mitigate radionuclide releases to the environment. Risk management also addresses prevention of accident initiators, prevention of core damage, and implementation of effective emergency response procedures. The methods and results produced in NUREG-1150 provide a framework within which current risk management strategies can be evaluated, and future risk management programs can be developed and assessed. Examples of the use of the NUREG-1150 framework for identifying and evaluating risk management options are presented. All phases of risk management are discussed, with particular attention given to the early phases of accidents. Plans and methods for evaluating accident management strategies that have been identified in the NRC accident management program are discussed. 2 refs., 3 figs
Kadoura, Ahmad
2011-06-06
Lennard-Jones (L-J) and Buckingham exponential-6 (exp-6) potential models were used to produce isotherms for methane at temperatures below and above the critical temperature. Molecular simulation approaches, particularly Monte Carlo simulations, were employed to create these isotherms, working with both canonical and Gibbs ensembles. Experiments in the canonical ensemble with each model were conducted to estimate pressures at a range of temperatures above the critical temperature of methane. Results were collected and compared to experimental data existing in the literature; both models showed excellent agreement with the experimental data. In parallel, experiments below the critical temperature were run in the Gibbs ensemble using the L-J model only. Upon comparing the results with experimental ones, a good fit was obtained with small deviations. The work was further developed by adding some statistical studies in order to achieve better understanding and interpretation of the quantities estimated by the simulation. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be; hence, further applications to more complicated systems are considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determination of elemental sulfur solubility conditions helps avoid all kinds of problems caused by its dissolution in gas production and transportation processes. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate elemental sulfur phase behavior in sour natural gas mixtures.
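A minimal canonical-ensemble (NVT) Metropolis sketch for the L-J model is shown below, in reduced units (epsilon = sigma = 1) and for an arbitrary small open cluster rather than the periodic bulk-methane systems of the study; the particle count, temperature and move size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def lj_energy(x):
    """Total Lennard-Jones energy, 4*(r^-12 - r^-6) summed over pairs."""
    d = x[:, None, :] - x[None, :, :]
    r2 = (d ** 2).sum(-1)
    iu = np.triu_indices(len(x), 1)
    inv6 = 1.0 / r2[iu] ** 3
    return float(np.sum(4.0 * (inv6 ** 2 - inv6)))

def metropolis(x, T, steps=20000, dmax=0.1):
    """Single-particle displacement moves with the Metropolis criterion."""
    E = lj_energy(x)
    for _ in range(steps):
        i = rng.integers(len(x))
        trial = x.copy()
        trial[i] += rng.uniform(-dmax, dmax, 3)
        Et = lj_energy(trial)
        # accept downhill moves always, uphill moves with Boltzmann weight
        if Et < E or rng.random() < np.exp(-(Et - E) / T):
            x, E = trial, Et
    return x, E

x0 = rng.uniform(0.0, 2.5, (8, 3))   # 8 atoms, random start
x, E = metropolis(x0, T=0.3)
print("final energy:", E)
```

A production isotherm calculation would add periodic boundary conditions, a cutoff with tail corrections, and pressure estimation via the virial; the Gibbs-ensemble runs below the critical temperature additionally exchange volume and particles between two boxes.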
Temperature Simulation of Greenhouse with CFD Methods and Optimal Sensor Placement
Directory of Open Access Journals (Sweden)
Yanzheng Liu
2014-03-01
Full Text Available The accuracy of information monitoring is significant for increasing the effectiveness of greenhouse environment control. In this paper, taking simulation of the temperature field in a greenhouse as an example, a CFD (Computational Fluid Dynamics) simulation model for the microclimate environment of a greenhouse was established from the principles of thermal environment formation, and the temperature distribution under mechanical ventilation was simulated. The results showed that the CFD model and its solution could describe the changing process of the temperature environment within the greenhouse; the most suitable turbulence model was the standard k-ε model. Under mechanical ventilation, the average deviation between the simulated and the measured values was 0.6, which was 4.5 percent of the measured value. The temperature field showed an obvious layered structure, and the temperature in the greenhouse model decreased gradually from the periphery to the center. Based on these results, the sensor number and the optimal sensor placement were determined with the CFD simulation method.
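The kind of computed field the sensors sample can be illustrated with a deliberately simplified stand-in for the paper's k-ε CFD model: a 2-D steady conduction solve by Jacobi iteration, with warm boundary walls and an assumed uniform interior heat sink, so that temperature falls from the periphery toward the center as in the abstract. All numbers below are illustrative.

```python
import numpy as np

n = 41
T = np.full((n, n), 20.0)                         # interior initial guess, deg C
T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 30.0    # warm boundary walls

for _ in range(5000):
    T_new = T.copy()
    # Jacobi update: each interior node moves toward its neighbour average
    T_new[1:-1, 1:-1] = 0.25 * (T[2:, 1:-1] + T[:-2, 1:-1]
                                + T[1:-1, 2:] + T[1:-1, :-2])
    # uniform interior heat sink (assumed) mimicking evaporative cooling
    T_new[1:-1, 1:-1] -= 0.001
    if np.max(np.abs(T_new - T)) < 1e-8:
        T = T_new
        break
    T = T_new

center = T[n // 2, n // 2]
print("boundary 30.0, centre:", round(center, 2))
```

On a field like this, sensor placement can then be posed as choosing the few grid points whose readings best reconstruct the full distribution; the paper does this against its CFD solution rather than a toy conduction field.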
The Multiscale Material Point Method for Simulating Transient Responses
Chen, Zhen; Su, Yu-Chen; Zhang, Hetao; Jiang, Shan; Sewell, Thomas
2015-06-01
To effectively simulate multiscale transient responses such as impact and penetration without invoking master/slave treatment, the multiscale material point method (Multi-MPM) is being developed in which molecular dynamics at nanoscale and dissipative particle dynamics at mesoscale might be concurrently handled within the framework of the original MPM at microscale (continuum level). The proposed numerical scheme for concurrently linking different scales is described in this paper with simple examples for demonstration. It is shown from the preliminary study that the mapping and re-mapping procedure used in the original MPM could coarse-grain the information at fine scale and that the proposed interfacial scheme could provide a smooth link between different scales. Since the original MPM is an extension from computational fluid dynamics to solid dynamics, the proposed Multi-MPM might also become robust for dealing with multiphase interactions involving failure evolution. This work is supported in part by DTRA and NSFC.
Numerical Simulation of Antennas with Improved Integral Equation Method
International Nuclear Information System (INIS)
Ma Ji; Fang Guang-You; Lu Wei
2015-01-01
Simulating antennas around a conducting object is a challenging task in computational electromagnetics, which is concerned with the behaviour of electromagnetic fields. To analyze this model efficiently, an improved integral equation–fast Fourier transform (IE-FFT) algorithm is presented in this paper. The proposed scheme employs two Cartesian grids of different size and location to enclose the antenna and the other object, respectively. On the one hand, the IE-FFT technique is used to store the matrix in sparse form and to accelerate the matrix–vector multiplication for each sub-domain independently. On the other hand, the mutual interaction between sub-domains is treated as an additional exciting voltage in each matrix equation. By updating the integral equations several times, the whole electromagnetic system reaches a stable state. Finally, the validity of the presented method is verified through the analysis of typical antennas in the presence of a conducting object. (paper)
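The FFT acceleration that IE-FFT exploits comes from translation invariance on a uniform Cartesian grid, which makes the grid interaction matrix (block) Toeplitz. A minimal one-dimensional illustration of that idea, not the paper's algorithm: a Toeplitz matrix–vector product computed in O(n log n) by circulant embedding.

```python
import numpy as np

def toeplitz_matvec_fft(first_col, first_row, x):
    """Multiply a Toeplitz matrix (given by its first column and first
    row, with first_row[0] == first_col[0]) by a vector x using the FFT.
    The matrix is embedded in a circulant of size 2n - 1, whose action
    is diagonal in the Fourier basis: O(n log n) instead of O(n^2)."""
    n = len(x)
    # Circulant first column: [c_0..c_{n-1}, r_{n-1}..r_1].
    c = np.concatenate([first_col, first_row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, len(c)))
    return y[:n].real
```

In the IE-FFT setting the same trick is applied per dimension to the Green's-function interaction matrix sampled on the Cartesian grid, which is what keeps the storage sparse and the matrix–vector product fast.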
An experiment teaching method based on the Optisystem simulation platform
Zhu, Jihua; Xiao, Xuanlu; Luo, Yuan
2017-08-01
The experiment teaching of optical communication systems is difficult to implement because of expensive equipment. OptiSystem is optical communication system design software that can provide such a simulation platform. Based on the characteristics of OptiSystem, an approach to experiment teaching is put forward in this paper. It comprises three progressive levels: the basics, the deeper looks and the practices. First, the basics give a brief overview of the technology; then the deeper looks include demos and example analyses; finally, the practices proceed through team seminars and comments. A variety of teaching forms are implemented in class. Experience shows that this method can not only substitute for the laboratory but also stimulate the students' interest in learning and improve their practical abilities, cooperation abilities and creative spirit. On the whole, it greatly improves the teaching effect.
Spectrally constrained NIR tomography for breast imaging: simulations and clinical results
Srinivasan, Subhadra; Pogue, Brian W.; Jiang, Shudong; Dehghani, Hamid; Paulsen, Keith D.
2005-04-01
A multi-spectral direct chromophore and scattering reconstruction for frequency-domain NIR tomography has been implemented using constraints from the known molar spectra of the chromophores and a Mie theory approximation for scattering. This was tested in a tumor-simulating phantom containing an inclusion with higher hemoglobin, lower oxygenation and contrast in scatter. The recovered images were quantitatively accurate and showed substantial improvement over existing methods; in addition, the results remained robust for up to 5% noise in amplitude and phase measurements. When applied to a clinical subject with fibrocystic disease, the tumor was visible in hemoglobin and water, but no decrease in oxygenation was observed, making oxygen saturation a potential diagnostic indicator.
DoSSiER: Database of Scientific Simulation and Experimental Results
Wenzel, Hans; Genser, Krzysztof; Elvira, Daniel; Pokorski, Witold; Carminati, Federico; Konstantinov, Dmitri; Ribon, Alberto; Folger, Gunter; Dotti, Andrea
2017-01-01
The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER can be easily accessed via a web application. In addition, a web service allows programmatic access to the repository to extract records in JSON or XML exchange formats. In this article, we describe the functionality and the current status of the various components of DoSSiER as well as the technology choices we made.
PALEOEARTHQUAKES IN THE PRIBAIKALIE: METHODS AND RESULTS OF DATING
Directory of Open Access Journals (Sweden)
Oleg P. Smekalin
2010-01-01
Full Text Available In the Pribaikalie and adjacent territories, seismogeological studies have been underway for almost half a century and have resulted in the discovery of more than 70 dislocations of seismic or presumably seismic origin. With the commencement of paleoseismic studies, the dating of paleo-earthquakes came into focus as an indicator useful for long-term prediction of strong earthquakes. V.P. Solonenko [Solonenko, 1977] distinguished five methods for dating paleoseismogenic deformations: geological, engineering-geological, historico-archeological, dendrochronological and radiocarbon methods. However, the ages of the majority of seismic deformations studied at the initial stage of the development of seismogeology in Siberia were defined by methods of relative or correlative age determination. Since the 1980s, studies of seismogenic deformation in the Pribaikalie have been widely conducted with trenching. Mass sampling, followed by radiocarbon analyses and determination of the absolute ages of paleo-earthquakes, provided new data on the seismic regime of the territory and the rates of recent displacements along active faults, and enhanced the validity of methods of relative dating, in particular morphometry. The capacities of the morphometry method have significantly increased with the introduction of laser techniques in surveys and digital processing of 3D relief models. Comprehensive seismogeological studies conducted in the Pribaikalie revealed 43 paleo-events within 16 seismogenic structures. The absolute ages of 18 paleo-events were defined by radiocarbon age determination. Judging by their ages, a number of dislocations are related to historical earthquakes which occurred in the 18th and 19th centuries, yet no reliable data on the epicenters of such events are available. The absolute and relative dating methods allowed us to identify sections in some paleoseismogenic structures by differences in activation ages and thus provided new data for
Method for simulating dose reduction in digital mammography using the Anscombe transformation.
Borges, Lucas R; Oliveira, Helder C R de; Nunes, Polyana F; Bakic, Predrag R; Maidment, Andrew D A; Vieira, Marcelo A C
2016-06-01
This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray-level characteristics as an image acquired at the lower radiation dose. The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose, 256 nonoverlapping ROIs extracted from each image, and with uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation. NNPS, PS, and local noise
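The key property the method relies on is that the Anscombe transformation approximately converts signal-dependent Poisson noise into unit-variance Gaussian noise, so noise can be manipulated as if it were white. A hedged sketch of that property (illustrative only; the paper's full pipeline also handles detector gain, offset, and DQE):

```python
import numpy as np

def anscombe(x):
    """Forward Anscombe transform: for Poisson data the output noise
    standard deviation is approximately 1, independent of the mean."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Direct algebraic inverse (refined unbiased inverses also exist)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

def add_noise_anscombe(image, extra_sigma, rng=None):
    """Toy noise-injection sketch: move to the Anscombe domain, add
    zero-mean Gaussian noise of the desired strength, transform back.
    The injected noise then scales with the local signal level."""
    rng = rng or np.random.default_rng()
    y = anscombe(image) + extra_sigma * rng.standard_normal(image.shape)
    return inverse_anscombe(y)
```

Because the Anscombe-domain noise is approximately stationary and Gaussian, a measured flat-field noise mask can be matched to a scaled image in that domain, which is the relationship the abstract describes.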
A regularized vortex-particle mesh method for large eddy simulation
Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.
2017-11-01
We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT-based solver for the Poisson equation. Arbitrarily high order is achieved through regularization of singular Green's function solutions to the Poisson equation, and recently we have derived novel high-order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier–Stokes equations; hence we use the method for large eddy simulation by including a dynamic subfilter-scale model based on test filters compatible with the aforementioned regularization functions. Furthermore, the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000 and the obtained results are compared to results from the literature.
A method to solve the aircraft magnetic field model basing on geomagnetic environment simulation
International Nuclear Information System (INIS)
Lin, Chunsheng; Zhou, Jian-jun; Yang, Zhen-yu
2015-01-01
In aeromagnetic surveys, it is difficult to solve the aircraft magnetic field model from flight data for unmanned or disposable aircraft. A ground-based model-solving method is therefore proposed. The method simulates the geomagnetic environment in which the aircraft flies and creates background magnetic field samples identical to the magnetic field arising from the aircraft's maneuvering. The aircraft magnetic field model can then be solved by collecting the magnetic field samples. The method for simulating the magnetic environment and the method for controlling the errors are presented as well. Finally, an experiment is performed for verification. The result shows that the precision and stability of the model solved by this method are good. The model parameters calculated by the method in one district can also be used in other districts worldwide. - Highlights: • A method to solve the aircraft magnetic field model on the ground is proposed. • The method solves the model by simulating the dynamic geomagnetic environment as in real flight. • The way to control the error of the method is analyzed. • An experiment is performed for verification
International Nuclear Information System (INIS)
Soares, Eufemia Paez; Saiki, Mitiko; Wiebeck, Helio
2005-01-01
In the present study a radiometric method was established to determine the migration of elements from plastic food packaging into a simulated acetic acid solution. The radiometric method consisted of irradiating plastic samples with neutrons at the IEA-R1 nuclear reactor for a period of 16 hours under a neutron flux of 10¹² n cm⁻² s⁻¹, and then exposing them to element migration into a simulant solution. The radioactivity of the activated elements transferred to the solutions was measured to evaluate the migration. The experimental conditions were an exposure time of 10 days at 40 °C, with a 3% acetic acid solution used as the simulant, according to the procedure established by the National Agency of Sanitary Monitoring (ANVISA). The migration study was applied to plastic samples from soft drink and juice packaging. The results obtained indicated the migration of the elements Co, Cr and Sb. The advantages of this methodology were that there was no need to analyse simulant blanks or to use high-purity simulant solutions. Moreover, the method allows evaluation of the migration of the elements into the food content itself instead of a simulant solution. The detection limits indicated the high sensitivity of the radiometric method. (author)
Hamiltonian and potentials in derivative pricing models: exact results and lattice simulations
Baaquie, Belal E.; Corianò, Claudio; Srikant, Marakani
2004-03-01
The pricing of options, warrants and other derivative securities is one of the great successes of financial economics. These financial products can be modeled and simulated using quantum-mechanical instruments based on a Hamiltonian formulation. We show here some applications of these methods for various potentials, which we have simulated via lattice Langevin and Monte Carlo algorithms, to the pricing of options. We focus on barrier and other path-dependent options, showing in some detail the computational strategies involved.
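As a point of comparison only (the abstract's approach is Hamiltonian and lattice-based, which is not reproduced here), a plain Monte Carlo sketch of pricing one of the path-dependent products mentioned: a down-and-out barrier call under geometric Brownian motion. Parameter names are illustrative.

```python
import numpy as np

def down_and_out_call_mc(s0, k, barrier, r, sigma, t,
                         n_steps=100, n_paths=50000, rng=None):
    """Monte Carlo price of a down-and-out barrier call: the option pays
    max(S_T - K, 0) unless the price path ever falls below the barrier.
    Paths follow geometric Brownian motion, monitored at discrete steps."""
    rng = rng or np.random.default_rng(0)
    dt = t / n_steps
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * np.sqrt(dt)
    log_s = np.full(n_paths, np.log(s0))
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(n_steps):
        log_s += drift + vol * rng.standard_normal(n_paths)
        alive &= log_s > np.log(barrier)          # knock out crossed paths
    payoff = np.where(alive, np.maximum(np.exp(log_s) - k, 0.0), 0.0)
    return np.exp(-r * t) * payoff.mean()
```

With the barrier far below the spot the price approaches the vanilla Black–Scholes value; tightening the barrier toward the spot drives it to zero, which is a convenient sanity check for any path-dependent pricer.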
Performance of various mathematical methods for calculation of radioimmunoassay results
International Nuclear Information System (INIS)
Sandel, P.; Vogt, W.
1977-01-01
Interpolation and regression methods are available for the computer-aided determination of radioimmunoassay results. We compared the performance of eight algorithms (weighted and unweighted linear logit-log regression, quadratic logit-log regression, Rodbard's logistic model in weighted and unweighted form, smoothing-spline interpolation with a large and a small smoothing factor, and polygonal interpolation) on the basis of three radioimmunoassays with different reference curve characteristics (digoxin, estriol, human chorionic somatomammotropin = HCS). Particular importance was attached to the accuracy of the approximation at the intermediate points on the curve, i.e. those points that lie midway between two standard concentrations. These concentrations were obtained by weighing and inserted as unknown samples. In the case of digoxin and estriol the polygonal interpolation provided the best results, while the weighted logit-log regression proved superior in the case of HCS. (orig.) [de
Simulation of granular and gas-solid flows using discrete element method
Boyalakuntla, Dhanunjay S.
2003-10-01
In recent years there has been increased research activity in the experimental and numerical study of gas-solid flows. Flows of this type have numerous applications in the energy, pharmaceuticals, and chemicals process industries. Typical applications include pulverized coal combustion, flow and heat transfer in bubbling and circulating fluidized beds, hopper and chute flows, pneumatic transport of pharmaceutical powders and pellets, and many more. The present work addresses the study of gas-solid flows using computational fluid dynamics (CFD) techniques combined with discrete element simulation methods (DES). Many previous studies of coupled gas-solid flows have been performed assuming the solid phase to be a continuum with averaged properties and treating the gas-solid flow as interpenetrating continua. Instead, in the present work, the gas phase flow is simulated using continuum theory and the solid phase flow is simulated using DES. DES treats each solid particle individually, thus accounting for its dynamics due to particle-particle interactions, particle-wall interactions, as well as fluid drag and buoyancy. The present work involves developing efficient DES methods for dense granular flow and coupling this simulation to continuum simulations of the gas phase flow. Simulations have been performed to observe pure granular behavior in vibrating beds. Benchmark cases have been simulated and the results obtained match the published literature. The dimensionless acceleration amplitude and the bed height are the parameters governing bed behavior. Various interesting behaviors such as heaping, round and cusp surface standing waves, as well as kinks, have been observed for different values of the acceleration amplitude for a given bed height. Furthermore, binary granular mixtures (granular mixtures with two particle sizes) in a vibrated bed have also been studied. Gas-solid flow simulations have been performed to study fluidized beds. Benchmark 2D
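The particle-level contact treatment that DES relies on can be sketched with a toy one-particle example: a single particle dropped onto a flat wall, with a linear spring-dashpot (penalty) contact force. Parameter values are illustrative, not from the thesis.

```python
def des_bounce(y0=1.0, v0=0.0, radius=0.05, k=1.0e4, c=2.0, g=9.81,
               dt=1.0e-5, t_end=1.0):
    """Toy discrete-element sketch: a unit-mass particle falling under
    gravity onto a wall at y = 0.  While the particle overlaps the wall
    (y < radius), a linear spring-dashpot force k*overlap - c*v acts --
    the same contact ingredient DES uses for particle-particle and
    particle-wall interactions.  Returns final height and velocity."""
    y, v = y0, v0
    for _ in range(int(t_end / dt)):
        overlap = radius - y                      # > 0 while in contact
        f = k * overlap - c * v if overlap > 0.0 else 0.0
        v += (f - g) * dt                         # semi-implicit Euler, m = 1
        y += v * dt
    return y, v
```

The dashpot term dissipates energy on each bounce, so the rebound height stays below the drop height; a full DES code applies the same force law pairwise over all contacting particles, with tangential friction added.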
Direct numerical simulation of the Rayleigh-Taylor instability with the spectral element method
International Nuclear Information System (INIS)
Zhang Xu; Tan Duowang
2009-01-01
A novel method is proposed to simulate Rayleigh-Taylor instabilities using a specially-developed unsteady three-dimensional high-order spectral element method code. The numerical model used consists of Navier-Stokes equations and a transport-diffusive equation. The code is first validated with the results of linear stability perturbation theory. Then several characteristics of the Rayleigh-Taylor instabilities are studied using this three-dimensional unsteady code, including instantaneous turbulent structures and statistical turbulent mixing heights under different initial wave numbers. These results indicate that turbulent structures of Rayleigh-Taylor instabilities are strongly dependent on the initial conditions. The results also suggest that a high-order numerical method should provide the capability of simulating small scale fluctuations of Rayleigh-Taylor instabilities of turbulent flows. (authors)
Directory of Open Access Journals (Sweden)
Ilaria Iaconeta
2017-09-01
Full Text Available The simulation of large deformation problems, involving complex history-dependent constitutive laws, is of paramount importance in several engineering fields. Particular attention has to be paid to the choice of a suitable numerical technique such that reliable results can be obtained. In this paper, a Material Point Method (MPM and a Galerkin Meshfree Method (GMM are presented and verified against classical benchmarks in solid mechanics. The aim is to demonstrate the good behavior of the methods in the simulation of cohesive-frictional materials, both in static and dynamic regimes and in problems dealing with large deformations. The vast majority of MPM techniques in the literature are based on some sort of explicit time integration. The techniques proposed in the current work, on the contrary, are based on implicit approaches, which can also be easily adapted to the simulation of static cases. The two methods are presented so as to highlight the similarities to rather than the differences from “standard” Updated Lagrangian (UL approaches commonly employed by the Finite Elements (FE community. Although both methods are able to give a good prediction, it is observed that, under very large deformation of the medium, GMM lacks robustness due to its meshfree nature, which makes the definition of the meshless shape functions more difficult and expensive than in MPM. On the other hand, the mesh-based MPM is demonstrated to be more robust and reliable for extremely large deformation cases.
Application of Macro Response Monte Carlo method for electron spectrum simulation
International Nuclear Information System (INIS)
Perles, L.A.; Almeida, A. de
2007-01-01
During the past years, several variance reduction techniques for Monte Carlo electron transport have been developed in order to reduce the computation time of electron transport for absorbed dose distributions. We have implemented the Macro Response Monte Carlo (MRMC) method to evaluate the electron spectrum, which can be used as a phase-space input for other simulation programs. This technique uses probability distributions for electron histories previously simulated in spheres (called kugels). These probabilities are used to sample the final state of the primary electron, as well as the creation of secondary electrons and photons. We have compared MRMC electron spectra simulated in a homogeneous phantom against Geant4 spectra. The results showed agreement better than 6% in the spectral peak energies and that the MRMC code is up to 12 times faster than Geant4 simulations
A method of simulating intensity modulation-direct detection WDM systems
Institute of Scientific and Technical Information of China (English)
HUANG Jing; YAO Jian-quan; LI En-bang
2005-01-01
In the simulation of intensity modulation-direct detection WDM systems, when dispersion and nonlinear effects play equally important roles, the intensity fluctuation caused by cross-phase modulation (XPM) may be overestimated as a result of an improper step size. Therefore, the step size in numerical simulation should be selected to suppress the false XPM intensity modulation (keeping it much smaller than the signal power). According to this criterion, the step size varies along the fiber. For a WDM system, the step size depends on the channel separation, and different types of transmission fiber require different step sizes. In the split-step Fourier method, this criterion can reduce simulation time, and when the step size is larger than 100 meters, the simulation accuracy can also be improved.
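The split-step Fourier method referred to above can be sketched for the scalar nonlinear Schrödinger equation, a single channel rather than the paper's multi-channel WDM setup; sign conventions for the dispersion term vary between references.

```python
import numpy as np

def ssfm_step(a, dz, beta2, gamma, dt):
    """One symmetric split-step Fourier step for the scalar NLSE
    (lossless fiber): dispersion is applied in the frequency domain,
    the Kerr nonlinear phase in the time domain.  `a` is the complex
    field envelope sampled every `dt`, `dz` the spatial step size."""
    w = 2.0 * np.pi * np.fft.fftfreq(len(a), d=dt)    # angular frequencies
    half_disp = np.exp(1j * (beta2 / 2.0) * w ** 2 * (dz / 2.0))
    a = np.fft.ifft(half_disp * np.fft.fft(a))        # half step: dispersion
    a = a * np.exp(1j * gamma * np.abs(a) ** 2 * dz)  # full step: nonlinearity
    return np.fft.ifft(half_disp * np.fft.fft(a))     # half step: dispersion
```

Both sub-steps are pure phase rotations, so the pulse power is conserved exactly; the step-size criterion in the abstract amounts to choosing `dz` small enough that the splitting error does not masquerade as XPM-induced intensity modulation.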
Directory of Open Access Journals (Sweden)
Y. Zhao
2017-06-01
Full Text Available Local line rolling forming is a common forming approach for the complex curvature plates of ships. However, a processing mode based on artificial experience is still applied at present, because it is difficult to integrally determine the relational data for the forming shape, processing path, and process parameters used to drive automation equipment. Numerical simulation is currently the major approach for generating such complex relational data. Therefore, a highly precise and effective numerical computation method becomes crucial in the development of an automated local line rolling forming system for producing the complex curvature plates used in ships. In this study, a three-dimensional elastoplastic finite element method was first employed to perform numerical computations for local line rolling forming, and the corresponding deformation and strain distribution features were acquired. In addition, according to the characteristics of the strain distributions, a simplified deformation simulation method, based on the deformation obtained by applying strain, was presented. Compared to the results of the three-dimensional elastoplastic finite element method, this simplified deformation simulation method was verified to provide high computational accuracy, and it could result in a substantial reduction in calculation time. Thus, the application of the simplified deformation simulation method was further explored in the case of multiple rolling loading paths. Moreover, it was also utilized to calculate the local line rolling forming for typical complex curvature plates of ships. The research findings indicated that the simplified deformation simulation method was an effective tool for rapidly obtaining relationships between the forming shape, processing path, and process parameters.
1991-04-01
Results from vehicle computer simulations usually take the form of numeric data or graphs. While these graphs provide the investigator with insight into vehicle behavior, it may be difficult to use them to assess complex vehicle motion. C...
An Importance Sampling Simulation Method for Bayesian Decision Feedback Equalizers
Chen, S.; Hanzo, L.
2000-01-01
An importance sampling (IS) simulation technique is presented for evaluating the lower-bound bit error rate (BER) of the Bayesian decision feedback equalizer (DFE) under the assumption of correct decisions being fed back. A design procedure is developed, which chooses appropriate bias vectors for the simulation density to ensure asymptotic efficiency of the IS simulation.
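The core of IS is sampling from a biased density concentrated on the rare error events and correcting each sample by the likelihood ratio. A minimal sketch for a Gaussian tail probability, which stands in for a BER here; this is not the Bayesian DFE simulation itself, and the mean-shift biasing is an assumption of the sketch.

```python
import numpy as np

def tail_prob_is(threshold, shift, n=200000, rng=None):
    """Importance-sampling estimate of P(X > threshold) for X ~ N(0, 1).
    Samples are drawn from a mean-shifted Gaussian g(x) = N(shift, 1)
    centred on the rare region and reweighted by the likelihood ratio
    f(x)/g(x) = exp(-shift*x + shift^2/2)."""
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal(n) + shift                # biased samples
    w = np.exp(-shift * x + 0.5 * shift ** 2)         # likelihood ratio
    return float(np.mean((x > threshold) * w))
```

At a threshold of 5 the true probability is about 2.9e-7, so 2·10⁵ plain Monte Carlo samples would on average see fewer than one error event, while the biased estimator resolves the probability to a few percent; choosing good bias vectors is exactly the design problem the abstract addresses for the DFE.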
Simulation Results: Optimization of Contact Ratio for Interdigitated Back-Contact Solar Cells
Directory of Open Access Journals (Sweden)
Vinay Budhraja
2017-01-01
Full Text Available In the fabrication of interdigitated back contact (IBC) solar cells, choosing the right contact size is very important for achieving maximum efficiency. Line contacts and point contacts are the two options for the IBC structure. Point contacts are expected to give better results because of the reduced recombination rate. In this work, we simulate the effect of contact size on the performance of IBC solar cells. Simulations were performed in three dimensions using Quokka, which numerically solves charge-carrier transport. Our simulation results show that a contact ratio of around 10% achieves the optimum cell efficiency.
Simulation of crystalline pattern formation by the MPFC method
Directory of Open Access Journals (Sweden)
Starodumov Ilya
2017-01-01
Full Text Available The Phase Field Crystal model in hyperbolic formulation (modified PFC, or MPFC) is investigated as one of the most promising techniques for modeling the formation of crystal patterns. MPFC is a convenient and fundamentally based description linking nano- and meso-scale processes in the evolution of crystal structures. The presented model is a powerful tool for the mathematical modeling of various manufacturing operations, among them the definition of process conditions for the production of metal castings with predetermined properties, the prediction of defects in the crystal structure during casting, and the evaluation of the quality of special coatings. Our paper presents the structure diagram calculated for the one-mode MPFC model and compares it to the results of numerical simulation for fast phase transitions. The diagram is verified by the numerical simulation and also agrees closely with previously calculated diagrams. The computations have been performed using software based on an efficient parallel computational algorithm.
Numerical simulation methods of fires in nuclear power plants
International Nuclear Information System (INIS)
Keski-Rahkonen, O.; Bjoerkman, J.; Heikkilae, L.
1992-01-01
Fire is a significant hazard to the safety of nuclear power plants (NPPs). A fire may be a serious accident in itself, but even a small fire at a critical point in an NPP may cause an accident much more serious than the fire itself. According to risk assessments, fire may be an initiating cause or a contributing factor in a large proportion of reactor accidents. At the Fire Technology Laboratory and the Nuclear Engineering Laboratory of the Technical Research Centre of Finland (VTT), fire safety research for NPPs has been carried out on a large scale since 1985. During 1988-92 the project Advanced Numerical Modelling in Nuclear Power Plants (PALOME) was carried out. In the project, the level of numerical modelling for fire research in Finland was improved by acquiring, preparing for use and developing numerical fire simulation programs. Large-scale test data from the German experimental program (PHDR Sicherheitsprogramm at Kernforschungszentrum Karlsruhe) were used as reference. The large-scale tests were simulated by numerical codes and the results were compared to calculations carried out by others. Scientific interaction with outstanding foreign laboratories and scientists has been an important part of the project. This report describes only the work of the PALOME project carried out at the Fire Technology Laboratory. A report on the work at the Nuclear Engineering Laboratory will be published separately. (au)
Numerical simulation for cracks detection using the finite elements method
Directory of Open Access Journals (Sweden)
S Bennoud
2016-09-01
Full Text Available The means of detection must ensure control either during initial construction or during the service life of all parts. Non-destructive testing (NDT gathers the most widespread methods for detecting defects in a part or verifying the integrity of a structure. In advanced industries (aeronautics, aerospace, nuclear …, assessing material damage is a key point for controlling the durability and reliability of parts and materials in service. In this context, it is necessary to quantify the damage and identify the different mechanisms responsible for its progress. It is therefore essential to characterize materials and identify the most sensitive indicators of damage in order to prevent the destruction of parts and to use them optimally. In this work, a simulation by the finite element method is performed with the aim of calculating the electromagnetic interaction energy between the probe and the part (with/without defect. From the calculated energy, we deduce the real and imaginary components of the impedance, which make it possible to determine the characteristic parameters of a crack in various metallic parts.
Methods employed to speed up Cathare for simulation uses
International Nuclear Information System (INIS)
Agator, J.M.
1992-01-01
This paper describes the main methods used to speed up the French advanced thermal-hydraulic computer code CATHARE and build a fast version, called CATHARE-SIMU, adapted to real-time calculations and a simulation environment. Since CATHARE-SIMU, like CATHARE, uses a numerical scheme based on a fully implicit Newton iterative method, and therefore a variable time step, two ways have been explored to reduce the computing time: avoiding short time steps, and thus minimizing the number of iterations per time step, and reducing the computing time needed per iteration. CATHARE-SIMU uses the same physical laws and correlations as CATHARE, with only some minor simplifications; this was considered the only way to be sure of maintaining the level of physical relevance of CATHARE. Finally, it is noted that the validation programme of CATHARE-SIMU includes a set of 33 transient calculations, referring either to CATHARE for two-phase transients or to measurements on real plants for operational transients
Hybrid Methods for Muon Accelerator Simulations with Ionization Cooling
Energy Technology Data Exchange (ETDEWEB)
Kunz, Josiah [Anderson U.]; Snopok, Pavel [Fermilab]; Berz, Martin [Michigan State U.]; Makino, Kyoko [Michigan State U.]
2018-03-28
Muon ionization cooling involves passing particles through solid or liquid absorbers. Careful simulations are required to design muon cooling channels. New features have been developed for inclusion in the transfer map code COSY Infinity to follow the distribution of charged particles through matter. To study the passage of muons through material, the transfer map approach alone is not sufficient. The interplay of beam optics and atomic processes must be studied by a hybrid transfer map--Monte-Carlo approach in which transfer map methods describe the deterministic behavior of the particles, and Monte-Carlo methods are used to provide corrections accounting for the stochastic nature of scattering and straggling of particles. The advantage of the new approach is that the vast majority of the dynamics are represented by fast application of the high-order transfer map of an entire element and accumulated stochastic effects. The gains in speed are expected to simplify the optimization of cooling channels which is usually computationally demanding. Progress on the development of the required algorithms and their application to modeling muon ionization cooling channels is reported.
An at-site flood estimation method in the context of nonstationarity I. A simulation study
Gado, Tamer A.; Nguyen, Van-Thanh-Van
2016-04-01
The stationarity of annual flood peak records is the traditional assumption of flood frequency analysis. In some cases, however, as a result of land-use and/or climate change, this assumption is no longer valid. Therefore, new statistical models are needed to capture dynamically the change of probability density functions over time, in order to obtain reliable flood estimates. In this study, an innovative method for nonstationary flood frequency analysis is presented. The new method is based on detrending the flood series and applying the L-moments along with the GEV distribution to the transformed "stationary" series (hereafter called the LM-NS method). The LM-NS method was assessed through a comparative study with the maximum likelihood (ML) method for the nonstationary GEV model, as well as with the stationary (S) GEV model. The comparative study, based on Monte Carlo simulations, was carried out for three nonstationary GEV models: a linear dependence of the mean on time (GEV1), a quadratic dependence of the mean on time (GEV2), and a linear dependence of both the mean and the log standard deviation on time (GEV11). The simulation results indicated that the LM-NS method performs better than the ML method for most of the cases studied, whereas the stationary method provides the least accurate results. An additional advantage of the LM-NS method is that it avoids the numerical problems (e.g., convergence problems) that may occur with the ML method when estimating parameters from small data samples.
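The two steps of the LM-NS idea, detrending the series and then fitting a GEV by L-moments, can be sketched as follows. The GEV shape uses Hosking's standard rational approximation; this is a generic illustration under those textbook formulas, not the authors' implementation.

```python
import math

def detrend(x):
    """Remove a least-squares linear trend in time, preserving the series mean."""
    n = len(x)
    t = list(range(n))
    tbar = sum(t) / n
    xbar = sum(x) / n
    slope = sum((ti - tbar) * (xi - xbar) for ti, xi in zip(t, x)) / \
            sum((ti - tbar) ** 2 for ti in t)
    return [xi - slope * (ti - tbar) for ti, xi in zip(t, x)]

def gev_lmom(x):
    """GEV (loc, scale, shape k) from sample L-moments.

    Uses unbiased probability-weighted moments and Hosking's rational
    approximation for the shape (k > 0 bounded above, Hosking convention).
    """
    xs = sorted(x)
    n = len(xs)
    b0 = sum(xs) / n
    b1 = sum(i * xs[i] for i in range(n)) / (n * (n - 1))
    b2 = sum(i * (i - 1) * xs[i] for i in range(n)) / (n * (n - 1) * (n - 2))
    l1, l2 = b0, 2 * b1 - b0
    t3 = (6 * b2 - 6 * b1 + b0) / l2            # L-skewness
    c = 2.0 / (3.0 + t3) - math.log(2) / math.log(3)
    k = 7.8590 * c + 2.9554 * c * c             # shape approximation
    g = math.gamma(1 + k)
    scale = l2 * k / ((1 - 2 ** (-k)) * g)
    loc = l1 - scale * (1 - g) / k
    return loc, scale, k
```

Applying `gev_lmom` to `detrend(series)` gives the stationary-series fit that the LM-NS method builds on.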
Tullis, Terry. E.; Richards-Dinger, Keith B.; Barall, Michael; Dieterich, James H.; Field, Edward H.; Heien, Eric M.; Kellogg, Louise; Pollitz, Fred F.; Rundle, John B.; Sachs, Michael K.; Turcotte, Donald L.; Ward, Steven N.; Yikilmaz, M. Burak
2012-01-01
In order to understand earthquake hazards we would ideally have a statistical description of earthquakes for tens of thousands of years. Unfortunately, the ∼100‐year instrumental, several‐100‐year historical, and few‐1000‐year paleoseismological records are woefully inadequate to provide a statistically significant record. Physics‐based earthquake simulators can generate arbitrarily long histories of earthquakes; thus they can provide a statistically meaningful history of simulated earthquakes. The question is, how realistic are these simulated histories? The purpose of this paper is to begin to answer that question. We compare the results between different simulators and with information that is known from the limited instrumental, historic, and paleoseismological data. As expected, the results from all the simulators show that the observational record is too short to properly represent the system behavior; therefore, although tests of the simulators against the limited observations are necessary, they are not a sufficient test of the simulators' realism. The simulators appear to pass this necessary test. In addition, the physics‐based simulators show similar behavior even though there are large differences in the methodology. This suggests that they represent realistic behavior. Different assumptions concerning the constitutive properties of the faults do result in enhanced capabilities of some simulators. However, it appears that the similar behavior of the different simulators may result from the fault‐system geometry, slip rates, and assumed strength drops, along with the shared physics of stress transfer. This paper describes the results of running four earthquake simulators that are described elsewhere in this issue of Seismological Research Letters. The simulators ALLCAL (Ward, 2012), VIRTCAL (Sachs et al., 2012), RSQSim (Richards‐Dinger and Dieterich, 2012), and ViscoSim (Pollitz, 2012) were run on our most recent all‐California fault
Mirams, Gary R; Davies, Mark R; Brough, Stephen J; Bridgland-Taylor, Matthew H; Cui, Yi; Gavaghan, David J; Abi-Gerges, Najah
2014-01-01
Detection of drug-induced pro-arrhythmic risk is a primary concern for pharmaceutical companies and regulators. Increased risk is linked to prolongation of the QT interval on the body surface ECG. Recent studies have shown that multiple ion channel interactions can be required to predict changes in ventricular repolarisation and therefore QT intervals. In this study we attempt to predict the result of the human clinical Thorough QT (TQT) study, using multiple ion channel screening which is available early in drug development. Ion current reduction was measured, in the presence of marketed drugs which have had a TQT study, for channels encoded by hERG, CaV1.2, NaV1.5, KCNQ1/MinK, and Kv4.3/KChIP2.2. The screen was performed on two platforms: IonWorks Quattro (all 5 channels, 34 compounds) and IonWorks Barracuda (hERG & CaV1.2, 26 compounds). Concentration-effect curves were fitted to the resulting data, and used to calculate a percentage reduction in each current at a given concentration. Action potential simulations were then performed using the ten Tusscher and Panfilov (2006), Grandi et al. (2010) and O'Hara et al. (2011) human ventricular action potential models, pacing at 1 Hz and running to steady state, for a range of concentrations. We compared simulated action potential duration predictions with the QT prolongation observed in the TQT studies. At the estimated concentrations, simulations tended to underestimate any observed QT prolongation. When considering a wider range of concentrations, and conventional patch clamp rather than screening data for hERG, prolongation of ≥5 ms was predicted with up to 79% sensitivity and 100% specificity. This study provides a proof-of-principle for the prediction of human TQT study results using data available early in drug development. We highlight a number of areas that need refinement to improve the method's predictive power, but the results suggest that such approaches will provide a useful tool in cardiac safety
Directory of Open Access Journals (Sweden)
Latimer Nicholas
2011-01-01
Full Text Available Abstract Background We investigate methods used to analyse the results of clinical trials with survival outcomes in which some patients switch from their allocated treatment to another trial treatment. These include simple methods which are commonly used in the medical literature and may be subject to selection bias if patients who switch are not typical of the population as a whole. Methods which attempt to adjust the estimated treatment effect, either through adjustment to the hazard ratio or via accelerated failure time models, were also considered. A simulation study was conducted to assess the performance of each method in a number of different scenarios. Results 16 different scenarios were identified which differed by the proportion of patients switching, the underlying prognosis of switchers, and the size of the true treatment effect. 1000 datasets were simulated for each of these and all methods applied. Selection bias was observed in the simple methods when the difference in survival between switchers and non-switchers was large. A number of methods, particularly the AFT method of Branson and Whitehead, were found to give less biased estimates of the true treatment effect in these situations. Conclusions Simple methods are often not appropriate to deal with treatment switching. Alternative approaches, such as the Branson & Whitehead method to adjust for switching, should be considered.
Simulation of Jetting in Injection Molding Using a Finite Volume Method
Directory of Open Access Journals (Sweden)
Shaozhen Hua
2016-05-01
Full Text Available In order to predict jetting and the subsequent buckling flow more accurately, a three-dimensional melt flow model was established for a viscous, incompressible, non-isothermal fluid, and a control-volume-based finite volume method was employed to discretize the governing equations. A two-fold iterative method was proposed to decouple the dependence among pressure, velocity, and temperature so as to reduce the computation and improve the numerical stability. Based on the proposed theoretical model and numerical method, a program code was developed to simulate melt front progress and flow fields. Numerical simulations for different injection speeds, melt temperatures, and gate locations were carried out to explore the jetting mechanism. The results indicate that the filling pattern depends on the competition between inertial and viscous forces. When the inertial force exceeds the viscous force, jetting occurs; the jet then changes to a buckling flow as the viscous force prevails over the inertial force. Once the melt contacts the mold wall, filling switches to the conventional sequential filling mode. Numerical results also indicate that the jetting length increases with injection speed but changes little with melt temperature. The reasonable agreement between simulated and experimental jetting lengths and buckling frequencies implies that the proposed method is valid for jetting simulation.
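The inertial-versus-viscous competition above is conventionally summarized by a Reynolds number at the gate. The sketch below is a hedged illustration of that criterion only: the threshold `re_jet` is a free parameter of the example, not a value from the paper (polymer melts jet at Reynolds numbers far below 1 because of their very high viscosity).

```python
def gate_reynolds(density, velocity, gate_diameter, viscosity):
    """Reynolds number at the gate: ratio of inertial to viscous forces.

    density in kg/m^3, velocity in m/s, gate_diameter in m, viscosity in Pa*s.
    """
    return density * velocity * gate_diameter / viscosity

def filling_mode(re, re_jet):
    """Classify the filling pattern by comparing Re to a critical value.

    re_jet is an assumed, material- and geometry-dependent threshold;
    it must be calibrated, e.g. against simulations like those above.
    """
    return "jetting" if re > re_jet else "conventional filling"
```

Raising the injection speed raises `gate_reynolds` linearly, which is consistent with the paper's finding that jetting length grows with injection speed.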
New method of processing heat treatment experiments with numerical simulation support
Kik, T.; Moravec, J.; Novakova, I.
2017-08-01
This work describes the benefits of combining modern software for numerical simulation of welding processes with laboratory research. A new method of processing heat-treatment experiments is proposed, which yields relevant input data for numerical simulations of the heat treatment of large parts. Using experiments on small test samples, it is now possible to simulate cooling conditions comparable with the cooling of larger parts. Results from this method of testing make the boundary conditions of the real cooling process more accurate, and can also be used to improve software databases and to optimize computational models. The point is to make the computation of temperature fields for large hardened parts more precise, based on a new method for determining the temperature dependence of the heat transfer coefficient into the hardening medium for the particular material, a defined maximal thickness of the processed part, and the cooling conditions. The paper also presents a comparison of the standard and the modified (according to the newly suggested methodology) heat transfer coefficient data and their influence on the simulation results. It shows how even small changes influence mainly the distributions of temperature, metallurgical phases, hardness, and stresses. This experiment also provides not only input data and data enabling optimization of the computational model, but at the same time verification data. The greatest advantage of the described method is its independence of the type of cooling medium used.
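The inverse task at the heart of such testing, recovering a heat transfer coefficient from a measured cooling curve, can be illustrated with a lumped-capacitance sketch. This assumes a constant h and a small Biot number, which is a deliberate simplification of the temperature-dependent coefficient discussed above; all symbols here are generic, not the paper's notation.

```python
import math

def htc_from_cooling(t1, T1, t2, T2, T_medium, T0, mass, c_p, area):
    """Estimate a constant heat transfer coefficient h from two points of a
    cooling curve, assuming lumped-capacitance (Biot << 1) cooling:

        T(t) = T_medium + (T0 - T_medium) * exp(-h*A / (m*c_p) * t)

    The log of the dimensionless excess temperature is linear in t with
    slope -h*A/(m*c_p), so h follows from two measured points.
    """
    theta1 = (T1 - T_medium) / (T0 - T_medium)
    theta2 = (T2 - T_medium) / (T0 - T_medium)
    slope = (math.log(theta2) - math.log(theta1)) / (t2 - t1)
    return -slope * mass * c_p / area
```

In practice h varies strongly with surface temperature (film boiling, nucleate boiling, convection regimes), which is why the paper determines a temperature-dependent coefficient rather than the single constant estimated here.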
Numerical simulation of pseudoelastic shape memory alloys using the large time increment method
Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad
2017-04-01
The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process presents little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must simultaneously satisfy the conditions of static and kinematic admissibility and consistency after several iterations. A 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation with the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted with those obtained using step-by-step incremental integration.
A Modified SPH Method for Dynamic Failure Simulation of Heterogeneous Material
Directory of Open Access Journals (Sweden)
G. W. Ma
2014-01-01
Full Text Available A modified smoothed particle hydrodynamics (SPH) method is applied to simulate the failure process of heterogeneous materials. An elastoplastic damage model based on an extended form of the unified twin shear strength (UTSS) criterion is adopted. Polycrystalline modeling is introduced to generate the artificial microstructure of the specimen for dynamic simulation of the Brazilian splitting test and the uniaxial compression test. The strain rate effect on the predicted dynamic tensile and compressive strength is discussed. The final failure patterns and the dynamic strength increments demonstrate good agreement with experimental results. It is illustrated that the polycrystalline modeling approach combined with the SPH method is promising for simulating more complex failure processes of heterogeneous materials.
Methodics of computing the results of monitoring the exploratory gallery
Directory of Open Access Journals (Sweden)
Krúpa Víťazoslav
2000-09-01
Full Text Available At the building site of the Višňové-Dubná skala motorway tunnel, priority is given to driving an exploratory gallery that provides detailed geological, engineering-geological, hydrogeological, and geotechnical research. This research is based on gathering information for the planned use of a full-profile driving machine to drive the motorway tunnel. In the part of the exploratory gallery driven by the TBM method, detailed information about the parameters of the driving process is gathered by a computer monitoring system mounted on the driving machine. This monitoring system, based on the industrial computer PC 104, records four basic values of the driving process: the electromotor performance of the driving machine Voest-Alpine ATB 35HA, the speed of the driving advance, the rotation speed of the disintegrating head of the TBM, and the total head pressure. The pressure force is evaluated from the pressure in the hydraulic cylinders of the machine. From these values, the strength of the rock mass, the angle of internal friction, etc. are mathematically calculated; these values characterize the rock mass properties and their changes. To define the effectiveness of the driving process, the specific energy and the working ability of the driving head are used. The article defines the methodology, prepared at the Institute of Geotechnics SAS for the driving machine Voest-Alpine ATB 35H, for processing the gathered monitoring information. It describes the input forms (protocols) of the developed method, created in an EXCEL program, and shows selected samples of the graphical evaluation of the first monitoring results obtained from the exploratory gallery driving process in the Višňové-Dubná skala motorway tunnel.
A Comparative Study on the Refueling Simulation Method for a CANDU Reactor
Energy Technology Data Exchange (ETDEWEB)
Do, Quang Binh; Choi, Hang Bok; Roh, Gyu Hong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
2006-07-01
The Canada deuterium uranium (CANDU) reactor calculation is typically performed by the RFSP code to obtain the power distribution upon refueling. In order to assess the equilibrium behavior of the CANDU reactor, a few methods have been suggested for the selection of the refueling channel. For example, an automatic refueling channel selection method (AUTOREFUEL) and a deterministic method (GENOVA) were developed, based on reactor operation experience and on the generalized perturbation theory, respectively. Both programs were designed to keep the zone controller unit (ZCU) water level within a reasonable range during a continuous refueling simulation. However, a global optimization of the refueling simulation, including constraints on the discharge burn-up, maximum channel power (MCP), maximum bundle power (MBP), channel power peaking factor (CPPF), and the ZCU water level, was not achieved. In this study, an evolutionary algorithm, a hybrid method based on the genetic algorithm, an elitism strategy, and heuristic rules, has been developed for the multi-cycle, multi-objective optimization of the refueling simulation of the CANDU reactor. This paper presents the optimization model of the genetic algorithm and compares the results with those obtained by other simulation methods.
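The genetic-algorithm-with-elitism machinery described above can be sketched on a toy problem. This is a generic GA skeleton (tournament selection, one-point crossover, bit-flip mutation, elite survivors), not the RFSP/GENOVA refueling objective; the fitness function, population size, and rates are all placeholder assumptions.

```python
import random

def evolve(fitness, n_genes, pop_size=30, n_elite=2, p_mut=0.05,
           generations=60, rng=None):
    """Minimal genetic algorithm with elitism.

    The best n_elite candidates survive each generation unchanged; the rest
    of the next population comes from tournament selection, one-point
    crossover, and bit-flip mutation. A toy stand-in for the multi-objective
    refueling optimizer described above.
    """
    rng = rng or random.Random()
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = [ind[:] for ind in pop[:n_elite]]             # elitism
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)       # tournament
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_genes)
            child = p1[:cut] + p2[cut:]                     # one-point crossover
            child = [g ^ (rng.random() < p_mut) for g in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

In a refueling application the bitstring would encode channel choices and the fitness would penalize violations of the MCP, MBP, CPPF, and ZCU-level constraints; here it is left abstract.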
International Nuclear Information System (INIS)
Damilakis, John; Tzedakis, Antonis; Perisinakis, Kostas; Papadakis, Antonios E.
2010-01-01
Purpose: Current methods for the estimation of conceptus dose from multidetector CT (MDCT) examinations performed on the mother provide dose data for typical protocols with a fixed scan length. However, modified low-dose imaging protocols are frequently used during pregnancy. The purpose of the current study was to develop a method for the estimation of conceptus dose from any MDCT examination of the trunk performed during all stages of gestation. Methods: The Monte Carlo N-Particle (MCNP) radiation transport code was employed in this study to model the Siemens Sensation 16 and Sensation 64 MDCT scanners. Four mathematical phantoms were used, simulating women at 0, 3, 6, and 9 months of gestation. The contribution to the conceptus dose from single simulated scans was obtained at various positions across the phantoms. To investigate the effect of maternal body size and conceptus depth on conceptus dose, phantoms of different sizes were produced by adding layers of adipose tissue around the trunk of the mathematical phantoms. To verify MCNP results, conceptus dose measurements were carried out by means of three physical anthropomorphic phantoms, simulating pregnancy at 0, 3, and 6 months of gestation and thermoluminescence dosimetry (TLD) crystals. Results: The results consist of Monte Carlo-generated normalized conceptus dose coefficients for single scans across the four mathematical phantoms. These coefficients were defined as the conceptus dose contribution from a single scan divided by the CTDI free-in-air measured with identical scanning parameters. Data have been produced to take into account the effect of maternal body size and conceptus position variations on conceptus dose. Conceptus doses measured with TLD crystals showed a difference of up to 19% compared to those estimated by mathematical simulations. Conclusions: Estimation of conceptus doses from MDCT examinations of the trunk performed on pregnant patients during all stages of gestation can be made
A method for modeling laterally asymmetric proton beamlets resulting from collimation
Energy Technology Data Exchange (ETDEWEB)
Gelover, Edgar; Wang, Dongxu; Flynn, Ryan T.; Hyer, Daniel E. [Department of Radiation Oncology, University of Iowa, 200 Hawkins Drive, Iowa City, Iowa 52242 (United States); Hill, Patrick M. [Department of Human Oncology, University of Wisconsin, 600 Highland Avenue, Madison, Wisconsin 53792 (United States); Gao, Mingcheng; Laub, Steve; Pankuch, Mark [Division of Medical Physics, CDH Proton Center, 4455 Weaver Parkway, Warrenville, Illinois 60555 (United States)
2015-03-15
Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σx1, σx2, σy1, σy2) together with the spatial location of the maximum dose (μx, μy). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets.
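The asymmetric Gaussian fluence described above can be written down directly: a different sigma applies on each side of the dose maximum along each BEV axis. The sketch below shows only that piecewise form (normalized to 1 at the peak); the depth-dependent corrections and divergence terms of the full model are omitted, and all parameter values in the test are illustrative.

```python
import math

def asymmetric_gaussian_fluence(x, y, mu_x, mu_y, sx1, sx2, sy1, sy2):
    """Asymmetric Gaussian fluence in the beam's eye view.

    sx1 applies for x < mu_x and sx2 for x >= mu_x, and likewise sy1/sy2
    in y, so a trimmer that sharpens one side simply shrinks that side's
    sigma. Returns the fluence relative to the peak value at (mu_x, mu_y).
    """
    sx = sx1 if x < mu_x else sx2
    sy = sy1 if y < mu_y else sy2
    return math.exp(-0.5 * ((x - mu_x) / sx) ** 2) * \
           math.exp(-0.5 * ((y - mu_y) / sy) ** 2)
```

With equal sigmas on both sides the function reduces to an ordinary 2D Gaussian, which is a quick sanity check on the piecewise logic.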
A method for modeling laterally asymmetric proton beamlets resulting from collimation
International Nuclear Information System (INIS)
Gelover, Edgar; Wang, Dongxu; Flynn, Ryan T.; Hyer, Daniel E.; Hill, Patrick M.; Gao, Mingcheng; Laub, Steve; Pankuch, Mark
2015-01-01
Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σx1, σx2, σy1, σy2) together with the spatial location of the maximum dose (μx, μy). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets.
A method for modeling laterally asymmetric proton beamlets resulting from collimation
Gelover, Edgar; Wang, Dongxu; Hill, Patrick M.; Flynn, Ryan T.; Gao, Mingcheng; Laub, Steve; Pankuch, Mark; Hyer, Daniel E.
2015-01-01
Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σx1,σx2,σy1,σy2) together with the spatial location of the maximum dose (μx,μy). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets. PMID:25735287
Simulation Loop between CAD systems, Geant4 and GeoModel: Implementation and Results
Sharmazanashvili, Alexander; The ATLAS collaboration
2015-01-01
The discrepancy between data and Monte Carlo is one of the most important fields of investigation for ATLAS simulation studies. There are several reasons for these discrepancies, but the primary interest falls on geometry studies: investigating how adequately the geometry descriptions of the detector in simulation represent the "as-built" descriptions. Shape consistency and level of detail are not the main concern, while the adequacy of the volumes and weights of the detector components is essential for tracking. There are two main sources of faults in the geometry descriptions used in simulation: (1) inconsistency with the "as-built" geometry descriptions; (2) internal inaccuracies introduced by the simulation packages themselves. The Georgian engineering team developed a hub based on the CATIA platform and several tools enabling different descriptions used by simulation packages to be read into CATIA, such as XML/Persint->CATIA, IV/VP1->CATIA, GeoModel->CATIA, and Geant4->CATIA. As a result it becomes possible to compare the different descriptions with each other...
A general method for closed-loop inverse simulation of helicopter maneuver flight
Directory of Open Access Journals (Sweden)
Wei WU
2017-12-01
Full Text Available Maneuverability is a key factor in determining whether a helicopter can complete certain flight missions successfully. Inverse simulation is commonly used to calculate the pilot controls of a helicopter needed to complete a certain kind of maneuver flight and to assess its maneuverability. A general method for inverse simulation of maneuver flight for helicopters, with the flight control system in the loop, is developed in this paper. A general mathematical describing function is established to provide mathematical descriptions of different kinds of maneuvers. A comprehensive control solver based on optimal linear quadratic regulator theory is developed to calculate the pilot controls for different maneuvers. The coupling between pilot controls and flight control system outputs is resolved by incorporating the flight control system model into the control solver. Inverse simulation of three kinds of maneuvers with different agility requirements, as defined in ADS-33E-PRF, is implemented with the developed method for a UH-60 helicopter. The results show that the method developed in this paper can solve the closed-loop inverse simulation problem of helicopter maneuver flight with high reliability as well as efficiency. Keywords: Closed-loop, Flying quality, Helicopters, Inverse simulation, Maneuver flight
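The optimal linear quadratic regulator at the core of such a control solver can be illustrated in one dimension. The sketch below is a generic discrete-time scalar LQR obtained by iterating the Riccati recursion to a fixed point; it is not the paper's helicopter model, and the system and weight values in the test are invented.

```python
def dlqr_scalar(a, b, q, r, tol=1e-12, max_iter=10000):
    """Infinite-horizon discrete-time LQR for the scalar system
    x[k+1] = a*x[k] + b*u[k] with cost sum(q*x^2 + r*u^2).

    Iterates the Riccati recursion
        P <- q + a*P*a - (a*P*b)^2 / (r + b*P*b)
    to its fixed point, then forms the feedback gain k for u = -k*x.
    Returns (k, P).
    """
    p = q
    for _ in range(max_iter):
        p_next = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)
        if abs(p_next - p) < tol:
            break
        p = p_next
    k = (b * p * a) / (r + b * p * b)
    return k, p
```

For a = b = q = r = 1 the fixed point is the golden ratio, P = (1 + √5)/2, giving k = 1/P ≈ 0.618 and a stable closed loop a - b*k ≈ 0.382; a matrix version of the same recursion is what a multi-state control solver would iterate.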
Simulation Research on Vehicle Active Suspension Controller Based on G1 Method
Li, Gen; Li, Hang; Zhang, Shuaiyang; Luo, Qiuhui
2017-09-01
Based on the order relation analysis method (G1 method), an optimal linear controller for a vehicle active suspension is designed. The active and passive suspension system of a single-wheel vehicle is modeled and the system input signal model is determined. Secondly, the system's state-space equation of motion is established from kinetic principles and the optimal linear controller design is completed with optimal control theory. The weighting coefficients of the suspension performance indices are determined by the order relation analysis method. Finally, the model is simulated in Simulink. The simulation results show that, with the optimal weights determined by the order relation analysis method under the given road conditions, the vehicle body acceleration, suspension stroke, and tire displacement are optimized, improving the comprehensive performance of the vehicle, while the active control is kept within the requirements.
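The G1 weight determination itself is a small computation: the criteria are ranked by importance and an expert supplies ratios r_k = w_(k-1)/w_k >= 1 between consecutive weights; the least important weight then follows from normalization and the others by back-substitution. The sketch below implements that standard scheme; the ratio values in the test are illustrative, not the paper's.

```python
def g1_weights(ratios):
    """Weights from the order relation analysis (G1) method.

    ratios = [r_2, ..., r_m] with r_k = w_{k-1} / w_k for criteria ranked
    from most (index 1) to least (index m) important. Then
        w_m = 1 / (1 + sum over k of prod_{i=k..m} r_i),
        w_{k-1} = r_k * w_k,
    and the weights sum to 1.
    """
    prods, p = [], 1.0
    for r in reversed(ratios):          # running products r_k * ... * r_m
        p *= r
        prods.append(p)
    w_m = 1.0 / (1.0 + sum(prods))
    w = [w_m]
    for r in reversed(ratios):          # back-substitution w_{k-1} = r_k * w_k
        w.append(w[-1] * r)
    return list(reversed(w))
```

In the suspension problem the resulting weights would multiply the body-acceleration, suspension-stroke, and tire-displacement terms of the LQR performance index.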
TMCC: a transient three-dimensional neutron transport code by the direct simulation method - 222
International Nuclear Information System (INIS)
Shen, H.; Li, Z.; Wang, K.; Yu, G.
2010-01-01
A direct simulation method (DSM) is applied to solve transient three-dimensional neutron transport problems. DSM is based on the Monte Carlo method, and can be considered an application of the Monte Carlo method to this specific type of problem. In this work, the transient neutronics problem is solved by simulating the dynamic behavior of neutrons and of the precursors of delayed neutrons during the transient process. DSM avoids the various approximations that other methods require, so it is precise and flexible with respect to geometric configuration, material composition, and energy spectrum. In this paper, the theory of DSM is introduced first, and the numerical results obtained with the new transient analysis code, named TMCC (Transient Monte Carlo Code), are presented. (authors)
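The direct-simulation idea, sampling each neutron's free flight from the exponential path-length distribution and tallying outcomes, can be shown on the simplest possible case: monoenergetic neutrons entering a purely absorbing slab along its normal, where the transmitted fraction has the analytic answer exp(-Σt·t). This toy is only an illustration of the tracking principle, not the TMCC algorithm (no scattering, no time dependence, no delayed-neutron precursors).

```python
import math
import random

def slab_transmission(sigma_t, thickness, n_neutrons, rng):
    """Direct Monte Carlo estimate of transmission through an absorbing slab.

    Each neutron's flight length is sampled from the exponential free-path
    distribution with macroscopic cross section sigma_t (1/cm); a neutron is
    transmitted if its sampled path exceeds the slab thickness (cm).
    """
    transmitted = 0
    for _ in range(n_neutrons):
        path = -math.log(1.0 - rng.random()) / sigma_t  # sampled free path
        if path > thickness:
            transmitted += 1
    return transmitted / n_neutrons
```

The estimate converges to exp(-sigma_t * thickness) as n_neutrons grows, which is the standard verification for this kind of tracking kernel.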
Simulation of regimes of convection and plume dynamics by the thermal Lattice Boltzmann Method
Mora, Peter; Yuen, David A.
2018-02-01
We present 2D simulations using the Lattice Boltzmann Method (LBM) of a fluid in a rectangular box being heated from below and cooled from above. We observe plumes, hot narrow upwellings from the base, and down-going cold chutes from the top. We have varied both the Rayleigh numbers and the Prandtl numbers respectively from Ra = 1000 to Ra = 10^10, and Pr = 1 through Pr = 5 × 10^4, leading to Rayleigh-Bénard convection cells at low Rayleigh numbers through to vigorous convection and unstable plumes with pronounced vortices and eddies at high Rayleigh numbers. We conduct simulations with high Prandtl numbers up to Pr = 50,000 to simulate in the inertial regime. We find for cases when Pr ⩾ 100 that we obtain a series of narrow plumes of upwelling fluid with mushroom heads and chutes of downwelling fluid. We also present simulations at a Prandtl number of 0.7 for Rayleigh numbers varying from Ra = 10^4 through Ra = 10^7.5. We demonstrate that the Nusselt number follows power-law scaling of the form Nu ∼ Ra^γ where γ = 0.279 ± 0.002, which is consistent with the published result of γ = 0.281 in the literature. These results show that the LBM is capable of reproducing results obtained with classical macroscopic methods such as spectral methods, and demonstrate the great potential of the LBM for studying thermal convection and plume dynamics relevant to geodynamics.
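The reported scaling Nu ∼ Ra^γ is typically extracted by a least-squares fit in log-log space. A minimal sketch, with synthetic noise-free data standing in for simulation output (the prefactor 0.17 is invented):

```python
import math

def fit_power_law(ra, nu):
    """Least-squares fit of Nu = C * Ra**gamma in log-log coordinates.
    Returns (C, gamma)."""
    xs = [math.log(r) for r in ra]
    ys = [math.log(v) for v in nu]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    gamma = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    c = math.exp(ybar - gamma * xbar)
    return c, gamma

# synthetic data following the scaling exponent reported in the paper
ra_values = [10 ** e for e in range(4, 8)]
nu_values = [0.17 * r ** 0.279 for r in ra_values]
c, gamma = fit_power_law(ra_values, nu_values)
# gamma recovers 0.279 exactly on noise-free data
```

On real simulation output each (Ra, Nu) pair carries statistical error, so the fitted γ acquires the quoted uncertainty (±0.002 here).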
Miao, Linling; Young, Charles D.; Sing, Charles E.
2017-07-01
Brownian Dynamics (BD) simulations are a standard tool for understanding the dynamics of polymers in and out of equilibrium. Quantitative comparison can be made to rheological measurements of dilute polymer solutions, as well as direct visual observations of fluorescently labeled DNA. The primary computational challenge with BD is the expensive calculation of hydrodynamic interactions (HI), which are necessary to capture physically realistic dynamics. The full HI calculation, performed via a Cholesky decomposition every time step, scales with the length of the polymer as O(N^3). This limits the calculation to a few hundred simulated particles. A number of approximations in the literature can lower this scaling to O(N^2)-O(N^2.25), and explicit solvent methods scale as O(N); however, both incur a significant constant per-time-step computational cost. Despite this progress, there remains a need for new or alternative methods of calculating hydrodynamic interactions; large polymer chains or semidilute polymer solutions remain computationally expensive. In this paper, we introduce an alternative method for calculating approximate hydrodynamic interactions. Our method relies on an iterative scheme to establish self-consistency between a hydrodynamic matrix that is averaged over simulation and the hydrodynamic matrix used to run the simulation. Comparison to standard BD simulation and polymer theory results demonstrates that this method quantitatively captures both equilibrium and steady-state dynamics after only a few iterations. The use of an averaged hydrodynamic matrix allows the computationally expensive Brownian noise calculation to be performed infrequently, so that it is no longer the bottleneck of the simulation calculations. We also investigate limitations of this conformational averaging approach in ring polymers.
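The O(N^3) bottleneck mentioned above comes from factorizing the diffusion (mobility) matrix so that the Brownian noise has the correct covariance. A toy illustration of that correlation step, using a hand-rolled Cholesky factorization on an invented 2×2 matrix:

```python
import math
import random

def cholesky(a):
    """Lower-triangular L with L @ L.T == a, for a small SPD matrix."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

def correlated_noise(L, rng):
    """Gaussian displacements xi with covariance <xi xi^T> = L L^T."""
    n = len(L)
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]

# a tiny "diffusion" matrix with off-diagonal (hydrodynamic-like) coupling
D = [[1.0, 0.3], [0.3, 1.0]]
L = cholesky(D)
rng = random.Random(0)
samples = [correlated_noise(L, rng) for _ in range(20000)]
cov01 = sum(s[0] * s[1] for s in samples) / len(samples)
# cov01 approaches the target off-diagonal element 0.3
```

In a real BD code D is the 3N×3N hydrodynamic tensor rebuilt every step, which is exactly why the paper's averaged-matrix trick (factorizing infrequently) pays off.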
Liu, J. X.; Deng, S. C.; Liang, N. G.
2008-02-01
Concrete is heterogeneous and usually described as a three-phase material, in which matrix, aggregate and interface are distinguished. To take this heterogeneity into consideration, the Generalized Beam (GB) lattice model is adopted; the GB lattice model is much more computationally efficient than the beam lattice model. Numerical procedures for both the quasi-static method and the dynamic method are developed to simulate fracture processes in uniaxial tensile tests conducted on a concrete panel. Cases with different loading rates are compared with the quasi-static case. It is found that the inertia effect due to the increasing load becomes less important and can be ignored as the loading rate decreases, but the inertia effect due to unstable crack propagation remains considerable no matter how low the loading rate is. Therefore, unrealistic results will be obtained if a fracture process involving unstable cracking is simulated by the quasi-static procedure.
Upscaled Lattice Boltzmann Method for Simulations of Flows in Heterogeneous Porous Media
Directory of Open Access Journals (Sweden)
Jun Li
2017-01-01
An upscaled Lattice Boltzmann Method (LBM) for flow simulations in heterogeneous porous media at the Darcy scale is proposed in this paper. In the Darcy-scale simulations, the Shan-Chen force model is used to simplify the algorithm. The proposed upscaled LBM uses coarser grids to represent the average effects of the fine-grid simulations. In the upscaled LBM, each coarse grid represents a subdomain of the fine-grid discretization and the effective permeability with the reduced-order models is proposed as we coarsen the grid. The effective permeability is computed using solutions of local problems (e.g., by performing local LBM simulations on the fine grids using the original permeability distribution) and used on the coarse grids in the upscaled simulations. The upscaled LBM that can reduce the computational cost of existing LBM and transfer the information between different scales is implemented. The results of coarse-grid, reduced-order, simulations agree very well with averaged results obtained using a fine grid.
Upscaled Lattice Boltzmann Method for Simulations of Flows in Heterogeneous Porous Media
Li, Jun
2017-02-16
An upscaled Lattice Boltzmann Method (LBM) for flow simulations in heterogeneous porous media at the Darcy scale is proposed in this paper. In the Darcy-scale simulations, the Shan-Chen force model is used to simplify the algorithm. The proposed upscaled LBM uses coarser grids to represent the average effects of the fine-grid simulations. In the upscaled LBM, each coarse grid represents a subdomain of the fine-grid discretization and the effective permeability with the reduced-order models is proposed as we coarsen the grid. The effective permeability is computed using solutions of local problems (e.g., by performing local LBM simulations on the fine grids using the original permeability distribution) and used on the coarse grids in the upscaled simulations. The upscaled LBM that can reduce the computational cost of existing LBM and transfer the information between different scales is implemented. The results of coarse-grid, reduced-order, simulations agree very well with averaged results obtained using a fine grid.
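The local-problem idea can be illustrated in one dimension, where the effective permeability of a coarse block reduces to closed-form averages of the fine-cell values. This is a deliberate simplification of the local LBM solves described above; the layered geometry is assumed purely for illustration.

```python
def effective_permeability_series(perms, widths):
    """Effective permeability of fine cells traversed in series by the
    flow: solving the local 1D Darcy problem reduces to a
    width-weighted harmonic mean."""
    total = sum(widths)
    return total / sum(w / k for k, w in zip(perms, widths))

def effective_permeability_parallel(perms, widths):
    """Layers parallel to the flow: width-weighted arithmetic mean."""
    total = sum(widths)
    return sum(k * w for k, w in zip(perms, widths)) / total

k_series = effective_permeability_series([1.0, 4.0], [1.0, 1.0])
k_parallel = effective_permeability_parallel([1.0, 4.0], [1.0, 1.0])
```

The series (harmonic) value is always the smaller of the two, which is why low-permeability inclusions dominate flow across layers; in 2D/3D the local problem must be solved numerically, as in the paper.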
Non-Destructive Evaluation Method Based On Dynamic Invariant Stress Resultants
Directory of Open Access Journals (Sweden)
Zhang Junchi
2015-01-01
Most vibration-based damage detection methods are based on changes in frequencies, mode shapes, mode shape curvature, and flexibilities. These methods are limited and typically can only detect the presence and location of damage; current methods can seldom identify the exact severity of damage to structures. This paper presents research on the development of a new non-destructive evaluation method to identify the existence, location, and severity of damage in structural systems. The method utilizes the concept of invariant stress resultants (ISR). The basic concept of ISR is that, at any given cross section, the resultant internal force distribution in a structural member is not affected by the inflicted damage. The method utilizes dynamic analysis of the structure to simulate direct measurements of acceleration, velocity and displacement simultaneously. The proposed dynamic ISR method is developed and utilized to detect damage through the corresponding changes in mass, damping and stiffness. The objectives of this research are to develop the basic theory of the dynamic ISR method, apply it to specific types of structures, and verify the accuracy of the developed theory. Numerical results that demonstrate the application of the method reflect its advanced sensitivity and accuracy in characterizing multiple damage locations.
Directory of Open Access Journals (Sweden)
Sean Zeiger
2017-06-01
Accurate mean areal precipitation (MAP) estimates are essential input forcings for hydrologic models. However, the selection of the most accurate method to estimate MAP can be daunting because there are numerous methods to choose from (e.g., proximate gauge, direct weighted average, surface-fitting, and remotely sensed methods). Multiple methods (n = 19) were used to estimate MAP with precipitation data from 11 distributed monitoring sites and 4 remotely sensed data sets. Each method was validated against the hydrologic-model-simulated stream flow using the Soil and Water Assessment Tool (SWAT). SWAT was validated using a split-site method and the observed stream flow data from five nested-scale gauging sites in a mixed-land-use watershed of the central USA. Cross-validation results showed the error associated with surface-fitting and remotely sensed methods ranging from −4.5 to −5.1% and −9.8 to −14.7%, respectively. Split-site validation results showed percent bias (PBIAS) values that ranged from −4.5 to −160%. Second-order polynomial functions especially overestimated precipitation and subsequent stream flow simulations (PBIAS = −160%) in the headwaters. The results indicated that using an inverse-distance-weighted, linear polynomial interpolation or multiquadric function method to estimate MAP may improve SWAT model simulations. Collectively, the results highlight the importance of spatially distributed observed hydroclimate data for precipitation and subsequent stream flow estimations. The MAP methods demonstrated in the current work can be used to reduce hydrologic model uncertainty caused by watershed physiographic differences.
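Of the methods compared above, inverse-distance weighting is the simplest to sketch. The gauge coordinates and values below are hypothetical:

```python
def idw_precipitation(gauges, cell, power=2.0):
    """Inverse-distance-weighted precipitation estimate at `cell`.
    `gauges` is a list of ((x, y), value) pairs."""
    num = den = 0.0
    for (gx, gy), value in gauges:
        d2 = (gx - cell[0]) ** 2 + (gy - cell[1]) ** 2
        if d2 == 0.0:
            return value  # the estimate at a gauge is the gauge value itself
        w = d2 ** (-power / 2.0)  # weight = 1 / distance**power
        num += w * value
        den += w
    return num / den

gauges = [((0.0, 0.0), 10.0), ((4.0, 0.0), 20.0)]
estimate = idw_precipitation(gauges, (2.0, 0.0))
# equidistant from both gauges, so the estimate is their simple average
```

A MAP value would then be the (area-weighted) mean of such estimates over all cells of the watershed; surface-fitting and multiquadric methods replace the weighting kernel with a fitted surface.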
Radioimmunological determination of plasma progesterone. Methods - Results - Indications
International Nuclear Information System (INIS)
Gonon-Estrangin, Chantal.
1978-10-01
The aim of this work is to describe the radioimmunological determination of plasma progesterone carried out at the Hormonology Laboratory of the Grenoble University Hospital Centre (Professor E. Chambaz), to compare our results with those of the literature, and to present the main clinical indications of this analysis. The measurement method has proved reproducible, specific (the steroid purification stage is unnecessary) and sensitive (detection limit: 10 picograms of progesterone per tube). In seven normally menstruating women our results agree with published values (in nanograms per millilitre, ng/ml): 0.07 ng/ml to 0.9 ng/ml in the follicular phase, from the start of menstruation until ovulation; then a rapid increase at ovulation with a maximum in the middle of the luteal phase (our values for this maximum range from 7.9 ng/ml to 21.7 ng/ml); and a gradual drop in progesterone secretion until the next menstrual period. In gynecology the radioimmunoassay of plasma progesterone is valuable for diagnostic and therapeutic purposes: to diagnose the absence of corpus luteum, and to judge the effectiveness of an ovulation induction treatment.
Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.
de Nijs, Robin
2015-07-21
In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated; the redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and confirmed the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was unaffected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or less) images from full-count images: it correctly simulates the statistical properties, also in the case of rounding off of the images.
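The distinction the comment draws can be reproduced in a few lines: binomial thinning of the recorded counts (the resampling approach) yields half-count data that remain exactly Poisson-distributed. A sketch under invented parameters, not the authors' Matlab code:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's multiplication method for a Poisson draw (fine for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def thin_count(n, fraction, rng):
    """Binomial thinning: keep each of the n recorded events independently
    with the given probability. A thinned Poisson variable is again Poisson."""
    return sum(1 for _ in range(n) if rng.random() < fraction)

rng = random.Random(42)
full = [poisson(20.0, rng) for _ in range(5000)]   # full-count pixels
half = [thin_count(n, 0.5, rng) for n in full]     # resampled half-count pixels
mean_ratio = sum(half) / sum(full)
m = sum(half) / len(half)
fano = sum((x - m) ** 2 for x in half) / (len(half) - 1) / m
# mean_ratio is close to 0.5 and the Fano factor stays close to 1 (Poisson)
```

Redrawing each pixel independently from Poisson(n/2) or a Gaussian instead would break the pixel-wise correlation with the original data, which is the behavior the comment examines.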
A hierarchy of models for simulating experimental results from a 3D heterogeneous porous medium
Vogler, Daniel; Ostvar, Sassan; Paustian, Rebecca; Wood, Brian D.
2018-04-01
In this work we examine the dispersion of conservative tracers (bromide and fluorescein) in an experimentally-constructed three-dimensional dual-porosity porous medium. The medium is highly heterogeneous (σ_Y^2 = 5.7), and consists of spherical, low-hydraulic-conductivity inclusions embedded in a high-hydraulic-conductivity matrix. The bimodal medium was saturated with tracers, and then flushed with tracer-free fluid while the effluent breakthrough curves were measured. The focus of this work is to examine a hierarchy of four models (in the absence of adjustable parameters) with decreasing complexity to assess their ability to accurately represent the measured breakthrough curves. The most information-rich model was (1) a direct numerical simulation of the system in which the geometry, boundary and initial conditions, and medium properties were fully independently characterized experimentally with high fidelity. The reduced-information models included: (2) a simplified numerical model identical to the fully-resolved direct numerical simulation (DNS) model, but using a domain that was one-tenth the size; (3) an upscaled mobile-immobile model that allowed for a time-dependent mass-transfer coefficient; and (4) an upscaled mobile-immobile model that assumed a space-time constant mass-transfer coefficient. The results illustrated that all four models provided accurate representations of the experimental breakthrough curves as measured by global RMS error. The primary component of error induced in the upscaled models appeared to arise from the neglect of convection within the inclusions. We discuss the necessity to assign value (via a utility function or other similar method) to outcomes if one is to further select from among model options. Interestingly, these results suggested that the conventional convection-dispersion equation, when applied in a way that resolves the heterogeneities, yields models with high fidelity without requiring the imposition of a more
Directory of Open Access Journals (Sweden)
Xiaoming Zha
2016-11-01
Power hardware-in-the-loop (PHIL) systems are advanced, real-time platforms for combined software and hardware testing. Two paramount issues in PHIL simulations are the closed-loop stability and simulation accuracy. This paper presents a virtual impedance (VI) method for PHIL simulations that improves the simulation’s stability and accuracy. Through the establishment of an impedance model for a PHIL simulation circuit, which is composed of a voltage-source converter and a simple network, the stability and accuracy of the PHIL system are analyzed. Then, the proposed VI method is implemented in a digital real-time simulator and used to correct the combined impedance in the impedance model, achieving higher stability and accuracy of the results. The validity of the VI method is verified through the PHIL simulation of two typical PHIL examples.
FDTD method using for electrodynamic simulation of resonator accelerating structures
International Nuclear Information System (INIS)
Vorogushin, M.F.; Svistunov, Yu.A.; Chetverikov, I.O.; Malyshev, V.N.; Malyukhov, M.V.
2000-01-01
The finite-difference time-domain (FDTD) method makes it possible to model both stationary and non-stationary processes originating from the interaction of the beam and the field. The capabilities of the method are demonstrated by modeling the fields in resonant accelerating structures. Besides solving the problem of determining the frequencies and spatial distributions of the resonators' eigenmodes, the ability to treat transition processes is important. The program presented makes it possible to obtain practical results for modeling accelerating structures on personal computers.
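A minimal one-dimensional FDTD update illustrates the leapfrog structure of the method (free space, normalized units, hard Gaussian source). This is the generic textbook scheme, not the authors' program:

```python
import math

def fdtd_1d(steps=250, n=400, src=50):
    """Minimal 1D FDTD leapfrog update in free space, normalized units,
    with Courant factor 0.5 and a hard Gaussian source at cell `src`."""
    ez = [0.0] * n
    hy = [0.0] * n
    for t in range(steps):
        for k in range(1, n):
            ez[k] += 0.5 * (hy[k - 1] - hy[k])   # update E from curl of H
        ez[src] = math.exp(-0.5 * ((t - 30) / 8.0) ** 2)  # hard source
        for k in range(n - 1):
            hy[k] += 0.5 * (ez[k] - ez[k + 1])   # update H from curl of E
    return ez

field = fdtd_1d()
peak = max(abs(v) for v in field)
# the pulse propagates away from the source and the scheme stays stable
```

The same staggered E/H update, extended to 3D with material coefficients and cavity walls, is what yields resonator eigenfrequencies (via a Fourier transform of the time signal) and transient behavior in one framework.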
Lesion insertion in the projection domain: Methods and initial results
International Nuclear Information System (INIS)
Chen, Baiyu; Leng, Shuai; Yu, Lifeng; Yu, Zhicong; Ma, Chi; McCollough, Cynthia
2015-01-01
Purpose: To perform task-based image quality assessment in CT, it is desirable to have a large number of realistic patient images with known diagnostic truth. One effective way of achieving this objective is to create hybrid images that combine patient images with inserted lesions. Because conventional hybrid images generated in the image domain fail to reflect the impact of scan and reconstruction parameters on lesion appearance, this study explored a projection-domain approach. Methods: Lesions were segmented from patient images and forward projected to acquire lesion projections. The forward-projection geometry was designed according to a commercial CT scanner and accommodated both axial and helical modes with various focal spot movement patterns. The energy employed by the commercial CT scanner for beam hardening correction was measured and used for the forward projection. The lesion projections were inserted into patient projections decoded from commercial CT projection data. The combined projections were formatted to match those of commercial CT raw data, loaded onto a commercial CT scanner, and reconstructed to create the hybrid images. Two validations were performed. First, to validate the accuracy of the forward-projection geometry, images were reconstructed from the forward projections of a virtual ACR phantom and compared to physically acquired ACR phantom images in terms of CT number accuracy and high-contrast resolution. Second, to validate the realism of the lesion in hybrid images, liver lesions were segmented from patient images and inserted back into the same patients, each at a new location specified by a radiologist. The inserted lesions were compared to the original lesions and visually assessed for realism by two experienced radiologists in a blinded fashion. Results: For the validation of the forward-projection geometry, the images reconstructed from the forward projections of the virtual ACR phantom were consistent with the images physically
Lesion insertion in the projection domain: Methods and initial results
Energy Technology Data Exchange (ETDEWEB)
Chen, Baiyu; Leng, Shuai; Yu, Lifeng; Yu, Zhicong; Ma, Chi; McCollough, Cynthia, E-mail: mccollough.cynthia@mayo.edu [Department of Radiology, Mayo Clinic, Rochester, Minnesota 55905 (United States)
2015-12-15
Purpose: To perform task-based image quality assessment in CT, it is desirable to have a large number of realistic patient images with known diagnostic truth. One effective way of achieving this objective is to create hybrid images that combine patient images with inserted lesions. Because conventional hybrid images generated in the image domain fail to reflect the impact of scan and reconstruction parameters on lesion appearance, this study explored a projection-domain approach. Methods: Lesions were segmented from patient images and forward projected to acquire lesion projections. The forward-projection geometry was designed according to a commercial CT scanner and accommodated both axial and helical modes with various focal spot movement patterns. The energy employed by the commercial CT scanner for beam hardening correction was measured and used for the forward projection. The lesion projections were inserted into patient projections decoded from commercial CT projection data. The combined projections were formatted to match those of commercial CT raw data, loaded onto a commercial CT scanner, and reconstructed to create the hybrid images. Two validations were performed. First, to validate the accuracy of the forward-projection geometry, images were reconstructed from the forward projections of a virtual ACR phantom and compared to physically acquired ACR phantom images in terms of CT number accuracy and high-contrast resolution. Second, to validate the realism of the lesion in hybrid images, liver lesions were segmented from patient images and inserted back into the same patients, each at a new location specified by a radiologist. The inserted lesions were compared to the original lesions and visually assessed for realism by two experienced radiologists in a blinded fashion. Results: For the validation of the forward-projection geometry, the images reconstructed from the forward projections of the virtual ACR phantom were consistent with the images physically
Some GCM simulation results on present and possible future climate in northern Europe
Energy Technology Data Exchange (ETDEWEB)
Raeisaenen, J [Helsinki Univ. (Finland). Dept. of Meteorology
1996-12-31
The Intergovernmental Panel on Climate Change initiated in 1993 a project entitled 'Evaluation of Regional Climate Simulations'. The two basic aims of this project were to assess the skill of current general circulation models (GCMs) in simulating present climate at a regional level and to intercompare the regional response of various GCMs to increased greenhouse gas concentrations. The public database established for the comparison included simulation results from several modelling centres, but most of the data were available in the form of time-averaged seasonal means only, and important quantities like precipitation were totally lacking in many cases. This presentation summarizes the intercomparison results for surface air temperature and sea level pressure in northern Europe. The quality of the control simulations and the response of the models to increased CO₂ are addressed in both winter (December-February) and summer (June-August)
Some GCM simulation results on present and possible future climate in northern Europe
Energy Technology Data Exchange (ETDEWEB)
Raeisaenen, J. [Helsinki Univ. (Finland). Dept. of Meteorology
1995-12-31
The Intergovernmental Panel on Climate Change initiated in 1993 a project entitled 'Evaluation of Regional Climate Simulations'. The two basic aims of this project were to assess the skill of current general circulation models (GCMs) in simulating present climate at a regional level and to intercompare the regional response of various GCMs to increased greenhouse gas concentrations. The public database established for the comparison included simulation results from several modelling centres, but most of the data were available in the form of time-averaged seasonal means only, and important quantities like precipitation were totally lacking in many cases. This presentation summarizes the intercomparison results for surface air temperature and sea level pressure in northern Europe. The quality of the control simulations and the response of the models to increased CO₂ are addressed in both winter (December-February) and summer (June-August)
Application of direct simulation Monte Carlo method for analysis of AVLIS evaporation process
International Nuclear Information System (INIS)
Nishimura, Akihiko
1995-01-01
A computer code implementing the direct simulation Monte Carlo (DSMC) method was developed in order to analyze atomic vapor evaporation in atomic vapor laser isotope separation (AVLIS). The atomic excitation temperatures of gadolinium atoms were calculated for a model with five low-lying states. Calculated results were compared with experiments performed by laser absorption spectroscopy. Two types of DSMC simulations, differing in the inelastic collision procedure, were carried out. It was concluded that energy transfer is forbidden unless the total energy of the colliding atoms exceeds a threshold value. (author)
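The threshold rule stated in the conclusion can be expressed as a simple collision-acceptance step; the uniform energy-redistribution rule below is invented for illustration and is not the authors' procedure:

```python
import random

def try_inelastic_collision(e_trans, e_int, threshold, rng):
    """Hypothetical DSMC-style collision step implementing a threshold
    rule: internal energy transfer occurs only when the total energy of
    the colliding pair exceeds the threshold; otherwise the collision
    stays elastic. The uniform redistribution is illustrative only."""
    total = e_trans + e_int
    if total <= threshold:
        return e_trans, e_int  # below threshold: no energy transfer
    share = rng.random()
    return total * share, total * (1.0 - share)

rng = random.Random(3)
new_t, new_i = try_inelastic_collision(2.0, 0.5, threshold=1.0, rng=rng)
elastic = try_inelastic_collision(0.3, 0.2, threshold=1.0, rng=rng)
# the redistribution conserves total energy; the sub-threshold pair is unchanged
```

In a full DSMC code this acceptance step would sit inside the collision loop, with the pair energies sampled from the simulated velocity distribution and level populations of the five-state model.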
A new simulation method for turbines in wake - Applied to extreme response during operation
DEFF Research Database (Denmark)
Thomsen, K.; Aagaard Madsen, H.
2005-01-01
The work focuses on prediction of load response for wind turbines operating in wind farms using a newly developed aeroelastic simulation method. The traditionally used concept is to adjust the free-flow turbulence intensity to account for increased loads in wind farms - a methodology that might..., the resulting extremes might be erroneous. For blade loads the traditionally used simplified approach works better than for integrated rotor loads - where the instantaneous load gradient across the rotor disc is causing the extreme loads. In the article the new wake simulation approach is illustrated...
Cho, G. S.
2017-09-01
For performance optimization of Refrigerated Warehouses, design parameters are selected based on physical parameters such as the number of equipment units and aisles and the speeds of the forklifts, for ease of modification. This paper provides a comprehensive framework for the system design of Refrigerated Warehouses. We propose a modeling approach aimed at simulation optimization so as to meet required design specifications, using Design of Experiments (DOE), and analyze a simulation model using an integrated aspect-oriented modeling approach (i-AOMA). As a result, the suggested method can evaluate the performance of a variety of Refrigerated Warehouse operations.
A Multi-Stage Method for Connecting Participatory Sensing and Noise Simulations
Directory of Open Access Journals (Sweden)
Mingyuan Hu
2015-01-01
Most simulation-based noise maps are important for official noise assessment but lack local noise characteristics. The main reasons for this lack of information are that official noise simulations only provide information about expected noise levels, which is limited by the use of large-scale monitoring of noise sources, and are updated infrequently. With the emergence of smart cities and ubiquitous sensing, the possible improvements enabled by sensing technologies provide the possibility to resolve this problem. This study proposed an integrated methodology to propel participatory sensing from its current random and distributed sampling origins to professional noise simulation. The aims of this study were to effectively organize the participatory noise data, to dynamically refine the granularity of the noise features on road segments (e.g., different portions of a road segment), and then to provide a reasonable spatio-temporal data foundation to support noise simulations, which can be of help to researchers in understanding how participatory sensing can play a role in smart cities. This study first discusses the potential limitations of the current participatory sensing and simulation-based official noise maps. Next, we explain how participatory noise data can contribute to a simulation-based noise map by providing (1) spatial matching of the participatory noise data to the virtual partitions at a more microscopic level of road networks; (2) multi-temporal scale noise estimations at the spatial level of virtual partitions; and (3) dynamic aggregation of virtual partitions by comparing the noise values at the relevant temporal scale to form a dynamic segmentation of each road segment to support multiple spatio-temporal noise simulations. In this case study, we demonstrate how this method could play a significant role in a simulation-based noise map. Together, these results demonstrate the potential benefits of participatory noise data as dynamic
Bayesian statistic methods and their application in probabilistic simulation models
Directory of Open Access Journals (Sweden)
Sergio Iannazzo
2007-03-01
Bayesian statistical methods are facing a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques more appealing for decision analysis. To this should be added the modern IT progress that has produced several flexible and powerful statistical software frameworks. Among them, probably one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs in the economic model.
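The kind of probabilistic Markov model discussed above can be sketched outside WinBUGS as well. In the toy below, a transition probability is sampled from a Beta distribution (standing in for a Bayesian posterior) and propagated through a two-state cohort model; the states, cycle count and cost are all invented:

```python
import random

def run_markov_cohort(p_progress, cycles=10, cost_sick=1000.0):
    """Two-state (well/sick) cohort Markov model returning the expected
    cost per patient. Illustrative parameters only."""
    well, sick = 1.0, 0.0
    cost = 0.0
    for _ in range(cycles):
        well, sick = well * (1.0 - p_progress), sick + well * p_progress
        cost += sick * cost_sick  # cost accrues for time spent sick
    return cost

def probabilistic_simulation(n_draws=2000, seed=7):
    """Probabilistic sensitivity analysis: sample the transition
    probability from a Beta distribution (a stand-in for a Bayesian
    posterior, e.g. one estimated in WinBUGS) and propagate each draw
    through the Markov model."""
    rng = random.Random(seed)
    draws = [run_markov_cohort(rng.betavariate(2, 18)) for _ in range(n_draws)]
    return sum(draws) / len(draws)

mean_cost = probabilistic_simulation()
```

The paper's point is precisely this pipeline: the inference step and the economic model live in one probabilistic program, so parameter uncertainty flows directly into the cost distribution rather than being bolted on afterwards.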
Hummels, Cameron
Computational hydrodynamical simulations are a very useful tool for understanding how galaxies form and evolve over cosmological timescales not easily revealed through observations. However, they are only useful if they reproduce the sorts of galaxies that we see in the real universe. One of the ways in which simulations of this sort tend to fail is in the prescription of stellar feedback, the process by which nascent stars return material and energy to their immediate environments. Careful treatment of this interaction in subgrid models, so-called because they operate on scales below the resolution of the simulation, is crucial for the development of realistic galaxy models. Equally important is developing effective methods for comparing simulation data against observations to ensure galaxy models which mimic reality and inform us about natural phenomena. This thesis examines the formation and evolution of galaxies and the observable characteristics of the resulting systems. We make extensive use of cosmological hydrodynamical simulations in order to simulate and interpret the evolution of massive spiral galaxies like our own Milky Way. First, we create a method for producing synthetic photometric images of grid-based hydrodynamical models for use in a direct comparison against observations in a variety of filter bands. We apply this method to a simulation of a cluster of galaxies to investigate the nature of the red-sequence/blue-cloud dichotomy in the galaxy color-magnitude diagram. Second, we implement several subgrid models governing the complex behavior of gas and stars on small scales in our galaxy models. Several numerical simulations are conducted with similar initial conditions, where we systematically vary the subgrid models, afterward assessing their efficacy through comparisons of their internal kinematics with observed systems. Third, we generate an additional method to compare observations with simulations, focusing on the tenuous circumgalactic
Monte Carlo simulation of a TRIGA source driven core configuration: Preliminary results
International Nuclear Information System (INIS)
Burgio, N.; Ciavola, C.; Santagata, A.
2002-01-01
The different core configurations with a k_eff ranging from 0.93 to 0.98, and their response when driven by a pulsed neutron source, were simulated with MCNP4C3 (Los Alamos Monte Carlo N-Particle code). The simulation results can be considered both as a preliminary check of nuclear data and as a conceptual design study for 'source jerk' experiments in the frame of the TRIGA Accelerator Driven Experiment (TRADE) at the reactor facility of the Casaccia research center. (author)
Design and CFD Simulation of the Drift Eliminators in Comparison with PIV Results
Directory of Open Access Journals (Sweden)
Stodůlka Jiří
2015-01-01
Drift eliminators are an essential part of all modern cooling towers, preventing significant losses of liquid water escaping to the environment. These eliminators need to be effective in terms of water capture while causing only minimal pressure loss. A new type of such eliminator was designed and numerically simulated using CFD tools. Results of the simulation are compared with PIV visualisation on the prototype model.
Application of Conjugate Gradient methods to tidal simulation
Barragy, E.; Carey, G.F.; Walters, R.A.
1993-01-01
A harmonic decomposition technique is applied to the shallow water equations to yield a complex, nonsymmetric, nonlinear, Helmholtz type problem for the sea surface and an accompanying complex, nonlinear diagonal problem for the velocities. The equation for the sea surface is linearized using successive approximation and then discretized with linear, triangular finite elements. The study focuses on applying iterative methods to solve the resulting complex linear systems. The comparative evaluation includes both standard iterative methods for the real subsystems and complex versions of the well known Bi-Conjugate Gradient and Bi-Conjugate Gradient Squared methods. Several Incomplete LU type preconditioners are discussed, and the effects of node ordering, rejection strategy, domain geometry and Coriolis parameter (affecting asymmetry) are investigated. Implementation details for the complex case are discussed. Performance studies are presented and comparisons made with a frontal solver. © 1993.
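The Bi-CG family of solvers used in the study is fairly involved; as a much simpler sketch of iterating on a complex-valued linear system A x = b, the following stationary (Jacobi) iteration conveys the flavor. The 2x2 system below is invented for illustration and is diagonally dominant, which guarantees convergence:

```python
# Illustrative sketch only: the paper uses Bi-CG/Bi-CG Squared; here a
# simple stationary Jacobi iteration solves a small complex linear system.

def jacobi_complex(A, b, iters=200):
    n = len(b)
    x = [0j] * n
    for _ in range(iters):
        # update each unknown from the current values of the others
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[4 + 1j, 1 + 0j],
     [1 + 0j, 3 - 1j]]  # diagonally dominant, nonsymmetric-friendly toy case
b = [1 + 0j, 2 + 0j]

x = jacobi_complex(A, b)
residual = max(abs(sum(A[i][j] * x[j] for j in range(2)) - b[i])
               for i in range(2))
print(residual)  # should be vanishingly small
```

Krylov methods such as Bi-CG typically converge far faster than stationary iterations on the large, nonsymmetric systems arising from the finite element discretization, which motivates the comparisons in the paper.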
Flow simulation of a Pelton bucket using finite volume particle method
International Nuclear Information System (INIS)
Vessaz, C; Jahanbakhsh, E; Avellan, F
2014-01-01
The objective of the present paper is to perform an accurate numerical simulation of the high-speed water jet impinging on a Pelton bucket. To reach this goal, the Finite Volume Particle Method (FVPM) is used to discretize the governing equations. FVPM is an arbitrary Lagrangian-Eulerian method, which combines attractive features of Smoothed Particle Hydrodynamics and the conventional mesh-based Finite Volume Method. This method is able to satisfy free surface and no-slip wall boundary conditions precisely. The fluid flow is assumed weakly compressible and the wall boundary is represented by one layer of particles located on the bucket surface. In the present study, simulations of the flow in a stationary bucket are investigated for three different impinging angles: 72°, 90° and 108°. The particle resolution is first validated by a convergence study. Then, the FVPM results are validated against available experimental data and conventional grid-based Volume Of Fluid simulations. It is shown that the wall pressure field is in good agreement with the experimental and numerical data. Finally, the torque evolution and water sheet location are presented for a simulation of five rotating Pelton buckets.
International Nuclear Information System (INIS)
Koyagoshi, Naoki; Sasaki, Kazuichi; Sawada, Makoto; Kawanishi, Tomotake; Yoshida, Kazuo
2003-09-01
The prototype fast breeder reactor Monju has been deliberately conducting operator training, which consists of the regulated training required by the government and self-training. Training on a full-scope simulator (MARS: Monju Advanced Reactor Simulator) plays an important role among these trainings and contributes greatly to preparing Monju operators for the restart of the plant. This report covers the Monju operator training activities in FY 2002, i.e., the training results and the remodeling work on MARS in progress since 1999. (1) Eight simulator training courses were carried out 46 times, with 180 trainees participating. In addition, the regulated training and self-training were held a total of 10 times, attended by 34 trainees. (2) These training figures are lower than the previous year's (69 times, 338 trainees) because the indispensable courses in Monju operator training were changed following a reorganization of the operator staff, and because training times decreased while the remodeling work on the simulator was conducted. (3) With the upgrade of MARS completed in FY 2002, its logic arithmetic became faster and its instructor functions were improved remarkably, making the simulator training more effective. A further remodeling is planned for the next year as the final stage: remodeling of the reactor core model to improve simulation accuracy and to address the sodium leakage countermeasures. Regarding the Monju training results and the simulator remodeling completed so far, please refer to JNC report JNC TN 4410 2002-001, 'Translation of Monju Simulator Training owing Monju Accident and Upgrade of MARS'. (author)
Numerical methods for the simulation of continuous sedimentation in ideal clarifier-thickener units
Energy Technology Data Exchange (ETDEWEB)
Buerger, R.; Karlsen, K.H.; Risebro, N.H.; Towers, J.D.
2001-10-01
We consider a model of continuous sedimentation. Under idealizing assumptions, the settling of the solid particles under the influence of gravity can be described by the initial value problem for a nonlinear hyperbolic partial differential equation with a flux function that depends discontinuously on height. The purpose of this contribution is to present and demonstrate two numerical methods for simulating continuous sedimentation: a front tracking method and a finite difference method. The basic building blocks in the front tracking method are the solutions of a finite number of certain Riemann problems and a procedure for tracking local collisions of shocks. The solutions of the Riemann problems are recalled herein and the front tracking algorithm is described. As an alternative to the front tracking method, a simple scalar finite difference algorithm is proposed. This method is based on discretizing the spatially varying flux parameters on a mesh that is staggered with respect to that of the conserved variable, resulting in a straightforward generalization of the well-known Engquist-Osher upwind finite difference method. The result is an easily implemented upwind shock capturing method. Numerical examples demonstrate that the front tracking and finite difference methods can be used as efficient and accurate simulation tools for continuous sedimentation. The numerical results for the finite difference method indicate that discontinuities in the local solids concentration are resolved sharply and agree with those produced by the front tracking method. The latter is free of numerical dissipation, which leads to sharply resolved concentration discontinuities, but is more complicated to implement than the former. Available mathematical results for the proposed numerical methods are also briefly reviewed. (author)
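A minimal sketch of the Engquist-Osher upwind flux mentioned above, for a scalar conservation law u_t + f(u)_x = 0. The sedimentation model has a height-dependent flux; here the convex Burgers flux f(u) = u²/2 (minimum at u = 0) is used purely as a stand-in:

```python
# Sketch of the Engquist-Osher upwind scheme on a Riemann problem
# (shock between u=1 on the left and u=0 on the right).

def f(u):
    return 0.5 * u * u

def eo_flux(a, b):
    # f^+(a) + f^-(b): upwind splitting around the sonic point u = 0
    return f(max(a, 0.0)) + f(min(b, 0.0))

def step(u, dt, dx):
    n = len(u)
    flux = [eo_flux(u[i], u[i + 1]) for i in range(n - 1)]
    new = u[:]  # keep the two boundary cells fixed (simple inflow/outflow)
    for i in range(1, n - 1):
        new[i] = u[i] - dt / dx * (flux[i] - flux[i - 1])
    return new

u = [1.0] * 20 + [0.0] * 20        # Riemann data on a 40-cell grid
dx, dt = 1.0 / 40, 0.01            # CFL: dt * max|f'| <= dx holds
for _ in range(40):
    u = step(u, dt, dx)
# the shock (speed 1/2 by Rankine-Hugoniot) travels right, staying sharp
```

Because the scheme is monotone under the CFL condition, the discrete solution stays within the interval [0, 1] spanned by the initial data, which is exactly the shock-capturing robustness the abstract describes.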
Eivazy, Hesameddin; Esmaieli, Kamran; Jean, Raynald
2017-12-01
An accurate characterization and modelling of rock mass geomechanical heterogeneity can lead to more efficient mine planning and design. Using deterministic approaches and random field methods for modelling rock mass heterogeneity is known to be limited in simulating the spatial variation and spatial pattern of the geomechanical properties. Although the applications of geostatistical techniques have demonstrated improvements in modelling the heterogeneity of geomechanical properties, geostatistical estimation methods such as Kriging result in estimates of geomechanical variables that are not fully representative of field observations. This paper reports on the development of 3D models for the spatial variability of rock mass geomechanical properties using a geostatistical conditional simulation method based on sequential Gaussian simulation. A methodology to simulate the heterogeneity of rock mass quality based on the rock mass rating is proposed and applied to a large open-pit mine in Canada. Using geomechanical core logging data collected from the mine site, a direct and an indirect approach were used to model the spatial variability of rock mass quality. The results of the two modelling approaches were validated against collected field data. The study aims to quantify the risks of pit slope failure and provides a measure of the uncertainties in the spatial variability of rock mass properties in different areas of the pit.
Peach Bottom Turbine Trip Simulations with RETRAN Using INER/TPC BWR Transient Analysis Method
International Nuclear Information System (INIS)
Kao Lainsu; Chiang, Show-Chyuan
2005-01-01
The work described in this paper is a set of benchmark calculations of the pressurization-transient turbine trip tests performed at the Peach Bottom boiling water reactor (BWR). It is part of an overall effort to provide a qualification basis for the INER/TPC BWR transient analysis method developed for the Kuosheng and Chinshan plants. The method primarily utilizes an advanced system thermal hydraulics code, RETRAN02/MOD5, for transient safety analyses. Since pressurization transients result in a strong coupling between core neutronics and system thermal hydraulics responses, the INER/TPC method employs the one-dimensional kinetics model in RETRAN with a cross-section data library generated by the Studsvik-CMS code package for the transient calculations. The Peach Bottom Turbine Trip (PBTT) tests, including TT1, TT2, and TT3, were successfully performed in the plant and have served for years as common standards for licensing-method qualification. It is an essential requirement for licensing purposes to verify the integral capabilities and accuracy of the codes and models of the INER/TPC method in simulating such pressurization transients. Specific Peach Bottom plant models, including both neutronics and thermal hydraulics, are developed using modeling approaches and experience generally adopted in the INER/TPC method. Important model assumptions in RETRAN for the PBTT test simulations are described in this paper. Simulation calculations are performed with best-estimate initial and boundary conditions obtained from plant test measurements. The calculation results presented in this paper demonstrate that the INER/TPC method is capable of accurately calculating the core and system transient behaviors of the tests. Excellent agreement, in both trends and magnitudes, between the RETRAN calculation results and the PBTT measurements demonstrates reliable qualification of the codes/users/models involved in the method. The RETRAN calculated peak neutron fluxes of the PBTT
Daru, R.; Venemans, P.
1998-01-01
Visualisation, simulation and communication have always been intimately interconnected. Visualisations and simulations represent existing or virtual realities. Without those tools it is arduous to communicate mental depictions of virtual objects and events. A communication model is presented to
Face-based smoothed finite element method for real-time simulation of soft tissue
Mendizabal, Andrea; Bessard Duparc, Rémi; Bui, Huu Phuoc; Paulus, Christoph J.; Peterlik, Igor; Cotin, Stéphane
2017-03-01
In soft tissue surgery, a tumor and other anatomical structures are usually located using preoperative CT or MR images. However, due to the deformation of the concerned tissues, this information suffers from inaccuracy when employed directly during the surgery. In order to account for these deformations in the planning process, the use of a bio-mechanical model of the tissues is needed. Such models are often designed using the finite element method (FEM), which is, however, computationally expensive, in particular when a high accuracy of the simulation is required. In our work, we propose to use a smoothed finite element method (S-FEM) in the context of modeling the soft tissue deformation. This numerical technique has been introduced recently to overcome the overly stiff behavior of the standard FEM and to improve the solution accuracy and the convergence rate in solid mechanics problems. In this paper, a face-based smoothed finite element method (FS-FEM) using 4-node tetrahedral elements is presented. We show that in some cases, the method allows for reducing the number of degrees of freedom, while preserving the accuracy of the discretization. The method is evaluated on a simulation of a cantilever beam loaded at the free end and on a simulation of a 3D cube under traction and compression forces. Further, it is applied to the simulation of the brain shift and of the kidney's deformation. The results demonstrate that the method outperforms the standard FEM in a bending scenario and that it has accuracy similar to the standard FEM in the simulations of the brain shift and of the kidney's deformation.
Directory of Open Access Journals (Sweden)
Mondry Adrian
2004-08-01
Full Text Available Abstract Background Many arrhythmias are triggered by abnormal electrical activity at the ionic channel and cell level, and then evolve spatio-temporally within the heart. To understand arrhythmias better and to diagnose them more precisely by their ECG waveforms, a whole-heart model is required to explore the association between the massively parallel activities at the channel/cell level and the integrative electrophysiological phenomena at organ level. Methods We have developed a method to build large-scale electrophysiological models by using extended cellular automata, and to run such models on a cluster of shared memory machines. We describe here the method, including the extension of a language-based cellular automaton to implement quantitative computing, the building of a whole-heart model with Visible Human Project data, the parallelization of the model on a cluster of shared memory computers with OpenMP and MPI hybrid programming, and a simulation algorithm that links cellular activity with the ECG. Results We demonstrate that electrical activities at channel, cell, and organ levels can be traced and captured conveniently in our extended cellular automaton system. Examples of some ECG waveforms simulated with a 2-D slice are given to support the ECG simulation algorithm. A performance evaluation of the 3-D model on a four-node cluster is also given. Conclusions Quantitative multicellular modeling with extended cellular automata is a highly efficient and widely applicable method to weave experimental data at different levels into computational models. This process can be used to investigate complex and collective biological activities that can be described neither by their governing differential equations nor by discrete parallel computation. Transparent cluster computing is a convenient and effective method to make time-consuming simulation feasible. Arrhythmias, as a typical case, can be effectively simulated with the methods
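As a minimal, invented sketch of the cellular-automaton idea (not the quantitative whole-heart model of the paper, which couples cell states to an ECG computation), a Greenberg-Hastings style excitable-media automaton shows how a local excitation propagates as a wave:

```python
# Toy excitable-media cellular automaton: each cell is resting (0),
# excited (1) or refractory (2); excitation spreads to resting 4-neighbors.

def step(grid):
    n, m = len(grid), len(grid[0])
    new = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = grid[i][j]
            if s == 1:
                new[i][j] = 2          # excited -> refractory
            elif s == 2:
                new[i][j] = 0          # refractory -> resting
            else:                      # resting -> excited if a neighbor fires
                nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
                if any(0 <= a < n and 0 <= b < m and grid[a][b] == 1
                       for a, b in nbrs):
                    new[i][j] = 1
    return new

grid = [[0] * 9 for _ in range(9)]
grid[4][4] = 1                          # a single ectopic excitation
for _ in range(3):
    grid = step(grid)
# the excited front is now the ring of cells at Manhattan distance 3,
# trailed by a refractory ring that prevents re-excitation
```

The refractory state behind the wavefront is what gives cardiac tissue its one-way wave propagation; re-entrant waves in such models are a classic caricature of arrhythmia.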
Directory of Open Access Journals (Sweden)
Martin Hofmann
2017-09-01
Full Text Available We analyze the output of various state-of-the-art irradiance models for photovoltaic systems. The models include two sun position algorithms, three types of input data time series, nine diffuse fraction models and five transposition models (for tilted surfaces), resulting in 270 different model chains for the photovoltaic (PV) system simulation. These model chains are applied to 30 locations worldwide and three different module tracking types, totaling 24,300 simulations. We show that the simulated PV yearly energy output varies between −5% and +8% for fixed mounted PV modules and between −26% and +14% for modules with two-axis tracking. Model quality varies strongly between locations; sun position algorithms have negligible influence on the simulation results; diffuse fraction models add a lot of variability; and transposition models feature the strongest influence on the simulation results. To highlight the importance of irradiance with high temporal resolution, we present an analysis of the influence of input temporal resolution and simulation models on the inverter clipping losses at varying PV system sizing factors for Lindenberg, Germany. Irradiance in one-minute resolution is essential for accurately calculating inverter clipping losses.
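The 270 model chains arise combinatorially: 2 sun-position algorithms × 3 input time-series types × 9 diffuse-fraction models × 5 transposition models. The names below are placeholders, not the actual models evaluated in the paper:

```python
# Enumerate the model-chain combinations described in the abstract.
from itertools import product

sun_position = ["SPA-1", "SPA-2"]                       # placeholder names
input_series = ["series-A", "series-B", "series-C"]
diffuse = [f"diffuse-{i}" for i in range(1, 10)]
transposition = [f"transposition-{i}" for i in range(1, 6)]

chains = list(product(sun_position, input_series, diffuse, transposition))
print(len(chains))            # 270 model chains
print(len(chains) * 30 * 3)   # x 30 locations x 3 tracking types = 24300 runs
```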
Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto
2017-10-01
This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes including two widely used methods, namely, a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior including fracture-type strength and stiffness degradations. In case study three, the implicit Newmark method with a fixed number of iterations is applied for hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.
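A sketch (with invented parameters) of an implicit Newmark scheme driven by a fixed number of iterations per step, in the spirit of the iterative method discussed above. A linear damped SDOF oscillator is integrated; with a linear restoring force the fixed-point iteration converges in very few sweeps:

```python
# Implicit Newmark (average acceleration: beta=1/4, gamma=1/2) for a
# damped linear SDOF oscillator, with a fixed number of iterations per
# step instead of a convergence check. All parameter values are invented.

def newmark_fixed_iter(m, c, k, u0, v0, dt, nsteps, niter=3,
                       beta=0.25, gamma=0.5):
    u, v = u0, v0
    a = (-c * v - k * u) / m            # initial acceleration, free vibration
    hist = [u]
    for _ in range(nsteps):
        a_new = a                        # predictor
        for _ in range(niter):           # fixed number of iterations
            u_new = u + dt * v + dt**2 * ((0.5 - beta) * a + beta * a_new)
            v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
            a_new = (-c * v_new - k * u_new) / m
        u = u + dt * v + dt**2 * ((0.5 - beta) * a + beta * a_new)
        v = v + dt * ((1 - gamma) * a + gamma * a_new)
        a = a_new
        hist.append(u)
    return hist

hist = newmark_fixed_iter(m=1.0, c=0.1, k=4.0, u0=1.0, v0=0.0,
                          dt=0.01, nsteps=2000)
# damped free vibration: the displacement envelope decays over time
```

With a nonlinear restoring force (as in the hybrid tests), truncating the iterations leaves an unbalanced force at the end of each step, which is exactly the error index the study tracks.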
A novel method for assessing elbow pain resulting from epicondylitis
Polkinghorn, Bradley S.
2002-01-01
Abstract Objective To describe a novel orthopedic test (Polk's test) which can assist the clinician in differentiating between medial and lateral epicondylitis, 2 of the most common causes of elbow pain. This test has not been previously described in the literature. Clinical Features The testing procedure described in this paper is easy to learn, simple to perform and may provide the clinician with a quick and effective method of differentiating between lateral and medial epicondylitis. The test also helps to elucidate normal activities of daily living that the patient may unknowingly be performing on a repetitive basis that are hindering recovery. The results of this simple test allow the clinician to make immediate lifestyle recommendations to the patient that should improve and hasten the response to subsequent treatment. It may be used in conjunction with other orthopedic testing procedures, as it correlates well with other clinical tests for assessing epicondylitis. Conclusion The use of Polk's Test may help the clinician to diagnostically differentiate between lateral and medial epicondylitis, as well as supply information relative to choosing proper instructions for the patient to follow as part of their treatment program. Further research, performed in an academic setting, should prove helpful in more thoroughly evaluating the merits of this test. In the meantime, clinical experience over the years suggests that the practicing physician should find a great deal of clinical utility in utilizing this simple, yet effective, diagnostic procedure. PMID:19674572
Hasegawa, Takanori; Nagasaki, Masao; Yamaguchi, Rui; Imoto, Seiya; Miyano, Satoru
2014-07-01
Recently, several biological simulation models of, e.g., gene regulatory networks and metabolic pathways, have been constructed based on existing knowledge of biomolecular reactions, e.g., DNA-protein and protein-protein interactions. However, since these do not always contain all necessary molecules and reactions, their simulation results can be inconsistent with observational data. Therefore, improvements in such simulation models are urgently required. A previously reported method created multiple candidate simulation models by partially modifying existing models. However, this approach was computationally costly and could not handle a large number of candidates that are required to find models whose simulation results are highly consistent with the data. In order to overcome the problem, we focused on the fact that the qualitative dynamics of simulation models are highly similar if they share a certain amount of regulatory structures. This indicates that better fitting candidates tend to share the basic regulatory structure of the best fitting candidate, which can best predict the data among candidates. Thus, instead of evaluating all candidates, we propose an efficient explorative method that can selectively and sequentially evaluate candidates based on the similarity of their regulatory structures. Furthermore, in estimating the parameter values of a candidate, e.g., synthesis and degradation rates of mRNA, for the data, those of the previously evaluated candidates can be utilized. The method is applied here to the pharmacogenomic pathways for corticosteroids in rats, using time-series microarray expression data. In the performance test, we succeeded in obtaining more than 80% of consistent solutions within 15% of the computational time as compared to the comprehensive evaluation. Then, we applied this approach to 142 literature-recorded simulation models of corticosteroid-induced genes, and consequently selected 134 newly constructed better models. The
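An invented toy version of the exploration idea described above: candidate models are evaluated in order of structural similarity (here, Jaccard similarity of regulatory edge sets) to the best candidate found so far, so that promising neighborhoods are scored first. The models, the similarity choice, and the toy scoring function are all illustrative assumptions, not the paper's algorithm:

```python
# Greedy similarity-guided exploration over candidate regulatory models,
# each represented as a set of (regulator, target) edges.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def explore(candidates, score, start):
    """Evaluate candidates most similar to the current best first."""
    best, best_score = start, score(start)
    remaining = [c for c in candidates if c != start]
    evaluated = 1
    while remaining:
        remaining.sort(key=lambda c: jaccard(c, best), reverse=True)
        cand = remaining.pop(0)
        evaluated += 1
        s = score(cand)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score, evaluated

models = [frozenset({("A", "B")}),
          frozenset({("A", "B"), ("A", "C")}),
          frozenset({("A", "B"), ("B", "C")}),
          frozenset({("A", "C")})]
score = lambda m: len(m & {("A", "C")})  # toy fit: rewards the key edge
best, s, n = explore(models, score, models[0])
```

In the paper's setting the expensive part is fitting each candidate's kinetic parameters to the time-series data; evaluating structurally similar candidates consecutively is what lets previously estimated parameters be reused.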
Overview of Computer Simulation Modeling Approaches and Methods
Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett
2005-01-01
The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...
Simulation as a Method of Teaching Communication for Multinational Corporations.
Stull, James B.; Baird, John W.
Interpersonal simulations may be used as a module in cultural awareness programs to provide realistic environments in which students, supervisors, and managers may practice communication skills that are effective in multicultural environments. To conduct and implement a cross-cultural simulation, facilitators should proceed through four stages:…
Comparison between the performance of some KEK-klystrons and simulation results
Energy Technology Data Exchange (ETDEWEB)
Fukuda, Shigeki [National Lab. for High Energy Physics, Tsukuba, Ibaraki (Japan)
1997-04-01
Recent developments in various klystron simulation codes have enabled realistic klystron design. This paper presents various simulation results using the FCI code and the performance of tubes manufactured based on this code. Upgrading a 30-MW S-band klystron and developing a 50-MW S-band klystron for the KEKB projects are successful examples based on FCI-code predictions. Mass-production of these tubes has already started. On the other hand, a discrepancy has been found between the FCI simulation results and the performance of real tubes. In some cases, the simulations predict high efficiency, while the manufactured tubes show the usual, or a lower, value of the efficiency. One possible cause may be a data mismatch between the electron-gun simulation and the input data set of the FCI code for the gun region. This kind of discrepancy has been observed in 30-MW S-band pulsed tubes, sub-booster pulsed tubes and L-band high-duty pulsed klystrons. Sometimes, JPNDSK (a one-dimensional disk-model code) gives similar results. Some examples using the FCI code are given in this article. An Arsenal-MSU code could be applied to the 50-MW klystron under collaboration with Moscow State University; good agreement has been found between the code's prediction and the tube's performance. (author)
Initial quality performance results using a phantom to simulate chest computed radiography
Directory of Open Access Journals (Sweden)
Muhogora Wilbroad
2011-01-01
Full Text Available The aim of this study was to develop a homemade phantom for quantitative quality control in chest computed radiography (CR). The phantom was constructed from copper, aluminium, and polymethylmethacrylate (PMMA) plates as well as Styrofoam materials. Depending on combinations, the literature suggests that these materials can simulate the attenuation and scattering characteristics of lung, heart, and mediastinum. The lung, heart, and mediastinum regions were simulated by 10 mm x 10 mm x 0.5 mm, 10 mm x 10 mm x 0.5 mm and 10 mm x 10 mm x 1 mm copper plates, respectively. A test object of 100 mm x 100 mm and 0.2 mm thick copper was positioned on each region for CNR measurements. The phantom was exposed to x-rays generated by different tube potentials that covered settings in clinical use: 110-120 kVp (HVL = 4.26-4.66 mm Al) at a source-image distance (SID) of 180 cm. An approach similar to the method recommended in digital mammography was applied to determine the CNR values of phantom images produced by a Kodak CR 850A system with post-processing turned off. Subjective contrast-detail studies were also carried out by using images of the Leeds TOR CDR test object acquired under similar exposure conditions as during the CNR measurements. For clinical kVp conditions relevant to chest radiography, the CNR was highest over the 90-100 kVp range. The CNR data correlated with the results of the contrast-detail observations. The values of clinical tube potentials at which the CNR is highest are regarded as optimal kVp settings. The simplicity of the phantom construction can allow easy implementation of a related quality control program.
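A hedged sketch of the contrast-to-noise ratio measurement behind the phantom study: CNR = (mean of the signal ROI − mean of the background ROI) / standard deviation of the background ROI, computed here on invented pixel values:

```python
# CNR from a test-object ROI and a background ROI (invented pixel values).
from statistics import mean, pstdev

def cnr(signal_roi, background_roi):
    return (mean(signal_roi) - mean(background_roi)) / pstdev(background_roi)

signal = [10, 10, 12, 12]        # pixels behind the 0.2 mm copper test object
background = [4, 6, 4, 6]        # pixels from the surrounding region
print(cnr(signal, background))   # -> 6.0
```

In practice the ROI statistics would be taken from linearized (post-processing off) pixel data, as the abstract specifies; plotting CNR against kVp then exposes the optimum range.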
GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method
International Nuclear Information System (INIS)
Gong Chunye; Liu Jie; Chi Lihua; Huang Haowei; Fang Jingyue; Gong Zhenghu
2011-01-01
Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides great capability for solving scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution to the Boltzmann equation involves the discrete ordinates (S_n) method and the procedure of source iteration. In this paper, we present a GPU accelerated simulation of one energy group time-independent deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported with simulations of the vacuum boundary condition. The relative advantages and disadvantages of the GPU implementation, simulation on multiple GPUs, the programming effort and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip for no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
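A minimal 1D analogue (with invented cross sections) of the discrete ordinates source-iteration procedure that Sweep3D parallelizes in 3D: an S2 quadrature, diamond-difference sweeps in each direction, and iteration on the isotropic scattering source until the scalar flux converges. This is a didactic sketch, not the Sweep3D kernel:

```python
# 1D slab, one group, isotropic scattering, vacuum boundaries.
# S2 Gauss quadrature: ordinates +/- mu with weight 1.0 each.
mu = 0.5773502691896258
sig_t, sig_s, Q = 1.0, 0.5, 1.0   # invented total/scattering XS and source
nx, dx = 50, 0.1                   # a 5 mean-free-path slab

phi = [0.0] * nx
for it in range(200):              # source iteration
    q = [(sig_s * phi[i] + Q) / 2.0 for i in range(nx)]   # per unit mu
    phi_new = [0.0] * nx
    psi = 0.0                      # sweep left -> right (mu > 0), vacuum inflow
    for i in range(nx):
        psi_out = (q[i] * dx + (mu - sig_t * dx / 2.0) * psi) \
                  / (mu + sig_t * dx / 2.0)                # diamond difference
        phi_new[i] += (psi + psi_out) / 2.0                # quadrature weight 1
        psi = psi_out
    psi = 0.0                      # sweep right -> left (mu < 0)
    for i in range(nx - 1, -1, -1):
        psi_out = (q[i] * dx + (mu - sig_t * dx / 2.0) * psi) \
                  / (mu + sig_t * dx / 2.0)
        phi_new[i] += (psi + psi_out) / 2.0
        psi = psi_out
    diff = max(abs(a - b) for a, b in zip(phi_new, phi))
    phi = phi_new
    if diff < 1e-8:
        break
```

The sweeps have a strict upwind data dependence, which is why 3D implementations like Sweep3D expose parallelism via wavefront (diagonal-plane) scheduling, the part that maps naturally onto GPUs.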
Chaturvedi, K.; Willenborg, B.; Sindram, M.; Kolbe, T. H.
2017-10-01
Semantic 3D city models play an important role in solving complex real-world problems and are being adopted by many cities around the world. A wide range of application and simulation scenarios directly benefit from the adoption of international standards such as CityGML. However, most simulations involve properties whose values vary with time, and the current generation of semantic 3D city models does not support time-dependent properties explicitly. In this paper, details are provided of solar potential simulations operating on the CityGML standard, assessing and estimating solar energy production for the roofs and facades of 3D building objects in different ways. Furthermore, the paper demonstrates how the time-dependent simulation results are better represented inline within 3D city models using the so-called Dynamizer concept. This concept not only allows the simulation results to be represented in standardized ways, but also delivers a method to enhance static city models with such dynamic property values, making the city models truly dynamic. The Dynamizer concept has been implemented as an Application Domain Extension of the CityGML standard within the OGC Future City Pilot Phase 1. The results are given in this paper.
Comparison of microstickies measurement methods. Part II, Results and discussion
Mahendra R. Doshi; Angeles Blanco; Carlos Negro; Concepcion Monte; Gilles M. Dorris; Carlos C. Castro; Axel Hamann; R. Daniel Haynes; Carl Houtman; Karen Scallon; Hans-Joachim Putz; Hans Johansson; R. A. Venditti; K. Copeland; H.-M. Chang
2003-01-01
In part I of the article we discussed the sample preparation procedure and described various methods used for the measurement of microstickies. Some of the important features of the different methods are highlighted in Table 1. Temperatures used in the measurement methods vary from room temperature in some cases to 45 °C - 65 °C in others. Sample size ranges from as low as...
Directory of Open Access Journals (Sweden)
Pilarski Michał
2015-09-01
Full Text Available The main source of information about future climate change is the results of numerical simulations performed in scientific institutions around the world. Present projections from global circulation models (GCMs) are too coarse and are useful only for analyses at the global, hemispheric or continental spatial scale. The low horizontal resolution of global models (100–200 km) does not allow climate changes to be assessed at regional or local scales. It is therefore necessary to study how to refine the information from the GCMs. The problem of transferring information from the GCMs to higher spatial scales is solved by dynamical and statistical downscaling. The dynamical downscaling method is based on "nesting" global information in regional models (RCMs), which solve the equations of motion and the laws of thermodynamics at a small spatial scale (10–50 km). Statistical downscaling models (SDMs), in contrast, identify the relationship between a large-scale variable (predictor) and a small-scale variable (predictand), e.g. by linear regression. The main goal of the study was to compare global model scenarios of thermal conditions in Poland in the 21st century with the more accurate outcomes of statistical and dynamical regional models. Overall, the study confirmed the usefulness of statistical downscaling for refining information from the GCMs. The basic results show that the regional models captured local aspects of thermal condition variability, especially in the coastal zone.
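A minimal sketch of the statistical downscaling idea described above: fit a linear regression between a large-scale predictor (e.g. a GCM grid-cell temperature) and a local predictand (e.g. a station temperature), then apply it to a future projection. The data values below are invented for illustration:

```python
# Ordinary least-squares fit of predictand ~ a + b * predictor.

def fit_linear(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

gcm_temp = [0.0, 1.0, 2.0, 3.0]      # large-scale predictor (invented)
station_temp = [2.0, 2.5, 3.0, 3.5]  # local predictand = 2 + 0.5 * predictor
a, b = fit_linear(gcm_temp, station_temp)
print(a, b)                          # -> 2.0 0.5

# downscale a future GCM value to the local scale
print(a + b * 4.0)                   # -> 4.0
```

Real SDMs calibrate such transfer functions on historical observations and then assume the statistical relationship holds under the future climate, which is the method's main caveat.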
The review and results of different methods for facial recognition
Le, Yifan
2017-09-01
In recent years, facial recognition has drawn much attention due to its wide range of potential applications. As a unique biometric identification technology, facial recognition represents a significant improvement because it can operate without the cooperation of the people under detection. Hence, facial recognition is being adopted in defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to advance facial recognition: (1) a novel two-stage facial landmark localization method with more accurate localization on specific databases; (2) a statistical face frontalization method that outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm that handles images with severe occlusion and with large head poses; (4) three methods for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on the concrete methods and their performance on various databases. In addition, some improvement measures and suggestions for potential applications are put forward.
Method for simulating dose reduction in digital mammography using the Anscombe transformation
International Nuclear Information System (INIS)
Borges, Lucas R.; Oliveira, Helder C. R. de; Nunes, Polyana F.; Vieira, Marcelo A. C.; Bakic, Predrag R.; Maidment, Andrew D. A.
2016-01-01
Purpose: This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. Methods: The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower-radiation dose. Results: The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image and with uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. Conclusions: A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe
Method for simulating dose reduction in digital mammography using the Anscombe transformation
Energy Technology Data Exchange (ETDEWEB)
Borges, Lucas R., E-mail: lucas.rodrigues.borges@usp.br; Oliveira, Helder C. R. de; Nunes, Polyana F.; Vieira, Marcelo A. C. [Department of Electrical and Computer Engineering, São Carlos School of Engineering, University of São Paulo, 400 Trabalhador São-Carlense Avenue, São Carlos 13566-590 (Brazil); Bakic, Predrag R.; Maidment, Andrew D. A. [Department of Radiology, Hospital of the University of Pennsylvania, University of Pennsylvania, 3400 Spruce Street, Philadelphia, Pennsylvania 19104 (United States)
2016-06-15
Purpose: This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. Methods: The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower-radiation dose. Results: The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image and with uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. Conclusions: A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe
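The core of such a dose-reduction simulation can be sketched in a few lines. This is a simplified stand-in, not the authors' calibrated pipeline: it assumes a linearized, offset-corrected image with Poisson-dominated quantum noise, ignores detector-specific effects (anisotropic noise, pixel-gain variation, DQE changes), and approximates the injected noise as Gaussian.

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: maps Poisson-distributed counts to data with
    approximately unit variance, which is what lets a single noise mask be
    matched to a signal-dependent image."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def simulate_dose_reduction(image, gamma, rng=None):
    """Crude sketch: scale the standard-dose image by the dose fraction
    `gamma` and inject the missing signal-dependent quantum noise.
    For Poisson counts, the scaled image has variance gamma^2 * lam while a
    real low-dose acquisition has gamma * lam, so the shortfall
    gamma * (1 - gamma) * lam is added as zero-mean noise (Gaussian
    approximation, reasonable at mammographic count levels)."""
    rng = np.random.default_rng() if rng is None else rng
    scaled = gamma * image
    extra_var = np.clip(gamma * (1.0 - gamma) * image, 0.0, None)
    return scaled + rng.normal(size=image.shape) * np.sqrt(extra_var)
```

By the law of total variance, the simulated pixels have mean `gamma * lam` and variance `gamma * lam`, matching a genuine low-dose Poisson acquisition to first order.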
Test results of the new NSSS thermal-hydraulics program of the KNPEC-2 simulator
International Nuclear Information System (INIS)
Jeong, J. Z.; Kim, K. D.; Lee, M. S.; Hong, J. H.; Lee, Y. K.; Seo, J. S.; Kweon, K. J.; Lee, S. W.
2001-01-01
As part of the KNPEC-2 Simulator Upgrade Project, KEPRI and KAERI have developed a new NSSS thermal-hydraulics program based on the best-estimate system code RETRAN. The RETRAN code was originally developed for realistic simulation of thermal-hydraulic transients in power plant systems. The capabilities of 'real-time simulation' and 'robustness' had to be developed first, before implementation in full-scope simulators. For this purpose, we modified the RETRAN code by (i) eliminating discontinuities in the correlations between flow regime maps, (ii) simplifying physical correlations, (iii) correcting errors in the original program, and (iv) other changes. This paper briefly presents the test results of the new NSSS thermal-hydraulics program
2D and 3D core-collapse supernovae simulation results obtained with the CHIMERA code
Energy Technology Data Exchange (ETDEWEB)
Bruenn, S W; Marronetti, P; Dirk, C J [Physics Department, Florida Atlantic University, 777 W. Glades Road, Boca Raton, FL 33431-0991 (United States); Mezzacappa, A; Hix, W R [Physics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831-6354 (United States); Blondin, J M [Department of Physics, North Carolina State University, Raleigh, NC 27695-8202 (United States); Messer, O E B [Center for Computational Sciences, Oak Ridge National Laboratory, Oak Ridge, TN 37831-6354 (United States); Yoshida, S, E-mail: bruenn@fau.ed [Max-Planck-Institut fur Gravitationsphysik, Albert Einstein Institut, Golm (Germany)
2009-07-01
Much progress in realistic modeling of core-collapse supernovae has occurred recently through the availability of multi-teraflop machines and the increasing sophistication of supernova codes. These improvements are enabling simulations with enough realism that the explosion mechanism, long a mystery, may soon be delineated. We briefly describe the CHIMERA code, a supernova code we have developed to simulate core-collapse supernovae in 1, 2, and 3 spatial dimensions. We then describe the results of an ongoing suite of 2D simulations initiated from 12, 15, 20, and 25 M{sub ⊙} progenitors. These have all exhibited explosions and are currently in the expanding phase, with the shock at between 5,000 and 20,000 km. We also briefly describe an ongoing simulation in 3 spatial dimensions initiated from the 15 M{sub ⊙} progenitor.
Study on driver model for hybrid truck based on driving simulator experimental results
Directory of Open Access Journals (Sweden)
Dam Hoang Phuc
2018-04-01
Full Text Available In this paper, a proposed car-following driver model that takes into account features of both the compensatory and the anticipatory models of human pedal operation has been verified by driving-simulator experiments with several real drivers. The comparison of computer simulations, performed with the identified model parameters, against the experimental results confirms the correctness of this mathematical driver model and of the identified parameters. The driver model is then coupled to a hybrid vehicle dynamics model, and moderate car-following maneuver simulations with various driver parameters are conducted to investigate the influence of the driver parameters on the vehicle dynamics response and fuel economy. Finally, the major driver parameters involved in the longitudinal control performed by drivers are clarified. Keywords: Driver model, Driver-vehicle closed-loop system, Car Following, Driving simulator/hybrid electric vehicle (B1
Plante, Ianik; Devroye, Luc
2017-10-01
Ionizing radiation interacts with the water molecules of the tissues mostly by ionizations and excitations, which result in the formation of the radiation track structure and the creation of radiolytic species such as H•, •OH, H2, H2O2, and e-aq. After their creation, these species diffuse and may react chemically with neighboring species and with the molecules of the medium. Radiation chemistry is therefore of great importance in radiation biology. As the chemical species are not distributed homogeneously, conventional models of homogeneous reactions cannot completely describe the reaction kinetics of the particles. In practice, many simulations of radiation chemistry are done using the Independent Reaction Time (IRT) method, a very fast technique for calculating radiochemical yields, which, however, does not calculate the positions of the radiolytic species as a function of time. Step-by-step (SBS) methods, which are able to provide such information, have been used only sparsely because they are computationally expensive. Recent improvements in computer performance now allow the regular use of the SBS method in radiation chemistry. The SBS and IRT methods are both based on the Green's functions of the diffusion equation (GFDE). In this paper, several sampling algorithms for the GFDE and for the IRT method are presented. We show that the IRT and SBS methods are exactly equivalent for 2-particle systems for diffusion and partially diffusion-controlled reactions between non-interacting particles. We also show that the results obtained with the SBS simulation method with periodic boundary conditions are in agreement with the predictions of classical reaction kinetics theory, which is an important step towards using this method for the modelling of biochemical networks and metabolic pathways involved in oxidative stress. Finally, the first simulation results obtained with the code RITRACKS (Relativistic Ion Tracks) are presented.
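For a single pair undergoing a fully diffusion-controlled reaction, the IRT draw can be written down directly from the Smoluchowski solution. The sketch below assumes that simplest model (reaction radius R, initial separation r0, mutual diffusion coefficient D, no interaction forces); partially diffusion-controlled kinetics requires a different kernel.

```python
import math
import random

def erfc_inv(y):
    """Numerical inverse of erfc on (0, 1] by bisection (erfc is decreasing).
    Avoids a SciPy dependency for this sketch."""
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if math.erfc(mid) > y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sample_reaction_time(r0, R, D, rng=random):
    """IRT draw for one fully diffusion-controlled pair: the ultimate reaction
    probability is W_inf = R / r0, and conditional on reacting, the cumulative
    probability W(t) = (R / r0) * erfc((r0 - R) / sqrt(4 D t)) is inverted
    for t. Returns math.inf when the pair escapes without reacting."""
    u = rng.random()
    w_inf = R / r0
    if u >= w_inf:
        return math.inf
    x = erfc_inv(u / w_inf)
    return (r0 - R) ** 2 / (4.0 * D * x * x)
```

Because each pair's reaction time is drawn independently, a full IRT simulation simply takes the minimum over all candidate pairs, which is what makes the method so much faster than tracking positions step by step.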
Statistical methods for elimination of guarantee-time bias in cohort studies: a simulation study
Directory of Open Access Journals (Sweden)
In Sung Cho
2017-08-01
Full Text Available Abstract Background Aspirin has been considered to be beneficial in preventing cardiovascular diseases and cancer. Several pharmaco-epidemiology cohort studies have shown protective effects of aspirin on diseases using various statistical methods, with the Cox regression model being the most commonly used approach. However, there are some inherent limitations to the conventional Cox regression approach, such as guarantee-time bias, which results in an overestimation of the drug effect. To overcome such limitations, alternative approaches such as the time-dependent Cox model and landmark methods have been proposed. This study aimed to compare the performance of three methods: Cox regression, the time-dependent Cox model and the landmark method with different landmark times, in order to address the problem of guarantee-time bias. Methods Through statistical modeling and simulation studies, the performance of the above three methods was assessed in terms of type I error, bias, power, and mean squared error (MSE). In addition, the three statistical approaches were applied to a real data example from the Korean National Health Insurance Database. The effect of cumulative rosiglitazone dose on the risk of hepatocellular carcinoma was used as an example for illustration. Results In the simulated data, time-dependent Cox regression outperformed the landmark method in terms of bias and mean squared error, but the type I error rates were similar. The results from the real-data example showed the same patterns as the simulation findings. Conclusions While both the time-dependent Cox regression model and landmark analysis are useful in resolving the problem of guarantee-time bias, time-dependent Cox regression is the most appropriate method for analyzing cumulative dose effects in pharmaco-epidemiological studies.
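The guarantee-time (immortal-time) mechanism is easy to reproduce in a toy simulation: under a null treatment effect, classifying subjects as "ever treated" forces the treated group to have survived until treatment began, so a naive comparison shows a spurious benefit, while a landmark classification does not. This sketch uses exponential survival with illustrative rates, not the paper's Cox machinery.

```python
import random

def immortal_time_demo(n=200_000, landmark=0.5, seed=7):
    """Toy demonstration of guarantee-time bias under a NULL treatment
    effect. Event times and treatment-start times are independent Exp(1)
    draws, so treatment truly has no effect on survival."""
    rng = random.Random(seed)
    event = [rng.expovariate(1.0) for _ in range(n)]
    start = [rng.expovariate(1.0) for _ in range(n)]

    # Naive: classify by whether treatment ever began before the event.
    treated = [e for e, s in zip(event, start) if s < e]
    untreated = [e for e, s in zip(event, start) if s >= e]
    naive_gap = sum(treated) / len(treated) - sum(untreated) / len(untreated)

    # Landmark: keep only subjects still at risk at the landmark, classify
    # by treatment status at that time, measure survival beyond it.
    lm_t = [e - landmark for e, s in zip(event, start)
            if e > landmark and s <= landmark]
    lm_u = [e - landmark for e, s in zip(event, start)
            if e > landmark and s > landmark]
    lm_gap = sum(lm_t) / len(lm_t) - sum(lm_u) / len(lm_u)
    return naive_gap, lm_gap
```

With these rates the naive mean-survival gap converges to 1.0 despite the null effect, while the landmark gap converges to 0.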
Application of accelerated simulation method on NPN bipolar transistors of different technology
International Nuclear Information System (INIS)
Fei Wuxiong; Zheng Yuzhan; Wang Yiyuan; Chen Rui; Li Maoshun; Lan Bo; Cui Jiangwei; Zhao Yun; Lu Wu; Ren Diyuan; Wang Zhikuan; Yang Yonghui
2010-01-01
Using different irradiation methods, the ionizing radiation response of NPN bipolar transistors from six different processes was investigated. The results show that enhanced low-dose-rate sensitivity clearly exists in NPN bipolar transistors from all six processes. According to the experiment, the damage from irradiation with step-wise decreasing temperature is clearly greater than that from irradiation at a high dose rate. This irradiation method can therefore well simulate, and conservatively evaluate, low-dose-rate damage, which is of great significance for research on radiation effects in bipolar devices. Finally, the mechanisms behind the experimental phenomena were analyzed. (authors)
Resource costing for multinational neurologic clinical trials: methods and results.
Schulman, K; Burke, J; Drummond, M; Davies, L; Carlsson, P; Gruger, J; Harris, A; Lucioni, C; Gisbert, R; Llana, T; Tom, E; Bloom, B; Willke, R; Glick, H
1998-11-01
We present the results of a multinational resource costing study for a prospective economic evaluation of a new medical technology for treatment of subarachnoid hemorrhage within a clinical trial. The study describes a framework for the collection and analysis of international resource cost data that can contribute to a consistent and accurate intercountry estimation of cost. Of the 15 countries that participated in the clinical trial, we collected cost information in the following seven: Australia, France, Germany, the UK, Italy, Spain, and Sweden. The collection of cost data in these countries was structured through the use of worksheets to provide accurate and efficient cost reporting. We converted total average costs to average variable costs and then aggregated the data to develop study unit costs. When unit costs were unavailable, we developed an index table, based on a market-basket approach, to estimate unit costs. To estimate the cost of a given procedure, the market-basket estimation process required that cost information be available for at least one country. When cost information was unavailable in all countries for a given procedure, we estimated costs using a method based on physician-work and practice-expense resource-based relative value units. Finally, we converted study unit costs to a common currency using purchasing power parity measures. Through this costing exercise we developed a set of unit costs for patient services and per diem hospital services. We conclude by discussing the implications of our costing exercise and suggest guidelines to facilitate more effective multinational costing exercises.
Saputra, K. V. I.; Cahyadi, L.; Sembiring, U. A.
2018-01-01
In this paper, we assess our traditional elementary statistics education and introduce elementary statistics with simulation-based inference. To assess our statistics class, we adapt the well-known CAOS (Comprehensive Assessment of Outcomes in Statistics) test, which serves as an external measure of students' basic statistical literacy and is generally accepted as a measure of statistical literacy. We also introduce a new teaching method in the elementary statistics class. Unlike the traditional elementary statistics course, we introduce a simulation-based inference method for conducting hypothesis tests. The literature has shown that this new teaching method works very well in increasing students' understanding of statistics.
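Simulation-based inference replaces distributional assumptions with resampling. A minimal permutation test of a two-sample mean difference, of the kind such a course might use to motivate the p-value, can be sketched as follows (illustrative only; the paper does not specify its classroom implementation):

```python
import random

def permutation_test(a, b, n_perm=10000, rng=random):
    """Simulation-based two-sample test: the p-value is the fraction of
    random label shufflings whose absolute mean difference is at least as
    extreme as the observed one. No normality assumption is needed."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one avoids a p-value of zero
```

Students can vary `n_perm` and watch the p-value stabilize, which makes the sampling-distribution idea concrete.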
International Nuclear Information System (INIS)
Ito, Akihiko; Sasai, Takahiro
2006-01-01
This study addressed how different climate data sets influence simulations of the global terrestrial carbon cycle. For the period 1982-2001, we compared the results of simulations based on three climate data sets (NCEP/NCAR, NCEP/DOE AMIP-II and ERA40) employed in meteorological, ecological and biogeochemical studies, and on two different models (BEAMS and Sim-CYCLE). The models differed in their parameterizations of photosynthetic and phenological processes but used the same surface climate (e.g. shortwave radiation, temperature and precipitation), vegetation, soil and topography data. The three data sets give different climatic conditions, especially for shortwave radiation, in terms of long-term means, linear trends and interannual variability. Consequently, the simulation results for global net primary productivity varied by 16%-43% solely because of differences among the climate data sets, especially in regions where the shortwave radiation data differed markedly: differences in the climate data set can strongly influence simulation results. The differences among the climate data sets and between the two models resulted in slightly different spatial distributions and interannual variability in the net ecosystem carbon budget. To minimize uncertainty, we should pay attention to the specific climate data used. We recommend developing an accurate standard climate data set for simulation studies
DEFF Research Database (Denmark)
Gould, Derek A; Chalmers, Nicholas; Johnson, Sheena J
2012-01-01
Recognition of the many limitations of traditional apprenticeship training is driving new approaches to learning medical procedural skills. Among the simulation technologies and methods available today, computer-based systems are topical and bring the benefits of automated, repeatable, and reliable performance assessments. Human factors research is central to developing simulator models that are relevant to real-world imaging-guided interventional tasks and to the credentialing programs in which they would be used.
Electrostatic plasma simulation by Particle-In-Cell method using ANACONDA package
International Nuclear Information System (INIS)
Blandón, J S; Grisales, J P; Riascos, H
2017-01-01
Electrostatic plasma is the most representative and basic case in the field of plasma physics. One of its main characteristics is its ideal behavior, since it is assumed to be in a state of thermal equilibrium. Under this assumption, it is possible to study various complex phenomena such as plasma oscillations, waves, instabilities and damping. Likewise, computational simulation of this specific plasma is the first step toward analyzing the physical mechanisms of plasmas that are not in an equilibrium state and hence are not ideal. The Particle-In-Cell (PIC) method is widely used for such cases because of its precision. This work presents a PIC-method implementation for simulating electrostatic plasma in Python, using the ANACONDA package. The code has been corroborated by comparison with previous theoretical results for three specific phenomena in cold plasmas: oscillations, the two-stream instability (TSI) and Landau damping (LD). Finally, parameters and results are discussed. (paper)
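The basic PIC cycle referred to above (charge deposit, field solve, field gather, particle push) fits in a short function. The following is a sketch in normalized units with the plasma frequency scaled to 1, not the ANACONDA-based code itself:

```python
import numpy as np

def pic_step(x, v, grid_n, L, dt, qm=-1.0, wq=None):
    """One step of a 1D electrostatic PIC cycle: cloud-in-cell (CIC) charge
    deposit -> spectral Poisson solve -> CIC field gather -> kick-drift push.
    Normalized units (epsilon_0 = 1); the default macro-particle charge
    wq = -L/n_p together with qm = -1 sets the plasma frequency to 1,
    and a uniform ion background neutralizes the mean charge."""
    n_p = len(x)
    if wq is None:
        wq = -L / n_p
    dx = L / grid_n
    g = x / dx
    i0 = np.floor(g).astype(int) % grid_n
    w1 = g - np.floor(g)                        # CIC weight of the right node
    rho = np.zeros(grid_n)
    np.add.at(rho, i0, (1.0 - w1) * wq)
    np.add.at(rho, (i0 + 1) % grid_n, w1 * wq)
    rho = rho / dx - wq * n_p / L               # neutralizing background
    # Solve d2(phi)/dx2 = -rho on the periodic grid in Fourier space.
    k = 2.0 * np.pi * np.fft.fftfreq(grid_n, d=dx)
    k[0] = 1.0                                  # placeholder; DC mode zeroed
    phi_k = np.fft.fft(rho) / k**2
    phi_k[0] = 0.0
    E = np.real(np.fft.ifft(-1j * k * phi_k))   # E = -d(phi)/dx
    # Gather the field at particle positions with the same CIC weights.
    Ep = (1.0 - w1) * E[i0] + w1 * E[(i0 + 1) % grid_n]
    v = v + qm * Ep * dt
    x = (x + v * dt) % L
    return x, v
```

Using identical weights for deposit and gather makes the scheme momentum-conserving, which is a convenient sanity check on an implementation.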
Nuclear power plant training simulator system and method
International Nuclear Information System (INIS)
Ferguson, R.W.; Converse, R.E. Jr.
1975-01-01
A system is described for simulating the real-time dynamic operation of a full scope nuclear powered electrical generating plant for operator training utilizing apparatus that includes a control console with plant component control devices and indicating devices for monitoring plant operation. A general purpose digital computer calculates the dynamic simulation data for operating the indicating devices in accordance with the operation of the control devices. The functions for synchronization and calculation are arranged in a priority structure so as to insure an execution order that provides a maximum overlap of data exchange and simulation calculations. (Official Gazette)
Discrete simulation system based on artificial intelligence methods
Energy Technology Data Exchange (ETDEWEB)
Futo, I; Szeredi, J
1982-01-01
A discrete event simulation system based on the AI language Prolog is presented. The system, called t-Prolog, extends the traditional possibilities of simulation languages toward automatic problem solving by using backtracking in time and automatic model modification depending on logical deductions. As t-Prolog is an interactive tool, the user can interrupt the simulation run to modify the model or force it to return to a previous state in order to try possible alternatives. It admits the construction of goal-oriented or goal-seeking models with variable structure. Models are defined in a restricted version of the first-order predicate calculus using Horn clauses. 21 references.
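t-Prolog itself expresses models in Horn clauses, but the event-queue mechanics underneath any discrete event simulator can be shown in a language-neutral sketch (illustrative event names and delays; t-Prolog adds logic-based model modification and backtracking on top of this core):

```python
import heapq

def run_discrete_event(horizon=10.0):
    """Minimal discrete event simulation core: a priority queue of
    (time, event) pairs is drained in time order, and each handled event
    schedules its successor. Here a machine alternates 'work' (2 time
    units) and 'rest' (1 time unit) until the horizon is reached."""
    clock, log = 0.0, []
    events = [(0.0, 'work')]
    while events:
        clock, kind = heapq.heappop(events)
        if clock > horizon:
            break
        log.append((clock, kind))
        if kind == 'work':
            heapq.heappush(events, (clock + 2.0, 'rest'))
        else:
            heapq.heappush(events, (clock + 1.0, 'work'))
    return log
```

Backtracking in time, as t-Prolog offers, amounts to checkpointing the queue and state so the simulation can be rolled back to a saved point and replayed along an alternative branch.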
A high precision extrapolation method in multiphase-field model for simulating dendrite growth
Yang, Cong; Xu, Qingyan; Liu, Baicheng
2018-05-01
Coupling the phase-field method with thermodynamic data has become a trend for predicting microstructure formation in technical alloys. Nevertheless, the frequent access to the thermodynamic database and the calculation of local equilibrium conditions can be time intensive. Extrapolation methods, which are derived from a Taylor expansion, can provide approximate results with high computational efficiency, and have proven successful in applications. This paper presents a high-precision second-order extrapolation method for calculating the driving force of the phase transformation. To obtain the phase compositions, different methods of solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its accuracy. The developed second-order extrapolation method, along with the M-slope approach and the first-order extrapolation method, is applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and the dendrite tip position, which demonstrates the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computations, a graphics processing unit (GPU) based parallel computing scheme is developed. The application to a large-scale simulation of multi-dendrite growth in an isothermal cross-section demonstrates the ability of the developed GPU-accelerated second-order extrapolation approach for the multiphase-field model.
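The error reduction from keeping the second-order Taylor term can be illustrated generically. Here `f` is an arbitrary smooth stand-in for the thermodynamic driving force, not the paper's CALPHAD-coupled implementation:

```python
import math

def extrapolants(f, df, d2f, c0):
    """Return first- and second-order Taylor extrapolants of f around a
    reference composition c0. In a phase-field code these replace repeated
    database calls near c0; the error is O(dc^2) vs O(dc^3)."""
    first = lambda c: f(c0) + df(c0) * (c - c0)
    second = lambda c: (f(c0) + df(c0) * (c - c0)
                        + 0.5 * d2f(c0) * (c - c0) ** 2)
    return first, second
```

For a step `dc = 0.1` away from the reference point, the second-order extrapolant is typically more than an order of magnitude more accurate, which is exactly why it permits larger update intervals between exact equilibrium calculations.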
International Nuclear Information System (INIS)
Johnsen, Eric; Larsson, Johan; Bhagatwala, Ankit V.; Cabot, William H.; Moin, Parviz; Olson, Britton J.; Rawat, Pradeep S.; Shankar, Santhosh K.; Sjoegreen, Bjoern; Yee, H.C.; Zhong Xiaolin; Lele, Sanjiva K.
2010-01-01
Flows in which shock waves and turbulence are present and interact dynamically occur in a wide range of applications, including inertial confinement fusion, supernova explosions, and scramjet propulsion. Accurate simulations of such problems are challenging because of the contradictory requirements of numerical methods used to simulate turbulence, which must minimize any numerical dissipation that would otherwise overwhelm the small scales, and shock-capturing schemes, which introduce numerical dissipation to stabilize the solution. The objective of the present work is to evaluate the performance of several numerical methods capable of simultaneously handling turbulence and shock waves. A comprehensive range of high-resolution methods (WENO, hybrid WENO/central difference, artificial diffusivity, adaptive characteristic-based filter, and shock fitting) and a suite of test cases (Taylor-Green vortex, Shu-Osher problem, shock-vorticity/entropy wave interaction, Noh problem, compressible isotropic turbulence) relevant to problems with shocks and turbulence are considered. The results indicate that the WENO methods provide sharp shock profiles, but overwhelm the physical dissipation. The hybrid method is minimally dissipative and leads to sharp shocks and well-resolved broadband turbulence, but relies on an appropriate shock sensor. Artificial diffusivity methods in which the artificial bulk viscosity is based on the magnitude of the strain-rate tensor resolve vortical structures well but damp dilatational modes in compressible turbulence; dilatation-based artificial bulk viscosity methods significantly improve this behavior. For well-defined shocks, the shock fitting approach yields good results.
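The tension described above can be seen on the simplest model problem: first-order upwind differencing of linear advection is robustly stable, in the spirit of a shock-capturing scheme, but its numerical dissipation visibly damps the wave, which is exactly what fine turbulent scales cannot tolerate. A sketch (not one of the paper's test cases):

```python
import numpy as np

def advect_upwind(periods=1.0, n=200, cfl=0.4):
    """Advect a sine wave once around a periodic domain with first-order
    upwind differencing (advection speed a = 1). The scheme never blows up,
    but the implicit dissipation steadily reduces the wave amplitude."""
    dx = 1.0 / n
    dt = cfl * dx
    u = np.sin(2.0 * np.pi * np.arange(n) * dx)
    for _ in range(int(round(periods / dt))):
        u = u - (dt / dx) * (u - np.roll(u, 1))   # upwind flux difference
    return u
```

After one period the exact solution returns to the initial sine wave with amplitude 1; the upwind result survives intact but measurably damped, and the damping grows rapidly for shorter wavelengths, which is why low-dissipation (e.g. central or hybrid) schemes are preferred away from shocks.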
International Nuclear Information System (INIS)
Sharifi, M. J.; Adibi, A.
2000-01-01
In this paper, we have extended and completed our previous work, which introduced a new method of finite differencing. We show the applicability of the method to solving a wide variety of equations such as the Poisson, Laplace and Schrödinger equations, which are fundamental to most semiconductor device simulators. In one section, we solve the Schrödinger equation by this method in several cases, including the problem of finding the electron concentration profile in the channel of a HEMT. In another section, we solve the Poisson equation by this method, choosing the problem of the SBD as an example. Finally, we solve the Laplace equation in two dimensions and, as an example, focus on the VED. In this paper, we have shown that the method gives stable and precise results in solving all of these problems. Also, the programs written on the basis of this method become considerably faster, clearer, and more abstract
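For reference, the baseline such device simulators build on is the standard three-point finite-difference discretization of the 1D Poisson equation. The sketch below is that textbook scheme, not the authors' new differentiation method (which the abstract does not specify), and uses a dense solve for clarity where a production code would use a tridiagonal or sparse solver:

```python
import numpy as np

def solve_poisson_1d(rho, dx, v0=0.0, v1=0.0):
    """Solve d2V/dx2 = -rho on a uniform interior grid with the standard
    three-point stencil and Dirichlet boundary values v0, v1:
    (V[i-1] - 2 V[i] + V[i+1]) / dx^2 = -rho[i]."""
    n = len(rho)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = -2.0
        if i > 0:
            A[i, i - 1] = 1.0
        if i < n - 1:
            A[i, i + 1] = 1.0
    b = -rho * dx**2
    b[0] -= v0          # fold the known boundary values into the RHS
    b[-1] -= v1
    return np.linalg.solve(A, b)
```

For a uniform charge density the exact potential is quadratic, and the three-point stencil reproduces it to machine precision, which makes a convenient verification case.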
Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'
DEFF Research Database (Denmark)
de Nijs, Robin
2015-01-01
In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. The redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and confirmed the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics...
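Poisson resampling of acquired counts is, at its core, binomial thinning. A NumPy sketch (the verification above was done in Matlab; this is an independent illustration of the same statistical fact):

```python
import numpy as np

def half_count(image, fraction=0.5, rng=None):
    """Binomial thinning: if a pixel's counts are Poisson(lam), drawing
    Binomial(n, fraction) from its observed n counts yields a value that is
    exactly Poisson(fraction * lam) - the statistically correct way to
    simulate a reduced-count acquisition from already acquired data."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.binomial(image.astype(np.int64), fraction)
```

Because the thinned counts are exactly Poisson, the half-count image inherits the correct mean, variance, skewness and kurtosis automatically, which is why resampling outperforms redrawing from a fitted distribution.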
Directory of Open Access Journals (Sweden)
Domingos M. C. Rodrigues
2017-12-01
Full Text Available Conventional pathogen detection methods require trained personnel and specialized laboratories and can take days to provide a result. Thus, portable biosensors with a rapid detection response are vital for the current needs of in-loco quality assays. In this work the authors analyze the characteristics of an immunosensor based on the evanescent field in plastic optical fibers with macro curvature by comparing experimental with simulated results. The work studies different shapes of evanescent-wave based fiber optic sensors, adopting computational modeling to evaluate the probes with the best sensitivity. The simulation showed that for a U-shaped sensor, the best results can be achieved with a sensor of 980 µm diameter and 5.0 mm radius of curvature for refractive index sensing, whereas the meander-shaped sensor with 250 μm diameter and 1.5 mm radius of curvature showed better sensitivity for both bacteria and refractive index (RI) sensing. Then, an immunosensor was developed, first to measure refractive index and then functionalized to detect Escherichia coli. Based on the simulation results, we conducted studies with a real sensor for RI measurements and for Escherichia coli detection, aiming to establish the best diameter and curvature radius in order to obtain an optimized sensor. On comparing the experimental results with predictions made from the modelling, good agreement was obtained. The simulations performed allowed the evaluation of new geometric configurations of biosensors that can be easily constructed and that promise improved sensitivity.
Rodrigues, Domingos M C; Lopes, Rafaela N; Franco, Marcos A R; Werneck, Marcelo M; Allil, Regina C S B
2017-12-19
Conventional pathogen detection methods require trained personnel and specialized laboratories and can take days to provide a result. Thus, portable biosensors with a rapid detection response are vital for the current needs of in-loco quality assays. In this work the authors analyze the characteristics of an immunosensor based on the evanescent field in plastic optical fibers with macro curvature by comparing experimental with simulated results. The work studies different shapes of evanescent-wave based fiber optic sensors, adopting computational modeling to evaluate the probes with the best sensitivity. The simulation showed that for a U-shaped sensor, the best results can be achieved with a sensor of 980 µm diameter and 5.0 mm radius of curvature for refractive index sensing, whereas the meander-shaped sensor with 250 μm diameter and 1.5 mm radius of curvature showed better sensitivity for both bacteria and refractive index (RI) sensing. Then, an immunosensor was developed, first to measure refractive index and then functionalized to detect Escherichia coli. Based on the simulation results, we conducted studies with a real sensor for RI measurements and for Escherichia coli detection, aiming to establish the best diameter and curvature radius in order to obtain an optimized sensor. On comparing the experimental results with predictions made from the modelling, good agreement was obtained. The simulations performed allowed the evaluation of new geometric configurations of biosensors that can be easily constructed and that promise improved sensitivity.
Numerical simulation of jet breakup behavior by the lattice Boltzmann method
International Nuclear Information System (INIS)
Matsuo, Eiji; Koyama, Kazuya; Abe, Yutaka; Iwasawa, Yuzuru; Ebihara, Ken-ichi
2015-01-01
In order to understand the breakup behavior of a jet of molten core material in coolant during a core disruptive accident (CDA) of a sodium-cooled fast reactor (SFR), we simulated jet breakup due to hydrodynamic interaction using the lattice Boltzmann method (LBM). The applicability of the LBM to jet breakup simulation was validated by comparison with our experimental data. In addition, the influence of several dimensionless numbers, such as the Weber number and the Froude number, was examined using the LBM. As a result, we validated the applicability of the LBM to jet breakup simulation, and found that the jet breakup length is independent of the Froude number and in good agreement with Epstein's correlation when the jet interface becomes unstable. (author)
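The hydrodynamic core of any such lattice Boltzmann calculation is a collide-and-stream update of particle distribution functions. As a rough illustration only (not the authors' code, whose jet/coolant problem additionally needs a multiphase interface model), a minimal single-phase D2Q9 BGK step on a periodic grid might look like this; the grid size, relaxation time tau and density perturbation are arbitrary illustrative choices:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and their weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann equilibrium distributions."""
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau):
    """One BGK collide-and-stream step with periodic boundaries."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau            # collision
    for i in range(9):                                   # streaming
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))
    return f

nx = ny = 32
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny)), np.zeros((nx, ny)))
f[:, nx//2, ny//2] *= 1.01        # small density perturbation
for _ in range(10):
    f = lbm_step(f, tau=0.8)
```

Mass is conserved exactly by both the collision and the streaming steps, which is a basic sanity check for any LBM implementation.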
Method of transport simulation for electrons between 10eV and 30keV
International Nuclear Information System (INIS)
Terrissol, Michel.
1978-01-01
A Monte-Carlo transport simulation of low-energy electrons in matter is described, treating all interactions of the electrons with atoms, molecules or assemblies of them. Elastic scattering, ionization, excitation, plasmon creation, reorganization following inner-shell ionization, electron-hole pair creation, etc. are simulated individually by sampling from confirmed experimental or theoretical cross sections. In this way atomic and molecular gases, metals such as aluminium, and liquid water have been studied. The simulation follows the electrons until their energy falls below the atomic or molecular ionization potential of the irradiated matter. The entire trajectories of the primary electron and of all secondaries set in motion are reproduced exactly. Several applications to multiple scattering, radiobiology, microdosimetry and electron microscopy are presented, and some results are compared directly with experimental ones [fr
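Sampling each interaction "individually" from cross sections, as described above, is the standard discrete-inversion Monte Carlo step: draw a free path from the exponential distribution set by the total cross section, then pick the channel with probability proportional to its partial cross section. A schematic sketch with purely illustrative cross-section values (not the confirmed data the author samples from):

```python
import math
import random

def sample_interaction(sigmas, rng):
    """Pick an interaction channel with probability proportional
    to its partial cross section (discrete inversion sampling)."""
    total = sum(sigmas.values())
    u = rng.random() * total
    acc = 0.0
    for channel, sigma in sigmas.items():
        acc += sigma
        if u <= acc:
            return channel
    return channel  # guard against floating-point rounding

def free_path(total_sigma, density, rng):
    """Exponentially distributed distance to the next collision;
    the mean free path is 1 / (n * sigma_total)."""
    mfp = 1.0 / (density * total_sigma)
    return -mfp * math.log(1.0 - rng.random())

rng = random.Random(42)
# Illustrative (not measured) cross sections, arbitrary units:
sigmas = {"elastic": 3.0, "ionization": 1.5, "excitation": 0.5}
hits = {k: 0 for k in sigmas}
for _ in range(10000):
    hits[sample_interaction(sigmas, rng)] += 1
```

With these numbers, roughly 60% of collisions come out elastic, matching the ratio of the elastic cross section to the total.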
Modified enthalpy method for the simulation of melting and ...
Indian Academy of Sciences (India)
These include the implicit time-stepping method of Voller & Cross (1981), the explicit enthalpy method of Tacke (1985), the centroidal temperature correction method ... In the variable viscosity method, viscosity is written as a function of the liquid fraction.
First experimental results and simulation for gas optimisation of the MART-LIME detector
International Nuclear Information System (INIS)
Bazzano, A.; Brunetti, M.T.; Cocchi, M.; Hall, C.J.; Lewis, R.A.; Natalucci, L.; Ortuno-Prados, F.; Ubertini, P.
1996-01-01
A large area high pressure multi-wire proportional counter (MWPC), with both spatial and spectroscopic capabilities, is being jointly developed by the Istituto di Astrofisica Spaziale (IAS), CNR, Frascati, Italy and the Daresbury Laboratory (DL), Warrington, UK as part of the MART-LIME telescope. Recent test results (October-December 1995) carried out at the DL facilities are presented. A brief study, by means of a simulation program, on the possible gas mixtures to be employed in the MART-LIME detector is also reported. The results of the simulation are compared with the experimental data obtained from the tests. (orig.)
Simulation and Analysis of Microwave Transmission through an Electron Cloud, a Comparison of Results
International Nuclear Information System (INIS)
Sonnad, Kiran; Sonnad, Kiran; Furman, Miguel; Veitzer, Seth; Stoltz, Peter; Cary, John
2007-01-01
Simulation studies of the transmission of microwaves through electron clouds show good agreement with analytic results. The electron cloud produces a shift in the phase of the microwave. Experimental observation of this phenomenon would provide a useful diagnostic tool for assessing the local density of electron clouds in an accelerator. Such experiments are being carried out at the CERN SPS and the PEP-II LER at SLAC, and are proposed for the Fermilab main injector. In this study, a brief analysis of the phase shift is provided and the results are compared with those obtained from simulations.
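For an underdense electron cloud the phase shift referred to above follows from the plasma refractive index, n ≈ 1 − ω_p²/(2ω²), giving a phase advance Δφ ≈ ω_p²L/(2cω) over a path length L. A small sketch of this textbook estimate; the density, length and frequency used below are illustrative placeholders, not values from the SPS or PEP-II measurements:

```python
import math

E = 1.602176634e-19       # elementary charge [C]
ME = 9.1093837015e-31     # electron mass [kg]
EPS0 = 8.8541878128e-12   # vacuum permittivity [F/m]
C = 2.99792458e8          # speed of light [m/s]

def phase_shift(n_e, length, freq):
    """Phase advance (radians) of a microwave of frequency `freq`
    crossing `length` metres of underdense electron cloud with
    density `n_e` [m^-3], using dphi ~ omega_p^2 * L / (2 c omega)."""
    omega = 2 * math.pi * freq
    omega_p2 = n_e * E**2 / (EPS0 * ME)   # plasma frequency squared
    return omega_p2 * length / (2 * C * omega)

# Illustrative numbers: 1e12 m^-3 cloud, 10 m path, 2 GHz carrier
dphi = phase_shift(1e12, 10.0, 2e9)
```

The shift is linear in the cloud density and inversely proportional to the carrier frequency, which is why measuring it gives direct access to the local density.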
Energy Technology Data Exchange (ETDEWEB)
Chai, Penghui, E-mail: phchai@vis.t.u-tokyo.ac.jp; Kondo, Masahiro; Erkan, Nejdet; Okamoto, Koji
2016-05-15
Highlights: • Multiphysics models were developed based on Moving Particle Semi-implicit method. • Mixing process, chemical reaction can be simulated in MCCI calculation. • CCI-2 experiment was simulated to validate the models. • Simulation and experimental results for sidewall ablation agree well. • Simulation results confirm the rapid erosion phenomenon observed in the experiment. - Abstract: Numerous experiments have been performed to explore the mechanisms of molten core-concrete interaction (MCCI) phenomena since the 1980s. However, previous experimental results show that uncertainties pertaining to several aspects such as the mixing process and crust behavior remain. To explore the mechanism governing such aspects, as well as to predict MCCI behavior in real severe accident events, a number of simulation codes have been developed for process calculations. However, uncertainties exist among the codes because of the use of different empirical models. In this study, a new computational code is developed using multiphysics models to simulate MCCI phenomena based on the moving particle semi-implicit (MPS) method. Momentum and energy equations are used to solve the velocity and temperature fields, and multiphysics models are developed on the basis of the basic MPS method. The CCI-2 experiment is simulated by applying the developed code. With respect to sidewall ablation, good agreement is observed between the simulation and experimental results. However, axial ablation is slower in the simulation, which is probably due to the underestimation of the enhancement effect of heat transfer provided by the moving bubbles at the bottom. In addition, the simulation results confirm the rapid erosion phenomenon observed in the experiment, which in the numerical simulation is explained by solutal convection provided by the liquid concrete at the corium/concrete interface. The results of the comparison of different model combinations show the effect of each
Review of Vortex Methods for Simulation of Vortex Breakdown
National Research Council Canada - National Science Library
Levinski, Oleg
2001-01-01
The aim of this work is to identify current developments in the field of vortex breakdown modelling in order to initiate the development of a numerical model for the simulation of F/A-18 empennage buffet...
High accuracy mantle convection simulation through modern numerical methods
Kronbichler, Martin; Heister, Timo; Bangerth, Wolfgang
2012-01-01
Numerical simulation of the processes in the Earth's mantle is a key piece in understanding its dynamics, composition, history and interaction with the lithosphere and the Earth's core. However, doing so presents many practical difficulties related
New methods for simulation of fractional Brownian motion
International Nuclear Information System (INIS)
Yin, Z.M.
1996-01-01
We present new algorithms for the simulation of fractional Brownian motion (fBm), a set of important random functions widely used in geophysical and physical modeling, fractal image (landscape) simulation, and signal processing. The new algorithms, which are both accurate and efficient, allow us to generate not only one-dimensional fBm processes but also two- and three-dimensional fBm fields. 23 refs., 3 figs
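The paper's own algorithms are not reproduced here, but the simplest exact 1D generator, against which faster methods are usually benchmarked, is Cholesky factorization of the fBm covariance E[B_H(s)B_H(t)] = (s^2H + t^2H − |s−t|^2H)/2. A sketch:

```python
import numpy as np

def fbm_paths(n, hurst, m=1, seed=0):
    """Sample m paths of fractional Brownian motion B_H on [0, 1] at n
    equispaced points via Cholesky factorization of the exact covariance.
    O(n^3) setup: exact but slower than FFT-based (circulant embedding)
    generators for large n."""
    t = np.linspace(1.0 / n, 1.0, n)       # B_H(0) = 0 handled separately
    s, u = np.meshgrid(t, t, indexing="ij")
    # fBm covariance: E[B_H(s)B_H(u)] = (s^2H + u^2H - |s-u|^2H) / 2
    cov = 0.5 * (s**(2*hurst) + u**(2*hurst) - np.abs(s - u)**(2*hurst))
    L = np.linalg.cholesky(cov)
    rng = np.random.default_rng(seed)
    paths = rng.standard_normal((m, n)) @ L.T   # rows have covariance cov
    return np.hstack([np.zeros((m, 1)), paths]) # prepend B_H(0) = 0

paths = fbm_paths(64, hurst=0.7, m=4000)
```

A quick self-check: the variance of B_H(1) must equal 1^(2H) = 1 for any Hurst exponent H, which the empirical variance of the sampled endpoints should reproduce.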
Numerical methods for the simulation of high intensity hadron synchrotrons.
Energy Technology Data Exchange (ETDEWEB)
LUCCIO, A.; D' IMPERIO, N.; MALITSKY, N.
2005-09-12
Numerical algorithms for the PIC simulation of beam dynamics in a high-intensity synchrotron on a parallel computer are presented. We introduce numerical solvers of the Laplace-Poisson equation in the presence of walls, and algorithms to compute tunes and Twiss functions in the presence of space-charge forces. The working code for the simulations presented here is SIMBAD, which can be run stand-alone or as part of the UAL (Unified Accelerator Libraries) package.
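Far from conducting walls, the space-charge (Poisson) solve at the heart of such a PIC code reduces to a spectral division on a periodic grid; the wall boundary conditions the abstract mentions require more elaborate solvers. A minimal periodic 2D sketch (not the SIMBAD implementation):

```python
import numpy as np

def poisson_fft_periodic(rho, dx):
    """Solve  laplacian(phi) = -rho  on a periodic 2D grid with FFTs.
    In Fourier space -k^2 phi_hat = -rho_hat, so phi_hat = rho_hat / k^2;
    the k = 0 mode is fixed to zero (overall charge neutrality)."""
    n0, n1 = rho.shape
    kx = 2 * np.pi * np.fft.fftfreq(n0, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(n1, d=dx)
    k2 = kx[:, None]**2 + ky[None, :]**2
    rho_hat = np.fft.fft2(rho)
    phi_hat = np.zeros_like(rho_hat)
    nz = k2 != 0
    phi_hat[nz] = rho_hat[nz] / k2[nz]
    return np.real(np.fft.ifft2(phi_hat))
```

A manufactured single-mode solution is recovered to machine precision, since the FFT representation is exact for band-limited fields.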
Lv, C L; Liu, Q B; Cai, C Y; Huang, J; Zhou, G W; Wang, Y G
2015-01-01
In transmission electron microscopy, the revised real space (RRS) method has been confirmed to be a more accurate dynamical electron diffraction simulation method for low-energy electron diffraction than the conventional multislice method (CMS). However, the RRS method could previously only be applied to orthogonal crystal systems. In this work, the expression of the RRS method for non-orthogonal crystal systems is derived. Taking Na2Ti3O7 and Si as examples, the correctness of the derived RRS formula for non-orthogonal crystal systems is confirmed by testing the coincidence of the numerical results of both sides of the Schrödinger equation; moreover, the difference between the RRS method and the CMS for non-orthogonal crystal systems is compared over the accelerating voltage range from 40 down to 10 kV. Our results show that the CMS method is almost identical to the RRS method for accelerating voltages above 40 kV. However, when the accelerating voltage is lowered to 20 kV or below, the CMS method introduces significant errors, not only for the higher-order Laue zone diffractions but also for the zero-order Laue zone. This indicates that the RRS method for non-orthogonal crystal systems should be used for accurate dynamical simulation when the accelerating voltage is low. Furthermore, the reason why the differences between the diffraction patterns calculated by the RRS and CMS methods grow as the accelerating voltage decreases is discussed. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
Sakamoto, Shinichi; Otsuru, Toru
2014-01-01
This book reviews a variety of methods for wave-based acoustic simulation and recent applications to architectural and environmental acoustic problems. Following an introduction providing an overview of computational simulation of sound environment, the book is in two parts: four chapters on methods and four chapters on applications. The first part explains the fundamentals and advanced techniques for three popular methods, namely, the finite-difference time-domain method, the finite element method, and the boundary element method, as well as alternative time-domain methods. The second part demonstrates various applications to room acoustics simulation, noise propagation simulation, acoustic property simulation for building components, and auralization. This book is a valuable reference that covers the state of the art in computational simulation for architectural and environmental acoustics.
Development and simulation of various methods for neutron activation analysis
International Nuclear Information System (INIS)
Otgooloi, B.
1993-01-01
Simple methods for neutron activation analysis have been developed. Results are reported for an installation that determines fluorine in fluorite ores directly on the lorry by fast-neutron activation analysis. Nitrogen in organic materials was determined by 14N and 15N activation. The new 'FLUORITE' equipment for a fluorite factory is briefly described. Pu and Be isotopes in organic materials, including wheat, were measured. 25 figs, 19 tabs. (Author, Translated by J.U)
Study on the Growth of Holes in Cold Spraying via Numerical Simulation and Experimental Methods
Directory of Open Access Journals (Sweden)
Guosheng Huang
2016-12-01
Full Text Available Cold spraying is a promising method for rapid prototyping due to its high deposition efficiency and high-quality bonding characteristics. However, many researchers have noticed that holes cannot be replenished and grow larger and larger once formed, which significantly decreases the deposition efficiency. No work had yet been done on this problem. In this paper, a computational simulation method was used to investigate the origins of these holes and the reasons for their growth. A thick copper coating was deposited around pre-drilled, micro-sized holes on a copper substrate by cold spraying to verify the simulation results. The results indicate that the deposition efficiency inside a hole decreases as the hole becomes deeper and narrower. The repellent force between particles perpendicular to the impact direction leads to porosity if the particles are too close. Successive particles arriving too close to the same location show a much lower flattening ratio, because their momentum contributes to the former particle's deformation. There is a high probability that these two phenomena, both resulting from a high powder-feeding rate, form the original hole, which then grows larger and larger. It is therefore very important to control the powder-feeding rate, but its upper limit is yet to be determined by further simulation and experimental investigation.
Evaluation of the successive approximations method for acoustic streaming numerical simulations.
Catarino, S O; Minas, G; Miranda, J M
2016-05-01
This work evaluates the successive approximations method commonly used to predict acoustic streaming by comparing it with a direct method. The successive approximations method solves both the acoustic wave propagation and the acoustic streaming via the first- and second-order Navier-Stokes equations, ignoring the first-order convective effects. This method was applied to acoustic streaming in a 2D domain and the results were compared with those from the direct simulation of the Navier-Stokes equations. The velocity results showed qualitative agreement between the two methods, which indicates that the successive approximations method can describe the formation of flows with recirculation. However, a large quantitative deviation was observed between the two methods. Further analysis showed that the successive approximations method solution is sensitive to the initial flow field. The direct method showed that the instantaneous flow field changes significantly due to reflections and wave interference. It was also found that convective effects contribute significantly to the wave propagation pattern. These effects must be taken into account when solving acoustic streaming problems, since they affect the global flow. By adequately calculating the initial condition for the first-order step, the acoustic streaming prediction of the successive approximations method can be improved significantly.
Simulation training for emergency obstetric and neonatal care in Senegal: preliminary results.
Gueye, M; Moreira, P M; Faye-Dieme, M E; Ndiaye-Gueye, M D; Gassama, O; Kane-Gueye, S M; Diouf, A A; Niang, M M; Diadhiou, M; Diallo, M; Dieng, Y D; Ndiaye, O; Diouf, A; Moreau, J C
2017-06-01
To describe a new training approach for emergency obstetric and neonatal care (EmONC) introduced in Senegal to strengthen the skills of healthcare providers. The approach was based on skills training according to the so-called "humanist" method and on "lifesaving skills". Simulated practice took place in the classroom through 13 clinical stations summarizing the clinical skills needed for EmONC. Evaluation took place in all phases, and the results were recorded in a database to document the progress of each learner. This approach was used to train 432 providers in 10 months and to document the increase in each participant's technical achievements. The combination of training with the "learning by doing" model ensured that providers learned and mastered all EmONC skills and reduced the missed learning opportunities observed in former EmONC training sessions. Assessing the impact of training on EmONC indicators and introducing this learning modality into basic training are the two major challenges we currently face.
Piloted Simulator Evaluation Results of Flight Physics Based Stall Recovery Guidance
Lombaerts, Thomas; Schuet, Stefan; Stepanyan, Vahram; Kaneshige, John; Hardy, Gordon; Shish, Kimberlee; Robinson, Peter
2018-01-01
In recent studies, it has been observed that loss of control in flight is the most frequent primary cause of accidents. A significant share of accidents in this category can be remedied by upset prevention if possible, and by upset recovery if necessary, in this order of priorities. One of the most important upsets to recover from is stall. Recent accidents have shown that a correct stall recovery maneuver remains a big challenge in civil aviation, partly due to a lack of pilot training. A possible strategy to support the flight crew in this demanding context is to calculate a recovery guidance signal and show it in an intuitive way on one of the cockpit displays, for example by means of the flight director. Different methods for calculating the recovery signal, one based on fast model predictive control and another using an energy-based approach, were evaluated in four relevant operational scenarios by experienced commercial as well as test pilots in the Vertical Motion Simulator at NASA Ames Research Center. Evaluation results show that this approach could assist the pilots in executing a correct stall recovery maneuver.
Directory of Open Access Journals (Sweden)
M. Palmroth
2006-05-01
Full Text Available We compare the ionospheric electron precipitation morphology and power from a global MHD simulation (GUMICS-4) with direct measurements of auroral energy flux during a pair of substorms on 28-29 March 1998. The electron precipitation power is computed directly from global images of auroral light observed by the Polar satellite ultraviolet imager (UVI). Independently of the Polar UVI measurements, the electron precipitation energy is determined from SNOE satellite observations of the thermospheric nitric oxide (NO) density. We find that the GUMICS-4 simulation reproduces the spatial variation of the global aurora rather reliably, in the sense that the onset of the substorm appears in the GUMICS-4 simulation as enhanced precipitation in the right location at the right time. The total integrated precipitation power in the GUMICS-4 simulation is in quantitative agreement with the observations during quiet times, i.e., before the two substorm intensifications. We find that during active times the GUMICS-4 integrated precipitation is a factor of 5 lower than the observations indicate. However, we also find factor of 2-3 differences in the precipitation power among the three different UVI processing methods tested here. The findings of this paper are used to complete an earlier objective, in which the total ionospheric power deposition in the simulation is forecast from a mathematical expression that is a function of solar wind density, velocity and magnetic field. We find that during this event the correlation coefficient between the outcome of the forecasting expression and the simulation results is 0.83. During the event, the simulation result for the total ionospheric power deposition agrees with the observations (correlation coefficient 0.8) and with the AE index (0.85).
Comparing the results of lattice and off-lattice simulations for the melt of nonconcatenated rings
International Nuclear Information System (INIS)
Halverson, Jonathan D; Kremer, Kurt; Grosberg, Alexander Y
2013-01-01
To study the conformational properties of unknotted and nonconcatenated ring polymers in the melt, we present a detailed qualitative and quantitative comparison of simulation data obtained by molecular dynamics simulation using an off-lattice bead-spring model and by Monte Carlo simulation using a lattice model. We observe excellent, and sometimes even unexpectedly good, agreement between the off-lattice and lattice results for many of the quantities measured, including the gyration radii of the ring polymers, the gyration radii of their subchains, contact probabilities, surface characteristics, the number of contacts between subchains, and the static structure factors of the rings and their subchains. These results are, in part, contrasted with Moore curves and with the open, linear polymer counterparts. While our analysis is extensive, our understanding of the ring melt conformations is still rather preliminary. (paper)
Results and current trends of nuclear methods used in agriculture
International Nuclear Information System (INIS)
Horacek, P.
1983-01-01
The significance of nuclear methods for agricultural research is evaluated. The number of varieties produced by radiation-induced mutation is increasing. The main importance of radiation mutation breeding lies in obtaining sources of desired genetic properties for further hybridization. Radiostimulation is conducted with the aim of increasing yields. The irradiation of foods has not increased substantially worldwide. Very important is the irradiation of excrements and sludges, which after such inactivation of pathogenic microorganisms may be used as humus-forming manure or as feed additives. In some countries the method of sexual sterilization is being used successfully for the eradication of insect pests. The application of labelled compounds in the nutrition, physiology and protection of plants and farm animals, and in food hygiene, makes it possible to acquire new and accurate knowledge very quickly. Radioimmunoassay is a highly promising method in this respect. Labelling compounds with the stable 15N isotope is used for research on nitrogen metabolism. (M.D.)
[Numerical simulation of the effect of virtual stent release pose on the expansion results].
Li, Jing; Peng, Kun; Cui, Xinyang; Fu, Wenyu; Qiao, Aike
2018-04-01
Current finite element analyses of vascular stent expansion do not take into account the effect of the stent release pose on the expansion results. In this study, stent and vessel models were established in Pro/E. Five finite element assembly models were constructed in ABAQUS: 0 degrees without eccentricity, 3 degrees without eccentricity, 5 degrees without eccentricity, 0 degrees with axial eccentricity, and 0 degrees with radial eccentricity. These models were divided into two groups of experiments for numerical simulation with respect to angle and eccentricity. Mechanical parameters such as the foreshortening rate, radial recoil rate and dogboning rate were calculated. The influence of angle and eccentricity on the numerical simulation was obtained by comparative analysis. The calculated residual stenosis rates were 38.3%, 38.4%, 38.4%, 35.7% and 38.2%, respectively, for the five models. The results indicate that the release pose has little effect on the numerical simulation results, so it can be neglected when high accuracy is not required, and the basic model (0 degrees, without eccentricity) is adequate for numerical simulation.
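The mechanical parameters named above are simple ratios of stent dimensions before and after expansion. One common set of definitions is sketched below; conventions vary between papers, and the authors' exact formulas are not given in the abstract, so the helper names and example dimensions are illustrative:

```python
def foreshortening(length_initial, length_expanded):
    """Relative loss of stent length on expansion."""
    return (length_initial - length_expanded) / length_initial

def radial_recoil(d_loaded, d_unloaded):
    """Relative loss of diameter after the balloon is deflated."""
    return (d_loaded - d_unloaded) / d_loaded

def dogboning(d_end, d_center):
    """Relative over-expansion of the stent ends versus its centre
    (the 'dog bone' shape seen during balloon inflation)."""
    return (d_end - d_center) / d_center

# Illustrative dimensions in mm (not from the paper):
f = foreshortening(10.0, 9.5)   # 5% shortening
r = radial_recoil(3.0, 2.7)     # 10% recoil
d = dogboning(3.3, 3.0)         # 10% dogboning
```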
[3D Virtual Reality Laparoscopic Simulation in Surgical Education - Results of a Pilot Study].
Kneist, W; Huber, T; Paschold, M; Lang, H
2016-06-01
The use of three-dimensional imaging in laparoscopy is a growing issue and has led to 3D systems in laparoscopic simulation. Studies on box trainers have shown differing results concerning the benefit of 3D imaging. There are currently no studies analysing 3D imaging in virtual reality laparoscopy (VRL). Five surgical fellows, 10 surgical residents and 29 undergraduate medical students performed abstract and procedural tasks on a VRL simulator using conventional 2D and 3D imaging in randomised order. No significant differences between the two imaging systems were found for students or medical professionals. Participants who preferred three-dimensional imaging showed significantly better results in 2D as well as in 3D imaging. Earlier results on three-dimensional imaging on box trainers were mixed; some studies found an advantage of 3D imaging for laparoscopic novices. The present study did not confirm the superiority of 3D imaging over conventional 2D imaging on a VRL simulator: no significant advantage for 3D imaging was observed. Georg Thieme Verlag KG Stuttgart · New York.
Measuring cognitive load: mixed results from a handover simulation for medical students.
Young, John Q; Irby, David M; Barilla-LaBarca, Maria-Louise; Ten Cate, Olle; O'Sullivan, Patricia S
2016-02-01
The application of cognitive load theory to workplace-based activities such as patient handovers is hindered by the absence of a measure of the different load types. This exploratory study tests a method for measuring cognitive load during handovers. The authors developed the Cognitive Load Inventory for Handoffs (CLI4H) with items for intrinsic, extraneous, and germane load. Medical students completed the measure after participating in a simulated handover. Exploratory factor and correlation analyses were performed to collect evidence for validity. Results yielded a two-factor solution for intrinsic and germane load that explained 50% of the variance. The extraneous load items performed poorly and were removed from the model. The score for intrinsic load correlated with the Paas Cognitive Load scale (r = 0.31, p = 0.004) and was lower for students with more prior handover training (p = 0.036). Intrinsic load did not, however, correlate with performance. Germane load did not correlate with the Paas Cognitive Load scale but did correlate as expected with performance (r = 0.30, p = 0.005) and was lower for students with more prior handover training (p = 0.03). The CLI4H yielded mixed results, with some evidence for the validity of the score from the intrinsic load items. The extraneous load items performed poorly, and the use of only a single item for germane load limits conclusions. The instrument requires further development and testing. The study results and limitations provide guidance for future efforts to measure cognitive load during workplace-based activities such as handovers.
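The validity evidence reported above rests largely on Pearson correlations between scale scores (e.g. r = 0.31 between intrinsic load and the Paas scale). For reference, a minimal implementation of that statistic; the data fed to it here are toy values, not the study's:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between
    two equal-length sequences of scores."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Toy check: perfectly linear scores correlate at exactly +1 / -1
r_pos = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
r_neg = pearson_r([1, 2, 3, 4], [8, 6, 4, 2])
```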