Comparing Productivity Simulated with Inventory Data Using Different Modelling Technologies
Klopf, M.; Pietsch, S. A.; Hasenauer, H.
2009-04-01
The Lime Stone National Park in Austria was established in 1997 to protect sensitive limestone soils from degradation due to heavy forest management. Since 1997 the management activities have been successively reduced; standing volume and coarse woody debris (CWD) increased and degraded soils began to recover. One option for studying the rehabilitation process towards a natural virgin-forest state is the use of modelling technology. In this study we test two different modelling approaches for their applicability to the Lime Stone National Park. We compare standing tree volume simulated by (i) the individual tree growth model MOSES, and (ii) the species- and management-sensitive adaptation of the biogeochemical-mechanistic model Biome-BGC. The results from the two models are compared with field observations from repeated permanent forest inventory plots of the Lime Stone National Park in Austria. The simulated CWD predictions of the BGC model were compared with dead wood measurements (standing and lying dead wood) recorded at the permanent inventory plots. The inventory was established between 1994 and 1996 and remeasured from 2004 to 2005. For this analysis 40 plots of this inventory were selected which comprise the required dead wood components and are dominated by a single tree species. First we used the distance-dependent individual tree growth model MOSES to derive the standing timber and the amount of mortality per hectare. MOSES is initialized with the inventory data at plot establishment and each sampling plot is treated as a forest stand. Biome-BGC is a process-based biogeochemical model with extensions for Austrian tree species, a self-initialization and a forest management tool. The initialization for the actual simulations with the BGC model was done as follows: we first used spin-up runs to derive a balanced forest vegetation, similar to an undisturbed forest. Next we considered the management history of the past centuries (heavy clear cuts
Comparing Numerical Spall Simulations with a Nonlinear Spall Formation Model
Ong, L.; Melosh, H. J.
2012-12-01
Spallation accelerates lightly shocked ejecta fragments to speeds that can exceed the escape velocity of the parent body. We present high-resolution simulations of nonlinear shock interactions in the near surface. Initial results show the acceleration of near-surface material to velocities up to 1.8 times greater than the peak particle velocity in the detached shock, while experiencing little to no shock pressure. These simulations suggest a possible nonlinear spallation mechanism to produce the high-velocity, low-shock-pressure meteorites from other planets. Here we present the numerical simulations that test the production of spall through nonlinear shock interactions in the near surface, and compare the results with a model proposed by Kamegai (1986 Lawrence Livermore National Laboratory Report). We simulate near-surface shock interactions using the SALES_2 hydrocode and the Murnaghan equation of state. We model the shock interactions in two geometries: rectangular and spherical. In the rectangular case, we model a planar shock approaching the surface at a constant angle phi. In the spherical case, the shock originates at a point below the surface of the domain and radiates spherically from that point. The angle of the shock front with the surface depends on the radial distance of the surface point from the shock origin. We model the target as a solid with a nonlinear Murnaghan equation of state. This idealized equation of state supports nonlinear shocks but is temperature independent. We track the maximum pressure and maximum velocity attained in every cell in our simulations and compare them to the Hugoniot equations that describe the material conditions in front of and behind the shock. Our simulations demonstrate that nonlinear shock interactions in the near surface produce lightly shocked high-velocity material for both planar and cylindrical shocks. The spall is the result of the free-surface boundary condition, which forces a pressure gradient
Comparing wall modeled LES and prescribed boundary layer approach in infinite wind farm simulations
DEFF Research Database (Denmark)
Sarlak, Hamid; Mikkelsen, Robert; Sørensen, Jens Nørkær
2015-01-01
This paper aims at presenting a simple and computationally fast method for simulation of the Atmospheric Boundary Layer (ABL) and comparing the results with the commonly used wall-modelled Large Eddy Simulation (WMLES). The simple method, called Prescribed Mean Shear and Turbulence (PMST) hereafter, is based on imposing body forces over the whole domain to maintain a desired unsteady flow, where the ground is modeled as a slip-free boundary, which in return hampers the need for grid refinement and/or wall modeling close to the solid walls. Another strength of this method besides being computationally ... be imposed to study the wake and dynamics of vortices. The methodology is used for simulation of interactions of an infinitely long wind farm with the neutral ABL. Flow statistics are compared with the WMLES computations in terms of mean velocity as well as higher order statistical moments. The results ...
Directory of Open Access Journals (Sweden)
Christopher W. Walmsley
2013-11-01
Full Text Available Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny with regard to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different
Comparative analysis of turbulence models for flow simulation around a vertical axis wind turbine
Energy Technology Data Exchange (ETDEWEB)
Roy, S.; Saha, U.K. [Indian Institute of Technology Guwahati, Dept. of Mechanical Engineering, Guwahati (India)
2012-07-01
An unsteady computational investigation of the static torque characteristics of a drag-based vertical axis wind turbine (VAWT) has been carried out using the finite volume based computational fluid dynamics (CFD) software package Fluent 6.3. A comparative study among the various turbulence models was conducted in order to predict the flow over the turbine at static condition, and the results are validated with the available experimental results. CFD simulations were carried out at different turbine angular positions between 0 deg. and 360 deg. in steps of 15 deg. Results have shown that due to high static pressure on the returning blade of the turbine, the net static torque is negative at angular positions of 105 deg.-150 deg. The realizable k-{epsilon} turbulence model has shown a better simulation capability than the other turbulence models for the analysis of static torque characteristics of the drag-based VAWT. (Author)
Brown, Patrick T.; Li, Wenhong; Cordero, Eugene C.; Mauget, Steven A.
2015-01-01
The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal. PMID:25898351
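The envelope idea above can be sketched numerically. The code below is an illustrative reconstruction under stated assumptions, not the authors' method: it bootstraps decadal-mean anomalies from a synthetic stand-in for an unforced temperature record to form an empirical noise envelope.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a reconstructed unforced GMT record (deg C anomalies).
unforced = rng.normal(0.0, 0.1, size=2000)

def noise_envelope(series, window, n_boot=5000, q=(2.5, 97.5), rng=rng):
    """Bootstrap an envelope of `window`-year mean anomalies from an
    unforced series: the range of excursions noise alone can produce."""
    starts = rng.integers(0, len(series) - window, size=n_boot)
    means = np.array([series[s:s + window].mean() for s in starts])
    return np.percentile(means, q)

lo, hi = noise_envelope(unforced, window=10)
# A decadal GMT excursion that stays inside [lo, hi] does not by itself
# require a change in the forced signal.
print(lo, hi)
```

A real application would replace the synthetic series with instrumental and reconstructed records, as in the study.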
Comparative study of wall-force models for the simulation of bubbly flows
Energy Technology Data Exchange (ETDEWEB)
Rzehak, Roland, E-mail: r.rzehak@hzdr.de [Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Institute of Fluid Dynamics, POB 510119, D-01314 Dresden (Germany); Krepper, Eckhard, E-mail: E.Krepper@hzdr.de [Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Institute of Fluid Dynamics, POB 510119, D-01314 Dresden (Germany); Lifante, Conxita, E-mail: Conxita.Lifante@ansys.com [ANSYS Germany GmbH, Staudenfeldweg 12, 83624 Otterfing (Germany)
2012-12-15
Highlights: • Comparison of common models for the wall force with an experimental database. • Identification of suitable closure for bubbly flow. • Enables prediction of location and height of wall peak in void fraction profiles. - Abstract: Accurate numerical prediction of void-fraction profiles in bubbly multiphase-flow relies on suitable closure models for the momentum exchange between liquid and gas phases. We here consider forces acting on the bubbles in the vicinity of a wall. A number of different models for this so-called wall-force have been proposed in the literature and are implemented in widely used CFD-codes. Simulations using a selection of these models are compared with a set of experimental data on bubbly air-water flow in round pipes of different diameter. Based on the results, recommendations on suitable closures are given.
Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc
2015-10-01
This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of nine averaging methods: the simple arithmetic mean (SAM), Akaike information criterion (AICA), Bates-Granger (BGA), Bayes information criterion (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variants A, B and C (GRA, GRB and GRC) and the average by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of weighted methods to that of individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averaging from these four methods was superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
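The least-squares idea behind the Granger-Ramanathan averages can be illustrated with a small sketch on hypothetical data (this is not the study's code; weights and hydrographs are invented): regress the observed hydrograph on the member simulations and score with the Nash-Sutcliffe Efficiency.

```python
import numpy as np

rng = np.random.default_rng(1)

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is perfect; 0 is no better than the mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Toy observed hydrograph and three biased, noisy model members.
t = np.linspace(0, 10, 200)
obs = 5 + 3 * np.exp(-((t - 4) ** 2))
members = np.stack([
    obs * 0.8 + rng.normal(0, 0.2, t.size),
    obs * 1.1 + rng.normal(0, 0.3, t.size),
    obs + 0.5 + rng.normal(0, 0.2, t.size),
])

# Unconstrained least-squares weights (the idea behind Granger-Ramanathan
# variant A): regress the observations on the member simulations.
w, *_ = np.linalg.lstsq(members.T, obs, rcond=None)
averaged = w @ members
print(nse(obs, averaged))
```

On the calibration data the least-squares combination can never score below the best single member, since each member is itself one admissible weighting; the study's point is that the advantage also tends to hold in validation.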
Energy Technology Data Exchange (ETDEWEB)
Koepferl, Christine M.; Robitaille, Thomas P., E-mail: koepferl@usm.lmu.de [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany)
2017-11-01
When modeling astronomical objects throughout the universe, it is important to correctly treat the limitations of the data, for instance finite resolution and sensitivity. In order to simulate these effects, and to make radiative transfer models directly comparable to real observations, we have developed an open-source Python package called the FluxCompensator that enables the post-processing of the output of 3D Monte Carlo radiative transfer codes, such as Hyperion. With the FluxCompensator, realistic synthetic observations can be generated by modeling the effects of convolution with arbitrary point-spread functions, transmission curves, finite pixel resolution, noise, and reddening. Pipelines can be applied to compute synthetic observations that simulate observatories, such as the Spitzer Space Telescope or the Herschel Space Observatory. Additionally, this tool can read in existing observations (e.g., FITS format) and use the same settings for the synthetic observations. In this paper, we describe the package as well as present examples of such synthetic observations.
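Two of the post-processing steps described (PSF convolution and noise) can be sketched in plain NumPy. This is not the FluxCompensator API, only an illustration of the idea on an invented point-source image.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Normalized 2-D Gaussian point-spread function."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def synthetic_observation(model_image, psf, noise_sigma, rng):
    """Convolve an ideal model image with a PSF and add Gaussian noise,
    mimicking finite resolution and sensitivity."""
    img_f = np.fft.rfft2(model_image, model_image.shape)
    psf_f = np.fft.rfft2(np.fft.ifftshift(psf), model_image.shape)
    blurred = np.fft.irfft2(img_f * psf_f, model_image.shape)
    return blurred + rng.normal(0, noise_sigma, model_image.shape)

rng = np.random.default_rng(2)
model = np.zeros((64, 64))
model[32, 32] = 100.0  # an ideal point source
obs = synthetic_observation(model, gaussian_psf(64, sigma=2.0), 0.01, rng)
print(obs.sum())  # total flux is conserved by the normalized PSF, up to noise
```

The real package additionally handles transmission curves, pixel rebinning, and reddening, and can match its settings to an existing FITS observation.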
Jian Yang; Hong S. He; Brian R. Sturtevant; Brian R. Miranda; Eric J. Gustafson
2008-01-01
We compared four fire spread simulation methods (completely random, dynamic percolation, size-based minimum travel time algorithm, and duration-based minimum travel time algorithm) and two fire occurrence simulation methods (Poisson fire frequency model and hierarchical fire frequency model) using a two-way factorial design. We examined these treatment effects on...
Comparative approaches from empirical to mechanistic simulation modelling in Land Evaluation studies
Manna, P.; Basile, A.; Bonfante, A.; Terribile, F.
2009-04-01
Land Evaluation (LE) comprises the evaluation procedures used to assess the suitability of the land for a generic or specific use (e.g. biomass production). From local to regional and national scale, the approach to land use planning requires a deep knowledge of the processes that drive the functioning of the soil-plant-atmosphere system. According to the classical approaches, the assessment of suitability is the result of a qualitative comparison between the land/soil physical properties and the land use requirements. These approaches are quick and inexpensive to apply; however, they are based on empirical and qualitative models with a basic knowledge structure built for a specific landscape and for the specific object of the evaluation (e.g. crop). The outcome of this situation is great difficulty in spatially extrapolating the LE results, and the rigidity of the system. Modern techniques, instead, rely on the application of mechanistic and quantitative simulation modelling that allows a dynamic characterisation of the interrelated physical and chemical processes taking place in the soil landscape. Moreover, the insertion of physically based rules in the LE procedure may make it easier both to extend the results spatially and to change the object (e.g. crop species, nitrate dynamics, etc.) of the evaluation. On the other side, these modern approaches require high quality and quantity of input data, which causes a significant increase in costs. In this scenario the LE expert is nowadays asked to choose the best LE methodology considering costs, complexity of the procedure and benefits in handling a specific land evaluation. In this work we performed a forage maize land suitability study by comparing 9 different methods of increasing complexity and costs. The study area, of about 2000 ha, is located in northern Italy in the Lodi plain (Po valley). The 9 employed methods ranged from standard LE approaches to
Salo , Tapio J.; Palosuo , Taru; Kersebaum , Kurt Christian; Nendel , Claas; Angulo , Carlos; Ewert , Frank; Bindi , Marco; Calanca , Pierluigi; Klein , Tommy; Moriondo , Marco; Ferrise , Roberto; Olesen , Jørgen Eivind; Patil , Rasmi H.; Ruget , Francoise; Takac , Jozef
2016-01-01
Eleven widely used crop simulation models (APSIM, CERES, CROPSYST, COUP, DAISY, EPIC, FASSET, HERMES, MONICA, STICS and WOFOST) were tested using a spring barley (Hordeum vulgare L.) data set under varying nitrogen (N) fertilizer rates from three experimental years in the boreal climate of Jokioinen, Finland. This is the largest standardized crop model inter-comparison under different levels of N supply to date. The models were calibrated using data from 2002 and 2008, of which 2008 included six N rates ranging from 0 to 150 kg N/ha.
Directory of Open Access Journals (Sweden)
T. V. O. Fabson
2011-11-01
Full Text Available Bullwhip (or whiplash) effect is an observed phenomenon in forecast-driven distribution channels, and careful management of these effects is of great importance to managers of supply chains. Bullwhip effect refers to situations where orders to the suppliers tend to have larger variance than sales to the buyer (demand distortion), and the distortion increases as we move up the supply chain. Due to the fact that customer demand for a product is unstable, business managers must forecast in order to properly position inventory and other resources. Forecasts are statistically based and in most cases are not very accurate. The existence of forecast errors makes it necessary for organizations to often carry an inventory buffer called "safety stock". Moving up the supply chain from the end-user customers to the raw materials supplier, there is a lot of variation in demand that can be observed, which calls for a greater need for safety stock. This study compares the efficacy of simulation and Time Series models in quantifying the bullwhip effects in supply chain management.
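The demand distortion described above is commonly quantified as a variance ratio. The sketch below uses an invented order-up-to policy with a moving-average forecast (not the paper's model) to show how forecasting over a lead time amplifies variance upstream.

```python
import numpy as np

rng = np.random.default_rng(3)

def bullwhip_ratio(orders, demand):
    """Common bullwhip measure: variance of orders placed upstream divided
    by variance of demand seen downstream (> 1 means amplification)."""
    return np.var(orders) / np.var(demand)

# Toy policy: orders track demand plus lead-time-scaled forecast changes,
# with the forecast taken as a moving average of recent demand.
demand = rng.normal(100, 10, size=1000)
lead_time, window = 4, 5
forecast = np.convolve(demand, np.ones(window) / window, mode="valid")
orders = demand[window:] + lead_time * np.diff(forecast)

print(bullwhip_ratio(orders, demand[window:]))
```

The ratio exceeds 1 because order variance inherits both the demand variance and the variance of the forecast updates, scaled by the lead time.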
Comparing Traditional versus Alternative Sequencing of Instruction When Using Simulation Modeling
Bowen, Bradley; DeLuca, William
2015-01-01
Many engineering and technology education classrooms incorporate simulation modeling as part of curricula to teach engineering and STEM-based concepts. The traditional method of the learning process has students first learn the content from the classroom teacher and then may have the opportunity to apply the learned content through simulation…
Five Blind Men and an Elephant: Comparing Aura Ozone Datasets and Sonde with Model Simulations
Tang, Q.; Prather, M. J.
2011-12-01
The four Earth Observing System (EOS) Aura satellite ozone measurements (HIRDLS, MLS, OMI, and TES) as well as the coincident WOUDC sondes are the five "blind men" touching the "elephant" (ozone). They all measure ozone (O3) in the upper troposphere and lower stratosphere (UT/LS) region, providing a great opportunity to study how tropospheric ozone is influenced by the stratospheric source, an important tropospheric ozone budget term with large uncertainties and discrepancies across different models and methods. Based upon the 2-D autocorrelation of the tropospheric column ozone anomalies of the OMI swaths, we show that the stratosphere-troposphere exchange (STE) processes occur on the scale of a few hundred kilometers. Applying the high resolution (1°×1°×40-layer×0.5 hr) atmospheric chemistry transport model (CTM) as a transfer standard, we compare the noncoincident Aura level 2 swath datasets with exactly matching simulations of each measurement to investigate the consistency of different instruments as well as to evaluate the accuracy of modeled ozone. Different signs of the CTM biases against HIRDLS, MLS, and TES are found from the tropics to northern hemisphere (NH) mid-latitudes in July 2005 at 215 hPa and over the tropics at 147 hPa for July 2005 and January 2006, suggesting inconsistency across these Aura datasets. On the other hand, the CTM has large positive biases against satellite observations in the lower stratosphere of wintertime southern hemisphere (SH) mid-latitudes, which is probably attributable to problems in the stratospheric circulation of the driving met-fields. The model's ability to reproduce STE-related processes, such as tropospheric folds (TFs), is confirmed by the comparisons with WOUDC sondes. We found eight cases in year 2005 with all four Aura measurements available and folding structures in the coincident sonde profile. The case studies indicate that all four Aura instruments demonstrate some skill in catching the
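The 2-D autocorrelation diagnostic mentioned above can be sketched via the Wiener-Khinchin theorem on a synthetic anomaly field (invented data, not the study's OMI swaths): the width of the central correlation peak estimates the spatial scale of the anomalies.

```python
import numpy as np

def autocorrelation_2d(field):
    """2-D autocorrelation of a de-meaned anomaly field, computed in
    Fourier space (Wiener-Khinchin) and normalized to 1 at zero lag."""
    a = field - field.mean()
    f = np.fft.fft2(a)
    ac = np.fft.ifft2(f * np.conj(f)).real
    return np.fft.fftshift(ac / ac.flat[0])  # move zero lag to the center

rng = np.random.default_rng(4)
# Hypothetical anomaly field with structure a few cells wide: white noise
# smoothed by a Gaussian kernel.
noise = rng.normal(size=(64, 64))
kernel = np.exp(-((np.arange(64) - 32) ** 2) / 18.0)
smooth = np.fft.ifft2(
    np.fft.fft2(noise) * np.fft.fft2(np.fft.ifftshift(np.outer(kernel, kernel)))
).real
ac = autocorrelation_2d(smooth)
print(ac[32, 32])  # 1.0 at zero lag, decaying with lag distance
```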
DEFF Research Database (Denmark)
Salo, T J; Palosuo, T; Kersebaum, K C
2016-01-01
Eleven widely used crop simulation models (APSIM, CERES, CROPSYST, COUP, DAISY, EPIC, FASSET, HERMES, MONICA, STICS and WOFOST) were tested using a spring barley (Hordeum vulgare L.) data set under varying nitrogen (N) fertilizer rates from three experimental years in the boreal climate of Jokioinen, Finland. This is the largest standardized crop model inter-comparison under different levels of N supply to date. The models were calibrated using data from 2002 and 2008, of which 2008 included six N rates ranging from 0 to 150 kg N/ha. Calibration data consisted of weather, soil, phenology, leaf area ... ranged from 170 to 870 kg/ha. During the test year 2009, most models failed to accurately reproduce the observed low yield without N fertilizer as well as the steep yield response to N applications. The multi-model predictions were closer to observations than most single-model predictions, but multi...
Maere, Thomas; Verrecht, Bart; Moerenhout, Stefanie; Judd, Simon; Nopens, Ingmar
2011-03-01
A benchmark simulation model for membrane bioreactors (BSM-MBR) was developed to evaluate operational and control strategies in terms of effluent quality and operational costs. The configuration of the existing BSM1 for conventional wastewater treatment plants was adapted using reactor volumes, pumped sludge flows and membrane filtration for the water-sludge separation. The BSM1 performance criteria were extended for an MBR, taking into account additional pumping requirements for permeate production and aeration requirements for membrane fouling prevention. To incorporate the effects of elevated sludge concentrations on aeration efficiency and costs, a dedicated aeration model was adopted. Steady-state and dynamic simulations revealed BSM-MBR, as expected, to outperform BSM1 for effluent quality, mainly due to complete retention of solids and improved ammonium removal from extensive aeration combined with higher biomass levels. However, this was at the expense of significantly higher operational costs. A comparison with three large-scale MBRs showed BSM-MBR energy costs to be realistic. The membrane aeration costs for the open loop simulations were rather high, attributed to non-optimization of BSM-MBR. As proof of concept two closed loop simulations were run to demonstrate the usefulness of BSM-MBR for identifying control strategies to lower operational costs without compromising effluent quality. Copyright © 2011 Elsevier Ltd. All rights reserved.
[Compared Markov with fractal models by using single-channel experimental and simulation data].
Lan, Tonghan; Wu, Hongxiu; Lin, Jiarui
2006-10-01
The gating kinetics of ion channels have been modeled as a Markov process. In these models it is assumed that the channel protein has a small number of discrete conformational states and that the kinetic rate constants connecting these states are constant; the transition rates among the states are independent both of time and of previous channel activity. In Liebovitch's fractal model it is assumed that the channel exists in an infinite number of energy states; consequently, transitions from one conductance state to another would be governed by a continuum of rate constants. In this paper, a statistical comparison of Markov and fractal models of ion channel gating is presented; the analysis is based on single-channel data from voltage-dependent K+ channels of neuron cells and on simulation data from a three-state Markov model.
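The Markov assumption can be made concrete with a minimal sketch (hypothetical rate constants, two states rather than the paper's three): constant transition rates imply exponentially distributed dwell times, which is exactly the property a fractal model relaxes by letting the effective rate depend on how long the channel has occupied a state.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two-state Markov gating sketch: closed <-> open with fixed rate constants.
k_open, k_close = 50.0, 200.0  # s^-1, hypothetical values

def dwell_times(k, n):
    """Dwell times in a state left at constant rate k are exponential
    with mean 1/k -- the Markov signature."""
    return rng.exponential(1.0 / k, size=n)

closed = dwell_times(k_open, 100_000)   # time spent closed before opening
opened = dwell_times(k_close, 100_000)  # time spent open before closing
print(closed.mean(), opened.mean())     # ~1/k_open and ~1/k_close
```

Statistical tests on real single-channel records, as in the paper, then ask whether measured dwell-time distributions are better fit by such fixed-rate exponentials or by the power-law-like forms a fractal model produces.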
One-dimensional GIS-based model compared with a two-dimensional model in urban floods simulation.
Lhomme, J; Bouvier, C; Mignot, E; Paquier, A
2006-01-01
A GIS-based one-dimensional flood simulation model is presented and applied to the centre of the city of Nîmes (Gard, France), for mapping flow depths or velocities in the street network. The geometry of the one-dimensional elements is derived from the Digital Elevation Model (DEM). The flow is routed from one element to the next using the kinematic wave approximation. At the crossroads, the flows in the downstream branches are computed using a conceptual scheme. This scheme was previously designed to fit Y-shaped pipe junctions, and has been modified here to fit X-shaped crossroads. The results were compared with the results of a two-dimensional hydrodynamic model based on the full shallow water equations. The comparison shows that good agreement can be found in the steepest streets of the study zone, but differences may be important in the other streets. Some reasons that can explain the differences between the two models are given and some research possibilities are proposed.
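The kinematic wave approximation used for the street routing can be sketched in a few lines. This is a generic explicit upwind scheme on a unit-width channel with invented parameters, not the GIS model itself.

```python
import numpy as np

def kinematic_wave_step(h, dt, dx, slope, n_manning, inflow):
    """One explicit upwind step of the kinematic wave approximation:
    dh/dt + dq/dx = lateral inflow, with unit-width discharge q from
    Manning's equation (wide section, hydraulic radius ~ depth)."""
    q = (np.sqrt(slope) / n_manning) * h ** (5.0 / 3.0)
    h_new = h.copy()
    h_new[1:] -= dt / dx * (q[1:] - q[:-1])  # upwind flux difference
    h_new += dt * inflow                      # lateral inflow (m/s)
    return np.maximum(h_new, 0.0)

# Route a constant lateral inflow down a hypothetical 500 m street segment.
h = np.zeros(50)
for _ in range(200):
    h = kinematic_wave_step(h, dt=0.5, dx=10.0, slope=0.01,
                            n_manning=0.015, inflow=1e-4)
print(h.max())  # flow depth builds up toward the downstream end
```

The kinematic wave neglects inertia and pressure-gradient terms, which is why the paper finds agreement with the full shallow water model mainly in the steepest streets.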
Dynamic Value at Risk: A Comparative Study Between Heteroscedastic Models and Monte Carlo Simulation
Directory of Open Access Journals (Sweden)
José Lamartine Távora Junior
2006-12-01
Full Text Available The objective of this paper was to analyze the risk management of a portfolio composed of Petrobras PN, Telemar PN and Vale do Rio Doce PNA stocks. It was verified whether the modeling of Value-at-Risk (VaR) through Monte Carlo simulation with volatility from the GARCH family is supported by the hypothesis of an efficient market. The results have shown that the static evaluation is inferior to the dynamic one, evidencing that the dynamic analysis supports the hypothesis of an efficient market in the Brazilian stock market, in opposition to some empirical evidence. It was also verified that GARCH volatility models are sufficient to accommodate the variations of the Brazilian stock market, since they are capable of accommodating its high dynamics.
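The core Monte Carlo VaR calculation can be sketched as follows. The volatility here is a fixed number standing in for a one-step-ahead GARCH forecast, and the portfolio value is hypothetical; a dynamic VaR, as in the paper, would refresh the forecast each day.

```python
import numpy as np

rng = np.random.default_rng(6)

def monte_carlo_var(mu, sigma, value, alpha=0.95, n=100_000):
    """One-day Value-at-Risk by Monte Carlo: simulate daily returns and
    take the (1 - alpha) quantile of the portfolio P&L as the loss bound."""
    pnl = value * rng.normal(mu, sigma, size=n)
    return -np.quantile(pnl, 1.0 - alpha)

# Hypothetical portfolio of R$1,000,000 with a forecast daily vol of 2%.
var_95 = monte_carlo_var(mu=0.0, sigma=0.02, value=1_000_000)
print(var_95)  # close to the analytic 1.645 * sigma * value for normal returns
```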
Energy Technology Data Exchange (ETDEWEB)
Merci, Bart [Department of Flow, Heat and Combustion Mechanics, Ghent University-UGent, Ghent (Belgium); Roekaerts, Dirk [Department of Multi-Scale Physics, Delft University of Technology, Delft (Netherlands); Naud, Bertrand [CIEMAT, Madrid (Spain); Pope, Stephen B. [Mechanical and Aerospace Engineering, Cornell University, Ithaca, NY (United States)
2006-07-15
Numerical simulation results are presented for turbulent jet diffusion flames with various levels of turbulence-chemistry interaction, stabilized behind a bluff body (Sydney Flames HM1-3). Interaction between turbulence and combustion is modeled with the transported joint-scalar PDF approach. The mass density function transport equation is solved in a Lagrangian manner. A second-moment-closure turbulence model is applied to obtain accurate mean flow and turbulent mixing fields. The behavior of two micromixing models is discussed: the Euclidean minimum spanning tree model and the modified Curl coalescence dispersion model. The impact of the micromixing model choice on the results in physical space is small, although some influence becomes visible as the amount of local extinction increases. Scatter plots and profiles of conditional means and variances of thermochemical quantities, conditioned on the mixture fraction, are discussed both within and downstream of the recirculation region. A distinction is made between local extinction and incomplete combustion, based on the CO species mass fraction. The differences in qualitative behavior between the micromixing models are explained and quantitative comparison to experimental data is made. (author)
Heinsch, Stephen C; Das, Siba R; Smanski, Michael J
2018-01-01
Increasing the final titer of a multi-gene metabolic pathway can be viewed as a multivariate optimization problem. While numerous multivariate optimization algorithms exist, few are specifically designed to accommodate the constraints posed by genetic engineering workflows. We present a strategy for optimizing expression levels across an arbitrary number of genes that requires few design-build-test iterations. We compare the performance of several optimization algorithms on a series of simulated expression landscapes. We show that optimal experimental design parameters depend on the degree of landscape ruggedness. This work provides a theoretical framework for designing and executing numerical optimization on multi-gene systems.
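A minimal sketch of optimizing expression levels on a simulated landscape; the smooth toy landscape and the greedy one-gene-at-a-time sweep below are invented for illustration (the paper's rugged landscapes can trap such greedy searches, which is why the choice of algorithm matters):

```python
def simulated_titer(levels):
    """Toy smooth expression landscape: titer peaks when every gene
    sits at level 2 of the 0..4 range (purely invented)."""
    return -sum((lvl - 2) ** 2 for lvl in levels)

def coordinate_ascent(n_genes=4, n_levels=5, sweeps=3):
    """Greedy search mimicking design-build-test rounds: each round
    varies a single gene's expression level and keeps the best design."""
    best = [0] * n_genes
    for _ in range(sweeps):
        for g in range(n_genes):
            candidates = [best[:g] + [lvl] + best[g + 1:]
                          for lvl in range(n_levels)]
            best = max(candidates, key=simulated_titer)
    return best

print(coordinate_ascent())   # finds the smooth optimum [2, 2, 2, 2]
```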
International Nuclear Information System (INIS)
Chason, E; Chan, W L
2009-01-01
Kinetic Monte Carlo simulations model the evolution of surfaces during low energy ion bombardment using atomic level mechanisms of defect formation, recombination and surface diffusion. Because the individual kinetic processes are completely determined, the resulting morphological evolution can be directly compared with continuum models based on the same mechanisms. We present results of simulations based on a curvature-dependent sputtering mechanism and diffusion of mobile surface defects. The results are compared with a continuum linear instability model based on the same physical processes. The model predictions are found to be in good agreement with the simulations for predicting the early-stage morphological evolution and the dependence on processing parameters such as the flux and temperature. This confirms that the continuum model provides a reasonable approximation of the surface evolution from multiple interacting surface defects using this model of sputtering. However, comparison with experiments indicates that there are many features of the surface evolution that do not agree with the continuum model or simulations, suggesting that additional mechanisms are required to explain the observed behavior.
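The rejection-free event-selection step underlying such simulations can be sketched generically; the toy ring of hopping surface defects below is an assumption for illustration, not the paper's curvature-dependent sputtering model:

```python
import random, math

def kmc_step(rates, rng):
    """Rejection-free kinetic Monte Carlo step: choose one event with
    probability proportional to its rate, and draw an exponential
    waiting time from the total rate."""
    total = sum(rates)
    r = rng.random() * total
    acc, chosen = 0.0, 0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total
    return chosen, dt

# Toy example: three mobile surface defects hopping on a ring of 20 sites
rng = random.Random(1)
positions = [0, 5, 10]
t = 0.0
for _ in range(1000):
    rates = [1.0] * (2 * len(positions))    # per defect: hop left, hop right
    event, dt = kmc_step(rates, rng)
    defect, direction = divmod(event, 2)
    positions[defect] = (positions[defect] + (1 if direction else -1)) % 20
    t += dt
```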
Burov, S V; Shchekin, A K
2010-12-28
General thermodynamic relations for the work of polydisperse micelle formation have been considered in the model of an ideal solution of molecular aggregates in nonionic surfactant solution and in the model of "dressed micelles" in ionic solution. In particular, the dependence of the aggregation work on the total concentration of nonionic surfactant has been analyzed. The analogous dependence for the work of formation of ionic aggregates has been examined with regard to the existence of two state variables of an ionic aggregate, the aggregation numbers of surface-active ions and counterions. To verify the thermodynamic models, molecular dynamics simulations of micellization in nonionic and ionic surfactant solutions at two total surfactant concentrations have been performed. It was shown that for nonionic surfactants, even at relatively high total surfactant concentrations, the shape and behavior of the work of polydisperse micelle formation found within the model of the ideal solution at different total surfactant concentrations agree fairly well with the numerical experiment. For ionic surfactant solutions, the numerical results indicate a strong screening of ionic aggregates by the bound counterions. This fact, as well as the independence of the coefficient in the law of mass action for ionic aggregates from the total surfactant concentration and the predictable behavior of the "waterfall" lines of the aggregation-work surfaces, upholds the model of "dressed" ionic aggregates.
Ybanez, R. L.; Lagmay, A. M. A.; David, C. P.
2016-12-01
With climatological hazards increasing globally, the Philippines is listed as one of the most vulnerable countries in the world due to its location in the Western Pacific. Flood hazard mapping and modelling is one of the responses by local government and research institutions to help prepare for and mitigate the effects of the flood hazards that constantly threaten towns and cities in floodplains during the six-month rainy season. Available digital elevation models, which serve as the most important dataset used in 2D flood modelling, are limited in the Philippines, and testing is needed to determine which of the few would work best for flood hazard mapping and modelling. Two-dimensional GIS-based flood modelling with the flood-routing software FLO-2D was conducted using three available DEMs: the ASTER GDEM, the SRTM GDEM, and the locally available IfSAR DTM. With all other parameters kept uniform, such as resolution, soil parameters, rainfall amount, and surface roughness, the three models were run over a 129 km² watershed with only the basemap varying. The output flood hazard maps were compared on the basis of their flood distribution, extent, and depth. The ASTER and SRTM GDEMs contained too much error and noise, which manifested as dissipated and dissolved hazard areas in the lower watershed where clearly delineated flood hazards should be present. Noise in the two datasets is clearly visible as erratic mounds in the floodplain. The only dataset which produced a feasible flood hazard map is the IfSAR DTM, which delineates flood hazard areas clearly and properly. Despite the published resolution and accuracy of ASTER and SRTM, their use in GIS-based flood modelling would be unreliable. Although not as accessible, only IfSAR or better datasets should be used for creating secondary products from these base DEM datasets. For developing countries which are most prone to hazards, but with limited choices for basemaps used in hazards
Czech Academy of Sciences Publication Activity Database
Salo, T.; Palosuo, T.; Kersebaum, K. C.; Nendel, C.; Angulo, C.; Ewert, F.; Bindi, M.; Calanca, P.; Klein, T.; Moriondo, M.; Ferrise, R.; Olesen, J. E.; Patil, R. H.; Ruget, F.; Takáč, J.; Hlavinka, Petr; Trnka, Miroslav; Rötter, R. P.
2016-01-01
Roč. 154, č. 7 (2016), s. 1218-1240 ISSN 0021-8596 R&D Projects: GA MŠk(CZ) LO1415; GA MZe QJ1310123; GA MŠk(CZ) LD13030 EU Projects: European Commission(XE) 268277; European Commission(XE) 292944 Institutional support: RVO:67179843 Keywords : Northern growing conditions * climate change impacts * spring barley * system simulations * soil properties * winter-wheat * dynamics * growth Subject RIV: GC - Agronomy Impact factor: 1.291, year: 2016
Comparative investigation of micro-flaw models for the simulation of brittle fracture in rock
CSIR Research Space (South Africa)
Sellers, E
1997-07-01
can be covered by a set of Voronoi polygons or Delaunay triangles (Napier and Peirce 1995). A subset of the edges of these polygons is selected and designated as pre-existing flaws with assigned strength and friction sliding properties. A specified load... of incremental displacements was applied to the surface of a rectangular block to simulate compression tests have been performed to study the fracture mechanisms induced in random Voronoi and Delaunay tessellation patterns (Napier and Peirce 1995; Napier...
International Nuclear Information System (INIS)
Kolev, B.
2006-01-01
Four in situ methods for determining potential and actual evapotranspiration were compared: neutron gauge, tensiometers, gypsum blocks and lysimeters. The actual and potential evapotranspiration were calculated by the water balance equation and by using a simulation model. This study mainly aimed at calculating water use efficiency and the transpiration coefficient in a potential production situation. This makes it possible to choose the best way to optimize water consumption for a given crop. The final results found with the best of the methods could be used for applying the principles of sustainable agricultural production to any site of the Bulgarian agricultural area
Rossen, E.T.R.
2012-01-01
This thesis has been dedicated to modeling the electron transport in tunnel junctions in order to efficiently describe and predict inelastic effects that occur when electrons pass a tunnel junction. These inelastic effects can be considered at several levels of sophistication, from very simple to
Gautestad, Arild O; Loe, Leif E; Mysterud, Atle
2013-05-01
1. Increased inference regarding underlying behavioural mechanisms of animal movement can be achieved by comparing GPS data with statistical mechanical movement models such as random walk and Lévy walk with known underlying behaviour and statistical properties. 2. GPS data are typically collected with ≥ 1 h intervals not exactly tracking every mechanistic step along the movement path, so a statistical mechanical model approach rather than a mechanistic approach is appropriate. However, comparisons require a coherent framework involving both scaling and memory aspects of the underlying process. Thus, simulation models have recently been extended to include memory-guided returns to previously visited patches, that is, site fidelity. 3. We define four main classes of movement, differing in incorporation of memory and scaling (based on respective intervals of the statistical fractal dimension D and presence/absence of site fidelity). Using three statistical protocols to estimate D and site fidelity, we compare these main movement classes with patterns observed in GPS data from 52 females of red deer (Cervus elaphus). 4. The results show best compliance with a scale-free and memory-enhanced kind of space use; that is, a power law distribution of step lengths, a fractal distribution of the spatial scatter of fixes and site fidelity. 5. Our study thus demonstrates how inference regarding memory effects and a hierarchical pattern of space use can be derived from analysis of GPS data. © 2013 The Authors. Journal of Animal Ecology © 2013 British Ecological Society.
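The statistical-mechanical movement models referred to above have simple generators; a sketch, assuming Pareto (power-law) step lengths for the scale-free Lévy-like case and Rayleigh step lengths for the Brownian-like case, with uniform turning angles (illustrative choices, not the study's exact protocols):

```python
import numpy as np

def simulate_walk(n_steps, levy=False, mu=2.0, seed=0):
    """2-D walk with uniform turning angles. Brownian-like: Rayleigh
    step lengths; Levy-like: Pareto step lengths with tail exponent
    mu, giving a scale-free (power law) step-length distribution."""
    rng = np.random.default_rng(seed)
    if levy:
        steps = (1.0 - rng.random(n_steps)) ** (-1.0 / (mu - 1.0))
    else:
        steps = rng.rayleigh(1.0, n_steps)
    angles = rng.uniform(0.0, 2.0 * np.pi, n_steps)
    path = np.cumsum(np.c_[steps * np.cos(angles),
                           steps * np.sin(angles)], axis=0)
    return path, steps

_, brownian = simulate_walk(20_000)
_, levy = simulate_walk(20_000, levy=True)
# A scale-free walk is dominated by rare, very long steps
print(levy.max() / levy.mean(), brownian.max() / brownian.mean())
```

Comparing statistics such as this max-to-mean step ratio (or a fitted fractal dimension) between simulated classes and GPS fixes is the kind of protocol the study applies.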
Metrics for comparing dynamic earthquake rupture simulations
Barall, Michael; Harris, Ruth A.
2014-01-01
Earthquakes are complex events that involve a myriad of interactions among multiple geologic features and processes. One of the tools that is available to assist with their study is computer simulation, particularly dynamic rupture simulation. A dynamic rupture simulation is a numerical model of the physical processes that occur during an earthquake. Starting with the fault geometry, friction constitutive law, initial stress conditions, and assumptions about the condition and response of the near‐fault rocks, a dynamic earthquake rupture simulation calculates the evolution of fault slip and stress over time as part of the elastodynamic numerical solution (Ⓔ see the simulation description in the electronic supplement to this article). The complexity of the computations in a dynamic rupture simulation makes it challenging to verify that the computer code is operating as intended, because there are no exact analytic solutions against which these codes’ results can be directly compared. One approach for checking if dynamic rupture computer codes are working satisfactorily is to compare each code’s results with the results of other dynamic rupture codes running the same earthquake simulation benchmark. To perform such a comparison consistently, it is necessary to have quantitative metrics. In this paper, we present a new method for quantitatively comparing the results of dynamic earthquake rupture computer simulation codes.
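One generic quantitative metric for code-to-code comparison is a normalized RMS misfit between matched output time series; this is a common illustration of the idea, not the specific metrics proposed in the paper:

```python
import numpy as np

def rms_misfit(a, b):
    """Normalized RMS difference between two simulated time series,
    e.g. slip rate at one on-fault receiver from two rupture codes."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2) / np.mean(a ** 2)))

t = np.linspace(0.0, 10.0, 1001)
code_a = np.exp(-(t - 5.0) ** 2)     # reference code: a slip-rate pulse
code_b = np.exp(-(t - 5.1) ** 2)     # second code: slightly shifted pulse
print(round(rms_misfit(code_a, code_b), 3))
```

A misfit of zero means the codes agree exactly; values near one mean the difference is as large as the signal itself, so a benchmark exercise would report this number per station and per output field.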
Zhang, K.; Ghobadian, A.; Nouri, J. M.
2017-01-01
A comparative study of two combustion models, one based on a non-premixed assumption and one on a partially premixed assumption, using the overall models of the Zimont Turbulent Flame Speed Closure method (ZTFSC) and the Extended Coherent Flamelet Method (ECFM), is conducted through Reynolds stress turbulence modelling of the Tay model gas turbine combustor for the first time. The Tay model combustor retains all essential features of a realistic gas turbine combustor. It is seen that the non-premixed combustion model fa...
DEFF Research Database (Denmark)
Nielsen, Steen
2000-01-01
This paper expands the traditional product costing technique by including a stochastic form in a complex production process for product costing. The stochastic phenomenon in flexible manufacturing technologies is seen as an important phenomenon that companies try to decrease or eliminate. DFM has...... been used for evaluating the appropriateness of the firm's production capability. In this paper a simulation model is developed to analyze the relevant cost behaviour with respect to DFM and to develop a more streamlined process in the layout of the manufacturing process....
Belle, E M S; Benazzo, A; Ghirotto, S; Colonna, V; Barbujani, G
2009-03-01
Populations of anatomically archaic (Neandertal) and early modern (Cro-Magnoid) humans are jointly documented in the European fossil record, in the period between 40 000 and 25 000 years BP, but the large differences between their cultures, morphologies and DNAs suggest that the two groups were not close relatives. However, it is still unclear whether any genealogical continuity between them can be ruled out. Here, we simulated a broad range of demographic scenarios by means of a serial coalescence algorithm in which Neandertals, Cro-Magnoids and modern Europeans were either part of the same mitochondrial genealogy or of two separate genealogies. Mutation rates, population sizes, population structure and demographic growth rates varied across simulations. All models in which anatomically modern (that is, Cro-Magnoid and current) Europeans belong to a distinct genealogy performed better than any model in which the three groups were assigned to the same mitochondrial genealogy. The maximum admissible level of gene flow between Neandertals and the ancestors of current Europeans is 0.001% per generation, one order of magnitude lower than estimated in previous studies not considering genetic data on Cro-Magnoid people.
Energy Technology Data Exchange (ETDEWEB)
Osborn, Timothy J.; Briffa, Keith R. [University of East Anglia, Climatic Research Unit, School of Environmental Sciences, Norwich (United Kingdom); Raper, Sarah C.B. [University of East Anglia, Climatic Research Unit, School of Environmental Sciences, Norwich (United Kingdom); Manchester Metropolitan University, Dalton Research Institute, Manchester (United Kingdom)
2006-08-15
An intercomparison of eight climate simulations, each driven with estimated natural and anthropogenic forcings for the last millennium, indicates that the so-called "Erik" simulation of the ECHO-G coupled ocean-atmosphere climate model exhibits atypical behaviour. The ECHO-G simulation has a much stronger cooling trend from 1000 to 1700 and a higher rate of warming since 1800 than the other simulations, with the result that the overall amplitude of millennial-scale temperature variations in the ECHO-G simulation is much greater than in the other models. The MAGICC (Model for the Assessment of Greenhouse-gas-Induced Climate Change) simple climate model is used to investigate possible causes of this atypical behaviour. It is shown that disequilibrium in the initial conditions probably contributes spuriously to the cooling trend in the early centuries of the simulation, and that the omission of tropospheric sulphate aerosol forcing is the likely explanation for the anomalously large recent warming. The simple climate model results are used to adjust the ECHO-G Erik simulation to mitigate these effects, which brings the simulation into better agreement with the other seven models considered here and greatly reduces the overall range of temperature variations during the last millennium simulated by ECHO-G. Smaller inter-model differences remain which can probably be explained by a combination of the particular forcing histories and model sensitivities of each experiment. These have not been investigated here, though we have diagnosed the effective climate sensitivity of ECHO-G to be 2.39±0.11 K for a doubling of CO₂. (orig.)
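The role of a simple climate model such as MAGICC can be illustrated with a zero-dimensional energy-balance equation; this is a sketch in the spirit of such models, not the MAGICC code, the heat capacity is an assumed value, and only the 2.39 K sensitivity is taken from the text:

```python
def energy_balance(forcing, sensitivity=2.39, f2x=3.7,
                   heat_capacity=8.36, dt_years=1.0):
    """Zero-dimensional energy-balance model:
        C * dT/dt = F(t) - (f2x / sensitivity) * T
    forcing in W m^-2, heat_capacity in W yr m^-2 K^-1 (assumed);
    equilibrium warming for F = f2x equals the climate sensitivity."""
    lam = f2x / sensitivity              # feedback parameter, W m^-2 K^-1
    temp, series = 0.0, []
    for f in forcing:                    # forward Euler in yearly steps
        temp += dt_years * (f - lam * temp) / heat_capacity
        series.append(temp)
    return series

# Constant 2xCO2 forcing: warming relaxes toward the 2.39 K sensitivity
temps = energy_balance([3.7] * 500)
print(round(temps[-1], 2))               # prints 2.39
```

Feeding such a model the millennium forcing series, with and without an initial disequilibrium or a sulphate aerosol term, is how effects like those diagnosed for ECHO-G can be separated cheaply.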
International Nuclear Information System (INIS)
Liu Huigen; Zhou Jilin; Wang Su
2011-01-01
During the late stage of planet formation, when Mars-sized cores appear, interactions among planetary cores can excite their orbital eccentricities, accelerate their merging, and thus sculpt their final orbital architecture. This study contributes to the final assembling of planetary systems with N-body simulations, including the type I or II migration of planets and gas accretion of massive cores in a viscous disk. Statistics on the final distributions of planetary masses, semimajor axes, and eccentricities are derived and are comparable to those of the observed systems. Our simulations predict some new orbital signatures of planetary systems around solar mass stars: 36% of the surviving planets are giant planets (>10 M⊕). Most of the massive giant planets (>30 M⊕) are located at 1-10 AU. Terrestrial planets are distributed more or less evenly at J in highly eccentric orbits (e > 0.3-0.4). The average eccentricity (∼0.15) of the giant planets (>10 M⊕) is greater than that (∼0.05) of the terrestrial planets (<10 M⊕).  A planetary system with more planets tends to have smaller planet masses and orbital eccentricities on average.
McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P
2010-01-01
Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and an executable programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
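The antithetic-variates idea described above can be sketched directly; the lognormal-style output below is a hypothetical stand-in for a model output such as predicted QALYs, not the UKPDS 68 equations:

```python
import numpy as np

def mc_mean(f, n, antithetic=False, seed=0):
    """Monte Carlo estimate of E[f(Z)], Z ~ N(0,1). With antithetic
    variates each draw z is paired with -z; averaging f(z) and f(-z)
    cancels odd-order noise and reduces variance for monotone f."""
    rng = np.random.default_rng(seed)
    if antithetic:
        z = rng.standard_normal(n // 2)
        samples = 0.5 * (f(z) + f(-z))   # each pair yields one sample
    else:
        samples = f(rng.standard_normal(n))
    return samples.mean(), samples.var(ddof=1)

f = lambda z: np.exp(0.1 * z)        # hypothetical stand-in model output
_, var_plain = mc_mean(f, 10_000)
_, var_anti = mc_mean(f, 10_000, antithetic=True)
print(var_anti < var_plain)          # antithetic pairing cuts the variance
```

Because the per-sample variance drops, fewer replications are needed for the same precision, which is the roughly 53% reduction in run time the study reports.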
Nevison, C. D.; Saikawa, E.; Dlugokencky, E. J.; Andrews, A. E.; Sweeney, C.
2014-12-01
Atmospheric N2O concentrations have increased from 275 ppb in the preindustrial to about 325 ppb in recent years, a ~20% increase with important implications for both anthropogenic greenhouse forcing and stratospheric ozone recovery. This increase has been driven largely by synthetic fertilizer production and other perturbations to the global nitrogen cycle associated with human agriculture. Several recent regional atmospheric inversion studies have quantified North American agricultural N2O emissions using top-down constraints based on atmospheric N2O data from the National Oceanic and Atmospheric Administration (NOAA) Global Greenhouse Gas Reference Network, including surface, aircraft and tall tower platforms. These studies have concluded that global N2O inventories such as EDGAR may be underestimating the true U.S. anthropogenic N2O source by a factor of 3 or more. However, simple back-of-the-envelope calculations show that emissions of this magnitude are difficult to reconcile with the basic constraints of the global N2O budget. Here, we explore some possible reasons why regional atmospheric inversions might overestimate the U.S. agricultural N2O source. First, the seasonality of N2O agricultural sources is not well known, but can have an important influence on inversion results, particularly when the inversions are based on data that are concentrated in the spring/summer growing season. Second, boundary conditions can strongly influence regional inversions but the boundary conditions used may not adequately account for remote influences on surface data such as the seasonal stratospheric influx of N2O-depleted air. We will present a set of forward model simulations, using the Community Land Model (CLM) and two atmospheric chemistry tracer transport models, MOZART and the Whole Atmosphere Community Climate Model (WACCM), that examine the influence of terrestrial emissions and atmospheric chemistry and dynamics on atmospheric variability in N2O at U.S. and
Rossetti, Manuel D
2015-01-01
Emphasizes a hands-on approach to learning statistical analysis and model building through the use of comprehensive examples, problems sets, and software applications With a unique blend of theory and applications, Simulation Modeling and Arena®, Second Edition integrates coverage of statistical analysis and model building to emphasize the importance of both topics in simulation. Featuring introductory coverage on how simulation works and why it matters, the Second Edition expands coverage on static simulation and the applications of spreadsheets to perform simulation. The new edition als
Aviation Safety Simulation Model
Houser, Scott; Yackovetsky, Robert (Technical Monitor)
2001-01-01
The Aviation Safety Simulation Model is a software tool that enables users to configure a terrain, a flight path, and an aircraft and simulate the aircraft's flight along the path. The simulation monitors the aircraft's proximity to terrain obstructions, and reports when the aircraft violates accepted minimum distances from an obstruction. This model design facilitates future enhancements to address other flight safety issues, particularly air and runway traffic scenarios. This report shows the user how to build a simulation scenario and run it. It also explains the model's output.
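The proximity monitoring described can be sketched as a clearance check along discretized path fixes; the data layout, units (feet), and the 300 ft threshold below are hypothetical, not the tool's actual interface:

```python
def check_clearance(path, terrain, min_clearance=300.0):
    """Flag path points where the aircraft's altitude violates a
    minimum clearance above the terrain directly below it
    (simplified sketch of proximity monitoring; units assumed)."""
    violations = []
    for i, ((x, y, alt), ground) in enumerate(zip(path, terrain)):
        clearance = alt - ground
        if clearance < min_clearance:
            violations.append((i, clearance))
    return violations

path = [(0, 0, 1500), (1, 0, 1500), (2, 0, 1500)]   # (x, y, altitude) fixes
terrain = [200, 1300, 900]           # ground elevation under each fix
print(check_clearance(path, terrain))   # -> [(1, 200)]
```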
DEFF Research Database (Denmark)
Meier, H E Markus; Andersson, Helén C; Arheimer, Berit
2012-01-01
Multi-model ensemble simulations for the marine biogeochemistry and food web of the Baltic Sea were performed for the period 1850–2098, and projected changes in the future climate were compared with the past climate environment. For the past period 1850–2006, atmospheric, hydrological and nutrient...... forcings were reconstructed, based on historical measurements. For the future period 1961–2098, scenario simulations were driven by regionalized global general circulation model (GCM) data and forced by various future greenhouse gas emission and air- and riverborne nutrient load scenarios (ranging from...... a pessimistic ‘business-as-usual’ to the most optimistic case). To estimate uncertainties, different models for the various parts of the Earth system were applied. Assuming the IPCC greenhouse gas emission scenarios A1B or A2, we found that water temperatures at the end of this century may be higher...
Simulation in Complex Modelling
DEFF Research Database (Denmark)
Nicholas, Paul; Ramsgaard Thomsen, Mette; Tamke, Martin
2017-01-01
This paper will discuss the role of simulation in extended architectural design modelling. As a framing paper, the aim is to present and discuss the role of integrated design simulation and feedback between design and simulation in a series of projects under the Complex Modelling framework. Complex...... performance, engage with high degrees of interdependency and allow the emergence of design agency and feedback between the multiple scales of architectural construction. This paper presents examples for integrated design simulation from a series of projects including Lace Wall, A Bridge Too Far and Inflated...... Restraint developed for the research exhibition Complex Modelling, Meldahls Smedie Gallery, Copenhagen in 2016. Where the direct project aims and outcomes have been reported elsewhere, the aim for this paper is to discuss overarching strategies for working with design integrated simulation....
Scientific Modeling and simulations
Diaz de la Rubia, Tomás
2009-01-01
Showcases the conceptual advantages of modeling which, coupled with the unprecedented computing power through simulations, allow scientists to tackle the formidable problems of our society, such as the search for hydrocarbons, understanding the structure of a virus, or the intersection between simulations and real data in extreme environments
Computer Modeling and Simulation
Energy Technology Data Exchange (ETDEWEB)
Pronskikh, V. S. [Fermilab
2014-05-09
Verification and validation of computer codes and models used in simulation are two aspects of the scientific practice of high importance and have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model’s relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model’s general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.
Automated Simulation Model Generation
Huang, Y.
2013-01-01
One of today's challenges in the field of modeling and simulation is to model increasingly larger and more complex systems. Complex models take long to develop and incur high costs. With the advances in data collection technologies and more popular use of computer-aided systems, more data has become
Projective Simulation compared to reinforcement learning
Bjerland, Øystein Førsund
2015-01-01
This thesis explores the model of projective simulation (PS), a novel approach for an artificial intelligence (AI) agent. The model of PS learns by interacting with the environment it is situated in, and allows for simulating actions before real action is taken. The action selection is based on a random walk through the episodic & compositional memory (ECM), which is a network of clips that represent previous experienced percepts. The network takes percepts as inpu...
Comparative Validation of Building Simulation Software
DEFF Research Database (Denmark)
Kalyanova, Olena; Heiselberg, Per
The scope of this subtask is to perform a comparative validation of the building simulation software for buildings with the double skin façade. The outline of the results in the comparative validation identifies the areas where no correspondence is achieved, i.e. calculation of the air flow r...... is that the comparative validation can be regarded as the main argument to continue the validation of the building simulation software for the buildings with the double skin façade with the empirical validation test cases.
Multi-model ensemble simulations of tropospheric NO2 compared with GOME retrievals for the year 2000
Noije, van T.P.C.; Eskes, H.J.; Dentener, F.J.; Stevenson, D.S.; Ellingsen, K.; Schultz, M.G.; Wild, O.; Amann, M.; Atherton, C.S.; Bergmann, D.; Bey, I.; Boersma, K.F.; Butler, T.; Cofala, J.; Drevet, J.; Fiore, A.M.; Gauss, M.; Hauglustaine, D.A.; Horowitz, L.W.; Isaksen, I.S.A.; Krol, M.C.; Lamarque, J.F.; Lawrence, M.G.; Martin, R.V.; Montanaro, V.; Muller, J.F.; Pitari, G.; Prather, M.J.; Pyle, J.A.; Richter, A.; Rodriguez, J.M.; Savage, N.H.; Strahan, S.E.; Sudo, K.; Szopa, S.; Roozendael, van M.
2006-01-01
We present a systematic comparison of tropospheric NO2 from 17 global atmospheric chemistry models with three state-of-the-art retrievals from the Global Ozone Monitoring Experiment (GOME) for the year 2000. The models used constant anthropogenic emissions from IIASA/EDGAR3.2 and monthly emissions
Denadai, Rafael; Saad-Hossne, Rogerio; Raposo-Amaral, Cassio Eduardo
2014-11-01
To assess whether bench model fidelity interferes with the acquisition of rhomboid flap skills by medical students. Sixty novice medical students were randomly assigned to 5 practice conditions with instructor-directed Limberg rhomboid flap skills training: didactic materials (control group 1), low-fidelity rubberized line (group 2) or ethylene-vinyl acetate (group 3) bench models; high-fidelity chicken leg skin (group 4) or pig foot skin (group 5) bench models. Pretests and posttests were applied, and the Global Rating Scale, effect size, and self-perceived confidence were used to evaluate all flap performances. Medical students from groups 2 to 5 showed better flap performances based on the Global Rating Scale (all P < 0.05). The magnitude of the effect was considered large (>0.80) in all measurements. There was acquisition of rhomboid flap skills regardless of bench model fidelity.
AEGIS geologic simulation model
International Nuclear Information System (INIS)
Foley, M.G.
1982-01-01
The Geologic Simulation Model (GSM) is used by the AEGIS (Assessment of Effectiveness of Geologic Isolation Systems) program at the Pacific Northwest Laboratory to simulate the dynamic geology and hydrology of a geologic nuclear waste repository site over a million-year period following repository closure. The GSM helps to organize geologic/hydrologic data; to focus attention on active natural processes by requiring their simulation; and, through interactive simulation and calibration, to reduce subjective evaluations of the geologic system. During each computer run, the GSM produces a million-year geologic history that is possible for the region and the repository site. In addition, the GSM records in permanent history files everything that occurred during that time span. Statistical analyses of data in the history files of several hundred simulations are used to classify typical evolutionary paths, to establish the probabilities associated with deviations from the typical paths, and to determine which types of perturbations of the geologic/hydrologic system, if any, are most likely to occur. These simulations will be evaluated by geologists familiar with the repository region to determine the validity of the results. Perturbed systems that are determined to be the most realistic, within whatever probability limits are established, will be used for the analyses that involve radionuclide transport and dose models. The GSM is designed to be continuously refined and updated. Simulation models are site specific, and, although the submodels may have limited general applicability, the input data requirements necessitate detailed characterization of each site before application
Directory of Open Access Journals (Sweden)
Dańko R.
2015-12-01
Experiments on filling the cavity of model moulds of various inner shapes, inserted in the rectangular cavity of the casting die (dimensions: 280 mm (height) × 190 mm (width) × 10 mm (depth)), with model liquids of various density and viscosity are presented in the paper. The influence of die venting, as well as of inlet system area and inlet velocity, on the volumetric rate of filling of the model liquid was tested by filming the process in a cold-chamber casting die system. Comparisons of the experiments with the results of simulations performed by means of the calculation module Novacast (NovaFlow&Solid) for the selected casting conditions are also presented in the paper.
Validation of simulation models
DEFF Research Database (Denmark)
Rehman, Muniza; Pedersen, Stig Andur
2012-01-01
In the philosophy of science, interest in computational models and simulations has increased considerably during the past decades. Different positions regarding the validity of models have emerged, but these views have not succeeded in capturing the diversity of validation methods. The wide variety...
Fialho, André S; Oliveira, Mónica D; Sá, Armando B
2011-10-15
Recent reforms in Portugal aimed at strengthening the role of the primary care system, in order to improve the quality of the health care system. Since 2006 new policies aiming to change the organization, incentive structures and funding of the primary health care sector were designed, promoting the evolution of traditional primary health care centres (PHCCs) into a new type of organizational unit--family health units (FHUs). This study aimed to compare performances of PHCC and FHU organizational models and to assess the potential gains from converting PHCCs into FHUs. Stochastic discrete event simulation models for the two types of organizational models were designed and implemented using Simul8 software. These models were applied to data from nineteen primary care units in three municipalities of the Greater Lisbon area. The conversion of PHCCs into FHUs seems to have the potential to generate substantial improvements in productivity and accessibility, while not having a significant impact on costs. This conversion might entail a 45% reduction in the average number of days required to obtain a medical appointment and a 7% and 9% increase in the average number of medical and nursing consultations, respectively. Reorganization of PHCC into FHUs might increase accessibility of patients to services and efficiency in the provision of primary care services.
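The stochastic discrete event simulation approach described above can be illustrated with a minimal sketch (this is not the authors' Simul8 models; the queueing rule and all parameter values are hypothetical): patients arrive at random, queue first-come first-served, and are seen by the first available clinician, with mean waiting time as the performance measure.

```python
import heapq
import random

def simulate_clinic(n_patients, arrival_rate, service_rate, n_doctors, seed=0):
    """Minimal discrete-event simulation of a primary care unit:
    Poisson arrivals, exponential consultation times, and a first-come
    first-served queue shared by n_doctors clinicians.  Returns the
    mean patient waiting time."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    for _ in range(n_patients):
        t += rng.expovariate(arrival_rate)     # Poisson arrival process
        arrivals.append(t)
    free_at = [0.0] * n_doctors                # next time each clinician is free
    heapq.heapify(free_at)
    waits = []
    for arr in arrivals:
        soonest_free = heapq.heappop(free_at)  # first clinician to become free
        start = max(arr, soonest_free)         # wait only if none is free yet
        waits.append(start - arr)
        heapq.heappush(free_at, start + rng.expovariate(service_rate))
    return sum(waits) / len(waits)
```

Rerunning such a model with different staffing levels, consultation rates, or arrival rates mimics the kind of what-if comparison performed when contrasting PHCC and FHU configurations.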
Lurton, Thibaut; Jégou, Fabrice; Berthet, Gwenaël; Renard, Jean-Baptiste; Clarisse, Lieven; Schmidt, Anja; Brogniez, Colette; Roberts, Tjarda J.
2018-03-01
Volcanic eruptions impact climate through the injection of sulfur dioxide (SO2), which is oxidized to form sulfuric acid aerosol particles that can enhance the stratospheric aerosol optical depth (SAOD). Besides large-magnitude eruptions, moderate-magnitude eruptions such as Kasatochi in 2008 and Sarychev Peak in 2009 can have a significant impact on stratospheric aerosol and hence climate. However, uncertainties remain in quantifying the atmospheric and climatic impacts of the 2009 Sarychev Peak eruption due to limitations in previous model representations of volcanic aerosol microphysics and particle size, whilst biases have been identified in satellite estimates of post-eruption SAOD. In addition, the 2009 Sarychev Peak eruption co-injected hydrogen chloride (HCl) alongside SO2, whose potential stratospheric chemistry impacts have not been investigated to date. We present a study of the stratospheric SO2-particle-HCl processing and impacts following the Sarychev Peak eruption, using the Community Earth System Model version 1.0 (CESM1) Whole Atmosphere Community Climate Model (WACCM) - Community Aerosol and Radiation Model for Atmospheres (CARMA) sectional aerosol microphysics model (with no a priori assumption on particle size). The Sarychev Peak 2009 eruption injected 0.9 Tg of SO2 into the upper troposphere and lower stratosphere (UTLS), enhancing the aerosol load in the Northern Hemisphere. The post-eruption evolution of the volcanic SO2 in space and time is well reproduced by the model when compared to Infrared Atmospheric Sounding Interferometer (IASI) satellite data. Co-injection of 27 Gg HCl causes a lengthening of the SO2 lifetime and a slight delay in the formation of aerosols, and acts to enhance the destruction of stratospheric ozone and mono-nitrogen oxides (NOx) compared to the simulation with volcanic SO2 only. We therefore highlight the need to account for volcanic halogen chemistry when simulating the impact of eruptions such as Sarychev on
International Nuclear Information System (INIS)
Meier, H E Markus; Andersson, Helén C; Arheimer, Berit; Donnelly, Chantal; Eilola, Kari; Höglund, Anders; Kuznetsov, Ivan; Blenckner, Thorsten; Gustafsson, Bo G; Müller-Karulis, Bärbel; Niiranen, Susa; Chubarenko, Boris; Hansson, Anders; Havenhand, Jonathan; MacKenzie, Brian R; Neumann, Thomas; Piwowarczyk, Joanna; Raudsepp, Urmas; Reckermann, Marcus; Ruoho-Airola, Tuija
2012-01-01
Multi-model ensemble simulations for the marine biogeochemistry and food web of the Baltic Sea were performed for the period 1850–2098, and projected changes in the future climate were compared with the past climate environment. For the past period 1850–2006, atmospheric, hydrological and nutrient forcings were reconstructed, based on historical measurements. For the future period 1961–2098, scenario simulations were driven by regionalized global general circulation model (GCM) data and forced by various future greenhouse gas emission and air- and riverborne nutrient load scenarios (ranging from a pessimistic ‘business-as-usual’ to the most optimistic case). To estimate uncertainties, different models for the various parts of the Earth system were applied. Assuming the IPCC greenhouse gas emission scenarios A1B or A2, we found that water temperatures at the end of this century may be higher and salinities and oxygen concentrations may be lower than ever measured since 1850. There is also a tendency of increased eutrophication in the future, depending on the nutrient load scenario. Although cod biomass is mainly controlled by fishing mortality, climate change together with eutrophication may result in a biomass decline during the latter part of this century, even when combined with lower fishing pressure. Despite considerable shortcomings of state-of-the-art models, this study suggests that the future Baltic Sea ecosystem may change in ways unprecedented over the past 150 yr. As stakeholders today pay little attention to adaptation and mitigation strategies, more information is needed to raise public awareness of the possible impacts of climate change on marine ecosystems. (letter)
Comparative Toxicity of Simulated Smog Atmospheres in ...
Effects of complex regional multipollutant mixtures on disease expression in susceptible populations are dependent on multiple exposure and susceptibility factors. Differing profiles of ozone (O3), nitrogen dioxide (NO2), and particulate matter (PM), which are key components of smog, and other hazardous pollutants may develop as a result of regional-specific geographic and urban environment characteristics. We investigated the pulmonary effects of two smog mixtures with different compositions in a mouse model of allergic airway disease to determine which source profile had the greatest impact on pulmonary endpoints. A hydrocarbon mixture was combined with NO gas in the presence of UV light in a controlled setting. Simulated smog atmosphere 1 (SSA-1) consisted of concentrations of 1070 µg/m3 secondary organic aerosol (SOA), 0.104 ppm O3, and 0.252 ppm NO2. SSA-2 consisted of a starting concentration of 53 µg/m3 SOA, 0.376 ppm O3, and 0.617 ppm NO2. An increased aerosol concentration was noted in the exposure chamber. Healthy and house dust mite (HDM)-sensitized (allergic) female BALB/cJ mice were exposed 4 hr/day for 1 or 5 days to either smog mixture or clean air. Two days after HDM challenge, airway mechanics were tested in anesthetized ventilated mice. Following methacholine aerosol challenge, increased airway resistance and elastance and a decrease in lung compliance were consistently observed in air- and smog-exposed HDM-allergic groups compared with non-a
International Nuclear Information System (INIS)
Lee, M.J.; Sheppard, J.C.; Sullenberger, M.; Woodley, M.D.
1983-09-01
On-line mathematical models have been used successfully for computer controlled operation of SPEAR and PEP. The same model control concept is being implemented for the operation of the LINAC and for the Damping Ring, which will be part of the Stanford Linear Collider (SLC). The purpose of this paper is to describe the general relationships between models, simulations and the control system for any machine at SLAC. The work we have done on the development of the empirical model for the Damping Ring will be presented as an example
PSH Transient Simulation Modeling
Energy Technology Data Exchange (ETDEWEB)
Muljadi, Eduard [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2017-12-21
PSH Transient Simulation Modeling presentation from the WPTO FY14 - FY16 Peer Review. Transient effects are an important consideration when designing a PSH system, yet numerical techniques for hydraulic transient analysis still need improvements for adjustable-speed (AS) reversible pump-turbine applications.
Schoenthaler, Martin; Schnell, Daniel; Wilhelm, Konrad; Schlager, Daniel; Adams, Fabian; Hein, Simon; Wetterauer, Ulrich; Miernik, Arkadiusz
2016-04-01
To compare task performances of novices and experts using advanced high-definition 3D versus 2D optical systems in a surgical simulator model. Fifty medical students (novices in laparoscopy) were randomly assigned to perform five standardized tasks adopted from the Fundamentals of Laparoscopic Surgery (FLS) curriculum in either a 2D or 3D laparoscopy simulator system. In addition, eight experts performed the same tasks. Task performances were evaluated using a validated scoring system of the SAGES/FLS program. Participants were asked to rate 16 items in a questionnaire. Overall task performance of novices was significantly better using stereoscopic visualization. Superiority of performances in 3D reached a level of significance for the peg transfer and precision cutting tasks. No significant differences were noted in performances of experts when using either 2D or 3D. Overall performances of experts compared to novices were better in both 2D and 3D. Scores in the questionnaires showed a tendency toward lower scores in the group of novices using 3D. Stereoscopic imaging significantly improves performance of laparoscopic phantom tasks of novices. The current study confirms earlier data based on a large number of participants and a standardized task and scoring system. Participants felt more confident and comfortable when using a 3D laparoscopic system. However, the question remains open whether these findings translate into faster and safer operations in a clinical setting.
A Process for Comparing Dynamics of Distributed Space Systems Simulations
Cures, Edwin Z.; Jackson, Albert A.; Morris, Jeffery C.
2009-01-01
The paper describes a process that was developed for comparing the primary orbital dynamics behavior between space systems distributed simulations. This process is used to characterize and understand the fundamental fidelities and compatibilities of the modeling of orbital dynamics between spacecraft simulations. This is required for high-latency distributed simulations such as NASA's Integrated Mission Simulation and must be understood when reporting results from simulation executions. This paper presents 10 principal comparison tests along with their rationale and examples of the results. The Integrated Mission Simulation (IMSim) (formerly known as the Distributed Space Exploration Simulation (DSES)) is a NASA research and development project focusing on the technologies and processes that are related to the collaborative simulation of complex space systems involved in the exploration of our solar system. Currently, the NASA centers that are actively participating in the IMSim project are the Ames Research Center, the Jet Propulsion Laboratory (JPL), the Johnson Space Center (JSC), the Kennedy Space Center, the Langley Research Center and the Marshall Space Flight Center. In concept, each center participating in IMSim has its own set of simulation models and environment(s). These simulation tools are used to build the various simulation products that are used for scientific investigation, engineering analysis, system design, training, planning, operations and more. Working individually, these production simulations provide important data to various NASA projects.
Simulation - modeling - experiment
International Nuclear Information System (INIS)
2004-01-01
After two workshops held in 2001 on the same topics, and in order to take stock of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop and dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); progress status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); perturbation methods and sensitivity analysis of nuclear data in reactor physics, with application to the VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F
Energy Technology Data Exchange (ETDEWEB)
Larsen, G.C.; Aagaard Madsen, H.; Larsen, T.J.; Troldborg, N.
2008-07-15
We present a consistent, physically based theory for the wake meandering phenomenon, which we consider of crucial importance for the overall description of wind turbine loadings in wind farms. In its present version the model is confined to single wake situations. The model philosophy does, however, have the potential to include also mutual wake interaction phenomena. The basic conjecture behind the dynamic wake meandering (DWM) model is that wake transportation in the atmospheric boundary layer is driven by the large scale lateral- and vertical turbulence components. Based on this conjecture a stochastic model of the downstream wake meandering is formulated. In addition to the kinematic formulation of the dynamics of the 'meandering frame of reference', models characterizing the mean wake deficit as well as the added wake turbulence, described in the meandering frame of reference, are an integral part of the DWM model complex. For design applications, the computational efficiency of wake deficit prediction is a key issue. A computationally low cost model is developed for this purpose. Likewise, the character of the added wake turbulence, generated by the up-stream turbine in the form of shed and trailed vorticity, has been approached by a simple semi-empirical model essentially based on an eddy viscosity philosophy. Contrary to previous attempts to model wake loading, the DWM approach allows for a unifying description in the sense that turbine power- and load aspects can be treated simultaneously. This capability is a direct and attractive consequence of the model being based on the underlying physical process, and it potentially opens the way for optimization of wind farm topology, of wind farm operation as well as of control strategies for the individual turbine. To establish an integrated modeling tool, the DWM methodology has been implemented in the aeroelastic code HAWC2, and example simulations of wake situations, from the small Tjaereborg wind farm, have
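The basic conjecture above — the wake advected downstream as a passive tracer while large-scale lateral turbulence displaces its centre — can be caricatured in a few lines (a toy sketch, not the DWM implementation in HAWC2; the Ornstein-Uhlenbeck process standing in for the large-scale lateral velocity and all parameter values are illustrative assumptions):

```python
import random

def meander_path(n_steps, dt, U, sigma_v, tau, seed=0):
    """Toy dynamic-wake-meandering path: the wake centre is advected
    downstream at mean wind speed U while a large-scale lateral velocity v,
    modelled as an Ornstein-Uhlenbeck process with standard deviation
    sigma_v and time scale tau, displaces it sideways."""
    rng = random.Random(seed)
    x = y = v = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        # Euler-Maruyama update of the large-scale lateral velocity
        v += dt / tau * (-v) + sigma_v * (2.0 * dt / tau) ** 0.5 * rng.gauss(0.0, 1.0)
        x += U * dt              # downstream transport
        y += v * dt              # lateral meandering of the wake centre
        path.append((x, y))
    return path
```

A path like this defines the 'meandering frame of reference' in which the mean wake deficit and the added wake turbulence are subsequently described.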
Weigel, Martin
2011-09-01
Over the last couple of years it has been realized that the vast computational power of graphics processing units (GPUs) could be harvested for purposes other than the video game industry. This power, which at least nominally exceeds that of current CPUs by large factors, results from the relative simplicity of the GPU architectures as compared to CPUs, combined with a large number of parallel processing units on a single chip. To benefit from this setup for general computing purposes, the problems at hand need to be prepared in a way to profit from the inherent parallelism and hierarchical structure of memory accesses. In this contribution I discuss the performance potential for simulating spin models, such as the Ising model, on GPU as compared to conventional simulations on CPU.
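As a point of reference for the spin simulations discussed, a single-spin-flip Metropolis update for the 2D Ising model looks as follows (a plain CPU sketch; GPU versions parallelize exactly this kind of sweep, typically via checkerboard decomposition of non-interacting sublattices):

```python
import math
import random

def metropolis_ising(L=16, beta=0.6, sweeps=200, seed=0):
    """Single-spin-flip Metropolis simulation of the 2D Ising model on an
    L x L periodic lattice (cold start); returns the mean |magnetization|
    per spin over the second half of the sweeps."""
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]       # ordered initial state
    mags = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            # sum over the four nearest neighbours (periodic boundaries)
            nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * spins[i][j] * nn       # energy cost of flipping (i, j)
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spins[i][j] = -spins[i][j]
        if sweep >= sweeps // 2:              # discard first half as burn-in
            m = sum(map(sum, spins)) / (L * L)
            mags.append(abs(m))
    return sum(mags) / len(mags)
```

Below the critical temperature (beta above roughly 0.44) the magnetization stays close to 1; well above it, it collapses toward 0.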
Biomolecular modelling and simulations
Karabencheva-Christova, Tatyana
2014-01-01
Published continuously since 1944, the Advances in Protein Chemistry and Structural Biology series is the essential resource for protein chemists. Each volume brings forth new information about protocols and analysis of proteins. Each thematically organized volume is guest edited by leading experts in a broad range of protein-related topics. Describes advances in biomolecular modelling and simulations. Chapters are written by authorities in their field. Targeted to a wide audience of researchers, specialists, and students. The information provided in the volume is well supported by a number of high quality illustrations, figures, and tables.
Structured building model reduction toward parallel simulation
Energy Technology Data Exchange (ETDEWEB)
Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University
2013-08-26
Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any interprocessor communication.
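The clustering step — grouping strongly coupled thermal nodes and aggregating their parameters so that disjoint clusters can be simulated on separate processors — can be sketched as follows (a toy illustration of the idea, not the authors' structured reduction method; the merge-by-conductance-threshold rule is an assumption made for the example):

```python
def cluster_thermal_nodes(C, K, threshold):
    """Toy structured reduction of a thermal RC network: greedily merge
    nodes whose mutual conductance exceeds `threshold` into clusters, then
    aggregate capacitances (summed) and inter-cluster conductances (summed).
    C: list of node capacitances; K: symmetric conductance matrix."""
    n = len(C)
    parent = list(range(n))            # union-find over coupled node pairs
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if K[i][j] > threshold:
                parent[find(i)] = find(j)
    roots = sorted({find(i) for i in range(n)})
    index = {r: k for k, r in enumerate(roots)}
    m = len(roots)
    Cr = [0.0] * m
    Kr = [[0.0] * m for _ in range(m)]
    for i in range(n):
        Cr[index[find(i)]] += C[i]     # cluster capacitance is the sum
    for i in range(n):
        for j in range(n):
            a, b = index[find(i)], index[find(j)]
            if a != b:
                Kr[a][b] += K[i][j]    # conductances between clusters add up
    return Cr, Kr
```

The reduced capacitance vector and conductance matrix define a smaller dynamical system per cluster, which is the kind of object that can then be allocated to processing resources.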
Directory of Open Access Journals (Sweden)
Velíšek Karol
2017-01-01
Over the last years, there has been an increasing tendency and pressure toward faster implementation of robotic devices and systems in manufacturing. Such a transition involves several disciplines, starting with the prototyping of the CAD models themselves. The paper addresses the creation of CAD models and is mainly aimed at their integration in a given simulation environment according to the conception and guidelines of Industry 4.0, where the part itself becomes the entity carrying most of the needed information at any time of a production process. The creation of such CAD models is key for the further and better customization of simulations. In order to better exemplify all this, the paper describes the whole process of “virtual to real life implementation” of a given robotized workplace that needed to be developed at the Institute. The design of this robotized workplace included the use of an ABB IRB 120 robot and several other devices, which were all designed, simulated and analyzed in a virtual environment before the final development and implementation. This paper helps demonstrate the importance of having exactly the same model (real and virtual) with respect to the success of the offline simulations.
Coats, S.; Smerdon, J. E.; Stevenson, S.; Fasullo, J.; Otto-Bliesner, B. L.
2017-12-01
The observational record, which provides only limited sampling of past climate variability, has made it difficult to quantitatively analyze the complex spatio-temporal character of drought. To provide a more complete characterization of drought, machine learning based methods that identify drought in three-dimensional space-time are applied to climate model simulations of the last millennium and future, as well as tree-ring based reconstructions of hydroclimate over the Northern Hemisphere extratropics. A focus is given to the most persistent and severe droughts of the past 1000 years. Analyzing reconstructions and simulations in this context allows for a validation of the spatio-temporal character of persistent and severe drought in climate model simulations. Furthermore, the long records provided by the reconstructions and simulations allow for sufficient sampling to constrain projected changes to the spatio-temporal character of these features using the reconstructions. Along these lines, climate models suggest that there will be large increases in the persistence and severity of droughts over the coming century, but little change in their spatial extent. These models, however, exhibit biases in the spatio-temporal character of persistent and severe drought over parts of the Northern Hemisphere, which may undermine their usefulness for future projections. Despite these limitations, and in contrast to previous claims, there are no systematic changes in the character of persistent and severe droughts in simulations of the historical interval. This suggests that climate models are not systematically overestimating the hydroclimate response to anthropogenic forcing over this period, with critical implications for confidence in hydroclimate projections.
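The core of identifying drought events in three-dimensional space-time can be illustrated with a simple connected-component stand-in (the study uses machine learning based methods; the 6-connectivity and fixed-threshold rule here are only an assumed simplification): cells of a (time, lat, lon) hydroclimate array falling below a drought threshold are grouped into contiguous events.

```python
from collections import deque

def drought_events(field, threshold):
    """Group connected below-threshold cells of a (time, y, x) array into
    events using 6-connectivity (face neighbours in space and time).
    Returns a list of events, each a set of (t, y, x) cells."""
    nt, ny, nx = len(field), len(field[0]), len(field[0][0])
    seen = set()
    events = []
    for t in range(nt):
        for y in range(ny):
            for x in range(nx):
                if field[t][y][x] >= threshold or (t, y, x) in seen:
                    continue
                # breadth-first flood fill over the 6 face neighbours
                cells, queue = set(), deque([(t, y, x)])
                seen.add((t, y, x))
                while queue:
                    ct, cy, cx = queue.popleft()
                    cells.add((ct, cy, cx))
                    for dt, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                        n = (ct + dt, cy + dy, cx + dx)
                        if (0 <= n[0] < nt and 0 <= n[1] < ny
                                and 0 <= n[2] < nx and n not in seen
                                and field[n[0]][n[1]][n[2]] < threshold):
                            seen.add(n)
                            queue.append(n)
                events.append(cells)
    return events
```

Persistence, severity, and spatial extent of each event then follow directly, e.g. persistence as the number of distinct time slices an event spans.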
A simulation study comparing aberration detection algorithms for syndromic surveillance
Directory of Open Access Journals (Sweden)
Painter Ian
2007-03-01
Background: The usefulness of syndromic surveillance for early outbreak detection depends in part on effective statistical aberration detection. However, few published studies have compared different detection algorithms on identical data. In the largest simulation study conducted to date, we compared the performance of six aberration detection algorithms on simulated outbreaks superimposed on authentic syndromic surveillance data. Methods: We compared three control-chart-based statistics, two exponential weighted moving averages, and a generalized linear model. We simulated 310 unique outbreak signals, and added these to actual daily counts of four syndromes monitored by Public Health – Seattle and King County's syndromic surveillance system. We compared the sensitivity of the six algorithms at detecting these simulated outbreaks at a fixed alert rate of 0.01. Results: Stratified by baseline or by outbreak distribution, duration, or size, the generalized linear model was more sensitive than the other algorithms and detected 54% (95% CI = 52%–56%) of the simulated epidemics when run at an alert rate of 0.01. However, all of the algorithms had poor sensitivity, particularly for outbreaks that did not begin with a surge of cases. Conclusion: When tested on county-level data aggregated across age groups, these algorithms often did not perform well in detecting signals other than large, rapid increases in case counts relative to baseline levels.
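Of the algorithm families compared above, an exponentially weighted moving average detector is the simplest to sketch (a generic illustration; the smoothing weight, threshold multiplier, and baseline rule below are hypothetical choices, not those of the study):

```python
def ewma_alerts(counts, lam=0.3, k=3.0, warmup=28):
    """Flag days on which the exponentially weighted moving average of
    daily counts exceeds the running baseline mean by k running standard
    deviations.  `warmup` days initialize the baseline before alerting."""
    alerts = []
    ewma = None
    for day, c in enumerate(counts):
        # EWMA update: recent counts weighted by lam, history by (1 - lam)
        ewma = c if ewma is None else lam * c + (1 - lam) * ewma
        if day >= warmup:
            base = counts[:day]
            mean = sum(base) / len(base)
            var = sum((x - mean) ** 2 for x in base) / len(base)
            sd = var ** 0.5 or 1.0      # floor of 1 when baseline is flat
            if ewma > mean + k * sd:
                alerts.append(day)
    return alerts
```

Sensitivity at a fixed alert rate is then estimated by injecting simulated outbreak signals into authentic baseline counts and recording how many injected outbreaks are flagged.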
Directory of Open Access Journals (Sweden)
A. S. Komarov
2012-11-01
An individual-based simulation model, EFIMOD, was used to simulate the response of forest ecosystems to climate change and additional nitrogen deposition. The general scheme of the model includes forest growth depending on nitrogen uptake by plants and mineralization of soil organic matter. The mineralization rate is dependent on nitrogen content in litter and forest floor horizons. Three large forest areas in European Central Russia with a total area of about 17 000 km², in distinct environmental conditions, were chosen. Simulations were carried out with two climatic scenarios (ambient climate and climate change) and different levels of nitrogen deposition (ambient value and increases by 6 and 12 kg N ha⁻¹ yr⁻¹). The simulations showed that increased nitrogen deposition leads to increased productivity of trees, increased organic matter content in organic soil horizons, and an increased portion of deciduous tree species. For the climate change scenario, the same effects on forest productivity and similar shifts in species composition were predicted, but the accumulation of organic matter in soil was decreased.
Modeling and simulation of large HVDC systems
Energy Technology Data Exchange (ETDEWEB)
Jin, H.; Sood, V.K.
1993-01-01
This paper addresses the complexity and the amount of work in preparing simulation data and in implementing various converter control schemes and the excessive simulation time involved in modelling and simulation of large HVDC systems. The Power Electronic Circuit Analysis program (PECAN) is used to address these problems and a large HVDC system with two dc links is simulated using PECAN. A benchmark HVDC system is studied to compare the simulation results with those from other packages. The simulation time and results are provided in the paper.
Park, Jun; Hwang, Seung-On
2017-11-01
The impact of a spectral nudging technique for the dynamical downscaling of the summer surface air temperature in a high-resolution regional atmospheric model is assessed. The performance of this technique is measured by comparing 16 sets of analysis-driven simulations built from combinations of two shortwave radiation schemes and four land surface model schemes of the model, which are known to be crucial for the simulation of the surface air temperature. It is found that the application of spectral nudging to the outermost domain has a greater impact on the regional climate than any combination of shortwave radiation and land surface model physics schemes. The optimal choice of the two model physics parameterizations is helpful for obtaining more realistic spatiotemporal distributions of land surface variables such as the surface air temperature, precipitation, and surface fluxes. However, employing spectral nudging adds more value to the results; the improvement is greater than that from using sophisticated shortwave radiation and land surface model physical parameterizations. This result indicates that spectral nudging applied to the outermost domain provides a more accurate lateral boundary condition to the innermost domain when forced by analysis data, by securing consistency with the large-scale forcing over the regional domain. This in turn helps the two physical parameterizations to produce small-scale features closer to the observed values, leading to a better representation of the surface air temperature in a high-resolution downscaled climate.
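The essence of spectral nudging — relaxing only the largest scales of the regional solution toward the driving analysis while leaving smaller scales free to develop — can be sketched in one dimension (an illustrative toy, not the model's implementation; the wavenumber cutoff and nudging strength are assumptions):

```python
import numpy as np

def spectral_nudge(model, analysis, n_keep=3, alpha=0.1):
    """Relax only the largest scales of a periodic 1-D model field toward
    the driving analysis: nudge the lowest `n_keep` wavenumbers with
    strength `alpha`, leaving smaller scales untouched."""
    fm = np.fft.rfft(model)
    fa = np.fft.rfft(analysis)
    fm[:n_keep] += alpha * (fa[:n_keep] - fm[:n_keep])   # large scales only
    return np.fft.irfft(fm, n=len(model))
```

Applied every time step to the outermost domain, this keeps the downscaled fields consistent with the large-scale forcing while preserving the small-scale detail the high-resolution model adds, which is the effect the study finds to outweigh the choice of physics schemes.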
Operations planning simulation: Model study
1974-01-01
The use of simulation modeling for the identification of system sensitivities to internal and external forces and variables is discussed. The technique provides a means of exploring alternate system procedures and processes, so that these alternatives may be considered on a mutually comparative basis, permitting the selection of a mode or modes of operation which have potential advantages to the system user and the operator. These advantages, as measurements of system efficiency, are: (1) the ability to meet specific schedules for operations, mission or mission readiness requirements or performance standards, and (2) the ability to accomplish the objectives within cost-effective limits.
Notes on modeling and simulation
Energy Technology Data Exchange (ETDEWEB)
Redondo, Antonio [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-03-10
These notes present a high-level overview of how modeling and simulation are carried out by practitioners. The discussion is of a general nature; no specific techniques are examined but the activities associated with all modeling and simulation approaches are briefly addressed. There is also a discussion of validation and verification and, at the end, a section on why modeling and simulation are useful.
Siegfried, Robert
2014-01-01
Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing the structure and dynamics of agent-based models in detail. Based on this evaluation the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore he presents parallel and distributed simulation approaches for the execution of agent-based models, from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize underlying hardware...
Whole-building Hygrothermal Simulation Model
DEFF Research Database (Denmark)
Rode, Carsten; Grau, Karl
2003-01-01
An existing integrated simulation tool for dynamic thermal simulation of buildings was extended with a transient model for moisture release and uptake in building materials. Validation of the new model was begun with comparison against measurements in an outdoor test cell furnished with single materials. Almost quasi-steady, cyclic experiments were used to compare the indoor humidity variation and the numerical results of the integrated simulation tool with the new moisture model. Except for the case with chipboard as furnishing, the predictions of indoor humidity with the detailed model were...
Simulation Model of a Transient
DEFF Research Database (Denmark)
Jauch, Clemens; Sørensen, Poul; Bak-Jensen, Birgitte
2005-01-01
This paper describes the simulation model of a controller that enables an active-stall wind turbine to ride through transient faults. The simulated wind turbine is connected to a simple model of a power system. Certain fault scenarios are specified and the turbine shall be able to sustain operation...
Simulated annealing model of acupuncture
Shang, Charles; Szu, Harold
2015-05-01
The growth control singularity model suggests that acupuncture points (acupoints) originate from organizers in embryogenesis. Organizers are singular points in growth control. Acupuncture can cause perturbation of a system with effects similar to simulated annealing. In a clinical trial, the goal of a treatment is to relieve a certain disorder, which corresponds to reaching a certain local optimum in simulated annealing. The self-organizing effect of the system is limited and related to the person's general health and age. Perturbation at acupoints can lead to a stronger local excitation (analogous to a higher annealing temperature) compared to perturbation at non-singular points (placebo control points). Such differences diminish as the number of perturbed points increases, due to the wider distribution of the limited self-organizing activity. This model explains the following facts from systematic reviews of acupuncture trials: (1) properly chosen single-acupoint treatment for certain disorders can lead to highly repeatable efficacy above placebo; (2) when multiple acupoints are used, the result can be highly repeatable if the patients are relatively healthy and young, but is usually mixed if the patients are old, frail, and have multiple disorders at the same time, as the number of local optima or comorbidities increases; (3) as the number of acupoints used increases, the efficacy difference between sham and real acupuncture often diminishes. The model predicts that the efficacy of acupuncture is negatively correlated with disease chronicity, severity, and patient age. This is the first biological-physical model of acupuncture which can predict and guide clinical acupuncture research.
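The annealing analogy can be made concrete with a toy sketch, which is not taken from the paper: a "stronger local excitation" corresponds to a higher starting temperature, which lets the search escape a local optimum that a weak, placebo-like perturbation cannot. The objective function and all parameters below are invented for illustration.

```python
import math
import random

def simulated_annealing(f, x0, t0, cooling=0.95, steps=400, seed=1):
    """Minimize f starting from x0; t0 sets the initial perturbation strength."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_f = fx
    t = t0
    for _ in range(steps):
        xn = x + rng.gauss(0.0, t)              # perturbation scales with temperature
        fn = f(xn)
        # always accept downhill moves; accept uphill moves with Boltzmann probability
        if fn < fx or rng.random() < math.exp((fx - fn) / max(t, 1e-12)):
            x, fx = xn, fn
            best_f = min(best_f, fx)
        t *= cooling                             # annealing schedule
    return best_f

# a multimodal "disorder landscape": shallow local optima, global optimum near x = 0
f = lambda x: 0.1 * x * x - math.cos(3.0 * x)

f_strong = simulated_annealing(f, x0=3.0, t0=2.0)    # strong excitation (acupoint-like)
f_weak = simulated_annealing(f, x0=3.0, t0=0.01)     # weak perturbation (control-like)
```

The run with the higher starting temperature should typically reach a deeper optimum than the weakly perturbed run, mirroring the acupoint-versus-control-point contrast the model proposes.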
Cognitive models embedded in system simulation models
International Nuclear Information System (INIS)
Siegel, A.I.; Wolf, J.J.
1982-01-01
If we are to discuss and consider cognitive models, we must first come to grips with two questions: (1) What is cognition; (2) What is a model. Presumably, the answers to these questions can provide a basis for defining a cognitive model. Accordingly, this paper first places these two questions into perspective. Then, cognitive models are set within the context of computer simulation models and a number of computer simulations of cognitive processes are described. Finally, pervasive issues are discussed vis-a-vis cognitive modeling in the computer simulation context
Whistler Observations on DEMETER Compared with Full Electromagnetic Wave Simulations
Compston, A. J.; Cohen, M.; Lehtinen, N. G.; Inan, U.; Linscott, I.; Said, R.; Parrot, M.
2014-12-01
Terrestrial Very Low Frequency (VLF) electromagnetic radiation, which strongly impacts the Van Allen radiation belt electron dynamics, is injected across the ionosphere into the Earth's plasmasphere from two primary sources: man-made VLF transmitters and lightning discharges. Numerical models of trans-ionospheric propagation of such waves remain unvalidated, and early models may have overestimated the absorption, hindering a comprehensive understanding of the global impact of VLF waves in the loss of radiation belt electrons. In an attempt to remedy the lack of accurate trans-ionospheric propagation models, we have used a full electromagnetic wave method (FWM) numerical code to simulate the propagation of lightning-generated whistlers into the magnetosphere and compared the results with whistlers observed on the DEMETER satellite and paired with lightning stroke data from the National Lightning Detection Network (NLDN). We have identified over 20,000 whistlers occurring in 14 different passes of DEMETER over the central United States during the summer of 2009, and 14,000 of those occurred within the 2000 km x 2000 km simulation grid we used. As shown in the attached figure, a histogram of the ratio of the simulated whistler energy to the measured whistler energy for the 14,000 whistlers we compared, the simulation tends to slightly underestimate the total whistler energy injected, by about 5 dB. However, the simulation underestimates the DEMETER measurements more as one gets further from the source lightning stroke; since the signal-to-noise ratio of more distant whistlers is smaller, additive noise in the DEMETER measurements (which of course is not accounted for in the model) may explain some of the observed discrepancy.
A model management system for combat simulation
Dolk, Daniel R.
1986-01-01
The design and implementation of a model management system to support combat modeling is discussed. Structured modeling is introduced as a formalism for representing mathematical models. A relational information resource dictionary system is developed which can accommodate structured models. An implementation is described. Structured modeling is then compared to Jackson System Development (JSD) as a methodology for facilitating discrete event simulation. JSD is currently better at representing...
General introduction to simulation models
DEFF Research Database (Denmark)
Hisham Beshara Halasa, Tariq; Boklund, Anette
2012-01-01
Monte Carlo simulation can be defined as a representation of real life systems to gain insight into their functions and to investigate the effects of alternative conditions or actions on the modeled system. Models are a simplification of a system. Most often, it is best to use experiments and field trials to investigate the effect of alternative conditions or actions on a specific system. Nonetheless, field trials are expensive and sometimes not possible to conduct, as in the case of foot-and-mouth disease (FMD). Instead, simulation models can be a good and cheap substitute for experiments and field trials. However, if simulation models are to be used, good quality input data must be available. To model FMD, several disease spread models are available. For this project, we chose three simulation models: Davis Animal Disease Spread (DADS), which has been upgraded to DTU-DADS, InterSpread Plus (ISP)...
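As a hedged illustration of the kind of Monte Carlo disease-spread experiment described here, the following is a generic chain-binomial SIR sketch; it is not DTU-DADS, InterSpread Plus, or any FMD-specific model, and all parameters are invented.

```python
import random

def sir_outbreak(n=300, i0=1, beta=0.3, gamma=0.1, days=200, rng=None):
    """One stochastic chain-binomial SIR run; returns the final number ever infected."""
    rng = rng or random.Random()
    s, i = n - i0, i0
    for _ in range(days):
        if i == 0:
            break                                     # epidemic has died out
        p_inf = 1.0 - (1.0 - beta / n) ** i           # per-susceptible daily infection prob.
        new_inf = sum(rng.random() < p_inf for _ in range(s))
        new_rec = sum(rng.random() < gamma for _ in range(i))
        s -= new_inf
        i += new_inf - new_rec
    return n - s

rng = random.Random(42)
sizes = [sir_outbreak(rng=rng) for _ in range(50)]    # 50 Monte Carlo replicates
mean_size = sum(sizes) / len(sizes)
```

Repeating the run many times yields a distribution of outbreak sizes, which is exactly the kind of output used to compare alternative control conditions or actions.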
Comparing simulation of plasma turbulence with experiment
International Nuclear Information System (INIS)
Ross, David W.; Bravenec, Ronald V.; Dorland, William; Beer, Michael A.; Hammett, G. W.; McKee, George R.; Fonck, Raymond J.; Murakami, Masanori; Burrell, Keith H.; Jackson, Gary L.; Staebler, Gary M.
2002-01-01
The direct quantitative correspondence between theoretical predictions and the measured plasma fluctuations and transport is tested by performing nonlinear gyro-Landau-fluid simulations with the GRYFFIN (or ITG) code [W. Dorland and G. W. Hammett, Phys. Fluids B 5, 812 (1993); M. A. Beer and G. W. Hammett, Phys. Plasmas 3, 4046 (1996)]. In an L-mode reference discharge in the DIII-D tokamak [J. L. Luxon and L. G. Davis, Fusion Technol. 8, 441 (1985)], which has relatively large fluctuations and transport, the turbulence is dominated by ion temperature gradient (ITG) modes. Trapped electron modes and impurity drift waves also play a role. Density fluctuations are measured by beam emission spectroscopy [R. J. Fonck, P. A. Duperrex, and S. F. Paul, Rev. Sci. Instrum. 61, 3487 (1990)]. Experimental fluxes and corresponding diffusivities are analyzed by the TRANSP code [R. J. Hawryluk, in Physics of Plasmas Close to Thermonuclear Conditions, edited by B. Coppi, G. G. Leotta, D. Pfirsch, R. Pozzoli, and E. Sindoni (Pergamon, Oxford, 1980), Vol. 1, p. 19]. The shape of the simulated wave number spectrum is close to the measured one. The simulated ion thermal transport, corrected for ExB flow shear, exceeds the experimental value by a factor of 1.5 to 2.0. The simulation overestimates the density fluctuation level by an even larger factor. On the other hand, the simulation underestimates the electron thermal transport, which may be accounted for by modes that are not accessible to the simulation or to the BES measurement.
Simulation - modeling - experiment; Simulation - modelisation - experience
Energy Technology Data Exchange (ETDEWEB)
NONE
2004-07-01
After two workshops held in 2001 on the same topics, and in order to make a status of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop and dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advance status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); methods of disturbances and sensitivity analysis of nuclear data in reactor physics, application to VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F
Comparing linear probability model coefficients across groups
DEFF Research Database (Denmark)
Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt
2015-01-01
This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters, and the distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.
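A minimal Monte Carlo sketch of the identification problem (my own illustration, not the authors' code) shows how two groups with the same latent effect but different residual scales yield different linear probability model slopes:

```python
import random

def lpm_slope(xs, ys):
    """OLS slope of the linear probability model: cov(x, y) / var(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    var = sum((x - mx) ** 2 for x in xs) / n
    return cov / var

def simulate_group(n, beta, sigma, rng):
    """Binary outcomes from a latent-variable model: y = 1 if beta*x + e > 0, e ~ N(0, sigma)."""
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    ys = [1.0 if beta * x + rng.gauss(0.0, sigma) > 0.0 else 0.0 for x in xs]
    return xs, ys

rng = random.Random(0)
# identical true effect (beta = 1) in both groups; only the residual scale differs
slope_a = lpm_slope(*simulate_group(20000, beta=1.0, sigma=1.0, rng=rng))
slope_b = lpm_slope(*simulate_group(20000, beta=1.0, sigma=2.0, rng=rng))
```

Both groups share the same structural effect, yet the group with the larger residual scale produces a visibly smaller LPM slope, which is the kind of spurious between-group difference the article analyzes.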
Boda, Christian-Nils; Dozza, Marco; Bohman, Katarina; Thalya, Prateek; Larsson, Annika; Lubbe, Nils
2018-02-01
Bicyclist fatalities are a great concern in the European Union. Most of them are due to crashes between motorized vehicles and bicyclists at unsignalised intersections. Different countermeasures are currently being developed and implemented in order to save lives. One type of countermeasure, active safety systems, requires a deep understanding of driver behaviour to be effective without being annoying. The current study provides new knowledge about driver behaviour which can inform assessment programmes for active safety systems such as Euro NCAP. This study investigated how drivers responded to bicyclists crossing their path at an intersection. The influences of car speed and cyclist speed on the driver response process were assessed for three different crossing configurations. The same experimental protocol was tested in a fixed-base driving simulator and on a test track. A virtual model of the test track was used in the driving simulator to keep the protocol as consistent as possible across testing environments. Results show that neither car speed nor bicycle speed directly influenced the response process. The crossing configuration did not directly influence the braking response process either, but it did influence the strategy chosen by the drivers to approach the intersection. The point in time when the bicycle became visible (which depended on the car speed, the bicycle speed, and the crossing configuration) and the crossing configuration alone had the largest effects on the driver response process. Dissimilarities between test-track and driving-simulator studies were found; however, there were also interesting similarities, especially in relation to the driver braking behaviour. Drivers followed the same strategy to initiate braking, independent of the test environment. On the other hand, the test environment affected participants' strategies for releasing the gas pedal and regulating deceleration. Finally, a mathematical model, based on both experiments
Directory of Open Access Journals (Sweden)
Amanda Swearingen
2015-07-01
Comparisons of the potential outcomes of multiple land management strategies and an understanding of the influence of potential increases in climate-related disturbances on these outcomes are essential for long-term land management and conservation planning. To provide these insights, we developed an approach that uses collaborative scenario development and state-and-transition simulation modeling to provide land managers and conservation practitioners with a comparison of potential landscapes resulting from alternative management scenarios and climate conditions, and we have applied this approach in the Wild Rivers Legacy Forest (WRLF) area in northeastern Wisconsin. Three management scenarios were developed with input from local land managers, scientists, and conservation practitioners: (1) continuation of current management, (2) expanded working forest conservation easements, and (3) cooperative ecological forestry. Scenarios were modeled under the current climate with contemporary probabilities of natural disturbance, and under increased probability of windthrow and wildfire that may result from climate change in this region. All scenarios were modeled for 100 years using the VDDT/TELSA modeling suite. Results showed that landscape composition and configuration were relatively similar among scenarios, and that management had a stronger effect than increased probability of windthrow and wildfire. These findings suggest that the scale of the landscape analysis used here and the lack of differences in predominant management strategies between ownerships in this region play significant roles in scenario outcomes. The approach used here does not rely on complex mechanistic modeling of uncertain dynamics and can therefore be used as a starting point for planning and further analysis.
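A state-and-transition simulation can be sketched in a few lines as a Markov chain over discrete vegetation states; the states and probabilities below are hypothetical and are not taken from the WRLF study or the VDDT/TELSA suite.

```python
import random

def run_cell(transitions, state, years, rng):
    """Advance one landscape cell through a state-and-transition model for `years` steps."""
    for _ in range(years):
        r, acc = rng.random(), 0.0
        for nxt, p in transitions[state].items():
            acc += p
            if r < acc:
                state = nxt
                break
    return state

# hypothetical three-state model: young -> mature -> old growth, with disturbance resets
transitions = {
    "young":  {"young": 0.90, "mature": 0.10},
    "mature": {"mature": 0.93, "old": 0.05, "young": 0.02},  # 2 % yearly windthrow/fire reset
    "old":    {"old": 0.96, "young": 0.04},
}
rng = random.Random(7)
final = [run_cell(transitions, "young", 100, rng) for _ in range(1000)]
share_old = final.count("old") / len(final)
```

Raising the disturbance probabilities in the `transitions` table is the analogue of the increased-windthrow-and-wildfire climate scenario: the resulting shift in `share_old` measures how landscape composition responds.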
Koch, Jon; Borg, John; Mattson, Abby; Olsen, Kris; Bahcall, James
2012-01-01
Objective. This in vitro study compared the flow pattern and shear stress of an irrigant induced by ultrasonic and polymer rotary finishing file activation in an acrylic root canal model. Flow visualization analysis was performed using an acrylic canal filled with a mixture of distilled water and rheoscopic fluid. The ultrasonic and polymer rotary finishing file were separately tested in the canal and activated in a static position and in a cyclical axial motion (up and down). Particle movement...
An international effort to compare gas hydrate reservoir simulators
Energy Technology Data Exchange (ETDEWEB)
Wilder, J.W. [Akron Univ., Akron, OH (United States). Dept. of Theoretical and Applied Math; Moridis, G.J. [California Univ., Berkeley, CA (United States). Earth Sciences Div., Lawrence Berkeley National Lab.; Wilson, S.J. [Ryder Scott Co., Denver, CO (United States); Kurihara, M. [Japan Oil Engineering Co. Ltd., Tokyo (Japan); White, M.D. [Pacific Northwest National Laboratory Hydrology Group, Richland, WA (United States); Masuda, Y. [Tokyo Univ., Tokyo (Japan). Dept. of Geosystem Engineering; Anderson, B.J. [National Energy Technology Lab., Morgantown, WV (United States)]|[West Virginia Univ., Morgantown, WV (United States). Dept. of Chemical Engineering; Collett, T.S. [United States Geological Survey, Denver, CO (United States); Hunter, R.B. [ASRC Energy Services, Anchorage, AK (United States); Narita, H. [National Inst. of Advanced Industrial Science and Technology, Methane Hydrate Research Lab., Sapporo (Japan); Pooladi-Darvish, M. [Fekete Associates Inc., Calgary, AB (Canada); Rose, K.; Boswell, R. [National Energy Technology Lab., Morgantown, WV (United States)
2008-07-01
In this study, 5 different gas hydrate production scenarios were modeled by the CMG STARS, HydrateResSim, MH-21 HYDRES, STOMP-HYD and TOUGH+HYDRATE reservoir simulators for comparative purposes. The 5 problems ranged in complexity from one- to three-dimensional with radial symmetry, and in horizontal dimensions from 20 meters to 1 kilometer. The scenarios included (1) a base case with non-isothermal multi-fluid transition to equilibrium, (2) a base case with gas hydrate (closed-domain hydrate dissociation), (3) dissociation in a 1-D open domain, (4) gas hydrate dissociation in a one-dimensional radial domain, with similarity solutions, and (5) gas hydrate dissociation in a two-dimensional radial domain. The purpose of the study was to compare the world's leading gas hydrate reservoir simulators in an effort to improve the simulation capability for experimental and naturally occurring gas hydrate accumulations. The problem description and simulation results were presented for each scenario. The results of the first scenario indicated very close agreement among the simulators, suggesting that all address the basics of mass and heat transfer, as well as the overall process of gas hydrate dissociation. The third scenario produced the initial divergence among the simulators. Other differences were noted in both scenarios 4 and 5, resulting in significant corrections to algorithms within several of the simulators. The authors noted that it is unlikely that these improvements would have been identified without this comparative study, due to a lack of real-world data for validation purposes. It was concluded that the solution for gas hydrate production involves a combination of highly coupled fluid, heat and mass transport equations combined with the potential for formation or disappearance of multiple solid phases in the system. The physical and chemical properties of the rocks containing the gas hydrate depend on the amount of gas hydrate present in the system. Each modeling and
ECONOMIC MODELING STOCKS CONTROL SYSTEM: SIMULATION MODEL
Климак, М.С.; Войтко, С.В.
2016-01-01
Theoretical and applied aspects of the development of simulation models to predict the optimal development of production systems that create tangible products and services are considered. It is proved that the process of inventory control needs economic and mathematical modeling in view of the complexity of theoretical studies. A simulation model of stocks control is presented that allows management decisions to be made within production logistics.
Directory of Open Access Journals (Sweden)
Edlund Stefan
2012-09-01
Background: The role of the Anopheles vector in malaria transmission and the effect of climate on Anopheles populations are well established. Models of the impact of climate change on the global malaria burden now have access to high-resolution climate data, but malaria surveillance data tend to be less precise, making model calibration problematic. Measurement of the malaria response to fluctuations in climate variables offers a way to address these difficulties. Given the demonstrated sensitivity of malaria transmission to vector capacity, this work tests response functions to fluctuations in land surface temperature and precipitation. Methods: This study of the regional sensitivity of malaria incidence to year-to-year climate variations used an extended Macdonald-Ross compartmental disease model (to compute malaria incidence) built on top of a global Anopheles vector capacity model (based on 10 years of satellite climate data). The predicted incidence was compared with estimates from the World Health Organization and the Malaria Atlas. The models and denominator data used are freely available through the Eclipse Foundation’s Spatiotemporal Epidemiological Modeller (STEM). Results: Although the absolute scale factor relating reported malaria to absolute incidence is uncertain, there is a positive correlation between predicted and reported year-to-year variation in malaria burden, with an averaged root mean square (RMS) error of 25% comparing normalized incidence across 86 countries. Based on this, the proposed measure of the sensitivity of malaria to variations in climate variables indicates locations where malaria is most likely to increase or decrease in response to specific climate factors. Bootstrapping measures the increased uncertainty in predicting malaria sensitivity when reporting is restricted to the national level and an annual basis. Results indicate a potential 20x improvement in accuracy if data were available at the level ISO 3166–2
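The classic Ross-Macdonald compartmental core that such an extended model builds on can be sketched as two coupled equations for the infected fractions of humans and mosquitoes; the parameter values below are illustrative assumptions, not those used in STEM.

```python
def ross_macdonald(m=10.0, a=0.3, b=0.5, c=0.5, r=0.01, g=0.1,
                   ih=0.01, im=0.0, dt=0.1, days=2000.0):
    """Euler-integrate the Ross-Macdonald equations.

    m: mosquitoes per human, a: biting rate, b/c: transmission probabilities,
    r: human recovery rate, g: mosquito mortality rate.
    Returns the infected fractions (humans, mosquitoes) at the end of the run.
    """
    for _ in range(int(days / dt)):
        dih = m * a * b * im * (1.0 - ih) - r * ih    # new human infections - recoveries
        dim = a * c * ih * (1.0 - im) - g * im        # new mosquito infections - deaths
        ih += dt * dih
        im += dt * dim
    return ih, im

ih_eq, im_eq = ross_macdonald()
r0 = (10.0 * 0.3 ** 2 * 0.5 * 0.5) / (0.01 * 0.1)     # basic reproduction number
```

Climate enters such a model through the vector parameters (e.g. m, a, g respond to temperature and precipitation), which is how year-to-year climate fluctuations propagate into simulated incidence.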
Progress in modeling and simulation.
Kindler, E
1998-01-01
For the modeling of systems, computers are more and more used, while the other "media" (including the human intellect) carrying the models are abandoned. For the modeling of knowledge, i.e. of more or less general concepts (possibly used to model systems composed of instances of such concepts), object-oriented programming is nowadays widely used. For the modeling of processes existing and developing in time, computer simulation is used, the results of which are often presented by means of animation (graphical pictures moving and changing in time). Unfortunately, object-oriented programming tools are commonly not designed to be of great use for simulation, while programming tools for simulation do not enable their users to apply the advantages of object-oriented programming. Nevertheless, there are exceptions that enable general concepts represented on a computer to be used for constructing simulation models and for their easy modification. They are described in the present paper, together with proper definitions of modeling, simulation and object-oriented programming (including cases that do not satisfy the definitions but risk introducing misunderstanding), an outline of their applications, and of their further development. Given that computing systems are being introduced as control components into a large spectrum of (technological, social and biological) systems, attention is oriented to models of systems containing modeling components.
Comparing numerically exact and modelled static friction
Directory of Open Access Journals (Sweden)
Krengel Dominik
2017-01-01
Currently there exists no mechanically consistent "numerically exact" implementation of static and dynamic Coulomb friction for general soft particle simulations with arbitrary contact situations in two or three dimensions, but only along one dimension. We outline a differential-algebraic equation approach for a "numerically exact" computation of friction in two dimensions and compare its application to the Cundall-Strack model in some test cases.
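For contrast with the "numerically exact" approach, the Cundall-Strack model mentioned above regularizes friction with a tangential spring whose force is capped by the Coulomb limit. A minimal one-contact sketch (all parameters invented):

```python
def tangential_force_history(kt, mu, fn, increments):
    """Cundall-Strack tangential force: elastic spring capped at the Coulomb limit mu*fn."""
    ft, history = 0.0, []
    for ds in increments:                 # incremental tangential displacements
        ft += kt * ds                     # trial elastic (stick) force
        limit = mu * fn
        ft = max(-limit, min(limit, ft))  # slip: clamp the force to the Coulomb limit
        history.append(ft)
    return history

# ten equal displacement increments: the force ramps up elastically, then saturates (slip)
h = tangential_force_history(kt=100.0, mu=0.5, fn=10.0, increments=[0.01] * 10)
```

The history shows the stick phase (force growing linearly with displacement) followed by the slip phase (force pinned at mu*fn), which is exactly the regularized behaviour the "numerically exact" formulation avoids.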
Stochastic modeling analysis and simulation
Nelson, Barry L
1995-01-01
A coherent introduction to the techniques for modeling dynamic stochastic systems, this volume also offers a guide to the mathematical, numerical, and simulation tools of systems analysis. Suitable for advanced undergraduates and graduate-level industrial engineers and management science majors, it proposes modeling systems in terms of their simulation, regardless of whether simulation is employed for analysis. Beginning with a view of the conditions that permit a mathematical-numerical analysis, the text explores Poisson and renewal processes, Markov chains in discrete and continuous time, se
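The volume's starting point, the Poisson process, is also the easiest to simulate: exponential interarrival times yield Poisson-distributed counts. A small stdlib-only sketch (parameters arbitrary):

```python
import random

def poisson_counts(rate, horizon, runs, seed=0):
    """Count events of a Poisson process on [0, horizon] by summing exponential gaps."""
    rng = random.Random(seed)
    counts = []
    for _ in range(runs):
        t, n = 0.0, 0
        while True:
            t += rng.expovariate(rate)    # exponential interarrival time
            if t > horizon:
                break
            n += 1
        counts.append(n)
    return counts

counts = poisson_counts(rate=2.0, horizon=10.0, runs=5000)
mean_count = sum(counts) / len(counts)    # theory: rate * horizon = 20
```

This illustrates the book's theme of modeling a system directly in terms of its simulation: the empirical mean of the counts can be checked against the analytical value rate * horizon.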
FASTBUS simulation models in VHDL
International Nuclear Information System (INIS)
Appelquist, G.
1992-11-01
Four hardware simulation models implementing the FASTBUS protocol are described. The models are written in the VHDL hardware description language to obtain portability, i.e. without relations to any specific simulator. They include two complete FASTBUS devices, a full-duplex segment interconnect and ancillary logic for the segment. In addition, master and slave models using a high level interface to describe FASTBUS operations, are presented. With these models different configurations of FASTBUS systems can be evaluated and the FASTBUS transactions of new devices can be verified. (au)
Model reduction for circuit simulation
Hinze, Michael; Maten, E Jan W Ter
2011-01-01
Simulation based on mathematical models plays a major role in computer aided design of integrated circuits (ICs). Decreasing structure sizes, increasing packing densities and driving frequencies require the use of refined mathematical models, and taking into account secondary, parasitic effects. This leads to very high dimensional problems which nowadays require simulation times too large for the short time-to-market demands in industry. Modern Model Order Reduction (MOR) techniques present a way out of this dilemma by providing surrogate models which keep the main characteristics of the device...
Modelling and Simulation of Wave Loads
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Thoft-Christensen, Palle
A simple model of the wave load on slender members of offshore structures is described. The wave elevation of the sea state is modelled by a stationary Gaussian process. A new procedure to simulate realizations of the wave loads is developed. The simulation method assumes that the wave particle velocity can be approximated by a Gaussian Markov process. Known approximate results for the first-passage density or, equivalently, the distribution of the extremes of wave loads are presented and compared with rather precise simulation results. It is demonstrated that the approximate results...
Modelling and Simulation of Wave Loads
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Thoft-Christensen, Palle
1985-01-01
A simple model of the wave load on slender members of offshore structures is described. The wave elevation of the sea state is modelled by a stationary Gaussian process. A new procedure to simulate realizations of the wave loads is developed. The simulation method assumes that the wave particle velocity can be approximated by a Gaussian Markov process. Known approximate results for the first-passage density or, equivalently, the distribution of the extremes of wave loads are presented and compared with rather precise simulation results. It is demonstrated that the approximate results...
Bot, G.P.A.
1989-01-01
A model is a representation of a real system used to describe some properties, i.e. internal factors, of that system (outputs) as a function of some external factors (inputs). It is impossible to describe the relation between all internal factors (if all internal factors could even be defined) and all
Lamb, Richard L.
2016-02-01
Within the last 10 years, new tools for assisting in the teaching and learning of academic skills and content within the context of science have arisen. These new tools include multiple types of computer software and hardware, including (video) games. The purpose of this study was to examine and compare the effect of computer learning games in the form of three-dimensional serious educational games, two-dimensional online laboratories, and traditional lecture-based instruction in the context of student content learning in science. In particular, this study examines the impact of dimensionality, or the ability to move along the X-, Y-, and Z-axes in the games. Study subjects (N = 551) were randomly selected using a stratified sampling technique. Independent strata subsamples were developed based upon the conditions of serious educational games, online laboratories, and lecture. The study also computationally models a potential mechanism of action and compares two- and three-dimensional learning environments. F-test results suggest a significant difference for the main effect of condition across the factor of content gain score, with a large effect. Overall, comparisons using computational models suggest that three-dimensional serious educational games increase the level of success in learning as measured with content examinations, through greater recruitment and attributional retraining of cognitive systems. The study supports assertions in the literature that the use of games in higher dimensions (i.e., three-dimensional versus two-dimensional) helps to increase student understanding of science concepts.
A VRLA battery simulation model
International Nuclear Information System (INIS)
Pascoe, Phillip E.; Anbuky, Adnan H.
2004-01-01
A valve regulated lead acid (VRLA) battery simulation model is an invaluable tool for the standby power system engineer. The obvious use for such a model is to allow the assessment of battery performance. This may involve determining the influence of cells suffering from state of health (SOH) degradation on the performance of the entire string, or the running of test scenarios to ascertain the most suitable battery size for the application. In addition, it enables the engineer to assess the performance of the overall power system. This includes, for example, running test scenarios to determine the benefits of various load shedding schemes. It also allows the assessment of other power system components, either for determining their requirements and/or vulnerabilities. Finally, a VRLA battery simulation model is vital as a stand-alone tool for educational purposes. Despite the fundamentals of the VRLA battery having been established for over 100 years, its operating behaviour is often poorly understood. An accurate simulation model enables the engineer to gain a better understanding of VRLA battery behaviour. A system-level multipurpose VRLA battery simulation model is presented. It allows an arbitrary battery (capacity, SOH, number of cells and number of strings) to be simulated under arbitrary operating conditions (discharge rate, ambient temperature, end voltage, charge rate and initial state of charge). The model accurately reflects the VRLA battery discharge and recharge behaviour. This includes the complex start-of-discharge region known as the coup de fouet.
Sensitivity Analysis of Simulation Models
Kleijnen, J.P.C.
2009-01-01
This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial
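The design-based approach the abstract refers to can be illustrated with a small sketch (entirely our own toy example, not taken from the overview): a 2^(3-1) resolution-III fractional factorial design estimates the main-effect coefficients of a first-order polynomial metamodel of a simulation, using only 4 runs instead of 8.

```python
import numpy as np

def fractional_factorial_2_3_1():
    """Half fraction of a 2^3 design: columns A, B, and the generator C = A*B
    (resolution III: main effects are aliased only with two-factor interactions)."""
    a = np.array([-1, 1, -1, 1])
    b = np.array([-1, -1, 1, 1])
    c = a * b
    return np.column_stack([a, b, c])

def toy_simulation(x):
    # stand-in for an expensive simulation: y = 10 + 2*x1 - 3*x2 + 0.5*x3
    return 10 + 2 * x[0] - 3 * x[1] + 0.5 * x[2]

X = fractional_factorial_2_3_1()
y = np.array([toy_simulation(row) for row in X])

# first-order polynomial metamodel y ~ b0 + b1 x1 + b2 x2 + b3 x3,
# fitted by least squares; with this orthogonal design the fit is exact
A = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(beta, 6))  # [10.  2. -3.  0.5]
```

Because the design columns are orthogonal, each coefficient is simply the signed average of the responses, which is what makes such designs attractive when each simulation run is costly.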
Computer Based Modelling and Simulation
Indian Academy of Sciences (India)
N K Srinivasan, "Computer Based Modelling and Simulation - Modelling Deterministic Systems", General Article, Resonance – Journal of Science Education, Volume 6, Issue 3, March 2001, pp. 46-54.
Vehicle dynamics modeling and simulation
Schramm, Dieter; Bardini, Roberto
2014-01-01
The authors examine in detail the fundamentals and mathematical descriptions of the dynamics of automobiles. In this context different levels of complexity will be presented, starting with basic single-track models up to complex three-dimensional multi-body models. A particular focus is on the process of establishing mathematical models on the basis of real cars and the validation of simulation results. The methods presented are explained in detail by means of selected application scenarios.
Numerical simulation of Higgs models
International Nuclear Information System (INIS)
Jaster, A.
1995-10-01
The SU(2) Higgs and the Schwinger model on the lattice were analysed. Numerical simulations of the SU(2) Higgs model were performed to study the finite temperature electroweak phase transition. With the help of the multicanonical method the distribution of an order parameter at the phase transition point was measured. This was used to obtain the order of the phase transition and the value of the interface tension with the histogram method. Numerical simulations were also performed at zero temperature to perform renormalization. The measured values for the Wilson loops were used to determine the static potential and from this the renormalized gauge coupling. The Schwinger model was simulated at different gauge couplings to analyse the properties of the Kaplan-Shamir fermions. The prediction that the mass parameter gets only multiplicative renormalization was tested and verified. (orig.)
Stochastic models: theory and simulation.
Energy Technology Data Exchange (ETDEWEB)
Field, Richard V., Jr.
2008-03-01
Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
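The sample-generation task the report describes can be sketched with a minimal example (the choice of process and all parameter values are our own, not the report's): Monte Carlo generation of sample paths of an Ornstein-Uhlenbeck process, a simple stationary stochastic model, using its exact one-step transition; the samples could then feed a deterministic simulation code as random inputs.

```python
import numpy as np

def ou_paths(n_paths, n_steps, dt=0.01, theta=1.0, sigma=0.5, seed=None):
    """Generate sample paths of dX = -theta*X dt + sigma dW via the exact
    discrete-time transition of the Ornstein-Uhlenbeck process."""
    rng = np.random.default_rng(seed)
    x = np.empty((n_paths, n_steps + 1))
    # start each path from the stationary distribution N(0, sigma^2/(2 theta))
    std_stat = sigma / np.sqrt(2 * theta)
    x[:, 0] = rng.normal(0.0, std_stat, n_paths)
    a = np.exp(-theta * dt)               # one-step decay factor
    b = std_stat * np.sqrt(1 - a**2)      # one-step innovation std
    for k in range(n_steps):
        x[:, k + 1] = a * x[:, k] + b * rng.normal(size=n_paths)
    return x

paths = ou_paths(n_paths=2000, n_steps=500, seed=42)
# each row is an independent sample of the same stochastic model; ensemble
# statistics should match the stationary values (variance = 0.125 here)
print(paths.shape)  # (2000, 501)
```

Each generated path is statistically equivalent to the others, which is exactly the property needed when samples serve as inputs or boundary conditions to an established deterministic solver.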
Plasma modelling and numerical simulation
International Nuclear Information System (INIS)
Van Dijk, J; Kroesen, G M W; Bogaerts, A
2009-01-01
Plasma modelling is an exciting subject in which virtually all physical disciplines are represented. Plasma models combine the electromagnetic, statistical and fluid dynamical theories that have their roots in the 19th century with the modern insights concerning the structure of matter that were developed throughout the 20th century. The present cluster issue consists of 20 invited contributions, which are representative of the state of the art in plasma modelling and numerical simulation. These contributions provide an in-depth discussion of the major theories and modelling and simulation strategies, and their applications to contemporary plasma-based technologies. In this editorial review, we introduce and complement those papers by providing a bird's eye perspective on plasma modelling and discussing the historical context in which it has surfaced. (editorial review)
Modeling and simulation goals and accomplishments
International Nuclear Information System (INIS)
Turinsky, P.
2013-01-01
The CASL (Consortium for Advanced Simulation of Light Water Reactors) mission is to develop and apply the Virtual Reactor simulator (VERA) to optimise nuclear power in terms of capital and operating costs, of nuclear waste production and of nuclear safety. An efficient and reliable virtual reactor simulator relies on 3-dimensional calculations, accurate physics models and code coupling. Advances in computer hardware, along with comparable advances in numerical solvers make the VERA project achievable. This series of slides details the VERA project and presents the specificities and performance of the codes involved in the project and ends by listing the computing needs
COMPARATIVE STUDY OF TERTIARY WASTEWATER TREATMENT BY COMPUTER SIMULATION
Directory of Open Access Journals (Sweden)
Stefania Iordache
2010-01-01
The aim of this work is to assess conditions for implementation of a Biological Nutrient Removal (BNR) process in the Wastewater Treatment Plant (WWTP) of Moreni city (Romania). In order to meet increasingly strict environmental regulations, the wastewater treatment plant that was studied must update its actual treatment process and modernize it. A comparative study was undertaken of the quality of effluents that could be obtained by implementation of biological nutrient removal processes like A2/O (Anaerobic/Anoxic/Oxic) and VIP (Virginia Plant Initiative) as wastewater tertiary treatments. In order to assess the efficiency of the proposed treatment schemes based on the data monitored at the studied WWTP, computer models of biological nutrient removal configurations based on the A2/O and VIP processes were built. Computer simulation was performed using a well-known simulator, BioWin by EnviroSim Associates Ltd. The simulation process yielded data that can be used in the design of a tertiary treatment stage at the Moreni WWTP, in order to increase its efficiency in operation.
Comparison of performance of simulation models for floor heating
DEFF Research Database (Denmark)
Weitzmann, Peter; Svendsen, Svend
2005-01-01
This paper describes the comparison of performance of simulation models for floor heating with different level of detail in the modelling process. The models are compared in an otherwise identical simulation model containing room model, walls, windows, ceiling and ventilation system. By exchanging...
Model for Simulation Atmospheric Turbulence
DEFF Research Database (Denmark)
Lundtang Petersen, Erik
1976-01-01
A method that produces realistic simulations of atmospheric turbulence is developed and analyzed. The procedure makes use of a generalized spectral analysis, often called a proper orthogonal decomposition or the Karhunen-Loève expansion. A set of criteria, emphasizing a realistic appearance… eigenfunctions and estimates of the distributions of the corresponding expansion coefficients. The simulation method utilizes the eigenfunction expansion procedure to produce preliminary time histories of the three velocity components simultaneously. As a final step, a spectral shaping procedure is then applied. The method is unique in modeling the three velocity components simultaneously, and it is found that important cross-statistical features are reasonably well-behaved. It is concluded that the model provides a practical, operational simulator of atmospheric turbulence.
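The proper orthogonal decomposition idea described above can be sketched concretely (the synthetic ensemble, mode count, and variance threshold below are our own illustrative assumptions, not the paper's data): estimate empirical eigenfunctions from an ensemble of records, then synthesize a new time history by randomizing the expansion coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n_records, n_t = 400, 128
t = np.linspace(0, 1, n_t)

# toy "turbulence" ensemble: random mixture of two coherent structures + noise
ensemble = (rng.normal(size=(n_records, 1)) * np.sin(2 * np.pi * t)
            + 0.5 * rng.normal(size=(n_records, 1)) * np.sin(6 * np.pi * t)
            + 0.05 * rng.normal(size=(n_records, n_t)))

# empirical covariance and its eigen-decomposition (the Karhunen-Loeve basis)
cov = np.cov(ensemble, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# keep the modes carrying ~99% of the ensemble variance
k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.99)) + 1

# synthesize a new realization: independent coefficients with the KL variances
coeffs = rng.normal(size=k) * np.sqrt(eigvals[:k])
simulated = eigvecs[:, :k] @ coeffs
print(simulated.shape)  # (128,)
```

A production simulator would additionally apply the spectral shaping step the abstract mentions and handle all three velocity components jointly; the sketch only shows the eigenfunction-expansion core.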
Comparing TCV experimental VDE responses with DINA code simulations
Favez, J.-Y.; Khayrutdinov, R. R.; Lister, J. B.; Lukash, V. E.
2002-02-01
The DINA free-boundary equilibrium simulation code has been implemented for TCV, including the full TCV feedback and diagnostic systems. First results showed good agreement with control coil perturbations and correctly reproduced certain non-linear features in the experimental measurements. The latest DINA code simulations, presented in this paper, exploit discharges with different cross-sectional shapes and different vertical instability growth rates which were subjected to controlled vertical displacement events (VDEs), extending previous work with the DINA code on the DIII-D tokamak. The height of the TCV vessel allows observation of the non-linear evolution of the VDE growth rate as regions of different vertical field decay index are crossed. The vertical movement of the plasma is found to be well modelled. For most experiments, DINA reproduces the S-shape of the vertical displacement in TCV with excellent precision. This behaviour cannot be modelled using linear time-independent models because of the predominant exponential shape due to the unstable pole of any linear time-independent model. The other most common equilibrium parameters like the plasma current Ip, the elongation κ, the triangularity δ, the safety factor q, the ratio between the averaged plasma kinetic pressure and the pressure of the poloidal magnetic field at the edge of the plasma βp, and the internal self inductance li also show acceptable agreement. The evolution of the growth rate γ is estimated and compared with the evolution of the closed-loop growth rate calculated with the RZIP linear model, confirming the origin of the observed behaviour.
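The abstract's point about linear time-independent models can be illustrated numerically (our own toy construction, not the paper's data): a fixed unstable pole γ forces a pure exponential z(t) ∝ exp(γt), whereas a growth rate that rolls off as the plasma moves, here mimicked with a logistic saturation, produces the S-shaped displacement seen in the measurements.

```python
import numpy as np

t = np.linspace(0.0, 2.0, 400)
z0, gamma0 = 1e-3, 5.0

# linear time-independent model: fixed unstable pole -> pure exponential
z_linear = z0 * np.exp(gamma0 * t)

# nonlinear sketch: effective growth rate decays as the displacement grows
# (logistic saturation level z_sat is an arbitrary illustrative value)
z_sat = 0.5
z_nonlinear = z_sat * z0 * np.exp(gamma0 * t) / (z_sat + z0 * (np.exp(gamma0 * t) - 1))

# the exponential has a constant logarithmic slope equal to gamma0;
# the S-curve's logarithmic slope rolls off along the trajectory
slope_early = np.gradient(np.log(z_nonlinear), t)[5]
slope_late = np.gradient(np.log(z_nonlinear), t)[-5]
print(slope_early > 2 * slope_late)  # True
```

No choice of γ in the linear model can reproduce the late-time flattening, which is why the evolving growth rate must be tracked, as DINA does, or estimated pointwise with a linear model such as RZIP.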
Comparing TCV experimental VDE responses with DINA code simulations
International Nuclear Information System (INIS)
Favez, J.Y.; Khayrutdinov, R.R.; Lister, J.B.; Lukash, V.E.
2001-10-01
The DINA free-boundary equilibrium simulation code has been implemented for TCV, including the full TCV feedback and diagnostic systems. First results showed good agreement with control coil perturbations and correctly reproduced certain non-linear features in the experimental measurements. The latest DINA code simulations, presented in this paper, exploit discharges with different cross-sectional shapes and different vertical instability growth rates which were subjected to controlled vertical displacement events (VDEs), extending previous work with the DINA code on the DIII-D tokamak. The height of the TCV vessel allows observation of the non-linear evolution of the VDE growth rate as regions of different vertical field decay index are crossed. The vertical movement of the plasma is found to be well modelled. For most experiments, DINA reproduces the S-shape of the vertical displacement in TCV with excellent precision. This behaviour cannot be modelled using linear time-independent models because of the predominant exponential shape due to the unstable pole of any linear time-independent model. The other most common equilibrium parameters like the plasma current Ip, the elongation κ, the triangularity δ, the safety factor q, the ratio between the averaged plasma kinetic pressure and the pressure of the poloidal magnetic field at the edge of the plasma βp, and the internal self inductance li also show acceptable agreement. The evolution of the growth rate γ is estimated and compared with the evolution of the closed-loop growth rate calculated with the RZIP linear model, confirming the origin of the observed behaviour. (author)
Directory of Open Access Journals (Sweden)
Yingyu Zhou
2018-01-01
The active compounds in Acanthopanax senticosus (AS) have different pharmacokinetic characteristics in mouse models. Cmax and AUC of Acanthopanax senticosus polysaccharides (ASPS) were significantly reduced in radiation-injured mice, suggesting that blood flow was blocked or slowed owing to the pathological state of ischemia and hypoxia caused by radiation. In addition, the reduced inactivation capacity of various metabolizing enzymes, decreased biofilm transport, and lessened renal blood flow after irradiation result in the accumulation of syringin and eleutheroside E in the irradiated mouse. The pharmacokinetic parameters AUC, MRT, and t1/2 of the two compounds were therefore higher in radiation-injured mice than in normal mice. In order to investigate the intrinsic mechanism of AS action on radiation injury, the protective effects of AS extract on the brain, the part of the mouse that suffered most from radiation, were explored. AS extract repressed radiation-induced expression changes of response proteins in the prefrontal cortex (PFC) of the mouse brain, including the tubulin protein family (α- and β-tubulin subunits), dihydropyrimidinase-related protein 2 (CRMP2), γ-actin, the 14-3-3 protein family (14-3-3ζ, ε), heat shock protein 90β (HSP90β), and enolase 2. The results demonstrated that the AS extract had positive effects on nerve cells' structure, adhesion, locomotion, fission, and phagocytosis, through regulating various action pathways, such as the Hippo, phagosome, PI3K/Akt (phosphatidylinositol 3-kinase/protein kinase B), Neurotrophin, Rap1 (Ras-related protein RAP-1A), gap junction, glycolysis/gluconeogenesis, and HIF-1 (hypoxia-inducible factor 1) signaling pathways, to maintain normal mouse neurological activity. All of the results indicated that AS may be a promising alternative medicine for the treatment of radiation injury in the mouse brain. It remains to be tested whether the bioactive ingredients of AS could
Spencer, Bryden
2016-01-01
Value-added models are a class of growth models used in education to assign responsibility for student growth to teachers or schools. For value-added models to be used fairly, sufficient statistical precision is necessary for accurate teacher classification. Previous research indicated precision below practical limits. An alternative approach has…
Validation process of simulation model
International Nuclear Information System (INIS)
San Isidro, M. J.
1998-01-01
A methodology for the empirical validation of detailed simulation models is presented. This kind of validation is always tied to an experimental case. Empirical validation is residual in nature, because the conclusions are based on comparisons between simulated outputs and experimental measurements. The methodology guides the detection of failures of the simulation model, and it can also serve as a guide in the design of subsequent experiments. Three steps can be clearly differentiated: (1) Sensitivity analysis, which can be performed with DSA (differential sensitivity analysis) and with MCSA (Monte Carlo sensitivity analysis). (2) Finding the optimal domains of the input parameters; a procedure based on Monte Carlo methods and cluster techniques has been developed for this purpose. (3) Residual analysis, performed in the time domain and in the frequency domain using correlation analysis and spectral analysis. As an application of this methodology, the validation carried out on a thermal simulation model of buildings is presented, studying the behaviour of building components in a Test Cell of LECE of CIEMAT. (Author) 17 refs
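The residual-analysis step above can be sketched on synthetic data (our own construction; the sinusoidal "measurement", noise level, and amplitude error are illustrative assumptions): compare simulation and measurement in the time domain via correlation, and examine the residual spectrum to expose unmodelled dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0.0, 24.0, 0.1)  # e.g. hours of test-cell temperature data

measured = 20 + 5 * np.sin(2 * np.pi * t / 24) + 0.3 * rng.normal(size=t.size)
simulated = 20 + 4.6 * np.sin(2 * np.pi * t / 24)  # model underpredicts amplitude

residual = measured - simulated

# time domain: correlation between simulated and measured series
corr = np.corrcoef(measured, simulated)[0, 1]

# frequency domain: periodogram of the residual; a peak reveals a systematic,
# unmodelled component rather than pure measurement noise
spectrum = np.abs(np.fft.rfft(residual - residual.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=0.1)
dominant = freqs[np.argmax(spectrum)]

print(round(corr, 3))      # high correlation despite the amplitude error
print(round(dominant, 4))  # residual peaks at the daily frequency, 1/24
```

The high correlation alone would pass the model, while the residual spectrum pinpoints a daily-cycle error, which is why the methodology uses both domains.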
Modeling and Simulation for Safeguards
International Nuclear Information System (INIS)
Swinhoe, Martyn T.
2012-01-01
The purpose of this talk is to give an overview of the role of modeling and simulation in Safeguards R and D and to introduce (some of) the tools used. Some definitions are: (1) Modeling - the representation, often mathematical, of a process, concept, or operation of a system, often implemented by a computer program; (2) Simulation - the representation of the behavior or characteristics of one system through the use of another system, especially a computer program designed for the purpose; and (3) Safeguards - the timely detection of diversion of significant quantities of nuclear material. The roles of modeling and simulation are: (1) calculating amounts of material (plant modeling); (2) calculating signatures of nuclear material etc. (source terms); and (3) assessing detector performance (radiation transport and detection). Plant modeling software (e.g. FACSIM) gives the flows and amounts of material stored at all parts of the process. In safeguards this allows us to calculate the expected uncertainty of the mass and evaluate the expected MUF. We can determine the measurement accuracy required to achieve a certain performance.
Modeling and Simulation of Nanoindentation
Huang, Sixie; Zhou, Caizhi
2017-11-01
Nanoindentation is a hardness test method applied to small volumes of material which can provide some unique effects and spark many related research activities. To fully understand the phenomena observed during nanoindentation tests, modeling and simulation methods have been developed to predict the mechanical response of materials during nanoindentation. However, challenges remain with those computational approaches, because of their length scale, predictive capability, and accuracy. This article reviews recent progress and challenges for modeling and simulation of nanoindentation, including an overview of molecular dynamics, the quasicontinuum method, discrete dislocation dynamics, and the crystal plasticity finite element method, and discusses how to integrate multiscale modeling approaches seamlessly with experimental studies to understand the length-scale effects and microstructure evolution during nanoindentation tests, creating a unique opportunity to establish new calibration procedures for the nanoindentation technique.
Modeling salmonella Dublin into the dairy herd simulation model Simherd
DEFF Research Database (Denmark)
Kudahl, Anne Braad
2010-01-01
Infection with Salmonella Dublin in the dairy herd, and the effects of the infection and of relevant control measures, are currently being modeled in the dairy herd simulation model called Simherd. The aim is to compare the effects of different control strategies against Salmonella Dublin on both within-herd prevalence and economy by simulations. The project is part of a larger national project, "Salmonella 2007 - 2011", with the main objective of reducing the prevalence of Salmonella Dublin in Danish dairy herds. Results of the simulations will therefore be used for decision support in the national surveillance and eradication program against Salmonella Dublin. Basic structures of the model are programmed and will be presented at the workshop. The model is in a phase of face-validation by a group of Salmonella…
Assessment of Molecular Modeling & Simulation
Energy Technology Data Exchange (ETDEWEB)
None
2002-01-03
This report reviews the development and applications of molecular and materials modeling in Europe and Japan in comparison to those in the United States. Topics covered include computational quantum chemistry, molecular simulations by molecular dynamics and Monte Carlo methods, mesoscale modeling of material domains, molecular-structure/macroscale property correlations like QSARs and QSPRs, and related information technologies like informatics and special-purpose molecular-modeling computers. The panel's findings include the following: The United States leads this field in many scientific areas. However, Canada has particular strengths in DFT methods and homogeneous catalysis; Europe in heterogeneous catalysis, mesoscale, and materials modeling; and Japan in materials modeling and special-purpose computing. Major government-industry initiatives are underway in Europe and Japan, notably in multi-scale materials modeling and in development of chemistry-capable ab-initio molecular dynamics codes.
NRTA simulation by modeling PFPF
International Nuclear Information System (INIS)
Asano, Takashi; Fujiwara, Shigeo; Takahashi, Saburo; Shibata, Junichi; Totsu, Noriko
2003-01-01
In PFPF, an NRTA system has been applied since 1991. By evaluating the facility material accountancy data provided by the operator at each IIV, it has been confirmed that no significant MUF was generated. At throughput of the PFPF scale, MUF can be evaluated with a sufficient detection probability by the present NRTA evaluation manner. However, as throughput increases, the uncertainty of material accountancy will increase and the detection probability will decline. The relationship between increasing throughput and declining detection probability, and the maximum throughput at which a sufficient detection probability is retained under the following measures, were evaluated by simulation of the NRTA system. The simulation was performed by modeling PFPF. The measures for increasing detection probability are: shortening of the evaluation interval, and segmentation of the evaluation area. This report shows the results of these simulations. (author)
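The throughput effect described above can be shown with a back-of-envelope sketch (the scaling rule and plant figures are our own illustrative assumptions, not PFPF data): if the accountancy uncertainty sigma_MUF grows in proportion to throughput, the probability of detecting a fixed diversion at a fixed false-alarm rate declines with plant scale.

```python
from statistics import NormalDist

def detection_probability(diversion, sigma_muf, false_alarm=0.05):
    """One-sided test: alarm when observed MUF exceeds z_alpha * sigma_MUF."""
    n = NormalDist()
    z_alpha = n.inv_cdf(1 - false_alarm)
    return 1 - n.cdf(z_alpha - diversion / sigma_muf)

diversion = 8.0  # kg Pu, one IAEA significant quantity
throughputs = (1000, 4000, 16000)  # kg/yr, hypothetical plant scales
# assume accountancy uncertainty of 0.5% of annual throughput
probs = [detection_probability(diversion, 0.005 * tp) for tp in throughputs]
for tp, p in zip(throughputs, probs):
    print(tp, round(p, 3))
```

Shortening the evaluation interval or segmenting the evaluation area, the two measures studied in the report, both act by reducing the effective sigma_MUF per test, which moves the detection probability back up.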
Comparing three methods for participatory simulation of hospital work systems
DEFF Research Database (Denmark)
Broberg, Ole; Andersen, Simone Nyholm
Summative Statement: This study compared three participatory simulation methods using different simulation objects: a low-resolution table-top setup using Lego figures, full-scale mock-ups, and blueprints using Lego figures. It was concluded that the three objects differ in fidelity and affordance… scenarios using the objects. Results: Full-scale mock-ups significantly addressed the local space and technology/tool elements of a work system. In contrast, the table-top simulation object addressed the organizational issues of the future work system. The blueprint-based simulation addressed
Regional model simulations of New Zealand climate
Renwick, James A.; Katzfey, Jack J.; Nguyen, Kim C.; McGregor, John L.
1998-03-01
Simulation of New Zealand climate is examined through the use of a regional climate model nested within the output of the Commonwealth Scientific and Industrial Research Organisation nine-level general circulation model (GCM). R21 resolution GCM output is used to drive a regional model run at 125 km grid spacing over the Australasian region. The 125 km run is used in turn to drive a simulation at 50 km resolution over New Zealand. Simulations with a full seasonal cycle are performed for 10 model years. The focus is on the quality of the simulation of present-day climate, but results of a doubled-CO2 run are discussed briefly. Spatial patterns of mean simulated precipitation and surface temperatures improve markedly as horizontal resolution is increased, through the better resolution of the country's orography. However, increased horizontal resolution leads to a positive bias in precipitation. At 50 km resolution, simulated frequency distributions of daily maximum/minimum temperatures are statistically similar to those of observations at many stations, while frequency distributions of daily precipitation appear to be statistically different to those of observations at most stations. Modeled daily precipitation variability at 125 km resolution is considerably less than observed, but is comparable to, or exceeds, observed variability at 50 km resolution. The sensitivity of the simulated climate to changes in the specification of the land surface is discussed briefly. Spatial patterns of the frequency of extreme temperatures and precipitation are generally well modeled. Under a doubling of CO2, the frequency of precipitation extremes changes only slightly at most locations, while air frosts become virtually unknown except at high-elevation sites.
Repository simulation model: Final report
International Nuclear Information System (INIS)
1988-03-01
This report documents the application of computer simulation for the design analysis of the nuclear waste repository's waste handling and packaging operations. The Salt Repository Simulation Model was used to evaluate design alternatives during the conceptual design phase of the Salt Repository Project. Code development and verification was performed by the Office of Nuclear Waste Isolation (ONWI). The focus of this report is to relate the experience gained during the development and application of the Salt Repository Simulation Model to future repository design phases. Design of the repository's waste handling and packaging systems will require sophisticated analysis tools to evaluate complex operational and logistical design alternatives. Selection of these design alternatives in the Advanced Conceptual Design (ACD) and License Application Design (LAD) phases must be supported by analysis to demonstrate that the repository design will cost-effectively meet DOE's mandated emplacement schedule and that uncertainties in the performance of the repository's systems have been objectively evaluated. Computer simulation of repository operations will provide future repository designers with data and insights that no other analytical form of analysis can provide. 6 refs., 10 figs
Standard for Models and Simulations
Steele, Martin J.
2016-01-01
This NASA Technical Standard establishes uniform practices in modeling and simulation to ensure essential requirements are applied to the design, development, and use of models and simulations (M&S), while ensuring acceptance criteria are defined by the program/project and approved by the responsible Technical Authority. It also provides an approved set of requirements, recommendations, and criteria with which M&S may be developed, accepted, and used in support of NASA activities. As the M&S disciplines employed and application areas involved are broad, the common aspects of M&S across all NASA activities are addressed. The discipline-specific details of a given M&S should be obtained from relevant recommended practices. The primary purpose is to reduce the risks associated with M&S-influenced decisions by ensuring the complete communication of the credibility of M&S results.
A novel approach to evaluate and compare computational snow avalanche simulation
Directory of Open Access Journals (Sweden)
J.-T. Fischer
2013-06-01
An innovative approach for the analysis and interpretation of snow avalanche simulation in three-dimensional terrain is presented. Snow avalanche simulation software is used as a supporting tool in hazard mapping. When performing a high number of simulation runs, the user is confronted with a considerable amount of simulation results. The objective of this work is to establish an objective, model-independent framework to evaluate and compare results of different simulation approaches with respect to indicators of practical relevance, providing an answer to the important questions: how far and how destructively does an avalanche move downslope? For this purpose the Automated Indicator based Model Evaluation and Comparison (AIMEC) method is introduced. It operates on a coordinate system which follows a given avalanche path. A multitude of simulation runs is performed with the snow avalanche simulation software SamosAT (Snow Avalanche MOdelling and Simulation - Advanced Technology). The variability of pressure-based runout and avalanche destructiveness along the path is investigated for multiple simulation runs, varying release volume and model parameters. With this, results of deterministic simulation software are processed and analysed by means of statistical methods. Uncertainties originating from varying input conditions, model parameters or the different model implementations are assessed. The results show that AIMEC contributes to the interpretation of avalanche simulations, with a broad applicability in model evaluation, comparison, and examination of scenario variations.
Lamb, Richard L.
2016-01-01
Within the last 10 years, new tools for assisting in the teaching and learning of academic skills and content within the context of science have arisen. These new tools include multiple types of computer software and hardware to include (video) games. The purpose of this study was to examine and compare the effect of computer learning games in the…
Modeling and simulation of the SDC data collection chip
International Nuclear Information System (INIS)
Hughes, E.; Haney, M.; Golin, E.; Jones, L.; Knapp, D.; Tharakan, G.; Downing, R.
1992-01-01
This paper describes modeling and simulation of the Data Collection Chip (DCC) design for the Solenoidal Detector Collaboration (SDC). Models of the DCC written in Verilog and VHDL are described, and results are presented. The models have been simulated to study queue depth requirements and to compare control feedback alternatives. Insight into the management of models and simulation tools is given. Finally, techniques useful in the design process for data acquisition systems are discussed
Verifying and Validating Simulation Models
Energy Technology Data Exchange (ETDEWEB)
Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-02-23
This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that "validation" is never performed in a vacuum; it accounts, instead, for the current state-of-knowledge in the discipline considered. In particular, comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack-of-knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate and analyze variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the "credibility" of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.
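The distinction drawn above between variability and numerical uncertainty can be made concrete with a toy example (our own construction): aleatoric input variability is propagated by statistical sampling, while numerical (truncation) uncertainty is assessed by refining the discretization.

```python
import numpy as np

def model(k, dt=0.01, t_end=1.0):
    """Forward-Euler solver for dy/dt = -k*y, y(0) = 1 (first-order accurate)."""
    y = 1.0
    for _ in range(round(t_end / dt)):
        y += dt * (-k * y)
    return y

rng = np.random.default_rng(7)

# 1) aleatoric: sample the uncertain input k ~ N(1.0, 0.1) and propagate
ks = rng.normal(1.0, 0.1, size=5000)
outputs = np.array([model(k) for k in ks])
print(round(outputs.mean(), 3), round(outputs.std(), 3))

# 2) numerical: truncation error estimated by halving the time step
err = abs(model(1.0, dt=0.01) - model(1.0, dt=0.005))
print(err < 5e-3)  # True
```

Model-form uncertainty, by contrast, would not shrink under either operation; it requires comparing alternative model formulations against data, which is the subject of validation proper.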
Advances in Intelligent Modelling and Simulation Simulation Tools and Applications
Oplatková, Zuzana; Carvalho, Marco; Kisiel-Dorohinicki, Marek
2012-01-01
The human capacity to abstract complex systems and phenomena into simplified models has played a critical role in the rapid evolution of our modern industrial processes and scientific research. As a science and an art, Modelling and Simulation have been one of the core enablers of this remarkable human trait, and have become a topic of great importance for researchers and practitioners. This book was created to compile some of the most recent concepts, advances, challenges and ideas associated with Intelligent Modelling and Simulation frameworks, tools and applications. The first chapter discusses the important aspects of human interaction and the correct interpretation of results during simulations. The second chapter gets to the heart of the analysis of entrepreneurship by means of agent-based modelling and simulations. The following three chapters bring together the central theme of simulation frameworks, first describing an agent-based simulation framework, then a simulator for electrical machines, and...
MODELLING, SIMULATING AND OPTIMIZING BOILERS
DEFF Research Database (Denmark)
Sørensen, Kim; Condra, Thomas Joseph; Houbak, Niels
2004-01-01
In the present work a framework for optimizing the design of boilers for dynamic operation has been developed. A cost function to be minimized during the optimization has been formulated, and for the present design variables related to the Boiler Volume and the Boiler Load Gradient (i.e. firing rate...... on the boiler) have been defined. Furthermore a number of constraints related to: minimum and maximum boiler load gradient, minimum boiler size, Shrinking and Swelling and Steam Space Load have been defined. For defining the constraints related to the required boiler volume a dynamic model for simulating the boiler...... performance has been developed. Outputs from the simulations are shrinking and swelling of the water level in the drum during for example a start-up of the boiler; these figures combined with the requirements with respect to allowable water level fluctuations in the drum define the requirements with respect to drum...
Nuclear reactor core modelling in multifunctional simulators
International Nuclear Information System (INIS)
Puska, E.K.
1999-01-01
studied to assess the possibilities for using three-dimensional cores in training simulators. The core model results have been compared with the Loviisa WWER-type plant measurement data in steady state and in some transients. Hypothetical control rod withdrawal, ejection and boron dilution transients have been calculated with various three-dimensional core models for the Loviisa WWER-440 core. Several ATWS analyses for the WWER-1000/91 plant have been performed using the three-dimensional core model. In this context, the results of APROS have been compared in detail with the results of the HEXTRAN code. The three-dimensional Olkiluoto BWR-type core model has been used for transient calculation and for severe accident re-criticality studies. The one-dimensional core model is at present used in several plant analyser and training simulator applications and it has been used extensively for safety analyses in the Loviisa WWER-440 plant modernisation project. (orig.)
Nuclear reactor core modelling in multifunctional simulators
Energy Technology Data Exchange (ETDEWEB)
Puska, E.K. [VTT Energy, Nuclear Energy, Espoo (Finland)
1999-06-01
studied to assess the possibilities for using three-dimensional cores in training simulators. The core model results have been compared with the Loviisa WWER-type plant measurement data in steady state and in some transients. Hypothetical control rod withdrawal, ejection and boron dilution transients have been calculated with various three-dimensional core models for the Loviisa WWER-440 core. Several ATWS analyses for the WWER-1000/91 plant have been performed using the three-dimensional core model. In this context, the results of APROS have been compared in detail with the results of the HEXTRAN code. The three-dimensional Olkiluoto BWR-type core model has been used for transient calculation and for severe accident re-criticality studies. The one-dimensional core model is at present used in several plant analyser and training simulator applications and it has been used extensively for safety analyses in the Loviisa WWER-440 plant modernisation project. (orig.) 75 refs. The thesis also includes eight previous publications by the author.
SEMI Modeling and Simulation Roadmap
Energy Technology Data Exchange (ETDEWEB)
Hermina, W.L.
2000-10-02
With the exponential growth in the power of computing hardware and software, modeling and simulation is becoming a key enabler for the rapid design of reliable Microsystems. One vision of the future microsystem design process would include the following primary software capabilities: (1) The development of 3D part design, through standard CAD packages, with automatic design rule checks that guarantee the manufacturability and performance of the microsystem. (2) Automatic mesh generation, for 3D parts as manufactured, that permits computational simulation of the process steps, and the performance and reliability analysis for the final microsystem. (3) Computer generated 2D layouts for process steps that utilize detailed process models to generate the layout and process parameter recipe required to achieve the desired 3D part. (4) Science-based computational tools that can simulate the process physics, and the coupled thermal, fluid, structural, solid mechanics, electromagnetic and material response governing the performance and reliability of the microsystem. (5) Visualization software that permits the rapid visualization of 3D parts including cross-sectional maps, performance and reliability analysis results, and process simulation results. In addition to these desired software capabilities, a desired computing infrastructure would include massively parallel computers that enable rapid high-fidelity analysis, coupled with networked compute servers that permit computing at a distance. We now discuss the individual computational components that are required to achieve this vision. There are three primary areas of focus: design capabilities, science-based capabilities and computing infrastructure. Within each of these areas, there are several key capability requirements.
Photovoltaic array performance simulation models
Energy Technology Data Exchange (ETDEWEB)
Menicucci, D. F.
1986-09-15
The experience of the solar industry confirms that, despite recent cost reductions, the profitability of photovoltaic (PV) systems is often marginal and the configuration and sizing of a system is a critical problem for the design engineer. Construction and evaluation of experimental systems are expensive and seldom justifiable. A mathematical model or computer-simulation program is a desirable alternative, provided reliable results can be obtained. Sandia National Laboratories, Albuquerque (SNLA), has been studying PV-system modeling techniques in an effort to develop an effective tool to be used by engineers and architects in the design of cost-effective PV systems. This paper reviews two of the sources of error found in previous PV modeling programs, presents the remedies developed to correct these errors, and describes a new program that incorporates these improvements.
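The abstract does not give the corrected model equations; as background, most PV performance simulation programs of this type are built on the implicit single-diode equation. A minimal sketch, with illustrative parameter values (not the Sandia models):

```python
import math

def pv_current(v, i_ph=5.0, i_0=1e-9, n=1.3, t=298.15, r_s=0.2, r_sh=200.0, cells=36):
    """Panel current at terminal voltage v from the implicit single-diode model:
        I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh
    solved by damped fixed-point iteration. All parameter values are
    illustrative assumptions for a small polycrystalline module."""
    k, q = 1.380649e-23, 1.602176634e-19
    vt = n * cells * k * t / q          # modified thermal voltage of the string
    i = i_ph                            # initial guess: the photocurrent
    for _ in range(200):
        i_new = i_ph - i_0 * math.expm1((v + i * r_s) / vt) - (v + i * r_s) / r_sh
        if abs(i_new - i) < 1e-12:
            break
        i = 0.5 * (i + i_new)           # damping keeps the iteration stable near Voc
    return i

i_sc = pv_current(0.0)    # short-circuit current, close to the photocurrent
i_20 = pv_current(20.0)   # current sags as voltage approaches open circuit
```

Sweeping v from zero to the open-circuit voltage traces the full I-V curve from which the maximum power point is located.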
Modeling, Simulation and Position Control of 3DOF Articulated Manipulator
Directory of Open Access Journals (Sweden)
Hossein Sadegh Lafmejani
2014-08-01
Full Text Available In this paper, the modeling, simulation and control of a 3 degrees of freedom articulated robotic manipulator have been studied. First, we extracted the kinematics and dynamics equations of the mentioned manipulator by using the Lagrange method. In order to validate the analytical model of the manipulator, we compared the model simulated in the simulation environment of Matlab with the model simulated with the SimMechanics toolbox. A sample path has been designed for analyzing the trajectory tracking problem. The system has been linearized with feedback linearization and then a PID controller was applied to track a reference trajectory. Finally, the control results have been compared with a nonlinear PID controller.
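Feedback linearization reduces each joint to a double integrator, on which a PID tracker can then be tuned; the sketch below illustrates the idea on one linearized axis (the gains and the sinusoidal reference are assumptions for illustration, not values from the paper):

```python
import math

def simulate_pid_tracking(kp=100.0, ki=20.0, kd=20.0, dt=1e-3, t_end=5.0):
    """Track a sinusoidal joint reference with a PID controller on a double
    integrator q'' = u, the form each axis takes after exact feedback
    linearization. Gains and reference are illustrative, not the paper's."""
    q, dq, integ, t = 0.0, 0.0, 0.0, 0.0
    errors = []
    while t < t_end:
        ref, dref = math.sin(t), math.cos(t)    # reference trajectory and its rate
        e = ref - q
        integ += e * dt
        u = kp * e + ki * integ + kd * (dref - dq)
        dq += u * dt                            # explicit Euler on q'' = u
        q += dq * dt
        errors.append(abs(e))
        t += dt
    return errors

errors = simulate_pid_tracking()
# after the initial transient decays, the tracking error stays small
```

With these gains the closed-loop poles are well damped, so the error settles within a fraction of a second and then tracks the sinusoid with a small phase lag.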
Advances in NLTE Modeling for Integrated Simulations
Energy Technology Data Exchange (ETDEWEB)
Scott, H A; Hansen, S B
2009-07-08
The last few years have seen significant progress in constructing the atomic models required for non-local thermodynamic equilibrium (NLTE) simulations. Along with this has come an increased understanding of the requirements for accurately modeling the ionization balance, energy content and radiative properties of different elements for a wide range of densities and temperatures. Much of this progress is the result of a series of workshops dedicated to comparing the results from different codes and computational approaches applied to a series of test problems. The results of these workshops emphasized the importance of atomic model completeness, especially in doubly excited states and autoionization transitions, to calculating ionization balance, and the importance of accurate, detailed atomic data to producing reliable spectra. We describe a simple screened-hydrogenic model that calculates NLTE ionization balance with surprising accuracy, at a low enough computational cost for routine use in radiation-hydrodynamics codes. The model incorporates term splitting, {Delta}n = 0 transitions, and approximate UTA widths for spectral calculations, with results comparable to those of much more detailed codes. Simulations done with this model have been increasingly successful at matching experimental data for laser-driven systems and hohlraums. Accurate and efficient atomic models are just one requirement for integrated NLTE simulations. Coupling the atomic kinetics to hydrodynamics and radiation transport constrains both discretizations and algorithms to retain energy conservation, accuracy and stability. In particular, the strong coupling between radiation and populations can require either very short timesteps or significantly modified radiation transport algorithms to account for NLTE material response. Considerations such as these continue to provide challenges for NLTE simulations.
Comparing DINA code simulations with TCV experimental plasma equilibrium responses
International Nuclear Information System (INIS)
Khayrutdinov, R.R.; Lister, J.B.; Lukash, V.E.; Wainwright, J.P.
2000-08-01
The DINA non-linear time dependent simulation code has been validated against an extensive set of plasma equilibrium response experiments carried out on the TCV tokamak. Limited and diverted plasmas are found to be well modelled during the plasma current flat top. In some simulations the application of the PF coil voltage stimulation pulse sufficiently changed the plasma equilibrium that the vertical position feedback control loop became unstable. This behaviour was also found in the experimental work, and cannot be reproduced using linear time-independent models. A single null diverted plasma discharge was also simulated from start-up to shut-down and the results were found to accurately reproduce their experimental equivalents. The most significant difference noted was the penetration time of the poloidal flux, leading to a delayed onset of sawtoothing in the DINA simulation. The complete set of frequency stimulation experiments used to measure the open loop tokamak plasma equilibrium response was also simulated using DINA and the results were analysed in an identical fashion to the experimental data. The frequency response of the DINA simulations agrees with the experimental results. Comparisons with linear models are also discussed to identify areas of good and only occasionally less good agreement. (author)
Mathematical models for photovoltaic solar panel simulation
Energy Technology Data Exchange (ETDEWEB)
Santos, Jose Airton A. dos; Gnoatto, Estor; Fischborn, Marcos; Kavanagh, Edward [Universidade Tecnologica Federal do Parana (UTFPR), Medianeira, PR (Brazil)], Emails: airton@utfpr.edu.br, gnoatto@utfpr.edu.br, fisch@utfpr.edu.br, kavanagh@utfpr.edu.br
2008-07-01
A photovoltaic generator is subject to several variations of solar intensity, ambient temperature and load that change its point of operation, so its behavior should be analyzed under such variations in order to optimize its operation. The present work sought to simulate a photovoltaic generator, of polycrystalline silicon, from the characteristics supplied by the manufacturer, and to compare the results of two mathematical models with values obtained in the field, in the city of Cascavel, for a period of one year. (author)
A SIMULATION MODEL OF THE GAS COMPLEX
Directory of Open Access Journals (Sweden)
Sokolova G. E.
2016-06-01
Full Text Available The article considers the dynamics of gas production in Russia, the structure of sales in the different market segments, and the comparative dynamics of selling prices in these segments. It addresses the creation of a gas complex using a simulation model that allows the efficiency of the project to be estimated and the stability region of the obtained solutions to be determined. The presented model takes into account lump-sum repayment of the loan, making it possible to determine from the first year of simulation whether the loan can be repaid. The model object is a group of gas fields, characterized by the minimum flow rate above which the project is cost-effective. In determining the minimum source flow rate, the discount rate is taken as a generalized weighted average of the percentages on debt and equity, taking risk premiums into account; it also serves as the lower barrier for the internal rate of return, below which the project is rejected as ineffective. Analysis of the dynamics, together with expert evaluation methods, determines the intervals of variation of the simulated parameters, such as the gas price and the time for the gas complex to reach projected capacity. For each random realization, parameter values simulated with the Monte Carlo method yield the optimal minimum well flow rate and allow the stability region of the solution to be determined.
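The Monte Carlo procedure described — sample the uncertain parameters per realization, evaluate project effectiveness, and map the stability region over the flow rate — can be sketched as follows (the cost figures, price interval and discount rate are invented placeholders, not the article's data):

```python
import random

def npv(flow_rate, price, ramp_years, discount=0.12, life=15,
        opex_per_unit=20.0, capex=5000.0):
    """Discounted cash flow of a stylized gas project. All figures are
    invented placeholders, not data from the article."""
    total = -capex
    for year in range(1, life + 1):
        utilization = min(1.0, year / ramp_years)   # ramp-up to projected capacity
        total += flow_rate * utilization * (price - opex_per_unit) / (1 + discount) ** year
    return total

def prob_effective(flow_rate, n=20000, seed=1):
    """Monte Carlo estimate of P(NPV > 0), simulating the gas price and the
    time to reach projected capacity as the random parameters."""
    rng = random.Random(seed)
    hits = sum(
        npv(flow_rate, rng.uniform(40.0, 80.0), rng.uniform(1.0, 4.0)) > 0
        for _ in range(n)
    )
    return hits / n

# Sweeping the flow rate locates the minimum rate above which the project is
# cost-effective for (almost) every realization — the stability region.
p_low, p_high = prob_effective(10.0), prob_effective(40.0)
```

A bisection over `flow_rate` on `prob_effective` would then pin down the minimum well flow rate the abstract refers to.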
Impulse pumping modelling and simulation
International Nuclear Information System (INIS)
Pierre, B; Gudmundsson, J S
2010-01-01
Impulse pumping is a new pumping method based on propagation of pressure waves. Of particular interest is the application of impulse pumping to artificial lift situations, where fluid is transported from wellbore to wellhead using pressure waves generated at wellhead. The motor driven element of an impulse pumping apparatus is therefore located at wellhead and can be separated from the flowline. Thus operation and maintenance of an impulse pump are facilitated. The paper describes the different elements of an impulse pumping apparatus, reviews the physical principles and details the modelling of the novel pumping method. Results from numerical simulations of propagation of pressure waves in water-filled pipelines are then presented for illustrating impulse pumping physical principles, and validating the described modelling with experimental data.
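Propagation of pressure waves in a water-filled pipeline of the kind simulated here is commonly computed with the method of characteristics; below is a minimal, dimensionless sketch (an illustration of the technique, not the authors' code) that drives a pressure pulse into a pipe with a closed far end:

```python
def propagate_pulse(n=100, steps=50, pulse_len=20):
    """Method-of-characteristics solution of the linear water-hammer equations
    (dimensionless, friction neglected, Courant number 1, so the scheme is exact):
    along dx/dt = +a:  p + Z*v = const;   along dx/dt = -a:  p - Z*v = const."""
    Z = 1.0                                    # characteristic impedance rho*a (scaled)
    p, v = [0.0] * n, [0.0] * n
    for step in range(steps):
        p_new, v_new = p[:], v[:]
        for i in range(1, n - 1):
            cp = p[i - 1] + Z * v[i - 1]       # C+ invariant arriving from the left
            cm = p[i + 1] - Z * v[i + 1]       # C- invariant arriving from the right
            p_new[i] = 0.5 * (cp + cm)
            v_new[i] = 0.5 * (cp - cm) / Z
        p_new[0] = 1.0 if step < pulse_len else 0.0   # pressure pulse driven at inlet
        v_new[0] = (p_new[0] - (p[1] - Z * v[1])) / Z
        p_new[-1] = p[-2] + Z * v[-2]                  # closed far end: v = 0
        v_new[-1] = 0.0
        p, v = p_new, v_new
    return p

p = propagate_pulse()
# after 50 steps the 20-cell pulse has travelled to cells 30-49 of the pipe
```

At the closed end the pulse reflects with doubled pressure, which is the mechanism impulse pumping exploits to lift fluid.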
Simulation model of a PWR power plant
International Nuclear Information System (INIS)
Larsen, N.
1987-03-01
A simulation model of a hypothetical PWR power plant is described. A large number of disturbances and failures in plant function can be simulated. The model is written as seven modules to the modular simulation system for continuous processes DYSIM and serves also as a user example of this system. The model runs in Fortran 77 on the IBM-PC-AT. (author)
A New Model for Simulating TSS Washoff in Urban Areas
Directory of Open Access Journals (Sweden)
E. Crobeddu
2011-01-01
Full Text Available This paper presents the formulation and validation of the conceptual Runoff Quality Simulation Model (RQSM that was developed to simulate the erosion and transport of solid particles in urban areas. The RQSM assumes that solid particle accumulation on pervious and impervious areas is infinite. The RQSM simulates soil erosion using rainfall kinetic energy and solid particle transport with linear system theory. A sensitivity analysis was conducted on the RQSM to show the influence of each parameter on the simulated load. Total suspended solid (TSS loads monitored at the outlet of the borough of Verdun in Canada and at three catchment outlets of the City of Champaign in the United States were used to validate the RQSM. TSS loads simulated by the RQSM were compared to measured loads and to loads simulated by the Rating Curve model and the Exponential model of the SWMM software. The simulation performance of the RQSM was comparable to the Exponential and Rating Curve models.
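For comparison, the SWMM Exponential washoff model mentioned above treats the washed-off load as proportional to the remaining buildup and a power of rainfall intensity. A compact sketch with assumed coefficient values:

```python
def exponential_washoff(rain, c1=0.15, c2=1.1, b0=50.0, dt=1.0):
    """SWMM-style exponential washoff: the TSS removal rate is proportional to
    the remaining buildup B and a power of rainfall intensity i,
        dB/dt = -c1 * i**c2 * B.
    rain: intensities (mm/h) per time step; b0: initial buildup (kg).
    Coefficients here are assumed for illustration, not calibrated values."""
    b = b0
    loads = []
    for i in rain:
        w = min(c1 * i ** c2 * b * dt, b)   # cannot wash off more than remains
        b -= w
        loads.append(w)
    return loads

loads = exponential_washoff([0.0, 2.0, 5.0, 5.0, 2.0, 0.0])
```

Because the rate depends on the remaining buildup, most of the load leaves early in the storm — the first-flush behavior against which models like the RQSM are judged.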
Numerical model simulation of atmospheric coolant plumes
International Nuclear Information System (INIS)
Gaillard, P.
1980-01-01
The effect of humid atmospheric coolants on the atmosphere is simulated by means of a three-dimensional numerical model. The atmosphere is defined by its natural vertical profiles of horizontal velocity, temperature, pressure and relative humidity. Effluent discharge is characterised by its vertical velocity and the temperature of air saturated with water vapour. The subject of investigation is the area in the vicinity of the point of discharge, with due allowance for the wake effect of the tower and buildings and, where applicable, wind veer with altitude. The model equations express the conservation relationships for momentum, energy, total mass and water mass, for an incompressible fluid behaving in accordance with the Boussinesq assumptions. Condensation is represented by a simple thermodynamic model, and turbulent fluxes are simulated by introduction of turbulent viscosity and diffusivity data based on in-situ and experimental water model measurements. The three-dimensional problem expressed in terms of the primitive variables (u, v, w, p) is governed by an elliptic equation system which is solved numerically by application of an explicit time-marching algorithm in order to predict the steady-flow velocity distribution, temperature, water vapour concentration and the liquid-water concentration defining the visible plume. Windstill conditions are simulated by a program processing the elliptic equations in an axisymmetrical revolution coordinate system. The calculated visible plumes are compared with plumes observed on site with a view to validating the models. [fr]
Nonlinear friction model for servo press simulation
Ma, Ninshu; Sugitomo, Nobuhiko; Kyuno, Takunori; Tamura, Shintaro; Naka, Tetsuo
2013-12-01
The friction coefficient was measured under an idealized condition for a pulse servo motion. The measured friction coefficient and its variation with both sliding distance and the pulse motion showed that the friction resistance can be reduced by re-lubrication during the unloading process of the pulse servo motion. Based on the measured friction coefficient and its changes with sliding distance and re-lubrication of the oil, a nonlinear friction model was developed. Using the newly developed nonlinear friction model, a deep drawing simulation was performed and the formability was evaluated. The results were compared with experimental ones and the effectiveness of the model was verified.
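A hedged sketch of such a friction law: the coefficient rises with sliding distance as the lubricant film thins, and resets when the unloading phase of the pulse motion re-lubricates the contact (the functional form and all constants are illustrative assumptions, not the measured model):

```python
import math

def friction_coefficient(d, mu0=0.05, mu_max=0.15, d_sat=20.0):
    """Friction coefficient after sliding distance d (mm) since the last
    re-lubrication: rises from mu0 toward mu_max as the oil film thins.
    Functional form and constants are illustrative assumptions."""
    return mu_max - (mu_max - mu0) * math.exp(-d / d_sat)

def mean_mu(total=100.0, relube_every=None, step=0.5):
    """Average friction over a stroke; relube_every models the unloading
    phase of the pulse servo motion re-lubricating the contact."""
    d_since, x, vals = 0.0, 0.0, []
    while x < total:
        vals.append(friction_coefficient(d_since))
        x += step
        d_since += step
        if relube_every is not None and d_since >= relube_every:
            d_since = 0.0
    return sum(vals) / len(vals)

mu_continuous = mean_mu()                # continuous slide, no re-lubrication
mu_pulsed = mean_mu(relube_every=10.0)   # pulse motion re-lubricates every 10 mm
```

The comparison reproduces the qualitative finding: the pulsed stroke sees a lower average friction coefficient than the continuous one.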
Quantitative and comparative visualization applied to cosmological simulations
International Nuclear Information System (INIS)
Ahrens, James; Heitmann, Katrin; Habib, Salman; Ankeny, Lee; McCormick, Patrick; Inman, Jeff; Armstrong, Ryan; Ma, Kwan-Liu
2006-01-01
Cosmological simulations follow the formation of nonlinear structure in dark and luminous matter. The associated simulation volumes and dynamic range are very large, making visualization both a necessary and challenging aspect of the analysis of these datasets. Our goal is to understand sources of inconsistency between different simulation codes that are started from the same initial conditions. Quantitative visualization supports the definition and reasoning about analytically defined features of interest. Comparative visualization supports the ability to visually study, side by side, multiple related visualizations of these simulations. For instance, a scientist can visually distinguish that there are fewer halos (localized lumps of tracer particles) in low-density regions for one simulation code out of a collection. This qualitative result will enable the scientist to develop a hypothesis, such as loss of halos in low-density regions due to limited resolution, to explain the inconsistency between the different simulations. Quantitative support then allows one to confirm or reject the hypothesis. If the hypothesis is rejected, this step may lead to new insights and a new hypothesis, not available from the purely qualitative analysis. We will present methods to significantly improve the scientific analysis process by incorporating quantitative analysis as the driver for visualization. Aspects of this work are included as part of two visualization tools: ParaView, an open-source large data visualization tool, and Scout, an analysis-language based, hardware-accelerated visualization tool.
Galaxy Alignments: Theory, Modelling & Simulations
Kiessling, Alina; Cacciato, Marcello; Joachimi, Benjamin; Kirk, Donnacha; Kitching, Thomas D.; Leonard, Adrienne; Mandelbaum, Rachel; Schäfer, Björn Malte; Sifón, Cristóbal; Brown, Michael L.; Rassat, Anais
2015-11-01
The shapes of galaxies are not randomly oriented on the sky. During the galaxy formation and evolution process, environment has a strong influence, as tidal gravitational fields in the large-scale structure tend to align nearby galaxies. Additionally, events such as galaxy mergers affect the relative alignments of both the shapes and angular momenta of galaxies throughout their history. These "intrinsic galaxy alignments" are known to exist, but are still poorly understood. This review will offer a pedagogical introduction to the current theories that describe intrinsic galaxy alignments, including the apparent difference in intrinsic alignment between early- and late-type galaxies and the latest efforts to model them analytically. It will then describe the ongoing efforts to simulate intrinsic alignments using both N-body and hydrodynamic simulations. Due to the relative youth of this field, there is still much to be done to understand intrinsic galaxy alignments and this review summarises the current state of the field, providing a solid basis for future work.
Comparing CTH simulations and experiments on explosively loaded rings
Braithwaite, C. H.; Aydelotte, Brady; Collins, Adam; Thadhani, Naresh; Williamson, David Martin
2012-03-01
A series of experiments were conducted on explosively loaded metallic rings for the purpose of studying fragmentation. In addition to the collection of fragments for analysis, the radial velocity of the expanding ring was measured with photon Doppler velocimetry (PDV) and the arrangement was imaged using high speed photography. Both the ring material and the material used as the explosive container were altered and the results compared with simulations performed in CTH. Good agreement was found between the simulations and the experiments. The maximum radial velocity attained was approximately 380 m/s, which was achieved through loading with a 5 g PETN-based charge.
Comparison of piping models for digital power plant simulators
International Nuclear Information System (INIS)
Sowers, G.W.
1979-08-01
Two piping models intended for use in a digital power plant simulator are compared. One is a finite difference approximation to the partial differential equation called PIPE, and the other is a function subroutine that acts as a delay operator called PDELAY. The two models are compared with respect to accuracy and execution time. In addition, the stability of the PIPE model is determined. The PDELAY model is found to execute faster than the PIPE model with comparable accuracy.
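A delay-operator pipe model in the spirit of PDELAY can be expressed as a plug-flow ring buffer: the outlet value now is simply the inlet value one transit time ago. This sketch illustrates the concept only; it is not the report's Fortran subroutine:

```python
from collections import deque

def make_pipe_delay(transit_steps):
    """PDELAY-style pipe model: a pure transport delay. The outlet value now
    is the inlet value transit_steps ago (plug flow, no mixing, no dynamics)."""
    buf = deque([0.0] * transit_steps, maxlen=transit_steps)
    def step(inlet):
        outlet = buf[0]      # oldest sample leaves the pipe
        buf.append(inlet)    # newest sample enters; maxlen drops the oldest
        return outlet
    return step

pipe = make_pipe_delay(5)
outlet = [pipe(temp) for temp in [1, 2, 3, 4, 5, 6, 7, 8]]
# outlet == [0, 0, 0, 0, 0, 1, 2, 3]: each inlet value reappears 5 steps later
```

The speed advantage over a finite-difference model like PIPE is clear: each time step costs one buffer read and one write, independent of the spatial resolution.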
Radiation Modeling with Direct Simulation Monte Carlo
Carlson, Ann B.; Hassan, H. A.
1991-01-01
Improvements in the modeling of radiation in low density shock waves with direct simulation Monte Carlo (DSMC) are the subject of this study. A new scheme to determine the relaxation collision numbers for excitation of electronic states is proposed. This scheme attempts to move the DSMC programs toward a more detailed modeling of the physics and more reliance on available rate data. The new method is compared with the current modeling technique and both techniques are compared with available experimental data. The differences in the results are evaluated. The test case is based on experimental measurements from the AVCO-Everett Research Laboratory electric arc-driven shock tube of a normal shock wave in air at 10 km/s and 0.1 Torr. The new method agrees with the available data as well as the results from the earlier scheme and is more easily extrapolated to different flow conditions.
Deep Drawing Simulations With Different Polycrystalline Models
Duchêne, Laurent; de Montleau, Pierre; Bouvier, Salima; Habraken, Anne Marie
2004-06-01
The goal of this research is to study the anisotropic material behavior during forming processes, represented by both complex yield loci and kinematic-isotropic hardening models. A first part of this paper describes the main concepts of the `Stress-strain interpolation' model that has been implemented in the non-linear finite element code Lagamine. This model consists of a local description of the yield locus based on the texture of the material through the full constraints Taylor's model. The texture evolution due to plastic deformations is computed throughout the FEM simulations. This `local yield locus' approach was initially linked to the classical isotropic Swift hardening law. Recently, a more complex hardening model was implemented: the physically-based microstructural model of Teodosiu. It takes into account intergranular heterogeneity due to the evolution of dislocation structures, which affects isotropic and kinematic hardening. The influence of the hardening model is compared to the influence of the texture evolution through deep drawing simulations.
Comparing Expert and Novice Driving Behavior in a Driving Simulator
Directory of Open Access Journals (Sweden)
Hiran B. Ekanayake
2014-02-01
Full Text Available This paper presents a study focused on comparing driving behavior of expert and novice drivers in a mid-range driving simulator with the intention of evaluating the validity of driving simulators for driver training. For the investigation, measurements of performance, psychophysiological measurements, and self-reported user experience under different conditions of driving tracks and driving sessions were analyzed. We calculated correlations between quantitative and qualitative measures to enhance the reliability of the findings. The experiment was conducted involving 14 experienced drivers and 17 novice drivers. The results indicate that driving behaviors of expert and novice drivers differ from each other in several ways but it heavily depends on the characteristics of the task. Moreover, our belief is that the analytical framework proposed in this paper can be used as a tool for selecting appropriate driving tasks as well as for evaluating driving performance in driving simulators.
THE MARK I BUSINESS SYSTEM SIMULATION MODEL
of a large-scale business simulation model as a vehicle for doing research in management controls. The major results of the program were the...development of the Mark I business simulation model and the Simulation Package (SIMPAC). SIMPAC is a method and set of programs facilitating the construction...of large simulation models. The object of this document is to describe the Mark I Corporation model, state why parts of the business were modeled as they were, and indicate the research applications of the model. (Author)
Same Content, Different Methods: Comparing Lecture, Engaged Classroom, and Simulation.
Raleigh, Meghan F; Wilson, Garland Anthony; Moss, David Alan; Reineke-Piper, Kristen A; Walden, Jeffrey; Fisher, Daniel J; Williams, Tracy; Alexander, Christienne; Niceler, Brock; Viera, Anthony J; Zakrajsek, Todd
2018-02-01
There is a push to use classroom technology and active teaching methods to replace didactic lectures as the most prevalent format for resident education. This multisite collaborative cohort study involving nine residency programs across the United States compared a standard slide-based didactic lecture, a facilitated group discussion via an engaged classroom, and a high-fidelity, hands-on simulation scenario for teaching the topic of acute dyspnea. The primary outcome was knowledge retention at 2 to 4 weeks. Each teaching method was assigned to three different residency programs in the collaborative according to local resources. Learning objectives were determined by faculty. Pre- and posttest questions were validated and utilized as a measurement of knowledge retention. Each site administered the pretest, taught the topic of acute dyspnea utilizing their assigned method, and administered a posttest 2 to 4 weeks later. Differences between the groups were compared using paired t-tests. A total of 146 residents completed the posttest, and scores increased from baseline across all groups. The average score increased 6% in the standard lecture group (n=47), 11% in the engaged classroom (n=53), and 9% in the simulation group (n=56). The differences in improvement between engaged classroom and simulation were not statistically significant. Compared to standard lecture, both engaged classroom and high-fidelity simulation were associated with a statistically significant improvement in knowledge retention. Knowledge retention after engaged classroom and high-fidelity simulation did not significantly differ. More research is necessary to determine if different teaching methods result in different levels of comfort and skill with actual patient care.
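The paired t-test used for the pre/post comparison can be computed directly; the scores below are invented for illustration, not the study's data:

```python
import math

def paired_t(pre, post):
    """Paired t statistic and degrees of freedom for pre/post test scores,
    the analysis used for knowledge retention in the study."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n), n - 1

# Illustrative percent-correct scores, not data from the study.
pre = [60, 55, 70, 65, 50, 62, 58, 68]
post = [68, 63, 75, 70, 61, 70, 65, 74]
t_stat, df = paired_t(pre, post)
```

Pairing each resident's posttest with their own pretest removes between-subject variation, which is why the design detects modest score gains with relatively few residents.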
Distributed simulation a model driven engineering approach
Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent
2016-01-01
Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.
Benchmark simulation models, quo vadis?
Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D
2013-01-01
As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.
Simulation modelling of fynbos ecosystems: Systems analysis and conceptual models
CSIR Research Space (South Africa)
Kruger, FJ
1985-03-01
Full Text Available -animal interactions. An additional two models, which expand aspects of the FYNBOS model, are described: a model for simulating canopy processes; and a Fire Recovery Simulator. The canopy process model will simulate ecophysiological processes in more detail than FYNBOS...
An introduction to enterprise modeling and simulation
Energy Technology Data Exchange (ETDEWEB)
Ostic, J.K.; Cannon, C.E. [Los Alamos National Lab., NM (United States). Technology Modeling and Analysis Group
1996-09-01
As part of an ongoing effort to continuously improve productivity, quality, and efficiency of both industry and Department of Energy enterprises, Los Alamos National Laboratory is investigating various manufacturing and business enterprise simulation methods. A number of enterprise simulation software models are being developed to enable engineering analysis of enterprise activities. In this document the authors define the scope of enterprise modeling and simulation efforts, and review recent work in enterprise simulation at Los Alamos National Laboratory as well as at other industrial, academic, and research institutions. References of enterprise modeling and simulation methods and a glossary of enterprise-related terms are provided.
Simulation and Modeling Methodologies, Technologies and Applications
Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno
2014-01-01
This book includes extended and revised versions of a set of selected papers from the 2012 International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2012) which was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and held in Rome, Italy. SIMULTECH 2012 was technically co-sponsored by the Society for Modeling & Simulation International (SCS), GDR I3, Lionphant Simulation, Simulation Team and IFIP and held in cooperation with AIS Special Interest Group of Modeling and Simulation (AIS SIGMAS) and the Movimento Italiano Modellazione e Simulazione (MIMOS).
International Nuclear Information System (INIS)
Araiza M, E.; Ortiz V, J.; Martinez C, E.; Amador G, R.; Castillo D, R.
2016-09-01
This work presents the results of simulating the water hammer caused by the instantaneous closure of a valve in a recirculation loop, using two different loop nodalizations. The first corresponds to the traditional model, which uses only two jet pumps to represent the twenty jet pumps of the two recirculation loops of a BWR. The second nodalization models each of the ten jet pumps of each recirculation loop. The results obtained from the execution of both models are compared using important variables, such as pressures and mass flow rates, for the same components of both models. In addition, the maximum pressure generated in the pipe located upstream of the water hammer, relative to the design pressure of the pipe, is compared for each arrangement. (Author)
Modeling and Simulation Techniques for Large-Scale Communications Modeling
National Research Council Canada - National Science Library
Webb, Steve
1997-01-01
Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number streams in simulations is easy to implement and can provide significant savings in comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.
A physiological production model for cacao : results of model simulations
Zuidema, P.A.; Leffelaar, P.A.
2002-01-01
CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.
Modelling and simulation of railway cable systems
Energy Technology Data Exchange (ETDEWEB)
Teichelmann, G.; Schaub, M.; Simeon, B. [Technische Univ. Muenchen, Garching (Germany). Zentrum Mathematik M2
2005-12-15
Mathematical models and numerical methods for the computation of both static equilibria and dynamic oscillations of railroad catenaries are derived and analyzed. These cable systems form a complex network of string and beam elements and lead to coupled partial differential equations in space and time where constraints and corresponding Lagrange multipliers express the interaction between carrier, contact wire, and pantograph head. For computing static equilibria, three different algorithms are presented and compared, while the dynamic case is treated by a finite element method in space, combined with stabilized time integration of the resulting differential algebraic system. Simulation examples based on reference data from industry illustrate the potential of such computational tools. (orig.)
Comparing Expert Driving Behavior in Real World and Simulator Contexts
Directory of Open Access Journals (Sweden)
Hiran B. Ekanayake
2013-01-01
Full Text Available Computer games are increasingly used for purposes beyond mere entertainment, and current hi-tech simulators can provide quite naturalistic contexts for purposes such as traffic education. One of the critical concerns in this area is the validity, or transferability, of skills acquired in a simulator to the real world context. In this paper, we present our work in which we compared driving in the real world with driving in the simulator at two levels: by using performance measures alone, and by combining psychophysiological measures with performance measures. For our study, we gathered data using questionnaires as well as by logging vehicle dynamics, environmental conditions, video data, and users' psychophysiological measurements. For the analysis, we used several novel approaches, such as scatter plots to visualize driving tasks of different contexts and vigilance estimators derived from electroencephalographic (EEG) data, to obtain important results about the differences between driving in the two contexts. We believe that both the experimental procedures and the findings of our experiment are very important to the field of serious games, concerning how to evaluate the fitness of driving simulators and measure driving performance.
Simulation modeling and analysis with Arena
Altiok, Tayfur
2007-01-01
Simulation Modeling and Analysis with Arena is a highly readable textbook which treats the essentials of the Monte Carlo discrete-event simulation methodology, and does so in the context of the popular Arena simulation environment. It treats simulation modeling as an in-vitro laboratory that facilitates the understanding of complex systems and experimentation with what-if scenarios in order to estimate their performance metrics. The book contains chapters on the simulation modeling methodology and the underpinnings of discrete-event systems, as well as the relevant underlying probability, statistics, stochastic processes, input analysis, model validation and output analysis. All simulation-related concepts are illustrated in numerous Arena examples, encompassing production lines, manufacturing and inventory systems, transportation systems, and computer information systems in networked settings. It introduces the concept of discrete-event Monte Carlo simulation, the most commonly used methodology for modeling...
Comparative simulation of a fluidised bed reformer using industrial process simulators
Bashiri, Hamed; Sotudeh-Gharebagh, Rahmat; Sarvar-Amini, Amin; Haghtalab, Ali; Mostoufi, Navid
2016-08-01
A simulation model is developed with commercial simulators in order to predict the performance of a fluidised bed reformer. As many physical and chemical phenomena take place in the reformer, two sub-models (hydrodynamic and reaction) are needed. The hydrodynamic sub-model is based on the dynamic two-phase model, and the reaction sub-model is derived from the literature. In the overall model, the bed is divided into several sections; in each section, the gas flow is considered plug flow through the bubble phase and perfectly mixed through the emulsion phase. Experimental data from the literature were used to validate the model. Close agreement was found between the experimental data and the models built in both ASPEN Plus (ASPEN PLUS 2004 ©) and HYSYS (ASPEN HYSYS 2004 ©), with the number of reactor sections ranging from one to four; the experimental conversion lies between the predictions for one and four sections, as expected. The model proposed in this work can be used as a framework for developing more elaborate models of non-ideal reactors inside process simulators.
A comparative analysis of currently used microscopic and macroscopic traffic simulation software
International Nuclear Information System (INIS)
Ratrout Nedal T; Rahman Syed Masiur
2009-01-01
Significant advances in information technology have contributed to the increased development of traffic simulation models, including microscopic models, and have broadened the areas of application, ranging from the modeling of specific components of the transportation system to whole networks with different kinds of intersections and links, in a few cases even combining travel demand models. This paper reviews the features of traditionally used macroscopic and microscopic traffic simulation models, along with a comparative analysis focusing on freeway operations, urban congested networks, project-level emission modeling, and variations in delay and capacity estimates. The models AIMSUN, CORSIM, and VISSIM are found to be suitable for congested arterials and freeways, and for integrated networks of freeways and surface streets. The features of AIMSUN are favorable for creating large urban and regional networks. The models AIMSUN, PARAMICS, INTEGRATION, and CORSIM are potentially useful for Intelligent Transportation Systems (ITS). A few simulation models, such as MITSIMLab, were developed with a specific focus on ITS. The TRAF-family and HUTSIM models attempt a system-level simulation approach and develop open environments in which several analysis models can be used interactively to solve traffic simulation problems. In Saudi Arabia, use of simulation software with the capability of analyzing an integrated system of freeways and surface streets has not been reported, although calibration and validation of simulation software for either freeways or surface streets has been. This paper suggests that researchers evaluate the state-of-the-art simulation tools and find the tools or approaches suitable for the local conditions of Saudi Arabia. (author)
Comparing the Discrete and Continuous Logistic Models
Gordon, Sheldon P.
2008-01-01
The solutions of the discrete logistic growth model based on a difference equation and the continuous logistic growth model based on a differential equation are compared and contrasted. The investigation is conducted using a dynamic interactive spreadsheet. (Contains 5 figures.)
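The contrast the abstract investigates can also be reproduced outside a spreadsheet. Below is a minimal Python sketch (parameter values are illustrative, not taken from the article) comparing the difference-equation model with the closed-form solution of the differential-equation model.

```python
# Sketch: discrete logistic difference equation vs. the closed-form
# solution of the continuous logistic ODE. r, K, x0 are illustrative.
import math

def discrete_logistic(x0, r, K, steps):
    """Iterate x_{n+1} = x_n + r*x_n*(1 - x_n/K)."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + r * x * (1 - x / K))
    return xs

def continuous_logistic(x0, r, K, t):
    """Closed-form solution of dx/dt = r*x*(1 - x/K)."""
    return K / (1 + (K - x0) / x0 * math.exp(-r * t))

r, K, x0 = 0.1, 100.0, 5.0
disc = discrete_logistic(x0, r, K, 200)
cont = [continuous_logistic(x0, r, K, n) for n in range(201)]

# For small r the two models nearly coincide; both saturate at K.
print(abs(disc[200] - cont[200]))
```

For small growth rates the two curves are almost indistinguishable; the qualitative differences the paper contrasts (e.g. oscillation and chaos in the discrete model) appear only as r grows.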
Network Modeling and Simulation A Practical Perspective
Guizani, Mohsen; Khan, Bilal
2010-01-01
Network Modeling and Simulation is a practical guide to using modeling and simulation to solve real-life problems. The authors give a comprehensive exposition of the core concepts in modeling and simulation, and then systematically address the many practical considerations faced by developers in modeling complex large-scale systems. The authors provide examples from computer and telecommunication networks and use these to illustrate the process of mapping generic simulation concepts to domain-specific problems in different industries and disciplines. Key features: Provides the tools and strate...
Simulating auditory and visual sensorineural prostheses: a comparative review
Hallum, L. E.; Dagnelie, G.; Suaning, G. J.; Lovell, N. H.
2007-03-01
Microelectronic vision prostheses propose to render luminous spots (so-called phosphenes) in the visual field of the otherwise blind subject by way of an implanted array of stimulating electrodes, and in doing so to restore some spatial vision. There are now many research teams worldwide working towards a therapeutic device, analogous to the cochlear implant, for the profoundly blind. Despite the similarities between the cochlear implant and vision prostheses, there are few instances in the literature where the two approaches are compared and contrasted with a mind to informing the science and engineering of the latter. This is the focus of the present review; specifically, our interest is psychophysics and signal processing. Firstly, we examine the cochlear implant and review a handful of psychophysical studies: the acoustic simulation of cochlear implants and the methods used. We focus on the use of normally hearing subjects (played coloured noise bands or sine waves) as a means of investigating cochlear-implant efficacy and speech processing algorithms. These results provide guidance to vision researchers, for they address the interpretation of simulation data and flag key areas, such as 'artificial' perception in the presence of noise, that require experimental work in coming years. Secondly, we provide an up-to-date review of the body of analogous psychophysical work: the visual simulation, involving normal observers, of microelectronic vision prostheses. These simulations allow predictions as to the likely clinical efficacy of the prosthesis; indeed, results to date suggest that a number on the order of 100 implanted electrodes will afford subjects mobility and recognition of faces (and other complex stimuli), while even fewer electrodes facilitate reading printed text and very simple visuomanual tasks. Further, the simulations allow investigations of image and signal processing strategies, plus they provide researchers in the field, and other interested persons...
Modelling and simulation of a heat exchanger
Xia, Lei; Deabreu-Garcia, J. Alex; Hartley, Tom T.
1991-01-01
Two models for two different control systems are developed for a parallel heat exchanger. First, by spatially lumping the heat exchanger dynamics, a good approximate model of high system order is produced. Model reduction techniques are then applied to obtain low-order models that are suitable for dynamic analysis and control design. The simulation method is discussed to ensure a valid simulation result.
[Modeling and Simulation of Spectral Polarimetric BRDF].
Ling, Jin-jiang; Li, Gang; Zhang, Ren-bin; Tang, Qian; Ye, Qiu
2016-01-01
Under polarized-light conditions, the reflection from an object's surface is affected by many factors: refractive index, surface roughness, and the angle of incidence. Because a rough surface exhibits different polarized reflection characteristics at different wavelengths, a spectral polarimetric BRDF model based on Kirchhoff theory is proposed. A spectral model of the complex refractive index is built by combining spectral models of the refractive index and the extinction coefficient, which were obtained from known complex refractive index values at different wavelengths. A spectral model of surface roughness is then derived from the classical surface roughness measurement method combined with the Fresnel reflection function. Substituting the spectral models of refractive index and roughness into the BRDF model yields the spectral polarimetric BRDF model. Simulation results in which only the refractive index varies with wavelength (roughness constant), and in which both refractive index and roughness vary with wavelength, are compared with the original model and with other published results; the comparison shows that the spectral polarimetric BRDF model represents the polarization characteristics of the surface accurately and can provide a reliable basis for polarization remote sensing and for the classification of materials.
Comparative Distributions of Hazard Modeling Analysis
Directory of Open Access Journals (Sweden)
Rana Abdul Wajid
2006-07-01
Full Text Available In this paper we present a comparison among the distributions used in hazard analysis. Simulation techniques have been used to study the behaviour of hazard distribution models. The fundamentals of hazard analysis are discussed in terms of failure criteria. We illustrate the flexibility of the hazard modeling distribution, which can approximate different distributions.
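As a hedged illustration of the kind of comparison described (the distributions and parameters here are generic examples, not necessarily those used in the paper), the hazard functions of the exponential and Weibull distributions can be compared directly:

```python
# Hazard functions h(t) = f(t)/S(t) of two distributions common in
# hazard/failure analysis. Parameter values are illustrative only.
def exponential_hazard(t, rate):
    return rate                                  # constant hazard: memoryless

def weibull_hazard(t, shape, scale):
    return (shape / scale) * (t / scale) ** (shape - 1)

# A Weibull with shape > 1 has an increasing hazard (wear-out failures),
# shape < 1 a decreasing one (infant mortality), and shape = 1 reduces
# exactly to the exponential: this is the flexibility the paper studies.
print(weibull_hazard(2.0, 1.0, 1.0), exponential_hazard(2.0, 1.0))
```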
Impact of reactive settler models on simulated WWTP performance
DEFF Research Database (Denmark)
Gernaey, Krist; Jeppsson, Ulf; Batstone, Damien J.
2006-01-01
For an ASM1 case study, simulations with a whole plant model including the non-reactive Takacs settler model are used as a reference, and are compared to simulation results considering two reactive settler models. The first is a return sludge model block removing oxygen and a user-defined fraction of nitrate, combined with a non-reactive Takacs settler. The second is a fully reactive ASM1 Takacs settler model. Simulations with the ASM1 reactive settler model predicted a 15.3% and 7.4% improvement of the simulated N removal performance, for constant (steady-state) and dynamic influent conditions respectively. The oxygen/nitrate return sludge model block predicts a 10% improvement of N removal performance under dynamic conditions, and might be the better modelling option for ASM1 plants: it is computationally more efficient and it will not overrate the importance of decay processes in the settler.
Stabilising the global greenhouse. A simulation model
International Nuclear Information System (INIS)
Michaelis, P.
1993-01-01
This paper investigates the economic implications of a comprehensive approach to greenhouse policies that strives to stabilise the atmospheric concentration of greenhouse gases at an ecologically determined threshold level. In a theoretical optimisation model, conditions for an efficient allocation of abatement effort among pollutants and over time are derived. The model is empirically specified and adapted to a dynamic GAMS algorithm. By various simulation runs for the period 1990 to 2110, the economics of greenhouse gas accumulation are explored. In particular, the long-run costs associated with the above stabilisation target are evaluated for three different policy scenarios: i) a comprehensive approach that covers all major greenhouse gases simultaneously, ii) a piecemeal approach that is limited to reducing CO2 emissions, and iii) a ten-year moratorium that postpones abatement effort until new scientific evidence on the greenhouse effect becomes available. Comparing the simulation results suggests that a piecemeal approach would considerably increase total cost, whereas a ten-year moratorium might be reasonable even if the probability of 'good news' is comparatively small. (orig.)
Lattice simulation of 2d Gross-Neveu-type models
International Nuclear Information System (INIS)
Limmer, M.; Gattringer, C.; Hermann, V.
2006-01-01
Full text: We discuss a Monte Carlo simulation of 2d Gross-Neveu-type models on the lattice. The four-Fermi interaction is written as a Gaussian integral with an auxiliary field and the fermion determinant is included by reweighting. We present results for bulk quantities and correlators and compare them to a simulation using a fermion-loop representation. (author)
Modeling and Simulation of Low Voltage Arcs
Ghezzi, L.; Balestrero, A.
2010-01-01
Modeling and Simulation of Low Voltage Arcs is an attempt to improve the physical understanding, mathematical modeling and numerical simulation of the electric arcs that are found during current interruptions in low voltage circuit breakers. An empirical description is gained by refined electrical...
A Comparative of business process modelling techniques
Tangkawarow, I. R. H. T.; Waworuntu, J.
2016-04-01
There are now many business process modelling techniques. This article researches the differences between them: for each technique, the definition and structure are explained. The paper presents a comparative analysis of several popular business process modelling techniques. The comparative framework is based on two criteria: notation, and how the technique works when implemented for Somerleyton Animal Park. Each technique is summarized with its advantages and disadvantages. The final conclusion recommends business process modelling techniques that are easy to use, and serves as a basis for evaluating further modelling techniques.
Comparative evaluation of photovoltaic MPP trackers: A simulated approach
Directory of Open Access Journals (Sweden)
Barnam Jyoti Saharia
2016-12-01
Full Text Available This paper makes a comparative assessment of three popular maximum power point tracking (MPPT) algorithms used in photovoltaic power generation. A 120 Wp PV module, connected to a suitable resistive load through a boost converter, is taken as the reference for the study. Two profiles, varying solar insolation at fixed temperature and varying temperature at fixed solar insolation, are used to test the tracking efficiency of three MPPT algorithms based on the perturb and observe (P&O), fuzzy logic, and neural network techniques. MATLAB/SIMULINK simulation software is used for the assessment, and the results indicate that the fuzzy logic-based tracker responds more effectively to variations in both solar insolation and temperature profiles than the P&O and neural network-based techniques.
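The P&O technique evaluated in the paper can be sketched in a few lines. The toy power-voltage curve and step size below are assumptions for illustration, not the 120 Wp module model used in the study.

```python
# Illustrative sketch of perturb and observe (P&O) MPPT logic on a toy
# P-V curve; the quadratic power model and step size are assumptions.

def pv_power(v):
    """Toy power-voltage curve with a maximum power point at v = 17.0 V."""
    return max(0.0, 120.0 - 0.8 * (v - 17.0) ** 2)

def perturb_and_observe(v0, step=0.1, iterations=500):
    """Climb the P-V curve: keep perturbing in the direction that raised power."""
    v, p, direction = v0, pv_power(v0), +1
    for _ in range(iterations):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:            # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe(v0=12.0)
print(round(v_mpp, 1), round(p_mpp, 1))
```

The sketch also shows the known weakness of P&O that the fuzzy tracker mitigates: at steady state the operating point oscillates around the maximum power point by one step size.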
Application of Hidden Markov Models in Biomolecular Simulations.
Shukla, Saurabh; Shamsi, Zahra; Moffett, Alexander S; Selvam, Balaji; Shukla, Diwakar
2017-01-01
Hidden Markov models (HMMs) provide a framework to analyze large trajectories from biomolecular simulation datasets. HMMs decompose the conformational space of a biological molecule into a finite number of states that interconvert among each other with certain rates. HMMs simplify long-timescale trajectories for human comprehension, and allow comparison of simulations with experimental data. In this chapter, we provide an overview of building HMMs for analyzing biomolecular simulation datasets. We demonstrate the procedure by building a hidden Markov model for a Met-enkephalin peptide simulation dataset and comparing the timescales of the process.
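A minimal sketch of the Markov-model idea (an illustration, not code from the chapter): estimating a two-state transition matrix from a discretized trajectory and computing the implied relaxation timescale. A full HMM additionally infers hidden states, e.g. via Baum-Welch; here the states are taken as directly observed.

```python
# Two-state Markov model from a discretized trajectory, with the implied
# interconversion timescale. Toy data; not from the chapter's dataset.
import math

def transition_matrix(traj):
    """Count transitions in a 0/1 state sequence and row-normalize."""
    counts = [[0, 0], [0, 0]]
    for a, b in zip(traj, traj[1:]):
        counts[a][b] += 1
    return [[c / sum(row) for c in row] for row in counts]

def implied_timescale(T, lag=1):
    """Relaxation timescale from the second eigenvalue; for a 2x2 stochastic
    matrix the eigenvalues are 1 and (T[0][0] + T[1][1] - 1)."""
    lam2 = T[0][0] + T[1][1] - 1.0
    return -lag / math.log(abs(lam2))

# Toy trajectory: long dwells in each state imply slow interconversion.
traj = ([0] * 50 + [1] * 50) * 20
T = transition_matrix(traj)
print(T[0][1], T[1][0], round(implied_timescale(T), 1))
```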
Efficient Turbulence Modeling for CFD Wake Simulations
DEFF Research Database (Denmark)
van der Laan, Paul
Wind turbine wakes can cause 10-20% annual energy losses in wind farms, and wake turbulence can decrease the lifetime of wind turbine blades. One way of estimating these effects is the use of computational fluid dynamics (CFD) to simulate wind turbine wakes in the atmospheric boundary layer. Since this flow is in the high Reynolds number regime, it is mainly dictated by turbulence. As a result, the turbulence modeling in CFD dominates the wake characteristics, especially in Reynolds-averaged Navier-Stokes (RANS). The present work is dedicated to studying and developing RANS-based turbulence models ... verified with a grid dependency study. With respect to the standard k-ε EVM, the k-ε-fP EVM compares better with measurements of the velocity deficit, especially in the near wake, which translates to improved power deficits for the first wind turbines in a row. When the CFD methodology is applied to a large...
Model improvements to simulate charging in SEM
Arat, K. T.; Klimpel, T.; Hagen, C. W.
2018-03-01
Charging of insulators is a complex phenomenon to simulate, since the accuracy of the simulations is very sensitive to the interaction of electrons with matter and electric fields. In this study, we report model improvements for a previously developed Monte Carlo simulator to more accurately simulate samples that charge. The improvements include both the modelling of low-energy electron scattering and the charging of insulators. The new first-principle scattering models provide a more realistic charge distribution cloud in the material, and a better match between non-charging simulations and experimental results. Improvements to the charging models mainly focus on redistribution of the charge carriers in the material through electron-beam-induced conductivity (EBIC) and a breakdown model, leading to a smoother distribution of the charges. Combined with a more accurate tracing of low-energy electrons in the electric field, we managed to reproduce the dynamically changing charging contrast due to an induced positive surface potential.
Dispersion modeling by kinematic simulation: Cloud dispersion model
International Nuclear Information System (INIS)
Fung, J C H; Perkins, R J
2008-01-01
A new technique has been developed to compute mean and fluctuating concentrations in complex turbulent flows (tidal currents near a coast, and the deep ocean). An initial distribution of material is discretized into many small clouds, which are advected by a combination of the mean flow and large-scale turbulence. The turbulence can be simulated either by kinematic simulation (KS) or by direct numerical simulation. The clouds also diffuse relative to their centroids; the statistics for this are obtained from a separate calculation of the growth of individual clouds in small-scale turbulence, generated by KS. The ensemble of discrete clouds is periodically re-discretized, to limit the size of the small clouds and prevent overlapping. The model is illustrated with simulations of dispersion in uniform flow, and the results are compared with analytic, steady-state solutions. The aim of this study is to understand how pollutants disperse in a turbulent flow through a numerical simulation of fluid particle motion in a random flow field generated by Fourier modes. Although this homogeneous turbulence is a rather 'simple' flow, it represents a building block toward understanding pollutant dispersion in more complex flows. The results presented here are preliminary in nature, but we expect that similar qualitative results should be observed in a genuine turbulent flow.
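The Fourier-mode construction mentioned in the abstract can be sketched as follows (illustrative parameters, not the authors' flow field): a 2D velocity field built from random Fourier modes, each made divergence-free by choosing its amplitude perpendicular to its wavevector, used to advect a single fluid particle.

```python
# Kinematic-simulation sketch: incompressible 2D random Fourier-mode field.
# Mode counts, amplitudes, and the Euler time step are illustrative.
import math, random

random.seed(1)

class KinematicField:
    """u(x) = sum_n A_n * perp(k_n) * cos(k_n . x + phi_n); each mode is
    divergence-free because its amplitude is perpendicular to k_n."""
    def __init__(self, n_modes=16):
        self.modes = []
        for _ in range(n_modes):
            theta = random.uniform(0, 2 * math.pi)
            k = (math.cos(theta), math.sin(theta))
            amp = random.uniform(0.1, 1.0)
            phase = random.uniform(0, 2 * math.pi)
            self.modes.append((k, amp, phase))

    def velocity(self, x, y):
        u = v = 0.0
        for (kx, ky), amp, phase in self.modes:
            c = amp * math.cos(kx * x + ky * y + phase)
            u += -ky * c          # perpendicular direction (-ky, kx)
            v += kx * c
        return u, v

field = KinematicField()
x, y, dt = 0.0, 0.0, 0.01
for _ in range(1000):             # simple Euler advection of one particle
    u, v = field.velocity(x, y)
    x, y = x + u * dt, y + v * dt
print(round(x, 3), round(y, 3))
```

Ensembles of such trajectories (or clouds) then yield the dispersion statistics the abstract discusses.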
COMPARATIVE ANALYSIS OF SOFTWARE DEVELOPMENT MODELS
Sandeep Kaur*
2017-01-01
No geek is unfamiliar with the concept of the software development life cycle (SDLC). This research deals with the various SDLC models, covering the waterfall, spiral, iterative, agile, V-shaped, and prototype models. In the modern era, all software systems are fallible, as none can be built with complete certainty. This paper therefore compares all aspects of the various models, with their pros and cons, so that it is easier to choose a particular model at the time of need.
DEFF Research Database (Denmark)
Edelfeldt, Stina; Fritzson, Peter
2008-01-01
In this paper, two ecological models of nitrogen processes in treatment wetlands have been evaluated and compared. These models were implemented, simulated, and visualized using the Modelica modelling and simulation language [P. Fritzson, Principles of Object-Oriented Modelling and Simulation with Modelica 2.1 (Wiley-IEEE Press, USA, 2004)] and an associated tool. The differences and similarities between the MathModelica Model Editor and three other ecological modelling tools have also been evaluated. The results show that the models can well be modelled and simulated in the MathModelica Model Editor, and that nitrogen decrease in a constructed treatment wetland should be described and simulated using the Nitrification/Denitrification model, as this model has the highest overall quality score and provides a more variable environment.
Simulation modeling for the health care manager.
Kennedy, Michael H
2009-01-01
This article addresses the use of simulation software to solve administrative problems faced by health care managers. Spreadsheet add-ins, process simulation software, and discrete event simulation software are available at a range of costs and complexity. All use the Monte Carlo method to realistically integrate probability distributions into models of the health care environment. Problems typically addressed by health care simulation modeling are facility planning, resource allocation, staffing, patient flow and wait time, routing and transportation, supply chain management, and process improvement.
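As a hedged example of the Monte Carlo approach the article describes, a single-server patient-flow model with exponentially distributed interarrival and service times can be simulated in a few lines; the clinic parameters are illustrative assumptions, not from the article.

```python
# Monte Carlo patient-flow sketch: sampling probability distributions
# inside a single-server clinic model (Lindley recursion for waits).
import random

random.seed(42)

def simulate_clinic(n_patients, mean_interarrival, mean_service):
    """Average wait with exponential arrivals and service (M/M/1-like)."""
    wait, total_wait = 0.0, 0.0
    for _ in range(n_patients):
        interarrival = random.expovariate(1.0 / mean_interarrival)
        service = random.expovariate(1.0 / mean_service)
        # next patient's wait = max(0, previous wait + service - gap)
        wait = max(0.0, wait + service - interarrival)
        total_wait += wait
    return total_wait / n_patients

avg_wait = simulate_clinic(100_000, mean_interarrival=10.0, mean_service=8.0)
print(round(avg_wait, 1))   # M/M/1 theory predicts a mean wait near 32 min
```

Dedicated packages (spreadsheet add-ins or discrete-event tools, as the article surveys) wrap exactly this kind of sampling in modelling and reporting conveniences.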
Protein Simulation Data in the Relational Model.
Simms, Andrew M; Daggett, Valerie
2012-10-01
High performance computing is leading to unprecedented volumes of data. Relational databases offer a robust and scalable model for storing and analyzing scientific data. However, these features do not come without a cost: significant design effort is required to build a functional and efficient repository. Modeling protein simulation data in a relational database presents several challenges: the data captured from individual simulations are large and multi-dimensional, and must integrate with both simulation software and external data sites. Here we present the dimensional design and relational implementation of a comprehensive data warehouse for storing and analyzing molecular dynamics simulations using SQL Server.
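A miniature, hypothetical version of such a dimensional design (table and column names are illustrative, not the authors' SQL Server schema) can be sketched with SQLite:

```python
# Toy star schema: a fact table of per-frame measurements keyed to a
# simulation dimension. Names and values are illustrative assumptions.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_simulation (
        sim_id   INTEGER PRIMARY KEY,
        protein  TEXT,
        temp_k   REAL
    );
    CREATE TABLE fact_frame (
        sim_id   INTEGER REFERENCES dim_simulation(sim_id),
        frame    INTEGER,
        rmsd_a   REAL,   -- RMSD from the starting structure, in angstroms
        PRIMARY KEY (sim_id, frame)
    );
""")
con.execute("INSERT INTO dim_simulation VALUES (1, 'protein_a', 298.0)")
con.executemany("INSERT INTO fact_frame VALUES (1, ?, ?)",
                [(i, 0.5 + 0.01 * i) for i in range(100)])

# Typical warehouse query: aggregate facts grouped by a dimension attribute.
row = con.execute("""
    SELECT d.protein, COUNT(*), AVG(f.rmsd_a)
    FROM fact_frame f JOIN dim_simulation d USING (sim_id)
    GROUP BY d.protein
""").fetchone()
print(row)
```

The dimensional split is what makes such queries scale: facts stay narrow and append-only, while descriptive attributes live once in the dimension tables.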
Comparing the line broadened quasilinear model to Vlasov code
International Nuclear Information System (INIS)
Ghantous, K.; Berk, H. L.; Gorelenkov, N. N.
2014-01-01
The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of the Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009) and M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve more accuracy compared to the results of the Vlasov solver, both as regards a mode amplitude's time evolution to a saturated state and its final steady-state amplitude in the parameter space of the model's applicability. However, the regions of stability as predicted by the LBQ model and BOT are found to differ significantly from each other: the solutions of the BOT simulations have a larger region of instability than the LBQ simulations.
DEFF Research Database (Denmark)
Ibrom, A.; Jarvis, P.G.; Clement, R.
2006-01-01
...photosynthetically-active-radiation-induced biophysical variability in the simulated Pg. Analysis of residuals identified only small systematic differences between the modeled flux estimates and turbulent flux measurements at high vapor pressure saturation deficits. The merits and limitations of comparative analysis for quality evaluation of both...
Comparing the IRT Pre-equating and Section Pre-equating: A Simulation Study.
Hwang, Chi-en; Cleary, T. Anne
The results obtained from two basic types of test pre-equating were compared: item response theory (IRT) pre-equating and section pre-equating (SPE). The simulated data were generated from a modified three-parameter logistic model with a constant guessing parameter. Responses of two replication samples of 3000 examinees on two 72-item…
Modeling and simulation of blood collection systems.
Alfonso, Edgar; Xie, Xiaolan; Augusto, Vincent; Garraud, Olivier
2012-03-01
This paper addresses the modeling and simulation of blood collection systems in France, for both fixed-site and mobile blood collection, with walk-in whole blood donors and scheduled plasma and platelet donors. Petri net models are first proposed to precisely describe the different blood collection processes, donor behaviors, their material/human resource requirements, and the relevant regulations. The Petri net models are then enriched with quantitative modeling of donor arrivals, donor behaviors, activity times, and resource capacity. Relevant performance indicators are defined. The resulting simulation models can be straightforwardly implemented with any simulation language. Numerical experiments are performed to show how the simulation models can be used to select, for different walk-in donor arrival patterns, appropriate human resource planning and donor appointment strategies.
Modeling and Simulation of Matrix Converter
DEFF Research Database (Denmark)
Liu, Fu-rong; Klumpner, Christian; Blaabjerg, Frede
2005-01-01
This paper discusses the modeling and simulation of the matrix converter. Two models of the matrix converter are presented: one based on indirect space vector modulation and the other based on the power balance equation. The basis of these two models is given and the modeling process is introduced...
Modeling and simulation of Indus-2 RF feedback control system
International Nuclear Information System (INIS)
Sharma, D.; Bagduwal, P.S.; Tiwari, N.; Lad, M.; Hannurkar, P.R.
2012-01-01
The Indus-2 synchrotron radiation source has four RF stations along with their feedback control systems. For higher beam energy and current operation, the amplitude and phase feedback control systems of Indus-2 are being upgraded. To understand the behaviour of the amplitude and phase control loops under different operating conditions, modelling and simulation of the RF feedback control system was carried out. An RF cavity baseband I/Q model was created because of its close correspondence with the actual implementation and its better computational efficiency, which makes the simulation faster. Correspondence between the cavity baseband and RF models is confirmed by comparing their simulation results. Low Level RF (LLRF) feedback control system simulation is done using the same cavity baseband I/Q model. Error signals are intentionally generated and the response of the closed-loop system is observed. The simulation will help in optimizing parameters of the upgraded LLRF system for higher beam energy and current operation. (author)
Is it Worth Comparing Different Bankruptcy Models?
Directory of Open Access Journals (Sweden)
Miroslava Dolejšová
2015-01-01
The aim of this paper is to compare the performance of small enterprises in the Zlín and Olomouc Regions. These enterprises were assessed using the Altman Z-Score model, the IN05 model, the Zmijewski model and the Springate model. The batch selected for this analysis included 16 enterprises from the Zlín Region and 16 enterprises from the Olomouc Region. Financial statements subjected to the analysis are from 2006 and 2010. The statistical data analysis was performed using the one-sample z-test for proportions and the paired t-test. The outcomes of the evaluation run using the Altman Z-Score model, the IN05 model and the Springate model revealed the enterprises to be financially sound, but the Zmijewski model identified them as being insolvent. The one-sample z-test for proportions confirmed that at least 80% of these enterprises show a sound financial condition. A comparison of all models has emphasized the substantial difference produced by the Zmijewski model. The paired t-test showed that the financial performance of small enterprises had remained the same during the years involved. It is recommended that small enterprises assess their financial performance using two different bankruptcy models. They may wish to combine the Zmijewski model with any bankruptcy model (the Altman Z-Score model, the IN05 model or the Springate model) to ensure a proper method of analysis.
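The one-sample z-test for proportions mentioned in this record can be sketched in a few lines; the counts and significance threshold below are illustrative assumptions, not figures from the paper.

```python
from math import sqrt

def one_sample_z_test(successes: int, n: int, p0: float) -> float:
    """z statistic for H0: p = p0 vs H1: p > p0 (normal approximation)."""
    p_hat = successes / n
    se = sqrt(p0 * (1.0 - p0) / n)  # standard error under H0
    return (p_hat - p0) / se

# Illustrative numbers (assumed, not from the paper): 30 of 32 enterprises sound
z = one_sample_z_test(30, 32, 0.80)
# H0 is rejected at the 5% level (one-sided) when z > 1.645
```

The normal approximation is reasonable here only when n·p0 and n·(1 - p0) are both large enough; with samples of 16 or 32 enterprises an exact binomial test would be a defensible alternative.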
Simulation models for tokamak plasmas
International Nuclear Information System (INIS)
Dimits, A.M.; Cohen, B.I.
1992-01-01
Two developments in the nonlinear simulation of tokamak plasmas are described: (A) Simulation algorithms that use quasiballooning coordinates have been implemented in a 3D fluid code and a 3D partially linearized (Δf) particle code. In quasiballooning coordinates, one of the coordinate directions is closely aligned with that of the magnetic field, allowing both optimal use of the grid resolution for structures highly elongated along the magnetic field as well as implementation of the correct periodicity conditions with no discontinuities in the toroidal direction. (B) Progress on the implementation of a like-particle collision operator suitable for use in partially linearized particle codes is reported. The binary collision approach is shown to be unusable for this purpose. The algorithm under development is a complete version of the test-particle plus source-field approach that was suggested and partially implemented by Xu and Rosenbluth.
Simulated full-waveform lidar compared to Riegl VZ-400 terrestrial laser scans
Kim, Angela M.; Olsen, Richard C.; Béland, Martin
2016-05-01
A 3-D Monte Carlo ray-tracing simulation of LiDAR propagation models the reflection, transmission and absorption interactions of laser energy with materials in a simulated scene. In this presentation, a model scene consisting of a single Victorian Boxwood (Pittosporum undulatum) tree is generated by the high-fidelity tree voxel model VoxLAD using high-spatial resolution point cloud data from a Riegl VZ-400 terrestrial laser scanner. The VoxLAD model uses terrestrial LiDAR scanner data to determine Leaf Area Density (LAD) measurements for small volume voxels (20 cm sides) of a single tree canopy. VoxLAD is also used in a non-traditional fashion in this case to generate a voxel model of wood density. Information from the VoxLAD model is used within the LiDAR simulation to determine the probability of LiDAR energy interacting with materials at a given voxel location. The LiDAR simulation is defined to replicate the scanning arrangement of the Riegl VZ-400; the resulting simulated full-waveform LiDAR signals compare favorably to those obtained with the Riegl VZ-400 terrestrial laser scanner.
Modeling and simulation of pressurized water reactor power plant
International Nuclear Information System (INIS)
Wang, S.J.
1983-01-01
Two kinds of balance of plant (BOP) models of a pressurized water reactor (PWR) system are developed in this work: the detailed BOP model and the simple BOP model. The detailed model is used to simulate the normal operational performance of a whole BOP system. The simple model is used in combination with the NSSS model for a whole plant simulation. The trends of the steady state values of the detailed model are correct and the dynamic responses are reasonable. The simple BOP model approach starts the modelling work from the overall point of view. The responses of the normalized turbine power and the feedwater inlet temperature to the steam generator of the simple model are compared with those of the detailed model. Both the steady state values and the dynamic responses are close to those of the detailed model. The simple BOP model is found adequate to represent the main performance of the BOP system. The simple balance of plant model was coupled with a NSSS model for a whole plant simulation. The NSSS model consists of the reactor core model, the steam generator model, and the coolant temperature control system. A closed loop whole plant simulation for an electric load perturbation was performed. The results are plausible. The coupling effect between the NSSS system and the BOP system was analyzed. The feedback of the BOP system has little effect on the steam generator performance, while the performance of the BOP system is strongly affected by the steam flow rate from the NSSS.
HVDC System Characteristics and Simulation Models
Energy Technology Data Exchange (ETDEWEB)
Moon, S.I.; Han, B.M.; Jang, G.S. [Electric Engineering and Science Research Institute, Seoul (Korea)
2001-07-01
This report deals with the AC-DC power system simulation method by PSS/E and EUROSTAG for the development of a strategy for the reliable operation of the Cheju-Haenam interconnected system. The simulation using both programs is performed to analyze HVDC simulation models. In addition, the control characteristics of the Cheju-Haenam HVDC system as well as Cheju AC system characteristics are described in this work. (author). 104 figs., 8 tabs.
Physically realistic modeling of maritime training simulation
Cieutat, Jean-Marc
2003-01-01
Maritime training simulation is an important part of maritime teaching, requiring many scientific and technical skills. In this framework, where the real-time constraint has to be maintained, not all physical phenomena can be studied; only the most visible physical phenomena relating to the natural elements and to ship behaviour are reproduced. Our swell model, based on a surface wave simulation approach, makes it possible to simulate the shape and the propagation of a regular train of waves f...
Wellness Model of Supervision: A Comparative Analysis
Lenz, A. Stephen; Sangganjanavanich, Varunee Faii; Balkin, Richard S.; Oliver, Marvarene; Smith, Robert L.
2012-01-01
This quasi-experimental study compared the effectiveness of the Wellness Model of Supervision (WELMS; Lenz & Smith, 2010) with alternative supervision models for developing wellness constructs, total personal wellness, and helping skills among counselors-in-training. Participants were 32 master's-level counseling students completing their…
Software-Engineering Process Simulation (SEPS) model
Lin, C. Y.; Abdel-Hamid, T.; Sherif, J. S.
1992-01-01
The Software Engineering Process Simulation (SEPS) model, developed at JPL, is described. SEPS is a dynamic simulation model of the software project development process. It uses the feedback principles of system dynamics to simulate the dynamic interactions among various software life cycle development activities and management decision making processes. The model is designed to be a planning tool to examine tradeoffs of cost, schedule, and functionality, and to test the implications of different managerial policies on a project's outcome. Furthermore, SEPS will enable software managers to gain a better understanding of the dynamics of software project development and perform postmortem assessments.
Systematic modelling and simulation of refrigeration systems
DEFF Research Database (Denmark)
Rasmussen, Bjarne D.; Jakobsen, Arne
1998-01-01
The task of developing a simulation model of a refrigeration system can be very difficult and time consuming. In order for this process to be effective, a systematic method for developing the system model is required. This method should aim at guiding the developer to clarify the purpose of the simulation, to select appropriate component models and to set up the equations in a well-arranged way. In this paper the outline of such a method is proposed and examples showing the use of this method for simulation of refrigeration systems are given.
Wind Turbine Rotor Simulation via CFD Based Actuator Disc Technique Compared to Detailed Measurement
Directory of Open Access Journals (Sweden)
Esmail Mahmoodi
2015-10-01
In this paper, a generalized Actuator Disc (AD) is used to model the wind turbine rotor of the MEXICO experiment, a collaborative European wind turbine project. The AD model, a combination of the CFD technique and User Defined Function codes (UDF), the so-called UDF/AD model, is used to simulate loads and performance of the rotor in three different wind speed tests. Distributed force on the blade, thrust and power production of the rotor, as important design parameters of wind turbine rotors, are the focus of the modelling. A developed Blade Element Momentum (BEM) theory code as well as a full rotor simulation, both from the literature, are included in the results for comparison and discussion. The output of all techniques is compared to detailed measurements for validation, which leads to the final conclusions.
Deriving simulators for hybrid Chi models
Beek, van D.A.; Man, K.L.; Reniers, M.A.; Rooda, J.E.; Schiffelers, R.R.H.
2006-01-01
The hybrid Chi language is a formalism for modeling, simulation and verification of hybrid systems. The formal semantics of hybrid Chi allows the definition of provably correct implementations for simulation, verification and realtime control. This paper discusses the principles of deriving an
Modeling and simulation for RF system design
Frevert, Ronny; Jancke, Roland; Knöchel, Uwe; Schwarz, Peter; Kakerow, Ralf; Darianian, Mohsen
2005-01-01
Focusing on RF specific modeling and simulation methods, and system and circuit level descriptions, this work contains application-oriented training material. Accompanied by a CD- ROM, it combines the presentation of a mixed-signal design flow, an introduction into VHDL-AMS and Verilog-A, and the application of commercially available simulators.
Induction generator models in dynamic simulation tools
DEFF Research Database (Denmark)
Knudsen, Hans; Akhmatov, Vladislav
1999-01-01
For AC networks with a large number of induction generators (windmills), the paper demonstrates a significant discrepancy in the simulated voltage recovery after faults in weak networks when comparing dynamic and transient stability descriptions, and the reasons for the discrepancies are explained...
A View on Future Building System Modeling and Simulation
Energy Technology Data Exchange (ETDEWEB)
Wetter, Michael
2011-04-01
This chapter presents what a future environment for building system modeling and simulation may look like. As buildings continue to require increased performance and better comfort, their energy and control systems are becoming more integrated and complex. We therefore focus in this chapter on the modeling, simulation and analysis of building energy and control systems. Such systems can be classified as heterogeneous systems because they involve multiple domains, such as thermodynamics, fluid dynamics, heat and mass transfer, electrical systems, control systems and communication systems. Also, they typically involve multiple temporal and spatial scales, and their evolution can be described by coupled differential equations, discrete equations and events. Modeling and simulating such systems requires a higher level of abstraction and modularisation to manage the increased complexity compared to what is used in today's building simulation programs. Therefore, the trend towards more integrated building systems is likely to be a driving force for changing the status quo of today's building simulation programs. This chapter discusses evolving modeling requirements and outlines a path toward a future environment for modeling and simulation of heterogeneous building systems. A range of topics that would require many additional pages of discussion has been omitted. Examples include computational fluid dynamics for air and particle flow in and around buildings, people movement, daylight simulation, uncertainty propagation and optimisation methods for building design and controls. For different discussions and perspectives on the future of building modeling and simulation, we refer to Sahlin (2000), Augenbroe (2001) and Malkawi and Augenbroe (2004).
Magnetosphere Modeling: From Cartoons to Simulations
Gombosi, T. I.
2017-12-01
Over the last half a century physics-based global computer simulations became a bridge between experiment and basic theory and now represent the "third pillar" of geospace research. Today, many of our scientific publications utilize large-scale simulations to interpret observations, test new ideas, plan campaigns, or design new instruments. Realistic simulations of the complex Sun-Earth system have been made possible by the dramatically increased power of both computing hardware and numerical algorithms. Early magnetosphere models were based on simple E&M concepts (like the Chapman-Ferraro cavity) and hydrodynamic analogies (bow shock). At the beginning of the space age current system models were developed culminating in the sophisticated Tsyganenko-type description of the magnetic configuration. The first 3D MHD simulations of the magnetosphere were published in the early 1980s. A decade later there were several competing global models that were able to reproduce many fundamental properties of the magnetosphere. The leading models included the impact of the ionosphere by using a height-integrated electric potential description. Dynamic coupling of global and regional models started in the early 2000s by integrating a ring current and a global magnetosphere model. It has been recognized for quite some time that plasma kinetic effects play an important role. Presently, global hybrid simulations of the dynamic magnetosphere are expected to be possible on exascale supercomputers, while fully kinetic simulations with realistic mass ratios are still decades away. In the 2010s several groups started to experiment with PIC simulations embedded in large-scale 3D MHD models. Presently this integrated MHD-PIC approach is at the forefront of magnetosphere simulations and this technique is expected to lead to some important advances in our understanding of magnetospheric physics. This talk will review the evolution of magnetosphere modeling from cartoons to current systems.
Comparing flood loss models of different complexity
Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Riggelsen, Carsten; Scherbaum, Frank; Merz, Bruno
2013-04-01
Any deliberation on flood risk requires the consideration of potential flood losses. In particular, reliable flood loss models are needed to evaluate cost-effectiveness of mitigation measures, to assess vulnerability, for comparative risk analysis and financial appraisal during and after floods. In recent years, considerable improvements have been made both concerning the data basis and the methodological approaches used for the development of flood loss models. Despite of that, flood loss models remain an important source of uncertainty. Likewise the temporal and spatial transferability of flood loss models is still limited. This contribution investigates the predictive capability of different flood loss models in a split sample cross regional validation approach. For this purpose, flood loss models of different complexity, i.e. based on different numbers of explaining variables, are learned from a set of damage records that was obtained from a survey after the Elbe flood in 2002. The validation of model predictions is carried out for different flood events in the Elbe and Danube river basins in 2002, 2005 and 2006 for which damage records are available from surveys after the flood events. The models investigated are a stage-damage model, the rule based model FLEMOps+r as well as novel model approaches which are derived using data mining techniques of regression trees and Bayesian networks. The Bayesian network approach to flood loss modelling provides attractive additional information concerning the probability distribution of both model predictions and explaining variables.
NUMERICAL SIMULATION AND MODELING OF UNSTEADY FLOW ...
African Journals Online (AJOL)
2014-06-30
Jun 30, 2014 ... objective of this study is to control the simulation of unsteady flows around structures. ... Aerospace, our results were in good agreement with experimental .... Two-Equation Eddy-Viscosity Turbulence Models for Engineering.
SEIR model simulation for Hepatitis B
Side, Syafruddin; Irwan, Mulbar, Usman; Sanusi, Wahidah
2017-09-01
Mathematical modelling and simulation for Hepatitis B are discussed in this paper. The population is divided into four compartments, namely: Susceptible, Exposed, Infected and Recovered (SEIR). Several factors affect the population in this model: vaccination, immigration and emigration occurring in the population. The SEIR model yields a non-linear 4-D system of ordinary differential equations (ODEs), which is then reduced to 3-D. The SEIR model simulation is undertaken to predict the number of Hepatitis B cases. The results of the simulation indicate that the number of Hepatitis B cases will increase and then decrease over several months. The simulation using case numbers from Makassar also found a basic reproduction number less than one, which means that Makassar city is not an endemic area for Hepatitis B.
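The SEIR compartmental structure described in this record can be sketched as a minimal forward-Euler integration; the rates beta, sigma, gamma and the initial state below are illustrative assumptions, not the paper's fitted Makassar parameters, and vaccination/migration terms are omitted for brevity.

```python
def seir_step(s, e, i, r, beta, sigma, gamma, dt):
    """One forward-Euler step of a basic SEIR model (population fractions)."""
    new_exposed = beta * s * i    # S -> E (transmission; population normalized to 1)
    new_infected = sigma * e      # E -> I (end of latency)
    new_recovered = gamma * i     # I -> R (recovery)
    return (s - dt * new_exposed,
            e + dt * (new_exposed - new_infected),
            i + dt * (new_infected - new_recovered),
            r + dt * new_recovered)

def simulate(days=360, dt=0.1, beta=0.3, sigma=1/7, gamma=1/14):
    state = (0.99, 0.0, 0.01, 0.0)  # s, e, i, r as fractions of the population
    for _ in range(int(days / dt)):
        state = seir_step(*state, beta, sigma, gamma, dt)
    return state
```

The basic reproduction number for this parameterization is R0 = beta / gamma; the endemic-vs-non-endemic distinction in the abstract corresponds to R0 above or below one.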
Simulation-Based Internal Models for Safer Robots
Directory of Open Access Journals (Sweden)
Christian Blum
2018-01-01
In this paper, we explore the potential of mobile robots with simulation-based internal models for safety in highly dynamic environments. We propose a robot with a simulation of itself, other dynamic actors and its environment, inside itself. Operating in real time, this simulation-based internal model is able to look ahead and predict the consequences of both the robot’s own actions and those of the other dynamic actors in its vicinity. Hence, the robot continuously modifies its own actions in order to actively maintain its own safety while also achieving its goal. Inspired by the problem of how mobile robots could move quickly and safely through crowds of moving humans, we present experimental results which compare the performance of our internal simulation-based controller with a purely reactive approach as a proof-of-concept study for the practical use of simulation-based internal models.
Maintenance Personnel Performance Simulation (MAPPS) model
International Nuclear Information System (INIS)
Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Knee, H.E.; Haas, P.M.
1984-01-01
A stochastic computer model for simulating the actions and behavior of nuclear power plant maintenance personnel is described. The model considers personnel, environmental, and motivational variables to yield predictions of maintenance performance quality and time to perform. The model has been fully developed and sensitivity tested. Additional evaluation of the model is now taking place
Computer simulations of the random barrier model
DEFF Research Database (Denmark)
Schrøder, Thomas; Dyre, Jeppe
2002-01-01
A brief review of experimental facts regarding ac electronic and ionic conduction in disordered solids is given followed by a discussion of what is perhaps the simplest realistic model, the random barrier model (symmetric hopping model). Results from large scale computer simulations are presented...
Turbine modelling for real time simulators
International Nuclear Information System (INIS)
Oliveira Barroso, A.C. de; Araujo Filho, F. de
1992-01-01
A model for steam turbines and their peripherals has been developed. All the important variables have been included, and emphasis has been placed on computational efficiency to obtain a model able to simulate all the modelled equipment. (A.C.A.S.)
Theory, modeling, and simulation annual report, 1992
Energy Technology Data Exchange (ETDEWEB)
1993-05-01
This report briefly discusses research on the following topics: development of electronic structure methods; modeling molecular processes in clusters; modeling molecular processes in solution; modeling molecular processes in separations chemistry; modeling interfacial molecular processes; modeling molecular processes in the atmosphere; methods for periodic calculations on solids; chemistry and physics of minerals; graphical user interfaces for computational chemistry codes; visualization and analysis of molecular simulations; integrated computational chemistry environment; and benchmark computations.
Simulating changes to emergency care resources to compare system effectiveness.
Branas, Charles C; Wolff, Catherine S; Williams, Justin; Margolis, Gregg; Carr, Brendan G
2013-08-01
To apply systems optimization methods to simulate and compare the most effective locations for emergency care resources as measured by access to care. This study was an optimization analysis of the locations of trauma centers (TCs), helicopter depots (HDs), and severely injured patients in need of time-critical care in select US states. Access was defined as the percentage of injured patients who could reach a level I/II TC within 45 or 60 minutes. Optimal locations were determined by a search algorithm that considered all candidate sites within a set of existing hospitals and airports in finding the best solutions that maximized access. Across a dozen states, existing access to TCs within 60 minutes ranged from 31.1% to 95.6%, with a mean of 71.5%. Access increased from 0.8% to 35.0% after optimal addition of one or two TCs. Access increased from 1.0% to 15.3% after optimal addition of one or two HDs. Relocation of TCs and HDs (optimal removal followed by optimal addition) produced similar results. Optimal changes to TCs produced greater increases in access to care than optimal changes to HDs although these results varied across states. Systems optimization methods can be used to compare the impacts of different resource configurations and their possible effects on access to care. These methods to determine optimal resource allocation can be applied to many domains, including comparative effectiveness and patient-centered outcomes research.
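The "optimal addition" step in this record can be illustrated with a maximum-coverage sketch. The paper's search algorithm considers all candidate sites; the greedy selection below is a common heuristic stand-in, and the `coverage` mapping (candidate site to the set of patient IDs reachable within the time threshold) is an assumed data structure for illustration.

```python
def greedy_additions(candidates, coverage, already_covered, k):
    """Pick k candidate sites that greedily maximize newly covered patients.

    coverage[c] is the set of patient IDs reachable from site c within the
    time threshold (e.g. 45 or 60 minutes); already_covered is the set of
    patients reachable from existing trauma centers.
    """
    chosen, covered = [], set(already_covered)
    pool = list(candidates)
    for _ in range(k):
        # Score each remaining site by how many uncovered patients it adds
        best = max(pool, key=lambda c: len(coverage[c] - covered))
        chosen.append(best)
        covered |= coverage[best]
        pool.remove(best)
    return chosen, covered
```

Access, in the paper's terms, would then be `len(covered)` divided by the total number of injured patients, and relocation could be sketched as greedy removal followed by greedy addition.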
Modeling and simulation with operator scaling
Cohen, Serge; Meerschaert, Mark M.; Rosiński, Jan
2010-01-01
Self-similar processes are useful in modeling diverse phenomena that exhibit scaling properties. Operator scaling allows a different scale factor in each coordinate. This paper develops practical methods for modeling and simulating stochastic processes with operator scaling. A simulation method for operator stable Levy processes is developed, based on a series representation, along with a Gaussian approximation of the small jumps. Several examples are given to illustrate practical application...
Modeling of magnetic particle suspensions for simulations
Satoh, Akira
2017-01-01
The main objective of the book is to highlight the modeling of magnetic particles with different shapes and magnetic properties, to provide graduate students and young researchers information on the theoretical aspects and actual techniques for the treatment of magnetic particles in particle-based simulations. In simulation, we focus on the Monte Carlo, molecular dynamics, Brownian dynamics, lattice Boltzmann and stochastic rotation dynamics (multi-particle collision dynamics) methods. The latter two simulation methods can simulate both the particle motion and the ambient flow field simultaneously. In general, specialized knowledge can only be obtained in an effective manner under the supervision of an expert. The present book is written to play such a role for readers who wish to develop the skill of modeling magnetic particles and develop a computer simulation program using their own ability. This book is therefore a self-learning book for graduate students and young researchers. Armed with this knowledge,...
Modeling and simulation of discrete event systems
Choi, Byoung Kyu
2013-01-01
Computer modeling and simulation (M&S) allows engineers to study and analyze complex systems. Discrete-event system (DES)-M&S is used in modern management, industrial engineering, computer science, and the military. As computer speeds and memory capacity increase, so DES-M&S tools become more powerful and more widely used in solving real-life problems. Based on over 20 years of evolution within a classroom environment, as well as on decades-long experience in developing simulation-based solutions for high-tech industries, Modeling and Simulation of Discrete-Event Systems is the only book on
Evaluation of a simulation model for predicting soil-water ...
African Journals Online (AJOL)
The soil's particle size distribution (specifically, percent clay and sand) and organic matter content were input into the model to simulate soil moisture status at saturation, field capacity and wilting point, soil bulk density and saturated hydraulic conductivity. The model outputs were statistically compared with observed ...
Hummels, Cameron
Computational hydrodynamical simulations are a very useful tool for understanding how galaxies form and evolve over cosmological timescales not easily revealed through observations. However, they are only useful if they reproduce the sorts of galaxies that we see in the real universe. One of the ways in which simulations of this sort tend to fail is in the prescription of stellar feedback, the process by which nascent stars return material and energy to their immediate environments. Careful treatment of this interaction in subgrid models, so-called because they operate on scales below the resolution of the simulation, is crucial for the development of realistic galaxy models. Equally important is developing effective methods for comparing simulation data against observations to ensure galaxy models which mimic reality and inform us about natural phenomena. This thesis examines the formation and evolution of galaxies and the observable characteristics of the resulting systems. We employ extensive use of cosmological hydrodynamical simulations in order to simulate and interpret the evolution of massive spiral galaxies like our own Milky Way. First, we create a method for producing synthetic photometric images of grid-based hydrodynamical models for use in a direct comparison against observations in a variety of filter bands. We apply this method to a simulation of a cluster of galaxies to investigate the nature of the red-sequence/blue-cloud dichotomy in the galaxy color-magnitude diagram. Second, we implement several subgrid models governing the complex behavior of gas and stars on small scales in our galaxy models. Several numerical simulations are conducted with similar initial conditions, where we systematically vary the subgrid models, afterward assessing their efficacy through comparisons of their internal kinematics with observed systems. Third, we generate an additional method to compare observations with simulations, focusing on the tenuous circumgalactic
Minimum-complexity helicopter simulation math model
Heffley, Robert K.; Mnich, Marc A.
1988-01-01
An example of a minimal complexity simulation helicopter math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling qualities features which are apparent to the simulator pilot. The technical approach begins with specification of features which are to be modeled, followed by a build up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinement are discussed. Math model computer programs are defined and listed.
International Nuclear Information System (INIS)
Andrianov, A.A.; Korovin, Yu.A.; Murogov, V.M.; Fedorova, E.V.; Fesenko, G.A.
2006-01-01
A comparative analysis of optimization and simulation methods, using the MESSAGE and DESAE programs as examples, is carried out for modeling nuclear power prospects and advanced fuel cycles. Test calculations for open and two-component nuclear power systems and a closed fuel cycle are performed. An auxiliary simulation-dynamic model is developed to clarify the differences between the MESSAGE and DESAE modeling approaches. A description of the model is given [ru]
Comparative study of void fraction models
International Nuclear Information System (INIS)
Borges, R.C.; Freitas, R.L.
1985-01-01
Some models for calculating the void fraction of water in sub-cooled boiling and saturated vertical upward flow with forced convection have been selected and compared with experimental results in the pressure range of 1 to 150 bar. In order to determine the axial void fraction distribution, it is necessary to determine the net generation of vapour and the fluid temperature distribution in the slightly sub-cooled boiling region. It was verified that the net generation of vapour is well represented by the Saha-Zuber model. The selected models for the void fraction calculation give adequate results but tend to overestimate the experimental results, the homogeneous models in particular. The drift flux model is recommended, followed by the Armand and Smith models. (F.E.) [pt]
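The drift flux approach recommended in this record can be sketched in its Zuber-Findlay form; the distribution parameter C0 and drift velocity v_gj below are illustrative placeholder values, not the ones evaluated in the study.

```python
def drift_flux_void_fraction(x, G, rho_l, rho_g, C0=1.13, v_gj=0.24):
    """Void fraction alpha = j_g / (C0 * j + v_gj)  (Zuber-Findlay form).

    x            : flow quality (vapour mass fraction)
    G            : mass flux [kg/(m^2 s)]
    rho_l, rho_g : liquid / vapour densities [kg/m^3]
    C0           : distribution parameter (assumed value)
    v_gj         : weighted mean drift velocity [m/s] (assumed value)
    """
    j_g = x * G / rho_g          # vapour superficial velocity
    j_l = (1.0 - x) * G / rho_l  # liquid superficial velocity
    j = j_g + j_l                # total superficial velocity
    return j_g / (C0 * j + v_gj)
```

With C0 > 1 this predicts a lower void fraction than the homogeneous model (alpha = j_g / j), consistent with the tendency of the homogeneous models to overestimate noted above.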
Comparing coefficients of nested nonlinear probability models
DEFF Research Database (Denmark)
Kohler, Ulrich; Karlson, Kristian Bernt; Holm, Anders
2011-01-01
In a series of recent articles, Karlson, Holm and Breen have developed a method for comparing the estimated coefficients of two nested nonlinear probability models. This article describes this method and the user-written program khb that implements it. The KHB method is a general decomposition method that is unaffected by the rescaling or attenuation bias that arises in cross-model comparisons in nonlinear models. It recovers the degree to which a control variable, Z, mediates or explains the relationship between X and a latent outcome variable, Y*, underlying the nonlinear probability model.
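The core trick can be illustrated with a small sketch: residualizing Z on X merely reparametrizes the full model, so the "reduced" logit lives on the same latent scale as the full one and the two X coefficients become directly comparable. This is an illustrative Python sketch of the idea, not the khb program; the Newton-Raphson fitter and the variable names are assumptions.

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Logistic regression by Newton-Raphson; X includes an intercept."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        H = X.T @ (X * (p * (1.0 - p))[:, None])   # Hessian
        b += np.linalg.solve(H, X.T @ (y - p))     # Newton step
    return b

def khb_decompose(x, z, y):
    """Compare x's coefficient across nested logits on the same scale."""
    ones = np.ones_like(x)
    full = fit_logit(np.column_stack([ones, x, z]), y)
    # Residualize z on x: same likelihood, but x's coefficient now
    # carries the total effect on the full model's latent scale.
    z_resid = z - np.polyval(np.polyfit(x, z, 1), x)
    reduced = fit_logit(np.column_stack([ones, x, z_resid]), y)
    total, direct = reduced[1], full[1]
    return total, direct, total - direct           # indirect (mediated) part
```

On synthetic data where Z partly transmits X's effect, `indirect` recovers the mediated share without the attenuation bias of a naive cross-model comparison.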
Comparative Study of Bancruptcy Prediction Models
Directory of Open Access Journals (Sweden)
Isye Arieshanti
2013-09-01
Full Text Available Early indication of bankruptcy is important for a company. If a company is aware of its potential bankruptcy, it can take preventive action. In order to detect potential bankruptcy, a company can utilize a bankruptcy prediction model. Such a prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. Comparing the performance of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression), the study shows that the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%.
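As an illustration of the best-performing method, here is a minimal Keller-style fuzzy k-NN sketch in Python. The financial-ratio features, labels, and fuzziness parameter `m` are hypothetical; this is not the study's implementation, only the general technique it names.

```python
import numpy as np

def fuzzy_knn(X, y, query, k=3, m=2.0, eps=1e-9):
    """Fuzzy k-NN (Keller-style): distance-weighted membership in the
    'bankrupt' class (y = 1); predict bankrupt when membership > 0.5."""
    d = np.linalg.norm(X - query, axis=1)
    idx = np.argsort(d)[:k]                        # k nearest firms
    w = 1.0 / (d[idx] ** (2.0 / (m - 1.0)) + eps)  # closer => heavier
    return float(np.sum(w * y[idx]) / np.sum(w))
```

With crisp training labels (0 = healthy, 1 = bankrupt) the returned membership collapses to an ordinary distance-weighted vote; graded training memberships would make it genuinely fuzzy.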
Teles, V.; de Marsily, G.; Delay, F.; Perrier, E.
Alluvial floodplains are extremely heterogeneous aquifers, whose three-dimensional structures are quite difficult to model. In general, when representing such structures, the medium heterogeneity is modeled with classical geostatistical or Boolean methods. Another approach, still in its infancy, is called the genetic method because it simulates the generation of the medium by reproducing sedimentary processes. We developed a new genetic model to obtain a realistic three-dimensional image of alluvial media. It does not simulate the hydrodynamics of sedimentation but uses semi-empirical and statistical rules to roughly reproduce fluvial deposition and erosion. The main processes, either at the stream scale or at the plain scale, are modeled by simple rules applied to "sediment" entities or to conceptual "erosion" entities. The model was applied to a several kilometer long portion of the Aube River floodplain (France) and reproduced the deposition and erosion cycles that occurred during the inferred climate periods (15 000 BP to present). A three-dimensional image of the aquifer was generated by extrapolating the two-dimensional information collected on a cross-section of the floodplain. Unlike geostatistical methods, this extrapolation does not use a statistical spatial analysis of the data, but a genetic analysis, which leads to a more realistic structure. Groundwater flow and transport simulations in the alluvium were carried out with a three-dimensional flow code or simulator (MODFLOW), using different representations of the alluvial reservoir of the Aube River floodplain: first an equivalent homogeneous medium, and then different heterogeneous media built either with the traditional geostatistical approach simulating the permeability distribution, or with the new genetic model presented here simulating sediment facies. In the latter case, each deposited entity of a given lithology was assigned a constant hydraulic conductivity value. Results of these
Sunspot Modeling: From Simplified Models to Radiative MHD Simulations
Directory of Open Access Journals (Sweden)
Rolf Schlichenmaier
2011-09-01
Full Text Available We review our current understanding of sunspots from the scales of their fine structure to their large-scale (global) structure, including the processes of their formation and decay. Recently, sunspot models have undergone a dramatic change. In the past, several aspects of sunspot structure were addressed by static MHD models with parametrized energy transport. Models of sunspot fine structure have relied heavily on strong assumptions about flow and field geometry (e.g., flux tubes, "gaps", convective rolls), which were motivated in part by the observed filamentary structure of penumbrae or by the necessity of explaining the substantial energy transport required to maintain the penumbral brightness. However, none of these models could self-consistently explain all aspects of penumbral structure (energy transport, filamentation, Evershed flow). In recent years, 3D radiative MHD simulations have advanced dramatically to the point at which models of complete sunspots with sufficient resolution to capture sunspot fine structure are feasible. Here, overturning convection is the central element responsible for energy transport, for the filamentation leading to fine structure, and for the driving of strong outflows. On the larger scale, these models are also in the process of addressing the subsurface structure of sunspots as well as sunspot formation. With this shift in modeling capabilities and the recent advances in high-resolution observations, future research will be guided by comparing observation and theory.
Bruserud, Kjersti; Haver, Sverre; Myrhaug, Dag
2018-04-01
Measured current speed data show that episodes of wind-generated inertial oscillations dominate the current conditions in parts of the northern North Sea. In order to acquire current data of sufficient duration for robust estimation of joint metocean design conditions, such as wind, waves, and currents, a simple model for episodes of wind-generated inertial oscillations is adapted for the northern North Sea. The model is validated with and compared against measured current data at one location in the northern North Sea and found to reproduce the measured maximum current speed in each episode with considerable accuracy. The comparison is further improved when a small general background current is added to the simulated maximum current speeds. Extreme values of measured and simulated current speed are estimated and found to compare well. To assess the robustness of the model and the sensitivity of current conditions from location to location, the validated model is applied at three other locations in the northern North Sea. In general, the simulated maximum current speeds are smaller than the measured, suggesting that wind-generated inertial oscillations are not as prominent at these locations and that other current conditions may be governing. Further analysis of the simulated current speed and joint distribution of wind, waves, and currents for design of offshore structures will be presented in a separate paper.
Comparative Study of Aircraft Boarding Strategies Using Cellular Discrete Event Simulation
Directory of Open Access Journals (Sweden)
Shafagh Jafer
2017-11-01
Full Text Available Time is crucial in the airline industry. Among all the factors contributing to aircraft turnaround time, passenger boarding delay is the most challenging one. Airlines have no control over the behavior of passengers; they therefore focus their effort on reducing passenger boarding time by implementing efficient boarding strategies. In this work, we use cellular Discrete-Event System Specification (Cell-DEVS) modeling and simulation to provide a comprehensive evaluation of aircraft boarding strategies. We have developed a simulation benchmark consisting of eight boarding strategies: Back-to-Front, Window Middle Aisle, Random, Zone Rotate, Reverse Pyramid, Optimal, Optimal Practical, and Efficient. Our simulation models are scalable and adaptive, providing a powerful analysis apparatus for investigating any existing, or yet to be discovered, boarding strategy. We explain the details of our models and present the results both visually and numerically to evaluate the eight implemented boarding strategies. We also compare our results with other studies that used different modeling techniques, reporting nearly identical performance results. The simulations revealed that Window Middle Aisle yields the least boarding delay, with a small time difference compared to the optimal strategy. The results of this work could greatly benefit the commercial airline industry by optimizing and reducing passenger boarding delays.
Computer Based Modelling and Simulation
Indian Academy of Sciences (India)
GENERAL ARTICLE. Computer Based ... universities, and later did system analysis, ... personal computers (PC) and low-cost software packages and tools. They can serve as a useful learning experience through student projects. Models are ... Let us consider a numerical example: to calculate the velocity of a trainer aircraft ...
Chisaki, Yugo; Nakamura, Nobuhiko; Yano, Yoshitaka
2017-01-01
The purpose of this study was to propose a time-series modeling and simulation (M&S) strategy for probabilistic cost-effectiveness analysis in cancer chemotherapy, using a Monte-Carlo method based on data available from the literature. The simulation included the cost of chemotherapy, of pharmaceutical care for adverse events (AEs), and other medical costs. As an application example, we describe the analysis comparing four regimens for advanced non-small cell lung cancer: cisplatin plus irinotecan, carboplatin plus paclitaxel, cisplatin plus gemcitabine (GP), and cisplatin plus vinorelbine. The factors included in the model were drug efficacy, expressed as overall survival or time to treatment failure; the frequency and severity of AEs; the utility values of AEs used to determine QOL; and the drug and other medical costs in Japan. The simulation was performed, and quality-adjusted life years (QALY) and incremental cost-effectiveness ratios (ICER) were calculated. An index, the percentage of superiority (%SUP), defined as the proportion of incremental-cost vs. QALY-gained plots lying within the area of positive QALY gain and below a given threshold value of the ICER, was calculated as a function of the ICER threshold. An M&S process was developed, and in the simulation example the GP regimen was the most cost-effective: for an ICER threshold of $70,000/year, the %SUP for GP was more than 50%. We developed an M&S process for probabilistic cost-effectiveness analysis; this method would be useful for decision-making when choosing a cancer chemotherapy regimen in pharmacoeconomic terms.
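The ICER and %SUP computations described above can be sketched from Monte-Carlo draws. The cost and QALY distributions below are invented placeholders, not the paper's data; the sketch only shows how the two indices are formed once per-patient draws are available.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# Hypothetical per-patient QALY and cost distributions for two regimens
# (illustrative numbers only, not taken from the study).
q_a = rng.normal(1.10, 0.15, n); c_a = rng.normal(48_000, 6_000, n)
q_b = rng.normal(0.95, 0.15, n); c_b = rng.normal(40_000, 5_000, n)
dq, dc = q_a - q_b, c_a - c_b            # incremental QALYs and cost

icer = dc.mean() / dq.mean()             # incremental cost-effectiveness ratio

def pct_sup(dq, dc, threshold):
    """%SUP: share of Monte-Carlo draws with a positive QALY gain whose
    incremental cost lies below threshold * QALY gain."""
    ok = (dq > 0) & (dc < threshold * dq)
    return 100.0 * ok.mean()
```

By construction %SUP is non-decreasing in the ICER threshold, which is why the study reports it as a function of that threshold.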
Thermal unit availability modeling in a regional simulation model
International Nuclear Information System (INIS)
Yamayee, Z.A.; Port, J.; Robinett, W.
1983-01-01
The System Analysis Model (SAM), developed under the umbrella of PNUCC's System Analysis Committee, is capable of simulating the operation of a given load/resource scenario. This model employs Monte-Carlo simulation to incorporate uncertainties. Among the uncertainties modeled is thermal unit availability, both for energy simulations (seasonal) and capacity simulations (hourly). This paper presents the availability modeling in the capacity and energy models. The use of regional and national data in deriving the two availability models, the interaction between the two, and the modifications made to the capacity model in order to reflect regional practices are presented. A sample problem is presented to show the modification process. Results for modeling a nuclear unit using NERC-GADS data are presented.
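A minimal two-state (up/down) Monte-Carlo availability model of the kind described above can be sketched in Python. The MTBF/MTTR figures and the exponential duration assumption are illustrative, not SAM inputs or NERC-GADS statistics.

```python
import numpy as np

def simulate_unit(hours, mtbf, mttr, rng):
    """Two-state (up/down) Monte-Carlo history of one thermal unit at
    hourly resolution, with exponentially distributed up- and down-times."""
    up = np.empty(hours, dtype=bool)
    state, t = True, 0
    while t < hours:
        mean = mtbf if state else mttr
        dur = int(rng.exponential(mean)) + 1       # at least one hour
        up[t:t + dur] = state                      # numpy clips past the end
        t += dur
        state = not state
    return up

rng = np.random.default_rng(1)
up = simulate_unit(8760 * 20, mtbf=1500.0, mttr=100.0, rng=rng)
availability = up.mean()   # close to MTBF / (MTBF + MTTR) = 0.9375
```

The hourly boolean history is what a capacity (hourly) simulation would consume, while seasonal energy models would only need its aggregates.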
Boysen, Guy A; VanBergen, Alexandra
2014-02-01
Dissociative Identity Disorder (DID) has long been surrounded by controversy due to disagreement about its etiology and the validity of its associated phenomena. Researchers have conducted studies comparing people diagnosed with DID and people simulating DID in order to better understand the disorder. The current research presents a systematic review of this DID simulation research. The literature consists of 20 studies and contains several replicated findings. Replicated differences between the groups include symptom presentation, identity presentation, and cognitive processing deficits. Replicated similarities between the groups include interidentity transfer of information as shown by measures of recall, recognition, and priming. Despite some consistent findings, this research literature is hindered by methodological flaws that reduce experimental validity. Copyright © 2013 Elsevier Ltd. All rights reserved.
A comparative study of accelerated tests to simulate atmospheric corrosion
International Nuclear Information System (INIS)
Assis, Sergio Luiz de
2000-01-01
In this study, specimens coated with five organic coating systems were exposed to accelerated tests for periods up to 2000 hours, and also to weathering for two years and six months. The accelerated tests consisted of the salt spray test, according to ASTM B-117; Prohesion (ASTM G 85-98, annex 5A); Prohesion combined with cyclic exposure to UV-A radiation and condensation; 'Prohchuva', a test based on ASTM G 85-98 using a salt spray whose composition simulated the acid rain of Sao Paulo, but one thousand times more concentrated; and 'Prohchuva' combined with cyclic exposure to UV-A radiation and condensation. The coated specimens were exposed with and without an incision exposing the substrate. The onset and progress of corrosion on the exposed metallic surface, as well as coating degradation, were followed by visual observation, and photographs were taken. The coating systems were classified according to the extent of corrosion protection given to the substrate, using a method based on ASTM standards D-610, D-714, D-1654 and D-3359. The rankings of the coatings obtained from the accelerated tests and from weathering were compared and contrasted with the classification of the same systems obtained from the literature for specimens exposed to an industrial atmosphere. Coating degradation was strongly dependent on the test, which could be attributed to differences in test conditions. The best correlation between accelerated testing and weathering was found for the Prohesion test alternated with cycles of exposure to UV-A radiation and condensation. (author)
Plasma disruption modeling and simulation
International Nuclear Information System (INIS)
Hassanein, A.
1994-01-01
Disruptions in tokamak reactors are considered a limiting factor to successful operation and reliable design. The behavior of plasma-facing components during a disruption is critical to the overall integrity of the reactor. Erosion of plasma-facing material (PFM) surfaces due to the thermal energy dump during a disruption can severely limit the lifetime of these components and thus diminish the economic feasibility of the reactor. A comprehensive understanding of the interplay of various physical processes during a disruption is essential for determining component lifetime and potentially improving the performance of such components. There are three principal stages in modeling the behavior of PFM during a disruption. Initially, the incident plasma particles deposit their energy directly on the PFM surface, heating it to a very high temperature at which ablation occurs. Models for plasma-material interactions have been developed and used to predict material thermal evolution during the disruption. Within a few microseconds after the start of the disruption, enough material is vaporized to intercept most of the incoming plasma particles. Models for plasma-vapor interactions are necessary to predict vapor cloud expansion and hydrodynamics. Continuous heating of the vapor cloud above the material surface by the incident plasma particles will excite, ionize, and cause vapor atoms to emit thermal radiation. Accurate models for radiation transport in the vapor are essential for calculating the net radiated flux to the material surface, which determines the final erosion thickness and consequently the component lifetime. A comprehensive model that takes into account the various stages of plasma-material interaction has been developed and used to predict erosion rates during reactor disruptions, as well as during induced disruptions in laboratory experiments.
Modelling and simulating fire tube boiler performance
DEFF Research Database (Denmark)
Sørensen, K.; Condra, T.; Houbak, Niels
2003-01-01
A model for a flue gas boiler covering the flue gas and the water-/steam side has been formulated. The model has been formulated as a number of sub-models that are merged into an overall model for the complete boiler. Sub-models have been defined for the furnace, the convection zone (split in two: a zone submerged in water and a zone covered by steam), the material in the boiler (the steel), and two models for, respectively, the water/steam zone (the boiling) and the steam. The dynamic model has been developed as a number of Differential-Algebraic-Equation (DAE) systems. Subsequently, Matlab/Simulink has been applied for carrying out the simulations. To verify the simulated results, experiments have been carried out on a full-scale boiler plant.
COMPARISON OF RF CAVITY TRANSPORT MODELS FOR BBU SIMULATIONS
Energy Technology Data Exchange (ETDEWEB)
Shin, Ilkyoung; Yunn, Byung; Satogata, Todd; Ahmed, Shahid
2011-03-01
The transverse focusing effect in RF cavities plays a considerable role in beam dynamics for low-energy beamline sections and can contribute to beam breakup (BBU) instability. The purpose of this analysis is to examine the RF cavity models in simulation codes which will be used for BBU experiments at Jefferson Lab, and to improve BBU simulation results. We review two RF cavity models in the simulation codes elegant and TDBBU (a BBU simulation code developed at Jefferson Lab). elegant can include the Rosenzweig-Serafini (R-S) model for the RF focusing effect, whereas TDBBU uses a model from the code TRANSPORT which considers the adiabatic damping effect but not the RF focusing effect. Quantitative comparisons are discussed for the CEBAF beamline. We also compare the R-S model with results from numerical simulations for a CEBAF-type 5-cell superconducting cavity, to validate the use of the R-S model as an improved low-energy RF cavity transport model in TDBBU. We have implemented the R-S model in TDBBU, which should bring BBU simulation results into better agreement with analytic calculations and experimental results.
A virtual laboratory notebook for simulation models.
Winfield, A J
1998-01-01
In this paper we describe how we have adopted the laboratory notebook as a metaphor for interacting with computer simulation models. This 'virtual' notebook stores the simulation output and meta-data (which are used to record the scientist's interactions with the simulation). The meta-data stored consist of annotations (equivalent to marginal notes in a laboratory notebook), a history tree and a log of user interactions. The history tree structure records when in 'simulation' time, and from what starting point in the tree, changes are made to the parameters by the user. Typically these changes define a new run of the simulation model (which is represented as a new branch of the history tree). The tree shows the structure of the changes made to the simulation, and the log is required to keep the order in which the changes occurred. Together they form the record which you would normally find in a laboratory notebook. The history tree is plotted in simulation parameter space. This shows the scientist's interactions with the simulation visually and allows direct manipulation of the parameter information presented, which in turn is used to control directly the state of the simulation. The interactions with the system are graphical and usually involve directly selecting or dragging data markers and other graphical control devices around in parameter space. If the graphical manipulators do not provide precise enough control, textual manipulation is still available, which allows numerical values to be entered by hand. The Virtual Laboratory Notebook, by providing interesting interactions with the visual view of the history tree, offers the user complex and novel ways of interacting with biological computer simulation models.
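The history tree described above can be sketched as a small data structure: each parameter change at a point in simulation time opens a new branch, and the path back to the root reconstructs the notebook record. Class and field names are illustrative assumptions, not the paper's implementation.

```python
class HistoryNode:
    """One node of the history tree: a full parameter set, the simulation
    time at which it was branched, and a free-text annotation."""
    def __init__(self, params, sim_time=0.0, note="", parent=None):
        self.params = dict(params)
        self.sim_time, self.note = sim_time, note
        self.parent, self.children = parent, []

    def branch(self, changes, sim_time, note=""):
        """A parameter change at a given simulation time defines a new
        run, recorded as a new branch of the tree."""
        child = HistoryNode({**self.params, **changes}, sim_time, note, self)
        self.children.append(child)
        return child

    def lineage(self):
        """Root-to-here path: the ordered record a notebook would hold."""
        node, path = self, []
        while node is not None:
            path.append(node)
            node = node.parent
        return path[::-1]
```

A separate append-only log of `branch` calls would complete the picture, since the tree alone does not preserve the order in which changes were made.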
Bridging experiments, models and simulations
DEFF Research Database (Denmark)
Carusi, Annamaria; Burrage, Kevin; Rodríguez, Blanca
2012-01-01
Computational models in physiology often integrate functional and structural information from a large range of spatiotemporal scales, from the ionic to the whole-organ level. Their sophistication raises both expectations and skepticism concerning how computational methods can improve our understanding of living organisms and also how they can reduce, replace, and refine animal experiments. A fundamental requirement to fulfill these expectations and achieve the full potential of computational physiology is a clear understanding of what models represent and how they can be validated. The present ... that contributes to defining the specific aspects of cardiac electrophysiology the MSE system targets, rather than being only an external test, and that this is driven by advances in experimental and computational methods and the combination of both.
Energy Technology Data Exchange (ETDEWEB)
Kim, Junhan; Marrone, Daniel P.; Chan, Chi-Kwan; Medeiros, Lia; Özel, Feryal; Psaltis, Dimitrios, E-mail: junhankim@email.arizona.edu [Department of Astronomy and Steward Observatory, University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721 (United States)
2016-12-01
The Event Horizon Telescope (EHT) is a millimeter-wavelength, very-long-baseline interferometry (VLBI) experiment that is capable of observing black holes with horizon-scale resolution. Early observations have revealed variable horizon-scale emission in the Galactic Center black hole, Sagittarius A* (Sgr A*). Comparing such observations to time-dependent general relativistic magnetohydrodynamic (GRMHD) simulations requires statistical tools that explicitly consider the variability in both the data and the models. We develop here a Bayesian method to compare time-resolved simulation images to variable VLBI data, in order to infer model parameters and perform model comparisons. We use mock EHT data based on GRMHD simulations to explore the robustness of this Bayesian method and contrast it to approaches that do not consider the effects of variability. We find that time-independent models lead to offset values of the inferred parameters with artificially reduced uncertainties. Moreover, neglecting the variability in the data and the models often leads to erroneous model selections. We finally apply our method to the early EHT data on Sgr A*.
Directory of Open Access Journals (Sweden)
Tong A.-T.
2012-11-01
Full Text Available A method is presented for the simulation of pore flow in granular materials. The numerical model uses a combination of the discrete element method for the solid phase and a novel finite volume formulation for the fluid phase. The solid is modeled as an assembly of spherical particles, where contact interactions are governed by elasto-plastic relations. Incompressible Stokes flow is considered, assuming that inertial forces are small in comparison with viscous forces. Pore geometry and pore connections are defined locally through regular triangulation of spheres, from which a tetrahedral mesh arises. The definition of pore-scale hydraulic conductivities is a key aspect of this model; in this sense, the model is similar to a pore-network model. Permeability measurements on bi-dispersed glass beads are reported and compared with model predictions, validating the definition of local conductivities.
Directory of Open Access Journals (Sweden)
Lin Wang
2018-01-01
Full Text Available Monte Carlo simulation of light propagation in turbid media has been studied for years, and a number of software packages have been developed to handle this problem. However, it is hard to compare these simulation packages, especially for tissues with complex heterogeneous structures. Here, we first designed a group of mesh datasets generated with the Iso2Mesh software, and used them to cross-validate the accuracy and evaluate the performance of four Monte Carlo-based simulation packages: Monte Carlo model of steady-state light transport in multi-layered tissues (MCML), tetrahedron-based inhomogeneous Monte Carlo optical simulator (TIMOS), Molecular Optical Simulation Environment (MOSE), and Mesh-based Monte Carlo (MMC). The performance of each package was evaluated on the designed mesh datasets, and the merits and demerits of each package were discussed. The comparative results showed that the TIMOS package provided the best performance, proving to be a reliable, efficient, and stable MC simulation package for users.
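The weight-dropping photon random walk underlying all four packages can be sketched compactly. This toy Python version assumes a semi-infinite homogeneous medium with a matched boundary and isotropic scattering, far simpler than the meshed heterogeneous tissues the compared packages handle; the optical coefficients and weight cutoff are illustrative.

```python
import numpy as np

def mc_reflectance(n_photons, mu_a, mu_s, rng):
    """Weight-dropping Monte Carlo in a semi-infinite turbid medium
    (matched boundary, isotropic scattering, g = 0).
    Returns (diffuse reflectance, absorbed fraction)."""
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    reflected = absorbed = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0            # launch straight into the medium
        while True:
            z += uz * (-np.log(rng.random()) / mu_t)  # sample free path
            if z < 0.0:                      # crossed the surface: escapes
                reflected += w
                break
            absorbed += w * (1.0 - albedo)   # deposit weight at this site
            w *= albedo
            if w < 1e-4:                     # crude cutoff; dump remainder
                absorbed += w                # (keeps energy conserved)
                break
            uz = 2.0 * rng.random() - 1.0    # isotropic new direction cosine
    return reflected / n_photons, absorbed / n_photons
```

Production codes replace the crude cutoff with Russian roulette and track full 3D positions against layered or tetrahedral geometry, but the absorb-scatter-step loop is the same.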
Cooper, Andrew P.; Cole, Shaun; Frenk, Carlos S.; Le Bret, Theo; Pontzen, Andrew
2017-08-01
Particle tagging is an efficient, but approximate, technique for using cosmological N-body simulations to model the phase-space evolution of the stellar populations predicted, for example, by a semi-analytic model of galaxy formation. We test the technique developed by Cooper et al. (which we call stings here) by comparing particle tags with stars in a smooth particle hydrodynamic (SPH) simulation. We focus on the spherically averaged density profile of stars accreted from satellite galaxies in a Milky Way (MW)-like system. The stellar profile in the SPH simulation can be recovered accurately by tagging dark matter (DM) particles in the same simulation according to a prescription based on the rank order of particle binding energy. Applying the same prescription to an N-body version of this simulation produces a density profile differing from that of the SPH simulation by ≲10 per cent on average between 1 and 200 kpc. This confirms that particle tagging can provide a faithful and robust approximation to a self-consistent hydrodynamical simulation in this regime (in contradiction to previous claims in the literature). We find only one systematic effect, likely due to the collisionless approximation, namely that massive satellites in the SPH simulation are disrupted somewhat earlier than their collisionless counterparts. In most cases, this makes remarkably little difference to the spherically averaged distribution of their stellar debris. We conclude that, for galaxy formation models that do not predict strong baryonic effects on the present-day DM distribution of MW-like galaxies or their satellites, differences in stellar halo predictions associated with the treatment of star formation and feedback are much more important than those associated with the dynamical limitations of collisionless particle tagging.
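The rank-order tagging prescription can be sketched in a few lines of Python; the most-bound fraction `f_mb` and the sign convention (larger binding energy means more bound) are illustrative assumptions, not the specific prescription tested above.

```python
import numpy as np

def tag_most_bound(binding_energy, f_mb):
    """Tag the fraction f_mb of a halo's DM particles with the highest
    binding energy (rank-order prescription); returns their indices."""
    n_tag = max(1, int(f_mb * binding_energy.size))
    order = np.argsort(binding_energy)   # ascending: least bound first
    return order[-n_tag:]                # most-bound tail
```

In a full tagging pipeline these indices would carry the stellar mass of a newly formed population through the collisionless run; here they only illustrate the rank-order selection step.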
Comparative Assessment of Nonlocal Continuum Solvent Models Exhibiting Overscreening
Directory of Open Access Journals (Sweden)
Ren Baihua
2017-01-01
Full Text Available Nonlocal continua have been proposed to offer a more realistic model for the electrostatic response of solutions such as the electrolyte solvents prominent in biology and electrochemistry. In this work, we review three nonlocal models based on the Landau-Ginzburg framework which have been proposed but not directly compared previously, due to different expressions of the nonlocal constitutive relationship. To understand the relationships between these models and the underlying physical insights from which they are derived, we situate them within a single, unified Landau-Ginzburg framework. One of the models offers the capacity to interpret how temperature changes affect dielectric response, and we note that the variations with temperature are qualitatively reasonable even though predictions at ambient temperatures are not quantitatively in agreement with experiment. Two of these models correctly reproduce overscreening (oscillations between positive and negative polarization charge densities), and we observe small differences between them when we simulate the potential between parallel plates held at constant potential. These computations require reformulating the two models as coupled systems of local partial differential equations (PDEs), and we use spectral methods to discretize both problems. We propose further assessments to discriminate between the models, particularly with regard to establishing boundary conditions and comparing to explicit-solvent molecular dynamics simulations.
MODELLING, SIMULATING AND OPTIMIZING BOILERS
DEFF Research Database (Denmark)
Sørensen, K.; Condra, T.; Houbak, Niels
2003-01-01
... and the total stress level (i.e. stresses introduced due to internal pressure plus stresses introduced due to temperature gradients) must always be kept below the allowable stress level. In this way, the increased water-/steam space that should allow for better dynamic performance in the end causes limited freedom with respect to dynamic operation of the plant. By means of an objective function including both the price of the plant and a quantification of the value of dynamic operation of the plant, an optimization is carried out. The dynamic model of the boiler plant is applied to define parts ...
Advanced training simulator models. Implementation and validation
International Nuclear Information System (INIS)
Borkowsky, Jeffrey; Judd, Jerry; Belblidia, Lotfi; O'farrell, David; Andersen, Peter
2008-01-01
Modern training simulators are required to replicate plant data for both thermal-hydraulic and neutronic response. Replication is required such that reactivity manipulation on the simulator properly trains the operator for reactivity manipulation at the plant. This paper discusses advanced models which perform this function in real-time using the coupled code system THOR/S3R. This code system models all fluid systems in detail using an advanced two-phase thermal-hydraulic model. The nuclear core is modeled using an advanced three-dimensional nodal method and cycle-specific nuclear data. These models are configured to run interactively from a graphical instructor station or hardware operation panels. The simulator models are theoretically rigorous and are expected to replicate the physics of the plant. However, to verify replication, the models must be independently assessed. Plant data is the preferred validation method, but plant data is often not available for many important training scenarios. In the absence of data, validation may be obtained by slower-than-real-time transient analysis. This analysis can be performed by coupling a safety analysis code and a core design code. Such a coupling exists between the codes RELAP5 and SIMULATE-3K (S3K). RELAP5/S3K is used to validate the real-time model for several postulated plant events. (author)
Regularization modeling for large-eddy simulation
Geurts, Bernardus J.; Holm, D.D.
2003-01-01
A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of
Analytical system dynamics modeling and simulation
Fabien, Brian C
2008-01-01
This book, which offers a modeling technique based on Lagrange's energy method, includes 125 worked examples. Using this technique enables one to model and simulate systems as diverse as a six-link, closed-loop mechanism or a transistor power amplifier.
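A minimal sketch of Lagrange's energy method in the spirit described above: the Lagrangian of a simple pendulum yields its equation of motion, which is then integrated numerically. All parameters are illustrative, not an example taken from the book.

```python
import math

# Pendulum: L(theta, dtheta) = 0.5*m*l^2*dtheta^2 - m*g*l*(1 - cos(theta)).
# The Euler-Lagrange equation gives dd_theta = -(g/l)*sin(theta).
g, l = 9.81, 1.0
theta, omega = 0.3, 0.0    # initial angle (rad) and angular velocity (rad/s)
dt, steps = 1e-4, 20000    # 2 s of simulated time

def energy(th, om):
    """Total mechanical energy per unit mass (for m = 1)."""
    return 0.5 * l**2 * om**2 + g * l * (1 - math.cos(th))

e0 = energy(theta, omega)
for _ in range(steps):
    # Semi-implicit (symplectic) Euler keeps the energy error bounded.
    omega += -(g / l) * math.sin(theta) * dt
    theta += omega * dt

print(abs(energy(theta, omega) - e0) / e0)  # small relative energy drift
```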
Hybrid simulation models of production networks
Kouikoglou, Vassilis S
2001-01-01
This book is concerned with a most important area of industrial production, that of analysis and optimization of production lines and networks using discrete-event models and simulation. The book introduces a novel approach that combines analytic models and discrete-event simulation. Unlike conventional piece-by-piece simulation, this method observes a reduced number of events between which the evolution of the system is tracked analytically. Using this hybrid approach, several models are developed for the analysis of production lines and networks. The hybrid approach combines speed and accuracy for exceptional analysis of most practical situations. A number of optimization problems, involving buffer design, workforce planning, and production control, are solved through the use of hybrid models.
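The discrete-event half of such a hybrid approach can be sketched in a few lines. The deterministic arrival and service times below are hypothetical, chosen so that the expected result (an uncongested station with zero waiting) is known in advance.

```python
import heapq

# Minimal discrete-event sketch of a single workstation: jobs arrive every
# 2.0 time units and take 1.0 unit to process, so no job ever waits.
ARRIVAL_GAP, SERVICE_TIME, N_JOBS = 2.0, 1.0, 10

events = [(i * ARRIVAL_GAP, "arrive", i) for i in range(N_JOBS)]
heapq.heapify(events)
server_free_at = 0.0
waits = {}

while events:
    t, kind, job = heapq.heappop(events)
    if kind == "arrive":
        start = max(t, server_free_at)        # wait only if the server is busy
        waits[job] = start - t
        server_free_at = start + SERVICE_TIME
        heapq.heappush(events, (server_free_at, "depart", job))
    # "depart" events need no action in this minimal sketch

print(sum(waits.values()))  # 0.0 for this uncongested configuration
```

The hybrid method of the book tracks the evolution between such events analytically instead of stepping through every small change, which is where its speed advantage comes from.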
Dynamic modeling and simulation of wind turbines
International Nuclear Information System (INIS)
Ghafari Seadat, M.H.; Kheradmand Keysami, M.; Lari, H.R.
2002-01-01
Using wind energy to generate electricity in wind turbines is a good way of exploiting renewable energy sources and can also help to protect the environment. The main objective of this paper is computer-aided dynamic modelling of a wind turbine by the energy method, together with its simulation. The equations of motion of the wind turbine system are derived, and the behaviour of the system is then made apparent by solving these equations. For the simulation, the turbine is taken to have a three-bladed rotor facing the wind and an induction generator connected to the network at constant speed. Every part of the wind turbine has to be modelled for the simulation; the main parts are the blades, gearbox, shafts and generator.
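A one-degree-of-freedom sketch of the rotor part of such an energy-method model; the inertia, torque and damping numbers below are purely illustrative, not taken from the paper.

```python
# Rotor model: inertia J driven by a constant aerodynamic torque and braked
# by a generator torque proportional to speed, J*domega/dt = T_aero - k*omega.
# The steady-state speed is T_aero / k.
J, T_AERO, K = 4.0e5, 1.0e6, 2.0e4   # kg*m^2, N*m, N*m*s/rad (illustrative)
omega, dt = 0.0, 0.01

for _ in range(200000):              # 2000 s, many time constants J/K = 20 s
    omega += dt * (T_AERO - K * omega) / J

print(omega)  # approaches T_AERO / K = 50 rad/s
```

The full model in the paper couples several such bodies (blades, shafts, gearbox, generator) through the drivetrain instead of lumping them into one inertia.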
Accurate lithography simulation model based on convolutional neural networks
Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki
2017-07-01
Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to calculate an entire chip in realistic time, a compact resist model is commonly used; the model is formulated for faster calculation. To obtain an accurate compact resist model, it is necessary to fit a complicated non-linear model function. However, it is difficult to choose an appropriate function manually because there are many options. This paper proposes a new compact resist model using CNNs (convolutional neural networks), one of the deep learning techniques. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show the CNN model can reduce CD prediction errors by 70% compared with the conventional model.
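To illustrate the building block involved, here is a single convolution-plus-ReLU layer applied to a toy binary "mask clip", implemented directly in NumPy. The averaging kernel is a stand-in for a learned filter, not the paper's trained model.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Plain 'valid' 2-D convolution (cross-correlation, as in CNN layers)."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

# Toy rasterized layout: an 8x8 clip with a 4x4 feature of ones.
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0
feature = relu(conv2d_valid(mask, np.full((3, 3), 1.0 / 9.0)))
print(feature.shape)   # (6, 6)
print(feature.max())   # 1.0 where a 3x3 window lies fully inside the feature
```

A real compact resist CNN stacks many such layers and learns the kernels from rigorous-simulation or metrology data.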
Landscape Modelling and Simulation Using Spatial Data
Directory of Open Access Journals (Sweden)
Amjed Naser Mohsin AL-Hameedawi
2017-08-01
Full Text Available In this paper a procedure was performed for generating a spatial model of a landscape adapted to reality simulation. The procedure is based on combining spatial data and field measurements with computer graphics produced using the Blender software; a 3D simulation can then be formed based on the VIS ALL packages. The objective was to build a model utilising GIS, including inputs to the feature attribute data. These efforts concentrated on assembling an adequate spatial prototype, establishing a facilitation scheme and outlining the intended framework; the eventual result was utilized in simulation form. The performed procedure comprises not only data gathering, fieldwork and paradigm provision, but extends to a new method needed to produce the respective 3D simulation mapping, which allows decision makers as well as investors to gain lasting acceptance of an independent navigation system for Geoscience applications.
Quantitative interface models for simulating microstructure evolution
International Nuclear Information System (INIS)
Zhu, J.Z.; Wang, T.; Zhou, S.H.; Liu, Z.K.; Chen, L.Q.
2004-01-01
To quantitatively simulate microstructural evolution in real systems, we investigated three different interface models: a sharp-interface model implemented by the software DICTRA and two diffuse-interface models which use either physical order parameters or artificial order parameters. A particular example is considered, the diffusion-controlled growth of a γ ' precipitate in a supersaturated γ matrix in Ni-Al binary alloys. All three models use the thermodynamic and kinetic parameters from the same databases. The temporal evolution profiles of composition from different models are shown to agree with each other. The focus is on examining the advantages and disadvantages of each model as applied to microstructure evolution in alloys
A queuing model for road traffic simulation
International Nuclear Information System (INIS)
Guerrouahane, N.; Aissani, D.; Bouallouche-Medjkoune, L.; Farhi, N.
2015-01-01
We present in this article a stochastic queuing model for road traffic. The model is based on the M/G/c/c state-dependent queuing model, and is inspired by the deterministic Godunov scheme for road traffic simulation. We first propose a variant of the M/G/c/c state-dependent model that works with density-flow fundamental diagrams rather than density-speed relationships. We then extend this model in order to consider upstream traffic demand as well as downstream traffic supply. Finally, we show how to model a whole road by concatenating road sections, as in the deterministic Godunov scheme
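A minimal sketch of the state-dependent M/G/c/c idea (with Markovian service for simplicity): the stationary distribution of a birth-death chain whose traversal rate slows as the road section fills. The capacity, rates and speed-density form below are hypothetical.

```python
from math import prod

# A road section holds at most C vehicles; vehicles arrive at rate lam and
# the aggregate "service" (traversal) rate slows as the section fills,
# mimicking a density-speed relationship (Greenshields-like, illustrative).
C, lam = 10, 3.0

def mu(n):
    """Effective service rate with n vehicles present."""
    return n * 1.0 * (1.0 - (n - 1) / (2.0 * C))

# Stationary probabilities of a birth-death chain:
# pi_n is proportional to lam^n / (mu(1)*mu(2)*...*mu(n)).
weights = [1.0] + [lam**n / prod(mu(k) for k in range(1, n + 1))
                   for n in range(1, C + 1)]
Z = sum(weights)
pi = [w / Z for w in weights]

print(sum(pi))  # 1.0 up to rounding (normalized distribution)
print(pi[C])    # probability the section is full, i.e. blocks upstream demand
```

The blocking probability `pi[C]` is what couples a section to its upstream neighbour when sections are concatenated as in the article.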
Clock error models for simulation and estimation
International Nuclear Information System (INIS)
Meditch, J.S.
1981-10-01
Mathematical models for the simulation and estimation of errors in precision oscillators used as time references in satellite navigation systems are developed. The results, based on all currently known oscillator error sources, are directly implementable on a digital computer. The simulation formulation is sufficiently flexible to allow for the inclusion or exclusion of individual error sources as desired. The estimation algorithms, following from Kalman filter theory, provide directly for the error analysis of clock errors in both filtering and prediction
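The standard two-state clock error model (phase offset and fractional frequency offset, each optionally driven by random-walk noise) can be simulated in a few lines. The step size, noise intensities and initial offset below are illustrative, and the noise is switched off so the expected phase ramp is known exactly.

```python
import numpy as np

# Two-state clock error model: x = [phase offset, fractional frequency
# offset]; phase integrates frequency each step, and each state may receive
# random-walk noise. Setting the intensities to zero gives a pure ramp.
dt, n_steps = 1.0, 100
F = np.array([[1.0, dt],
              [0.0, 1.0]])            # state transition over one step
q_phase, q_freq = 0.0, 0.0            # noise-free check; set > 0 to simulate

rng = np.random.default_rng(0)
x = np.array([0.0, 1e-9])             # 1e-9 fractional frequency offset
for _ in range(n_steps):
    w = rng.normal(0.0, [np.sqrt(q_phase), np.sqrt(q_freq)])
    x = F @ x + w

print(x[0])  # with zero noise, phase = 1e-9 * 100 steps = 1e-7
```

A Kalman filter for this model, as in the paper, uses the same `F` as its prediction step and the noise intensities as the process-noise covariance.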
Comparing AMR and SPH Cosmological Simulations. I. Dark Matter and Adiabatic Simulations
O'Shea, Brian W.; Nagamine, Kentaro; Springel, Volker; Hernquist, Lars; Norman, Michael L.
2005-09-01
We compare two cosmological hydrodynamic simulation codes in the context of hierarchical galaxy formation: the Lagrangian smoothed particle hydrodynamics (SPH) code GADGET, and the Eulerian adaptive mesh refinement (AMR) code Enzo. Both codes represent dark matter with the N-body method but use different gravity solvers and fundamentally different approaches for baryonic hydrodynamics. The SPH method in GADGET uses a recently developed ``entropy conserving'' formulation of SPH, while for the mesh-based Enzo two different formulations of Eulerian hydrodynamics are employed: the piecewise parabolic method (PPM) extended with a dual energy formulation for cosmology, and the artificial viscosity-based scheme used in the magnetohydrodynamics code ZEUS. In this paper we focus on a comparison of cosmological simulations that follow either only dark matter, or also a nonradiative (``adiabatic'') hydrodynamic gaseous component. We perform multiple simulations using both codes with varying spatial and mass resolution with identical initial conditions. The dark matter-only runs agree generally quite well provided Enzo is run with a comparatively fine root grid and a low overdensity threshold for mesh refinement, otherwise the abundance of low-mass halos is suppressed. This can be readily understood as a consequence of the hierarchical particle-mesh algorithm used by Enzo to compute gravitational forces, which tends to deliver lower force resolution than the tree-algorithm of GADGET at early times before any adaptive mesh refinement takes place. At comparable force resolution we find that the latter offers substantially better performance and lower memory consumption than the present gravity solver in Enzo. In simulations that include adiabatic gasdynamics we find general agreement in the distribution functions of temperature, entropy, and density for gas of moderate to high overdensity, as found inside dark matter halos. However, there are also some significant differences in
A model for plasma discharges simulation in Tokamak devices
International Nuclear Information System (INIS)
Fonseca, Antonio M.M.; Silva, Ruy P. da; Galvao, Ricardo M.O.; Kusnetzov, Yuri; Nascimento, I.C.; Cuevas, Nelson
2001-01-01
In this work, a 'zero-dimensional' model for the simulation of discharges in a Tokamak machine is presented. The model allows the calculation of the time profiles of important discharge parameters. The model was applied to the TCABR Tokamak to study the influence of parameters and physical processes during the discharges. Basically, it consists of five differential equations: two related to the primary and secondary circuits of the ohmic heating transformer, and three conservation equations for energy, charge and neutral particles. From the physical model, a computer program has been built with the objective of obtaining the time profiles of the plasma current, the current in the primary of the ohmic heating transformer, the electron temperature, the electron density and the neutral particle density. It was also possible, with the model, to simulate the effects of gas puffing during the shot. The results of the simulation were compared with the experimental results obtained in the TCABR Tokamak using hydrogen gas.
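One equation of this kind, a global energy balance, can be sketched in isolation; the heating power and confinement time below are illustrative round numbers, not TCABR values.

```python
# Zero-dimensional plasma energy balance: dW/dt = P_heat - W / tau_E,
# which relaxes to the steady state W = P_heat * tau_E.
P_HEAT, TAU_E = 0.4e6, 0.05   # heating power (W) and confinement time (s)
W, dt = 0.0, 1e-4

for _ in range(10000):        # 1 s of simulated time = 20 confinement times
    W += dt * (P_HEAT - W / TAU_E)

print(W)  # approaches P_HEAT * TAU_E = 2.0e4 J
```

The full model couples five such equations, so the effective heating power and loss terms themselves depend on the circuit currents and particle densities.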
Simulation Modeling of Software Development Processes
Calavaro, G. F.; Basili, V. R.; Iazeolla, G.
1996-01-01
A simulation modeling approach is proposed for the prediction of software process productivity indices, such as cost and time-to-market, and the sensitivity analysis of such indices to changes in the organization parameters and user requirements. The approach uses a timed Petri Net and Object Oriented top-down model specification. Results demonstrate the model representativeness, and its usefulness in verifying process conformance to expectations, and in performing continuous process improvement and optimization.
Simulation model for electron irradiated IGZO thin film transistors
Dayananda, G. K.; Shantharama Rai, C.; Jayarama, A.; Kim, Hyun Jae
2018-02-01
An efficient drain current simulation model for the electron irradiation effect on the electrical parameters of amorphous In-Ga-Zn-O (IGZO) thin-film transistors is developed. The model is developed based on the specifications such as gate capacitance, channel length, channel width, flat band voltage etc. Electrical parameters of un-irradiated IGZO samples were simulated and compared with the experimental parameters and 1 kGy electron irradiated parameters. The effect of electron irradiation on the IGZO sample was analysed by developing a mathematical model.
Validation of the simulator neutronics model
International Nuclear Information System (INIS)
Gregory, M.V.
1984-01-01
The neutronics model in the SRP reactor training simulator computes the variation with time of the neutron population in the reactor core. The power output of a reactor is directly proportional to the neutron population, thus in a very real sense the neutronics model determines the response of the simulator. The geometrical complexity of the reactor control system in SRP reactors requires the neutronics model to provide a detailed, 3D representation of the reactor core. Existing simulator technology does not allow such a detailed representation to run in real-time in a minicomputer environment, thus an entirely different approach to the problem was required. A prompt jump method has been developed in answer to this need
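The prompt-jump idea can be illustrated with textbook one-delayed-group point kinetics (the constants below are generic illustrative values, not SRP data): after a small positive reactivity step rho < beta, the neutron level rapidly settles near n0*beta/(beta - rho) before the much slower delayed-neutron growth takes over.

```python
# Point kinetics with one delayed-neutron group, integrated explicitly.
BETA, LAM, GEN_TIME = 0.0065, 0.08, 1e-5   # beta, decay const (1/s), Lambda (s)
RHO = 0.1 * BETA                           # small positive reactivity step

n = 1.0
c = BETA / (LAM * GEN_TIME)                # precursor level at steady state
dt = 1e-6
for _ in range(50000):                     # 0.05 s, well past the prompt transient
    dn = ((RHO - BETA) / GEN_TIME) * n + LAM * c
    dc = (BETA / GEN_TIME) * n - LAM * c
    n += dt * dn
    c += dt * dc

prompt_jump = BETA / (BETA - RHO)
print(n, prompt_jump)   # n sits close to the prompt-jump estimate ~1.111
```

The prompt-jump approximation eliminates the fast time scale Lambda/(beta - rho), which is what makes a real-time 3D representation tractable on modest hardware.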
Analyzing Strategic Business Rules through Simulation Modeling
Orta, Elena; Ruiz, Mercedes; Toro, Miguel
Service Oriented Architecture (SOA) holds promise for business agility since it allows business process to change to meet new customer demands or market needs without causing a cascade effect of changes in the underlying IT systems. Business rules are the instrument chosen to help business and IT to collaborate. In this paper, we propose the utilization of simulation models to model and simulate strategic business rules that are then disaggregated at different levels of an SOA architecture. Our proposal is aimed to help find a good configuration for strategic business objectives and IT parameters. The paper includes a case study where a simulation model is built to help business decision-making in a context where finding a good configuration for different business parameters and performance is too complex to analyze by trial and error.
Virtual milk for modelling and simulation of dairy processes.
Munir, M T; Zhang, Y; Yu, W; Wilson, D I; Young, B R
2016-05-01
The modeling of dairy processing using a generic process simulator suffers from shortcomings, given that many simulators do not contain milk components in their component libraries. Recently, pseudo-milk components for a commercial process simulator were proposed for simulation and the current work extends this pseudo-milk concept by studying the effect of both total milk solids and temperature on key physical properties such as thermal conductivity, density, viscosity, and heat capacity. This paper also uses expanded fluid and power law models to predict milk viscosity over the temperature range from 4 to 75°C and develops a succinct regressed model for heat capacity as a function of temperature and fat composition. The pseudo-milk was validated by comparing the simulated and actual values of the physical properties of milk. The milk thermal conductivity, density, viscosity, and heat capacity showed differences of less than 2, 4, 3, and 1.5%, respectively, between the simulated results and actual values. This work extends the capabilities of the previously proposed pseudo-milk and of a process simulator to model dairy processes, processing different types of milk (e.g., whole milk, skim milk, and concentrated milk) with different intrinsic compositions, and to predict correct material and energy balances for dairy processes. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
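The regression step for a succinct property model can be sketched as an ordinary least-squares fit; the linear form cp = a + b*T + c*fat and all numbers below are illustrative stand-ins, not the paper's regressed coefficients.

```python
import numpy as np

# Fit cp = a + b*T + c*fat by least squares on synthetic, noise-free data,
# so the fit recovers the generating coefficients exactly.
rng = np.random.default_rng(1)
T = rng.uniform(4.0, 75.0, 50)           # temperature, degC (paper's range)
fat = rng.uniform(0.1, 5.0, 50)          # fat content, % (illustrative)
a_true, b_true, c_true = 3.9, 0.002, -0.04
cp = a_true + b_true * T + c_true * fat  # kJ/(kg*K), illustrative

X = np.column_stack([np.ones_like(T), T, fat])
coef, *_ = np.linalg.lstsq(X, cp, rcond=None)
print(coef)  # recovers [3.9, 0.002, -0.04] up to rounding
```

With measured data the residuals would be nonzero, and the quality of the regressed model would be judged against them, as the paper does with its sub-2% property comparisons.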
New exploration on TMSR: modelling and simulation
Energy Technology Data Exchange (ETDEWEB)
Si, S.; Chen, Q.; Bei, H.; Zhao, J., E-mail: ssy@snerdi.com.cn [Shanghai Nuclear Engineering Research & Design Inst., Shanghai (China)
2015-07-01
A tightly coupled multi-physics model for MSR (Molten Salt Reactor) system involving the reactor core and the rest of the primary loop has been developed and employed in an in-house developed computer code TANG-MSR. In this paper, the computer code is used to simulate the behavior of steady state operation and transient for our redesigned TMSR. The presented simulation results demonstrate that the models employed in TANG-MSR can capture major physics phenomena in MSR and the redesigned TMSR has excellent performance of safety and sustainability. (author)
Comparative analysis of Goodwin's business cycle models
Antonova, A. O.; Reznik, S.; Todorov, M. D.
2016-10-01
We compare the behavior of solutions of Goodwin's business cycle equation in the form of a neutral delay differential equation with fixed delay (NDDE model) and in the form of differential equations of 3rd, 4th and 5th order (ODE models). Such ODE models (Taylor series expansions of the NDDE in powers of θ) are proposed in N. Dharmaraj and K. Vela Velupillai [6] for investigation of the short periodic sawtooth oscillations in the NDDE. We show that the ODEs of 3rd, 4th and 5th order may approximate the asymptotic behavior of only the main Goodwin mode, but not the sawtooth modes. If the order of the Taylor series expansion exceeds 5, then the approximate ODE becomes unstable independently of the time lag θ.
Comparative Analysis of Disruption Tolerant Network Routing Simulations in the One and NS-3
2017-12-01
Naval Postgraduate School, Monterey, California, thesis, 03-23-2016 to 12-15-2017. The added levels of simulation increase the processing required by a simulation; ns-3's simulation of other layers of the network stack permits...
Kanban simulation model for production process optimization
Directory of Open Access Journals (Sweden)
Golchev Riste
2015-01-01
Full Text Available A long time has passed since the KANBAN system was established as an efficient method for coping with excessive inventory. Still, the possibilities for improving it through integration with other approaches should be investigated further. The basic research challenge of this paper is to present the benefits of KANBAN implementation supported by Discrete Event Simulation (DES). In that direction, the basics of the KANBAN system are first presented, with emphasis on information and material flow, together with a methodology for implementing the KANBAN system. An analysis of combining simulation with this methodology is then presented. The paper concludes with a practical example which shows that, through understanding the philosophy of the KANBAN implementation methodology and the simulation methodology, a simulation model can be created to serve as a basis for a variety of experiments conducted within a short period of time, resulting in production process optimization.
Vermont Yankee simulator BOP model upgrade
International Nuclear Information System (INIS)
Alejandro, R.; Udbinac, M.J.
2006-01-01
The Vermont Yankee simulator has undergone significant changes in the 20 years since the original order was placed. After the move from the original Unix to MS Windows environment, and upgrade to the latest version of SimPort, now called MASTER, the platform was set for an overhaul and replacement of major plant system models. Over a period of a few months, the VY simulator team, in partnership with WSC engineers, replaced outdated legacy models of the main steam, condenser, condensate, circulating water, feedwater and feedwater heaters, and main turbine and auxiliaries. The timing was ideal, as the plant was undergoing a power up-rate, so the opportunity was taken to replace the legacy models with industry-leading, true on-line object oriented graphical models. Due to the efficiency of design and ease of use of the MASTER tools, VY staff performed the majority of the modeling work themselves with great success, with only occasional assistance from WSC, in a relatively short time-period, despite having to maintain all of their 'regular' simulator maintenance responsibilities. This paper will provide a more detailed view of the VY simulator, including how it is used and how it has benefited from the enhancements and upgrades implemented during the project. (author)
A satellite simulator for TRMM PR applied to climate model simulations
Spangehl, T.; Schroeder, M.; Bodas-Salcedo, A.; Hollmann, R.; Riley Dellaripa, E. M.; Schumacher, C.
2017-12-01
Climate model simulations have to be compared against observation-based datasets in order to assess their skill in representing precipitation characteristics. Here we use a satellite simulator for TRMM PR to evaluate simulations performed with MPI-ESM (the Earth system model of the Max Planck Institute for Meteorology in Hamburg, Germany) within the MiKlip project (https://www.fona-miklip.de/, funded by the Federal Ministry of Education and Research in Germany). While classical evaluation methods focus on geophysical parameters such as precipitation amounts, the application of the satellite simulator enables an evaluation in the instrument's parameter space, thereby reducing uncertainties on the reference side. The CFMIP Observation Simulator Package (COSP) provides a framework for applying satellite simulators to climate model simulations. The approach requires the introduction of sub-grid cloud and precipitation variability. Radar reflectivities are obtained by applying Mie theory, with the microphysical assumptions chosen to match the atmosphere component of MPI-ESM (ECHAM6). The results are found to be sensitive to the methods used to distribute the convective precipitation over the sub-grid boxes. Simple parameterization methods are used to introduce sub-grid variability of convective clouds and precipitation. In order to constrain uncertainties, a comprehensive comparison with the sub-grid-scale convective precipitation variability deduced from TRMM PR observations is carried out.
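A much-simplified stand-in for such a simulator's forward operator: in the Rayleigh limit, the radar reflectivity factor is the sixth moment of the drop size distribution. The exponential DSD parameters below are illustrative, and a TRMM PR-grade simulator replaces this with full Mie scattering as the abstract notes.

```python
import numpy as np

# Rayleigh-limit reflectivity factor: Z = sum over bins of N_i * D_i^6,
# with an exponential (Marshall-Palmer-like) drop size distribution.
D = np.linspace(0.1, 6.0, 60)            # drop diameter bins, mm
dD = D[1] - D[0]
N0, slope = 8000.0, 2.0                  # intercept (1/(m^3*mm)), slope (1/mm)
N = N0 * np.exp(-slope * D) * dD         # drops per m^3 in each bin

Z = np.sum(N * D**6)                     # mm^6 / m^3
dBZ = 10.0 * np.log10(Z)
print(dBZ)                               # reflectivity in dBZ
```

Mapping model microphysics through an operator like this, rather than comparing rain rates directly, is what moves the evaluation into the instrument's parameter space.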
An introduction to network modeling and simulation for the practicing engineer
Burbank, Jack; Ward, Jon
2011-01-01
This book provides the practicing engineer with a concise listing of commercial and open-source modeling and simulation tools currently available, including examples of applying those tools to specific Modeling and Simulation problems. Instead of focusing on the underlying theory of Modeling and Simulation and the fundamental building blocks for custom simulations, this book compares platforms used in practice and gives rules enabling the practicing engineer to utilize available Modeling and Simulation tools. It also contains insights regarding common pitfalls in network Modeling and Simulation and practical methods for working engineers.
Modelling, simulation and validation of the industrial robot
Directory of Open Access Journals (Sweden)
Aleksandrov Slobodan Č.
2014-01-01
Full Text Available In this paper, a DH model of an industrial robot with an anthropomorphic configuration and five degrees of freedom, the Mitsubishi RV2AJ, is developed. The model is verified on the Mitsubishi RV2AJ robot itself. The paper presents in detail the complete mathematical model of the robot and the programming parameters. On the basis of this model, point-to-point robot motion is simulated, as well as continuous movement along a pre-defined path. The industrial robot is also programmed identically to the simulation programs, and a comparative analysis of the real and simulated experiments is given. In the final section, a detailed analysis of the robot motion is described.
Diversity modelling for electrical power system simulation
International Nuclear Information System (INIS)
Sharip, R M; Abu Zarim, M A U A
2013-01-01
This paper considers the diversity of generation and demand profiles against different future energy scenarios and evaluates these on a technical basis. Compared to previous studies, this research applies a forecasting concept based on possible growth rates taken from publicly available electrical distribution scenarios concerning the UK. These scenarios were created by different bodies considering aspects such as environment, policy, regulation, economics and technology. In line with these scenarios, forecasting is carried out on a long-term timescale (in ten-year steps from 2020 until 2050) in order to create possible generation mixes and demand profiles to be used as appropriate boundary conditions for the network simulation. The network considered is a segment of rural LV network populated with a mixture of different housing types. The 'future' energy and demand profiles have been successfully modelled by applying the forecasting method. The network results under these profiles show, for the cases studied, that even though the power produced by each micro-generation unit is often in line with the demand requirements of an individual dwelling, no problems arise from high penetration of micro-generation and demand-side management for the dwellings considered. The results obtained highlight the technical issues and changes for energy delivery and management to rural customers under the future energy scenarios
Diversity modelling for electrical power system simulation
Sharip, R. M.; Abu Zarim, M. A. U. A.
2013-12-01
This paper considers the diversity of generation and demand profiles against different future energy scenarios and evaluates these on a technical basis. Compared to previous studies, this research applies a forecasting concept based on possible growth rates taken from publicly available electrical distribution scenarios concerning the UK. These scenarios were created by different bodies considering aspects such as environment, policy, regulation, economics and technology. In line with these scenarios, forecasting is carried out on a long-term timescale (in ten-year steps from 2020 until 2050) in order to create possible generation mixes and demand profiles to be used as appropriate boundary conditions for the network simulation. The network considered is a segment of rural LV network populated with a mixture of different housing types. The 'future' energy and demand profiles have been successfully modelled by applying the forecasting method. The network results under these profiles show, for the cases studied, that even though the power produced by each micro-generation unit is often in line with the demand requirements of an individual dwelling, no problems arise from high penetration of micro-generation and demand-side management for the dwellings considered. The results obtained highlight the technical issues and changes for energy delivery and management to rural customers under the future energy scenarios.
Electromagnetic simulations of simple models of ferrite loaded kickers
Zannini, Carlo; Salvant, B; Metral, E; Rumolo, G
2010-01-01
The kickers are major contributors to the CERN SPS beam coupling impedance. As such, they may represent a limitation to increasing the SPS bunch current in the frame of an intensity upgrade of the LHC. In this paper, CST Particle Studio time domain electromagnetic simulations are performed to obtain the longitudinal and transverse impedances/wake potentials of simplified models of ferrite loaded kickers. The simulation results have been successfully compared with some existing analytical expressions. In the transverse plane, the dipolar and quadrupolar contributions to the wake potentials have been estimated from the results of these simulations. For some cases, simulations have also been benchmarked against measurements on PS kickers. It turns out that the large simulated quadrupolar contributions of these kickers could explain both the negative total (dipolar+quadrupolar) horizontal impedance observed in bench measurements and the positive horizontal tune shift measured with the SPS beam.
Simulation modeling and analysis in safety. II
International Nuclear Information System (INIS)
Ayoub, M.A.
1981-01-01
The paper introduces and illustrates simulation modeling as a viable approach for dealing with complex issues and decisions in safety and health. The author details two studies: evaluation of employee exposure to airborne radioactive materials and effectiveness of the safety organization. The first study seeks to define a policy to manage a facility used in testing employees for radiation contamination. An acceptable policy is one that would permit the testing of all employees as defined under regulatory requirements, while not exceeding available resources. The second study evaluates the relationship between safety performance and the characteristics of the organization, its management, its policy, and communication patterns among various functions and levels. Both studies use models where decisions are reached based on the prevailing conditions and occurrence of key events within the simulation environment. Finally, several problem areas suitable for simulation studies are highlighted. (Auth.)
Comparative Study of the Simulated and Calculation Gantry Angle ...
African Journals Online (AJOL)
Breast irradiation involves a complex geometric and field-matching technique. Simulators are used to obtain the best and accurate patient treatment positioning as well as irradiation geometry for radiation portals. However many centers in developing countries lack this important equipment. The study was designed to ...
Overcoming Microsoft Excel's Weaknesses for Crop Model Building and Simulations
Sung, Christopher Teh Boon
2011-01-01
Using spreadsheets such as Microsoft Excel for building crop models and running simulations can be beneficial. Excel is easy to use, powerful, and versatile, and it requires the least proficiency in computer programming compared to other programming platforms. Excel, however, has several weaknesses: it does not directly support loops for iterative…
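For contrast, the iterative daily update that Excel handles awkwardly is a plain loop in a general-purpose language. The toy crop model below (radiation-use-efficiency growth with a hypothetical light-interception rule) is illustrative only, not a model from the article.

```python
# Hypothetical toy crop model: biomass grows each day by a fixed
# radiation-use efficiency times intercepted radiation; the intercepted
# fraction rises with canopy size (proxied here by biomass) and saturates.
RUE = 1.2         # g biomass per MJ intercepted (illustrative)
radiation = 15.0  # MJ per m^2 per day (held constant for the sketch)

biomass = 0.0
for day in range(1, 121):            # a 120-day season
    fraction_intercepted = min(1.0, biomass / 500.0 + 0.05)
    biomass += RUE * radiation * fraction_intercepted

print(biomass)  # end-of-season biomass, g per m^2
```

In a spreadsheet this recurrence needs one row per day (or iterative-calculation workarounds); in code it is three lines, which is the gap the article addresses.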
A universal simulator for ecological models
DEFF Research Database (Denmark)
Holst, Niels
2013-01-01
Software design is an often neglected issue in ecological models, even though bad software design often becomes a hindrance for re-using, sharing and even grasping an ecological model. In this paper, the methodology of agile software design was applied to the domain of ecological models. Thus...... the principles for a universal design of ecological models were arrived at. To exemplify this design, the open-source software Universal Simulator was constructed using C++ and XML and is provided as a resource for inspiration....
Biological transportation networks: Modeling and simulation
Albi, Giacomo
2015-09-15
We present a model for biological network formation originally introduced by Cai and Hu [Adaptation and optimization of biological transport networks, Phys. Rev. Lett. 111 (2013) 138701]. The modeling of fluid transportation (e.g., leaf venation and angiogenesis) and ion transportation networks (e.g., neural networks) is explained in detail and basic analytical features like the gradient flow structure of the fluid transportation network model and the impact of the model parameters on the geometry and topology of network formation are analyzed. We also present a numerical finite-element based discretization scheme and discuss sample cases of network formation simulations.
Reproducibility in Computational Neuroscience Models and Simulations
McDougal, Robert A.; Bulanova, Anna S.; Lytton, William W.
2016-01-01
Objective: Like all scientific research, computational neuroscience research must be reproducible. Big data science, including simulation research, cannot depend exclusively on journal articles as the method to provide the sharing and transparency required for reproducibility. Methods: Ensuring model reproducibility requires the use of multiple standard software practices and tools, including version control, strong commenting and documentation, and code modularity. Results: Building on these standard practices, model sharing sites and tools have been developed that fit into several categories: 1. standardized neural simulators; 2. shared computational resources; 3. declarative model descriptors, ontologies and standardized annotations; 4. model sharing repositories and sharing standards. Conclusion: A number of complementary innovations have been proposed to enhance sharing, transparency and reproducibility. The individual user can be encouraged to make use of version control, commenting, documentation and modularity in development of models. The community can help by requiring model sharing as a condition of publication and funding. Significance: Model management will become increasingly important as multiscale models become larger, more detailed and correspondingly more difficult to manage by any single investigator or single laboratory. Additional big data management complexity will come as the models become more useful in interpreting experiments, thus increasing the need to ensure clear alignment between modeling data, both parameters and results, and experiment. PMID:27046845
Object Oriented Modelling and Dynamical Simulation
DEFF Research Database (Denmark)
Wagner, Falko Jens; Poulsen, Mikael Zebbelin
1998-01-01
This report with appendix describes the work done in a master project at DTU. The goal of the project was to develop a concept for simulation of dynamical systems based on object-oriented methods. The result was a library of C++ classes, for use both when building component-based models and when...
Advanced feeder control using fast simulation models
Verheijen, O.S.; Op den Camp, O.M.G.C.; Beerkens, R.G.C.; Backx, A.C.P.M.; Huisman, L.; Drummond, C.H.
2005-01-01
For the automatic control of glass quality in glass production, the relation between product (glass) quality and the process conditions/process input parameters must be known in detail. So far, detailed 3-D glass melting simulation models have been used to predict the effect of process
Modeling and Simulating Virtual Anatomical Humans
Madehkhaksar, Forough; Luo, Zhiping; Pronost, Nicolas; Egges, Arjan
2014-01-01
This chapter presents human musculoskeletal modeling and simulation as a challenging field that lies between biomechanics and computer animation. One of the main goals of computer animation research is to develop algorithms and systems that produce plausible motion. On the other hand, the main
Agent Based Modelling for Social Simulation
Smit, S.K.; Ubink, E.M.; Vecht, B. van der; Langley, D.J.
2013-01-01
This document is the result of an exploratory project looking into the status of, and opportunities for Agent Based Modelling (ABM) at TNO. The project focussed on ABM applications containing social interactions and human factors, which we termed ABM for social simulation (ABM4SS). During the course
A simulation to model position encoding multicrystal PET detectors
Energy Technology Data Exchange (ETDEWEB)
Tsang, G; Moisan, C; Rogers, J G
1995-05-01
We have developed a simulation to model position encoding multicrystal detectors for positron emission tomography. The simulation is designed to treat the interactions of energetic photons in a scintillator, the geometry of the multicrystal array, as well as the propagation and detection of individual scintillation photons. The simulation is tested with a model of the EXACT HR PLUS block detector manufactured by Siemens-CTI. Position and energy responses derived from the simulation are compared to measured ones. Line-spread functions, for four columns of crystals, are reproduced with an accuracy of ±0.5 mm. The crystal-by-crystal photopeak pulse heights and FWHMs are also predicted within ranges of ±14% and +9/-6%, respectively. (author). 21 refs., 2 tabs., 7 figs.
Thermohydraulic modeling and simulation of breeder reactors
International Nuclear Information System (INIS)
Agrawal, A.K.; Khatib-Rahbar, M.; Curtis, R.T.; Hetrick, D.L.; Girijashankar, P.V.
1982-01-01
This paper deals with the modeling and simulation of system-wide transients in LMFBRs. Unprotected events (i.e., the presumption of failure of the plant protection system) leading to core-melt are not considered in this paper. The existing computational capabilities in the area of protected transients in the US are noted. Various physical and numerical approximations that are made in these codes are discussed. Finally, the future direction in the area of model verification and improvements is discussed
Turbulence modeling for Francis turbine water passages simulation
International Nuclear Information System (INIS)
Maruzewski, P; Munch, C; Mombelli, H P; Avellan, F; Hayashi, H; Yamaishi, K; Hashii, T; Sugow, Y
2010-01-01
The application of Computational Fluid Dynamics (CFD) to hydraulic machines requires the ability to handle turbulent flows and to take into account the effects of turbulence on the mean flow. Nowadays, Direct Numerical Simulation (DNS) is still not a good candidate for hydraulic machine simulations because of its prohibitive computational cost. Large Eddy Simulation (LES), though in the same category as DNS, could be an alternative, whereby only the small-scale turbulent fluctuations are modeled and the larger-scale fluctuations are computed directly. Nevertheless, Reynolds-Averaged Navier-Stokes (RANS) models have become the widespread standard base for numerous hydraulic machine design procedures. However, for many applications involving wall-bounded flows and attached boundary layers, various hybrid combinations of LES and RANS are being considered, such as Detached Eddy Simulation (DES), whereby the RANS approximation is kept in the regions where the boundary layers are attached to the solid walls. Furthermore, the accuracy of CFD simulations is highly dependent on grid quality, in terms of grid uniformity in complex configurations, so any successful structured or unstructured CFD code has to offer a wide range of models, from classic RANS to hybrid approaches. The aim of this study is to compare the behavior of turbulent simulations for structured and unstructured grid topologies with two different CFD codes applied to the same Francis turbine. Hence, the study is intended to outline the discrepancies encountered in predicting the wake of turbine blades using either the standard k-ε model or the SST shear stress model in a steady CFD simulation. Finally, comparisons are made with experimental data from measurements on the reduced scale model at the EPFL Laboratory for Hydraulic Machines.
Static response of deformable microchannels: a comparative modelling study
Shidhore, Tanmay C.; Christov, Ivan C.
2018-02-01
We present a comparative modelling study of fluid-structure interactions in microchannels. Through a mathematical analysis based on plate theory and the lubrication approximation for low-Reynolds-number flow, we derive models for the flow rate-pressure drop relation for long shallow microchannels with both thin and thick deformable top walls. These relations are tested against full three-dimensional two-way-coupled fluid-structure interaction simulations. Three types of microchannels, representing different elasticity regimes and having been experimentally characterized previously, are chosen as benchmarks for our theory and simulations. Good agreement is found in most cases for the predicted, simulated and measured flow rate-pressure drop relationships. The numerical simulations performed allow us to also carefully examine the deformation profile of the top wall of the microchannel in any cross section, showing good agreement with the theory. Specifically, the prediction that span-wise displacement in a long shallow microchannel decouples from the flow-wise deformation is confirmed, and the predicted scaling of the maximum displacement with the hydrodynamic pressure and the various material and geometric parameters is validated.
Comparing Realistic Subthalamic Nucleus Neuron Models
Njap, Felix; Claussen, Jens C.; Moser, Andreas; Hofmann, Ulrich G.
2011-06-01
The mechanism of action of clinically effective electrical high frequency stimulation is still under debate. However, recent evidence points at the specific activation of GABA-ergic ion channels. Using a computational approach, we analyze temporal properties of the spike trains emitted by biologically realistic neurons of the subthalamic nucleus (STN) as a function of GABA-ergic synaptic input conductances. Our contribution is based on a model proposed by Rubin and Terman and exhibits a wide variety of firing patterns: silent, low spiking, moderate spiking and intense spiking activity. We observed that most of the cells in our network turn to silent mode when we increase the GABAA input conductance above the threshold of 3.75 mS/cm2. On the other hand, insignificant changes in firing activity are observed when the input conductance is low or close to zero. We thus reproduce Rubin's model with vanishing synaptic conductances. To quantitatively compare spike trains from the original model with the modified model at different conductance levels, we apply four different (dis)similarity measures between them. We observe that the Mahalanobis distance, the Victor-Purpura metric, and the interspike interval distribution are sensitive to different firing regimes, whereas Mutual Information appears insensitive to these functional changes.
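The Victor-Purpura metric named above has a compact dynamic-programming form. The sketch below (the function name and the single shift-cost parameter q are our own illustrative choices, not taken from the paper) computes the distance between two spike trains, where deleting or inserting a spike costs 1 and shifting a spike by dt costs q*|dt|:

```python
import numpy as np

def victor_purpura(s, t, q):
    """Victor-Purpura spike-train distance between spike-time lists s and t.

    Edit costs: 1 to insert or delete a spike, q*|dt| to shift one by dt.
    Computed with the classic edit-distance dynamic program.
    """
    n, m = len(s), len(t)
    G = np.zeros((n + 1, m + 1))
    G[:, 0] = np.arange(n + 1)   # delete all remaining spikes of s
    G[0, :] = np.arange(m + 1)   # insert all remaining spikes of t
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i, j] = min(G[i - 1, j] + 1,                      # delete s[i-1]
                          G[i, j - 1] + 1,                      # insert t[j-1]
                          G[i - 1, j - 1] + q * abs(s[i - 1] - t[j - 1]))
    return G[n, m]
```

For small q the metric behaves like a rate comparison (shifts are cheap); for large q it becomes sensitive to precise spike timing, which is why it can discriminate the firing regimes discussed in the abstract.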
Modeling Supermassive Black Holes in Cosmological Simulations
Tremmel, Michael
My thesis work has focused on improving the implementation of supermassive black hole (SMBH) physics in cosmological hydrodynamic simulations. SMBHs are ubiquitous in massive galaxies, as well as bulge-less galaxies and dwarfs, and are thought to be a critical component to massive galaxy evolution. Still, much is unknown about how SMBHs form, grow, and affect their host galaxies. Cosmological simulations are an invaluable tool for understanding the formation of galaxies, self-consistently tracking their evolution with realistic merger and gas accretion histories. SMBHs are often modeled in these simulations (generally as a necessity to produce realistic massive galaxies), but their implementations are commonly simplified in ways that can limit what can be learned. Current and future observations are opening new windows into the lifecycle of SMBHs and their host galaxies, but require more detailed, physically motivated simulations. Within the novel framework I have developed, SMBHs 1) are seeded at early times without a priori assumptions of galaxy occupation, 2) grow in a way that accounts for the angular momentum of gas, and 3) experience realistic orbital evolution. I show how this model, properly tuned with a novel parameter optimization technique, results in realistic galaxies and SMBHs. Utilizing the unique ability of these simulations to capture the dynamical evolution of SMBHs, I present the first self-consistent prediction for the formation timescales of close SMBH pairs, precursors to SMBH binaries and merger events potentially detected by future gravitational wave experiments.
Numerical simulations of altocumulus with a cloud resolving model
Energy Technology Data Exchange (ETDEWEB)
Liu, S.; Krueger, S.K. [Univ. of Utah, Salt Lake City, UT (United States)
1996-04-01
Altocumulus and altostratus clouds together cover approximately 22% of the earth's surface. They play an important role in the earth's energy budget through their effect on solar and infrared radiation. However, there has been little altocumulus cloud investigation by either modelers or observational programs. Starr and Cox (SC) (1985a,b) simulated an altostratus case as part of the same study in which they modeled a thin layer of cirrus. Although this calculation was originally described as representing altostratus, it probably better represents altocumulus stratiformis. In this paper, we simulate altocumulus cloud with a cloud resolving model (CRM). We briefly describe the CRM first. We calculate the same middle-level cloud case as SC to compare our results with theirs. We will look at the role of cloud-scale processes in response to large-scale forcing. We will also discuss radiative effects by simulating diurnal and nocturnal cases. Finally, we discuss the utility of a 1D model by comparing 1D simulations and 2D simulations.
On microscopic simulations of systems with model chemical reactions
International Nuclear Information System (INIS)
Gorecki, J.; Gorecka, J.N.
1998-01-01
Large scale computer simulations of model chemical systems play the role of idealized experiments in which theories may be tested. In this paper we present two applications of microscopic simulations based on the reactive hard sphere model. We investigate the influence of internal fluctuations on an oscillating chemical system and observe how they modify its phase portrait. A second application concerns the propagation of a chemical wave front associated with a thermally activated reaction. It is shown that nonequilibrium effects increase the front velocity compared with the velocity of a front generated by a nonactivated process characterized by the same rate constant. (author)
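The reactive hard-sphere simulations of the paper are far beyond a short listing, but the role of internal fluctuations they probe can be illustrated with a much simpler stochastic technique, the Gillespie algorithm, applied here to a plain decay reaction A → B. This is a stand-in example of ours, not the authors' model:

```python
import random

def gillespie_decay(n0, k, t_end, seed=1):
    """Exact stochastic simulation (Gillespie SSA) of the reaction A -> B.

    Each event removes one A molecule; waiting times are exponential with
    rate k*n. Individual runs fluctuate around the deterministic mean
    n0*exp(-k*t), illustrating internal fluctuations in small systems.
    """
    rng = random.Random(seed)
    t, n = 0.0, n0
    times, counts = [0.0], [n0]
    while n > 0:
        a = k * n                   # total propensity of the single reaction
        t += rng.expovariate(a)     # time to the next reaction event
        if t > t_end:
            break
        n -= 1
        times.append(t)
        counts.append(n)
    return times, counts
```

Averaging `counts[-1]` over many seeds recovers the deterministic decay curve, while any single trajectory shows the molecule-level noise.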
Akiyama, S.; Kawaji, K.; Fujihara, S.
2013-12-01
difference calculation based on the shallow water theory. The initial wave height for tsunami generation is estimated from the vertical displacement of ocean bottom due to the crustal movements, which is obtained from the ground motion simulation mentioned above. The results of tsunami simulations are compared with the observations of the GPS wave gauges to evaluate the validity for the tsunami prediction using the fault model based on the seismic observation records.
Dynamic modeling and simulation of a real world billiard
International Nuclear Information System (INIS)
Hartl, Alexandre E.; Miller, Bruce N.; Mazzoleni, Andre P.
2011-01-01
Gravitational billiards provide an experimentally accessible arena for testing formulations of nonlinear dynamics. We present a mathematical model that captures the essential dynamics required for describing the motion of a realistic billiard for arbitrary boundaries. Simulations of the model are applied to parabolic, wedge and hyperbolic billiards that are driven sinusoidally. Direct comparisons are made between the model's predictions and previously published experimental data. It is shown that the data can be successfully modeled with a simple set of parameters without an assumption of exotic energy dependence. Highlights: We create a model of a gravitational billiard that includes rotation and dissipation. Predictions of the model are compared with the experiments of Felt and Olafsen. The simulations correctly predict the essential features of the experiments.
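A minimal relative of such driven gravitational billiards is a ball bouncing on a sinusoidally vibrating platform. The following high-bounce (static-wall) map with a restitution coefficient is our own illustrative sketch, not the authors' rotational-dissipative model; all parameter values are placeholders:

```python
import math

def bouncer_map(t, v, r=0.85, amp=0.01, omega=30.0, g=9.81, n_steps=200):
    """Iterate the high-bounce (static-wall) map of a ball on a
    sinusoidally vibrating platform.

    Between impacts the flight time is 2v/g (platform displacement
    neglected); at impact the ball rebounds inelastically with
    restitution r and picks up the platform velocity. Returns the
    list of (impact time, post-impact upward speed) pairs.
    """
    traj = []
    for _ in range(n_steps):
        t += 2.0 * v / g                        # ballistic flight time
        u = amp * omega * math.cos(omega * t)   # platform velocity at impact
        v = max(r * v + (1.0 + r) * u, 0.0)     # clamp: ball may "stick"
        traj.append((t, v))
    return traj
```

With the driving amplitude set to zero the map reduces to pure geometric damping, a handy sanity check; with driving on, the same map exhibits the period-doubling and chaotic regimes these experiments probe.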
Comparative study of Moore and Mealy machine models adaptation
African Journals Online (AJOL)
An automata model was developed for an ABS manufacturing process using Moore and Mealy Finite State Machines. Simulation ... The simulation results showed that the Mealy machine is faster than the Moore ... random numbers from MATLAB.
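The speed difference the snippet alludes to comes from a structural property: a Mealy machine's output depends on the current state and input, so it can react on the transition itself, while a Moore machine's output depends on the state alone and therefore lags by one step. A toy detector for two consecutive 1s makes this concrete (the task and function names are ours, not the paper's ABS process model):

```python
def mealy_detect(bits):
    """Mealy machine: output 1 the moment the second consecutive 1 arrives.
    Output is a function of (state, input)."""
    out, state = [], 0            # state 0: last bit was 0; state 1: last was 1
    for b in bits:
        out.append(1 if (state == 1 and b == 1) else 0)
        state = 1 if b == 1 else 0
    return out

def moore_detect(bits):
    """Moore machine for the same task: output is a function of the current
    state only, so detection appears one step later than in the Mealy machine."""
    out, state = [], 0            # states: 0 = start, 1 = one '1', 2 = "11" seen
    for b in bits:
        out.append(1 if state == 2 else 0)   # emit before consuming b
        if b == 1:
            state = 2 if state >= 1 else 1
        else:
            state = 0
    return out
```

On the same input stream the Moore output is exactly the Mealy output delayed by one symbol, which is the one-clock-cycle penalty behind the "Mealy is faster" observation.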
Mesoscopic modelling and simulation of soft matter.
Schiller, Ulf D; Krüger, Timm; Henrich, Oliver
2017-12-20
The deformability of soft condensed matter often requires modelling of hydrodynamical aspects to gain quantitative understanding. This, however, requires specialised methods that can resolve the multiscale nature of soft matter systems. We review a number of the most popular simulation methods that have emerged, such as Langevin dynamics, dissipative particle dynamics, multi-particle collision dynamics, sometimes also referred to as stochastic rotation dynamics, and the lattice-Boltzmann method. We conclude this review with a short glance at current compute architectures for high-performance computing and community codes for soft matter simulation.
Modeling and Simulation of Claus Unit Reaction Furnace
Directory of Open Access Journals (Sweden)
Maryam Pahlavan
2016-01-01
The reaction furnace is the most important part of the Claus sulfur recovery unit and its performance has a significant impact on the process efficiency. Many reactions happen in the furnace and their kinetics and mechanisms are not completely understood; therefore, modeling the reaction furnace is difficult, and several works have been carried out in this regard so far. Equilibrium models are commonly used to simulate the furnace, but the related literature states that the outlet of the furnace is not in equilibrium and the furnace reactions are controlled by kinetic laws; therefore, in this study, the reaction furnace is simulated by a kinetic model. The outlet temperature and concentrations predicted by this model are compared with experimental data published in the literature and with the data obtained by the PROMAX V2.0 simulator. The results show that the accuracy of the proposed kinetic model and the PROMAX simulator is comparable, but the kinetic model used in this paper has two important capabilities. Firstly, it is a distributed model and can be used to obtain the temperature and concentration profiles along the furnace. Secondly, it is a dynamic model and can be used for analyzing the transient behavior and designing the control system.
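The "distributed model" capability, producing profiles along the furnace rather than a single outlet value, can be illustrated with a generic first-order plug-flow balance u dc/dx = -k c integrated by explicit Euler. This is a hedged sketch with placeholder parameters; the actual Claus furnace involves many coupled reactions and an energy balance:

```python
import math

def plug_flow_profile(c_in, k, u, length, n=1000):
    """Concentration profile of a first-order reaction A -> products along
    a plug-flow reactor: u dc/dx = -k c, integrated by explicit Euler.

    c_in: inlet concentration, k: rate constant [1/s],
    u: axial velocity [m/s], length: reactor length [m].
    Returns n+1 concentrations from inlet to outlet.
    """
    dx = length / n
    c, profile = c_in, [c_in]
    for _ in range(n):
        c += dx * (-k * c / u)   # Euler step of the axial balance
        profile.append(c)
    return profile
```

The outlet value converges to the analytic solution c_in*exp(-k*length/u) as the grid is refined, while the whole list gives the axial profile an equilibrium (lumped) model cannot provide.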
An Individual-based Probabilistic Model for Fish Stock Simulation
Directory of Open Access Journals (Sweden)
Federico Buti
2010-08-01
We define an individual-based probabilistic model of sole (Solea solea) behaviour. The individual model is given in terms of an Extended Probabilistic Discrete Timed Automaton (EPDTA), a new formalism that is introduced in the paper and that is shown to be interpretable as a Markov decision process. A given EPDTA model can be probabilistically model-checked by giving a suitable translation into syntax accepted by existing model-checkers. In order to simulate the dynamics of a given population of soles in different environmental scenarios, an agent-based simulation environment is defined in which each agent implements the behaviour of the given EPDTA model. By varying the probabilities and the characteristic functions embedded in the EPDTA model it is possible to represent different scenarios and to tune the model itself by comparing the results of the simulations with real data about the sole stock in the North Adriatic sea, available from the recent project SoleMon. The simulator is presented and made available for its adaptation to other species.
Modeling, simulation and optimization of bipedal walking
Berns, Karsten
2013-01-01
The model-based investigation of motions of anthropomorphic systems is an important interdisciplinary research topic involving specialists from many fields such as Robotics, Biomechanics, Physiology, Orthopedics, Psychology, Neurosciences, Sports, Computer Graphics and Applied Mathematics. This book presents a study of basic locomotion forms such as walking and running, which are of particular interest due to the high demand on dynamic coordination, actuator efficiency and balance control. Mathematical models and numerical simulation and optimization techniques are explained, in combination with experimental data, which can help to better understand the basic underlying mechanisms of these motions and to improve them. Example topics treated in this book are: modeling techniques for anthropomorphic bipedal walking systems; optimized walking motions for different objective functions; identification of objective functions from measurements; simulation and optimization approaches for humanoid robots; biologically inspired con...
Multiphase reacting flows modelling and simulation
Marchisio, Daniele L
2007-01-01
The papers in this book describe the most widely applicable modeling approaches and are organized in six groups covering from fundamentals to relevant applications. In the first part, some fundamentals of multiphase turbulent reacting flows are covered. In particular the introduction focuses on basic notions of turbulence theory in single-phase and multi-phase systems as well as on the interaction between turbulence and chemistry. In the second part, models for the physical and chemical processes involved are discussed. Among other things, particular emphasis is given to turbulence modeling strategies for multiphase flows based on the kinetic theory for granular flows. Next, the different numerical methods based on Lagrangian and/or Eulerian schemes are presented. In particular the most popular numerical approaches of computational fluid dynamics codes are described (i.e., Direct Numerical Simulation, Large Eddy Simulation, and Reynolds-Averaged Navier-Stokes approach). The book will cover particle-based meth...
Advancing Material Models for Automotive Forming Simulations
International Nuclear Information System (INIS)
Vegter, H.; An, Y.; Horn, C.H.L.J. ten; Atzema, E.H.; Roelofsen, M.E.
2005-01-01
Simulations in the automotive industry need more advanced material models to achieve highly reliable forming and springback predictions. Conventional material models implemented in FEM simulation models are not capable of describing the plastic material behaviour during monotonic strain paths with sufficient accuracy. Recently, ESI and Corus have co-operated on the implementation of an advanced material model in the FEM code PAMSTAMP 2G. This applies to the strain hardening model, the influence of strain rate, and the description of the yield locus in these models. A subsequent challenge is the description of the material after a change of strain path. The use of advanced high strength steels in the automotive industry requires a description of the plastic material behaviour of multiphase steels. The simplest variant is dual phase steel, consisting of a ferritic and a martensitic phase. Multiphase materials also contain a bainitic phase in addition to the ferritic and martensitic phases. More physical descriptions of strain hardening than simple fitted Ludwik/Nadai curves are necessary. Methods to predict the plastic behaviour of single-phase materials use a simple dislocation interaction model based on the formed cell structures only. At Corus, a new method is proposed to predict the plastic behaviour of multiphase materials; it has to take into account the hard phases, which deform less easily. The resulting deformation gradients create geometrically necessary dislocations. Additional microstructural information, such as the morphology and size of hard phase particles or grains, is necessary to derive the strain hardening models for this type of material. Measurements available from the Numisheet benchmarks allow these models to be validated. At Corus, additional measured values are available from cross-die tests. This laboratory test can attain critical deformations by large variations in blank size and processing conditions. The tests are a powerful tool in optimising forming simulations prior
Dynamic Simulation of Human Gait Model With Predictive Capability.
Sun, Jinming; Wu, Shaoli; Voglewede, Philip A
2018-03-01
In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control, rather than exclusively classical feedback control, which acts on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compares the predicted output to the reference, and optimizes the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate the kinematic output close to experimental data.
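The receding-horizon idea described above (predict over a horizon, optimize the whole input sequence, apply only the first input, repeat) can be sketched on a double integrator, a stand-in plant of our choosing; the paper's nine-DOF gait model is far richer:

```python
import numpy as np

def mpc_step(x, ref, A, B, horizon=10, r_penalty=1e-4):
    """One receding-horizon step: stack the predictions
    x_{k+1} = A^{k+1} x + sum_j A^{k-j} B u_j over the horizon,
    solve a least-squares tracking problem for the input sequence,
    and return only the first input."""
    n = x.size
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(horizon)])
    G = np.zeros((horizon * n, horizon))
    for k in range(horizon):
        for j in range(k + 1):
            G[k * n:(k + 1) * n, j] = (np.linalg.matrix_power(A, k - j) @ B).ravel()
    # Track the position (first state component) at every predicted step.
    C = np.zeros((horizon, horizon * n))
    for k in range(horizon):
        C[k, k * n] = 1.0
    H = C @ G
    e = ref - C @ F @ x
    # Regularized normal equations: small penalty on input effort.
    U = np.linalg.solve(H.T @ H + r_penalty * np.eye(horizon), H.T @ e)
    return U[0]

# Closed loop on a double integrator (position, velocity), dt = 0.1.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
x = np.zeros(2)
target = 1.0
for _ in range(100):
    u = mpc_step(x, np.full(10, target), A, B)
    x = A @ x + B.ravel() * u
```

Because the optimization looks ahead rather than reacting to past error alone, the loop settles on the target without the lag a purely feedback scheme would show; the same structure, with a nonlinear internal model, underlies the gait controller described in the abstract.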
Modelling and simulation of thermal power plants
Energy Technology Data Exchange (ETDEWEB)
Eborn, J.
1998-02-01
Mathematical modelling and simulation are important tools when dealing with engineering systems that today are becoming increasingly more complex. Integrated production and recycling of materials are trends that give rise to heterogeneous systems, which are difficult to handle within one area of expertise. Model libraries are an excellent way to package engineering knowledge of systems and units to be reused by those who are not experts in modelling. Many commercial packages provide good model libraries, but they are usually domain-specific and closed. Heterogeneous, multi-domain systems require open model libraries written in general purpose modelling languages. This thesis describes a model database for thermal power plants written in the object-oriented modelling language OMOLA. The models are based on first principles. Subunits describe volumes with pressure and enthalpy dynamics and flows of heat or different media. The subunits are used to build basic units such as pumps, valves and heat exchangers which can be used to build system models. Several applications are described; a heat recovery steam generator, equipment for juice blending, steam generation in a sulphuric acid plant and a condensing steam plate heat exchanger. Model libraries for industrial use must be validated against measured data. The thesis describes how parameter estimation methods can be used for model validation. Results from a case-study on parameter optimization of a non-linear drum boiler model show how the technique can be used 32 refs, 21 figs
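The parameter-estimation-for-validation idea can be sketched with a first-order lag, the kind of step response a lumped boiler volume produces: scan candidate time constants and keep the one that minimizes the squared error against measurements. This is an illustration of ours with synthetic data, not the thesis's nonlinear OMOLA optimization:

```python
import numpy as np

def fit_time_constant(t, y, y0, y_inf, taus):
    """Least-squares estimate of a first-order model's time constant.

    Model: y(t) = y_inf + (y0 - y_inf) * exp(-t / tau).
    Scans the candidate values in `taus` and returns the one with the
    smallest sum of squared errors against the measurements y.
    """
    best_tau, best_sse = None, np.inf
    for tau in taus:
        pred = y_inf + (y0 - y_inf) * np.exp(-t / tau)
        sse = np.sum((y - pred) ** 2)
        if sse < best_sse:
            best_tau, best_sse = tau, sse
    return best_tau

# Synthetic "measured" cool-down data generated with tau = 5.0 (placeholder
# units): initial value 400, final value 300.
t = np.linspace(0.0, 20.0, 50)
y = 300.0 + (400.0 - 300.0) * np.exp(-t / 5.0)
tau_hat = fit_time_constant(t, y, 400.0, 300.0, np.linspace(1.0, 10.0, 91))
```

With real plant data the residual left after the best fit is itself the validation signal: a structural model error shows up as a systematic residual no parameter choice can remove.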
Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F
2018-01-01
Mathematical models simulating different representative engineering problems (atomic dry friction, moving front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear differential equations, coupled or uncoupled. For the different parameter values that influence the solution, the problems are numerically solved by the network method, which provides all the variables of the problems. Although the models are extremely sensitive to these parameters, no assumptions are made as regards linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data, published in the scientific literature, to show the reliability of the models.
Validity of microgravity simulation models on earth
DEFF Research Database (Denmark)
Regnard, J; Heer, M; Drummer, C
2001-01-01
Many studies have used water immersion and head-down bed rest as experimental models to simulate responses to microgravity. However, some data collected during space missions are at variance or in contrast with observations collected from experimental models. These discrepancies could reflect incomplete knowledge of the characteristics inherent to each model. During water immersion, the hydrostatic pressure lowers the peripheral vascular capacity and causes increased thoracic blood volume and high vascular perfusion. In turn, these changes lead to high urinary flow, low vasomotor tone, and a high...
Kemp, Chandler E; Ravikumar, Arvind P; Brandt, Adam R
2016-04-19
We present a tool for modeling the performance of methane leak detection and repair programs that can be used to evaluate the effectiveness of detection technologies and proposed mitigation policies. The tool uses a two-state Markov model to simulate the evolution of methane leakage from an artificial natural gas field. Leaks are created stochastically, drawing from the current understanding of the frequency and size distributions at production facilities. Various leak detection and repair programs can be simulated to determine the rate at which each would identify and repair leaks. Integrating the methane leakage over time enables a meaningful comparison between technologies, using both economic and environmental metrics. We simulate four existing or proposed detection technologies: flame ionization detection, manual infrared camera, automated infrared drone, and distributed detectors. Comparing these four technologies, we found that over 80% of simulated leakage could be mitigated with a positive net present value, although the maximum benefit is realized by selectively targeting larger leaks. Our results show that low-cost leak detection programs can rely on high-cost technology, as long as it is applied in a way that allows for rapid detection of large leaks. Any strategy to reduce leakage should require a careful consideration of the differences between low-cost technologies and low-cost programs.
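The two-state (leaking / not leaking) Markov simulation described in this abstract can be sketched in a few lines. The well count, transition probabilities and leak-size distribution below are illustrative assumptions, not values taken from the paper:

```python
import random

def simulate_leaks(n_wells=100, n_days=365, p_leak=0.01, p_repair=0.05,
                   leak_sizes=(0.1, 1.0, 10.0), seed=0):
    """Two-state Markov simulation of an artificial gas field.

    Each well is either not leaking (rate 0.0) or leaking at a rate drawn
    from leak_sizes; integrating the leak rates over time gives the total
    methane emitted (arbitrary units).
    """
    rng = random.Random(seed)
    rates = [0.0] * n_wells              # 0.0 means the well is not leaking
    total_emitted = 0.0
    for _ in range(n_days):
        for i in range(n_wells):
            if rates[i] == 0.0:
                if rng.random() < p_leak:      # stochastic leak creation
                    rates[i] = rng.choice(leak_sizes)
            elif rng.random() < p_repair:      # detection-and-repair event
                rates[i] = 0.0
        total_emitted += sum(rates)            # integrate leakage over time
    return total_emitted
```

Running the same field with a higher repair probability, as a more effective detection program would achieve, lowers the integrated leakage, which is the quantity the authors use to compare technologies and policies.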
Mathematical models and numerical simulation in electromagnetism
Bermúdez, Alfredo; Salgado, Pilar
2014-01-01
The book represents a basic support for a master course in electromagnetism oriented to numerical simulation. The main goal of the book is that the reader knows the boundary-value problems of partial differential equations that should be solved in order to perform computer simulation of electromagnetic processes. Moreover it includes a part devoted to electric circuit theory based on ordinary differential equations. The book is mainly oriented to electric engineering applications, going from the general to the specific, namely, from the full Maxwell’s equations to the particular cases of electrostatics, direct current, magnetostatics and eddy currents models. Apart from standard exercises related to analytical calculus, the book includes some others oriented to real-life applications solved with MaxFEM free simulation software.
Modeling and simulation of economic processes
Directory of Open Access Journals (Sweden)
Bogdan Brumar
2010-12-01
In general, any activity requires an extended course of action, often characterized by a degree of uncertainty and insecurity with respect to the size of the objective pursued. Because of the complexity of real economic systems and the stochastic dependencies between the variables and parameters considered, not all systems can be adequately represented by a model that can be solved by analytical methods and that covers all the issues relevant to economic management decision analysis over a realistic horizon. Often in such cases the simulation technique is considered the only available alternative. Using simulation techniques to study real-world systems often requires laborious work; carrying out a simulation experiment is a process that takes place in several stages.
Simulation as a surgical teaching model.
Ruiz-Gómez, José Luis; Martín-Parra, José Ignacio; González-Noriega, Mónica; Redondo-Figuero, Carlos Godofredo; Manuel-Palazuelos, José Carlos
2018-01-01
Teaching of surgery has been affected by many factors over the last years, such as the reduction of working hours, the optimization of the use of the operating room and patient safety. Traditional teaching methodology fails to reduce the impact of these factors on the surgeon's training. Simulation as a teaching model minimizes such impact and is more effective than traditional teaching methods for integrating knowledge and clinical-surgical skills. Simulation complements clinical assistance with training, creating a safe learning environment where patient safety is not affected and ethical or legal conflicts are avoided. Simulation uses learning methodologies that allow teaching to be individualized, adapting it to the learning needs of each student. It also allows training of all kinds of technical, cognitive or behavioural skills. Copyright © 2017 AEC. Published by Elsevier España, S.L.U. All rights reserved.
Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.
Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O
2006-03-01
The analysis of complex biochemical networks is conducted in two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables, in order to generate counts of molecules for chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although there are numerous tools available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourage experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html.
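The stochastic side of the deterministic/stochastic contrast drawn above is typically realised with Gillespie's direct method. A minimal sketch for a birth-death process (written in Python for self-containment rather than the MATLAB of the paper; rate constants are illustrative) is:

```python
import random

def gillespie_birth_death(k=10.0, gamma=1.0, t_end=50.0, seed=1):
    """Gillespie direct-method simulation of the birth-death process
    0 -> X (rate k), X -> 0 (rate gamma * x).

    The deterministic ODE counterpart dx/dt = k - gamma*x has the
    steady state x = k/gamma; the stochastic copy number fluctuates
    around that value. Returns the copy number at the end of the run.
    """
    rng = random.Random(seed)
    t, x = 0.0, 0
    while t < t_end:
        a1, a2 = k, gamma * x        # propensities of birth and death
        a0 = a1 + a2
        t += rng.expovariate(a0)     # exponential waiting time to next event
        if rng.random() * a0 < a1:
            x += 1                   # birth event
        else:
            x -= 1                   # death event
    return x
```

Averaging many independent realisations recovers the ODE steady state k/gamma, which is the kind of cross-check between the two frameworks that the abstract describes.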
Comparing measured with simulated vertical soil stress under vehicle load
DEFF Research Database (Denmark)
Keller, Thomas; Lamandé, Mathieu; Arvidsson, Johan
The load transfer within agricultural soil is typically modelled on the basis of the theory of stress transmission in elastic media, usually in the semi-empirical form that includes the “concentration factor” (v). Measurements of stress in soil are needed to evaluate model calculations, but may...
Modeling and simulation of photovoltaic solar panel
International Nuclear Information System (INIS)
Belarbi, M.; Haddouche, K.; Midoun, A.
2006-01-01
In this article, we present a new approach for estimating the model parameters of a photovoltaic solar panel according to the irradiance and temperature. The parameters of the one-diode model are obtained from the knowledge of three operating points: short circuit, open circuit, and maximum power. In the first step, the adopted approach solves the system of equations constituting the three operating points in order to express all the model parameters as functions of the series resistance. Secondly, we perform an iterative resolution at the optimal operating point, using the Newton-Raphson method, to calculate the series resistance value as well as the model parameters. Once the panel model is identified, we consider additional equations to take the irradiance and temperature effects into account. The simulation results show the convergence speed of the model parameters and the possibility of visualizing the electrical behaviour of the panel according to irradiance and temperature. Note that the algorithm is sensitive at the optimal operating point: a small variation of the optimal voltage value leads to a very large variation of the identified parameter values. With the identified model, we can develop maximum power point tracking algorithms and simulate a solar water pumping system. (Author)
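The kind of Newton-Raphson iteration the abstract relies on can be illustrated by solving the implicit one-diode equation for the panel current at a given voltage. This is a generic sketch of the technique, not the article's identification procedure, and the parameter values below are illustrative placeholders:

```python
import math

def panel_current(v, i_ph=8.0, i_0=1e-9, n_vt=1.2, r_s=0.02, tol=1e-9):
    """Newton-Raphson solution of the implicit one-diode equation
    f(i) = i_ph - i_0*(exp((v + i*r_s)/n_vt) - 1) - i = 0
    for the current i at terminal voltage v.

    i_ph: photocurrent, i_0: diode saturation current,
    n_vt: ideality factor times thermal voltage (lumped),
    r_s: series resistance. All values are illustrative, not fitted.
    """
    i = i_ph                                   # short-circuit current as initial guess
    for _ in range(100):
        e = math.exp((v + i * r_s) / n_vt)
        f = i_ph - i_0 * (e - 1.0) - i
        df = -i_0 * e * (r_s / n_vt) - 1.0     # derivative df/di
        step = f / df
        i -= step
        if abs(step) < tol:
            break
    return i
```

At v = 0 the solver returns essentially the photocurrent (short circuit), and at the open-circuit voltage it returns a current of zero, which matches two of the three operating points used for identification.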
Facebook's personal page modelling and simulation
Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.
2015-02-01
In this paper we try to define the utility of Facebook's Personal Page marketing method. This tool that Facebook provides is modelled and simulated using iThink in the context of a Facebook marketing agency. The paper leverages the system dynamics paradigm to model Facebook marketing tools and methods, using the iThink™ system to implement them, and uses the design science research methodology for the proof of concept of the models and modelling processes. The model has been developed for a social media marketing agent/company, is oriented to the Facebook platform and has been tested in real circumstances. It was finalized through a number of revisions and iterations of the design, development, simulation, testing and evaluation processes. The validity and usefulness of this Facebook marketing model for day-to-day decision making are authenticated by the management of the company organization. Facebook's Personal Page method can be adjusted, depending on the situation, in order to maximize the total profit of the company, which means bringing in new customers, keeping the interest of existing customers and delivering traffic to its website.
A simulation model for material accounting systems
International Nuclear Information System (INIS)
Coulter, C.A.; Thomas, K.E.
1987-01-01
A general-purpose model that was developed to simulate the operation of a chemical processing facility for nuclear materials has been extended to describe material measurement and accounting procedures as well. The model now provides descriptors for material balance areas, a large class of measurement instrument types and their associated measurement errors for various classes of materials, the measurement instruments themselves with their individual calibration schedules, and material balance closures. Delayed receipt of measurement results (as for off-line analytical chemistry assay), with interim use of a provisional measurement value, can be accurately represented. The simulation model can be used to estimate inventory difference variances for processing areas that do not operate at steady state, to evaluate the timeliness of measurement information, to determine process impacts of measurement requirements, and to evaluate the effectiveness of diversion-detection algorithms. Such information is usually difficult to obtain by other means. Use of the measurement simulation model is illustrated by applying it to estimate inventory difference variances for two material balance area structures of a fictitious nuclear material processing line
Theory, modeling and simulation: Annual report 1993
Energy Technology Data Exchange (ETDEWEB)
Dunning, T.H. Jr.; Garrett, B.C.
1994-07-01
Developing the knowledge base needed to address the environmental restoration issues of the US Department of Energy requires a fundamental understanding of molecules and their interactions in isolation and in liquids, on surfaces, and at interfaces. To meet these needs, the PNL has established the Environmental and Molecular Sciences Laboratory (EMSL) and will soon begin construction of a new, collaborative research facility devoted to advancing the understanding of environmental molecular science. Research in the Theory, Modeling, and Simulation program (TMS), which is one of seven research directorates in the EMSL, will play a critical role in understanding molecular processes important in restoring DOE's research, development and production sites, including understanding the migration and reactions of contaminants in soils and groundwater, the development of separation processes for isolation of pollutants, the development of improved materials for waste storage, understanding the enzymatic reactions involved in the biodegradation of contaminants, and understanding the interaction of hazardous chemicals with living organisms. The research objectives of the TMS program are to apply available techniques to study fundamental molecular processes involved in natural and contaminated systems; to extend current techniques to treat molecular systems of future importance and to develop techniques for addressing problems that are computationally intractable at present; to apply molecular modeling techniques to simulate molecular processes occurring in the multispecies, multiphase systems characteristic of natural and polluted environments; and to extend current molecular modeling techniques to treat complex molecular systems and to improve the reliability and accuracy of such simulations. The program contains three research activities: Molecular Theory/Modeling, Solid State Theory, and Biomolecular Modeling/Simulation. Extended abstracts are presented for 89 studies.
Variable slip wind generator modeling for real-time simulation
Energy Technology Data Exchange (ETDEWEB)
Gagnon, R.; Brochu, J.; Turmel, G. [Hydro-Quebec, Varennes, PQ (Canada). IREQ
2006-07-01
A model of a wind turbine using a variable slip wound-rotor induction machine was presented. The model was created as part of a library of generic wind generator models intended for wind integration studies. The stator winding of the wind generator was connected directly to the grid and the rotor was driven by the turbine through a drive train. The variable rotor resistance was synthesized by an external resistor in parallel with a diode rectifier. A forced-commutated power electronic device (IGBT) was connected to the wound rotor by slip rings and brushes. Simulations were conducted in a Matlab/Simulink environment using SimPowerSystems blocks to model power system elements and Simulink blocks to model the turbine, control system and drive train. Detailed descriptions of the turbine, the drive train and the control system were provided. The model's implementation in the simulator was also described. A case study demonstrating the real-time simulation of a wind generator connected at the distribution level of a power system was presented. Results of the case study were then compared with results obtained from the SimPowerSystems off-line simulation. Results showed good agreement between the waveforms, demonstrating the conformity of the real-time and the off-line simulations. The capability of Hypersim for real-time simulation of wind turbines with power electronic converters in a distribution network was demonstrated. It was concluded that hardware-in-the-loop (HIL) simulation of wind turbine controllers for wind integration studies in power systems is now feasible. 5 refs., 1 tab., 6 figs.
Numerical model application in rowing simulator design
Directory of Open Access Journals (Sweden)
Petr Chmátal
2016-04-01
The aim of the research was to carry out a hydraulic design of a rowing/sculling and paddling simulator. Nowadays there are two main approaches in simulator design. The first one uses static water with no artificial movement and counts on specially cut oars to provide the same resistance in the water. The second approach, on the other hand, uses pumps or similar devices to force the water to circulate. Both designs share many problems, which affect already built facilities and can be summarized as an unrealistic feeling, unwanted turbulent flow and a bad velocity profile. Therefore, the goal was to design a new rowing simulator that would provide nature-like conditions for the racers and an unmatched experience. In order to accomplish this challenge, it was decided to use in-depth numerical modeling to solve the hydraulic problems. The general measures for the design were taken in accordance with the space available in the simulator's housing. The entire research was coordinated with other stages of the construction using BIM. The detailed geometry was designed using a numerical model in Ansys Fluent and parametric auto-optimization tools, which led to minimal negative hydraulic phenomena and decreased investment and operational costs due to the decreased hydraulic losses in the system.
eShopper modeling and simulation
Petrushin, Valery A.
2001-03-01
The advent of e-commerce gives an opportunity to shift the paradigm of customer communication into a highly interactive mode. The new generation of commercial Web servers, such as the Blue Martini's server, combines the collection of data on a customer behavior with real-time processing and dynamic tailoring of a feedback page. The new opportunities for direct product marketing and cross selling are arriving. The key problem is what kind of information do we need to achieve these goals, or in other words, how do we model the customer? The paper is devoted to customer modeling and simulation. The focus is on modeling an individual customer. The model is based on the customer's transaction data, click stream data, and demographics. The model includes the hierarchical profile of a customer's preferences to different types of products and brands; consumption models for the different types of products; the current focus, trends, and stochastic models for time intervals between purchases; product affinity models; and some generalized features, such as purchasing power, sensitivity to advertising, price sensitivity, etc. This type of model is used for predicting the date of the next visit, overall spending, and spending for different types of products and brands. For some type of stores (for example, a supermarket) and stable customers, it is possible to forecast the shopping lists rather accurately. The forecasting techniques are discussed. The forecasting results can be used for on- line direct marketing, customer retention, and inventory management. The customer model can also be used as a generative model for simulating the customer's purchasing behavior in different situations and for estimating customer's features.
Acoustic performance of industrial mufflers with CAE modeling and simulation
Directory of Open Access Journals (Sweden)
Jeon Soohong
2014-12-01
This paper investigates the noise transmission performance of industrial mufflers widely used in ships based on CAE modeling and simulation. Since industrial mufflers have very complicated internal structures, the conventional Transfer Matrix Method (TMM) is of limited use. CAE modeling and simulation is therefore required, incorporating the commercial software packages CATIA for geometry modeling, MSC/PATRAN for FE meshing and LMS/SYSNOISE for analysis. The main difficulties in this study arise from the complicated arrangement of reactive elements, perforated walls and absorption materials. The reactive elements and absorbent materials are modeled by applying boundary conditions given by impedance. The perforated walls are modeled by applying the transfer impedance on the duplicated node mesh. The CAE approach presented in this paper is verified by comparison with the theoretical solution of a concentric-tube resonator and is applied to industrial mufflers.
A framework for testing and comparing binaural models.
Dietz, Mathias; Lestang, Jean-Hugues; Majdak, Piotr; Stern, Richard M; Marquardt, Torsten; Ewert, Stephan D; Hartmann, William M; Goodman, Dan F M
2018-03-01
Auditory research has a rich history of combining experimental evidence with computational simulations of auditory processing in order to deepen our theoretical understanding of how sound is processed in the ears and in the brain. Despite significant progress in the amount of detail and breadth covered by auditory models, for many components of the auditory pathway there are still different model approaches that are often not equivalent but rather in conflict with each other. Similarly, some experimental studies yield conflicting results which has led to controversies. This can be best resolved by a systematic comparison of multiple experimental data sets and model approaches. Binaural processing is a prominent example of how the development of quantitative theories can advance our understanding of the phenomena, but there remain several unresolved questions for which competing model approaches exist. This article discusses a number of current unresolved or disputed issues in binaural modelling, as well as some of the significant challenges in comparing binaural models with each other and with the experimental data. We introduce an auditory model framework, which we believe can become a useful infrastructure for resolving some of the current controversies. It operates models over the same paradigms that are used experimentally. The core of the proposed framework is an interface that connects three components irrespective of their underlying programming language: The experiment software, an auditory pathway model, and task-dependent decision stages called artificial observers that provide the same output format as the test subject. Copyright © 2017 Elsevier B.V. All rights reserved.
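The three-component interface described above (experiment software, auditory pathway model, task-dependent artificial observer) can be sketched in a few lines. The names and the toy two-alternative decision rule below are hypothetical illustrations, not the framework's actual API:

```python
class ArtificialObserver:
    """Task-dependent decision stage: maps model output to a response
    in the same format a human test subject would give."""

    def __init__(self, decide):
        self.decide = decide


def run_trial(stimulus, pathway_model, observer):
    """Minimal interface sketch: the experiment software produces a
    stimulus, the auditory pathway model processes it, and the
    artificial observer returns a subject-style response."""
    internal_representation = pathway_model(stimulus)
    return observer.decide(internal_representation)


# Toy example: a lateralization task where the stimulus is a pair of
# left/right levels, the "model" is a pass-through, and the observer
# reports the louder side.
louder_side = ArtificialObserver(
    lambda rep: "right" if rep[1] > rep[0] else "left")
```

Because the observer emits the same output format as a test subject, the same experiment software can drive either a human listener or a model, which is the comparison the framework is designed to enable.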
Simulation models in population breast cancer screening: A systematic review.
Koleva-Kolarova, Rositsa G; Zhan, Zhuozhao; Greuter, Marcel J W; Feenstra, Talitha L; De Bock, Geertruida H
2015-08-01
The aim of this review was to critically evaluate published simulation models for breast cancer screening of the general population and provide a direction for future modeling. A systematic literature search was performed to identify simulation models with more than one application. A framework for qualitative assessment was developed which incorporated model type; input parameters; modeling approach; transparency of input data sources/assumptions; sensitivity analyses and risk of bias; validation; and outcomes. Predicted mortality reduction (MR) and cost-effectiveness (CE) were compared to estimates from meta-analyses of randomized controlled trials (RCTs) and acceptability thresholds. Seven original simulation models were distinguished, all sharing common input parameters. The modeling approach was based on tumor progression (except in one model), with internal and cross validation of the resulting models, but without any external validation. Differences in lead times for invasive or non-invasive tumors, and the option for cancers not to progress, were not explicitly modeled. The models tended to overestimate the MR due to screening (11-24%) as compared to the 10% MR (95% CI: -2-21%) estimated from optimal RCTs. Only recently have potential harms of regular breast cancer screening been reported. Most scenarios resulted in acceptable cost-effectiveness estimates given current thresholds. The selected models have been repeatedly applied in various settings to inform decision making, and the critical analysis revealed a high risk of bias in their outcomes. Given the importance of the models, there is a need for externally validated models which use systematic evidence for input data to allow for a more critical evaluation of breast cancer screening. Copyright © 2015 Elsevier Ltd. All rights reserved.
Comparing headphone and speaker effects on simulated driving.
Nelson, T M; Nilsson, T H
1990-12-01
Twelve persons drove for three hours in an automobile simulator while listening to music at a sound level of 63 dB, over stereo headphones during one session and from a dashboard speaker during another session. They were required to steer a mountain highway, maintain a certain indicated speed, shift gears, and respond to occasional hazards. Steering and speed control were dependent on visual cues; the need to shift and the hazards were indicated by sound and vibration effects. With the headphones, the driver's average reaction time for the most complex task presented, shifting gears, was about one-third of a second longer than with the speaker. The use of headphones did not delay the development of subjective fatigue.
A collision model in plasma particle simulations
International Nuclear Information System (INIS)
Ma Yanyun; Chang Wenwei; Yin Yan; Yue Zongwu; Cao Lihua; Liu Daqing
2000-01-01
In order to offset the collisional effects introduced by using finite-size particles, β particle clouds are used in particle simulation codes (β is the ratio of the charge or mass of the modeling particles to that of the real ones). The method of impulse approximation (straight-line orbit approximation) is used to analyze the scattering cross section of β-particle-cloud plasmas. This yields the relation between the values of a and β and the scattering cross section (a is the radius of a β particle cloud). Using this relation, the values of a and β can be determined so that the collisional effects of the modeling system correspond to those of the real one. The values of a and β can also be adjusted to artificially enhance or reduce the collisional effects. The results of the simulations are in good agreement with the theoretical ones.
Macro Level Simulation Model Of Space Shuttle Processing
2000-01-01
The contents include: 1) Space Shuttle Processing Simulation Model; 2) Knowledge Acquisition; 3) Simulation Input Analysis; 4) Model Applications in Current Shuttle Environment; and 5) Model Applications for Future Reusable Launch Vehicles (RLV's). This paper is presented in viewgraph form.
High-Fidelity Roadway Modeling and Simulation
Wang, Jie; Papelis, Yiannis; Shen, Yuzhong; Unal, Ozhan; Cetin, Mecit
2010-01-01
Roads are an essential feature in our daily lives. With the advances in computing technologies, 2D and 3D road models are employed in many applications, such as computer games and virtual environments. Traditional road models were generated manually by professional artists using modeling software tools such as Maya and 3ds Max. This approach requires both highly specialized, sophisticated skills and massive manual labor. Automatic road generation based on procedural modeling can create road models using specially designed computer algorithms or procedures, dramatically reducing the tedious manual editing needed for road modeling. But most existing procedural modeling methods for road generation put emphasis on the visual effects of the generated roads, not their geometrical and architectural fidelity. This limitation seriously restricts the applicability of the generated road models. To address this problem, this paper proposes a high-fidelity roadway generation method that takes into account road design principles practiced by civil engineering professionals; as a result, the generated roads can support not only general applications such as games and simulations, in which roads are used as 3D assets, but also demanding civil engineering applications, which require accurate geometrical models of roads. The inputs to the proposed method include road specifications, civil engineering road design rules, terrain information, and the surrounding environment. The proposed method then generates, in real time, 3D roads that have both high visual and geometrical fidelity. This paper discusses in detail the procedures that convert 2D roads specified in shape files into 3D roads, as well as the civil engineering road design principles involved. The proposed method can be used in many applications that have stringent requirements on high-precision 3D models, such as driving simulations and road design prototyping. Preliminary results demonstrate the effectiveness of the proposed method.
Difficulties with True Interoperability in Modeling & Simulation
2011-12-01
Standards in M&S cover multiple layers of technical abstraction. There are middleware specifications, such as the High Level Architecture (HLA) (IEEE Std 1516-2010, IEEE Standard for Modeling and Simulation (M&S) High Level Architecture (HLA) – Framework and Rules) ... using different communication protocols being able to allow data ...
Agent Based Modelling for Social Simulation
Smit, S.K.; Ubink, E.M.; Vecht, B. van der; Langley, D.J.
2013-01-01
This document is the result of an exploratory project looking into the status of, and opportunities for Agent Based Modelling (ABM) at TNO. The project focussed on ABM applications containing social interactions and human factors, which we termed ABM for social simulation (ABM4SS). During the course of this project two workshops were organized. At these workshops, a wide range of experts, both ABM experts and domain experts, worked on several potential applications of ABM. The results and ins...
Bayesian models for comparative analysis integrating phylogenetic uncertainty
Directory of Open Access Journals (Sweden)
Villemereuil Pierre de
2012-06-01
Abstract. Background: Uncertainty in comparative analyses can come from at least two sources: (a) phylogenetic uncertainty in the tree topology or branch lengths, and (b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods: We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results: We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions: Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for ...
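The core idea above, replacing a single consensus tree with a posterior set of trees and integrating estimates over that set, can be illustrated outside BUGS with a small numpy sketch. Everything numeric here is an illustrative assumption (a 20-taxon toy phylogenetic covariance and random perturbations standing in for a posterior tree set); the point is only that a phylogenetic generalized-least-squares estimate is averaged over many candidate trees rather than computed from one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20  # number of taxa (toy size)

def pgls(X, y, C):
    # generalized least squares under phylogenetic covariance C:
    # beta = (X' C^-1 X)^-1 X' C^-1 y
    Ci = np.linalg.inv(C)
    XtCi = X.T @ Ci
    return np.linalg.solve(XtCi @ X, XtCi @ y)

# hypothetical "posterior tree set": a base covariance plus perturbations
base = np.full((n, n), 0.3) + 0.7 * np.eye(n)
trees = [base + np.diag(rng.uniform(0.0, 0.1, n)) for _ in range(100)]

# simulated trait data with known regression coefficients
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 0.5])
y = X @ beta_true + rng.multivariate_normal(np.zeros(n), base)

# integrate over phylogenetic uncertainty by averaging over the tree set
betas = np.array([pgls(X, y, C) for C in trees])
print(betas.mean(axis=0), betas.std(axis=0))
```

The spread of `betas` across trees is the extra variance component a single consensus tree would hide; a full Bayesian treatment samples the tree index inside the MCMC instead of averaging afterwards.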
Modelling interplanetary CMEs using magnetohydrodynamic simulations
Directory of Open Access Journals (Sweden)
P. J. Cargill
The dynamics of Interplanetary Coronal Mass Ejections (ICMEs) are discussed from the viewpoint of numerical modelling. Hydrodynamic models are shown to give a good zero-order picture of the plasma properties of ICMEs, but they cannot model the important magnetic field effects. Results from MHD simulations are shown for a number of cases of interest. It is demonstrated that the strong interaction of the ICME with the solar wind leads to the ICME and solar wind velocities being close to each other at 1 AU, despite their having very different speeds near the Sun. It is also pointed out that this interaction leads to a distortion of the ICME geometry, making cylindrical symmetry a dubious assumption for the CME field at 1 AU. In the presence of a significant solar wind magnetic field, the magnetic fields of the ICME and solar wind can reconnect with each other, leading to an ICME that has solar wind-like field lines. This effect is especially important when an ICME with the right sense of rotation propagates down the heliospheric current sheet. It is also noted that a lack of knowledge of the coronal magnetic field makes such simulations of little use in space weather forecasts that require knowledge of the ICME magnetic field strength.
Key words. Interplanetary physics (interplanetary magnetic fields) – Solar physics, astrophysics, and astronomy (flares and mass ejections) – Space plasma physics (numerical simulation studies)
Interactive Modelling and Simulation of Human Motion
DEFF Research Database (Denmark)
Engell-Nørregård, Morten Pol
Danish summary (translated): This PhD thesis deals with the modelling and simulation of human motion. The topics in the thesis have at least two things in common. First, they deal with human motion: although the models developed can also be used for other purposes, the primary focus is on modelling the human body. Second, they all use simulation as a tool for synthesizing motion and thereby creating animations. This is an important point, since it means we are not only creating tools that animators can use to make ... The contributions include a model of human joints that exhibits both non-convexity and multiple degrees of freedom, and a general, versatile model for the activation of soft bodies; the latter can be used as an animation tool, but is equally well suited to simulating human muscles, since it satisfies the fundamental physical principles ...
MODELING AND SIMULATION OF A HYDROCRACKING UNIT
Directory of Open Access Journals (Sweden)
HASSAN A. FARAG
2016-06-01
Hydrocracking is used in the petroleum industry to convert low-quality feedstocks into high-value transportation fuels such as gasoline, diesel, and jet fuel. The aim of the present work is to develop a rigorous steady-state two-dimensional mathematical model, including conservation equations of mass and energy, for simulating the operation of a hydrocracking unit. Both the catalyst bed and the quench zone have been included in this integrated model. The model equations were numerically solved in both axial and radial directions using Matlab software. The model was tested against real plant data from Egypt. The results indicated very good agreement between the model predictions and industrial values for temperature profiles, concentration profiles, and conversion in both radial and axial directions of the hydrocracking unit. Simulation of the conversion and temperature profiles in the quench zone was also included and showed low deviation from the actual values. In the concentration profiles, the percentage deviation was found to be 9.28% for the first reactor and 9.6% for the second. The effect of several parameters has been included in this study: pellet heat transfer coefficient, effective radial thermal conductivity, wall heat transfer coefficient, effective radial diffusivity, and the cooling medium (quench zone). Varying the wall heat transfer coefficient and the effective radial diffusivity for the near-wall region gave no remarkable changes in the temperature profiles. On the other hand, even small variations of the effective radial thermal conductivity affected the simulated temperature profiles significantly, and this effect could not be compensated by variations of the other model parameters.
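The paper's two-dimensional mass-and-energy model is far beyond a short sketch, but the basic numerical idea, marching the conservation equations along the bed axis, can be shown for the simplest case: first-order consumption of feed in plug flow. The rate constant, velocity, and bed length below are invented for illustration and are not the paper's values.

```python
import numpy as np

# first-order consumption of feed along the catalyst bed (axial only):
# steady-state plug flow balance  u dc/dz = -k c
k = 0.2          # 1/s, hypothetical rate constant
u = 0.1          # m/s, hypothetical superficial velocity
L = 2.0          # m, hypothetical bed length
n = 2000
dz = L / n

c = np.empty(n + 1)
c[0] = 1.0       # normalized inlet concentration
for i in range(n):
    # explicit marching in z (forward Euler on the axial coordinate)
    c[i + 1] = c[i] - dz * (k / u) * c[i]

conversion = 1.0 - c[-1] / c[0]
print(f"outlet conversion: {conversion:.4f}")
```

For this linear case the exact answer is `1 - exp(-kL/u)`, which the marching scheme reproduces closely; the paper's model adds radial discretization, an energy balance, and the quench zone on top of the same marching structure.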
On Improving 4-km Mesoscale Model Simulations
Deng, Aijun; Stauffer, David R.
2006-03-01
A previous study showed that use of analysis-nudging four-dimensional data assimilation (FDDA) and improved physics in the fifth-generation Pennsylvania State University National Center for Atmospheric Research Mesoscale Model (MM5) produced the best overall performance on a 12-km-domain simulation, based on the 18–19 September 1983 Cross-Appalachian Tracer Experiment (CAPTEX) case. However, reducing the simulated grid length to 4 km had detrimental effects. The primary cause was likely the explicit representation of convection accompanying a cold-frontal system. Because no convective parameterization scheme (CPS) was used, the convective updrafts were forced on coarser-than-realistic scales, and the rainfall and the atmospheric response to the convection were too strong. The evaporative cooling and downdrafts were too vigorous, causing widespread disruption of the low-level winds and spurious advection of the simulated tracer. In this study, a series of experiments was designed to address this general problem involving 4-km model precipitation and gridpoint storms and associated model sensitivities to the use of FDDA, planetary boundary layer (PBL) turbulence physics, grid-explicit microphysics, a CPS, and enhanced horizontal diffusion. Some of the conclusions include the following: 1) Enhanced parameterized vertical mixing in the turbulent kinetic energy (TKE) turbulence scheme has shown marked improvements in the simulated fields. 2) Use of a CPS on the 4-km grid improved the precipitation and low-level wind results. 3) Use of the Hong and Pan Medium-Range Forecast PBL scheme showed larger model errors within the PBL and a clear tendency to predict much deeper PBL heights than the TKE scheme. 4) Combining observation-nudging FDDA with a CPS produced the best overall simulations. 5) Finer horizontal resolution does not always produce better simulations, especially in convectively unstable environments, and a new CPS suitable for 4-km resolution is needed. 6) ...
Validating clustering of molecular dynamics simulations using polymer models
Directory of Open Access Journals (Sweden)
Phillips Joshua L
2011-11-01
Abstract. Background: Molecular dynamics (MD) simulation is a powerful technique for sampling the meta-stable and transitional conformations of proteins and other biomolecules. Computational data clustering has emerged as a useful, automated technique for extracting conformational states from MD simulation data. Despite extensive application, relatively little work has been done to determine if the clustering algorithms are actually extracting useful information. A primary goal of this paper therefore is to provide such an understanding through a detailed analysis of data clustering applied to a series of increasingly complex biopolymer models. Results: We develop a novel series of models using basic polymer theory that have intuitive, clearly defined dynamics and exhibit the essential properties that we are seeking to identify in MD simulations of real biomolecules. We then apply spectral clustering, an algorithm particularly well suited for clustering polymer structures, to our models and to MD simulations of several intrinsically disordered proteins. Clustering results for the polymer models provide clear evidence that the meta-stable and transitional conformations are detected by the algorithm. The results for the polymer models also help guide the analysis of the disordered protein simulations by comparing and contrasting the statistical properties of the extracted clusters. Conclusions: We have developed a framework for validating the performance and utility of clustering algorithms for studying molecular biopolymer simulations that utilizes several analytic and dynamic polymer models which exhibit well-behaved dynamics, including meta-stable states, transition states, helical structures, and stochastic dynamics. We show that spectral clustering is robust to anomalies introduced by structural alignment and that different structural classes of intrinsically disordered proteins can be reliably discriminated from the clustering results. To our ...
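A minimal sketch of spectral clustering in the spirit described above, using synthetic "conformations" rather than real MD data: two well-separated clouds in a low-dimensional collective-variable space stand in for two meta-stable states, and a Gaussian affinity stands in for a structural-similarity kernel such as pairwise RMSD. All numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# toy "conformations": two meta-stable states in a 3-D collective-variable space
a = rng.normal(loc=0.0, scale=0.3, size=(30, 3))
b = rng.normal(loc=3.0, scale=0.3, size=(30, 3))
X = np.vstack([a, b])

# Gaussian affinity from pairwise squared distances
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 2.0)

# normalized graph Laplacian; for two well-separated states the sign of the
# second-smallest eigenvector (the Fiedler vector) separates the clusters
deg = W.sum(axis=1)
Lap = np.eye(len(X)) - W / np.sqrt(np.outer(deg, deg))
vals, vecs = np.linalg.eigh(Lap)          # eigenvalues in ascending order
labels = (vecs[:, 1] > 0).astype(int)
print(labels)
```

Real applications replace the sign split with k-means on several eigenvectors, which handles more than two states; the validation question the paper raises is whether those extracted clusters correspond to genuine meta-stable and transitional conformations.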
Martinez, Guadalupe; Naranjo, Francisco L.; Perez, Angel L.; Suero, Maria Isabel; Pardo, Pedro J.
2011-01-01
This study compared the educational effects of computer simulations developed in a hyper-realistic virtual environment with the educational effects of either traditional schematic simulations or a traditional optics laboratory. The virtual environment was constructed on the basis of Java applets complemented with a photorealistic visual output.…
Reactive transport models and simulation with ALLIANCES
International Nuclear Information System (INIS)
Leterrier, N.; Deville, E.; Bary, B.; Trotignon, L.; Hedde, T.; Cochepin, B.; Stora, E.
2009-01-01
Many chemical processes influence the evolution of nuclear waste storage. As a result, simulations based only upon transport and hydraulic processes fail to describe some industrial scenarios adequately. We need to take into account complex chemical models (mass action laws, kinetics...) which are highly non-linear. In order to simulate the coupling of these chemical reactions with transport, we use a classical Sequential Iterative Approach (SIA), with a fixed-point algorithm, within the framework of the ALLIANCES platform. This approach allows us to use the various transport and chemical modules available in ALLIANCES, via an operator-splitting method based upon the structure of the chemical system. We present five different applications of reactive transport simulations in the context of nuclear waste storage: 1. A 2D simulation of the leaching by rainwater of an underground polluted zone rich in uranium oxide; 2. The degradation of the steel envelope of a package in contact with clay. Corrosion of the steel creates corrosion products and the altered package becomes a porous medium. We follow the degradation front through kinetic reactions and the coupling with transport; 3. The degradation of a cement-based material by the injection of an aqueous solution of zinc and sulphate ions. In addition to the reactive transport coupling, we take into account in this case the hydraulic feedback of the porosity variation on the Darcy velocity; 4. The decalcification of a concrete beam in an underground storage structure. In this case, in addition to the reactive transport simulation, we take into account the interaction between chemical degradation and the mechanical forces (cracks...), and the feedback of these structural changes on transport; 5. The degradation of the steel envelope of a package in contact with a clay material under a temperature gradient. In this case the reactive transport simulation is entirely directed by the temperature changes and
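The Sequential Iterative Approach with a fixed-point algorithm mentioned above can be sketched for the simplest possible chemistry, linear sorption equilibrium (s = Kd·c): within one time step, a transport operator acting on the aqueous phase and a chemistry step restoring equilibrium are alternated until the coupled solution stops changing. This is an illustrative toy with invented parameters, not the ALLIANCES implementation.

```python
import numpy as np

# one time step of a Sequential Iterative Approach (SIA) for reactive transport
n, dx, dt, v, Kd = 100, 1.0, 0.5, 1.0, 0.4

c_old = np.zeros(n); c_old[0] = 1.0   # aqueous concentration at start of step
s_old = Kd * c_old                     # sorbed concentration (in equilibrium)

def transport(c):
    # explicit upwind advection of the aqueous phase (CFL = v*dt/dx = 0.5)
    out = c.copy()
    out[1:] -= v * dt / dx * (c[1:] - c[:-1])
    return out

c_new = c_old.copy()
for k in range(100):                   # fixed-point iteration
    # transport step acts on the current aqueous iterate
    total = transport(c_new) + s_old
    # chemistry step re-partitions total mass between water and solid
    c_next = total / (1.0 + Kd)
    if np.max(np.abs(c_next - c_new)) < 1e-12:
        break                          # fixed point reached: steps consistent
    c_new = c_next
s_new = Kd * c_next
print("converged after", k, "iterations")
```

With nonlinear mass-action chemistry the re-partitioning step becomes a local equilibrium solve per cell, but the outer structure, transport and chemistry alternated to a fixed point, is the same.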
Modeling, simulation, and experiments of coating growth on nanofibers
International Nuclear Information System (INIS)
Clemons, C. B.; Hamrick, P.; Heminger, J.; Kreider, K. L.; Young, G. W.; Buldum, A.; Evans, E.; Zhang, G.
2008-01-01
This work is a comparison of modeling and simulation results with experiments for an integrated experimental/modeling investigation of a procedure to coat nanofibers and core-clad nanostructures with thin film materials using plasma enhanced physical vapor deposition. In the experimental effort, electrospun polymer nanofibers are coated with metallic materials under different operating conditions to observe changes in the coating morphology. The modeling effort focuses on linking simple models at the reactor level, nanofiber level and atomic level to form a comprehensive model. The comprehensive model leads to the definition of an evolution equation for the coating free surface around an isolated nanofiber. This evolution equation was previously derived and solved under conditions of a nearly circular coating, with a concentration field that was only radially dependent and that was independent of the location of the coating free surface. These assumptions permitted the development of analytical expressions for the concentration field. The present work does not impose the above-mentioned conditions and considers numerical simulations of the concentration field that couple with level set simulations of the evolution equation for the coating free surface. Further, the cases of coating an isolated fiber as well as a multiple fiber mat are considered. Simulation results are compared with experimental results as the reactor pressure and power, as well as the nanofiber mat porosity, are varied
Computer Models Simulate Fine Particle Dispersion
2010-01-01
Through a NASA Seed Fund partnership with DEM Solutions Inc., of Lebanon, New Hampshire, scientists at Kennedy Space Center refined existing software to study the electrostatic phenomena of granular and bulk materials as they apply to planetary surfaces. The software, EDEM, allows users to import particles and obtain accurate representations of their shapes for modeling purposes, such as simulating bulk solids behavior, and was enhanced to be able to more accurately model fine, abrasive, cohesive particles. These new EDEM capabilities can be applied in many industries unrelated to space exploration and have been adopted by several prominent U.S. companies, including John Deere, Pfizer, and Procter & Gamble.
Consolidation modelling for thermoplastic composites forming simulation
Xiong, H.; Rusanov, A.; Hamila, N.; Boisse, P.
2016-10-01
Pre-impregnated thermoplastic composites are widely used in the aerospace industry for their excellent mechanical properties. Thermoforming thermoplastic prepregs is a fast manufacturing process, and the automotive industry has shown increasing interest in it; reconsolidation is an essential stage of this process. The intimate-contact model is investigated as the consolidation model, and compression experiments were carried out to identify the material parameters. Several numerical tests show the influence of the temperature and pressure applied during processing. Finally, a new solid-shell prismatic element is presented for the simulation of the consolidation step in the thermoplastic composite forming process.
Quantification of uncertainties of modeling and simulation
International Nuclear Information System (INIS)
Ma Zhibo; Yin Jianwei
2012-01-01
The principles of Modeling and Simulation (M&S) are interpreted through a functional relation, from which the total uncertainties of M&S are identified and sorted into three parts that vary with the conceptual models' parameters. Following the idea of verification and validation, the parameter space is partitioned into verified and applied domains; uncertainties in the verified domain are quantified by comparison between numerical and standard results, and those in the applied domain are quantified by a newly developed extrapolation method. Examples are presented to demonstrate the ideas, which aim to build a framework for quantifying the uncertainties of M&S. (authors)
Simulation models generator. Applications in scheduling
Directory of Open Access Journals (Sweden)
Omar Danilo Castrillón
2013-08-01
Rev. Mate. Teor. Aplic. (ISSN 1409-2433), Vol. 20(2): 231–241, July 2013. The generator builds simulation models that approximate reality so that decisions can be evaluated and taken more assertively. To test the prototype, a production system with 9 machines and 5 jobs in a job-shop configuration was used as the modeling example, with stochastic processing times and machine stops, measuring machine utilization rates and the average time of jobs in the system as performance measures. This test shows the soundness of the prototype, saving the user the building of the simulation model.
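The kind of test case described, 9 machines, 5 jobs, with machine utilization and average job time as performance measures, can be approximated in a short sketch. For brevity this simplification routes every job through the machines in the same order (a flow shop rather than a true job shop) and invents the processing-time distribution; it is not the paper's generated model.

```python
import random

random.seed(3)
n_jobs, n_machines = 5, 9
# stochastic processing times for each (job, machine) operation, minutes
proc = [[random.uniform(2, 10) for _ in range(n_machines)] for _ in range(n_jobs)]

machine_free = [0.0] * n_machines   # time each machine next becomes free
job_time = [0.0] * n_jobs           # completion time of each job so far
busy = [0.0] * n_machines           # accumulated processing time per machine

for j in range(n_jobs):
    for m in range(n_machines):     # flow-shop routing: machines 0..8 in order
        start = max(job_time[j], machine_free[m])
        end = start + proc[j][m]
        busy[m] += proc[j][m]
        machine_free[m] = end
        job_time[j] = end

makespan = max(job_time)
utilization = [b / makespan for b in busy]          # rate of use per machine
mean_flow_time = sum(job_time) / n_jobs             # average job time in system
print(round(makespan, 1), [round(u, 2) for u in utilization])
```

A model generator in the paper's sense would emit this kind of schedule-evaluation code automatically from a declarative description of machines, routings, and stochastic distributions.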
Modeling and simulation of reactive flows
Bortoli, De AL; Pereira, Felipe
2015-01-01
Modelling and Simulation of Reactive Flows presents information on modeling and how to numerically solve reactive flows. The book offers a distinctive approach that combines diffusion flames and geochemical flow problems, providing users with a comprehensive resource that bridges the gap for scientists, engineers, and the industry. Specifically, the book looks at the basic concepts related to reaction rates, chemical kinetics, and the development of reduced kinetic mechanisms. It considers the most common methods used in practical situations, along with equations for reactive flows, and va
Directory of Open Access Journals (Sweden)
Juan P. Pérez Monsalve
2014-12-01
This work analyzed the relationship between the two main price indicators in the Colombian economy, the IPP and the IPC. For this purpose, we reviewed the theory behind both indexes and then developed a vector autoregressive model, which shows the reaction of each variable to shocks both in itself and in the other, whose impact continues propagating in the long term. Additionally, the work presents a simulation of the VAR model using the Monte Carlo method, verifying the agreement in probability distributions and volatility levels, as well as the existence of correlation over time.
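The kind of VAR analysis described, impulse responses showing how a shock in one variable propagates into the other, plus a Monte Carlo simulation of the system, can be sketched as follows. The coefficient and covariance matrices are invented for illustration; they are not estimates for the IPP/IPC series.

```python
import numpy as np

rng = np.random.default_rng(7)
# hypothetical stable VAR(1) for two price-inflation series
A = np.array([[0.5, 0.2],
              [0.3, 0.4]])           # assumed coefficient matrix
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])       # assumed shock covariance

# impulse response: propagation of a unit shock in variable 1 into both
shock = np.array([1.0, 0.0])
irf = [shock]
for _ in range(11):
    irf.append(A @ irf[-1])          # y_{t+h} response = A^h * shock
irf = np.array(irf)

# Monte Carlo simulation of the VAR with correlated Gaussian shocks
T = 500
chol = np.linalg.cholesky(Sigma)
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + chol @ rng.normal(size=2)

print(np.round(irf[:4], 3))
```

Because the assumed `A` has eigenvalues inside the unit circle, the impulse responses decay, which is the "impact propagating in the long term" pattern the abstract refers to, and the simulated paths can be used to check distributions and volatilities against the analytical model.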
Comparative simulation of Stirling and Sibling cycle cryocoolers with two codes
International Nuclear Information System (INIS)
Mitchell, M.P.; Wilson, K.J.; Bauwens, L.
1989-01-01
The authors present a comparative analysis of Stirling and Sibling Cycle cryocoolers conducted with two different computer simulation codes. One code (CRYOWEISS) performs an initial analysis on the assumption of isothermal conditions in the machines and adjusts that result with decoupled loss calculations. The other code (MS*2) models fluid flows and heat transfers more realistically but ignores significant loss mechanisms, including flow friction and heat conduction through the metal of the machines. Surprisingly, MS*2 is less optimistic about performance of all machines even though it ignores losses that are modelled by CRYOWEISS. Comparison between constant-bore Stirling and Sibling machines shows that their performance is generally comparable over a range of temperatures, pressures and operating speeds. No machine was consistently superior or inferior according to both codes over the whole range of conditions studied
Simulated CONUS Flash Flood Climatologies from Distributed Hydrologic Models
Flamig, Z.; Gourley, J. J.; Vergara, H. J.; Kirstetter, P. E.; Hong, Y.
2016-12-01
This study will describe a CONUS flash flood climatology created over the period from 2002 through 2011. The MRMS reanalysis precipitation dataset was used as forcing into the Ensemble Framework For Flash Flood Forecasting (EF5). This high resolution 1-sq km 5-minute dataset is ideal for simulating flash floods with a distributed hydrologic model. EF5 features multiple water balance components including SAC-SMA, CREST, and a hydrophobic model all coupled with kinematic wave routing. The EF5/SAC-SMA and EF5/CREST water balance schemes were used for the creation of dual flash flood climatologies based on the differing water balance principles. For the period from 2002 through 2011 the daily maximum streamflow, unit streamflow, and time of peak streamflow was stored along with the minimum soil moisture. These variables are used to describe the states of the soils right before a flash flood event and the peak streamflow that was simulated during the flash flood event. The results will be shown, compared and contrasted. The resulting model simulations will be verified on basins less than 1,000-sq km with USGS gauges to ensure the distributed hydrologic models are reliable. The results will also be compared spatially to Storm Data flash flood event observations to judge the degree of agreement between the simulated climatologies and observations.
Hydrodynamic modeling of petroleum reservoirs using simulator MUFITS
Afanasyev, Andrey
2015-04-01
MUFITS is new noncommercial software for numerical modeling of subsurface processes in various applications (www.mufits.imec.msu.ru). To date, the simulator has been used for modeling nonisothermal flows in geothermal reservoirs and for modeling underground carbon dioxide storage. In this work, we present a recent extension of the code to petroleum reservoirs. The simulator can be applied in conventional black-oil modeling, but it also offers more elaborate models for volatile oil and gas condensate reservoirs as well as for oil rim fields. We give a brief overview of the code by describing the internal representation of reservoir models, which are constructed of grid blocks, interfaces, and stock tanks, as well as of pipe segments and pipe junctions for modeling wells and surface networks. For the conventional black-oil approach, we present simulation results for the SPE comparative tests. We propose an accelerated compositional modeling method for sub- and supercritical flows subjected to various phase equilibria, particularly to three-phase equilibria of vapour-liquid-liquid type. The method is based on the calculation of the thermodynamic potential of the reservoir fluid as a function of pressure, total enthalpy and total composition, and on storing its values as a spline table, which is used in hydrodynamic simulation for accelerated prediction of PVT properties. We provide the description of both the spline calculation procedure and the flashing algorithm. We evaluate the thermodynamic potential for a mixture of two pseudo-components modeling the heavy and light hydrocarbon fractions. We develop a technique for converting black-oil PVT tables to the potential, which can be used for in-situ multiphase equilibria prediction of hydrocarbons under sub- and supercritical conditions, particularly in gas condensate and volatile oil reservoirs. We simulate recovery from a reservoir subject to near-critical initial conditions for the hydrocarbon mixture. We acknowledge ...
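The acceleration idea above, precompute an expensive thermodynamic function once and replace repeated evaluations with table interpolation, can be sketched in one dimension. The "equation of state" below is an invented stand-in; MUFITS tabulates the thermodynamic potential over pressure, total enthalpy, and composition, and uses splines rather than the linear interpolation shown here.

```python
import numpy as np

# hypothetical property: density of a pseudo-fluid as a function of pressure
def rho_eos(p):
    # stand-in equation of state (the expensive call in a real simulator)
    return 700.0 + 4e-6 * p - 2e-14 * p ** 2

# precompute a lookup table once ("spline table" in the spirit of the paper)
p_grid = np.linspace(1e5, 5e7, 2000)     # Pa
rho_grid = rho_eos(p_grid)

def rho_fast(p):
    # table interpolation replaces repeated EOS evaluations inside the
    # hydrodynamic time loop
    return np.interp(p, p_grid, rho_grid)

p_test = np.array([2e6, 1e7, 3e7])
print(rho_fast(p_test), rho_eos(p_test))
```

The trade is memory for speed: the table is built once per fluid description, after which each PVT query in the flow solver is a cheap lookup instead of a flash calculation.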
TMS modeling toolbox for realistic simulation.
Cho, Young Sun; Suh, Hyun Sang; Lee, Won Hee; Kim, Tae-Seong
2010-01-01
Transcranial magnetic stimulation (TMS) is a technique for brain stimulation using rapidly changing magnetic fields generated by coils. It has been established as an effective stimulation technique to treat patients suffering from damaged brain functions. Although TMS is known to be painless and noninvasive, it can also be harmful to the brain through incorrect focusing and excessive stimulation, which might result in seizure. Therefore there is an ongoing research effort to elucidate and better understand the effect and mechanism of TMS. Lately, the boundary element method (BEM) and the finite element method (FEM) have been used to simulate the electromagnetic phenomena of TMS. However, there is a lack of general tools to generate TMS models, owing to the difficulties of realistic modeling of the human head and TMS coils. In this study, we have developed a toolbox through which one can generate high-resolution FE TMS models. The toolbox allows creating FE models of the head with isotropic and anisotropic electrical conductivities in five different tissues of the head, and of the coils in 3D. The generated TMS model is importable to FE software packages such as ANSYS for further and efficient electromagnetic analysis. We present a set of demonstrative results of realistic simulation of TMS with our toolbox.
Zaidi, H
1999-01-01
The many applications of Monte Carlo modelling in nuclear medicine imaging make it desirable to increase the accuracy and computational speed of Monte Carlo codes. The accuracy of Monte Carlo simulations strongly depends on the accuracy of the probability functions and thus on the cross section libraries used for photon transport calculations. A comparison between different photon cross section libraries and parametrizations implemented in Monte Carlo simulation packages developed for positron emission tomography, and the most recent Evaluated Photon Data Library (EPDL97) developed by the Lawrence Livermore National Laboratory, was performed for several human tissues and common detector materials for energies from 1 keV to 1 MeV. Different photon cross section libraries and parametrizations show quite large variations as compared to the EPDL97 coefficients. This latter library is more accurate and was carefully designed in the form of look-up tables providing efficient data storage, access, and management. Toge...
Biomedical Simulation Models of Human Auditory Processes
Bicak, Mehmet M. A.
2012-01-01
Detailed acoustic engineering models explore the noise propagation mechanisms associated with noise attenuation and transmission paths created when using hearing protectors such as earplugs and headsets in high-noise environments. Biomedical finite element (FE) models are developed from volume computed tomography scan data, which provide explicit external ear, ear canal, middle-ear ossicular bone and cochlea geometry. Results from these studies have enabled a greater understanding of hearing-protector-to-flesh dynamics as well as the prioritization of noise propagation mechanisms. Prioritization of noise mechanisms can form an essential framework for the exploration of new design principles and methods in both earplug and earcup applications. These models are currently being used in the development of a novel hearing protection evaluation system that can provide experimentally correlated psychoacoustic noise attenuation. Moreover, these FE models can be used to simulate the effects of blast-related impulse noise on human auditory mechanisms and brain tissue.
COMPARATIVE STUDY OF TERTIARY WASTEWATER TREATMENT BY COMPUTER SIMULATION
Stefania Iordache; Nicolae Petrescu; Cornel Ianache
2010-01-01
The aim of this work is to assess conditions for the implementation of a Biological Nutrient Removal (BNR) process in the Wastewater Treatment Plant (WWTP) of Moreni city (Romania). In order to meet increasingly strict environmental regulations, the wastewater treatment plant studied must update and modernize its current treatment process. A comparative study was undertaken of the quality of effluents that could be obtained by implementation of biological nutrient removal process li...
Material model validation for laser shock peening process simulation
International Nuclear Information System (INIS)
Amarchinta, H K; Grandhi, R V; Langer, K; Stargel, D S
2009-01-01
Advanced mechanical surface enhancement techniques have been used successfully to increase the fatigue life of metallic components. These techniques impart deep compressive residual stresses into the component to counter potentially damage-inducing tensile stresses generated under service loading. Laser shock peening (LSP) is an advanced mechanical surface enhancement technique used predominantly in the aircraft industry. To reduce costs and make the technique available on a large-scale basis for industrial applications, simulation of the LSP process is required. Accurate simulation of the LSP process is a challenging task, because the process has many parameters such as laser spot size, pressure profile and material model that must be precisely determined. This work focuses on investigating the appropriate material model that could be used in simulation and design. In the LSP process the material is subjected to strain rates of 10^6 s^-1, which is very high compared with conventional strain rates. The importance of an accurate material model increases because the material behaves significantly differently at such high strain rates. This work investigates the effect of multiple nonlinear material models for representing the elastic–plastic behavior of materials. Elastic perfectly plastic, Johnson–Cook and Zerilli–Armstrong models are used, and the performance of each model is compared with available experimental results.
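As context for the material models compared above, the Johnson–Cook flow stress combines strain hardening, strain-rate sensitivity and thermal softening multiplicatively. A minimal sketch in Python, with illustrative constants loosely in the range of a structural steel, not the calibration used in the paper:

```python
import math

def johnson_cook_stress(eps_p, eps_dot, T, A=520e6, B=477e6, n=0.52,
                        C=0.025, m=1.0, eps_dot0=1.0,
                        T_room=293.0, T_melt=1793.0):
    """Johnson-Cook flow stress in Pa.

    eps_p: equivalent plastic strain; eps_dot: strain rate (1/s);
    T: temperature (K). All constants are illustrative placeholders,
    not fitted LSP values.
    """
    strain_term = A + B * eps_p ** n                    # strain hardening
    rate_term = 1.0 + C * math.log(eps_dot / eps_dot0)  # rate sensitivity
    T_star = (T - T_room) / (T_melt - T_room)
    temp_term = 1.0 - T_star ** m                       # thermal softening
    return strain_term * rate_term * temp_term
```

At the 10^6 s^-1 strain rates quoted in the abstract, the logarithmic rate term raises the flow stress well above its quasi-static value, which is why the choice of material model matters for LSP simulation.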
Numerical Simulation of Hydrogen Combustion: Global Reaction Model and Validation
Energy Technology Data Exchange (ETDEWEB)
Zhang, Yun [School of Energy and Power Engineering, Xi’an Jiaotong University, Xi’an (China); Department of Mechanical, Aerospace and Nuclear Engineering, Rensselaer Polytechnic Institute, Troy, NY (United States); Liu, Yinhe, E-mail: yinheliu@mail.xjtu.edu.cn [School of Energy and Power Engineering, Xi’an Jiaotong University, Xi’an (China)
2017-11-20
Due to the complexity of modeling the combustion process in nuclear power plants, global mechanisms are preferred for numerical simulation. To quickly perform highly resolved simulations of large-scale hydrogen combustion with limited processing resources, a method based on thermal theory was developed to obtain kinetic parameters of a global reaction mechanism of hydrogen–air combustion over a wide range. The calculated kinetic parameters at lower hydrogen concentration (C_hydrogen < 20%) were validated against the results obtained from experimental measurements in a container and combustion test facility. In addition, the numerical data by the global mechanism (C_hydrogen > 20%) were compared with the results by the detailed mechanism. Good agreement between the model prediction and the experimental data was achieved, and the comparison between simulation results by the detailed mechanism and the global reaction mechanism shows that the present calculated global mechanism has excellent predictive capabilities for a wide range of hydrogen–air mixtures.
Numerical Simulation of Hydrogen Combustion: Global Reaction Model and Validation
International Nuclear Information System (INIS)
Zhang, Yun; Liu, Yinhe
2017-01-01
Due to the complexity of modeling the combustion process in nuclear power plants, global mechanisms are preferred for numerical simulation. To quickly perform highly resolved simulations of large-scale hydrogen combustion with limited processing resources, a method based on thermal theory was developed to obtain kinetic parameters of a global reaction mechanism of hydrogen–air combustion over a wide range. The calculated kinetic parameters at lower hydrogen concentration (C_hydrogen < 20%) were validated against the results obtained from experimental measurements in a container and combustion test facility. In addition, the numerical data by the global mechanism (C_hydrogen > 20%) were compared with the results by the detailed mechanism. Good agreement between the model prediction and the experimental data was achieved, and the comparison between simulation results by the detailed mechanism and the global reaction mechanism shows that the present calculated global mechanism has excellent predictive capabilities for a wide range of hydrogen–air mixtures.
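A one-step global mechanism of the kind discussed above reduces the chemistry to a single Arrhenius rate expression whose parameters are fitted over a range of mixtures. A schematic sketch, where the pre-exponential A, activation energy Ea and reaction orders a, b are placeholders rather than the paper's fitted values:

```python
import math

def global_rate(conc_h2, conc_o2, T, A=1.0e15, Ea=1.3e5, a=1.0, b=0.5):
    """One-step global reaction rate for H2-air combustion:
    w = A * [H2]^a * [O2]^b * exp(-Ea / (R * T)).

    A, Ea (J/mol) and the orders a, b are illustrative placeholders;
    the paper fits such parameters from thermal theory over a range
    of hydrogen concentrations.
    """
    R = 8.314  # universal gas constant, J/(mol K)
    return A * conc_h2 ** a * conc_o2 ** b * math.exp(-Ea / (R * T))
```

The exponential temperature dependence is what makes such a fitted global rate usable across wide mixture ranges while remaining far cheaper than a detailed mechanism.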
Coarse grained model for semiquantitative lipid simulations
Marrink, SJ; de Vries, AH; Mark, AE
2004-01-01
This paper describes the parametrization of a new coarse grained (CG) model for lipid and surfactant systems. Reduction of the number of degrees of freedom together with the use of short range potentials makes it computationally very efficient. Compared to atomistic models a gain of 3-4 orders of
Modeling and simulation of gamma camera
International Nuclear Information System (INIS)
Singh, B.; Kataria, S.K.; Samuel, A.M.
2002-08-01
Simulation techniques play a vital role in the design of sophisticated instruments and also in the training of operating and maintenance staff. Gamma camera systems have been used for functional imaging in nuclear medicine. Functional images are derived from the external counting of a gamma-emitting radioactive tracer that, after introduction into the body, mimics the behavior of a native biochemical compound. The position-sensitive detector yields the coordinates of the gamma ray interaction with the detector, which are used to estimate the point of gamma ray emission within the tracer distribution space. This advanced imaging device is thus dependent on the performance of algorithms for coordinate computing, estimation of the point of emission, generation of the image and display of the image data. Contemporary systems also have protocols for quality control and clinical evaluation of imaging studies. Simulation of this processing leads to understanding of basic camera design problems. This report describes a PC-based package for the design and simulation of a gamma camera, along with options for simulating data acquisition and quality control of imaging studies. Image display and data processing, the other options implemented in SIMCAM, will be described in separate reports (under preparation). Gamma camera modeling and simulation in SIMCAM has preset configurations of the design parameters for various sizes of crystal detector, with the option to pack the PMTs on a hexagonal or square lattice. Different algorithms for computation of coordinates and spatial distortion removal are allowed, in addition to the simulation of the energy correction circuit. The user can simulate different static, dynamic, MUGA and SPECT studies. The acquired/simulated data are processed for quality control and clinical evaluation of the imaging studies. Results show that the program can be used to assess these performances. Also the variations in performance parameters can be assessed due to the induced
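The coordinate-computing step that SIMCAM simulates is, in the classic gamma camera design, Anger logic: the interaction position is estimated as the signal-weighted centroid of the photomultiplier tube positions. A minimal sketch, with a hypothetical PMT layout and signal values:

```python
def anger_position(pmt_xy, signals):
    """Anger-logic position estimate: the gamma interaction coordinate
    is the centroid of PMT positions weighted by each tube's signal.

    pmt_xy: list of (x, y) tube centers; signals: matching amplitudes.
    """
    total = sum(signals)
    x = sum(px * s for (px, py), s in zip(pmt_xy, signals)) / total
    y = sum(py * s for (px, py), s in zip(pmt_xy, signals)) / total
    return x, y
```

Spatial distortion removal, also simulated in SIMCAM, corrects the systematic bias this simple centroid produces near the crystal edges.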
A Comparative Study on the Refueling Simulation Method for a CANDU Reactor
Energy Technology Data Exchange (ETDEWEB)
Do, Quang Binh; Choi, Hang Bok; Roh, Gyu Hong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
2006-07-01
The Canada deuterium uranium (CANDU) reactor calculation is typically performed by the RFSP code to obtain the power distribution upon a refueling. In order to assess the equilibrium behavior of the CANDU reactor, a few methods were suggested for selecting the refueling channel. For example, an automatic refueling channel selection method (AUTOREFUEL) and a deterministic method (GENOVA) were developed, based on reactor operation experience and the generalized perturbation theory, respectively. Both programs were designed to keep the zone controller unit (ZCU) water level within a reasonable range during a continuous refueling simulation. However, a global optimization of the refueling simulation, which includes constraints on the discharge burn-up, maximum channel power (MCP), maximum bundle power (MBP), channel power peaking factor (CPPF) and the ZCU water level, was not achieved. In this study, an evolutionary algorithm, a hybrid method based on the genetic algorithm, the elitism strategy and heuristic rules for a multi-cycle, multi-objective optimization of the refueling simulation, has been developed for the CANDU reactor. This paper presents the optimization model of the genetic algorithm and compares the results with those obtained by other simulation methods.
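The hybrid evolutionary approach described above builds on a standard genetic-algorithm loop with elitism. A minimal, schematic sketch of that loop follows; the bit-string individuals and truncation selection are stand-ins, not the actual CANDU refueling encoding:

```python
import random

def evolve(fitness, pop, n_gen=30, elite=2, p_mut=0.1):
    """Minimal genetic algorithm with elitism (schematic, not the
    CANDU refueling tool). Individuals are lists of 0/1 genes;
    fitness maps an individual to a float to be maximized."""
    for _ in range(n_gen):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:elite]                            # elitism: keep the best
        while len(next_pop) < len(pop):
            a, b = random.sample(pop[:len(pop) // 2], 2)  # truncation selection
            cut = random.randrange(1, len(a))             # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < p_mut) for g in child]  # bit-flip mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)
```

Because the elite individuals are carried over unchanged, the best fitness in the population never decreases between generations, which is the property that makes elitism attractive for expensive multi-objective refueling searches.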
Water Hammer Modelling and Simulation by GIS
Directory of Open Access Journals (Sweden)
K. Hariri Asli
2012-01-01
This work defined an Eulerian-based computational model and compared it with a regression of the relationship between the dependent and independent variables for the water hammer surge wave in a transmission pipeline. The work also addressed control of Unaccounted-for-Water (UFW) based on the Geographic Information System (GIS) for the water transmission pipeline. The experimental results of the laboratory model and the field test results showed the validity of the predictions achieved by the computational model.
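For scale, the classical Joukowsky relation gives the first-order surge pressure rise that water hammer analyses such as the one above must reproduce: dP = rho * c * dv for an instantaneous velocity change. A minimal sketch, with illustrative values:

```python
def joukowsky_surge(rho, wave_speed, delta_v):
    """Joukowsky estimate of the water hammer pressure rise (Pa):
    dP = rho * c * dv for an instantaneous velocity change.

    rho: fluid density (kg/m^3); wave_speed: pressure wave speed c (m/s);
    delta_v: velocity change (m/s). Values used in examples are
    illustrative, not from the paper's field tests.
    """
    return rho * wave_speed * delta_v
```

For water (rho of about 1000 kg/m^3) and a typical wave speed around 1200 m/s, instantly stopping a 1 m/s flow yields roughly 1.2 MPa (12 bar) of surge, which illustrates why transient modelling matters for pipeline integrity.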
Simulation of MILD combustion using Perfectly Stirred Reactor model
Chen, Z.
2016-07-06
A simple model based on a Perfectly Stirred Reactor (PSR) is proposed for moderate or intense low-oxygen dilution (MILD) combustion. The PSR calculation is performed covering the entire flammability range and the tabulated chemistry approach is used with a presumed joint probability density function (PDF). The jet in hot and diluted coflow experimental set-up under MILD conditions is simulated using this reactor model for two oxygen dilution levels. The computed results for mean temperature, major and minor species mass fractions are compared with the experimental data and simulation results obtained recently using a multi-environment transported PDF approach. Overall, a good agreement is observed at three different axial locations for these comparisons despite the over-predicted peak value of CO formation. This suggests that MILD combustion can be effectively modelled by the proposed PSR model with lower computational cost.
Aero-Acoustic Modelling using Large Eddy Simulation
International Nuclear Information System (INIS)
Shen, W Z; Soerensen, J N
2007-01-01
The splitting technique for aero-acoustic computations is extended to simulate three-dimensional flow and acoustic waves from airfoils. The aero-acoustic model is coupled to a sub-grid-scale turbulence model for Large-Eddy Simulations. In the first test case, the model is applied to compute laminar flow past a NACA 0015 airfoil at a Reynolds number of 800, a Mach number of 0.2 and an angle of attack of 20 deg. The model is then applied to compute turbulent flow past a NACA 0015 airfoil at a Reynolds number of 100 000, a Mach number of 0.2 and an angle of attack of 20 deg. The predicted noise spectrum is compared to experimental data
Hybrid Building Performance Simulation Models for Industrial Energy Efficiency Applications
Directory of Open Access Journals (Sweden)
Peter Smolek
2018-06-01
In the challenge of achieving environmental sustainability, industrial production plants, as large contributors to the overall energy demand of a country, are prime candidates for applying energy efficiency measures. A modelling approach using cubes is used to decompose a production facility into manageable modules. All aspects of the facility are considered, classified into the building, energy system, production and logistics. This approach leads to specific challenges for building performance simulations since all parts of the facility are highly interconnected. To meet this challenge, models for the building, thermal zones, energy converters and energy grids are presented and the interfaces to the production and logistics equipment are illustrated. The advantages and limitations of the chosen approach are discussed. In an example implementation, the feasibility of the approach and models is shown. Different scenarios are simulated to highlight the models and the results are compared.
Desktop Modeling and Simulation: Parsimonious, yet Effective Discrete-Event Simulation Analysis
Bradley, James R.
2012-01-01
This paper evaluates how quickly students can be trained to construct useful discrete-event simulation models using Excel. The typical supply chain used by many large national retailers is described, and an Excel-based simulation model of it is constructed. The set of programming and simulation skills required for development of that model is then determined; we conclude that six hours of training are required to teach the skills to MBA students. The simulation presented here contains all fundamental functionality of a simulation model, and so our result holds for any discrete-event simulation model. We argue therefore that industry workers with the same technical skill set as students who have completed one year in an MBA program can be quickly trained to construct simulation models. This result gives credence to the efficacy of Desktop Modeling and Simulation, whereby simulation analyses can be quickly developed, run, and analyzed with widely available software, namely Excel.
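The fundamental functionality the paper refers to, entities queuing for a constrained resource, can be sketched in a few lines; here in Python rather than Excel, with a fixed service time as a simplifying assumption:

```python
def single_server_sim(arrival_times, service_time):
    """Tiny discrete-event simulation of one server with a FIFO queue.
    A schematic stand-in for the Excel supply-chain model: each entity
    begins service when both it and the server are available.

    arrival_times must be sorted ascending; service_time is fixed.
    """
    departures = []
    server_free_at = 0.0
    for t in arrival_times:
        start = max(t, server_free_at)   # wait in queue if server is busy
        server_free_at = start + service_time
        departures.append(server_free_at)
    return departures
```

Chaining several such stages, each stage's departures becoming the next stage's arrivals, gives the multi-echelon supply chain structure the paper's Excel model captures.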
Systematic simulations of modified gravity: chameleon models
International Nuclear Information System (INIS)
Brax, Philippe; Davis, Anne-Christine; Li, Baojiu; Winther, Hans A.; Zhao, Gong-Bo
2013-01-01
In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 h Mpc^-1, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as a guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future.
Systematic simulations of modified gravity: chameleon models
Energy Technology Data Exchange (ETDEWEB)
Brax, Philippe [Institut de Physique Theorique, CEA, IPhT, CNRS, URA 2306, F-91191Gif/Yvette Cedex (France); Davis, Anne-Christine [DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Li, Baojiu [Institute for Computational Cosmology, Department of Physics, Durham University, Durham DH1 3LE (United Kingdom); Winther, Hans A. [Institute of Theoretical Astrophysics, University of Oslo, 0315 Oslo (Norway); Zhao, Gong-Bo, E-mail: philippe.brax@cea.fr, E-mail: a.c.davis@damtp.cam.ac.uk, E-mail: baojiu.li@durham.ac.uk, E-mail: h.a.winther@astro.uio.no, E-mail: gong-bo.zhao@port.ac.uk [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom)
2013-04-01
In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 h Mpc^-1, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as a guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future.
Circuit simulation model for multi-quantum well laser diodes including transport and capture/escape
International Nuclear Information System (INIS)
Zhuber-Okrog, K.
1996-04-01
This work describes the development of the world's first circuit simulation model for multi-quantum well (MQW) semiconductor lasers comprising carrier transport and capture/escape effects. This model can be seen as the application of a new semiconductor device simulator for quasineutral structures including MQW layers, with an extension for simple single-mode modeling of optical behavior. It is implemented in a circuit simulation program. The model is applied to Fabry-Perot laser diodes and compared to measured data. (author)
Lap weld joint modelling and simulation of welding in programme SYSWELD
Directory of Open Access Journals (Sweden)
Koňár Radoslav
2018-01-01
Simulations of the welding process for practical applications using SYSWELD are presented. This paper presents the simulation of welding in the repair of a high-pressure gas pipeline with a steel sleeve with composite filling. The material of the experimental sample was steel S355. The simulations in SYSWELD are divided into two parts: the thermal simulation followed by the mechanical simulation. The results of the numerical model listed in the article are compared to real experiments.
Modeling And Simulation Of Multimedia Communication Networks
Vallee, Richard; Orozco-Barbosa, Luis; Georganas, Nicolas D.
1989-05-01
In this paper, we present a simulation study of a browsing system involving radiological image servers. The proposed IEEE 802.6 DQDB MAN standard is designated as the computer network to transfer radiological images from file servers to medical workstations, and to simultaneously support real time voice communications. Storage and transmission of original raster scanned images and images compressed according to pyramid data structures are considered. Different types of browsing as well as various image sizes and bit rates in the DQDB MAN are also compared. The elapsed time, measured from the time an image request is issued until the image is displayed on the monitor, is the parameter considered to evaluate the system performance. Simulation results show that image browsing can be supported by the DQDB MAN.
Dinucleotide controlled null models for comparative RNA gene prediction
Directory of Open Access Journals (Sweden)
Gesell Tanja
2008-05-01
Background: Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics. 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model including stacking energies. As a consequence, there is need for dinucleotide-preserving control strategies to assess the significance of such predictions. While there have been randomization algorithms for single sequences for many years, the problem has remained challenging for multiple alignments and there is currently no algorithm available. Results: We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance based approach, a tree is estimated under this model which is used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm giving a new variant of a thermodynamic structure based RNA gene finding program that is not biased by the dinucleotide content. Conclusion: SISSIz implements an efficient algorithm to randomize multiple alignments preserving dinucleotide content. It can be used to get more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine learning based programs, or as standalone RNA gene finding program. Other applications in comparative genomics that require
Dinucleotide controlled null models for comparative RNA gene prediction.
Gesell, Tanja; Washietl, Stefan
2008-05-27
Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics. 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model including stacking energies. As a consequence, there is need for dinucleotide-preserving control strategies to assess the significance of such predictions. While there have been randomization algorithms for single sequences for many years, the problem has remained challenging for multiple alignments and there is currently no algorithm available. We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance based approach, a tree is estimated under this model which is used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm giving a new variant of a thermodynamic structure based RNA gene finding program that is not biased by the dinucleotide content. SISSIz implements an efficient algorithm to randomize multiple alignments preserving dinucleotide content. It can be used to get more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine learning based programs, or as standalone RNA gene finding program. Other applications in comparative genomics that require randomization of multiple alignments can be considered. SISSIz
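The statistic such a null model must preserve is the overlapping dinucleotide count; a correct randomization leaves these counts unchanged while shuffling the sequence. A minimal helper for computing the preserved statistic (this is not the SISSIz algorithm itself, only the quantity it controls):

```python
from collections import Counter

def dinucleotide_counts(seq):
    """Overlapping dinucleotide counts for a sequence: the statistic a
    dinucleotide-preserving null model must leave unchanged.

    A simple mononucleotide shuffle would generally alter these counts,
    which is exactly the bias the abstract describes in thermodynamic
    folding models with stacking energies.
    """
    return Counter(seq[i:i + 2] for i in range(len(seq) - 1))
```

Comparing this Counter before and after a candidate shuffle is a quick sanity check that a randomization procedure actually controls dinucleotide content.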
Inviscid Wall-Modeled Large Eddy Simulations for Improved Efficiency
Aikens, Kurt; Craft, Kyle; Redman, Andrew
2015-11-01
The accuracy of an inviscid flow assumption for wall-modeled large eddy simulations (LES) is examined because of its ability to reduce simulation costs. This assumption is not generally applicable for wall-bounded flows due to the high velocity gradients found near walls. In wall-modeled LES, however, neither the viscous near-wall region nor the viscous length scales in the outer flow are resolved. Therefore, the viscous terms in the Navier-Stokes equations have little impact on the resolved flowfield. Zero pressure gradient flat plate boundary layer results are presented for both viscous and inviscid simulations using a wall model developed previously. The results are very similar and compare favorably to those from another wall model methodology and experimental data. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively. Future research directions are discussed as are preliminary efforts to extend the wall model to include the effects of unresolved wall roughness. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.
A Comparative Study of Simulated and Measured Gear-Flap Flow Interaction
Khorrami, Mehdi R.; Mineck, Raymond E.; Yao, Chungsheng; Jenkins, Luther N.; Fares, Ehab
2015-01-01
The ability of two CFD solvers to accurately characterize the transient, complex, interacting flowfield associated with a realistic gear-flap configuration is assessed via comparison of simulated flow with experimental measurements. The simulated results, obtained with NASA's FUN3D and Exa's PowerFLOW® for a high-fidelity, 18% scale semi-span model of a Gulfstream aircraft in landing configuration (39 deg flap deflection, main landing gear on and off) are compared to two-dimensional and stereo particle image velocimetry measurements taken within the gear-flap flow interaction region during wind tunnel tests of the model. As part of the benchmarking process, direct comparisons of the mean and fluctuating velocity fields are presented in the form of planar contour plots and extracted line profiles at measurement planes in various orientations stationed in the main gear wake. The measurement planes in the vicinity of the flap side edge and downstream of the flap trailing edge are used to highlight the effects of gear presence on tip vortex development and the ability of the computational tools to accurately capture such effects. The present study indicates that both computed datasets contain enough detail to construct a relatively accurate depiction of gear-flap flow interaction. Such a finding increases confidence in using the simulated volumetric flow solutions to examine the behavior of pertinent aerodynamic mechanisms within the gear-flap interaction zone.
Tokamak Simulation Code modeling of NSTX
International Nuclear Information System (INIS)
Jardin, S.C.; Kaye, S.; Menard, J.; Kessel, C.; Glasser, A.H.
2000-01-01
The Tokamak Simulation Code [TSC] is widely used for the design of new axisymmetric toroidal experiments. In particular, TSC was used extensively in the design of the National Spherical Torus eXperiment [NSTX]. The authors have now benchmarked TSC with initial NSTX results and find excellent agreement for plasma and vessel currents and magnetic flux loops when the experimental coil currents are used in the simulations. TSC has also been coupled with a ballooning stability code and with DCON to provide stability predictions for NSTX operation. TSC has also been used to model initial CHI experiments where a large poloidal voltage is applied to the NSTX vacuum vessel, causing a force-free current to appear in the plasma. This is a phenomenon that is similar to the plasma halo current that sometimes develops during a plasma disruption
A Lookahead Behavior Model for Multi-Agent Hybrid Simulation
Directory of Open Access Journals (Sweden)
Mei Yang
2017-10-01
In the military field, multi-agent simulation (MAS plays an important role in studying wars statistically. For a military simulation system, which involves large-scale entities and generates a very large number of interactions during the runtime, the issue of how to improve the running efficiency is of great concern for researchers. Current solutions mainly use hybrid simulation to gain fewer updates and synchronizations, where some important continuous models are maintained implicitly to keep the system dynamics, and partial resynchronization (PR is chosen as the preferable state update mechanism. However, problems, such as resynchronization interval selection and cyclic dependency, remain unsolved in PR, which easily lead to low update efficiency and infinite looping of the state update process. To address these problems, this paper proposes a lookahead behavior model (LBM to implement a PR-based hybrid simulation. In LBM, a minimal safe time window is used to predict the interactions between implicit models, upon which the resynchronization interval can be efficiently determined. Moreover, the LBM gives an estimated state value in the lookahead process so as to break the state-dependent cycle. The simulation results show that, compared with traditional mechanisms, LBM requires fewer updates and synchronizations.
Hypersonic Combustor Model Inlet CFD Simulations and Experimental Comparisons
Venkatapathy, E.; TokarcikPolsky, S.; Deiwert, G. S.; Edwards, Thomas A. (Technical Monitor)
1995-01-01
Numerous two- and three-dimensional computational simulations were performed for the inlet associated with the combustor model for the hypersonic propulsion experiment in the NASA Ames 16-Inch Shock Tunnel. The inlet was designed to produce a combustor-inlet flow that is nearly two-dimensional and of sufficient mass flow rate for large scale combustor testing. The three-dimensional simulations demonstrated that the inlet design met all the design objectives and that the inlet produced a very nearly two-dimensional combustor inflow profile. Numerous two-dimensional simulations were performed with various levels of approximations such as in the choice of chemical and physical models, as well as numerical approximations. Parametric studies were conducted to better understand and to characterize the inlet flow. Results from the two- and three-dimensional simulations were used to predict the mass flux entering the combustor and a mass flux correlation as a function of facility stagnation pressure was developed. Surface heat flux and pressure measurements were compared with the computed results and good agreement was found. The computational simulations helped determine the inlet flow characteristics in the high enthalpy environment, the important parameters that affect the combustor-inlet flow, and the sensitivity of the inlet flow to various modeling assumptions.
Comparing holographic dark energy models with statefinder
International Nuclear Information System (INIS)
Cui, Jing-Lei; Zhang, Jing-Fei
2014-01-01
We apply the statefinder diagnostic to the holographic dark energy models, including the original holographic dark energy (HDE) model, the new holographic dark energy model, the new agegraphic dark energy (NADE) model, and the Ricci dark energy model. In the low-redshift region the holographic dark energy models are degenerate with each other and with the ΛCDM model in the H(z) and q(z) evolutions. In particular, the HDE model is highly degenerate with the ΛCDM model, and in the HDE model the cases with different parameter values are also in strong degeneracy. Since the observational data are mainly within the low-redshift region, it is very important to break this low-redshift degeneracy in the H(z) and q(z) diagnostics by using some quantities with higher order derivatives of the scale factor. It is shown that the statefinder diagnostic r(z) is very useful in breaking the low-redshift degeneracies. By employing the statefinder diagnostic the holographic dark energy models can be differentiated efficiently in the low-redshift region. The degeneracy between the holographic dark energy models and the ΛCDM model can also be broken by this method. Especially for the HDE model, all the previous strong degeneracies appearing in the H(z) and q(z) diagnostics are broken effectively. But for the NADE model, the degeneracy between the cases with different parameter values cannot be broken, even though the statefinder diagnostic is used. A direct comparison of the holographic dark energy models in the r-s plane is also made, in which the separations between the models (including the ΛCDM model) can be directly measured in the light of the current values {r_0, s_0} of the models. (orig.)
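For a flat model with constant dark energy equation of state w, the statefinder pair {r, s}, defined through the standard relations r = a-triple-dot/(a H^3) and s = (r - 1)/(3(q - 1/2)), reduces to closed form, which makes the degeneracy-breaking explicit: ΛCDM sits at the fixed point {1, 0}. A small sketch under these standard definitions (parameter values in the example are illustrative, and this constant-w case is a simpler stand-in for the holographic models in the paper):

```python
def statefinder_wcdm(omega_de, w):
    """Statefinder pair (r, s) for a flat, constant-w dark energy model.

    Uses the closed-form results
        q = 1/2 + (3/2) * w * Omega_de
        r = 1 + (9/2) * w * (1 + w) * Omega_de
        s = (r - 1) / (3 * (q - 1/2))
    For LCDM (w = -1) this gives the fixed point r = 1, s = 0.
    """
    q = 0.5 + 1.5 * w * omega_de
    r = 1.0 + 4.5 * w * (1.0 + w) * omega_de
    s = (r - 1.0) / (3.0 * (q - 0.5))
    return r, s
```

Note that for constant w the expression simplifies to s = 1 + w, so any deviation of s from zero directly measures the departure from ΛCDM in the r-s plane.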
Simulation of root forms using cellular automata model
International Nuclear Information System (INIS)
Winarno, Nanang; Prima, Eka Cahya; Afifah, Ratih Mega Ayu
2016-01-01
This research aims to produce a simulation program for root forms using a cellular automata model. Stephen Wolfram, in his book “A New Kind of Science”, discusses formation rules based on statistical analysis. In accordance with Wolfram's investigation, this research develops a basic computer program using the Delphi 7 programming language. To the best of our knowledge, there is no previous research developing a simulation that describes root forms using the cellular automata model and compares them with natural root forms in the presence of stones as a disturbance. The results show that (1) the simulation used four rules, comparing program results against natural photographs, and each rule showed a different root form; (2) stone disturbances prevent root growth, and the multiplication of root forms was successfully modeled. For this purpose, the research added stones, with a total size of 120 cells, placed randomly in the soil; as in nature, stones cannot be penetrated by plant roots. The results indicate strong potential for further developing the root-form simulation program with 50 variations
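The kind of simulation this abstract describes can be sketched with a small grid automaton. The growth rule and grid size below are illustrative stand-ins for the authors' Delphi 7 rules; only the 120 randomly placed, impenetrable stone cells echo the paper's setup.

```python
import random

random.seed(1)
WIDTH, DEPTH, N_STONE_CELLS = 21, 30, 120
EMPTY, ROOT, STONE = 0, 1, 2

# Soil grid with randomly placed stone cells that roots cannot penetrate.
grid = [[EMPTY] * WIDTH for _ in range(DEPTH)]
for _ in range(N_STONE_CELLS):
    grid[random.randrange(1, DEPTH)][random.randrange(WIDTH)] = STONE

grid[0][WIDTH // 2] = ROOT           # seed cell at the surface
tips = [(0, WIDTH // 2)]             # active growing tips

for _ in range(DEPTH - 1):
    new_tips = []
    for row, col in tips:
        # Illustrative rule: try the three cells below in random order;
        # a stone (or another root) blocks growth in that direction.
        for dc in random.sample((-1, 0, 1), 3):
            r, c = row + 1, col + dc
            if 0 <= c < WIDTH and r < DEPTH and grid[r][c] == EMPTY:
                grid[r][c] = ROOT
                new_tips.append((r, c))
                break                # each tip extends into one cell per step
    tips = new_tips

root_cells = sum(row.count(ROOT) for row in grid)
print("root cells grown:", root_cells)
```

Swapping the tip-extension rule yields the different root forms the authors compare against photographs.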
Simulation of root forms using cellular automata model
Energy Technology Data Exchange (ETDEWEB)
Winarno, Nanang, E-mail: nanang-winarno@upi.edu; Prima, Eka Cahya [International Program on Science Education, Universitas Pendidikan Indonesia, Jl. Dr. Setiabudi no 229, Bandung40154 (Indonesia); Afifah, Ratih Mega Ayu [Department of Physics Education, Post Graduate School, Universitas Pendidikan Indonesia, Jl. Dr. Setiabudi no 229, Bandung40154 (Indonesia)
2016-02-08
This research aims to produce a simulation program for root forms using a cellular automata model. Stephen Wolfram, in his book “A New Kind of Science”, discusses formation rules based on statistical analysis. In accordance with Wolfram's investigation, this research develops a basic computer program using the Delphi 7 programming language. To the best of our knowledge, there is no previous research developing a simulation that describes root forms using the cellular automata model and compares them with natural root forms in the presence of stones as a disturbance. The results show that (1) the simulation used four rules, comparing program results against natural photographs, and each rule showed a different root form; (2) stone disturbances prevent root growth, and the multiplication of root forms was successfully modeled. For this purpose, the research added stones, with a total size of 120 cells, placed randomly in the soil; as in nature, stones cannot be penetrated by plant roots. The results indicate strong potential for further developing the root-form simulation program with 50 variations.
Simulations, evaluations and models. Vol. 1
International Nuclear Information System (INIS)
Brehmer, B.; Leplat, J.
1992-01-01
Papers presented at the Fourth MOHAWC (Models of Human Activities in Work Context) workshop. The general theme was simulations, evaluations and models. The emphasis was on time in relation to the modelling of human activities in modern, high-tech work. Such work often requires people to control dynamic systems, and the behaviour and misbehaviour of these systems in time is a principal focus of work in, for example, a modern process plant. The papers report on microworlds and on their innovative uses, both in experiments and in a new application: testing a program that performs diagnostic reasoning. They present new perspectives on the problem of time in process control, showing the importance of considering the time scales of dynamic tasks, both in individual and in distributed decision making, and they provide new formalisms, both for the representation of time and for reasoning involving time in diagnosis. (AB)
Process model simulations of the divergence effect
Anchukaitis, K. J.; Evans, M. N.; D'Arrigo, R. D.; Smerdon, J. E.; Hughes, M. K.; Kaplan, A.; Vaganov, E. A.
2007-12-01
We explore the extent to which the Vaganov-Shashkin (VS) model of conifer tree-ring formation can explain evidence for changing relationships between climate and tree growth over recent decades. The VS model is driven by daily environmental forcing (temperature, soil moisture, and solar radiation), and simulates tree-ring growth cell-by-cell as a function of the most limiting environmental control. This simplified representation of tree physiology allows us to examine using a selection of case studies whether instances of divergence may be explained in terms of changes in limiting environmental dependencies or transient climate change. Identification of model-data differences permits further exploration of the effects of tree-ring standardization, atmospheric composition, and additional non-climatic factors.
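The "most limiting environmental control" principle that drives VS-type growth models can be sketched as a Liebig-style minimum over simple response functions. The ramp thresholds below are illustrative placeholders, not calibrated VS parameters.

```python
def ramp(x, lo, hi):
    """Piecewise-linear growth response rising from 0 at `lo` to 1 at `hi`."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def daily_growth(temp_c, soil_moisture, radiation):
    """Liebig-style minimum principle used by VS-type models: daily growth is
    set by the single most limiting environmental factor.  The thresholds
    below are illustrative, not calibrated values."""
    g_t = ramp(temp_c, 5.0, 18.0)          # temperature response
    g_w = ramp(soil_moisture, 0.1, 0.3)    # soil-moisture response
    g_e = ramp(radiation, 50.0, 250.0)     # solar-radiation response
    return min(g_t, g_w, g_e)

# A warm, bright day limited by dry soil:
print(daily_growth(20.0, 0.15, 300.0))
```

Divergence, in this framing, appears when the factor achieving the minimum switches over recent decades, changing which climate variable the ring widths track.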
Traffic flow dynamics data, models and simulation
Treiber, Martin
2013-01-01
This textbook provides a comprehensive and instructive coverage of vehicular traffic flow dynamics and modeling. It makes this fascinating interdisciplinary topic, which to date was only documented in parts by specialized monographs, accessible to a broad readership. Numerous figures and problems with solutions help the reader to quickly understand and practice the presented concepts. This book is targeted at students of physics and traffic engineering and, more generally, also at students and professionals in computer science, mathematics, and interdisciplinary topics. It also offers material for project work in programming and simulation at college and university level. The main part, after presenting different categories of traffic data, is devoted to a mathematical description of the dynamics of traffic flow, covering macroscopic models which describe traffic in terms of density, as well as microscopic many-particle models in which each particle corresponds to a vehicle and its driver. Focus chapters on ...
Biomechanics trends in modeling and simulation
Ogden, Ray
2017-01-01
The book presents a state-of-the-art overview of biomechanical and mechanobiological modeling and simulation of soft biological tissues. Seven well-known scientists working in that particular field discuss topics such as biomolecules, networks and cells as well as failure, multi-scale, agent-based, bio-chemo-mechanical and finite element models appropriate for computational analysis. Applications include arteries, the heart, vascular stents and valve implants as well as adipose, brain, collagenous and engineered tissues. The mechanics of the whole cell and sub-cellular components as well as the extracellular matrix structure and mechanotransduction are described. In particular, the formation and remodeling of stress fibers, cytoskeletal contractility, cell adhesion and the mechanical regulation of fibroblast migration in healing myocardial infarcts are discussed. The essential ingredients of continuum mechanics are provided. Constitutive models of fiber-reinforced materials with an emphasis on arterial walls ...
Qualitative simulation in formal process modelling
International Nuclear Information System (INIS)
Sivertsen, Elin R.
1999-01-01
In relation to several different research activities at the OECD Halden Reactor Project, the usefulness of formal process models has been identified. Being represented in some appropriate representation language, the purpose of these models is to model process plants and plant automatics in a unified way to allow verification and computer aided design of control strategies. The present report discusses qualitative simulation and the tool QSIM as one approach to formal process models. In particular, the report aims at investigating how recent improvements of the tool facilitate the use of the approach in areas like process system analysis, procedure verification, and control software safety analysis. An important long term goal is to provide a basis for using qualitative reasoning in combination with other techniques to facilitate the treatment of embedded programmable systems in Probabilistic Safety Analysis (PSA). This is motivated from the potential of such a combination in safety analysis based on models comprising both software, hardware, and operator. It is anticipated that the research results from this activity will benefit V and V in a wide variety of applications where formal process models can be utilized. Examples are operator procedures, intelligent decision support systems, and common model repositories (author) (ml)
Bias-Correction in Vector Autoregressive Models: A Simulation Study
Directory of Open Access Journals (Sweden)
Tom Engsted
2014-03-01
Full Text Available We analyze the properties of various methods for bias-correcting parameter estimates in both stationary and non-stationary vector autoregressive models. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that when the model is stationary this simple bias formula compares very favorably to bootstrap bias-correction, both in terms of bias and mean squared error. In non-stationary models, the analytical bias formula performs noticeably worse than bootstrapping. Both methods yield a notable improvement over ordinary least squares. We pay special attention to the risk of pushing an otherwise stationary model into the non-stationary region of the parameter space when correcting for bias. Finally, we consider a recently proposed reduced-bias weighted least squares estimator, and we find that it compares very favorably in non-stationary models.
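The flavor of analytical bias correction can be sketched in the simplest stationary case, an AR(1) without intercept, where the first-order OLS bias is approximately -2ρ/T. The Monte Carlo below (sample size, persistence and replication count are arbitrary illustrative choices) shows the plug-in correction shrinking the bias, mirroring the stationary results reported above.

```python
import random

random.seed(0)

def simulate_ar1(rho, T):
    """Simulate a zero-mean AR(1) series y_t = rho * y_{t-1} + e_t."""
    y = [0.0]
    for _ in range(T - 1):
        y.append(rho * y[-1] + random.gauss(0.0, 1.0))
    return y

def ols_ar1(y):
    """OLS slope of y_t on y_{t-1} (no intercept, zero-mean data)."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(v * v for v in y[:-1])
    return num / den

rho_true, T, reps = 0.9, 50, 2000
raw, corrected = [], []
for _ in range(reps):
    r = ols_ar1(simulate_ar1(rho_true, T))
    raw.append(r)
    # Plug-in analytical correction: first-order bias is about -2*rho/T
    # for the no-intercept AR(1), so add back 2*rho_hat/T.
    corrected.append(r + 2 * r / T)

bias_raw = sum(raw) / reps - rho_true
bias_corr = sum(corrected) / reps - rho_true
print(f"raw bias {bias_raw:+.4f}, corrected bias {bias_corr:+.4f}")
```

The same plug-in logic extends to the VAR case with the matrix bias formulas the paper analyzes.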
Traffic flow dynamics. Data, models and simulation
Energy Technology Data Exchange (ETDEWEB)
Treiber, Martin [Technische Univ. Dresden (Germany). Inst. fuer Wirtschaft und Verkehr; Kesting, Arne [TomTom Development Germany GmbH, Berlin (Germany)
2013-07-01
First comprehensive textbook of this fascinating interdisciplinary topic which explains advances in a way that is easily accessible to engineering, physics and math students. Presents practical applications of traffic theory such as driving behavior, stability analysis, stop-and-go waves, and travel time estimation. Presents the topic in a novel and systematic way by addressing both microscopic and macroscopic models with a focus on traffic instabilities. Revised and extended edition of the German textbook ''Verkehrsdynamik und -simulation''. This textbook provides a comprehensive and instructive coverage of vehicular traffic flow dynamics and modeling. It makes this fascinating interdisciplinary topic, which to date was only documented in parts by specialized monographs, accessible to a broad readership. Numerous figures and problems with solutions help the reader to quickly understand and practice the presented concepts. This book is targeted at students of physics and traffic engineering and, more generally, also at students and professionals in computer science, mathematics, and interdisciplinary topics. It also offers material for project work in programming and simulation at college and university level. The main part, after presenting different categories of traffic data, is devoted to a mathematical description of the dynamics of traffic flow, covering macroscopic models which describe traffic in terms of density, as well as microscopic many-particle models in which each particle corresponds to a vehicle and its driver. Focus chapters on traffic instabilities and model calibration/validation present these topics in a novel and systematic way. Finally, the theoretical framework is shown at work in selected applications such as traffic-state and travel-time estimation, intelligent transportation systems, traffic operations management, and a detailed physics-based model for fuel consumption and emissions.
cellGPU: Massively parallel simulations of dynamic vertex models
Sussman, Daniel M.
2017-10-01
Vertex models represent confluent tissue by polygonal or polyhedral tilings of space, with the individual cells interacting via force laws that depend on both the geometry of the cells and the topology of the tessellation. This dependence on the connectivity of the cellular network introduces several complications to performing molecular-dynamics-like simulations of vertex models, and in particular makes parallelizing the simulations difficult. cellGPU addresses this difficulty and lays the foundation for massively parallelized, GPU-based simulations of these models. This article discusses its implementation for a pair of two-dimensional models, and compares the typical performance that can be expected between running cellGPU entirely on the CPU versus its performance when running on a range of commercial and server-grade graphics cards. By implementing the calculation of topological changes and forces on cells in a highly parallelizable fashion, cellGPU enables researchers to simulate time- and length-scales previously inaccessible via existing single-threaded CPU implementations. Program Files doi:http://dx.doi.org/10.17632/6j2cj29t3r.1 Licensing provisions: MIT Programming language: CUDA/C++ Nature of problem: Simulations of off-lattice "vertex models" of cells, in which the interaction forces depend on both the geometry and the topology of the cellular aggregate. Solution method: Highly parallelized GPU-accelerated dynamical simulations in which the force calculations and the topological features can be handled on either the CPU or GPU. Additional comments: The code is hosted at https://gitlab.com/dmsussman/cellGPU, with documentation additionally maintained at http://dmsussman.gitlab.io/cellGPUdocumentation
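The geometry-dependent force laws mentioned above derive from the standard 2D vertex-model energy E = K_A(A - A0)² + K_P(P - P0)² per cell. A minimal single-cell sketch (target area, perimeter and moduli values below are illustrative, not cellGPU defaults):

```python
def shoelace_area(pts):
    """Signed polygon area via the shoelace formula."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return 0.5 * s

def perimeter(pts):
    """Polygon perimeter as the sum of edge lengths."""
    n = len(pts)
    return sum(((pts[(i + 1) % n][0] - pts[i][0])**2 +
                (pts[(i + 1) % n][1] - pts[i][1])**2) ** 0.5 for i in range(n))

def cell_energy(pts, A0=1.0, P0=3.8, KA=1.0, KP=1.0):
    """Standard 2D vertex-model energy of one cell:
    E = KA*(A - A0)^2 + KP*(P - P0)^2."""
    A, P = shoelace_area(pts), perimeter(pts)
    return KA * (A - A0)**2 + KP * (P - P0)**2

unit_square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(cell_energy(unit_square))  # (1-1)^2 + (4-3.8)^2 = 0.04
```

Forces on each vertex are gradients of this energy summed over adjacent cells; the GPU parallelization concerns exactly these per-vertex gradient and topology updates.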
Modeling, Simulation and Position Control of 3 Degree of Freedom Articulated Manipulator
Directory of Open Access Journals (Sweden)
Hossein Sadegh Lafmejani
2013-09-01
Full Text Available In this paper, the modeling, simulation and control of a 3-degree-of-freedom articulated robotic manipulator have been studied. First, we extracted the kinematics and dynamics equations of the manipulator using the Lagrange method. In order to validate the analytical model of the manipulator, we compared the model simulated in the Matlab simulation environment with the model simulated with the SimMechanics toolbox. A sample path was designed for analyzing the tracking behavior. The system was linearized with feedback linearization and a PID controller was then applied to track a reference trajectory. Finally, the control results were compared with those of a nonlinear PID controller.
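The control scheme described, feedback linearization followed by a PID loop, can be sketched on a single joint: exact linearization reduces each joint to a double integrator q̈ = u, which a PID then drives to a reference. Gains, time step and the step reference below are illustrative, not taken from the paper.

```python
# After exact feedback linearization each joint behaves as a double
# integrator q'' = u; a PID loop then tracks the reference angle.
# Gains and the step reference are illustrative, not from the paper.
dt, T = 0.001, 8.0
kp, ki, kd = 400.0, 500.0, 40.0
q, dq, integ = 0.0, 0.0, 0.0
q_ref = 1.0                             # step reference, radians

t = 0.0
while t < T:
    e = q_ref - q
    integ += e * dt
    u = kp * e + ki * integ - kd * dq   # PID on the linearized plant
    dq += u * dt                        # q'' = u  (double integrator)
    q += dq * dt                        # explicit Euler integration
    t += dt

print(f"final angle {q:.4f} rad, tracking error {q_ref - q:.2e}")
```

The closed-loop characteristic polynomial is s³ + kd·s² + kp·s + ki, so Routh's criterion (kd·kp > ki) gives a quick stability check for any gain choice.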
Simulation Model of Mobile Detection Systems
International Nuclear Information System (INIS)
Edmunds, T.; Faissol, D.; Yao, Y.
2009-01-01
In this paper, we consider a mobile source that we attempt to detect with man-portable, vehicle-mounted or boat-mounted radiation detectors. The source is assumed to transit an area populated with these mobile detectors, and the objective is to detect the source before it reaches a perimeter. We describe a simulation model developed to estimate the probability that one of the mobile detectors will come into close proximity of the moving source and detect it. We illustrate with a maritime simulation example. Our simulation takes place in a 10 km by 5 km rectangular bay patrolled by boats equipped with 2-inch x 4-inch x 16-inch NaI detectors. Boats to be inspected enter the bay and randomly proceed to one of seven harbors on the shore. A source-bearing boat enters the mouth of the bay and proceeds to a pier on the opposite side. We wish to determine the probability that the source is detected and its range from target when detected. Patrol boats select the nearest in-bound boat for inspection and initiate an intercept course. Once within an operational range for the detection system, a detection algorithm is started. If the patrol boat confirms the source is not present, it selects the next nearest boat for inspection. Each run of the simulation ends either when a patrol successfully detects a source or when the source reaches its target. Several statistical detection algorithms have been implemented in the simulation model. First, a simple k-sigma algorithm, which alarms when the counts in a time window exceed the mean background plus k times the standard deviation of background, is available to the user. The time window used is optimized with respect to the signal-to-background ratio for that range and relative speed. Second, a sequential probability ratio test [Wald 1947] is available, and configured in this simulation with a target false positive probability of 0.001 and false negative probability of 0.1. This test is utilized when the mobile detector maintains
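The first detection algorithm described, the k-sigma test, is a one-liner once the background statistics are fixed. The sketch below assumes a Poisson background, so the standard deviation is the square root of the mean background count; the count values are illustrative.

```python
import math

def k_sigma_alarm(counts, mean_bg, k=3.0):
    """k-sigma test: alarm when the gross counts in a time window exceed
    the mean background by more than k standard deviations.  For a Poisson
    background the standard deviation is sqrt(mean_bg)."""
    return counts > mean_bg + k * math.sqrt(mean_bg)

mean_bg = 400.0                      # expected background counts in the window
print(k_sigma_alarm(430, mean_bg))   # 430 < 400 + 3*20 = 460 -> no alarm
print(k_sigma_alarm(500, mean_bg))   # 500 > 460 -> alarm
```

The choice of time window trades statistical power against the dwell time available while the patrol boat closes on its target, which is why the simulation optimizes it against range and relative speed.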
Simulation Model of Mobile Detection Systems
Energy Technology Data Exchange (ETDEWEB)
Edmunds, T; Faissol, D; Yao, Y
2009-01-27
In this paper, we consider a mobile source that we attempt to detect with man-portable, vehicle-mounted or boat-mounted radiation detectors. The source is assumed to transit an area populated with these mobile detectors, and the objective is to detect the source before it reaches a perimeter. We describe a simulation model developed to estimate the probability that one of the mobile detectors will come into close proximity of the moving source and detect it. We illustrate with a maritime simulation example. Our simulation takes place in a 10 km by 5 km rectangular bay patrolled by boats equipped with 2-inch x 4-inch x 16-inch NaI detectors. Boats to be inspected enter the bay and randomly proceed to one of seven harbors on the shore. A source-bearing boat enters the mouth of the bay and proceeds to a pier on the opposite side. We wish to determine the probability that the source is detected and its range from target when detected. Patrol boats select the nearest in-bound boat for inspection and initiate an intercept course. Once within an operational range for the detection system, a detection algorithm is started. If the patrol boat confirms the source is not present, it selects the next nearest boat for inspection. Each run of the simulation ends either when a patrol successfully detects a source or when the source reaches its target. Several statistical detection algorithms have been implemented in the simulation model. First, a simple k-sigma algorithm, which alarms when the counts in a time window exceed the mean background plus k times the standard deviation of background, is available to the user. The time window used is optimized with respect to the signal-to-background ratio for that range and relative speed. Second, a sequential probability ratio test [Wald 1947] is available, and configured in this simulation with a target false positive probability of 0.001 and false negative probability of 0.1. This test is utilized when the mobile detector maintains
CASTOR detector. Model, objectives and simulated performance
International Nuclear Information System (INIS)
Angelis, A. L. S.; Mavromanolakis, G.; Panagiotou, A. D.; Aslanoglou, X.; Nicolis, N.; Lobanov, M.; Erine, S.; Kharlov, Y. V.; Bogolyubsky, M. Y.; Kurepin, A. B.; Chileev, K.; Wlodarczyk, Z.
2001-01-01
A phenomenological model is presented describing the formation and evolution of a Centauro fireball in the baryon-rich region in nucleus-nucleus interactions in the upper atmosphere and at the LHC. The small particle multiplicity and the imbalance of electromagnetic and hadronic content characterizing a Centauro event, as well as the strongly penetrating particles (assumed to be strangelets) frequently accompanying them, can be naturally explained. The CASTOR calorimeter is described, a subdetector of the ALICE experiment dedicated to the search for Centauros in the very forward, baryon-rich region of central Pb+Pb collisions at the LHC. The basic characteristics and simulated performance of the calorimeter are presented.
An Agent-Based Monetary Production Simulation Model
DEFF Research Database (Denmark)
Bruun, Charlotte
2006-01-01
An agent-based simulation model programmed in Objective Borland Pascal. Program and source code are downloadable.
Simulation model for port shunting yards
Rusca, A.; Popa, M.; Rosca, E.; Rosca, M.; Dragu, V.; Rusca, F.
2016-08-01
Sea ports are important nodes in the supply chain, joining two high-capacity transport modes: rail and maritime transport. The huge cargo flows transiting a port require high-capacity constructions and installations such as berths, large-capacity cranes and shunting yards. However, the specificity of port shunting yards raises several problems, such as: limited access, since these are terminus stations for the rail network; the input/output of large transit flows of cargo relative to the infrequent departure/arrival of ships; and limited land availability for implementing solutions to serve these flows. It is necessary to identify technological solutions that address these problems. The paper proposes a simulation model developed with the ARENA computer simulation software, suitable for shunting yards which serve sea ports with access to the rail network. The principal aspects of shunting yards and adequate measures to increase their transit capacity are investigated. The operation capacity of the shunting-yard sub-system is assessed taking into consideration the required operating standards, and measures of performance of the railway station (e.g. waiting time for freight wagons, number of railway lines in the station, storage area, etc.) are computed. The conclusions and results drawn from the simulation help transport and logistics specialists to test proposals for improving port management.
Traffic simulation based ship collision probability modeling
Energy Technology Data Exchange (ETDEWEB)
Goerlandt, Floris, E-mail: floris.goerlandt@tkk.f [Aalto University, School of Science and Technology, Department of Applied Mechanics, Marine Technology, P.O. Box 15300, FI-00076 AALTO, Espoo (Finland); Kujala, Pentti [Aalto University, School of Science and Technology, Department of Applied Mechanics, Marine Technology, P.O. Box 15300, FI-00076 AALTO, Espoo (Finland)
2011-01-15
Maritime traffic poses various risks in terms of human, environmental and economic loss. In a risk analysis of ship collisions, it is important to get a reasonable estimate for the probability of such accidents and the consequences they lead to. In this paper, a method is proposed to assess the probability of vessels colliding with each other. The method is capable of determining the expected number of accidents, the locations where and the time when they are most likely to occur, while providing input for models concerned with the expected consequences. At the basis of the collision detection algorithm lies an extensive time domain micro-simulation of vessel traffic in the given area. The Monte Carlo simulation technique is applied to obtain a meaningful prediction of the relevant factors of the collision events. Data obtained through the Automatic Identification System is analyzed in detail to obtain realistic input data for the traffic simulation: traffic routes, the number of vessels on each route, the ship departure times, main dimensions and sailing speed. The results obtained by the proposed method for the studied case of the Gulf of Finland are presented, showing reasonable agreement with registered accident and near-miss data.
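The collision-candidate detection at the heart of such micro-simulations reduces, for straight-line vessel segments, to a closest-point-of-approach (CPA) computation. The Monte Carlo sketch below uses invented crossing-traffic routes and an illustrative 0.1 km threshold, not the paper's Gulf of Finland AIS data.

```python
import math
import random

def cpa(p1, v1, p2, v2):
    """Time and distance of closest point of approach for two vessels
    moving with constant velocity (positions in km, speeds in km/h)."""
    dp = (p2[0] - p1[0], p2[1] - p1[1])
    dv = (v2[0] - v1[0], v2[1] - v1[1])
    dv2 = dv[0]**2 + dv[1]**2
    t = 0.0 if dv2 == 0 else max(0.0, -(dp[0]*dv[0] + dp[1]*dv[1]) / dv2)
    dx, dy = dp[0] + dv[0]*t, dp[1] + dv[1]*t
    return t, math.hypot(dx, dy)

# Monte Carlo sketch: sample crossing traffic and count encounters whose
# CPA falls below a "collision candidate" threshold (0.1 km, illustrative).
random.seed(2)
threshold, hits, n = 0.1, 0, 10000
for _ in range(n):
    own = ((0.0, random.uniform(0.0, 5.0)), (15.0, 0.0))   # eastbound vessel
    tgt = ((random.uniform(0.0, 10.0), 0.0), (0.0, 12.0))  # northbound vessel
    _, d = cpa(own[0], own[1], tgt[0], tgt[1])
    hits += d < threshold
print(f"collision candidates: {hits} of {n} sampled encounters")
```

A full model would then pass each candidate encounter to a causation-probability and consequence model, as the paper does.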
Modeling VOC transport in simulated waste drums
International Nuclear Information System (INIS)
Liekhus, K.J.; Gresham, G.L.; Peterson, E.S.; Rae, C.; Hotz, N.J.; Connolly, M.J.
1993-06-01
A volatile organic compound (VOC) transport model has been developed to describe unsteady-state VOC permeation and diffusion within a waste drum. Model equations account for three primary mechanisms for VOC transport from a void volume within the drum. These mechanisms are VOC permeation across a polymer boundary, VOC diffusion across an opening in a volume boundary, and VOC solubilization in a polymer boundary. A series of lab-scale experiments was performed in which the VOC concentration was measured in simulated waste drums under different conditions. A lab-scale simulated waste drum consisted of a sized-down 55-gal metal drum containing a modified rigid polyethylene drum liner. Four polyethylene bags were sealed inside a large polyethylene bag, supported by a wire cage, and placed inside the drum liner. The small bags were filled with a VOC-air gas mixture and the VOC concentration was measured throughout the drum over a period of time. Test variables included the type of VOC-air gas mixtures introduced into the small bags, the small bag closure type, and the presence or absence of a variable external heat source. Model results were calculated for those trials where the VOC permeability had been measured. Permeabilities for five VOCs [methylene chloride, 1,1,2-trichloro-1,2,2-trifluoroethane (Freon-113), 1,1,1-trichloroethane, carbon tetrachloride, and trichloroethylene] were measured across a polyethylene bag. Comparison of model and experimental results of VOC concentration as a function of time indicates that the model accurately accounts for significant VOC transport mechanisms in a lab-scale waste drum.
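One of the three transport mechanisms, permeation across a polymer boundary, can be sketched as a first-order exchange between the bag and the surrounding void volume. All parameter values below are illustrative placeholders, not the measured permeabilities from the study.

```python
import math

# Minimal sketch of one mechanism from the model: unsteady-state VOC
# permeation out of a sealed polymer bag into the surrounding void volume,
#   dC/dt = -(P*A / (V*L)) * (C - C_out).
# All parameter values are illustrative, not measured permeabilities.
P = 1.0e-9        # permeability (illustrative units)
A = 0.5           # bag surface area, m^2
V = 0.01          # bag gas volume, m^3
L = 1.0e-4        # film thickness, m
C0, C_out = 100.0, 0.0   # VOC concentration inside / outside, ppm

k = P * A / (V * L)      # lumped first-order rate constant, 1/s
dt, t_end = 60.0, 3600.0
C, t = C0, 0.0
while t < t_end:
    C += -k * (C - C_out) * dt   # explicit Euler step
    t += dt

C_exact = C_out + (C0 - C_out) * math.exp(-k * t_end)
print(f"after 1 h: Euler {C:.2f} ppm vs analytic {C_exact:.2f} ppm")
```

The full drum model couples several such balances (bag, liner, drum headspace) plus diffusion through openings, but each boundary contributes a term of this first-order form.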
Modeling and simulation of milk emulsion drying in spray dryers
Directory of Open Access Journals (Sweden)
V. S. Birchal
2005-06-01
Full Text Available This work aims at modeling and simulating the drying of whole milk emulsion in spray dryers. Drops and particles make up the discrete phase and are distributed into temporal compartments following their residence time in the dryer. Air is the continuous and well-mixed phase. Mass and energy balances are developed for each phase, taking into account their interactions. Constitutive equations for describing the drop swelling and drying mechanisms as well as the heat and mass transfer between particles and hot air are proposed and analyzed. A set of algebraic-differential equations is obtained and solved by specific numerical codes. Results from experiments carried out in a pilot spray dryer are used to validate the model developed and the numerical algorithm. Comparing the simulated and experimental data, it is shown that the model predicts well the individual drop-particle history inside the dryer as well as the overall outlet air-particle temperature and humidity.
International Nuclear Information System (INIS)
Zerbino, H.
1999-01-01
In 1994-1996, Thomson Training and Simulation (TT and S) carried out the D50 Project, which involved the design and construction of optimized replica simulators for one Dutch and three German Nuclear Power Plants. It was recognized early on that the faithful reproduction of the Siemens reactor control and protection systems would impose extremely stringent demands on the simulation models, particularly the Core physics and the RCS thermohydraulics. The quality of the models, and their thorough validation, were thus essential. The present paper describes the main features of the fully 3D Core model implemented by TT and S, and its extensive validation campaign, which was defined in extremely positive collaboration with the Customer and the Core Data suppliers. (author)
Hou, Zeyu; Lu, Wenxi
2018-05-01
Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disaster, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes utilizing a support vector regression (SVR) model and a kernel extreme learning machine (KELM) model to enrich the content of the surrogate model. The surrogate model was itself key in replacing the simulation model, reducing the huge computational burden of iterations in the simulation-optimization technique to solve GCSI problems, especially in GCSI problems of aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study between the Kriging, SVR, and KELM models is reported. Additionally, there is analysis of the influence of parameter optimization and the structure of the training sample dataset on the approximation accuracy of the surrogate model. It was found that the KELM model was the most accurate surrogate model, and its performance was significantly improved after parameter optimization. The approximation accuracy of the surrogate model to the simulation model did not always improve with increasing numbers of training samples. Using the appropriate number of training samples was critical for improving the performance of the surrogate model and avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work could reasonably predict system responses in given operation conditions. Replacing the simulation model with a KELM model considerably reduced the computational burden of the simulation-optimization process and also maintained high computation accuracy.
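A KELM-style surrogate is, at its core, kernel regression: solve (K + λI)α = y on training pairs generated by the simulation model, then predict with kernel expansions so the expensive simulator never has to be called inside the optimization loop. The sketch below uses a toy one-dimensional "simulator" and plain kernel ridge regression as a stand-in for the paper's KELM; the kernel width and regularization are illustrative choices.

```python
import math
import random

random.seed(3)

def simulation_model(x):
    """Toy stand-in for an expensive contaminant-transport simulator."""
    return math.sin(3 * x) + 0.5 * x

def rbf(a, b, gamma=10.0):
    """Gaussian (RBF) kernel."""
    return math.exp(-gamma * (a - b) ** 2)

# Kernel ridge regression, the same kernel trick used by KELM surrogates:
# solve (K + reg*I) alpha = y, then predict f(x) = sum_i alpha_i k(x, x_i).
xs = [i / 19 for i in range(20)]            # training sample dataset
ys = [simulation_model(x) for x in xs]
reg, n = 1e-6, len(xs)
K = [[rbf(xs[i], xs[j]) + (reg if i == j else 0.0) for j in range(n)]
     for i in range(n)]

# Gaussian elimination with partial pivoting (tiny system; a real
# surrogate would use a linear-algebra library).
for col in range(n):
    piv = max(range(col, n), key=lambda r: abs(K[r][col]))
    K[col], K[piv] = K[piv], K[col]
    ys[col], ys[piv] = ys[piv], ys[col]
    for r in range(col + 1, n):
        f = K[r][col] / K[col][col]
        for c in range(col, n):
            K[r][c] -= f * K[col][c]
        ys[r] -= f * ys[col]
alpha = [0.0] * n
for r in range(n - 1, -1, -1):
    alpha[r] = (ys[r] - sum(K[r][c] * alpha[c] for c in range(r + 1, n))) / K[r][r]

def surrogate(x):
    """Cheap kernel-expansion replacement for simulation_model."""
    return sum(a * rbf(x, xi) for a, xi in zip(alpha, xs))

err = max(abs(surrogate(x) - simulation_model(x)) for x in (0.05, 0.33, 0.61, 0.9))
print(f"max surrogate error at held-out points: {err:.4f}")
```

As the abstract notes, the size and layout of the training sample dataset, not just its size, governs how well such a surrogate approximates the simulator.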
Construction and simulation of a novel continuous traffic flow model
International Nuclear Information System (INIS)
Hwang, Yao-Hsin; Yu, Jui-Ling
2017-01-01
In this paper, we aim to propose a novel mathematical model for traffic flow and apply a newly developed characteristic particle method to solve the associated governing equations. As compared with existing non-equilibrium higher-order traffic flow models, the present one is put forward to satisfy the following three conditions: 1. Preserve the equilibrium state in the smooth region. 2. Yield an anisotropic propagation of traffic flow information. 3. Be expressed in conservation-law form for traffic momentum. These conditions ensure a more practical simulation of traffic flow physics: the current traffic is not influenced by conditions behind it, and conditions across a traffic shock are unambiguous. Through analyses of the characteristics, stability condition and steady-state solution adherent to the equation system, it is shown that the proposed model actually conforms to these conditions. Furthermore, this model can be cast into its characteristic form which, incorporated with the Rankine-Hugoniot relation, is appropriate to be simulated by the characteristic particle method to obtain accurate computational results. - Highlights: • The traffic model is expressed with the momentum conservation law. • Traffic flow information propagates anisotropically and preserves the equilibrium state in the smooth region. • Computational particles of two families are invented to mimic forward-running and backward-running characteristics. • Formation of shocks is naturally detected by the intersection of computational particles of the same family. • A newly developed characteristic particle method is used to simulate the traffic flow model equations.
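The Rankine-Hugoniot relation invoked above fixes a traffic shock's speed from the jump in flux, s = (q_r − q_l)/(ρ_r − ρ_l). The sketch below evaluates it for the classical Greenshields flux, an illustrative first-order model rather than the paper's higher-order system; the free speed and jam density are also illustrative.

```python
# Greenshields flux q(rho) = v_f * rho * (1 - rho/rho_max) and the
# Rankine-Hugoniot relation s = (q_r - q_l)/(rho_r - rho_l) for shock speed.
v_f, rho_max = 100.0, 120.0    # free speed km/h, jam density veh/km (illustrative)

def flux(rho):
    """Vehicles per hour passing a point at density rho."""
    return v_f * rho * (1 - rho / rho_max)

def shock_speed(rho_l, rho_r):
    """Speed of a traffic shock joining left/right densities (km/h)."""
    return (flux(rho_r) - flux(rho_l)) / (rho_r - rho_l)

# Light traffic running into a dense platoon: the shock travels backwards.
s = shock_speed(20.0, 110.0)
print(f"shock speed {s:.1f} km/h")
```

In the paper's particle method this same jump condition appears when computational particles of the same family intersect, which is how shock formation is detected.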
A Comparative Study of Control Methods for a Robotic Manipulator with Six DOF in Simulation
Directory of Open Access Journals (Sweden)
Smyrnaiou Georgia P.
2017-01-01
Full Text Available In this paper a comparative study of classical control methods is illustrated for testing a mathematical model which controls the six actuators of a six-degrees-of-freedom robotic arm with a single controller, aiming at the constructive simplification of the system. In more detail, a mathematical model of the system is designed which simulates all mechanical parts, including the 5-way directional pneumatic valve, the pneumatic actuators/pistons, and the mathematical model of the controller. The purpose of the above is the tuning of a Single Input, Multiple Output (SIMO) controller which will direct the motion of the six pneumatic pistons. The thorough analysis of the implementation of the pneumatic system in the Matlab/Simulink environment is followed by experimentation and results using Proportional (P), Proportional-Integral (PI), Proportional-Derivative (PD) and Proportional-Integral-Derivative (PID) controllers. The simulation results show the advantages of the above classical control methods on a robotic arm which imitates human motion and is made by a well-known company in the field of pneumatic automation.
Comparative Analysis of Investment Decision Models
Directory of Open Access Journals (Sweden)
Ieva Kekytė
2017-06-01
The rapid development of financial markets has created new challenges for both investors and investment practice. This has increased the demand for innovative, modern investment and portfolio management decisions adequate to market conditions. Financial markets receive special attention through the creation of new models that incorporate financial risk management and investment decision support systems. Researchers recognize the need to address financial problems using models that are consistent with reality and based on sophisticated quantitative analysis techniques; thus, the role of mathematical modeling in finance becomes important. This article reviews various investment decision-making models, which draw on forecasting, optimization, stochastic processes, artificial intelligence, etc., and have become useful tools for investment decisions.
Heavy truck modeling for fuel consumption. Simulations and measurements
Energy Technology Data Exchange (ETDEWEB)
Sandberg, T.
2001-12-01
Fuel consumption for heavy trucks depends on many factors, like roads, weather, and driver behavior, that are hard for a manufacturer to influence. However, one design possibility is the power train configuration. Here a new simulation program for heavy trucks is created to find the configuration of the power train that gives the lowest fuel consumption for each transport task. For efficient simulations the model uses production code for speed and gear control, and it uses exchangeable data sets to allow simulation of the whole production range of engine types on recorded road profiles from all over the world. Combined with a graphical user interface, this application is called STARS (Scania Truck And Road Simulation). The forces of rolling resistance and air resistance in the model are validated through an experiment in which the propeller shaft torque of a heavy truck is measured. It is found that the coefficient of rolling resistance is strongly dependent on tire temperature, not only on vehicle speed as expected. This led to the development of a new model for rolling resistance that includes the dynamic behavior of the tires and relates rolling resistance to tire temperature and vehicle speed. In another experiment, the fuel consumption of a test truck in highway driving is measured; the altitude of the road is recorded with a barometer and used in the corresponding simulations. Despite the limited accuracy of this equipment, the simulation program manages to predict a level of fuel consumption only 2% lower than the real measurements. It is concluded that STARS is a good tool for predicting fuel consumption for trucks in highway driving and for comparing different power train configurations.
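The tire-temperature dependence of rolling resistance found in this experiment can be sketched with a simple first-order warm-up model. The functional forms and every coefficient below are illustrative assumptions for the sketch, not the actual STARS tire model:

```python
def simulate_rolling_resistance(v, t_end, dt=1.0, T0=10.0, tau=600.0):
    """Integrate an assumed first-order tire temperature model and return
    a history of (time, tire temperature, Crr) tuples.

    Assumptions (illustrative only): the steady-state tire temperature
    rises with speed, and the rolling-resistance coefficient Crr falls
    linearly with temperature while rising slightly with speed.
    """
    T = T0
    history = []
    t = 0.0
    while t < t_end:
        T_ss = 20.0 + 0.5 * v                # assumed steady-state temp [C]
        T += (T_ss - T) / tau * dt           # first-order warm-up dynamics
        crr = 0.008 - 2e-5 * (T - 20.0) + 1e-5 * (v / 25.0)  # assumed Crr law
        history.append((t, T, crr))
        t += dt
    return history
```

Under these assumptions the model reproduces the qualitative finding above: as the tire warms toward steady state during a drive, the rolling-resistance coefficient drops, so Crr cannot be treated as a function of speed alone.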
Modeling and visual simulation of Microalgae photobioreactor
Zhao, Ming; Hou, Dapeng; Hu, Dawei
Microalgae are nutritious, autotrophic organisms with high photosynthetic efficiency, widely distributed on land and in the sea. They are used extensively in medicine, food, aerospace, biotechnology, environmental protection, and other fields. A photobioreactor is an important piece of equipment used mainly to cultivate microalgae in large quantities and at high density. In this paper, based on a mathematical model of microalgae growing under different light intensities, a three-dimensional visualization model was built and implemented in 3ds Max, Virtools, and other three-dimensional software. As photosynthetic organisms, microalgae efficiently produce oxygen and absorb carbon dioxide; the goal of the visual simulation is to display these changes and their impact on oxygen and carbon dioxide intuitively. Different temperatures and light intensities were selected to control the photobioreactor, and the dynamic changes in microalgal biomass, oxygen, and carbon dioxide were observed, with the aim of providing visualization support for microalgae and photobioreactor research.
Molecular models and simulations of layered materials
International Nuclear Information System (INIS)
Kalinichev, Andrey G.; Cygan, Randall Timothy; Heinz, Hendrik; Greathouse, Jeffery A.
2008-01-01
The micro- to nano-sized nature of layered materials, particularly characteristic of naturally occurring clay minerals, limits our ability to fully interrogate their atomic dispositions and crystal structures. The low symmetry, multicomponent compositions, defects, and disorder phenomena of clays and related phases necessitate the use of molecular models and modern simulation methods. Computational chemistry tools based on classical force fields and quantum-chemical methods of electronic structure calculations provide a practical approach to evaluate structure and dynamics of the materials on an atomic scale. Combined with classical energy minimization, molecular dynamics, and Monte Carlo techniques, quantum methods provide accurate models of layered materials such as clay minerals, layered double hydroxides, and clay-polymer nanocomposites
At the biological modeling and simulation frontier.
Hunt, C Anthony; Ropella, Glen E P; Lam, Tai Ning; Tang, Jonathan; Kim, Sean H J; Engelberg, Jesse A; Sheikh-Bahaei, Shahab
2009-11-01
We provide a rationale for and describe examples of synthetic modeling and simulation (M&S) of biological systems. We explain how synthetic methods are distinct from familiar inductive methods. Synthetic M&S is a means to better understand the mechanisms that generate normal and disease-related phenomena observed in research, and how compounds of interest interact with them to alter phenomena. An objective is to build better, working hypotheses of plausible mechanisms. A synthetic model is an extant hypothesis: execution produces an observable mechanism and phenomena. Mobile objects representing compounds carry information enabling components to distinguish between them and react accordingly when different compounds are studied simultaneously. We argue that the familiar inductive approaches contribute to the general inefficiencies being experienced by pharmaceutical R&D, and that use of synthetic approaches accelerates and improves R&D decision-making and thus the drug development process. A reason is that synthetic models encourage and facilitate abductive scientific reasoning, a primary means of knowledge creation and creative cognition. When synthetic models are executed, we observe different aspects of knowledge in action from different perspectives. These models can be tuned to reflect differences in experimental conditions and individuals, making translational research more concrete while moving us closer to personalized medicine.
Plasma simulation studies using multilevel physics models
International Nuclear Information System (INIS)
Park, W.; Belova, E.V.; Fu, G.Y.; Tang, X.Z.; Strauss, H.R.; Sugiyama, L.E.
1999-01-01
The question of how to proceed toward ever more realistic plasma simulation studies using ever increasing computing power is addressed. The answer presented here is the M3D (Multilevel 3D) project, which has developed a code package with a hierarchy of physics levels that resolve increasingly complete subsets of phase-spaces and are thus increasingly more realistic. The rationale for the multilevel physics models is given. Each physics level is described and examples of its application are given. The existing physics levels are fluid models (3D configuration space), namely magnetohydrodynamic (MHD) and two-fluids; and hybrid models, namely gyrokinetic-energetic-particle/MHD (5D energetic particle phase-space), gyrokinetic-particle-ion/fluid-electron (5D ion phase-space), and full-kinetic-particle-ion/fluid-electron level (6D ion phase-space). Resolving electron phase-space (5D or 6D) remains a future project. Phase-space-fluid models are not used in favor of δf particle models. A practical and accurate nonlinear fluid closure for noncollisional plasmas seems not likely in the near future. copyright 1999 American Institute of Physics
Plasma simulation studies using multilevel physics models
International Nuclear Information System (INIS)
Park, W.; Belova, E.V.; Fu, G.Y.
2000-01-01
The question of how to proceed toward ever more realistic plasma simulation studies using ever increasing computing power is addressed. The answer presented here is the M3D (Multilevel 3D) project, which has developed a code package with a hierarchy of physics levels that resolve increasingly complete subsets of phase-spaces and are thus increasingly more realistic. The rationale for the multilevel physics models is given. Each physics level is described and examples of its application are given. The existing physics levels are fluid models (3D configuration space), namely magnetohydrodynamic (MHD) and two-fluids; and hybrid models, namely gyrokinetic-energetic-particle/MHD (5D energetic particle phase-space), gyrokinetic-particle-ion/fluid-electron (5D ion phase-space), and full-kinetic-particle-ion/fluid-electron level (6D ion phase-space). Resolving electron phase-space (5D or 6D) remains a future project. Phase-space-fluid models are not used in favor of delta f particle models. A practical and accurate nonlinear fluid closure for noncollisional plasmas seems not likely in the near future
Modeling lift operations with SAS Simulation Studio
Kar, Leow Soo
2016-10-01
Lifts, or elevators, are an essential part of multistorey buildings, providing vertical transportation for their occupants. In large high-rise apartment buildings the occupants are permanent, while in buildings like hospitals or office blocks the occupants are temporary visitors or users; the population of such buildings is therefore much higher than in residential apartments. It is common these days for large office blocks or hospitals to have at least 8 to 10 lifts serving their population. In order to optimize service performance, different transportation schemes are devised to control the lift operations; for example, one lift may be assigned to serve only the even floors and another only the odd floors. In this paper, a basic lift system is modelled using SAS Simulation Studio to study the effect of factors such as the number of floors, the capacity of the lift car, the arrival and exit rates of passengers at each floor, and peak and off-peak periods on system performance. The simulation is applied to a real lift operation in Sunway College's North Building to validate the model.
A particle based simulation model for glacier dynamics
Directory of Open Access Journals (Sweden)
J. A. Åström
2013-10-01
A particle-based computer simulation model was developed for investigating the dynamics of glaciers. In the model, large ice bodies are made of discrete elastic particles bound together by massless elastic beams. These beams can break, which induces brittle behaviour. At loads below fracture, beams may also break and re-form with small probability, incorporating slowly deforming viscous behaviour into the model. This approach has the advantage that it can simulate important physical processes such as ice calving and fracturing more realistically than traditional continuum models. For benchmarking purposes, the deformation of an ice block on a slip-free surface was compared to that of a similar block simulated with a finite-element full-Stokes continuum model. Two simulations were performed: (1) calving of an ice block partially supported in water, similar to a grounded marine glacier terminus, and (2) fracturing of an ice block on an inclined plane of varying basal friction, which could represent the transition to fast flow or surging. Despite several approximations, including restriction to two dimensions and a simplified water-ice interaction, the model was able to reproduce the size distributions of the debris observed in calving, which may be approximated by universal scaling laws. On a moderate slope, a large ice block remained stable and quiescent as long as there was enough friction against the substrate. At a critical length of frictional contact, global sliding began and the model block disintegrated in a manner suggestive of a surging glacier; in this case the fragment size distribution produced was typical of a grinding process.
A Monte Carlo Simulation Framework for Testing Cosmological Models
Directory of Open Access Journals (Sweden)
Heymann Y.
2014-10-01
We tested alternative cosmologies using Monte Carlo simulations based on the sampling method of the zCosmos galactic survey. The survey encompasses a collection of observable galaxies with respective redshifts that have been obtained for a given spectroscopic area of the sky. Using a cosmological model, we can convert the redshifts into light-travel times and, by slicing the survey into small redshift buckets, compute a curve of galactic density over time. Because foreground galaxies obstruct the images of more distant galaxies, we simulated the theoretical galactic density curve using an average galactic radius. By comparing the galactic density curves of the simulations with that of the survey, we could assess the cosmologies. We applied the test to the expanding-universe cosmology of de Sitter and to a dichotomous cosmology.
Simulation as a vehicle for enhancing collaborative practice models.
Jeffries, Pamela R; McNelis, Angela M; Wheeler, Corinne A
2008-12-01
Clinical simulation used in a collaborative practice approach is a powerful tool to prepare health care providers for shared responsibility for patient care. Clinical simulations are being used increasingly in professional curricula to prepare providers for quality practice. Little is known, however, about how these simulations can be used to foster collaborative practice across disciplines. This article provides an overview of what simulation is, what collaborative practice models are, and how to set up a model using simulations. An example of a collaborative practice model is presented, and nursing implications of using a collaborative practice model in simulations are discussed.
Modeling and numerical simulations of the influenced Sznajd model
Karan, Farshad Salimi Naneh; Srinivasan, Aravinda Ramakrishnan; Chakraborty, Subhadeep
2017-08-01
This paper investigates the effects of independent nonconformists, or influencers, on the behavioral dynamics of a population of agents interacting with each other according to the Sznajd model. The system is modeled on a complete graph using the master equation, which is solved numerically. The accuracy of the mathematical model and its underlying assumptions is validated by numerical simulations. Regions of initial magnetization are identified from which the system converges to one of two unique steady-state PDFs, depending on the distribution of influencers. The scaling property and entropy of the stationary system under varying levels of influence are presented and discussed.
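The underlying opinion dynamics can be illustrated with a minimal Monte Carlo sketch of a two-state Sznajd-type rule on a complete graph. This is a mean-field agent-based analogue, not the paper's master-equation treatment, and it omits the influencers; the rule variant (an agreeing pair converts a random third agent) and all parameters are assumptions for illustration:

```python
import random

def sznajd_complete_graph(n=50, up_fraction=0.6, seed=1, max_steps=200000):
    """Monte Carlo sketch of a two-state Sznajd-type rule on a complete graph.

    Rule (outflow-style, mean-field variant): pick three distinct agents;
    if the first two share an opinion, the third adopts it.  Iterates
    until consensus (an absorbing state) and returns the magnetization.
    """
    rng = random.Random(seed)
    spins = [1] * int(n * up_fraction) + [-1] * (n - int(n * up_fraction))
    for _ in range(max_steps):
        if abs(sum(spins)) == n:          # consensus reached: stop
            break
        i, j, k = rng.sample(range(n), 3)
        if spins[i] == spins[j]:
            spins[k] = spins[i]           # agreeing pair convinces a third
    return sum(spins) / n
```

On a complete graph without influencers the dynamics is absorbed at full consensus (magnetization +1 or -1); the paper's influencers modify exactly this picture, shifting which steady state the population reaches.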
Ghanbarian, Behzad; Berg, Carl F.
2017-09-01
Accurate quantification of the formation resistivity factor F (also called the formation factor) provides useful insight into connectivity and pore-space topology in fully saturated porous media; in particular, the formation factor has been used extensively to estimate permeability in reservoir rocks. One of the most widely applied models for estimating F is Archie's law, F = ϕ^(-m), in which ϕ is total porosity and m is the cementation exponent; it is known to be valid in rocks with negligible clay content, such as clean sandstones. In this study we compare formation factors determined by percolation and effective-medium theories, as well as Archie's law, with numerical simulations of electrical resistivity on digital rock models. These digital models represent Bentheimer and Fontainebleau sandstones and are derived either by reconstruction or directly from micro-tomographic images. Results show that the universal quadratic power law from percolation theory accurately estimates the calculated formation factor values in network models over the entire range of porosity. However, it crosses over to the linear scaling of the effective-medium approximation at a porosity of 0.75 in grid models. We also show that the effect of critical porosity, disregarded in Archie's law, is nontrivial, and that the Archie model inaccurately estimates the formation factor in low-porosity homogeneous sandstones.
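The contrast between the two scalings can be made concrete in a few lines. Archie's law is exactly as stated above; for the percolation scaling we use a generic quadratic power law F = a(ϕ - ϕ_c)^(-2) with an assumed prefactor a and critical porosity ϕ_c (the paper's fitted constants are not reproduced here):

```python
def archie(phi, m=2.0):
    """Archie's law F = phi**(-m), valid for clean, clay-free rocks."""
    return phi ** (-m)

def percolation_factor(phi, phi_c=0.02, a=1.0):
    """Quadratic percolation scaling F = a*(phi - phi_c)**(-2).

    `phi_c` (critical porosity) and `a` are illustrative values; unlike
    Archie's law, F diverges as porosity approaches the threshold.
    """
    if phi <= phi_c:
        return float("inf")               # no connected pore network
    return a * (phi - phi_c) ** (-2.0)
```

Note the qualitative difference emphasized in the abstract: Archie's law stays finite for any positive porosity, while the percolation form diverges at ϕ_c, which is why the Archie model degrades in low-porosity sandstones.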
Haji, Umran; Pryor, Carlton; Applebaum, Elaad; Brooks, Alyson
2018-01-01
We compare the orbital properties of the satellite galaxies of the Milky Way to those of satellites found in simulated Milky Way-like systems as a means of testing cosmological simulations of galaxy formation. The particular problem that we are investigating is a discrepancy in the distribution of orbital eccentricities. Previous studies of Milky Way-mass systems analyzed in a semi-analytic ΛCDM cosmological model have found that the satellites tend to have significantly larger fractions of their kinetic energy invested in radial motion with respect to their central galaxy than do the real-world Milky Way satellites. We analyze several high-resolution ("zoom-in") hydrodynamical simulations of Milky Way-mass galaxies and their associated satellite systems to investigate why previous works found Milky Way-like systems to be rare. We find a possible relationship between a quiescent galactic assembly history and a distribution of satellite kinematics resembling that of the Milky Way. This project has been supported by funding from National Science Foundation grant PHY-1560077.
Energy Technology Data Exchange (ETDEWEB)
Araujo, Leonardo Rodrigues de [Instituto Federal do Espirito Santo, Vitoria, ES (Brazil)], E-mail: leoaraujo@ifes.edu.br; Donatelli, Joao Luiz Marcon [Universidade Federal do Espirito Santo (UFES), Vitoria, ES (Brazil)], E-mail: joaoluiz@npd.ufes.br; Silva, Edmar Alino da Cruz [Instituto Tecnologico de Aeronautica (ITA/CTA), Sao Jose dos Campos, SP (Brazil); Azevedo, Joao Luiz F. [Instituto de Aeronautica e Espaco (CTA/IAE/ALA), Sao Jose dos Campos, SP (Brazil)
2010-07-01
Thermal systems are essential in facilities such as thermoelectric plants, cogeneration plants, refrigeration systems, and air conditioning, among others, in which much of the energy consumed by humanity is processed. In a world with finite natural fuel resources and growing energy demand, issues related to thermal system design, such as cost estimation, design complexity, environmental protection, and optimization, are becoming increasingly important. Hence the need to understand the mechanisms that degrade energy, improve the use of energy sources, reduce environmental impacts, and also reduce project, operation, and maintenance costs. In recent years, procedures and techniques for the computational design of thermal systems have developed consistently. In this context, the fundamental objective of this study is a comparative analysis of the performance of structural and parametric optimization of a cogeneration system using two stochastic methods: genetic algorithms and simulated annealing. This work uses a superstructure, modelled in a process simulator (IPSEpro from SimTech), in which the design options appropriate to the case studied are included. Accordingly, the optimal configuration of the cogeneration system is determined as an outcome of the optimization process, restricted to the configuration options included in the superstructure. The optimization routines are written in MS Excel Visual Basic so as to couple seamlessly with the process simulator. At the end of the optimization process, the optimal system configuration, given the characteristics of each specific problem, is defined. (author)
Calibration and simulation of Heston model
Directory of Open Access Journals (Sweden)
Mrázek Milan
2017-05-01
We calibrate the Heston stochastic volatility model to real market data using several optimization techniques. We compare both global and local optimizers for different weights, showing remarkable differences even for data (DAX options) from two consecutive days. We provide a novel calibration procedure that incorporates the use of an approximation formula and significantly outperforms other existing calibration methods.
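Any such calibration needs a way to evaluate the Heston dynamics; a minimal building block is a Monte Carlo path simulator. The sketch below uses a full-truncation Euler scheme, a standard discretization choice; all parameter values are illustrative, not the calibrated DAX values from the paper:

```python
import math
import random

def heston_path(S0=100.0, v0=0.04, kappa=1.5, theta=0.04, xi=0.3,
                rho=-0.7, r=0.0, T=1.0, n_steps=252, seed=42):
    """Simulate one Heston path with a full-truncation Euler scheme.

    dS = r*S dt + sqrt(v)*S dW1,  dv = kappa*(theta - v) dt + xi*sqrt(v) dW2,
    corr(dW1, dW2) = rho.  Variance is floored at zero inside the square
    roots (full truncation), so discretized v may go negative but the
    diffusion terms stay real.
    """
    rng = random.Random(seed)
    dt = T / n_steps
    S, v = S0, v0
    path = [S]
    for _ in range(n_steps):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
        v_pos = max(v, 0.0)               # full truncation keeps sqrt real
        S *= math.exp((r - 0.5 * v_pos) * dt + math.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + xi * math.sqrt(v_pos * dt) * z2
        path.append(S)
    return path
```

A calibration loop would wrap a pricer built on paths like these (or, as the authors do, on an approximation formula for speed) inside a global or local optimizer over (kappa, theta, xi, rho, v0).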
Banaszek, Daniel; You, Daniel; Chang, Justues; Pickell, Michael; Hesse, Daniel; Hopman, Wilma M; Borschneck, Daniel; Bardana, Davide
2017-04-05
Work-hour restrictions set forth by the Accreditation Council for Graduate Medical Education (ACGME) and other governing bodies have forced training programs to seek out new learning tools to accelerate the acquisition of both medical skills and knowledge. As a result, competency-based training has become an important part of residency training. The purpose of this study was to directly compare arthroscopic skill acquisition in high-fidelity and low-fidelity simulator models and to assess skill transfer from either modality to a cadaveric specimen simulating intraoperative conditions. Forty surgical novices (pre-clerkship-level medical students) voluntarily participated in this trial. Baseline demographic data, as well as data on arthroscopic knowledge and skill, were collected prior to training. Subjects were randomized to 5-week independent training sessions on a high-fidelity virtual reality arthroscopic simulator or on a bench-top arthroscopic setup, or to an untrained control group. Post-training, subjects were asked to perform a diagnostic arthroscopy on both simulators and in a simulated intraoperative environment on a cadaveric knee. A more difficult surprise task was also incorporated to evaluate skill transfer. Subjects were evaluated using the Global Rating Scale (GRS), the 14-point arthroscopic checklist, and a timer to determine procedural efficiency (time per task). Secondary outcomes focused on objective measures of virtual reality simulator motion analysis. Trainees on both simulators demonstrated significant improvement, and the virtual reality simulation group consistently outperformed the bench-top model group in the diagnostic arthroscopy crossover tests and in the simulated cadaveric setup. Furthermore, the virtual reality group demonstrated superior skill transfer in the surprise skill-transfer task. Both high-fidelity and low-fidelity simulation training were effective in arthroscopic skill acquisition. High-fidelity virtual reality
SAPS simulation with GITM/UCLA-RCM coupled model
Lu, Y.; Deng, Y.; Guo, J.; Zhang, D.; Wang, C. P.; Sheng, C.
2017-12-01
Ion velocities in the sub-auroral region observed by satellites during storm time often show a significant westward component. These high-speed westward streams are distinct from the convection pattern; such events are called Sub-Auroral Polarization Streams (SAPS). During the 17 March 2013 storm, the DMSP F18 satellite observed several SAPS cases while crossing the sub-auroral region. In this study, the Global Ionosphere Thermosphere Model (GITM) has been coupled to the UCLA-RCM model to simulate the impact of SAPS during the March 2013 event on the ionosphere/thermosphere. The particle precipitation and electric field from RCM are used to drive GITM, and the conductance calculated from GITM is fed back to RCM to make the coupling self-consistent. GITM simulations with different SAPS specifications will be compared, and the simulated neutral wind will be compared with GOCE satellite data. Comparing runs with and without SAPS will isolate the effect of SAPS and illustrate its impact on TIDs/TADs propagating both poleward and equatorward.
Modeling for Stellar Feedback in Galaxy Formation Simulations
Núñez, Alejandro; Ostriker, Jeremiah P.; Naab, Thorsten; Oser, Ludwig; Hu, Chia-Yu; Choi, Ena
2017-02-01
Various heuristic approaches to model unresolved supernova (SN) feedback in galaxy formation simulations exist to reproduce the formation of spiral galaxies and the overall inefficient conversion of gas into stars. Some models, however, require resolution-dependent scalings. We present a subresolution model representing the three major phases of supernova blast wave evolution (free expansion, energy-conserving Sedov-Taylor, and momentum-conserving snowplow) with energy scalings adopted from high-resolution interstellar-medium simulations in both uniform and multiphase media. We allow for the effects of significantly enhanced SN remnant propagation in a multiphase medium, with the cooling radius scaling with the hot volume fraction f_hot as (1 - f_hot)^(-4/5). We also include winds from young massive stars and AGB stars, Strömgren sphere gas heating by massive stars, and a mechanism that limits gas cooling driven by radiative recombination of dense H II regions. We present initial tests for isolated Milky Way-like systems simulated with the Gadget-based code SPHgal with an improved SPH prescription. Compared to pure thermal SN input, the model significantly suppresses star formation at early epochs, with star formation extended both in time and space in better accord with observations. Compared to models with pure thermal SN feedback, the age at which half the stellar mass is assembled increases by a factor of 2.4, and the mass-loading parameter and gas outflow rate from the galactic disk increase by a factor of 2. Simulation results are converged for a variation of two orders of magnitude in particle mass in the range (1.3-130) × 10^4 solar masses.
Comparing soil moisture memory in satellite observations and models
Stacke, Tobias; Hagemann, Stefan; Loew, Alexander
2013-04-01
A major obstacle to a correct parametrization of soil processes in large-scale global land surface models is the lack of long-term soil moisture observations for large parts of the globe. Currently, a compilation of soil moisture data derived from a range of satellites is being released by the ESA Climate Change Initiative (ECV_SM). Comprising the period from 1978 until 2010, it provides the opportunity to compute climatologically relevant statistics on a quasi-global scale and to compare these to the output of climate models. Our study focuses on soil moisture memory in satellite observations and models. As a proxy for memory, we compute the autocorrelation length (ACL) of the available satellite data and of the uppermost soil layer of the models. In addition to the ECV_SM data, AMSR-E soil moisture is used as an observational estimate. Simulated soil moisture fields are taken from the ERA-Interim reanalysis and generated with the land surface model JSBACH, which was driven with quasi-observational meteorological forcing data. The satellite data show ACLs between one week and one month for the greater part of the land surface, while the models simulate a longer memory of up to two months. Some patterns are similar in models and observations, e.g. a longer memory in the Sahel zone and on the Arabian Peninsula, but the models are not able to reproduce regions with a very short ACL of just a few days. If the long-term seasonality is subtracted from the data, the memory is strongly shortened, indicating the importance of seasonal variations for the memory in most regions. Furthermore, we analyze the change of soil moisture memory in the different soil layers of the models to investigate to what extent the surface soil moisture includes information about the whole soil column. A first analysis reveals that the ACL increases for deeper layers. However, its increase is stronger in the soil moisture anomaly than in its absolute values and the first even exceeds the
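The autocorrelation-length proxy for memory can be sketched in a few lines. One common convention, assumed here (the study's exact ACL definition may differ), is the e-folding time: the first lag at which the sample autocorrelation drops below 1/e:

```python
import math
import random

def autocorr(x, lag):
    """Sample autocorrelation of sequence x at a given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((xi - mean) ** 2 for xi in x)
    cov = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag))
    return cov / var

def acl_efolding(x, max_lag=120):
    """Autocorrelation length: first lag where r(lag) falls below 1/e."""
    for lag in range(1, max_lag):
        if autocorr(x, lag) < 1.0 / math.e:
            return lag
    return max_lag
```

Applied to a daily soil moisture series, this returns the memory in days; a persistent (AR(1)-like) series yields a much longer ACL than white noise, which is the contrast the study exploits between models and satellite retrievals.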
On purpose simulation model for molten salt CSP parabolic trough
Caranese, Carlo; Matino, Francesca; Maccari, Augusto
2017-06-01
The utilization of computer codes and simulation software is a fundamental aspect of the development of any kind of technology, and in the CSP sector in particular for researchers, energy institutions, EPC contractors, and other stakeholders. To that end, several models for the simulation of CSP plants have been developed with different main objectives (dynamic simulation, productivity analysis, techno-economic optimization, etc.), each of which has shown its own validity and suitability. Some of these models have been designed to study several plant configurations, taking into account different CSP technologies (parabolic trough, linear Fresnel, solar tower, or dish) and different settings for the heat transfer fluid, the thermal storage systems, and the overall plant operating logic. Due to a lack of direct experience with commercial Molten Salt Parabolic Trough (MSPT) plant operation, most simulation tools do not foresee suitable management of the thermal energy storage logic and of the solar field freeze protection system, but follow standard schemes. ASSALT, Ase Software for SALT csp plants, has been developed to improve MSPT plant simulations by exploiting the most appropriate operational strategies in order to provide more accurate technical and economic results. In particular, ASSALT applies MSPT-specific control logics for the electric energy production and delivery strategy as well as for the operation modes of the solar field in off-normal sunshine conditions. With this approach, the estimated plant efficiency is increased and the electricity consumption required for plant operation and management is drastically reduced. Here we present a first comparative study of a real-case 55 MWe molten salt parabolic trough CSP plant located in the Tibetan highlands, using ASSALT and SAM (System Advisor Model), a commercially available simulation tool.
Application of Bond Graph Modeling for Photovoltaic Module Simulation
Directory of Open Access Journals (Sweden)
Madi S.
2016-01-01
In this paper, a photovoltaic generator is represented using the bond-graph methodology. Starting from the equivalent circuit, the bond graph and the block diagram of the photovoltaic generator are derived. Applying bond-graph elements and rules, a mathematical model of the photovoltaic generator is obtained. Simulation results for this model, using real recorded data (irradiation and temperature) from the Renewable Energies Development Centre in Bouzaréah, Algeria, are obtained with MATLAB/Simulink software. The results have been compared with the datasheet of the photovoltaic generator for validation purposes.
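The equivalent-circuit starting point mentioned above commonly reduces to the single-diode equation I = Iph - I0*(exp(V/(n*Ns*Vt)) - 1). The sketch below evaluates that equation; the bond-graph derivation itself is not reproduced, and all parameter values are illustrative rather than those of the test module in Bouzaréah:

```python
import math

def pv_current(v, irradiance=1000.0, temp_c=25.0,
               i_sc=8.0, i_0=1e-9, n=1.0, n_cells=60):
    """Single-diode PV model sketch: I = Iph - I0*(exp(V/(n*Ns*Vt)) - 1).

    The photocurrent Iph is assumed proportional to irradiance (W/m^2,
    normalized to 1000); i_sc, i_0, ideality factor n, and cell count
    are illustrative assumptions.  Series/shunt resistances are omitted.
    """
    vt = 8.617e-5 * (temp_c + 273.15)      # thermal voltage kT/q [V]
    i_ph = i_sc * irradiance / 1000.0      # photocurrent scales with light
    return i_ph - i_0 * (math.exp(v / (n * n_cells * vt)) - 1.0)
```

Sweeping `v` from zero upward traces the familiar I-V curve: current near the short-circuit value at low voltage, then a sharp drop toward the open-circuit voltage, with the whole curve scaling down as irradiance falls.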
Modelling and Simulation of Volume Controlled Mechanical Ventilation System
Directory of Open Access Journals (Sweden)
Yan Shi
2014-01-01
A volume-controlled mechanical ventilation system is a typical time-delay system, applied to ventilate patients who cannot breathe adequately on their own. To illustrate the influence of the ventilator's key parameters on the dynamics of the ventilated respiratory system, this paper first derives a new mathematical model of the ventilation system; second, simulation and experimental results are compared to verify the model; lastly, the influence of the ventilator's key parameters on the dynamics of the ventilated respiratory system is analyzed. This study can be helpful in VCV ventilation treatment and respiratory diagnostics.
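A common baseline for such models (simpler than the paper's time-delay system, and used here purely as an illustrative assumption) is the single-compartment equation of motion, Paw = R*Q + V/C + PEEP, evaluated during a constant-flow volume-controlled inspiration:

```python
def vcv_pressure(flow_lps=0.5, r=10.0, c=0.05, peep=5.0,
                 t_insp=1.0, dt=0.01):
    """Airway pressure during constant-flow VCV inspiration, sketched with
    the single-compartment equation of motion Paw = R*Q + V/C + PEEP.

    Illustrative units: flow in L/s, resistance in cmH2O/(L/s),
    compliance in L/cmH2O, pressures in cmH2O.  Delivered volume is
    simply V(t) = Q*t, so pressure ramps linearly during inspiration.
    """
    samples = []
    t = 0.0
    while t <= t_insp:
        v = flow_lps * t                      # delivered volume [L]
        paw = r * flow_lps + v / c + peep     # equation of motion
        samples.append((t, paw))
        t += dt
    return samples
```

Even this minimal sketch shows why the ventilator's key parameters matter: the resistive term sets the initial pressure step at flow onset, while compliance sets the slope of the subsequent ramp toward peak inspiratory pressure.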
A Simulation Model for Measuring Customer Satisfaction through Employee Satisfaction
Zondiros, Dimitris; Konstantopoulos, Nikolaos; Tomaras, Petros
2007-12-01
Customer satisfaction is defined as a measure of how a firm's product or service performs compared to customer expectations. It has long been a subject of research due to its importance for measuring marketing and business performance, and many models have been developed for its measurement. This paper proposes a simulation model using employee satisfaction as one of the most important factors leading to customer satisfaction (the others being expectations and disconfirmation of expectations). Data obtained from a two-year survey of bank customers in Greece were used. The application of three approaches to employee satisfaction resulted in greater customer satisfaction when a serious effort is made to keep employees satisfied.
Tecnomatix Plant Simulation modeling and programming by means of examples
Bangsow, Steffen
2015-01-01
This book systematically introduces the development of simulation models as well as the implementation and evaluation of simulation experiments with Tecnomatix Plant Simulation. It addresses all users of Plant Simulation who have more complex tasks to handle, while also offering an easy entry into the program. Particular attention is paid to introducing the simulation flow language SimTalk and its use in various areas of simulation. With over 200 examples, the author demonstrates how to combine the blocks into simulation models and how to use SimTalk for complex control and analysis.
Nonlinear distortion in wireless systems modeling and simulation with Matlab
Gharaibeh, Khaled M
2011-01-01
This book covers the principles of modeling and simulation of nonlinear distortion in wireless communication systems, with MATLAB simulations and techniques. The author describes the principles of modeling and simulation of nonlinear distortion in single and multichannel wireless communication systems using both deterministic and stochastic signals. Models and simulation methods for nonlinear amplifiers explain in detail how to analyze and evaluate the performance of data communication links under nonlinear amplification. The book addresses the analysis of nonlinear systems
Comparing models of offensive cyber operations
CSIR Research Space (South Africa)
Grant, T
2012-03-01
Full Text Available [Flattened comparison table of cyber-attack models; recoverable rows (source, domain, basis, actor type): Damballa 2008 (crime, case studies, lone actor); Owens et al. 2009 (warfare, literature, group); Croom 2010 (crime/APT, case studies, group); Dreijer 2011 (warfare, previous models and case studies, group); Van...] ...be needed by a geographically or functionally distributed group of attackers. While some of the models describe the installation of a backdoor or an advanced persistent threat (APT), none of them describe the behaviour involved in returning to a...
Equivalence of two models in single-phase multicomponent flow simulations
Wu, Yuanqing
2016-02-28
In this work, two models to simulate the single-phase multicomponent flow in reservoirs are introduced: single-phase multicomponent flow model and two-phase compositional flow model. Because the single-phase multicomponent flow is a special case of the two-phase compositional flow, the two-phase compositional flow model can also simulate the case. We compare and analyze the two models when simulating the single-phase multicomponent flow, and then demonstrate the equivalence of the two models mathematically. An experiment is also carried out to verify the equivalence of the two models.
Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations
Christensen, H. M.; Dawson, A.; Palmer, T.
2017-12-01
Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
Climate simulations for 1880-2003 with GISS modelE
International Nuclear Information System (INIS)
Hansen, J.; Lacis, A.; Miller, R.; Schmidt, G.A.; Russell, G.; Canuto, V.; Del Genio, A.; Hall, T.; Hansen, J.; Sato, M.; Kharecha, P.; Nazarenko, L.; Aleinov, I.; Bauer, S.; Chandler, M.; Faluvegi, G.; Jonas, J.; Ruedy, R.; Lo, K.; Cheng, Y.; Lacis, A.; Schmidt, G.A.; Del Genio, A.; Miller, R.; Cairns, B.; Hall, T.; Baum, E.; Cohen, A.; Fleming, E.; Jackman, C.; Friend, A.; Kelley, M.
2007-01-01
We carry out climate simulations for 1880-2003 with GISS modelE driven by ten measured or estimated climate forcings. An ensemble of climate model runs is carried out for each forcing acting individually and for all forcing mechanisms acting together. We compare side-by-side simulated climate change for each forcing, all forcings, observations, unforced variability among model ensemble members, and, if available, observed variability. Discrepancies between observations and simulations with all forcings are due to model deficiencies, inaccurate or incomplete forcings, and imperfect observations. Although there are notable discrepancies between model and observations, the fidelity is sufficient to encourage use of the model for simulations of future climate change. By using a fixed, well-documented model and accurately defining the 1880-2003 forcings, we aim to provide a benchmark against which the effect of improvements in the model, climate forcings, and observations can be tested. Principal model deficiencies include unrealistically weak tropical El Nino-like variability and a poor distribution of sea ice, with too much sea ice in the Northern Hemisphere and too little in the Southern Hemisphere. Greatest uncertainties in the forcings are the temporal and spatial variations of anthropogenic aerosols and their indirect effects on clouds. (authors)
Multiple Time Series Ising Model for Financial Market Simulations
International Nuclear Information System (INIS)
Takaishi, Tetsuya
2015-01-01
In this paper we propose an Ising model which simulates multiple financial time series. Our model introduces an interaction that couples each system to the spins of the other systems. Simulations of our model show that the time series exhibit the volatility clustering that is often observed in real financial markets. Furthermore, we find non-zero cross-correlations between the volatilities from our model. Thus our model can simulate stock markets where the volatilities of stocks are mutually correlated.
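The paper's exact interaction term is not given in the abstract. A toy version of the idea, two Ising chains whose spins also feel the mean magnetization of the other chain, updated with the Metropolis rule, might look like this; couplings, temperature, and sizes are illustrative.

```python
import math
import random

def coupled_ising(steps=500, n=64, j=1.0, k=0.5, beta=0.8, seed=1):
    """Toy multi-series Ising sketch: two 1-D spin chains whose spins also
    couple (strength k) to the mean magnetization of the other chain.
    Metropolis updates; all parameters are illustrative, not the paper's."""
    random.seed(seed)
    a = [random.choice((-1, 1)) for _ in range(n)]
    b = [random.choice((-1, 1)) for _ in range(n)]
    mags = []
    for _ in range(steps):
        for chain, other in ((a, b), (b, a)):
            m_other = sum(other) / n
            i = random.randrange(n)
            # local field: nearest neighbours plus cross-market coupling
            h = j * (chain[i - 1] + chain[(i + 1) % n]) + k * m_other
            d_e = 2.0 * chain[i] * h           # energy cost of flipping spin i
            if d_e <= 0 or random.random() < math.exp(-beta * d_e):
                chain[i] = -chain[i]
        mags.append(sum(a) / n)                # 'price' proxy for market A
    return mags

m = coupled_ising()
returns = [m[t] - m[t - 1] for t in range(1, len(m))]  # toy 'returns'
```

Volatility clustering and cross-correlated volatilities in the actual model emerge from this kind of feedback between coupled systems; the sketch only shows the mechanics of the cross-coupling term.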
Cognitive Modeling for Agent-Based Simulation of Child Maltreatment
Hu, Xiaolin; Puddy, Richard
This paper extends previous work to develop cognitive modeling for agent-based simulation of child maltreatment (CM). The developed model is inspired from parental efficacy, parenting stress, and the theory of planned behavior. It provides an explanatory, process-oriented model of CM and incorporates causality relationship and feedback loops from different factors in the social ecology in order for simulating the dynamics of CM. We describe the model and present simulation results to demonstrate the features of this model.
Comparing models of offensive cyber operations
CSIR Research Space (South Africa)
Grant, T
2015-10-01
Full Text Available would be needed by a Cyber Security Operations Centre in order to perform offensive cyber operations?". The analysis was performed, using as a springboard seven models of cyber-attack, and resulted in the development of what is described as a canonical...
Simulation of styrene polymerization reactors: kinetic and thermodynamic modeling
Directory of Open Access Journals (Sweden)
A. S. Almeida
2008-06-01
Full Text Available A mathematical model for the free radical polymerization of styrene is developed to predict the steady-state and dynamic behavior of a continuous process. Special emphasis is given to the kinetic and thermodynamic models, whose most sensitive parameters were estimated using data from an industrial plant. The thermodynamic model is based on a cubic equation of state and a mixing rule applied to the low-pressure vapor-liquid equilibrium of polymeric solutions, suitable for modeling auto-refrigerated polymerization reactors, which use the vaporization rate to remove the reaction heat from the exothermic reactions. The simulation results show the high predictive capability of the proposed model when compared with plant data for conversion, average molecular weights, polydispersity, melt flow index, and thermal properties for different polymer grades.
Directory of Open Access Journals (Sweden)
Maria Isabel Suero
2011-10-01
Full Text Available This study compared the educational effects of computer simulations developed in a hyper-realistic virtual environment with the educational effects of either traditional schematic simulations or a traditional optics laboratory. The virtual environment was constructed on the basis of Java applets complemented with a photorealistic visual output. This new virtual environment concept, which we call hyper-realistic, transcends basic schematic simulation; it provides the user with a more realistic perception of the physical phenomenon being simulated. We compared the learning achievements of three equivalent, homogeneous groups of undergraduates—an experimental group who used only the hyper-realistic virtual laboratory, a first control group who used a schematic simulation, and a second control group who used the traditional laboratory. The three groups received the same theoretical preparation and carried out equivalent practicals in their respective learning environments. The topic chosen for the experiment was optical aberrations. An analysis of variance applied to the data of the study demonstrated a statistically significant difference (p value <0.05) between the three groups. The learning achievements attained by the group using the hyper-realistic virtual environment were 6.1 percentage points higher than those for the group using the traditional schematic simulations and 9.5 percentage points higher than those for the group using the traditional laboratory.
Student measurement of blood pressure using a simulator arm compared with a live subject's arm.
Lee, Jennifer J; Sobieraj, Diana M; Kuti, Effie L
2010-06-15
To compare accuracy of blood pressure measurements using a live subject and a simulator arm, and to determine students' preferences regarding measurement. This was a crossover study comparing blood pressure measurements from a live subject and a simulator arm. Students completed an anonymous survey instrument defining opinions on ease of measurement. Fifty-seven students completed blood pressure measurements on live subjects while 72 students completed blood pressure measurements using the simulator arm. There were no significant systematic differences between the 2 measurement techniques. Systolic blood pressure measurements from a live subject arm were less likely to be within 4 mm Hg compared with measurements of a simulator arm. Diastolic blood pressure measurements were not significantly different between the 2 techniques. Accuracy of student measurement of blood pressure using a simulator arm was similar to the accuracy with a live subject. There was no difference in students' preferences regarding measurement techniques.
Fracture network modeling and GoldSim simulation support
International Nuclear Information System (INIS)
Sugita, Kenichiro; Dershowitz, William
2003-01-01
During Heisei-14, Golder Associates provided support for JNC Tokai through data analysis and simulation of the MIU Underground Rock Laboratory, participation in Task 6 of the Aespoe Task Force on Modelling of Groundwater Flow and Transport, and analysis of repository safety assessment technologies, including cell networks for evaluation of the disturbed rock zone (DRZ) and total systems performance assessment (TSPA). MIU Underground Rock Laboratory support during H-14 involved discrete fracture network (DFN) modelling in support of the Multiple Modelling Project (MMP) and the Long Term Pumping Test (LPT). Golder developed updated DFN models for the MIU site, reflecting updated analyses of fracture data, and developed scripts to support JNC simulations of flow and transport pathways within the MMP. Golder supported JNC participation in Task 6 of the Aespoe Task Force on Modelling of Groundwater Flow and Transport during H-14. Tasks 6A and 6B compared safety assessment (PA) and experimental time scale simulations along a pipe transport pathway. Task 6B2 extended the Task 6B simulations from 1-D to 2-D. For Task 6B2, Golder carried out single fracture transport simulations on a wide variety of generic heterogeneous 2D fractures using both experimental and safety assessment boundary conditions. The heterogeneous 2D fractures were implemented according to a variety of in-plane heterogeneity patterns. Multiple immobile zones were considered, including stagnant zones, infillings, altered wall rock, and intact rock. During H-14, JNC carried out extensive studies of the disturbed rock zone (DRZ) surrounding repository tunnels and drifts. Golder supported this activity by evaluating the calculation time necessary for simulating a reference heterogeneous DRZ cell network for a range of computational strategies. To support the development of JNC's total system performance assessment (TSPA) strategy, Golder carried out a review of the US DOE Yucca Mountain Project TSPA. This
Monte Carlo simulation of Markov unreliability models
International Nuclear Information System (INIS)
Lewis, E.E.; Boehm, F.
1984-01-01
A Monte Carlo method is formulated for the evaluation of the unreliability of complex systems with known component failure and repair rates. The formulation is in terms of a Markov process, allowing dependences between components to be modeled and computational efficiencies to be achieved in the Monte Carlo simulation. Two variance reduction techniques, forced transition and failure biasing, are employed to increase the computational efficiency of the random walk procedure. For an example problem these result in improved computational efficiency by more than three orders of magnitude over analog Monte Carlo. The method is generalized to treat problems with distributed failure and repair rate data, and a batching technique is introduced and shown to result in substantial increases in computational efficiency for an example problem. A method for separating the variance due to data uncertainty from that due to the finite number of random walks is presented. (orig.)
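As a sketch of the analog baseline that the paper's variance reduction techniques improve upon, the unreliability of a two-component parallel system with exponential failure and repair rates can be estimated by simulating the Markov process directly. The rates and mission time below are invented for the example.

```python
import random

def unreliability(lam=0.01, mu=0.1, t_miss=100.0, trials=20000, seed=42):
    """Analog Monte Carlo estimate of the unreliability of a two-component
    parallel system: probability that both components are down
    simultaneously before mission time t_miss.  Each component fails at
    rate lam when up and is repaired at rate mu when down (illustrative)."""
    random.seed(seed)
    fails = 0
    for _ in range(trials):
        t, up = 0.0, [True, True]
        while t < t_miss:
            # each component leaves its current state at rate lam (up) or mu (down)
            rates = [lam if u else mu for u in up]
            total = sum(rates)
            t += random.expovariate(total)     # time to the next transition
            if t >= t_miss:
                break                          # survived the mission
            # pick which component changes state, weighted by its rate
            k = 0 if random.random() < rates[0] / total else 1
            up[k] = not up[k]
            if not any(up):                    # system failure: absorb
                fails += 1
                break
    return fails / trials

# roughly 0.14 for these rates (MTTF ≈ (3*lam + mu)/(2*lam**2) ≈ 650 h)
print(unreliability())
```

Forced transitions and failure biasing, as in the paper, would reweight these random walks so that rare system failures are sampled far more often; the analog version above simply waits for them to happen.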
Lattice Boltzmann model for simulating immiscible two-phase flows
International Nuclear Information System (INIS)
Reis, T; Phillips, T N
2007-01-01
The lattice Boltzmann equation is often promoted as a numerical simulation tool that is particularly suitable for predicting the flow of complex fluids. This paper develops a two-dimensional 9-velocity (D2Q9) lattice Boltzmann model for immiscible binary fluids with variable viscosities and density ratio using a single relaxation time for each fluid. In the macroscopic limit, this model is shown to recover the Navier-Stokes equations for two-phase flows. This is achieved by constructing a two-phase component of the collision operator that induces the appropriate surface tension term in the macroscopic equations. A theoretical expression for surface tension is determined. The validity of this analysis is confirmed by comparing numerical and theoretical predictions of surface tension as a function of density. The model is also shown to predict Laplace's law for surface tension and Poiseuille flow of layered immiscible binary fluids. The spinodal decomposition of two fluids of equal density but different viscosity is then studied. At equilibrium, the system comprises one large low viscosity bubble enclosed by the more viscous fluid in agreement with theoretical arguments of Renardy and Joseph (1993 Fundamentals of Two-Fluid Dynamics (New York: Springer)). Two other simulations, namely the non-equilibrium rod rest and the coalescence of two bubbles, are performed to show that this model can be used to simulate two fluids with a large density ratio
Comparative measurements with seven rainfall simulators on uniform bare fallow land
Iserloh, T.; Ries, J.B.; Cerda, A.; Echeverria, M.T.; Fister, W.; Geissler, C.; Kuhn, N.J.; Leon, F.J.; Peters, P.; Schindewolf, M.; Schmidt, J.; Scholten, T.; Seeger, K.M.
2013-01-01
To assess the influence of rainfall simulator type and plot dimensions on runoff and erosion, seven small portable rainfall simulators from Freiberg, Tubingen, Trier (all Germany), Valencia, Zaragoza (both Spain), Basel (Switzerland) and Wageningen (the Netherlands) were compared on a prepared bare
Energy Technology Data Exchange (ETDEWEB)
Ahrens, James P [ORNL; Heitmann, Katrin [ORNL; Petersen, Mark R [ORNL; Woodring, Jonathan [Los Alamos National Laboratory (LANL); Williams, Sean [Los Alamos National Laboratory (LANL); Fasel, Patricia [Los Alamos National Laboratory (LANL); Ahrens, Christine [Los Alamos National Laboratory (LANL); Hsu, Chung-Hsing [ORNL; Geveci, Berk [ORNL
2010-11-01
This article presents a visualization-assisted process that verifies scientific-simulation codes. Code verification is necessary because scientists require accurate predictions to interpret data confidently. This verification process integrates iterative hypothesis verification with comparative, feature, and quantitative visualization. Following this process can help identify differences in cosmological and oceanographic simulations.
Isaranuwatchai, Wanrudee; Brydges, Ryan; Carnahan, Heather; Backstein, David; Dubrowski, Adam
2014-01-01
While the ultimate goal of simulation training is to enhance learning, cost-effectiveness is a critical factor. Research that compares simulation training in terms of educational- and cost-effectiveness will lead to better-informed curricular decisions. Using previously published data we conducted a cost-effectiveness analysis of three…
MACC regional multi-model ensemble simulations of birch pollen dispersion in Europe
Sofiev, M.; Berger, U.; Prank, M.; Vira, J.; Arteta, J.; Belmonte, J.; Bergmann, K.C.; Chéroux, F.; Elbern, H.; Friese, E.; Galan, C.; Gehrig, R.; Khvorostyanov, D.; Kranenburg, R.; Kumar, U.; Marécal, V.; Meleux, F.; Menut, L.; Pessi, A.M.; Robertson, L.; Ritenberga, O.; Rodinkova, V.; Saarto, A.; Segers, A.; Severova, E.; Sauliene, I.; Siljamo, P.; Steensen, B.M.; Teinemaa, E.; Thibaudon, M.; Peuch, V.H.
2015-01-01
This paper presents the first ensemble modelling experiment in relation to birch pollen in Europe. The seven-model European ensemble of MACC-ENS, tested in trial simulations over the flowering season of 2010, was run through the flowering season of 2013. The simulations have been compared with
Breimer, Gerben E.; Haji, Faizal A.; Bodani, Vivek; Cunningham, Melissa S.; Lopez-Rios, Adriana-Lucia; Okrainec, Allan; Drake, James M.
BACKGROUND: The relative educational benefits of virtual reality (VR) and physical simulation models for endoscopic third ventriculostomy (ETV) have not been evaluated "head to head." OBJECTIVE: To compare and identify the relative utility of a physical and VR ETV simulation model for use in
Prasad, K.
2017-12-01
Atmospheric transport is usually performed with weather models, e.g., the Weather Research and Forecasting (WRF) model, which employs a parameterized turbulence model and does not resolve the fine-scale dynamics generated by the flow around the buildings and features comprising a large city. The NIST Fire Dynamics Simulator (FDS) is a computational fluid dynamics model that utilizes large eddy simulation methods to model flow around buildings at length scales much smaller than is practical with models like WRF. FDS has the potential to evaluate the impact of complex topography on near-field dispersion and mixing that is difficult to simulate with a mesoscale atmospheric model. A methodology has been developed to couple the FDS model with WRF mesoscale transport models. The coupling is based on nudging the FDS flow field towards that computed by WRF, and is currently limited to one-way coupling performed in an off-line mode. This approach allows the FDS model to operate as a sub-grid scale model within a WRF simulation. To test and validate the coupled FDS-WRF model, the methane leak from the Aliso Canyon underground storage facility was simulated. Large eddy simulations were performed over the complex topography of various natural gas storage facilities, including Aliso Canyon, Honor Rancho and MacDonald Island, at 10 m horizontal and vertical resolution. The goals of these simulations included improving and validating transport models as well as testing leak hypotheses. Forward simulation results were compared with aircraft- and tower-based in-situ measurements as well as methane plumes observed using the NASA Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) and the next-generation instrument AVIRIS-NG. Comparison of simulation results with measurement data demonstrates the capability of the coupled FDS-WRF models to accurately simulate the transport and dispersion of methane plumes over urban domains. Simulated integrated methane enhancements will be presented and
Comparative study of computational model for pipe whip analysis
International Nuclear Information System (INIS)
Koh, Sugoong; Lee, Young-Shin
1993-01-01
Many types of pipe whip restraints are installed to protect structural components from the anticipated pipe whip phenomena of high-energy lines in nuclear power plants. It is necessary to investigate these phenomena accurately in order to evaluate the acceptability of the pipe whip restraint design. Various research programs have been conducted in many countries to develop analytical methods and to verify their validity. In this study, various calculational models in the ANSYS and ADLPIPE codes, both general-purpose finite element programs, were used to simulate the postulated pipe whips and obtain impact loads, and the calculated results were compared with experimental results from the sample pipe whip test of the U-shaped pipe whip restraints. Some calculational models, having a spring element between the pipe whip restraint and the pipe line, give reasonably good transient responses of the restraint forces compared with the experimental results, and could be useful in evaluating the acceptability of the pipe whip restraint design. (author)
Comparative Bicameral Models. Romania: Unicameralism versus Bicameralism
Directory of Open Access Journals (Sweden)
Cynthia Carmen CURT
2007-06-01
Full Text Available The paper attempts to evaluate the Romanian bicameral model and to identify and critically assess the options Romania has in choosing between a unicameral and a bicameral system. The analysis observes the characteristics of second chambers related to Romanian bicameralism, either because they influenced the configuration of the Romanian bicameral legislature or because their constitutional mechanisms could be used to preserve an efficient bicameral formula. The alternative of abandoning the bicameral formula, on arguments of simplifying and streamlining the legislative procedure, is also explored.
A Model for Comparing Free Cloud Platforms
Directory of Open Access Journals (Sweden)
Radu LIXANDROIU
2014-01-01
Full Text Available VMware, VirtualBox, Virtual PC and other popular desktop virtualization applications are used by only a fraction of IT users. This article attempts to build a comparison model for choosing the best cloud platform. Many virtualization applications, such as VMware (VMware Player), Oracle VirtualBox and Microsoft Virtual PC, are free for home users. The main goal of virtualization software is to allow users to run multiple operating systems simultaneously in one virtual environment, using one desktop computer.
Developing Cognitive Models for Social Simulation from Survey Data
Alt, Jonathan K.; Lieberman, Stephen
The representation of human behavior and cognition continues to challenge the modeling and simulation community. The use of survey and polling instruments to inform belief states, issue stances and action choice models provides a compelling means of developing models and simulations with empirical data. Using these types of data to populate social simulations can greatly enhance the feasibility of validation efforts, the reusability of social and behavioral modeling frameworks, and the testable reliability of simulations. We provide a case study demonstrating these effects, document the use of survey data to develop cognitive models, and suggest future paths forward for social and behavioral modeling.
Comparing Simulated and Theoretical Sampling Distributions of the U3 Person-Fit Statistic.
Emons, Wilco H. M.; Meijer, Rob R.; Sijtsma, Klaas
2002-01-01
Studied whether the theoretical sampling distribution of the U3 person-fit statistic is in agreement with the simulated sampling distribution under different item response theory models and varying item and test characteristics. Simulation results suggest that the use of standard normal deviates for the standardized version of the U3 statistic may…
A Geostationary Earth Orbit Satellite Model Using Easy Java Simulation
Wee, Loo Kang; Goh, Giam Hwee
2013-01-01
We develop an Easy Java Simulation (EJS) model for students to visualize geostationary orbits near Earth, modelled using a Java 3D implementation of the EJS 3D library. The simplified physics model is described and simulated using a simple constant angular velocity equation. We discuss four computer model design ideas: (1) a simple and realistic…
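The "simple constant angular velocity equation" can be sketched directly: Kepler's third law fixes the geostationary radius, and the satellite angle then advances uniformly. The Python sketch below is not the EJS/Java source, just the underlying physics.

```python
import math

# Geostationary radius from Kepler's third law: r^3 = G*M*T^2 / (4*pi^2)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24       # mass of Earth, kg
T = 86164.0        # sidereal day, s

r = (G * M * T ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
omega = 2 * math.pi / T          # constant angular velocity of the satellite

def position(t):
    """Satellite position (m) in the equatorial plane at time t (s)."""
    return (r * math.cos(omega * t), r * math.sin(omega * t))

print(round(r / 1e6, 1))         # prints 42.2 (million m from Earth's centre)
```

In a 3D visualization such as the EJS model, the same `position(t)` is simply rendered in the equatorial plane while the Earth rotates beneath it at the same angular rate, which is why the satellite appears fixed over one longitude.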
Modeling Mixed Bicycle Traffic Flow: A Comparative Study on the Cellular Automata Approach
Directory of Open Access Journals (Sweden)
Dan Zhou
2015-01-01
Full Text Available Simulation, as a powerful tool for evaluating transportation systems, has been widely used in transportation planning, management, and operations. Most of the simulation models are focused on motorized vehicles, and the modeling of nonmotorized vehicles is ignored. The cellular automata (CA model is a very important simulation approach and is widely used for motorized vehicle traffic. The Nagel-Schreckenberg (NS CA model and the multivalue CA (M-CA model are two categories of CA model that have been used in previous studies on bicycle traffic flow. This paper improves on these two CA models and also compares their characteristics. It introduces a two-lane NS CA model and M-CA model for both regular bicycles (RBs and electric bicycles (EBs. In the research for this paper, many cases, featuring different values for the slowing down probability, lane-changing probability, and proportion of EBs, were simulated, while the fundamental diagrams and capacities of the proposed models were analyzed and compared between the two models. Field data were collected for the evaluation of the two models. The results show that the M-CA model exhibits more stable performance than the two-lane NS model and provides results that are closer to real bicycle traffic.
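The single-lane Nagel-Schreckenberg rules that both of the paper's models extend can be sketched compactly. The two-lane RB/EB variants add lane changing and heterogeneous speeds, which are omitted here; `v_max` and `p_slow` are illustrative.

```python
import random

def ns_step(pos, vel, length, v_max=3, p_slow=0.3):
    """One parallel update of the Nagel-Schreckenberg CA on a ring road.
    pos holds distinct cell indices; vel the current speeds (cells/step)."""
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])
    new_pos, new_vel = pos[:], vel[:]
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % n]
        gap = (pos[ahead] - pos[i] - 1) % length   # empty cells ahead
        v = min(vel[i] + 1, v_max)                 # 1. acceleration
        v = min(v, gap)                            # 2. braking (no collision)
        if v > 0 and random.random() < p_slow:
            v -= 1                                 # 3. random slowdown
        new_vel[i] = v
        new_pos[i] = (pos[i] + v) % length         # 4. movement
    return new_pos, new_vel

random.seed(0)
pos = list(range(0, 40, 4))      # 10 bikes on a 40-cell ring
vel = [0] * len(pos)
for _ in range(100):
    pos, vel = ns_step(pos, vel, 40)
print(max(vel) <= 3, len(set(pos)) == len(pos))   # prints True True
```

Fundamental diagrams like those in the paper are obtained by sweeping the density (number of bikes per cell) and averaging the flow `sum(vel)/length` over many steps; the M-CA variant instead allows several bicycles per cell.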
Forecasting Lightning Threat using Cloud-resolving Model Simulations
McCaul, E. W., Jr.; Goodman, S. J.; LaCasse, K. M.; Cecil, D. J.
2009-01-01
As numerical forecasts capable of resolving individual convective clouds become more common, it is of interest to see if quantitative forecasts of lightning flash rate density are possible, based on fields computed by the numerical model. Previous observational research has shown robust relationships between observed lightning flash rates and inferred updraft and large precipitation ice fields in the mixed-phase regions of storms, and that these relationships might allow simulated fields to serve as proxies for lightning flash rate density. It is shown in this paper that two simple proxy fields do indeed provide reasonable and cost-effective bases for creating time-evolving maps of predicted lightning flash rate density, judging from a series of diverse simulation case study events in North Alabama for which Lightning Mapping Array data provide ground truth. One method is based on the product of upward velocity and the mixing ratio of precipitating ice hydrometeors, modeled as graupel only, in the mixed-phase region of storms at the -15 °C level, while the second method is based on the vertically integrated amounts of ice hydrometeors in each model grid column. Each method can be calibrated by comparing domainwide statistics of the peak values of simulated flash rate proxy fields against domainwide peak total lightning flash rate density data from observations. Tests show that the first method is able to capture much of the temporal variability of the lightning threat, while the second method does a better job of depicting the areal coverage of the threat. A blended solution is designed to retain most of the temporal sensitivity of the first method, while adding the improved spatial coverage of the second. Weather Research and Forecast Model simulations of selected North Alabama cases show that this model can distinguish the general character and intensity of most convective events, and that the proposed methods show promise as a means of generating
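The two proxies have simple functional forms. A sketch with the calibration constants left as free parameters (the actual calibration against domainwide peak LMA flash rates is described in the paper, and the blend weighting below is purely illustrative):

```python
def proxy1(w_15, q_graupel_15, c1=1.0):
    """Flash-rate proxy 1: updraft speed w (m/s) times graupel mixing
    ratio (kg/kg), both taken at the -15 C level; c1 is a calibration
    constant fitted to observed peak flash rate densities."""
    return c1 * w_15 * q_graupel_15

def proxy2(rho, q_ice, dz, c2=1.0):
    """Flash-rate proxy 2: vertically integrated ice content (kg/m^2) in a
    grid column; rho is air density (kg/m^3) and dz layer depth (m)."""
    return c2 * sum(r * q * d for r, q, d in zip(rho, q_ice, dz))

def blended(p1, p2, w1=0.5):
    """Illustrative blend of the two calibrated proxies; the paper's
    actual weighting is not given in the abstract."""
    return w1 * p1 + (1.0 - w1) * p2
```

For example, `proxy1(10.0, 0.002)` combines a 10 m/s updraft with a 2 g/kg graupel mixing ratio; after calibration the result is read as a flash rate density rather than a raw product.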
Four Models of In Situ Simulation
DEFF Research Database (Denmark)
Musaeus, Peter; Krogh, Kristian; Paltved, Charlotte
2014-01-01
Introduction: In situ simulation is characterized by being situated in the clinical environment, as opposed to the simulation laboratory, but it bears a family resemblance to other types of on-the-job training. We explore a typology of in situ simulation and suggest that there are four fruitful approaches: (1) in situ simulation informed by critical incidents and adverse events reported from the emergency departments (EDs) in which team training is about to be conducted, used to write scenarios; (2) in situ simulation informed by ethnographic studies at the ED; (3) using... the following processes: transition processes, action processes, and interpersonal processes. Design and purpose: This abstract suggests four approaches to in situ simulation. A pilot study will evaluate the different approaches in two emergency departments in the Central Region of Denmark. Methods: The typology...
Molecular Simulation towards Efficient and Representative Subsurface Reservoirs Modeling
Kadoura, Ahmad Salim
2016-01-01
This dissertation focuses on the application of Monte Carlo (MC) molecular simulation and Molecular Dynamics (MD) in modeling thermodynamics and flow of subsurface reservoir fluids. At first, MC molecular simulation is proposed as a promising method
Modelling toolkit for simulation of maglev devices
Peña-Roche, J.; Badía-Majós, A.
2017-01-01
A stand-alone App1 has been developed, focused on obtaining information about relevant engineering properties of magnetic levitation systems. Our modelling toolkit provides real time simulations of 2D magneto-mechanical quantities for superconductor (SC)/permanent magnet structures. The source code is open and may be customised for a variety of configurations. Ultimately, it relies on the variational statement of the critical state model for the superconducting component and has been verified against experimental data for YBaCuO/NdFeB assemblies. On a quantitative basis, the values of the arising forces, induced superconducting currents, as well as a plot of the magnetic field lines are displayed upon selection of an arbitrary trajectory of the magnet in the vicinity of the SC. The stability issues related to the cooling process, as well as the maximum attainable forces for a given material and geometry are immediately observed. Due to the complexity of the problem, a strategy based on cluster computing, database compression, and real-time post-processing on the device has been implemented.
Simulation and Modeling Application in Agricultural Mechanization
Directory of Open Access Journals (Sweden)
R. M. Hudzari
2012-01-01
Full Text Available This experiment was conducted to determine the equations relating the Hue digital values of the fruit surface of the oil palm to the maturity stage of the fruit in the plantation. The FFB images were zoomed and captured using a Nikon digital camera, and the Hue was calculated from the highest-frequency values of the R, G, and B color components obtained with histogram analysis software. A new procedure for monitoring the image pixel value of the oil palm fruit surface color during real-time growth to maturity was developed. The predicted harvesting day was calculated based on the developed model relating Hue values to mesocarp oil content. The regression-based simulation model predicts the day of harvesting, or the number of days before harvest, of FFB. The result from the experiment on mesocarp oil content can be used for real-time oil content determination with the MPOB color meter. The graph used to determine the day of harvesting the FFB is presented in this research. The oil was found to start developing in the mesocarp 65 days before the fruit reaches the ripe maturity stage of 75% oil to dry mesocarp.
International Nuclear Information System (INIS)
Hammond, Glenn E.; Cygan, Randall Timothy
2007-01-01
Within reactive geochemical transport, several conceptual models exist for simulating sorption processes in the subsurface. Historically, the K_D approach has been the method of choice due to its ease of implementation within a reactive transport model and straightforward comparison with experimental data. However, for modeling complex sorption phenomena (e.g. sorption of radionuclides onto mineral surfaces), this approach does not systematically account for variations in location, time, or chemical conditions, and more sophisticated methods such as a surface complexation model (SCM) must be utilized. It is critical to determine which conceptual model to use; that is, when the material variation becomes important to regulatory decisions. The geochemical transport tool GEOQUIMICO has been developed to assist in this decision-making process. GEOQUIMICO provides a user-friendly framework for comparing the accuracy and performance of sorption conceptual models. The model currently supports the K_D and SCM conceptual models. The code is written in the object-oriented Java programming language to facilitate model development and improve code portability. The basic theory underlying geochemical transport and the sorption conceptual models noted above is presented in this report. Explanations are provided of how these physicochemical processes are implemented in GEOQUIMICO, and a brief verification study comparing GEOQUIMICO results to data found in the literature is given
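As a toy illustration of why the K_D approach is so easy to implement, a constant distribution coefficient collapses sorption into a single retardation factor for transport. The parameter values below are illustrative assumptions, not GEOQUIMICO inputs:

```python
# K_D approach sketch: constant sorption coefficient -> constant retardation.
bulk_density = 1.6e3   # kg/m^3, assumed dry bulk density of the medium
porosity = 0.3         # assumed water-filled porosity
kd = 1.0e-3            # m^3/kg, assumed distribution coefficient K_D

# Retardation factor: R = 1 + (rho_b / theta) * K_D
retardation = 1.0 + (bulk_density / porosity) * kd

# A sorbing solute moving with pore-water velocity v travels at v / R
v_water = 1.0          # m/yr, assumed pore-water velocity
v_solute = v_water / retardation
```

An SCM, by contrast, would recompute the effective partitioning at every location and time step from the local chemical conditions, which is exactly the variation the constant K_D above cannot capture.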
Do downscaled general circulation models reliably simulate historical climatic conditions?
Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight
2018-01-01
The accuracy of statistically downscaled (SD) general circulation model (GCM) simulations of monthly surface climate for historical conditions (1950–2005) was assessed for the conterminous United States (CONUS). The SD monthly precipitation (PPT) and temperature (TAVE) from 95 GCMs from phases 3 and 5 of the Coupled Model Intercomparison Project (CMIP3 and CMIP5) were used as inputs to a monthly water balance model (MWBM). Distributions of MWBM input (PPT and TAVE) and output [runoff (RUN)] variables derived from gridded station data (GSD) and historical SD climate were compared using the Kolmogorov–Smirnov (KS) test. For all three variables considered, the KS test results showed that variables simulated using CMIP5 generally are more reliable than those derived from CMIP3, likely due to improvements in PPT simulations. At most locations across the CONUS, the largest differences between GSD and SD PPT and RUN occurred in the lowest part of the distributions (i.e., low-flow RUN and low-magnitude PPT). Results indicate that for the majority of the CONUS, there are downscaled GCMs that can reliably simulate historical climatic conditions. But, in some geographic locations, none of the SD GCMs replicated historical conditions for two of the three variables (PPT and RUN) based on the KS test, with a significance level of 0.05. In these locations, improved GCM simulations of PPT are needed to more reliably estimate components of the hydrologic cycle. Simple metrics and statistical tests, such as those described here, can provide an initial set of criteria to help simplify GCM selection.
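The KS comparison at the heart of this screening can be sketched directly: the two-sample statistic is just the maximum distance between the two empirical CDFs. The synthetic "observed" and "downscaled" precipitation series below are assumptions for illustration, not the study's data:

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: max distance between ECDFs."""
    x, y = np.sort(x), np.sort(y)
    grid = np.concatenate([x, y])
    cdf_x = np.searchsorted(x, grid, side="right") / len(x)
    cdf_y = np.searchsorted(y, grid, side="right") / len(y)
    return np.abs(cdf_x - cdf_y).max()

rng = np.random.default_rng(1)
n_months = 672                                 # 56 years of monthly values
gsd_ppt = rng.gamma(2.0, 40.0, n_months)       # synthetic "observed" PPT (mm)
sd_ppt_good = rng.gamma(2.0, 40.0, n_months)   # downscaled GCM close to obs
sd_ppt_bad = rng.gamma(2.0, 60.0, n_months)    # downscaled GCM with wet bias

d_good = ks_statistic(gsd_ppt, sd_ppt_good)    # small statistic: reliable
d_bad = ks_statistic(gsd_ppt, sd_ppt_bad)      # large statistic: unreliable
```

In practice one would compare each statistic against the KS critical value at the 0.05 significance level (as the study does) rather than ranking the raw statistics alone.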
Simulating radial dose of ion tracks in liquid water simulated with Geant4-DNA: A comparative study
Czech Academy of Sciences Publication Activity Database
Incerti, S.; Psaltaki, M.; Gillet, P.; Barberet, P.; Bardies, M.; Bernal, M. A.; Bordage, M. C.; Breton, V.; Davídková, Marie; Delage, E.; El Bitar, Z.; Francis, Z.; Guatelli, S.; Ivanchenko, A.; Ivanchenko, V.; Karamitros, M.; Lee, S. B.; Maigne, L.; Meylan, S.; Murakami, K.; Nieminen, P.; Payno, H.; Perrot, Y.; Petrovic, I.; Pham, Q. T.; Ristic-Fira, A.; Santin, G.; Sasaki, T.; Seznec, H.; Shin, J. I.; Štěpán, Václav; Tran, H. N.; Villagrasa, C.
2014-01-01
Roč. 333, AUG (2014), s. 92-98 ISSN 0168-583X Institutional support: RVO:61389005 Keywords: Monte-Carlo Simulation * Tissue-Equivalent gas * heavy-ion * Alpha-Beams * Particles * Dosimetry * Protons * Models * Codes * Path Subject RIV: BO - Biophysics Impact factor: 1.124, year: 2014
Biologically based modelling and simulation of carcinogenesis at low doses
International Nuclear Information System (INIS)
Ouchi, Noriyuki B.
2003-01-01
The process of carcinogenesis is studied by computer simulation. In general, a large number of experimental samples is needed to detect mutations at low doses, but in practice it is difficult to obtain that much data. To address the low-dose situation, it is useful to study the process of carcinogenesis using a biologically based mathematical model. We have mainly studied it using the well-known 'multi-stage model', but that model becomes complicated as the recent findings of molecular biology experiments are incorporated. Moreover, since the basic idea of the multi-stage model rests on the epidemiologic observation of a log-log variation of cancer incidence with age, it is difficult to compare with experimental data from irradiated cell culture systems, which have been accumulating in recent years. Taking the above into consideration, we concluded that a new model was needed with the following features: 1) the unit of the target system is a cell, 2) new information from molecular biology can be easily introduced, 3) the model has spatial coordinates for checking colony formation or tumorigenesis. In this presentation, we show the details of the model and some simulation results on carcinogenesis. (author)
Analysis of Intelligent Transportation Systems Using Model-Driven Simulations
Directory of Open Access Journals (Sweden)
Alberto Fernández-Isabel
2015-06-01
Full Text Available Intelligent Transportation Systems (ITSs integrate information, sensor, control, and communication technologies to provide transport related services. Their users range from everyday commuters to policy makers and urban planners. Given the complexity of these systems and their environment, their study in real settings is frequently unfeasible. Simulations help to address this problem, but present their own issues: there can be unintended mistakes in the transition from models to code; their platforms frequently bias modeling; and it is difficult to compare works that use different models and tools. In order to overcome these problems, this paper proposes a framework for a model-driven development of these simulations. It is based on a specific modeling language that supports the integrated specification of the multiple facets of an ITS: people, their vehicles, and the external environment; and a network of sensors and actuators conveniently arranged and distributed that operates over them. The framework works with a model editor to generate specifications compliant with that language, and a code generator to produce code from them using platform specifications. There are also guidelines to help researchers in the application of this infrastructure. A case study on advanced management of traffic lights with cameras illustrates its use.
A Probabilistic Model of Meter Perception: Simulating Enculturation
Directory of Open Access Journals (Sweden)
Bastiaan van der Weij
2017-05-01
Full Text Available Enculturation is known to shape the perception of meter in music but this is not explicitly accounted for by current cognitive models of meter perception. We hypothesize that the induction of meter is a result of predictive coding: interpreting onsets in a rhythm relative to a periodic meter facilitates prediction of future onsets. Such prediction, we hypothesize, is based on previous exposure to rhythms. As such, predictive coding provides a possible explanation for the way meter perception is shaped by the cultural environment. Based on this hypothesis, we present a probabilistic model of meter perception that uses statistical properties of the relation between rhythm and meter to infer meter from quantized rhythms. We show that our model can successfully predict annotated time signatures from quantized rhythmic patterns derived from folk melodies. Furthermore, we show that by inferring meter, our model improves prediction of the onsets of future events compared to a similar probabilistic model that does not infer meter. Finally, as a proof of concept, we demonstrate how our model can be used in a simulation of enculturation. From the results of this simulation, we derive a class of rhythms that are likely to be interpreted differently by enculturated listeners with different histories of exposure to rhythms.
Modeling and Simulation of a lab-scale Fluidised Bed
Directory of Open Access Journals (Sweden)
Britt Halvorsen
2002-04-01
Full Text Available The flow behaviour of a lab-scale fluidised bed with a central jet has been simulated. The study has been performed with an in-house computational fluid dynamics (CFD model named FLOTRACS-MP-3D. The CFD model is based on a multi-fluid Eulerian description of the phases, where the kinetic theory for granular flow forms the basis for turbulence modelling of the solid phases. A two-dimensional Cartesian co-ordinate system is used to describe the geometry. This paper discusses whether bubble formation and bed height are influenced by coefficient of restitution, drag model and number of solid phases. Measurements of the same fluidised bed with a digital video camera are performed. Computational results are compared with the experimental results, and the discrepancies are discussed.
Modelling of windmill induction generators in dynamic simulation programs
DEFF Research Database (Denmark)
Akhmatov, Vladislav; Knudsen, Hans
1999-01-01
For AC networks with large amounts of induction generators, in the case of e.g. windmills, the paper demonstrates a significant discrepancy in the simulated voltage recovery after faults in weak networks when comparing results obtained with dynamic stability programs and transient programs, respectively, with and without a model of the mechanical shaft. The reasons for the discrepancies are explained, and it is shown that the phenomenon is due partly to the presence of DC offset currents in the induction machine stator, and partly to the mechanical shaft system of the wind turbine and the generator rotor. It is shown that it is possible to include a transient model in dynamic stability programs and thus obtain correct results also in dynamic stability programs. A mechanical model of the shaft system has also been included in the generator model...
Bond slip model for the simulation of reinforced concrete structures
International Nuclear Information System (INIS)
Casanova, A.; Jason, L.; Davenne, L.
2012-01-01
This paper presents a new finite element approach to model the steel-concrete bond effects. This model proposes to relate steel, represented by truss elements, with the surrounding concrete in the case where the two meshes are not necessarily coincident. The theoretical formulation is described and the model is applied on a reinforced concrete tie. A characteristic stress distribution is observed, related to the transfer of bond forces from steel to concrete. The results of this simulation are compared with a computation in which a perfect relation between steel and concrete is supposed. It clearly shows how the introduction of the bond model can improve the description of the cracking process (finite number of cracks). (authors)
Fracture network modeling and GoldSim simulation support
International Nuclear Information System (INIS)
Sugita, Kenichiro; Dershowitz, William
2004-01-01
During Heisei-15, Golder Associates provided support for JNC Tokai through discrete fracture network data analysis and simulation of the MIU Underground Rock Laboratory, participation in Task 6 of the Aespoe Task Force on Modelling of Groundwater Flow and Transport, and development of methodologies for analysis of repository site characterization strategies and safety assessment. MIU Underground Rock Laboratory support during H-15 involved development of new discrete fracture network (DFN) models for the MIU Shoba-sama Site, in the region of shaft development. Golder developed three DFN models for the site using discrete fracture network, equivalent porous medium (EPM), and nested DFN/EPM approaches. Each of these models was compared based upon criteria established for the multiple modeling project (MMP). Golder supported JNC participation in Tasks 6AB, 6D and 6E of the Aespoe Task Force on Modelling of Groundwater Flow and Transport during H-15. For Task 6AB, Golder implemented an updated microstructural model in GoldSim, and used this updated model to simulate the propagation of uncertainty from experimental to safety assessment time scales, for 5 m scale transport path lengths. Tasks 6D and 6E compared safety assessment (PA) and experimental time scale simulations in a 200 m scale discrete fracture network. For Task 6D, Golder implemented a DFN model using FracMan/PA Works, and determined the sensitivity of solute transport to a range of material property and geometric assumptions. For Task 6E, Golder carried out demonstration FracMan/PA Works transport calculations at a 1 million year time scale, to ensure that task specifications are realistic. The majority of work for Task 6E will be carried out during H-16. During H-15, Golder supported JNC's Total System Performance Assessment (TSPA) strategy by developing technologies for the analysis of precipitant concentration. These approaches were based on the GoldSim precipitant data management features, and were
Modelization and simulation of capillary barriers
International Nuclear Information System (INIS)
Lisbona Cortes, F.; Aguilar Villa, G.; Clavero Gracia, C.; Gracia Lozano, J.L.
1998-01-01
Among the different underground transport phenomena, that due to water flow is of great relevance. Water flows in infiltration and percolation processes are responsible for the transport of hazardous wastes towards phreatic layers. From the industrial and geological standpoints, there is great interest in the design of natural devices to avoid flows transporting polluting substances. This interest increases when the devices are used to isolate radioactive waste repositories, whose lifetime must be longer than several hundred years. The so-called natural devices are those based on the superimposition of materials with different hydraulic properties. In particular, flow retention in this kind of stratified medium, under unsaturated conditions, is basically due to the capillary barrier effect, which results from placing a low-conductivity material over another with a high hydraulic conductivity. Covers designed around this effect also have to allow drainage of the upper layer. The lower cost of these covers with respect to other kinds of protection systems, and the stability in time of their components, make them very attractive. However, a prior investigation is required to determine their effectiveness. In this report we present the computer code BCSIM, useful for easy simulation of unsaturated flows in a capillary barrier configuration with drainage, and intended to serve as a tool for designing efficient covers. The model, the numerical algorithm and several implementation aspects are described. Results obtained in several simulations, confirming the effectiveness of capillary barriers as a technique to build safety covers for hazardous waste repositories, are presented. (Author)
Simulation Models for Socioeconomic Inequalities in Health: A Systematic Review
Directory of Open Access Journals (Sweden)
Niko Speybroeck
2013-11-01
Full Text Available Background: The emergence and evolution of socioeconomic inequalities in health involves multiple factors interacting with each other at different levels. Simulation models are suitable for studying such complex and dynamic systems and have the ability to test the impact of policy interventions in silico. Objective: To explore how simulation models were used in the field of socioeconomic inequalities in health. Methods: An electronic search of studies assessing socioeconomic inequalities in health using a simulation model was conducted. Characteristics of the simulation models were extracted and distinct simulation approaches were identified. As an illustration, a simple agent-based model of the emergence of socioeconomic differences in alcohol abuse was developed. Results: We found 61 studies published between 1989 and 2013. Ten different simulation approaches were identified. The agent-based model illustration showed that multilevel, reciprocal and indirect effects of social determinants on health can be modeled flexibly. Discussion and Conclusions: Based on the review, we discuss the utility of using simulation models for studying health inequalities, and refer to good modeling practices for developing such models. The review and the simulation model example suggest that the use of simulation models may enhance the understanding and debate about existing and new frameworks for socioeconomic inequalities in health.
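A minimal agent-based sketch along the lines of the alcohol-abuse illustration might look like the following. All parameters here, the SES split, the uptake and quitting probabilities, and the use of a global rather than network-based peer effect, are assumptions for demonstration, not values from the reviewed studies:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
low_ses = rng.random(n) < 0.5            # half the agents have low SES
risky = rng.random(n) < 0.05             # initial prevalence 5% everywhere

base = np.where(low_ses, 0.04, 0.01)     # assumed SES-dependent uptake probability
quit_p = 0.02                            # assumed uniform quitting probability

for _ in range(50):
    peer = risky.mean()                  # simplified global peer influence
    uptake = base * (1.0 + 4.0 * peer)   # social reinforcement of the behaviour
    start = (~risky) & (rng.random(n) < uptake)
    stop = risky & (rng.random(n) < quit_p)
    risky = (risky | start) & ~stop

prev_low = risky[low_ses].mean()         # prevalence among low-SES agents
prev_high = risky[~low_ses].mean()       # prevalence among high-SES agents
```

Even this toy version shows how a direct effect of SES (the uptake gradient) and an indirect, reciprocal effect (peer reinforcement) combine into a persistent socioeconomic gap in the behaviour.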
An electrical circuit model for simulation of indoor radon concentration.
Musavi Nasab, S M; Negarestani, A
2013-01-01
In this study, a new model based on electric circuit theory was introduced to simulate the behaviour of indoor radon concentration. In this model, a voltage source simulates radon generation in walls, conductivity simulates migration through walls and the voltage across a capacitor simulates the radon concentration in a room. The simulation considers migration of radon through walls by a diffusion mechanism in one-dimensional geometry. Data reported for a typical Greek house were employed to examine the application of this simulation technique to indoor radon behaviour.
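The circuit analogy maps onto a first-order ODE: radon entry charges the room like a source charging a capacitor, while ventilation and decay discharge it. A sketch with assumed parameter values (not the Greek-house data):

```python
import numpy as np

# Assumed illustrative parameters for the RC-like room balance
entry_rate = 10.0      # Bq m^-3 h^-1, radon entry per unit room volume (source)
lam = 0.5              # h^-1, removal rate from ventilation + decay (1/RC)

t = np.linspace(0.0, 24.0, 241)
# Analytic "capacitor charging" solution of dC/dt = entry_rate - lam*C, C(0)=0
c_analytic = (entry_rate / lam) * (1.0 - np.exp(-lam * t))

# Explicit Euler integration of the same ODE, for comparison
dt = t[1] - t[0]
c = np.zeros_like(t)
for i in range(1, len(t)):
    c[i] = c[i - 1] + dt * (entry_rate - lam * c[i - 1])

steady = entry_rate / lam   # plateau concentration, here 20 Bq/m^3
```

The plateau `entry_rate / lam` plays the role of the steady capacitor voltage; a tighter (less ventilated) room corresponds to a larger RC time constant and a higher equilibrium concentration.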
Aircraft vulnerability analysis by modeling and simulation
Willers, Cornelius J.; Willers, Maria S.; de Waal, Alta
2014-10-01
guidance acceleration and seeker sensitivity. For the purpose of this investigation the aircraft is equipped with conventional pyrotechnic decoy flares and the missile has no counter-countermeasure means (security restrictions on open publication). This complete simulation is used to calculate the missile miss distance when the missile is launched from different locations around the aircraft. The miss distance data are then graphically presented, showing miss distance (aircraft vulnerability) as a function of launch direction and range. The aircraft vulnerability graph accounts for aircraft and missile characteristics, but does not account for missile deployment doctrine. A Bayesian network is constructed to fuse the doctrinal rules with the aircraft vulnerability data. The Bayesian network then provides the capability to evaluate the combined risk of missile launch and aircraft vulnerability. It is shown in this paper that it is indeed possible to predict aircraft vulnerability to missile attack through comprehensive modelling in a holistic process. By using the appropriate real-world models, this approach is used to evaluate the effectiveness of specific countermeasure techniques against specific missile threats. The use of a Bayesian network provides the means to fuse simulated performance data with more abstract doctrinal rules to provide a realistic assessment of the aircraft vulnerability.
Fast Multiscale Reservoir Simulations using POD-DEIM Model Reduction
Ghasemi, Mohammadreza
2015-02-23
In this paper, we present a global-local model reduction for fast multiscale reservoir simulations in highly heterogeneous porous media with applications to optimization and history matching. Our proposed approach identifies a low dimensional structure of the solution space. We introduce an auxiliary variable (the velocity field) in our model reduction that allows achieving a high degree of model reduction. The latter is due to the fact that the velocity field is conservative for any low-order reduced model in our framework. In contrast, a typical global model reduction based on POD is a Galerkin finite element method and thus cannot guarantee local mass conservation; this can be observed in numerical simulations that use finite volume based approaches. The Discrete Empirical Interpolation Method (DEIM) is used to approximate the nonlinear functions of fine-grid functions in Newton iterations. This approach allows achieving a computational cost that is independent of the fine grid dimension. POD snapshots are inexpensively computed using local model reduction techniques based on the Generalized Multiscale Finite Element Method (GMsFEM), which provides (1) a hierarchical approximation of snapshot vectors, (2) adaptive computations by using coarse grids, and (3) inexpensive global POD operations in a small-dimensional space on a coarse grid. By balancing the errors of the global and local reduced-order models, our new methodology can provide an error bound in simulations. Our numerical results, utilizing a two-phase immiscible flow, show a substantial speed-up, and we compare our results to the standard POD-DEIM in a finite volume setup.
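The global POD step of such a reduction can be sketched with a truncated SVD of a snapshot matrix. The snapshot data and dimensions below are synthetic assumptions, and the DEIM treatment of nonlinear terms (as well as the conservative velocity construction) is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)
n_fine, n_snap = 500, 40
# Synthetic snapshots with low-rank structure plus small noise
modes_true = rng.standard_normal((n_fine, 5))
coeffs = rng.standard_normal((5, n_snap))
snapshots = modes_true @ coeffs + 1e-3 * rng.standard_normal((n_fine, n_snap))

# POD basis = leading left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 5                                    # reduced dimension (assumed)
basis = U[:, :r]

# Project a new fine-grid state onto the reduced space and reconstruct
x = modes_true @ rng.standard_normal(5)
x_red = basis.T @ x                      # r coefficients instead of n_fine values
x_rec = basis @ x_red

rel_err = np.linalg.norm(x - x_rec) / np.linalg.norm(x)
```

The singular values `s` indicate how many modes are worth keeping; in the paper the snapshots themselves come cheaply from GMsFEM rather than full fine-grid solves.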
Model calibration for building energy efficiency simulation
International Nuclear Information System (INIS)
Mustafaraj, Giorgio; Marini, Dashamir; Costa, Andrea; Keane, Marcus
2014-01-01
Highlights: • Developing a 3D model relating to building architecture, occupancy and HVAC operation. • Two calibration stages developed, with the final model providing accurate results. • Using an onsite weather station for generating the weather data file in EnergyPlus. • Predicting thermal behaviour of underfloor heating, heat pump and natural ventilation. • Monthly energy saving opportunities related to the heat pump of 20–27% were identified. - Abstract: This research work deals with an Environmental Research Institute (ERI) building where an underfloor heating system and natural ventilation are the main systems used to maintain comfort conditions throughout 80% of the building areas. Firstly, this work involved developing a 3D model relating to building architecture, occupancy and HVAC operation. Secondly, a two-level calibration methodology was applied in order to ensure accuracy and reduce the likelihood of errors. To further improve the accuracy of calibration, a historical weather data file for the year 2011 was created from the on-site local weather station of the ERI building. After applying the second level of the calibration process, the values of Mean Bias Error (MBE) and Coefficient of Variation of the Root Mean Squared Error (CV(RMSE)) on an hourly basis for heat pump electricity consumption varied within the following ranges: MBE (hourly) from −5.6% to 7.5% and CV(RMSE) (hourly) from 7.3% to 25.1%. Finally, the building was simulated with EnergyPlus to identify further possibilities of energy savings supplied by a water-to-water heat pump to the underfloor heating system. It was found that electricity consumption savings from the heat pump can vary between 20% and 27% on a monthly basis
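The calibration metrics quoted above can be computed directly from paired measured and simulated series. The definitions below follow the commonly used ASHRAE-style forms, and the synthetic hourly series are assumptions for illustration:

```python
import numpy as np

def mbe_percent(measured, simulated):
    """Mean Bias Error as a percentage of total measured consumption."""
    return 100.0 * np.sum(measured - simulated) / np.sum(measured)

def cv_rmse_percent(measured, simulated):
    """Coefficient of Variation of the RMSE, as a percentage of the mean."""
    rmse = np.sqrt(np.mean((measured - simulated) ** 2))
    return 100.0 * rmse / np.mean(measured)

rng = np.random.default_rng(4)
measured = 50.0 + 10.0 * rng.random(8760)              # synthetic hourly kWh
simulated = measured * 1.03 + rng.normal(0.0, 2.0, 8760)  # slightly biased model

mbe = mbe_percent(measured, simulated)     # negative: model over-predicts
cv = cv_rmse_percent(measured, simulated)  # scatter of hourly errors
```

MBE captures systematic over- or under-prediction (sign matters), while CV(RMSE) penalizes hour-by-hour scatter even when the bias cancels out, which is why calibration guidance reports both.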
Predictive Capability Maturity Model for computational modeling and simulation.
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.
2007-10-01
The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.
Simulation on Poisson and negative binomial models of count road accident modeling
Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.
2016-11-01
Accident count data have often been shown to exhibit overdispersion. On the other hand, the data might contain zero counts (excess zeros). A simulation study was conducted to create scenarios in which an accident happens at a T-junction, with the assumption that the dependent variable of the generated data follows a certain distribution, namely the Poisson or negative binomial distribution, for different sample sizes from n=30 to n=500. The study objective was accomplished by fitting Poisson regression, negative binomial regression and hurdle negative binomial models to the simulated data. The fitted models were compared, and the simulation results show that for each sample size not all models fit the data well, even though the data were generated from the model's own distribution, especially when the sample size is larger. Furthermore, larger sample sizes produce more zero accident counts in the dataset.
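The core of such a simulation study, generating counts from Poisson and (gamma-mixed) negative binomial distributions and inspecting overdispersion and excess zeros, can be sketched as follows; the mean, dispersion parameter, and sample size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
mu = 3.0        # assumed mean accident count
n = 500         # sample size, matching the largest scenario

# Poisson counts: variance equals the mean
pois = rng.poisson(mu, n)

# Negative binomial counts via a gamma-Poisson mixture with the same mean;
# smaller shape -> stronger overdispersion
shape = 1.5
lam = rng.gamma(shape, mu / shape, n)   # E[lam] = mu
negbin = rng.poisson(lam)

disp_pois = pois.var() / pois.mean()    # close to 1 for Poisson
disp_nb = negbin.var() / negbin.mean()  # well above 1: overdispersion

zeros_pois = (pois == 0).mean()
zeros_nb = (negbin == 0).mean()         # excess zeros relative to Poisson
```

The hurdle model in the study addresses exactly what the last two lines expose: the negative binomial sample carries noticeably more zeros than a Poisson sample with the same mean.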
Rotz, C A; Isenberg, B J; Stackhouse-Lawson, K R; Pollak, E J
2013-11-01
A methodology was developed and used to determine environmental footprints of beef cattle produced at the U.S. Meat Animal Research Center (MARC) in Clay Center, NE, with the goal of quantifying improvements achieved over the past 40 yr. Information for MARC operations was gathered and used to establish parameters representing their production system with the Integrated Farm System Model. The MARC farm, cow-calf, and feedlot operations were each simulated over recent historical weather to evaluate performance, environmental impact, and economics. The current farm operation included 841 ha of alfalfa and 1,160 ha of corn to produce feed predominately for the beef herd of 5,500 cows, 1,180 replacement cattle, and 3,724 cattle finished per year. Spring and fall cow-calf herds were fed on 9,713 ha of pastureland supplemented through the winter with hay and silage produced by the farm operation. Feedlot cattle were backgrounded for 3 mo on hay and silage with some grain and finished over 7 mo on a diet high in corn and wet distillers grain. For weather year 2011, simulated feed production and use, energy use, and production costs were within 1% of actual records. A 25-yr simulation of their current production system gave an average annual carbon footprint of 10.9±0.6 kg of CO2 equivalent units per kg BW sold, and the energy required to produce that beef (energy footprint) was 26.5±4.5 MJ/kg BW. The annual water required (water footprint) was 21,300±5,600 L/kg BW sold, and the water footprint excluding precipitation was 2,790±910 L/kg BW. The simulated annual cost of producing their beef was US$2.11±0.05/kg BW. Simulation of the production practices of 2005 indicated that the inclusion of distillers grain in animal diets has had a relatively small effect on environmental footprints except that reactive nitrogen loss has increased 10%. Compared to 1970, the carbon footprint of the beef produced has decreased 6% with no change in the energy footprint, a 3% reduction
A heterogeneous lattice gas model for simulating pedestrian evacuation
Guo, Xiwei; Chen, Jianqiao; Zheng, Yaochen; Wei, Junhong
2012-02-01
Based on the cellular automata method (CA model) and the mobile lattice gas model (MLG model), we have developed a heterogeneous lattice gas model for simulating pedestrian evacuation processes in an emergency. A local population density concept is introduced first. The update rule in the new model depends on the local population density and the exit crowded degree factor. The drift D, which is one of the key parameters influencing the evacuation process, is allowed to change according to the local population density of the pedestrians. Interactions including attraction, repulsion, and friction between every two pedestrians and those between a pedestrian and the building wall are described by a nonlinear function of the corresponding distance, and the repulsion forces increase sharply as the distances get small. A critical force of injury is introduced into the model, and its effects on the evacuation process are investigated. The model proposed has heterogeneous features as compared to the MLG model or the basic CA model. Numerical examples show that the model proposed can capture the basic features of pedestrian evacuation, such as clogging and arching phenomena.
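A stripped-down cellular-automaton version of such an evacuation model, without the density-dependent drift, pairwise interaction forces, or injury threshold of the full model, might look like the following; the grid size, occupancy, and greedy update rule are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
size = 15
exit_pos = (size // 2, 0)                 # single exit on the left wall
occ = rng.random((size, size)) < 0.2      # ~20% initial occupancy
occ[exit_pos] = False

def step(occ):
    """Move each pedestrian to an empty 4-neighbour cell closer to the exit."""
    new = occ.copy()
    ex, ey = exit_pos
    for x, y in np.argwhere(occ):
        d0 = abs(x - ex) + abs(y - ey)
        best = None
        for cx, cy in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= cx < size and 0 <= cy < size and not new[cx, cy]:
                if abs(cx - ex) + abs(cy - ey) < d0:
                    best = (cx, cy)
        if best:
            new[x, y] = False
            new[best] = True
    new[exit_pos] = False                 # pedestrians reaching the exit leave
    return new

steps = 0
while occ.any() and steps < 2000:
    occ = step(occ)
    steps += 1
```

Even this toy rule reproduces clogging near the exit (only one cell drains the crowd per step); the full model's repulsion, friction, and critical injury force act exactly in that congested region.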
Guidelines for Reproducibly Building and Simulating Systems Biology Models.
Medley, J Kyle; Goldberg, Arthur P; Karr, Jonathan R
2016-10-01
Reproducibility is the cornerstone of the scientific method. However, currently, many systems biology models cannot easily be reproduced. This paper presents methods that address this problem. We analyzed the recent Mycoplasma genitalium whole-cell (WC) model to determine the requirements for reproducible modeling. We determined that reproducible modeling requires both repeatable model building and repeatable simulation. New standards and simulation software tools are needed to enhance and verify the reproducibility of modeling. New standards are needed to explicitly document every data source and assumption, and new deterministic parallel simulation tools are needed to quickly simulate large, complex models. We anticipate that these new standards and software will enable researchers to reproducibly build and simulate more complex models, including WC models.
Effect of different heat transfer models on HCCI engine simulation
International Nuclear Information System (INIS)
Neshat, Elaheh; Saray, Rahim Khoshbakhti
2014-01-01
Highlights: • A new multi zone model is developed for HCCI combustion modeling. • A new heat transfer model is used for prediction of heat transfer in HCCI engines. • The model can predict engine combustion, performance and emission characteristics well. • Appropriate mass and heat transfer models lead to accurate prediction of CO, UHC and NOx. - Abstract: Heat transfer from engine walls plays an important role in engine combustion, performance and emission characteristics. The main focus of this study is to offer a new relation for calculating convective heat transfer from the in-cylinder charge to the combustion chamber walls of HCCI engines, and to demonstrate the capability of the new model in comparison with previous models. Therefore, a multi zone model is developed for homogeneous charge compression ignition engine simulation. The model consists of four different types of zones: a core zone, a boundary layer zone, outer zones between the core and boundary layer, and a crevice zone. Conductive heat transfer and mass transfer are considered between neighboring zones. For accurate calculation of the initial conditions at inlet valve closing, the multi zone model is coupled with a single zone model that simulates the gas exchange process. The Woschni, modified Woschni, Hohenberg and Annand correlations are used as convective heat transfer models, along with the new convection model developed by the authors. Comparative analyses are done to identify the most accurate correlation for predicting engine combustion, performance and emission characteristics over a wide range of operating conditions. The results indicate that utilization of the various heat transfer models, except for the new convective heat transfer model, leads to significant differences in the prediction of in-cylinder pressure and exhaust emissions. Using the Woschni, Chang and new models, the convective heat transfer coefficient increases near top dead center, sharply
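The Woschni-type correlations mentioned above share a common power-law form for the convective heat transfer coefficient. A hedged sketch: the constant 3.26 and the exponents follow the widely quoted classic Woschni form (p in kPa, bore in m, T in K), but unit conventions vary between sources and the characteristic gas velocity w has its own sub-model not reproduced here:

```python
def woschni_h(bore_m, p_kpa, t_gas_k, w_ms):
    """Convective heat transfer coefficient h (W/m^2/K), classic Woschni form:
    h = 3.26 * B^-0.2 * p^0.8 * T^-0.55 * w^0.8."""
    return 3.26 * bore_m ** -0.2 * p_kpa ** 0.8 * t_gas_k ** -0.55 * w_ms ** 0.8

def wall_heat_flux(h, t_gas_k, t_wall_k):
    """Newtonian convection to the wall: q = h * (T_gas - T_wall), W/m^2."""
    return h * (t_gas_k - t_wall_k)
```

The p^0.8 dependence is what makes the coefficient rise steeply near top dead center, where cylinder pressure peaks.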
Beyond Modeling: All-Atom Olfactory Receptor Model Simulations
Directory of Open Access Journals (Sweden)
Peter C Lai
2012-05-01
Olfactory receptors (ORs) are a type of G protein-coupled receptor (GPCR). These receptors are responsible for mediating the sense of smell through their interaction with odor ligands. OR-odorant interactions mark the first step in the process that leads to olfaction. Computational studies on model OR structures can validate experimental functional studies as well as generate focused and novel hypotheses for further bench investigation by providing a view of these interactions at the molecular level. Here we have shown the specific advantages of simulating the dynamic environment that is associated with OR-odorant interactions. We present a rigorous methodology that ranges from the creation of a computationally derived model of an olfactory receptor to simulating the interactions between an OR and an odorant molecule. Given the ubiquitous occurrence of GPCRs in the membranes of cells, we anticipate that our OR-developed methodology will serve as a model for the computational structural biology of all GPCRs.
Partridge, Daniel; Morales, Ricardo; Stier, Philip
2015-04-01
Many previous studies have compared droplet activation parameterisations against adiabatic parcel models (e.g. Ghan et al., 2001). However, these have often involved comparisons for a limited number of parameter combinations based upon certain aerosol regimes. Recent studies (Morales et al., 2014) have used wider ranges when evaluating their parameterisations; however, no study has explored the full multi-dimensional parameter space that would be experienced by droplet activation within a global climate model (GCM). It is important to be able to efficiently highlight regions of the entire multi-dimensional parameter space in which we can expect the largest discrepancy between parameterisation and cloud parcel models, in order to ascertain which regions simulated by a GCM can be expected to be a less accurate representation of the process of cloud droplet activation. This study provides a new, efficient, inverse modelling framework for comparing droplet activation parameterisations to more complex cloud parcel models. To achieve this we couple a Markov Chain Monte Carlo algorithm (Partridge et al., 2012) to two independent adiabatic cloud parcel models and four droplet activation parameterisations. This framework is computationally faster than a brute-force Monte Carlo simulation, and allows us to transparently highlight which parameterisation provides the closest representation across all aerosol physicochemical and meteorological environments. The parameterisations are demonstrated to perform well for a large proportion of possible parameter combinations; however, for certain key parameters, most notably the vertical velocity and accumulation mode aerosol concentration, large discrepancies are highlighted. These discrepancies correspond to parameter combinations that result in very high/low simulated values of maximum supersaturation. By identifying parameter interactions or regimes within the multi-dimensional parameter space we hope to guide
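The inverse-modelling idea, treating the parameterisation-parcel discrepancy as a target density and sampling it with a random-walk Metropolis algorithm, can be sketched as below; this is a generic one-parameter illustration, not the authors' implementation:

```python
import math
import random

def discrepancy(params, parcel_model, parameterisation):
    """Squared mismatch between parcel-model and parameterised predictions
    (e.g. of maximum supersaturation) at one point in parameter space."""
    return (parcel_model(params) - parameterisation(params)) ** 2

def metropolis(log_target, x0, n_steps=2000, step=0.1, seed=0):
    """Random-walk Metropolis sampler over a scalar parameter."""
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    samples = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        lpp = log_target(xp)
        if math.log(rng.random() + 1e-300) < lpp - lp:  # accept/reject
            x, lp = xp, lpp
        samples.append(x)
    return samples
```

Sampling with `log_target = lambda w: math.log(discrepancy(w, parcel, param) + 1e-12)` (plus a prior bounding the physical range) concentrates the chain in the regions of largest parameterisation error, far more cheaply than a brute-force grid.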
Influence of wheel-rail contact modelling on vehicle dynamic simulation
Burgelman, Nico; Sichani, Matin Sh.; Enblom, Roger; Berg, Mats; Li, Zili; Dollevoet, Rolf
2015-08-01
This paper presents a comparison of four models of rolling contact used for online contact force evaluation in rail vehicle dynamics. Until now only a few wheel-rail contact models have been used for online simulation in multibody software (MBS). Many more models exist and their behaviour has been studied offline, but a comparative study of the mutual influence between the calculation of the creep forces and the simulated vehicle dynamics seems to be missing. Such a comparison would help researchers with the assessment of accuracy and calculation time. The contact methods investigated in this paper are FASTSIM, Linder, Kik-Piotrowski and Stripes. They are compared through a coupling between an MBS for the vehicle simulation and Matlab for the contact models. This way the influence of the creep force calculation on the vehicle simulation is investigated. More specifically this study focuses on the influence of the contact model on the simulation of the hunting motion and on the curving behaviour.
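The common structure of such online creep-force models is a linear (Kalker-type) force bounded by friction saturation. A simplified illustration in the spirit of the Shen-Hedrick-Elkins cubic saturation law, not FASTSIM or any of the four compared methods; the coefficient names are hypothetical:

```python
def linear_creep_force(creepage, kalker_coeff):
    """Kalker's linear theory: tangential force proportional to creepage."""
    return -kalker_coeff * creepage

def saturate(f_lin, mu, normal_load):
    """Bound the linear force by the friction limit mu*N with a cubic
    (Shen-Hedrick-Elkins style) saturation curve; sign is preserved."""
    f_max = mu * normal_load
    u = abs(f_lin) / f_max
    mag = f_max if u >= 3.0 else f_max * (u - u ** 2 / 3.0 + u ** 3 / 27.0)
    return mag if f_lin >= 0 else -mag
```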
The dynamics of human-water systems: comparing observations and simulations
Di Baldassarre, G.; Ciullo, A.; Castellarin, A.; Viglione, A.
2016-12-01
Real-world data of human-flood interactions are compared to the results of stylized socio-hydrological models. These models build on numerous examples from different parts of the world and consider two main prototypes of floodplain systems: Green systems, whereby societies cope with flood risk via non-structural measures, e.g. resettling out of floodplain areas (the "living with floods" approach); and Technological systems, whereby societies cope with flood risk also via structural measures, e.g. building levees (the "fighting floods" approach). The floodplain systems of the Tiber River in Rome and of the Ganges-Brahmaputra-Meghna rivers in Bangladesh are used as case studies. The comparison of simulations and observations shows the potential of socio-hydrological models in capturing the dynamics of risk emerging from the interactions and feedbacks between social and hydrological processes, such as learning and forgetting effects. It is then discussed how the proposed approach can contribute to a better understanding of flood risk changes and therefore support the process of disaster risk reduction.
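The learning-and-forgetting feedback at the heart of such stylized models can be illustrated with a two-line memory update and a stylized damage function; all parameter values and function names here are illustrative assumptions, not those of the published models:

```python
import math

def update_memory(memory, damage, mu=0.05, forget_rate=0.1):
    """One step of social flood memory: raised by damage (learning),
    decaying otherwise (forgetting). mu and forget_rate are illustrative."""
    return memory + mu * damage - forget_rate * memory

def damage_from_flood(water_level, levee_height, exposure):
    """Stylized damage: zero unless the water overtops the levee, then a
    saturating fraction of the exposed value."""
    overtop = max(0.0, water_level - levee_height)
    return exposure * (1.0 - math.exp(-overtop))
```

Iterating these two functions over a sequence of water levels already produces the qualitative pattern the abstract describes: risk awareness spikes after overtopping events and fades during quiet periods.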
Comparing methods of targeting obesity interventions in populations: An agent-based simulation.
Beheshti, Rahmatollah; Jalalpour, Mehdi; Glass, Thomas A
2017-12-01
Social networks as well as neighborhood environments have been shown to affect obesity-related behaviors including energy intake and physical activity. Accordingly, harnessing social networks to improve the targeting of obesity interventions may be promising to the extent that this leads to social multiplier effects and wider diffusion of intervention impact on populations. However, the literature evaluating network-based interventions has been inconsistent. Computational methods like agent-based models (ABMs) provide researchers with tools to experiment in a simulated environment. We develop an ABM to compare conventional targeting methods (random selection, selection based on individual obesity risk, and selection based on vulnerable areas) with network-based targeting methods. We adapt a previously published and validated model of network diffusion of obesity-related behavior. We then build social networks among agents using a more realistic approach. We calibrate our model first against national-level data. Our results show that network-based targeting may lead to greater population impact. We also present a new targeting method that outperforms other methods in terms of intervention effectiveness at the population level.
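One way network-based targeting can outperform random selection is by seeding high-degree agents, whose one-step diffusion reach is larger. A toy sketch of that comparison (a generic illustration, not the authors' calibrated ABM):

```python
import random

def degree_targets(network, k):
    """Select the k highest-degree agents (network: node -> set of neighbours)."""
    return sorted(network, key=lambda n: len(network[n]), reverse=True)[:k]

def random_targets(network, k, seed=0):
    """Conventional baseline: pick k agents uniformly at random."""
    return random.Random(seed).sample(sorted(network), k)

def reach(network, seeds):
    """Agents reached within one diffusion step from the seed set."""
    covered = set(seeds)
    for s in seeds:
        covered |= network[s]
    return covered
```

On a hub-and-spoke network, seeding the hub reaches the whole component in one step, while a random seed usually reaches only a couple of agents.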
Stratospheric dryness: model simulations and satellite observations
Directory of Open Access Journals (Sweden)
J. Lelieveld
2007-01-01
The mechanisms responsible for the extreme dryness of the stratosphere have been debated for decades. A key difficulty has been the lack of comprehensive models which are able to reproduce the observations. Here we examine results from the coupled lower-middle atmosphere chemistry general circulation model ECHAM5/MESSy1 together with satellite observations. Our model results match observed temperatures in the tropical lower stratosphere and realistically represent the seasonal and inter-annual variability of water vapor. The model reproduces the very low water vapor mixing ratios (below 2 ppmv) periodically observed at the tropical tropopause near 100 hPa, as well as the characteristic tape recorder signal up to about 10 hPa, providing evidence that the dehydration mechanism is well captured. Our results confirm that the entry of tropospheric air into the tropical stratosphere is forced by large-scale wave dynamics, whereas radiative cooling regionally decelerates upwelling and can even cause downwelling. Thin cirrus forms in the cold air above cumulonimbus clouds, and the associated sedimentation of ice particles between 100 and 200 hPa reduces water mass fluxes by nearly two orders of magnitude compared to air mass fluxes. Transport into the stratosphere is supported by regional net radiative heating, to a large extent in the outer tropics. During summer very deep monsoon convection over Southeast Asia, centered over Tibet, moistens the stratosphere.
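The ~2 ppmv figure follows directly from saturation over ice at tropopause temperatures. A back-of-envelope check using a Magnus-type approximation for the saturation vapour pressure over ice (the coefficients are common WMO-style values; this is an illustration, not the ECHAM5/MESSy1 microphysics scheme):

```python
import math

def e_sat_ice(t_k):
    """Saturation vapour pressure over ice (Pa), Magnus-type approximation."""
    t_c = t_k - 273.15
    return 611.2 * math.exp(22.46 * t_c / (272.62 + t_c))

def sat_mixing_ratio_ppmv(t_k, p_pa):
    """Saturation volume mixing ratio (ppmv) at temperature T and pressure p."""
    return 1e6 * e_sat_ice(t_k) / p_pa
```

At roughly 188 K and 100 hPa this gives a saturation mixing ratio near 2 ppmv, consistent with the minima observed at the tropical tropopause.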
Design, modeling, simulation and evaluation of a distributed energy system
Cultura, Ambrosio B., II
This dissertation presents the design, modeling, simulation and evaluation of distributed energy resources (DER) consisting of photovoltaics (PV), wind turbines, batteries, a PEM fuel cell and supercapacitors. The distributed energy resources installed at UMass Lowell consist of the following: 2.5 kW PV, 44 kWh lead acid batteries and 1500 W, 500 W & 300 W wind turbines, which were installed before the year 2000. Recently added to these are the following: a 10.56 kW PV array, a 2.4 kW wind turbine, 29 kWh lead acid batteries, a 1.2 kW PEM fuel cell and 4 140-F supercapacitors. Each newly added energy resource was designed, modeled, simulated and evaluated before its integration into the existing PV/wind grid-connected system. The mathematical and Simulink models of each system were derived and validated by comparing the simulated and experimental results. The simulated results of energy generated from the 10.56 kW PV system are in good agreement with the experimental results. A detailed electrical model of the 2.4 kW wind turbine system, equipped with a permanent magnet generator, diode rectifier, boost converter and inverter, is presented. The analysis of the results demonstrates the effectiveness of the constructed Simulink model, which can be used to predict the performance of the wind turbine. It was observed that the PEM fuel cell has a very fast response to load changes. Moreover, the model has validated the actual operation of the PEM fuel cell, showing that the simulated results in Matlab Simulink are consistent with the experimental results. The equivalent mathematical equation, derived from an electrical model of the supercapacitor, is used to simulate its voltage response. The model is fully capable of simulating the supercapacitor's voltage behavior, and can predict the charge and discharge times of voltages on the supercapacitor. A bi-directional dc-dc converter was designed in order to connect the 48 V battery bank storage to the 24 V battery bank storage. This connection was
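The supercapacitor voltage response mentioned above is, in the simplest equivalent-circuit view, an RC exponential. A minimal sketch under that assumption (the dissertation's actual equivalent circuit may include more elements, e.g. series resistance and leakage):

```python
import math

def cap_voltage(v0, t_s, r_ohm, c_farad):
    """Capacitor discharging through a resistance: v(t) = v0 * exp(-t / RC)."""
    return v0 * math.exp(-t_s / (r_ohm * c_farad))

def time_to_voltage(v0, v_target, r_ohm, c_farad):
    """Time for the discharge to fall from v0 to v_target (inverse of above)."""
    return r_ohm * c_farad * math.log(v0 / v_target)
```

For one 140 F supercapacitor discharging through an assumed 0.1 ohm load, the time constant RC is 14 s, so the voltage halves in about 9.7 s.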
IR characteristic simulation of city scenes based on radiosity model
Xiong, Xixian; Zhou, Fugen; Bai, Xiangzhi; Yu, Xiyu
2013-09-01
Reliable modeling of the thermal infrared (IR) signatures of real-world city scenes is required for signature management of civil and military platforms. Traditional modeling methods generally assume that scene objects are individual entities during the physical processes occurring in the infrared range. In reality, however, the physical scene involves convective and conductive interactions between objects as well as radiative interactions between objects. A method based on a radiosity model, which describes these complex effects, has been developed to enable an accurate simulation of the radiance distribution of city scenes. Firstly, the physical processes affecting the IR characteristics of city scenes were described. Secondly, heat balance equations were formed by combining the atmospheric conditions, shadow maps and the geometry of the scene. Finally, a finite difference method was used to calculate the kinetic temperature of object surfaces. A radiosity model was introduced to describe the scattering of radiation between surface elements in the scene. By synthesizing the radiance distribution of objects in the infrared range, we could obtain the IR characteristics of the scene. Real infrared images and model predictions were shown and compared. The results demonstrate that this method can realistically simulate the IR characteristics of city scenes. It effectively displays infrared shadow effects and the radiative interactions between objects in city scenes.
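The radiosity balance B_i = E_i + rho_i * sum_j F_ij * B_j can be solved by simple fixed-point (Jacobi) iteration when every reflectance is below one. A compact sketch (the surface discretization and form factors F_ij are assumed given; this is the generic radiosity scheme, not the paper's specific implementation):

```python
def solve_radiosity(emission, reflectance, form_factors, n_iter=60):
    """Jacobi iteration for the radiosity system
    B_i = E_i + rho_i * sum_j F_ij * B_j.
    Converges when every reflectance rho_i is below one."""
    n = len(emission)
    b = list(emission)  # start from the self-emission term
    for _ in range(n_iter):
        b = [emission[i]
             + reflectance[i] * sum(form_factors[i][j] * b[j] for j in range(n))
             for i in range(n)]
    return b
```

For two facing surfaces with E = [1, 0], rho = [0.5, 0.5] and full mutual visibility, the fixed point is B = [4/3, 2/3]: the dark surface glows purely through inter-reflection.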
Integrated predictive modelling simulations of burning plasma experiment designs
International Nuclear Information System (INIS)
Bateman, Glenn; Onjun, Thawatchai; Kritz, Arnold H
2003-01-01
Models for the height of the pedestal at the edge of H-mode plasmas (Onjun T et al 2002 Phys. Plasmas 9 5018) are used together with the Multi-Mode core transport model (Bateman G et al 1998 Phys. Plasmas 5 1793) in the BALDUR integrated predictive modelling code to predict the performance of the ITER (Aymar A et al 2002 Plasma Phys. Control. Fusion 44 519), FIRE (Meade D M et al 2001 Fusion Technol. 39 336), and IGNITOR (Coppi B et al 2001 Nucl. Fusion 41 1253) fusion reactor designs. The simulation protocol used in this paper is tested by comparing predicted temperature and density profiles against experimental data from 33 H-mode discharges in the JET (Rebut P H et al 1985 Nucl. Fusion 25 1011) and DIII-D (Luxon J L et al 1985 Fusion Technol. 8 441) tokamaks. The sensitivities of the predictions are evaluated for the burning plasma experimental designs by using variations of the pedestal temperature model that are one standard deviation above and below the standard model. Simulations of the fusion reactor designs are carried out for scans in which the plasma density and auxiliary heating power are varied
Simulation of hybrid vehicle propulsion with an advanced battery model
Energy Technology Data Exchange (ETDEWEB)
Nallabolu, S.; Kostetzer, L.; Rudnyi, E. [CADFEM GmbH, Grafing (Germany); Geppert, M.; Quinger, D. [LION Smart GmbH, Frieding (Germany)
2011-07-01
In recent years there has been increasing concern about global warming and greenhouse gas emissions. In addition to the environmental issues, the predicted scarcity of oil supplies and the dramatic increase in oil price put new demands on vehicle design. As a result, energy efficiency and reduced emissions have become one of the main selling points for automobiles. Hybrid electric vehicles (HEV) have therefore become an interesting technology for governments and automotive industries. HEVs are more complicated than conventional vehicles because they contain more electrical components such as electric machines, power electronics, electronic continuously variable transmissions (CVT), and embedded powertrain controllers. Advanced energy storage devices and energy converters, such as Li-ion batteries, ultracapacitors, and fuel cells, are also considered. A detailed vehicle model used for energy flow analysis and vehicle performance simulation is necessary. Computer simulation is indispensable to facilitate the examination of the vast hybrid electric vehicle design space with the aim of predicting vehicle performance over driving profiles and estimating fuel consumption and pollutant emissions. There are various types of mathematical models and simulators available to perform system simulation of vehicle propulsion. One of the standard methods to model the complete vehicle powertrain is "backward quasistatic modeling". In this method, vehicle subsystems are defined based on empirical models in the form of look-up tables and efficiency maps. The interaction between adjacent subsystems of the vehicle is defined through the amount of power flow. Under this technique, modeling of vehicle subsystems such as the motor, engine, gearbox and battery is based on block diagrams. The vehicle model is applied in two case studies to evaluate the vehicle performance and fuel consumption. In the first case study the effect
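Backward quasistatic modeling, as described, propagates the power demanded at the wheel backward through component efficiency maps to the energy source. A minimal sketch with constant efficiencies standing in for the look-up tables (component names and values are illustrative):

```python
def backward_power_flow(wheel_power_w, chain):
    """Backward quasistatic power trace: divide by each component efficiency
    from the wheel toward the source; regeneration (negative power) multiplies
    instead, since losses then reduce the power reaching the battery.
    chain: list of (component_name, efficiency) ordered from wheel to source."""
    p = wheel_power_w
    trace = [("wheel", p)]
    for name, eta in chain:
        p = p / eta if p >= 0 else p * eta
        trace.append((name, p))
    return trace
```

For example, a 10 kW wheel demand through a 95%-efficient gearbox and a 90%-efficient motor requires about 11.7 kW from the battery, while a -5 kW braking demand returns only about 4.3 kW.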