The Trick Simulation Toolkit: A NASA/Open source Framework for Running Time Based Physics Models
Penn, John M.
2016-01-01
The Trick Simulation Toolkit is a simulation development environment used to create high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. Its purpose is to generate a simulation executable from a collection of user-supplied models and a simulation definition file. For each Trick-based simulation, Trick automatically provides job scheduling, numerical integration, the ability to write and restore human readable checkpoints, data recording, interactive variable manipulation, a run-time interpreter, and many other commonly needed capabilities. This allows simulation developers to concentrate on their domain expertise and the algorithms and equations of their models. Also included in Trick are tools for plotting recorded data and various other supporting utilities and libraries. Trick is written in C/C++ and Java and supports both Linux and MacOSX computer operating systems. This paper describes Trick's design and use at NASA Johnson Space Center.
Simulation of nonlinear wave run-up with a high-order Boussinesq model
DEFF Research Database (Denmark)
Fuhrman, David R.; Madsen, Per A.
2008-01-01
This paper considers the numerical simulation of nonlinear wave run-up within a highly accurate Boussinesq-type model. Moving wet–dry boundary algorithms based on so-called extrapolating boundary techniques are utilized, and a new variant of this approach is proposed in two horizontal dimensions. As validation, computed results involving the nonlinear run-up of periodic as well as transient waves on a sloping beach are considered in a single horizontal dimension, demonstrating excellent agreement with analytical solutions for both the free surface and horizontal velocity. In two horizontal dimensions, cases involving long wave resonance in a parabolic basin, solitary wave evolution in a triangular channel, and solitary wave run-up on a circular conical island are considered. In each case the computed results compare well against available analytical solutions or experimental measurements. The ability...
Running Parallel Discrete Event Simulators on Sierra
Energy Technology Data Exchange (ETDEWEB)
Barnes, P. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Jefferson, D. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-12-03
In this proposal we consider porting the ROSS/Charm++ simulator and the discrete event models that run under its control so that they run on the Sierra architecture and make efficient use of the Volta GPUs.
Directory of Open Access Journals (Sweden)
Mondry Adrian
2004-08-01
Full Text Available Abstract Background Many arrhythmias are triggered by abnormal electrical activity at the ionic channel and cell level, and then evolve spatio-temporally within the heart. To understand arrhythmias better and to diagnose them more precisely by their ECG waveforms, a whole-heart model is required to explore the association between the massively parallel activities at the channel/cell level and the integrative electrophysiological phenomena at organ level. Methods We have developed a method to build large-scale electrophysiological models by using extended cellular automata, and to run such models on a cluster of shared memory machines. We describe here the method, including the extension of a language-based cellular automaton to implement quantitative computing, the building of a whole-heart model with Visible Human Project data, the parallelization of the model on a cluster of shared memory computers with OpenMP and MPI hybrid programming, and a simulation algorithm that links cellular activity with the ECG. Results We demonstrate that electrical activities at channel, cell, and organ levels can be traced and captured conveniently in our extended cellular automaton system. Examples of some ECG waveforms simulated with a 2-D slice are given to support the ECG simulation algorithm. A performance evaluation of the 3-D model on a four-node cluster is also given. Conclusions Quantitative multicellular modeling with extended cellular automata is a highly efficient and widely applicable method to weave experimental data at different levels into computational models. This process can be used to investigate complex and collective biological activities that can be described neither by their governing differential equations nor by discrete parallel computation. Transparent cluster computing is a convenient and effective method to make time-consuming simulation feasible. Arrhythmias, as a typical case, can be effectively simulated with the methods
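The "extended cellular automaton" idea in the abstract above can be illustrated with a minimal excitable-medium automaton of the Greenberg-Hastings type. This is a hedged sketch, not the authors' whole-heart system; the grid size, von Neumann neighbourhood, and refractory length are assumed values.

```python
import numpy as np

# Minimal Greenberg-Hastings excitable-medium cellular automaton
# (an illustrative stand-in for the paper's extended CA).
# States: 0 = resting, 1 = excited, 2..refractory = refractory.
def step(grid, refractory=3):
    new = grid.copy()
    excited = (grid == 1)
    # count excited von Neumann neighbours (periodic boundaries via roll)
    nbrs = (np.roll(excited, 1, 0).astype(int) + np.roll(excited, -1, 0)
            + np.roll(excited, 1, 1) + np.roll(excited, -1, 1))
    new[(grid == 0) & (nbrs > 0)] = 1             # resting cells get excited
    new[(grid >= 1) & (grid < refractory)] += 1   # advance through refractory
    new[grid == refractory] = 0                   # recover to resting
    return new

grid = np.zeros((10, 10), dtype=int)
grid[5, 5] = 1                                    # single excited seed cell
for _ in range(3):
    grid = step(grid)
print(int((grid > 0).sum()))                      # active cells after 3 steps
```

After three steps the excitation has spread as a ring while the wake recovers, which is the qualitative behaviour that lets such automata reproduce propagating activation fronts.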
Debris flow run-out simulation and analysis using a dynamic model
Melo, Raquel; van Asch, Theo; Zêzere, José L.
2018-02-01
Only two months after a huge forest fire occurred in the upper part of a valley located in central Portugal, several debris flows were triggered by intense rainfall. The event caused infrastructural and economic damage, although no lives were lost. The present research aims to simulate the run-out of two debris flows that occurred during the event as well as to calculate via back-analysis the rheological parameters and the excess rain involved. Thus, a dynamic model was used, which integrates surface runoff, concentrated erosion along the channels, propagation and deposition of flow material. Afterwards, the model was validated using 32 debris flows triggered during the same event that were not considered for calibration. The rheological and entrainment parameters obtained for the most accurate simulation were then used to perform three scenarios of debris flow run-out at the basin scale. The results were compared with the existing buildings exposed in the study area, and the worst-case scenario showed a potential inundation that may affect 345 buildings. In addition, six streams where debris flow occurred in the past and caused material damage and loss of lives were identified.
Directory of Open Access Journals (Sweden)
Sudeshkumar Ponnusamy Moranahalli
2011-01-01
Full Text Available Department of Automobile Engineering, Anna University, Chennai, India. The present work describes the thermodynamic and heat transfer models used in a computer program which simulates a diesel fuel and ignition improver blend to predict the combustion and emission characteristics of a direct injection compression ignition engine fuelled with the blend, using a classical two-zone approach. One zone consists of pure air, called the non-burning zone; the other consists of fuel and combustion products, called the burning zone. The first law of thermodynamics and state equations are applied in each of the two zones to yield cylinder temperature and pressure histories. Using the two-zone combustion model, the combustion parameters and the chemical equilibrium composition were determined. To validate the model, an experimental investigation was conducted on a single-cylinder direct injection diesel engine fuelled with 12% by volume of 2-ethoxy ethanol blended with diesel fuel. Addition of the ignition improver to diesel fuel decreases the exhaust smoke and increases the thermal efficiency across the power outputs. It was observed that there is good agreement between simulated and experimental results, and the proposed model requires low computational time for a complete run.
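The energy balance the abstract refers to can be sketched with a single-zone simplification (the paper itself uses two zones): the first law dU = δQ − p dV plus the ideal-gas state equation yield the temperature and pressure histories. The gas properties and cylinder values below are illustrative assumptions, not the paper's inputs.

```python
# Illustrative single-zone first-law step; the paper uses a two-zone
# model, this simplified sketch only shows how the energy balance and
# the ideal-gas state equation yield temperature and pressure.
R_GAS = 287.0    # J/(kg K), air
CV = 718.0       # J/(kg K), assumed constant specific heat

def first_law_step(T, V, V_new, dQ, m):
    """Advance gas temperature over one crank step.

    dU = dQ - p dV  ->  m*cv*dT = dQ - p*(V_new - V)
    """
    p = m * R_GAS * T / V                       # ideal-gas state equation
    dT = (dQ - p * (V_new - V)) / (m * CV)
    T_new = T + dT
    p_new = m * R_GAS * T_new / V_new
    return T_new, p_new

# Compress 0.5 g of air from 500 cm^3 to 450 cm^3 with no heat input:
T, p = first_law_step(T=350.0, V=500e-6, V_new=450e-6, dQ=0.0, m=0.5e-3)
print(round(T, 1), round(p / 1e5, 2))           # temperature and pressure rise
```

Repeating such a step over the crank-angle history, with a heat-release term for dQ, is the basic loop behind cylinder pressure simulation.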
Caldwell, E. C.; Cowley, M. S.; Scott-Pandorf, M. M.
2010-01-01
Develop a model that simulates a human running in 0 g using the European Space Agency's (ESA) Subject Loading System (SLS). The model provides ground reaction forces (GRF) based on speed and pull-down forces (PDF). DESIGN The theoretical basis for the Running Model was a simple spring-mass model. The dynamic properties of the spring-mass model express theoretical vertical GRF (GRFv) and shear GRF in the posterior-anterior direction (GRFsh) during running gait. ADAMS View software was used to build the model, which has a pelvis, a thigh segment, a shank segment, and a spring foot (see Figure 1). The model's movement simulates the joint kinematics of a human running at Earth gravity with the aim of generating GRF data. DEVELOPMENT & VERIFICATION ESA provided parabolic flight data of subjects running while using the SLS, for further characterization of the model's GRF. Peak GRF data were fit to a linear regression line dependent on PDF and speed. Interpolation and extrapolation of the regression equation provided a theoretical data matrix, which is used to drive the model's motion equations. Verification of the model was conducted by running the model at 4 different speeds, with each speed accounting for 3 different PDFs. The model's GRF data fell within a 1-standard-deviation boundary derived from the empirical ESA data. CONCLUSION The Running Model aids in conducting various simulations (potential scenarios include a fatigued runner or a powerful runner generating high loads at a fast cadence) to determine limitations for the T2 vibration isolation system (VIS) aboard the International Space Station. This model can predict how running with the ESA SLS affects the T2 VIS and may be used for other exercise analyses in the future.
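The spring-mass basis described above can be sketched as a vertical stance-phase integration that produces a GRFv curve. The stiffness, mass, and touchdown speed below are assumed illustration values, not the model's calibrated parameters.

```python
# Minimal vertical spring-mass (SLIP-style) stance phase, integrated
# with semi-implicit Euler, as an illustrative sketch of the GRFv
# shape. All parameter values are assumptions for illustration.
m, g = 70.0, 9.81        # runner mass (kg), gravity (m/s^2)
k, L0 = 20000.0, 1.0     # leg stiffness (N/m) and rest length (m)
dt = 1e-4

z, v = L0, -1.0          # touchdown: leg at rest length, moving down
grf = []
while True:
    compression = max(L0 - z, 0.0)
    F = k * compression              # vertical ground reaction force
    grf.append(F)
    a = F / m - g
    v += a * dt
    z += v * dt
    if z >= L0 and v > 0:            # leg back to rest length: toe-off
        break

peak = max(grf)
print(round(peak / (m * g), 2))      # peak GRFv in body weights
```

With these values the peak force comes out near three body weights, the right order of magnitude for running, which is why the simple spring-mass model is a common starting point for GRF prediction.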
Directory of Open Access Journals (Sweden)
F. Løvholt
2013-06-01
Full Text Available Tsunamis induced by rock slides plunging into fjords constitute a severe threat to local coastal communities. The rock slide impact may give rise to highly non-linear waves in the near field, and because the wave lengths are relatively short, frequency dispersion comes into play. Fjord systems are rugged with steep slopes, and modeling non-linear dispersive waves in this environment with simultaneous run-up is demanding. We have run an operational Boussinesq-type TVD (total variation diminishing) model using different run-up formulations. Two different tests are considered, inundation on steep slopes and propagation in a trapezoidal channel. In addition, a set of Lagrangian models serves as reference models. Demanding test cases with solitary waves with amplitudes ranging from 0.1 to 0.5 were applied, and slopes were ranging from 10 to 50°. Different run-up formulations yielded clearly different accuracy and stability, and only some provided similar accuracy as the reference models. The test cases revealed that the model was prone to instabilities for large non-linearity and fine resolution. Some of the instabilities were linked with false breaking during the first positive inundation, which was not observed for the reference models. None of the models were able to handle the bore forming during drawdown, however. The instabilities are linked to short-crested undulations on the grid scale, and appear on fine resolution during inundation. As a consequence, convergence was not always obtained. There is reason to believe that the instability may be a general problem for Boussinesq models in fjords.
Integrated building and system simulation using run-time coupled distributed models
Trcka, M.; Hensen, J.L.M.; Wijsman, A.J.T.M.
2006-01-01
In modeling and simulation of real building and heating, ventilating, and air-conditioning (HVAC) system configurations, it is frequently found that certain parts can be represented in one simulation software, while models for other parts of the configuration are only available in other software.
Hulme, Adam; Thompson, Jason; Nielsen, Rasmus Oestergaard; Read, Gemma J M; Salmon, Paul M
2018-06-18
There have been recent calls for the application of the complex systems approach in sports injury research. However, beyond theoretical description and static models of complexity, little progress has been made towards formalising this approach in a way that is practical for sports injury scientists and clinicians. Therefore, our objective was to use a computational modelling method and develop a dynamic simulation in sports injury research. Agent-based modelling (ABM) was used to model the occurrence of sports injury in a synthetic athlete population. The ABM was developed based on sports injury causal frameworks and was applied in the context of distance running-related injury (RRI). Using the acute:chronic workload ratio (ACWR), we simulated the dynamic relationship between changes in weekly running distance and RRI through the manipulation of various 'athlete management tools'. The findings confirmed that building weekly running distances over time, even within the reported ACWR 'sweet spot', will eventually result in RRI as athletes reach and surpass their individual physical workload limits. Introducing training-related error into the simulation and the modelling of a 'hard ceiling' dynamic resulted in a higher RRI incidence proportion across the population at higher absolute workloads. The presented simulation offers a practical starting point to further apply more sophisticated computational models that can account for the complex nature of sports injury aetiology. Alongside traditional forms of scientific inquiry, the use of ABM and other simulation-based techniques could be considered as a complementary and alternative methodological approach in sports injury research.
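The ACWR quantity manipulated above can be sketched in a few lines; the coupled 4-week definition and the example distances are assumptions for illustration, not the study's exact implementation.

```python
# Illustrative acute:chronic workload ratio (ACWR) for a weekly
# running-distance series. The coupled 4-week rolling average used
# here is one common definition, assumed for illustration.
def acwr(weekly_loads):
    """ACWR = last week's load / rolling 4-week average load."""
    acute = weekly_loads[-1]
    window = weekly_loads[-4:]
    chronic = sum(window) / len(window)
    return acute / chronic

history = [30.0, 32.0, 34.0, 36.0]     # km per week, steady build
print(round(acwr(history), 2))          # modest ratio, inside the band

history.append(55.0)                    # sudden spike in weekly distance
print(round(acwr(history), 2))          # ratio jumps above 1.3
```

A spike in the acute week drives the ratio out of the commonly cited 0.8-1.3 'sweet spot' band, which is the kind of rule an agent in such a simulation can monitor and react to.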
The Trick Simulation Toolkit: A NASA/Open source Framework for Running Time Based Physics Models
Penn, John M.; Lin, Alexander S.
2016-01-01
This paper describes the design and use of the Trick Simulation Toolkit, a simulation development environment for creating high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. It describes Trick's design goals and how the development environment attempts to achieve those goals. It describes how Trick is used in some of the many training and engineering simulations at NASA. Finally, it describes the Trick NASA/Open source project on Github.
Damage Propagation Modeling for Aircraft Engine Run-to-Failure Simulation
National Aeronautics and Space Administration — This paper describes how damage propagation can be modeled within the modules of aircraft gas turbine engines. To that end, response surfaces of all sensors are...
Jongschaap, R.E.E.
2006-01-01
Dynamic simulation models may enable farmers to evaluate crop and soil management strategies, or may trigger such strategies when used as warning systems, e.g. for drought risk and nutrient shortage. Predictions by simulation models may differ from field
Energy Technology Data Exchange (ETDEWEB)
Martensen, Nis; Troester, Eckehard [energynautics GmbH, Langen (Germany); Lund, Per [Energinet.dk, Fredericia (Denmark); Holland, Rod [Spirae Inc., Fort Collins, CO (United States)
2009-07-01
In emergency situations, the Cell Controller disconnects a distribution grid from the high-voltage network and controls the cell's island operation. The controller thus activates the existing local generation plants to improve the security of supply. The Cell Controller can operate the Cell as a Virtual Power Plant during normal grid-connected operation, thereby implementing an exemplary Smart Grid. Modeling and simulation work is presented. (orig.)
Humans running in place on water at simulated reduced gravity.
Directory of Open Access Journals (Sweden)
Alberto E Minetti
Full Text Available BACKGROUND: On Earth only a few legged species, such as water strider insects, some aquatic birds and lizards, can run on water. For most other species, including humans, this is precluded by body size and proportions, lack of appropriate appendages, and limited muscle power. However, if gravity is reduced to less than Earth's gravity, running on water should require less muscle power. Here we use a hydrodynamic model to predict the gravity levels at which humans should be able to run on water. We test these predictions in the laboratory using a reduced gravity simulator. METHODOLOGY/PRINCIPAL FINDINGS: We adapted a model equation, previously used by Glasheen and McMahon to explain the dynamics of the basilisk lizard, to predict the body mass, stride frequency and gravity necessary for a person to run on water. Progressive body-weight unloading of a person running in place on a wading pool confirmed the theoretical predictions that a person could run on water, at lunar (or lower) gravity levels, using relatively small rigid fins. Three-dimensional motion capture of reflective markers on major joint centers showed that humans, similarly to the basilisk lizard and the western grebe, keep the head-trunk segment at a nearly constant height, despite the high stride frequency and the intensive locomotor effort. Trunk stabilization at a nearly constant height differentiates running on water from other, more usual human gaits. CONCLUSIONS/SIGNIFICANCE: The results showed that a hydrodynamic model of lizards running on water can also be applied to humans, despite the enormous difference in body size and morphology.
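In the spirit of the hydrodynamic model above, a back-of-the-envelope check asks whether the stride-averaged drag on a small rigid fin can support a runner's weight at reduced gravity. All parameter values below (fin area, foot speed, drag coefficient, duty factor) are assumptions for illustration, not the paper's fitted values.

```python
# Rough check: stride-averaged quadratic drag on one fin versus the
# gravity level it could support. All numbers are assumed for
# illustration, not the paper's fitted values.
RHO = 1000.0        # water density, kg/m^3

def mean_support_force(area, foot_speed, c_d=1.1, duty=0.35):
    """Stride-averaged vertical force from quadratic drag on one foot."""
    drag = 0.5 * RHO * c_d * area * foot_speed ** 2
    return duty * drag

m = 70.0                                                  # runner mass, kg
force = mean_support_force(area=0.025, foot_speed=3.0)    # small rigid fin
g_supportable = force / m                                 # m/s^2
print(round(g_supportable, 2), round(g_supportable / 9.81, 2))
```

With these assumed numbers the supportable gravity comes out well below lunar gravity, consistent in order of magnitude with the abstract's conclusion that water running becomes feasible only at strongly reduced gravity.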
1979-12-01
An econometric model is developed which provides long-run policy analysis and forecasting of annual trends, for U.S. auto stock, new sales, and their composition by auto size-class. The concept of "desired" (equilibrium) stock is introduced. "Desired...
ATLAS simulation of boson plus jets processes in Run 2
The ATLAS collaboration
2017-01-01
This note describes the ATLAS simulation setup used to model the production of single electroweak bosons ($W$, $Z\\gamma^\\ast$ and prompt $\\gamma$) in association with jets in proton--proton collisions at centre-of-mass energies of 8 and 13 TeV. Several Monte Carlo generator predictions are compared in regions of phase space relevant for data analyses during the LHC Run-2, or compared to unfolded data distributions measured in previous Run-1 or early Run-2 ATLAS analyses. Comparisons are made for regions of phase space with or without additional requirements on the heavy-flavour content of the accompanying jets, as well as electroweak $Vjj$ production processes. Both higher-order corrections and systematic uncertainties are also discussed.
Giving students the run of sprinting models
Heck, André; Ellermeijer, Ton
2009-11-01
A biomechanical study of sprinting is an interesting task for students who have a background in mechanics and calculus. These students can work with real data and do practical investigations similar to the way sports scientists do research. Student research activities are viable when the students are familiar with tools to collect and work with data from sensors and video recordings and with modeling tools for comparing simulation and experimental results. This article describes a multipurpose system, named COACH, that offers a versatile integrated set of tools for learning, doing, and teaching mathematics and science in a computer-based inquiry approach. Automated tracking of reference points and correction of perspective distortion in videos, state-of-the-art algorithms for data smoothing and numerical differentiation, and graphical system dynamics based modeling are some of the built-in techniques that are suitable for motion analysis. Their implementation and their application in student activities involving models of running are discussed.
Polarization simulations in the RHIC run 15 lattice
Energy Technology Data Exchange (ETDEWEB)
Meot, F. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Huang, H. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Luo, Y. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Ranjbar, V. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Robert-Demolaize, G. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; White, S. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.
2015-05-03
RHIC polarized proton Run 15 uses a new acceleration ramp optics, compared to RHIC Run 13 and earlier runs, in relation with electron-lens beam-beam compensation developments. The new optics induces different strengths in the depolarizing snake resonance sequence, from injection to top energy. As a consequence, polarization transport along the new ramp has been investigated, based on spin tracking simulations. Sample results are reported and discussed.
How Many Times Should One Run a Computational Simulation?
DEFF Research Database (Denmark)
Seri, Raffaello; Secchi, Davide
2017-01-01
This chapter is an attempt to answer the question “how many runs of a computational simulation should one do,” and it gives an answer by means of statistical analysis. After defining the nature of the problem and which types of simulation are mostly affected by it, the article introduces statistical...
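The chapter's question can be approximated with a standard textbook sample-size bound (not necessarily the chapter's own procedure): choose the number of runs n so that the normal-approximation confidence half-width z·σ/√n falls below a chosen tolerance.

```python
import math

# Textbook sample-size bound for the number of independent simulation
# runs: n such that z * sigma / sqrt(n) <= tolerance. This is a generic
# illustration, not necessarily the chapter's own procedure.
def runs_needed(sigma, tolerance, z=1.96):
    """Runs needed for a 95% CI (by default) of half-width <= tolerance."""
    return math.ceil((z * sigma / tolerance) ** 2)

# Output standard deviation ~5.0, mean wanted to within +/- 0.5:
print(runs_needed(sigma=5.0, tolerance=0.5))   # 385 runs
```

Note the quadratic cost: halving the tolerance quadruples the required number of runs, which is why run counts chosen by habit (e.g. "30 runs") are often far too few.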
Vrnak, Daniel R.; Stueber, Thomas J.; Le, Dzu K.
2012-01-01
This report presents a method for running a dynamic legacy inlet simulation in concert with another dynamic simulation that uses a graphical interface. The legacy code, NASA's LArge Perturbation INlet (LAPIN) model, was coded using the FORTRAN 77 (The Portland Group, Lake Oswego, OR) programming language to run in a command shell similar to other applications that used the Microsoft Disk Operating System (MS-DOS) (Microsoft Corporation, Redmond, WA). Simulink (MathWorks, Natick, MA) is a dynamic simulation environment that runs on a modern graphical operating system. The product of this work has both simulations, LAPIN and Simulink, running synchronously on the same computer with periodic data exchanges. Implementing the method described in this paper avoided extensive changes to the legacy code and preserved its basic operating procedure. This paper presents a novel method that promotes inter-task data communication between the synchronously running processes.
Numerical Modelling of Wave Run-Up
DEFF Research Database (Denmark)
Ramirez, Jorge Robert Rodriguez; Frigaard, Peter; Andersen, Thomas Lykke
2011-01-01
Wave loads are important in problems related to offshore structures, such as wave run-up and slamming. The computation of such wave problems is carried out by CFD models. This paper presents one model, NS3, which solves the 3D Navier-Stokes equations and uses the Volume of Fluid (VOF) method to treat the free...
Running-mass inflation model and WMAP
International Nuclear Information System (INIS)
Covi, Laura; Lyth, David H.; Melchiorri, Alessandro; Odman, Carolina J.
2004-01-01
We consider the observational constraints on the running-mass inflationary model, and, in particular, on the scale dependence of the spectral index, from the new cosmic microwave background (CMB) anisotropy measurements performed by WMAP and from new clustering data from the SLOAN survey. We find that the data strongly constrain a significant positive scale dependence of n, and we translate the analysis into bounds on the physical parameters of the inflaton potential. Looking deeper into specific types of interaction (gauge and Yukawa) we find that the parameter space is significantly constrained by the new data, but that the running-mass model remains viable
Aviation Safety Simulation Model
Houser, Scott; Yackovetsky, Robert (Technical Monitor)
2001-01-01
The Aviation Safety Simulation Model is a software tool that enables users to configure a terrain, a flight path, and an aircraft and simulate the aircraft's flight along the path. The simulation monitors the aircraft's proximity to terrain obstructions, and reports when the aircraft violates accepted minimum distances from an obstruction. This model design facilitates future enhancements to address other flight safety issues, particularly air and runway traffic scenarios. This report shows the user how to build a simulation scenario and run it. It also explains the model's output.
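The proximity monitoring described above can be sketched as a clearance check along a sampled flight path; the terrain function, the path, and the 150 m minimum clearance below are invented illustration values, not the tool's actual configuration.

```python
# Sketch of a terrain-proximity check: sample points along a flight
# path and flag any point closer to the terrain surface than a minimum
# clearance. Terrain, path, and clearance are illustration values.
def terrain_height(x):
    return 100.0 if 3.0 <= x <= 5.0 else 0.0   # a simple mesa, metres

def check_path(path, min_clearance=150.0):
    """Return indices of path samples that violate the clearance."""
    violations = []
    for i, (x, alt) in enumerate(path):
        if alt - terrain_height(x) < min_clearance:
            violations.append(i)
    return violations

# Level flight at 200 m crosses the mesa, where clearance drops to 100 m:
path = [(float(x), 200.0) for x in range(8)]
print(check_path(path))   # samples over the mesa are flagged
```

Reporting the flagged sample indices (or their positions) is exactly the kind of violation report the abstract describes, and the same loop generalises to 3-D terrain grids.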
Running vacuum cosmological models: linear scalar perturbations
Energy Technology Data Exchange (ETDEWEB)
Perico, E.L.D. [Instituto de Física, Universidade de São Paulo, Rua do Matão 1371, CEP 05508-090, São Paulo, SP (Brazil); Tamayo, D.A., E-mail: elduartep@usp.br, E-mail: tamayo@if.usp.br [Departamento de Astronomia, Universidade de São Paulo, Rua do Matão 1226, CEP 05508-900, São Paulo, SP (Brazil)
2017-08-01
In cosmology, phenomenologically motivated expressions for running vacuum are commonly parameterized as linear functions, typically denoted by Λ(H²) or Λ(R). Such models assume an equation of state for the vacuum given by p̄_Λ = −ρ̄_Λ, relating its background pressure p̄_Λ with its mean energy density ρ̄_Λ ≡ Λ/8πG. This equation of state suggests that the vacuum dynamics is due to an interaction with the matter content of the universe. Most of the approaches studying the observational impact of these models only consider the interaction between the vacuum and the transient dominant matter component of the universe. We extend such models by assuming that the running vacuum is the sum of independent contributions, namely ρ̄_Λ = Σ_i ρ̄_{Λi}. Each Λ_i vacuum component is associated and interacting with one of the i matter components at both the background and perturbation levels. We derive the evolution equations for the linear scalar vacuum and matter perturbations in those two scenarios, and identify the running-vacuum imprints on the cosmic microwave background anisotropies as well as on the matter power spectrum. In the Λ(H²) scenario the vacuum is coupled with every matter component, whereas the Λ(R) description only leads to a coupling between vacuum and non-relativistic matter, producing different effects on the matter power spectrum.
Model for radionuclide transport in running waters
Energy Technology Data Exchange (ETDEWEB)
Jonsson, Karin; Elert, Mark [Kemakta Konsult AB, Stockholm (Sweden)
2005-11-15
Two sites in Sweden are currently under investigation by SKB for their suitability as sites for a deep repository of radioactive waste: the Forsmark and Simpevarp/Laxemar areas. As a part of the safety assessment, SKB has formulated a biosphere model with different sub-models for different parts of the ecosystem in order to be able to predict the dose to humans following a possible radionuclide discharge from a future deep repository. In this report, a new model concept describing radionuclide transport in streams is presented. The main difference from the previous model for running water used by SKB, where only dilution of the inflow of radionuclides was considered, is that the new model includes parameterizations also of the exchange processes present along the stream. This is done in order to be able to investigate the effect of the retention on the transport and to be able to estimate the resulting concentrations in the different parts of the system. The concentrations determined with this new model could later be used for order of magnitude predictions of the dose to humans. The presented model concept is divided in two parts, one hydraulic and one radionuclide transport model. The hydraulic model is used to determine the flow conditions in the stream channel and is based on the assumption of uniform flow and quasi-stationary conditions. The results from the hydraulic model are used in the radionuclide transport model where the concentration is determined in the different parts of the stream ecosystem. The exchange processes considered are exchange with the sediments due to diffusion, advective transport and sedimentation/resuspension and uptake of radionuclides in biota. Transport of both dissolved radionuclides and sorbed onto particulates is considered. Sorption kinetics in the stream water phase is implemented as the time scale of the residence time in the stream water is probably short in comparison to the time scale of the kinetic sorption. In the sediment
Model for radionuclide transport in running waters
International Nuclear Information System (INIS)
Jonsson, Karin; Elert, Mark
2005-11-01
Two sites in Sweden are currently under investigation by SKB for their suitability as sites for a deep repository of radioactive waste: the Forsmark and Simpevarp/Laxemar areas. As a part of the safety assessment, SKB has formulated a biosphere model with different sub-models for different parts of the ecosystem in order to be able to predict the dose to humans following a possible radionuclide discharge from a future deep repository. In this report, a new model concept describing radionuclide transport in streams is presented. The main difference from the previous model for running water used by SKB, where only dilution of the inflow of radionuclides was considered, is that the new model includes parameterizations also of the exchange processes present along the stream. This is done in order to be able to investigate the effect of the retention on the transport and to be able to estimate the resulting concentrations in the different parts of the system. The concentrations determined with this new model could later be used for order of magnitude predictions of the dose to humans. The presented model concept is divided in two parts, one hydraulic and one radionuclide transport model. The hydraulic model is used to determine the flow conditions in the stream channel and is based on the assumption of uniform flow and quasi-stationary conditions. The results from the hydraulic model are used in the radionuclide transport model where the concentration is determined in the different parts of the stream ecosystem. The exchange processes considered are exchange with the sediments due to diffusion, advective transport and sedimentation/resuspension and uptake of radionuclides in biota. Transport of both dissolved radionuclides and sorbed onto particulates is considered. Sorption kinetics in the stream water phase is implemented as the time scale of the residence time in the stream water is probably short in comparison to the time scale of the kinetic sorption. In the sediment
Numerical simulation of transoceanic propagation and run-up of tsunami
Energy Technology Data Exchange (ETDEWEB)
Cho, Yong-Sik; Yoon Sung-Bum [Hanyang University, Seoul(Korea)
2001-04-30
The propagation and associated run-up process of tsunami are numerically investigated in this study. A transoceanic propagation model is first used to simulate the distant propagation of tsunamis. An inundation model is then employed to simulate the subsequent run-up process near the coastline. A case study is done for the 1960 Chilean tsunami. A detailed maximum inundation map at Hilo Bay is obtained and compared with field observations and other numerical model predictions. A very reasonable agreement is observed. (author). refs., tabs., figs.
The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code
Directory of Open Access Journals (Sweden)
Susanne Kunkel
2017-06-01
Full Text Available NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
COMPARISON OF METHODS FOR SIMULATING TSUNAMI RUN-UP THROUGH COASTAL FORESTS
Directory of Open Access Journals (Sweden)
Benazir
2017-09-01
The research is aimed at reviewing two numerical methods for modeling the effect of coastal forests on tsunami run-up and at proposing an alternative approach. The two methods, the Constant Roughness Model (CRM) and the Equivalent Roughness Model (ERM), simulate the effect of the forest by using an artificial Manning roughness coefficient. An alternative approach that simulates each tree as a vertical square column is introduced. Simulations were carried out with variations of forest density and layout pattern of the trees. The numerical model was validated using an existing data series of tsunami run-up without forest protection. The study indicated that the alternative method is in good agreement with the ERM method for low forest density. At higher density, and when the trees were planted in a zigzag pattern, the ERM produced significantly higher run-up. For a zigzag pattern at 50% forest density, which represents a watertight wall, both the ERM and CRM methods produced relatively high run-up, which should not happen theoretically. The alternative method, on the other hand, reflected the entire tsunami. In reality, a housing complex can be considered and simulated as a forest with various sizes and layouts of obstacles, where the alternative approach is applicable. The alternative method is more accurate than the existing methods for simulating a coastal forest for tsunami mitigation but consumes considerably more computational time.
1-D blood flow modelling in a running human body.
Szabó, Viktor; Halász, Gábor
2017-07-01
In this paper an attempt was made to simulate blood flow in a mobile human arterial network, specifically, in a running human subject. In order to simulate the effect of motion, a previously published immobile 1-D model was modified by including an inertial force term in the momentum equation. To calculate the inertial force, gait analysis was performed at different levels of speed. Our results show that motion has a significant effect on the amplitudes of the blood pressure and flow rate, but the average values are not affected significantly.
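The modification described, adding an inertial body-force term to the 1-D momentum equation, can be illustrated with a toy explicit update. The symbols and the discretisation here are assumptions for illustration, not the authors' scheme: Q is flow rate, A cross-sectional area, dp/dx the pressure gradient, and a_t the frame acceleration obtained from gait analysis.

```python
def momentum_update(Q, A, dpdx, a_t, rho=1050.0, dt=1e-4):
    """One explicit Euler step of a simplified 1-D momentum equation
    in a non-inertial frame attached to the running body:
        dQ/dt = -(A/rho) * dp/dx - A * a_t
    The last term is the inertial force added by the moving frame.
    rho is blood density in kg/m^3; SI units throughout."""
    dQdt = -(A / rho) * dpdx - A * a_t
    return Q + dt * dQdt
```

With a_t = 0 this reduces to the immobile model's update; an oscillating a_t (stride frequency) perturbs the flow-rate amplitude while leaving its mean largely unchanged, consistent with the reported result.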
Coupling methods for parallel running RELAPSim codes in nuclear power plant simulation
Energy Technology Data Exchange (ETDEWEB)
Li, Yankai; Lin, Meng, E-mail: linmeng@sjtu.edu.cn; Yang, Yanhua
2016-02-15
When a plant is modeled in detail for high precision, it is hard to achieve real-time calculation with a single RELAP5 instance in a large-scale simulation. To improve speed while preserving precision, coupling methods for parallel running RELAPSim codes were proposed in this study. An explicit coupling method via coupling boundaries was realized based on a data-exchange and procedure-control environment. The synchronization frequency was chosen as a compromise to improve the precision of the simulation while guaranteeing real-time execution. The coupling methods were assessed using both single-phase and two-phase flow models, and good agreement was obtained between the splitting–coupling models and the integrated model. The mitigation of an SGTR was performed as an integral application of the coupling models. A large-scope NPP simulator was developed adopting six splitting–coupling models of RELAPSim and other simulation codes. The coupling models improved the speed of simulation significantly and made real-time calculation possible. In this paper, the coupling of the models in the engineering simulator is taken as an example to expound the coupling methods, i.e., coupling between parallel running RELAPSim codes, and coupling between RELAPSim code and other types of simulation codes. The coupling methods are also applicable in other simulators, for example, one employing ATHLET instead of RELAP5, or other logic codes instead of SIMULINK. The coupling method is believed to be generally applicable to NPP simulators regardless of the specific codes chosen in this paper.
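Explicit coupling via boundary exchange at a chosen synchronization frequency can be sketched generically. The class and method names below are hypothetical stand-ins, not RELAP5 or SIMULINK interfaces; the point is the control flow: both models advance independently and swap boundary data only every `sync_every` steps.

```python
class Model:
    """Toy stand-in for one parallel code instance."""
    def __init__(self, value=0.0):
        self.value = value   # internal state advanced each step
        self.inlet = 0.0     # boundary condition received from the peer

    def advance(self):
        self.value += 1.0    # placeholder for one time step

    def boundary_state(self):
        return self.value

    def set_boundary(self, bc):
        self.inlet = bc

def run_coupled(model_a, model_b, steps, sync_every):
    """Advance two models in lockstep, exchanging boundary data at a
    fixed synchronization frequency. A larger sync_every runs faster
    but lets the boundary data grow staler (precision/speed trade-off)."""
    for step in range(steps):
        model_a.advance()
        model_b.advance()
        if (step + 1) % sync_every == 0:
            bc_a = model_a.boundary_state()
            bc_b = model_b.boundary_state()
            model_a.set_boundary(bc_b)
            model_b.set_boundary(bc_a)
```

In a real simulator the exchange would go through the data-exchange environment the paper describes, and `sync_every` would be tuned against the real-time constraint.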
How to help CERN to run more simulations
The LHC@home team
2016-01-01
With LHC@home you can actively contribute to the computing capacity of the Laboratory! You may think that CERN's large Data Centre and the Worldwide LHC Computing Grid have enough computing capacity for all the Laboratory’s users. However, given the massive amount of data coming from LHC experiments and other sources, additional computing resources are always needed, notably for simulations of physics events, or accelerator and detector upgrades. This is an area where you can help, by installing BOINC and running simulations from LHC@home on your office PC or laptop. These background simulations will not disturb your work, as BOINC can be configured to automatically stop computing when your PC is in use. As mentioned in earlier editions of the Bulletin (see here and here), contributions from LHC@home volunteers have played a major role in LHC beam simulation studies. The computing capacity they made available corresponds to about half the capacity of the CERN...
Simulating three dimensional wave run-up over breakwaters covered by antifer units
Najafi-Jilani, A.; Niri, M. Zakiri; Naderi, Nader
2014-06-01
The paper presents a numerical analysis of wave run-up over rubble-mound breakwaters covered by antifer units, using a technique integrating Computer-Aided Design (CAD) and Computational Fluid Dynamics (CFD) software. Direct application of the Navier-Stokes equations within the armour blocks is used to provide a more reliable approach to simulating wave run-up over breakwaters. A well-tested Reynolds-averaged Navier-Stokes (RANS) Volume of Fluid (VOF) code (Flow-3D) was adopted for the CFD computations. The computed results were compared with experimental data to check the validity of the model. Numerical results showed that the direct three-dimensional (3D) simulation method can deliver accurate results for wave run-up over rubble-mound breakwaters. The results showed that the placement pattern of antifer units had a great impact on wave run-up values: changing the placement pattern from regular to double pyramid can reduce the wave run-up by approximately 30%. Analysis was done to investigate the influences of surface roughness, energy dissipation in the pores of the armour layer, and reduced wave run-up due to inflow into the armour and stone layers.
runDM: Running couplings of Dark Matter to the Standard Model
D'Eramo, Francesco; Kavanagh, Bradley J.; Panci, Paolo
2018-02-01
runDM calculates the running of the couplings of Dark Matter (DM) to the Standard Model (SM) in simplified models with vector mediators. By specifying the mass of the mediator and the couplings of the mediator to SM fields at high energy, the code can calculate the couplings at low energy, taking into account the mixing of all dimension-6 operators. runDM can also extract the operator coefficients relevant for direct detection, namely low energy couplings to up, down and strange quarks and to protons and neutrons.
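The kind of calculation runDM performs, evolving a vector of operator coefficients from a high scale down to a low scale under an anomalous-dimension matrix that mixes the operators, can be sketched generically. This is not the runDM API; the leading-log form and a constant, diagonalisable gamma matrix are simplifying assumptions for illustration.

```python
import numpy as np

def run_couplings(c_high, gamma, mu_high, mu_low):
    """Leading-log RG evolution of operator coefficients c:
        dc/dlog(mu) = gamma @ c / (16 pi^2)
    solved by matrix exponential, so that
        c(mu_low) = exp(gamma * t) @ c(mu_high),
    with t = log(mu_low/mu_high) / (16 pi^2).
    gamma encodes the mixing of the dimension-6 operators."""
    t = np.log(mu_low / mu_high) / (16 * np.pi**2)
    # Matrix exponential via eigendecomposition (gamma assumed diagonalisable).
    w, V = np.linalg.eig(gamma * t)
    return (V @ np.diag(np.exp(w)) @ np.linalg.inv(V) @ c_high).real
```

The real code additionally matches onto the low-energy quark and nucleon couplings relevant for direct detection; the sketch only shows the evolution-and-mixing step.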
Experience gained in running the EPRI MMS code with an in-house simulation language
International Nuclear Information System (INIS)
Weber, D.S.
1987-01-01
The EPRI Modular Modeling System (MMS) code represents a collection of component models and a steam/water properties package. This code has undergone extensive verification and validation testing. Currently, the code requires a commercially available simulation language to run. The Philadelphia Electric Company (PECO) has been modeling power plant systems over the past sixteen years. As a result, an extensive number of models have been developed, and an extensive amount of experience has been gained using an in-house simulation language. The objective of this study was to explore the possibility of developing an MMS pre-processor which would allow the use of the MMS package with other simulation languages, such as the PECO in-house simulation language.
AEGIS geologic simulation model
International Nuclear Information System (INIS)
Foley, M.G.
1982-01-01
The Geologic Simulation Model (GSM) is used by the AEGIS (Assessment of Effectiveness of Geologic Isolation Systems) program at the Pacific Northwest Laboratory to simulate the dynamic geology and hydrology of a geologic nuclear waste repository site over a million-year period following repository closure. The GSM helps to organize geologic/hydrologic data; to focus attention on active natural processes by requiring their simulation; and, through interactive simulation and calibration, to reduce subjective evaluations of the geologic system. During each computer run, the GSM produces a million-year geologic history that is possible for the region and the repository site. In addition, the GSM records in permanent history files everything that occurred during that time span. Statistical analyses of data in the history files of several hundred simulations are used to classify typical evolutionary paths, to establish the probabilities associated with deviations from the typical paths, and to determine which types of perturbations of the geologic/hydrologic system, if any, are most likely to occur. These simulations will be evaluated by geologists familiar with the repository region to determine validity of the results. Perturbed systems that are determined to be the most realistic, within whatever probability limits are established, will be used for the analyses that involve radionuclide transport and dose models. The GSM is designed to be continuously refined and updated. Simulation models are site specific, and, although the submodels may have limited general applicability, the input data requirements necessitate detailed characterization of each site before application.
Development of Fast-Running Simulation Methodology Using Neural Networks for Load Follow Operation
International Nuclear Information System (INIS)
Seong, Seung-Hwan; Park, Heui-Youn; Kim, Dong-Hoon; Suh, Yong-Suk; Hur, Seop; Koo, In-Soo; Lee, Un-Chul; Jang, Jin-Wook; Shin, Yong-Chul
2002-01-01
A new fast-running analytic model has been developed for analyzing load follow operation. The new model is based on neural network theory, which has the capability of modeling the input/output relationships of a nonlinear system. The model is made up of two error back-propagation neural networks and procedures to calculate core parameters, such as the distributions and density of xenon, in a quasi-steady-state core as in load follow operation. One neural network is designed to retrieve the axial offset of the power distribution, and the other the reactivity corresponding to a given core condition. The training data sets for the neural networks are generated with a three-dimensional nodal code together with the measured data of the first-day test of load follow operation. Using the new model, the simulation results of the 5-day load follow test in a pressurized water reactor show good agreement between the simulated and the actual measured data. The computing time required for simulating a load follow operation is comparable to that of a fast-running lumped model. Moreover, the new model does not require additional engineering factors to compensate for the difference between the actual measurements and analysis results, because of the inherent capability of neural networks to learn new situations.
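The building block of such a model, a small error-back-propagation network mapping a core state vector to one scalar output (axial offset in one network, reactivity in the other), can be sketched as follows. The architecture, inputs, and training target here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

class TinyMLP:
    """One-hidden-layer regression network trained by plain
    error back-propagation (full-batch gradient descent on MSE)."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)  # hidden activations
        return self.h @ self.W2 + self.b2

    def train(self, X, y, lr=0.05, epochs=2000):
        for _ in range(epochs):
            err = self.forward(X) - y                 # (N, 1) residuals
            gW2 = self.h.T @ err / len(X)             # output-layer grads
            gb2 = err.mean(0)
            dh = (err @ self.W2.T) * (1 - self.h**2)  # back-propagated error
            gW1 = X.T @ dh / len(X)                   # hidden-layer grads
            gb1 = dh.mean(0)
            self.W2 -= lr * gW2; self.b2 -= lr * gb2
            self.W1 -= lr * gW1; self.b1 -= lr * gb1
```

In the paper's setting, one such network would be trained on nodal-code results plus first-day test measurements; here a toy target suffices to show the mechanics.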
Running-mass inflation model and primordial black holes
International Nuclear Information System (INIS)
Drees, Manuel; Erfani, Encieh
2011-01-01
We revisit the question whether the running-mass inflation model allows the formation of Primordial Black Holes (PBHs) that are sufficiently long-lived to serve as candidates for Dark Matter. We incorporate recent cosmological data, including the WMAP 7-year results. Moreover, we include ''the running of the running'' of the spectral index of the power spectrum, as well as the renormalization group ''running of the running'' of the inflaton mass term. Our analysis indicates that formation of sufficiently heavy, and hence long-lived, PBHs still remains possible in this scenario. As a by-product, we show that the additional term in the inflaton potential still does not allow significant negative running of the spectral index
Modelling surface run-off and trends analysis over India
Indian Academy of Sciences (India)
An exponential model was developed between the rainfall and the run-off that predicted the run-off with an R² of ... precipitation and other climate parameters is well documented ... Sen P K 1968 Estimates of the regression coefficient based ...
Tsunami generation, propagation, and run-up with a high-order Boussinesq model
DEFF Research Database (Denmark)
Fuhrman, David R.; Madsen, Per A.
2009-01-01
In this work we extend a high-order Boussinesq-type (finite difference) model, capable of simulating waves out to wavenumber times depth kh ... landslide-induced tsunamis. The extension is straightforward, requiring only ... The Boussinesq-type model is then used to simulate numerous tsunami-type events generated from submerged landslides, in both one and two horizontal dimensions. The results again compare well against previous experiments and/or numerical simulations. The new extension complements recently developed run ...
Simulation model of a PWR power plant
International Nuclear Information System (INIS)
Larsen, N.
1987-03-01
A simulation model of a hypothetical PWR power plant is described. A large number of disturbances and failures in plant function can be simulated. The model is written as seven modules to the modular simulation system for continuous processes DYSIM and serves also as a user example of this system. The model runs in Fortran 77 on the IBM-PC-AT. (author)
Dynamics of the in-run in ski jumping: a simulation study.
Ettema, Gertjan J C; Bråten, Steinar; Bobbert, Maarten F
2005-08-01
A ski jumper tries to maintain an aerodynamic position in the in-run during changing environmental forces. The purpose of this study was to analyze the mechanical demands on a ski jumper taking the in-run in a static position. We simulated the in-run in ski jumping with a 4-segment forward dynamic model (foot, leg, thigh, and upper body). The curved path of the in-run was used as a kinematic constraint, and drag, lift, and snow friction were incorporated. Drag and snow friction created a forward rotating moment that had to be counteracted by a plantar flexion moment and caused the line of action of the normal force to pass anteriorly to the center of mass continuously. The normal force increased from 0.88 G on the first straight to 1.65 G in the curve. The required knee joint moment increased more because of an altered center of pressure. During the transition from the straight to the curve there was a rapid forward shift of the center of pressure under the foot, reflecting a short but high angular acceleration. Because unrealistically high rates of change of moment are required, an athlete cannot do this without changing body configuration, which reduces the required rate of moment changes.
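The quoted rise in normal force from 0.88 G on the straight to 1.65 G in the curve follows from adding a centripetal term to the weight component normal to the track. A sketch with assumed speed, curve radius, and slope (not values from the paper):

```python
import math

def normal_force_g(v, radius, slope_deg, g=9.81):
    """Normal force on the track, in multiples of body weight (G).
    On a straight of inclination theta it is cos(theta); in the
    transition curve the centripetal term v^2/(g*r) is added."""
    return math.cos(math.radians(slope_deg)) + v**2 / (g * radius)
```

With assumed values of v = 25 m/s, curve radius 83 m, and slope 28 degrees, the straight gives about cos(28°) ≈ 0.88 G (the centripetal term vanishing as the radius grows) and the curve about 1.65 G, matching the magnitudes reported.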
Energy Technology Data Exchange (ETDEWEB)
Gowardhan, Akshay; Neuscamman, Stephanie; Donetti, John; Walker, Hoyt; Belles, Rich; Eme, Bill; Homann, Steven; Simpson, Matthew; Nasstrom, John [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC)]
2017-05-24
Aeolus is an efficient three-dimensional computational fluid dynamics code based on the finite volume method, developed for predicting transport and dispersion of contaminants in complex urban areas. It solves the time-dependent incompressible Navier-Stokes equations on a regular Cartesian staggered grid using a fractional step method. It also solves a scalar transport equation for temperature using the Boussinesq approximation. The model also includes a Lagrangian dispersion model for predicting the transport and dispersion of atmospheric contaminants. The model can be run in an efficient Reynolds-Averaged Navier-Stokes (RANS) mode with a run time of several minutes, or a more detailed Large Eddy Simulation (LES) mode with a run time of hours for a typical simulation. This report describes the model components, including details on the physics models used in the code, as well as several model validation efforts. Aeolus wind and dispersion predictions are compared to field data from the Joint Urban Field Trials 2003 conducted in Oklahoma City (Allwine et al 2004), including both continuous and instantaneous releases. Newly implemented Aeolus capabilities, including a decay chain model and an explosive Radiological Dispersal Device (RDD) source term, are described. Aeolus predictions using the buoyant explosive RDD source are validated against two experimental data sets: the Green Field explosive cloud rise experiments conducted in Israel (Sharon et al 2012) and the Full-Scale RDD Field Trials conducted in Canada (Green et al 2016).
Safety evaluation of the ITP filter/stripper test runs and quiet time runs using simulant solution
International Nuclear Information System (INIS)
Gupta, M.K.
1993-10-01
In-Tank Precipitation is a process for removing radioactivity from the salt stored in the Waste Management Tank Farm at Savannah River. The process involves precipitation of cesium and potassium with sodium tetraphenylborate (STPB) and adsorption of strontium and actinides on insoluble sodium titanate (ST) particles. The purpose of this report is to provide the technical bases for the evaluation of an Unreviewed Safety Question for the In-Tank Precipitation (ITP) Filter/Stripper Test Runs and Quiet Time Runs Program. The primary objective of the filter/stripper test runs and quiet time runs program is to ensure that the facility will fulfill its design basis function prior to the introduction of radioactive feed. Risks associated with the program are identified and include personnel and environmental hazards associated with handling the chemical simulants, the presence of flammable materials, and the potential for damage to the permanent ITP and Tank Farm facilities. The risks, potential accident scenarios, and safeguards either in place or planned are discussed at length.
Computing Models of CDF and D0 in Run II
International Nuclear Information System (INIS)
Lammel, S.
1997-05-01
The next collider run of the Fermilab Tevatron, Run II, is scheduled for autumn of 1999. Both experiments, the Collider Detector at Fermilab (CDF) and the D0 experiment, are being modified to cope with the higher luminosity and shorter bunch spacing of the Tevatron. New detector components, higher event complexity, and an increased data volume require changes from the data acquisition systems up to the analysis systems. In this paper we present a summary of the computing models of the two experiments for Run II.
Development of a fast running accident analysis computer program for use in a simulator
International Nuclear Information System (INIS)
Cacciabue, P.C.
1985-01-01
This paper describes how a reactor safety nuclear computer program can be modified and improved with the aim of producing a very fast running tool to be used as a physical model in a plant simulator, without penalizing the accuracy of results. It also discusses some ideas on how the physical theoretical model can be combined with a driving statistical tool for the build-up of the entire software package to be implemented in the simulator for risk and reliability analysis. The approach to the problem, although applied to a specific computer program, can be considered quite general if an already existing and well-tested code is used for the purpose. The computer program considered is ALMOD, originally developed for the analysis of the thermohydraulic and neutronic behaviour of the reactor core, primary circuit and steam generator during operational and special transients. (author)
A VRLA battery simulation model
International Nuclear Information System (INIS)
Pascoe, Phillip E.; Anbuky, Adnan H.
2004-01-01
A valve regulated lead acid (VRLA) battery simulation model is an invaluable tool for the standby power system engineer. The obvious use for such a model is to allow the assessment of battery performance. This may involve determining the influence of cells suffering from state of health (SOH) degradation on the performance of the entire string, or the running of test scenarios to ascertain the most suitable battery size for the application. In addition, it enables the engineer to assess the performance of the overall power system. This includes, for example, running test scenarios to determine the benefits of various load shedding schemes. It also allows the assessment of other power system components, either for determining their requirements and/or vulnerabilities. Finally, a VRLA battery simulation model is vital as a stand-alone tool for educational purposes. Despite the fundamentals of the VRLA battery having been established for over 100 years, its operating behaviour is often poorly understood. An accurate simulation model enables the engineer to gain a better understanding of VRLA battery behaviour. A system level multipurpose VRLA battery simulation model is presented. It allows an arbitrary battery (capacity, SOH, number of cells and number of strings) to be simulated under arbitrary operating conditions (discharge rate, ambient temperature, end voltage, charge rate and initial state of charge). The model accurately reflects the VRLA battery discharge and recharge behaviour. This includes the complex start of discharge region known as the coup de fouet.
12 weeks of simulated barefoot running changes foot-strike patterns in female runners.
McCarthy, C; Fleming, N; Donne, B; Blanksby, B
2014-05-01
To investigate the effect of a transition program of simulated barefoot running (SBR) on running kinematics and foot-strike patterns, female recreational athletes (n=9, age 29 ± 3 yrs) without SBR experience gradually increased running distance in Vibram FiveFingers SBR footwear over 12 weeks. Matched controls (n=10, age 30 ± 4 yrs) continued running in standard footwear. A 3-D motion analysis of treadmill running at 12 km/h was performed by both groups, barefoot and shod, pre- and post-intervention. Post-intervention data indicated a more-forefoot strike pattern in the SBR group compared to controls, both running barefoot (P>0.05) and shod; the SBR group shifted toward a forefoot strike pattern and "barefoot" kinematics, regardless of preferred footwear. © Georg Thieme Verlag KG Stuttgart · New York.
Thermally-aware composite run-time CPU power models
Walker, Matthew J.; Diestelhorst, Stephan; Hansson, Andreas; Balsamo, Domenico; Merrett, Geoff V.; Al-Hashimi, Bashir M.
2016-01-01
Accurate and stable CPU power modelling is fundamental in modern system-on-chips (SoCs) for two main reasons: 1) they enable significant online energy savings by providing a run-time manager with reliable power consumption data for controlling CPU energy-saving techniques; 2) they can be used as accurate and trusted reference models for system design and exploration. We begin by showing the limitations in typical performance monitoring counter (PMC) based power modelling approaches and illust...
Hittle, Elizabeth
2011-01-01
In small watersheds, runoff entering local waterways from large storms can cause rapid and profound changes in the streambed that can contribute to flooding. Wymans Run, a small stream in Cochranton Borough, Crawford County, experienced a large rain event in June 2008 that caused sediment to be deposited at a bridge. A hydrodynamic model, Flow and Sediment Transport and Morphological Evolution of Channels (FaSTMECH), which is incorporated into the U.S. Geological Survey Multi-Dimensional Surface-Water Modeling System (MD_SWMS) was constructed to predict boundary shear stress and velocity in Wymans Run using data from the June 2008 event. Shear stress and velocity values can be used to indicate areas of a stream where sediment, transported downstream, can be deposited on the streambed. Because of the short duration of the June 2008 rain event, streamflow was not directly measured but was estimated using U.S. Army Corps of Engineers one-dimensional Hydrologic Engineering Centers River Analysis System (HEC-RAS). Scenarios to examine possible engineering solutions to decrease the amount of sediment at the bridge, including bridge expansion, channel expansion, and dredging upstream from the bridge, were simulated using the FaSTMECH model. Each scenario was evaluated for potential effects on water-surface elevation, boundary shear stress, and velocity.
PEP Run Report for Simulant Shakedown/Functional Testing
Energy Technology Data Exchange (ETDEWEB)
Josephson, Gary B.; Geeting, John GH; Bredt, Ofelia P.; Burns, Carolyn A.; Golovich, Elizabeth C.; Guzman-Leong, Consuelo E.; Kurath, Dean E.; Sevigny, Gary J.
2009-12-29
Pacific Northwest National Laboratory (PNNL) has been tasked by Bechtel National Inc. (BNI) on the River Protection Project-Waste Treatment Plant (RPP-WTP) project to perform research and development activities to resolve technical issues identified for the Pretreatment Facility (PTF). The Pretreatment Engineering Platform (PEP) was designed, constructed, and operated as part of a plan to respond to issue M12, "Undemonstrated Leaching Processes." The PEP is a 1/4.5-scale test platform designed to simulate the WTP pretreatment caustic leaching, oxidative leaching, ultrafiltration solids concentration, and slurry washing processes. The PEP replicates the WTP leaching processes using prototypic equipment and control strategies. The PEP also includes non-prototypic ancillary equipment to support the core processing. Two operating scenarios are currently being evaluated for the ultrafiltration process (UFP) and leaching operations. The first scenario has caustic leaching performed in the UFP-2 ultrafiltration feed vessels (i.e., vessel UFP-VSL-T02A in the PEP; and vessels UFP-VSL-00002A and B in the WTP PTF). The second scenario has caustic leaching conducted in the UFP-1 ultrafiltration feed preparation vessels (i.e., vessels UFP-VSL-T01A and B in the PEP; vessels UFP-VSL-00001A and B in the WTP PTF). In both scenarios, 19-M sodium hydroxide solution (NaOH, caustic) is added to the waste slurry in the vessels to leach solid aluminum compounds (e.g., gibbsite, boehmite). Caustic addition is followed by a heating step that uses direct injection of steam to accelerate the leach process. Following the caustic leach, the vessel contents are cooled using vessel cooling jackets and/or external heat exchangers. The main difference between the two scenarios is that for leaching in UFP-1, the 19-M NaOH is added to un-concentrated waste slurry (3-8 wt% solids), while for leaching in UFP-2, the slurry is concentrated to nominally 20 wt% solids using cross-flow ultrafiltration
Semantic 3d City Model to Raster Generalisation for Water Run-Off Modelling
Verbree, E.; de Vries, M.; Gorte, B.; Oude Elberink, S.; Karimlou, G.
2013-09-01
Water run-off modelling applied within urban areas requires an appropriately detailed surface model represented by a raster height grid. Accurate simulations at this scale level have to take into account small but important water barriers and flow channels given by the large-scale map definitions of buildings, street infrastructure, and other terrain objects. Thus, these 3D features have to be rasterised such that each cell represents the height of the object class as well as possible given the cell size limitations. Small grid cells will result in realistic run-off modelling but with unacceptable computation times; larger grid cells with averaged height values will result in less realistic run-off modelling but fast computation times. This paper introduces a height grid generalisation approach in which the surface characteristics that most influence the water run-off flow are preserved. The first step is to create a detailed surface model (1:1.000), combining high-density laser data with a detailed topographic base map. The topographic map objects are triangulated to a set of TIN-objects by taking into account the semantics of the different map object classes. These TIN objects are then rasterised to two grids with a 0.5 m cell-spacing: one grid for the object class labels and the other for the TIN-interpolated height values. The next step is to generalise both raster grids to a lower resolution using a procedure that considers the class label of each cell and that of its neighbours. The results of this approach are tested and validated by water run-off model runs for different cell-spaced height grids at a pilot area in Amersfoort (the Netherlands). Two national datasets were used in this study: the large scale Topographic Base map (BGT, map scale 1:1.000), and the National height model of the Netherlands AHN2 (10 points per square meter on average). Comparison between the original AHN2 height grid and the semantically enriched and then generalised height grids shows ...
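The label-aware generalisation step, merging fine cells into coarser ones while letting run-off-relevant classes such as buildings win over terrain, can be sketched as follows. The priority ordering, the class names, and the simple 2x2 merge are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

# Classes that act as water barriers get higher priority (assumed order).
PRIORITY = {"building": 2, "road": 1, "terrain": 0}

def generalise(labels, heights):
    """Merge each 2x2 block of cells into one coarse cell.
    The highest-priority class in the block wins, and the coarse
    height is averaged over the winning cells only, so a building
    edge is not flattened away by surrounding terrain heights."""
    rows, cols = labels.shape
    out_l = np.empty((rows // 2, cols // 2), dtype=labels.dtype)
    out_h = np.empty((rows // 2, cols // 2))
    for i in range(0, rows, 2):
        for j in range(0, cols, 2):
            block_l = labels[i:i + 2, j:j + 2].ravel()
            block_h = heights[i:i + 2, j:j + 2].ravel()
            win = max(block_l, key=lambda c: PRIORITY[c])
            mask = block_l == win
            out_l[i // 2, j // 2] = win
            out_h[i // 2, j // 2] = block_h[mask].mean()
    return out_l, out_h
```

A plain height average would smear the barrier; restricting the mean to the winning class preserves the obstacle height that drives the run-off flow.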
Directory of Open Access Journals (Sweden)
Karsten Hollander
Possible benefits of barefoot running have been widely discussed in recent years. Uncertainty exists about which footwear strategy adequately simulates barefoot running kinematics. The objective of this study was to investigate the effects of athletic footwear with different minimalist strategies on running kinematics. Thirty-five distance runners (22 males, 13 females, 27.9 ± 6.2 years, 179.2 ± 8.4 cm, 73.4 ± 12.1 kg, 24.9 ± 10.9 km x week(-1)) performed a treadmill protocol at three running velocities (2.22, 2.78 and 3.33 m x s(-1)) using four footwear conditions: barefoot, uncushioned minimalist shoes, cushioned minimalist shoes, and standard running shoes. 3D kinematic analysis was performed to determine ankle and knee angles at initial foot-ground contact, rate of rear-foot strikes, stride frequency and step length. Ankle angle at foot strike, step length and stride frequency were significantly influenced by footwear conditions (p<0.001) at all running velocities. Post-hoc pairwise comparisons showed significant differences (p<0.001) between running barefoot and all shod situations as well as between the uncushioned minimalistic shoe and both cushioned shoe conditions. The rate of rear-foot strikes was lowest during barefoot running (58.6% at 3.33 m x s(-1)), followed by running with uncushioned minimalist shoes (62.9%), cushioned minimalist (88.6%) and standard shoes (94.3%). Aside from showing the influence of shod conditions on running kinematics, this study helps to elucidate differences between footwear marketed as minimalist shoes and their ability to mimic barefoot running adequately. These findings have implications for the use of footwear in future research on barefoot or minimalist shoe running.
New Constraints on the running-mass inflation model
Covi, Laura; Lyth, David H.; Melchiorri, Alessandro
2002-01-01
We evaluate new observational constraints on the two-parameter scale-dependent spectral index predicted by the running-mass inflation model by combining the latest Cosmic Microwave Background (CMB) anisotropy measurements with the recent 2dFGRS data on the matter power spectrum, with Lyman α forest data and finally with theoretical constraints on the reionization redshift. We find that present data still allow significant scale-dependence of n, which occurs in a physically reasonabl...
Black hole constraints on the running-mass inflation model
Leach, Samuel M; Grivell, Ian J; Liddle, Andrew R
2000-01-01
The running-mass inflation model, which has strong motivation from particle physics, predicts density perturbations whose spectral index is strongly scale-dependent. For a large part of parameter space the spectrum rises sharply to short scales. In this paper we compute the production of primordial black holes, using both analytic and numerical calculation of the density perturbation spectra. Observational constraints from black hole production are shown to exclude a large region of otherwise...
The running-mass inflation model and WMAP
Covi, Laura; Lyth, David H.; Melchiorri, Alessandro; Odman, Carolina J.
2004-01-01
We consider the observational constraints on the running-mass inflationary model, and in particular on the scale-dependence of the spectral index, from the new Cosmic Microwave Background (CMB) anisotropy measurements performed by WMAP and from new clustering data from the SLOAN survey. We find that the data strongly constrain any significant positive scale-dependence of n, and we translate the analysis into bounds on the physical parameters of the inflaton potential. Looking deeper into sp...
Hollander, Karsten; Argubi-Wollesen, Andreas; Reer, Rüdiger; Zech, Astrid
2015-01-01
Possible benefits of barefoot running have been widely discussed in recent years. Uncertainty exists about which footwear strategy adequately simulates barefoot running kinematics. The objective of this study was to investigate the effects of athletic footwear with different minimalist strategies on running kinematics. Thirty-five distance runners (22 males, 13 females, 27.9 ± 6.2 years, 179.2 ± 8.4 cm, 73.4 ± 12.1 kg, 24.9 ± 10.9 km x week(-1)) performed a treadmill protocol at three running velocities (2.22, 2.78 and 3.33 m x s(-1)) using four footwear conditions: barefoot, uncushioned minimalist shoes, cushioned minimalist shoes, and standard running shoes. 3D kinematic analysis was performed to determine ankle and knee angles at initial foot-ground contact, rate of rear-foot strikes, stride frequency and step length. Ankle angle at foot strike, step length and stride frequency were significantly influenced by footwear conditions (p<0.001) at all running velocities. Post-hoc pairwise comparisons showed significant differences (p<0.001) between running barefoot and all shod situations as well as between the uncushioned minimalistic shoe and both cushioned shoe conditions. The rate of rear-foot strikes was lowest during barefoot running (58.6% at 3.33 m x s(-1)), followed by running with uncushioned minimalist shoes (62.9%), cushioned minimalist (88.6%) and standard shoes (94.3%). Aside from showing the influence of shod conditions on running kinematics, this study helps to elucidate differences between footwear marketed as minimalist shoes and their ability to mimic barefoot running adequately. These findings have implications for the use of footwear in future research on barefoot or minimalist shoe running.
Brewer, Jeffrey David
The National Aeronautics and Space Administration is planning for long-duration manned missions to the Moon and Mars. For feasible long-duration space travel, improvements in exercise countermeasures are necessary to maintain cardiovascular fitness, bone mass throughout the body, and the ability to perform coordinated movements in a constant gravitational environment that is six orders of magnitude higher than the "near weightlessness" condition experienced during transit to and/or orbit of the Moon, Mars, and Earth. In such gravitational transitions, feedback and feedforward postural control strategies must be recalibrated to ensure optimal locomotion performance. In order to investigate methods of improving postural control adaptation during these gravitational transitions, a treadmill-based precision stepping task was developed to reveal changes in neuromuscular control of locomotion following both simulated partial gravity exposure and post-simulation exercise countermeasures designed to speed lower extremity impedance adjustment mechanisms. The exercise countermeasures included a short period of running with or without backpack loads immediately after partial gravity running. A novel suspension-type partial gravity simulator incorporating spring balancers and a motor-driven treadmill was developed to facilitate body weight off-loading and various gait patterns in both simulated partial and full gravitational environments. Studies have provided evidence suggesting that: the environmental simulator constructed for this thesis effort does induce locomotor adaptations following partial gravity running; the precision stepping task may be a helpful test for illuminating these adaptations; and musculoskeletal loading in the form of running with or without backpack loads may improve the locomotor adaptation process.
Running of radiative neutrino masses: the scotogenic model — revisited
Energy Technology Data Exchange (ETDEWEB)
Merle, Alexander; Platscher, Moritz [Max-Planck-Institut für Physik (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München (Germany)
2015-11-23
A few years ago, it was shown that effects stemming from renormalisation group running can be quite large in the scotogenic model, where neutrinos obtain their mass only via a 1-loop diagram (or, more generally, in many models in which the light neutrino mass is generated via quantum corrections at loop level). We present a new computation of the renormalisation group equations (RGEs) for the scotogenic model, thereby updating previous results. We discuss the matching in detail, in particular with regard to the different mass spectra possible for the new particles involved. We furthermore develop approximate analytical solutions to the RGEs for an extensive list of illustrative cases, covering all general tendencies that can appear in the model. Comparing them with fully numerical solutions, we give a comprehensive discussion of the running in the scotogenic model. Our approach is mainly top-down, but we also discuss an attempt to get information on the values of the fundamental parameters when inputting the low-energy measured quantities in a bottom-up manner. This work serves as the basis for a full parameter scan of the model, thereby relating its low- and high-energy phenomenology, in order to fully exploit the available information.
Comparing internal and external run-time coupling of CFD and building energy simulation software
Djunaedy, E.; Hensen, J.L.M.; Loomans, M.G.L.C.
2004-01-01
This paper describes a comparison between internal and external run-time coupling of CFD and building energy simulation software. Internal coupling can be seen as the "traditional" way of developing software, i.e. the capabilities of existing software are expanded by merging codes. With external
Rossetti, Manuel D
2015-01-01
Emphasizes a hands-on approach to learning statistical analysis and model building through the use of comprehensive examples, problems sets, and software applications With a unique blend of theory and applications, Simulation Modeling and Arena®, Second Edition integrates coverage of statistical analysis and model building to emphasize the importance of both topics in simulation. Featuring introductory coverage on how simulation works and why it matters, the Second Edition expands coverage on static simulation and the applications of spreadsheets to perform simulation. The new edition als
3D Finite Element Simulation of Micro End-Milling by Considering the Effect of Tool Run-Out
DEFF Research Database (Denmark)
Davoudinejad, Ali; Tosello, Guido; Parenti, Paolo
2017-01-01
Understanding the micro-milling phenomena involved in the process is critical, and difficult to achieve through physical experiments alone. This study presents a 3D finite element modeling (3D FEM) approach for the micro end-milling process on Al6082-T6. The proposed model employs a Lagrangian explicit finite...... element formulation to perform coupled thermo-mechanical transient analyses. FE simulations were performed at different cutting conditions to obtain realistic numerical predictions of chip formation, temperature distribution, and cutting forces by considering the effect of tool run-out in the model....... The predicted results of the model, involving the run-out influence, showed a good correlation with experimental chip formation and the signal shape of cutting forces....
International Nuclear Information System (INIS)
Gupta, M.K.
1994-06-01
The purpose is to provide the technical basis for the evaluation of an Unreviewed Safety Question for the In-Tank Precipitation (ITP) Filter/Stripper Test Runs (Ref. 7) and Quiet Time Runs Program (described in Section 3.6). The Filter/Stripper Test Runs and Quiet Time Runs program involves a 12,000 gallon feed tank containing an agitator, a 4,000 gallon flush tank, a variable speed pump, associated piping and controls, and equipment within both the Filter and the Stripper Building.
The effective Standard Model after LHC Run I
International Nuclear Information System (INIS)
Ellis, John; Sanz, Verónica; You, Tevong
2015-01-01
We treat the Standard Model as the low-energy limit of an effective field theory that incorporates higher-dimensional operators to capture the effects of decoupled new physics. We consider the constraints imposed on the coefficients of dimension-6 operators by electroweak precision tests (EWPTs), applying a framework for the effects of dimension-6 operators on electroweak precision tests that is more general than the standard S,T formalism, and use measurements of Higgs couplings and the kinematics of associated Higgs production at the Tevatron and LHC, as well as triple-gauge couplings at the LHC. We highlight the complementarity between EWPTs, Tevatron and LHC measurements in obtaining model-independent limits on the effective Standard Model after LHC Run 1. We illustrate the combined constraints with the example of the two-Higgs doublet model.
The Effective Standard Model after LHC Run I
Ellis, John; You, Tevong
2015-01-01
We treat the Standard Model as the low-energy limit of an effective field theory that incorporates higher-dimensional operators to capture the effects of decoupled new physics. We consider the constraints imposed on the coefficients of dimension-6 operators by electroweak precision tests (EWPTs), applying a framework for the effects of dimension-6 operators on electroweak precision tests that is more general than the standard $S,T$ formalism, and use measurements of Higgs couplings and the kinematics of associated Higgs production at the Tevatron and LHC, as well as triple-gauge couplings at the LHC. We highlight the complementarity between EWPTs, Tevatron and LHC measurements in obtaining model-independent limits on the effective Standard Model after LHC Run~1. We illustrate the combined constraints with the example of the two-Higgs doublet model.
New constraints on the running-mass inflation model
International Nuclear Information System (INIS)
Covi, L.; Lyth, D.H.; Melchiorri, A.
2002-10-01
We evaluate new observational constraints on the two-parameter scale-dependent spectral index predicted by the running-mass inflation model by combining the latest cosmic microwave background (CMB) anisotropy measurements with the recent 2dFGRS data on the matter power spectrum, with Lyman α forest data and finally with theoretical constraints on the reionization redshift. We find that present data still allow significant scale-dependence of n, which occurs in a physically reasonable regime of parameter space. (orig.)
Running scenarios using the Waste Tank Safety and Operations Hanford Site model
International Nuclear Information System (INIS)
Stahlman, E.J.
1995-11-01
Management of the Waste Tank Safety and Operations (WTS&O) at Hanford is a large and complex task encompassing 177 tanks and having a budget of over $500 million per year. To assist managers in this task, a model based on system dynamics was developed by the Massachusetts Institute of Technology. The model simulates the WTS&O at the Hanford Tank Farms by modeling the planning, control, and flow of work conducted by Managers, Engineers, and Crafts. The model is described in Policy Analysis of Hanford Tank Farm Operations with System Dynamics Approach (Kwak 1995b) and Management Simulator for Hanford Tank Farm Operations (Kwak 1995a). This document provides guidance for users of the model in developing, running, and analyzing results of management scenarios. The reader is assumed to have an understanding of the model and its operation. Important parameters and variables in the model are described, and two scenarios are formulated as examples.
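The planning/control/flow-of-work structure described above is characteristic of system-dynamics models. The following is a minimal stock-and-flow sketch in that spirit; the structure and all numbers are illustrative and are not taken from the MIT model: a work backlog (stock) is fed by incoming work and drained by staff throughput.

```python
def simulate_backlog(inflow, staff, rate=2.0, weeks=52, backlog0=100.0):
    """Euler-step a single stock (the work backlog) forward in time.
    inflow: new work arriving per week; rate: jobs one worker completes
    per week. Returns the backlog trajectory, week by week."""
    backlog, history = backlog0, []
    for _ in range(weeks):
        done = min(backlog, rate * staff)   # can't complete more than exists
        backlog += inflow - done
        history.append(backlog)
    return history
```

With the default numbers, throughput capacity (20/week) exceeds inflow (10/week), so the backlog drains linearly and then settles at the level where weekly completions equal weekly inflow; real system-dynamics models add many more stocks and feedback loops, but each is integrated this same way.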
Statistics for long irregular wave run-up on a plane beach from direct numerical simulations
Didenkulova, Ira; Senichev, Dmitry; Dutykh, Denys
2017-04-01
Very often for global and transoceanic events, due to the initial wave transformation, refraction, diffraction and multiple reflections from coastal topography and underwater bathymetry, the tsunami approaches the beach as a very long wave train, which can be considered as an irregular wave field. The prediction of possible flooding and of the properties of the water flow on the coast should in this case be done statistically, taking into account the formation of extreme (rogue) tsunami waves on a beach. When it comes to tsunami run-up on a beach, the most used mathematical model is the nonlinear shallow water model. For a beach of constant slope, the nonlinear shallow water equations have a rigorous analytical solution, which substantially simplifies the mathematical formulation. In (Didenkulova et al. 2011) we used this solution to study statistical characteristics of the vertical displacement of the moving shoreline and its horizontal velocity. The influence of the wave nonlinearity was approached by considering modifications of the probability distribution of the moving shoreline and its horizontal velocity for waves of different amplitudes. It was shown that wave nonlinearity did not affect the probability distribution of the velocity of the moving shoreline, while the vertical displacement of the moving shoreline was affected substantially, demonstrating the longer duration of coastal floods with an increase in the wave nonlinearity. However, this analysis did not take into account the actual transformation of the irregular wave field offshore to oscillations of the moving shoreline on a sloping beach. In this study we cover this gap by means of extensive numerical simulations. The modeling is performed in the framework of nonlinear shallow water equations, which are solved using a modern shock-capturing finite volume method. Although the shallow water model does not pursue the wave breaking and bore formation in a general sense (including the water surface
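Once a simulation of this kind produces a time series of the moving-shoreline displacement, its statistics can be summarised by the exceedance probability of individual run-up maxima. The sketch below is a hypothetical post-processing step, not code from the study: it takes run-up maxima to be local peaks of the shoreline record and counts how often they exceed given levels.

```python
import numpy as np

def runup_exceedance(shoreline, levels):
    """Empirical probability that an individual run-up maximum exceeds
    each of the given levels. A run-up maximum is taken as a strict local
    peak of the shoreline displacement time series."""
    s = np.asarray(shoreline, dtype=float)
    interior = s[1:-1]
    peaks = interior[(interior > s[:-2]) & (interior > s[2:])]
    return np.array([(peaks > z).mean() for z in levels])
```

For a distribution of run-up heights on a plane beach, comparing such an empirical curve against, e.g., a Rayleigh exceedance law is one way to quantify the effect of nonlinearity discussed in the abstract.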
Rohmer, Jeremy
2016-04-01
Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis require running the landslide model a high number of times (> 1000), which may become impracticable when the landslide model has a high computation time cost (> several hours); 2. Landslide model outputs are not scalar, but function of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components are then achieved through meta-modelling, i.e. by replacing the landslide model by a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity analysis of the surface horizontal displacements to the slip surface properties during the pore pressure changes. I show how to extract information on the sensitivity of each main modes of temporal behaviour using a limited number (a few tens) of long running simulations. In particular, I identify the parameters, which trigger the occurrence of a turning point marking a shift between a regime of low values of landslide displacements and one of high values.
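The "basis set expansion - meta-model - Sobol' indices" chain described above can be sketched compactly. This is a hedged illustration under simplifying assumptions: principal components stand in for the basis set expansion, and a crude binning estimator of Var(E[Y|X])/Var(Y) stands in for the projection-pursuit meta-model used in the article.

```python
import numpy as np

def first_order_sobol(x, y, bins=10):
    """Crude first-order sensitivity index Var(E[y|x]) / Var(y), estimated
    by averaging y within equal-probability bins of x."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    filled = [b for b in range(bins) if (idx == b).any()]
    means = np.array([y[idx == b].mean() for b in filled])
    weights = np.array([(idx == b).mean() for b in filled])
    return float(np.average((means - y.mean()) ** 2, weights=weights) / y.var())

def component_sensitivities(X, Y, n_comp=2, bins=10):
    """Reduce time-series outputs Y (runs x time steps) to n_comp principal
    components, then estimate the sensitivity of each component score to
    each input column of X."""
    Yc = Y - Y.mean(axis=0)
    U, S, _ = np.linalg.svd(Yc, full_matrices=False)
    scores = U[:, :n_comp] * S[:n_comp]   # dominant modes of temporal variation
    return np.array([[first_order_sobol(X[:, j], scores[:, k], bins)
                      for j in range(X.shape[1])]
                     for k in range(n_comp)])
```

On a synthetic ensemble where the first input drives the dominant temporal mode and the second only a weak secondary mode, the first component's score is found sensitive to the first input and nearly insensitive to the second, which is exactly the kind of per-mode attribution the article extracts from a few tens of long-running landslide simulations.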
Modelling the long-run supply of coal
International Nuclear Information System (INIS)
Steenblik, R.P.
1992-01-01
There are many issues facing policy-makers in the fields of energy and the environment that require knowledge of coal supply and cost. Such questions arise in relation to decisions concerning, for example, the discontinuation of subsidies, or the effects of new environmental laws. The very complexity of these questions makes them suitable for analysis by models. Indeed, models have been used for analysing the behaviour of coal markets and the effects of public policies on them for many years. For estimating short-term responses econometric models are the most suitable. For estimating the supply of coal over the longer term, however - i.e., coal that would come from mines as yet not developed - depletion has to be taken into account. Underlying the normal supply curve relating cost to the rate of production is a curve that increases with cumulative production - what mineral economists refer to as the potential supply curve. To derive such a curve requires at some point in the analysis using process-oriented modelling techniques. Because coal supply curves can convey so succinctly information about the resource's long-run supply potential and costs, they have been influential in several major public debates on energy policy. And, within the coal industry itself, they have proved to be powerful tools for undertaking market research and long-range planning. The purpose of this paper is to describe in brief the various approaches that have been used to model long-run coal supply, to highlight their strengths, and to identify areas in which further progress is needed. (author)
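The potential supply curve described above lends itself to a minimal sketch. The numbers are illustrative and the real analyses use process-oriented cost models, but the core construction is simple: sort resource blocks by unit cost and accumulate tonnage, so that cost becomes a non-decreasing step function of cumulative production.

```python
def potential_supply_curve(deposits):
    """Build a cumulative ('potential') supply curve from (unit_cost, tonnes)
    blocks: cheapest blocks are produced first, so each point gives the
    marginal cost at a level of cumulative production."""
    cum, curve = 0.0, []
    for cost, tonnes in sorted(deposits):
        cum += tonnes
        curve.append((cum, cost))
    return curve
```

For example, three hypothetical blocks of 50, 80 and 100 Mt at unit costs 10, 20 and 30 yield a three-step curve; depletion is captured because producing the first 50 Mt permanently removes the cheapest block.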
Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond
Bonacorsi, D; Giordano, D; Girone, M; Neri, M; Magini, N; Kuznetsov, V; Wildish, T
2015-01-01
During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collisions data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the WorldWide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulation of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This...
Simulation of accelerated strip cooling on the hot rolling mill run-out roller table
International Nuclear Information System (INIS)
Muhin, U.; Belskij, S.; Makarov, E.; Koinov, T.
2013-01-01
Full text: A mathematical model of the thermal state of the metal on the run-out roller table of a continuous wide hot-strip mill is presented. The mathematical model takes into account the heat generation during the polymorphic γ → α transformation of the supercooled austenite phase and the influence of chemical composition on the physical properties of the steel. The model allows the calculation of modes of accelerated cooling of strips on the run-out roller table of a continuous wide hot-strip mill. Winding temperature calculation error does not exceed 20 °C for 98.5% of the strips of low-carbon and low-alloyed steels. Keywords: hot rolled, wide-strip, accelerated cooling, run-out roller table, polymorphic transformation, mathematical modeling
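The key coupling in this model, cooling of the strip counteracted by heat released during the γ → α transformation, can be sketched with a lumped-parameter toy. All coefficients and the constant-rate release of latent heat inside a fixed temperature window are assumptions for illustration; the paper's model resolves the transformation kinetics and composition dependence properly.

```python
def cool_strip(T0, t_end, dt=0.01, h=1.2, T_w=30.0,
               L=80.0, T_s=780.0, T_f=680.0):
    """Explicit Euler integration of dT/dt = -h*(T - T_w) + q, where the
    latent-heat source q = L is active only while the strip temperature T
    lies inside the transformation window (T_f, T_s). Units are arbitrary."""
    T, t = T0, 0.0
    while t < t_end:
        q = L if T_f < T < T_s else 0.0   # transformation heat release
        T += dt * (-h * (T - T_w) + q)
        t += dt
    return T
```

Switching the latent heat off (L=0) yields a strictly lower final temperature, which is the qualitative effect the model must capture to predict the winding temperature within 20 °C.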
Simulation of accelerated strip cooling on the hot rolling mill run-out roller table
Directory of Open Access Journals (Sweden)
E.Makarov
2016-07-01
A mathematical model of the thermal state of the metal on the run-out roller table of a continuous wide hot-strip mill is presented. The mathematical model takes into account heat generation due to the polymorphic γ → α transformation of the supercooled austenite phase and the influence of the chemical composition of the steel on the physical properties of the metal. The model allows calculation of modes of accelerated cooling of strips on the run-out roller table of a continuous wide hot-strip mill. Winding temperature calculation error does not exceed 20 °C for 98.5% of strips of low-carbon and low-alloy steels
Massively parallel Monte Carlo. Experiences running nuclear simulations on a large condor cluster
International Nuclear Information System (INIS)
Tickner, James; O'Dwyer, Joel; Roach, Greg; Uher, Josef; Hitchen, Greg
2010-01-01
The trivially-parallel nature of Monte Carlo (MC) simulations makes them ideally suited to running on a distributed, heterogeneous computing environment. We report on the setup and operation of a large, cycle-harvesting Condor computer cluster, used to run MC simulations of nuclear instruments ('jobs') on approximately 4,500 desktop PCs. Successful operation must balance the competing goals of maximizing the availability of machines for running jobs whilst minimizing the impact on users' PC performance. This requires classification of jobs according to anticipated run-time and priority, and careful optimization of the parameters used to control job allocation to host machines. To maximize use of a large Condor cluster, we have created a powerful suite of tools to handle job submission and analysis, as the manual creation, submission and evaluation of large numbers (hundreds to thousands) of jobs would be too arduous. We describe some of the key aspects of this suite, which has been interfaced to the well-known MCNP and EGSnrc nuclear codes and our in-house PHOTON optical MC code. We report on our practical experiences of operating our Condor cluster and present examples of several large-scale instrument design problems that have been solved using this tool. (author)
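The classification of jobs by anticipated run-time and priority mentioned above can be illustrated with a small ordering sketch. The keying scheme and job names are assumptions for demonstration, not the authors' actual submission tools or Condor's scheduler: higher-priority jobs go first, and among equal priorities the shorter estimated run-time wins, so short jobs can harvest brief windows of idle desktop time.

```python
import heapq

def order_jobs(jobs):
    """Order (name, est_runtime, priority) submissions: highest priority
    first, then shortest estimated run-time. Uses a min-heap keyed on
    (-priority, est_runtime)."""
    heap = [(-prio, est, name) for name, est, prio in jobs]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, name = heapq.heappop(heap)
        order.append(name)
    return order
```

In a cycle-harvesting setting this kind of ordering reduces the chance that a long job is evicted half-done when a desktop user returns, one of the trade-offs the abstract alludes to.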
Lee, Changju; So, Jaehyun Jason; Ma, Jiaqi
2018-01-02
The conflicts among motorists entering a signalized intersection with the red light indication have become a national safety issue. Because of its sensitivity, efforts have been made to investigate the possible causes and the effectiveness of countermeasures using comparison sites and/or before-and-after studies. Nevertheless, these approaches are ineffective when comparison sites cannot be found, or crash data sets are not readily available or not reliable for statistical analysis. Considering the random nature of red light running (RLR) crashes, an inventive approach regardless of data availability is necessary to evaluate the effectiveness of each countermeasure head to head. The aims of this research are to (1) review prior literature related to red light running and traffic safety models; (2) propose a practical methodology for evaluation of RLR countermeasures with a microscopic traffic simulation model and the surrogate safety assessment model (SSAM); (3) apply the proposed methodology to an actual signalized intersection in Virginia, with the most prevalent scenarios: increasing the yellow signal interval duration, installing an advance warning sign, and installing an RLR camera; and (4) analyze the relative effectiveness by RLR frequency and the number of conflicts (rear-end and crossing). All scenarios show a reduction in RLR frequency (-7.8, -45.5, and -52.4%, respectively), but only increasing the yellow signal interval duration results in a reduced total number of conflicts (-11.3%; a surrogate safety measure of possible RLR-related crashes). An RLR camera makes the greatest reduction (-60.9%) in crossing conflicts (a surrogate safety measure of possible angle crashes), whereas increasing the yellow signal interval duration results in only a 12.8% reduction of rear-end conflicts (a surrogate safety measure of possible rear-end crashes). Although increasing the yellow signal interval duration is advantageous because this reduces the total conflicts (a possibility of total
Simulation in Complex Modelling
DEFF Research Database (Denmark)
Nicholas, Paul; Ramsgaard Thomsen, Mette; Tamke, Martin
2017-01-01
This paper will discuss the role of simulation in extended architectural design modelling. As a framing paper, the aim is to present and discuss the role of integrated design simulation and feedback between design and simulation in a series of projects under the Complex Modelling framework. Complex...... performance, engage with high degrees of interdependency and allow the emergence of design agency and feedback between the multiple scales of architectural construction. This paper presents examples for integrated design simulation from a series of projects including Lace Wall, A Bridge Too Far and Inflated...... Restraint developed for the research exhibition Complex Modelling, Meldahls Smedie Gallery, Copenhagen in 2016. Where the direct project aims and outcomes have been reported elsewhere, the aim for this paper is to discuss overarching strategies for working with design integrated simulation....
Scientific Modeling and simulations
Diaz de la Rubia, Tomás
2009-01-01
Showcases the conceptual advantages of modeling which, coupled with the unprecedented computing power available through simulations, allow scientists to tackle the formidable problems of our society, such as the search for hydrocarbons, understanding the structure of a virus, or the intersection between simulations and real data in extreme environments
Computer Modeling and Simulation
Energy Technology Data Exchange (ETDEWEB)
Pronskikh, V. S. [Fermilab
2014-05-09
Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance and have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to a model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inference) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviating the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.
Automated Simulation Model Generation
Huang, Y.
2013-01-01
One of today's challenges in the field of modeling and simulation is to model increasingly larger and more complex systems. Complex models take long to develop and incur high costs. With the advances in data collection technologies and more popular use of computer-aided systems, more data has become
Reale, Oreste; Achuthavarier, Deepthi; Fuentes, Marangelly; Putman, William M.; Partyka, Gary
2018-01-01
The National Aeronautics and Space Administration (NASA) Nature Run (NR), released for use in Observing System Simulation Experiments (OSSEs), is a 2-year-long global non-hydrostatic free-running simulation at a horizontal resolution of 7 km, forced by observed sea-surface temperatures (SSTs) and sea ice, and inclusive of interactive aerosols and trace gases. This article evaluates the NR with respect to tropical cyclone (TC) activity. It is emphasized that to serve as a NR, a long-term simulation must be able to produce realistic TCs, which arise out of realistic large-scale forcings. The presence in the NR of the realistic, relevant dynamical features over the African Monsoon region and the tropical Atlantic is confirmed, along with realistic African Easterly Wave activity. The NR Atlantic TC seasons, produced with 2005 and 2006 SSTs, show interannual variability consistent with observations, with much stronger activity in 2005. An investigation of TC activity over all the other basins (eastern and western North Pacific, North and South Indian Ocean, and Australian region), together with relevant elements of the atmospheric circulation, such as the Somali Jet and westerly bursts, reveals that the model captures the fundamental aspects of TC seasons in every basin, producing realistic numbers of TCs with realistic tracks, life spans and structures. This confirms that the NASA NR is a very suitable tool for OSSEs targeting TCs and represents an improvement with respect to previous long simulations that have served the global atmospheric OSSE community. PMID:29674806
Building and Running the Yucca Mountain Total System Performance Model in a Quality Environment
International Nuclear Information System (INIS)
D.A. Kalinich; K.P. Lee; J.A. McNeish
2005-01-01
A Total System Performance Assessment (TSPA) model has been developed to support the Safety Analysis Report (SAR) for the Yucca Mountain High-Level Waste Repository. The TSPA model forecasts repository performance over a 20,000-year simulation period. It has a high degree of complexity due to the complexity of its underlying process and abstraction models. This is reflected in the size of the model (a 27,000-element GoldSim file), its use of dynamic-linked libraries (14 DLLs), the number and size of its input files (659 files totaling 4.7 GB), and the number of model input parameters (2541 input database entries). TSPA model development and subsequent simulations with the final version of the model were performed in accordance with a set of Quality Assurance (QA) procedures. Due to the complexity of the model, comments on previous TSPAs, and the number of analysts involved (22 analysts in seven cities across four time zones), additional controls for the entire life-cycle of the TSPA model, including management, physical, model-change, and input controls, were developed and documented. These controls did not replace the QA procedures; rather, they provided guidance for implementing the requirements of the QA procedures with the specific intent of ensuring that the model development process and the simulations performed with the final version of the model had sufficient checking, traceability, and transparency. Management controls were developed to ensure that only management-approved changes were implemented into the TSPA model and that only management-approved model runs were performed. Physical controls were developed to track the use of prototype software and preliminary input files, and to ensure that only qualified software and inputs were used in the final version of the TSPA model. In addition, a system was developed to name, file, and track development versions of the TSPA model as well as simulations performed with the final version of the model
2013 CEF RUN - PHASE 1 DATA ANALYSIS AND MODEL VALIDATION
Energy Technology Data Exchange (ETDEWEB)
Choi, A.
2014-05-08
Phase 1 of the 2013 Cold cap Evaluation Furnace (CEF) test was completed on June 3, 2013 after a 5-day round-the-clock feeding and pouring operation. The main goal of the test was to characterize the CEF off-gas produced from a nitric-formic acid flowsheet feed and confirm whether the CEF platform is capable of producing scalable off-gas data necessary for the revision of the DWPF melter off-gas flammability model; the revised model will be used to define new safety controls on the key operating parameters for the nitric-glycolic acid flowsheet feeds including total organic carbon (TOC). Whether the CEF off-gas data were scalable for the purpose of predicting the potential flammability of the DWPF melter exhaust was determined by comparing the predicted H₂ and CO concentrations using the current DWPF melter off-gas flammability model to those measured during Phase 1; data were deemed scalable if the calculated fractional conversions of TOC-to-H₂ and TOC-to-CO at varying melter vapor space temperatures were found to trend and further bound the respective measured data with some margin of safety. Being scalable thus means that for a given feed chemistry the instantaneous flow rates of H₂ and CO in the DWPF melter exhaust can be estimated with some degree of conservatism by multiplying those of the respective gases from a pilot-scale melter by the feed rate ratio. This report documents the results of the Phase 1 data analysis and the necessary calculations performed to determine the scalability of the CEF off-gas data. A total of six steady state runs were made during Phase 1 under non-bubbled conditions by varying the CEF vapor space temperature from near 700 to below 300°C, as measured in a thermowell (T_tw). At each steady state temperature, the off-gas composition was monitored continuously for two hours using MS, GC, and FTIR in order to track mainly H₂, CO, CO₂, NOₓ, and organic gases such as CH₄. The standard
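The scaling rule described above, estimating full-scale flammable-gas flow from pilot-scale flow via the feed-rate ratio, amounts to a one-line calculation. The numbers below are hypothetical placeholders for illustration, not measured CEF/DWPF values:

```python
# Hypothetical inputs (illustrative values only, not measured data):
pilot_h2_flow = 0.012   # lb/hr of H2 measured at the pilot-scale melter
pilot_feed_rate = 0.9   # gal/min feed to the pilot-scale melter
dwpf_feed_rate = 1.5    # gal/min feed to the full-scale DWPF melter

# Scalability as described in the abstract: the full-scale H2 flow is
# estimated conservatively by scaling the pilot flow by the feed-rate ratio.
feed_ratio = dwpf_feed_rate / pilot_feed_rate
dwpf_h2_estimate = pilot_h2_flow * feed_ratio
```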
Dynamical system approach to running Λ cosmological models
International Nuclear Information System (INIS)
Stachowski, Aleksander; Szydlowski, Marek
2016-01-01
We study the dynamics of cosmological models with a time-dependent cosmological term. We consider five classes of models: two with a non-covariant parametrization of the cosmological term Λ, namely Λ(H)CDM and Λ(a)CDM cosmologies, and three with a covariant parametrization of Λ: Λ(R)CDM cosmologies, where R(t) is the Ricci scalar; Λ(φ)-cosmologies with diffusion; and Λ(X)-cosmologies, where X = (1/2) g^{αβ} ∇_α φ ∇_β φ is the kinetic part of the density of the scalar field. We also consider the case of an emergent Λ(a) relation obtained from the behaviour of trajectories in a neighbourhood of an invariant submanifold. In the study of the dynamics we used dynamical system methods for investigating how an evolutionary scenario can depend on the choice of special initial conditions. We show that the methods of dynamical systems allow one to investigate all admissible solutions of a running Λ cosmology for all initial conditions. We interpret Alcaniz and Lima's approach as a scaling cosmology. We formulate the idea of an emergent cosmological term derived directly from an approximation of the exact dynamics. We show that some non-covariant parametrizations of the cosmological term, like Λ(a) and Λ(H), give rise to non-physical behaviour of trajectories in the phase space. This behaviour disappears if the term Λ(a) is emergent from the covariant parametrization. (orig.)
The Run 2 ATLAS Analysis Event Data Model
SNYDER, S; The ATLAS collaboration; NOWAK, M; EIFERT, T; BUCKLEY, A; ELSING, M; GILLBERG, D; MOYSE, E; KOENEKE, K; KRASZNAHORKAY, A
2014-01-01
During the LHC's first Long Shutdown (LS1) ATLAS set out to establish a new analysis model, based on the experience gained during Run 1. A key component of this is a new Event Data Model (EDM), called the xAOD. This format, which is now in production, provides the following features: A separation of the EDM into interface classes that the user code directly interacts with, and data storage classes that hold the payload data. The user sees an Array of Structs (AoS) interface, while the data is stored in a Struct of Arrays (SoA) format in memory, thus making it possible to efficiently auto-vectorise reconstruction code. A simple way of augmenting and reducing the information saved for different data objects. This makes it possible to easily decorate objects with new properties during data analysis, and to remove properties that the analysis does not need. A persistent file format that can be explored directly with ROOT, either with or without loading any additional libraries. This allows fast interactive naviga...
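The split between an array-of-structs interface and struct-of-arrays storage can be sketched in a few lines. This is an illustrative Python analogue of the idea, not the actual ATLAS xAOD classes; the container and attribute names are invented:

```python
import numpy as np

class ParticleContainer:
    """Toy xAOD-style container: payload stored as struct-of-arrays
    (one contiguous array per attribute, friendly to vectorisation),
    exposed to the user as array-of-structs (one object at a time)."""
    def __init__(self, pt, eta, phi):
        # SoA storage: each attribute is a whole column
        self.store = {"pt": np.asarray(pt, float),
                      "eta": np.asarray(eta, float),
                      "phi": np.asarray(phi, float)}

    def decorate(self, name, values):
        # Augmenting objects with a new property is just adding a column
        self.store[name] = np.asarray(values, float)

    def __len__(self):
        return len(self.store["pt"])

    def __getitem__(self, i):
        # AoS interface: a lightweight proxy over row i of every column
        container = self
        class Proxy:
            def __getattr__(self, attr):
                return container.store[attr][i]
        return Proxy()

particles = ParticleContainer(pt=[45.0, 23.5], eta=[0.1, -1.2], phi=[2.0, 0.3])
particles.decorate("isolation", [0.02, 0.15])
```

Dropping a column models the reduction side: properties an analysis does not need simply never enter (or are deleted from) the storage dictionary.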
Nonhydrostatic and surfbeat model predictions of extreme wave run-up in fringing reef environments
Lashley, Christopher H.; Roelvink, Dano; van Dongeren, Ap R.; Buckley, Mark L.; Lowe, Ryan J.
2018-01-01
The accurate prediction of extreme wave run-up is important for effective coastal engineering design and coastal hazard management. While run-up processes on open sandy coasts have been reasonably well studied, very few studies have focused on understanding and predicting wave run-up at coral reef-fronted coastlines. This paper applies the short-wave resolving, Nonhydrostatic (XB-NH) and short-wave averaged, Surfbeat (XB-SB) modes of the XBeach numerical model to validate run-up using data from two 1D (alongshore uniform) fringing-reef profiles without roughness elements, with two objectives: i) to provide insight into the physical processes governing run-up in such environments; and ii) to evaluate the performance of both modes in accurately predicting run-up over a wide range of conditions. XBeach was calibrated by optimizing the maximum wave steepness parameter (maxbrsteep) in XB-NH and the dissipation coefficient (alpha) in XB-SB using the first dataset, and then applied to the second dataset for validation. XB-NH and XB-SB predictions of extreme wave run-up (Rmax and R2%) and its components, infragravity- and sea-swell band swash (SIG and SSS) and shoreline setup, were compared to observations. XB-NH more accurately simulated wave transformation but under-predicted shoreline setup due to its exclusion of parameterized wave-roller dynamics. XB-SB under-predicted sea-swell band swash but overestimated shoreline setup due to an over-prediction of wave heights on the reef flat. Run-up (swash) spectra were dominated by infragravity motions, allowing the short-wave (but not wave group) averaged model (XB-SB) to perform comparably well to its more complete, short-wave resolving (XB-NH) counterpart. Despite their respective limitations, both modes were able to accurately predict Rmax and R2%.
Validation of simulation models
DEFF Research Database (Denmark)
Rehman, Muniza; Pedersen, Stig Andur
2012-01-01
In philosophy of science, interest in computational models and simulations has increased considerably during the past decades. Different positions regarding the validity of models have emerged, but these views have not succeeded in capturing the diversity of validation methods. The wide variety...
Directory of Open Access Journals (Sweden)
Shuting Wan
2015-06-01
Natural wind is stochastic, with speed and direction that change randomly and frequently. Because of lag in the control system and in the yaw body itself, wind turbines cannot be accurately aligned with the wind direction when the wind speed and direction change frequently. Thus, wind turbines often suffer from a series of engineering issues during operation, including frequent yawing, vibration overruns and downtime. This paper aims to study the effects of yaw error on wind turbine running characteristics at different wind speeds and control stages by establishing a wind turbine model, a yaw error model and an equivalent wind speed model that includes wind shear and tower shadow effects. Formulas for the relevant effect coefficients Tc, Sc and Pc were derived. The simulation results indicate that the effects of yaw error on aerodynamic torque, rotor speed and power output differ between running stages, and that the effect rules for each coefficient are not identical as the yaw error varies. These results may provide theoretical support for optimizing the yaw control strategies for each stage to increase the running stability of wind turbines and the utilization rate of wind energy.
Hamner, Samuel R; Seth, Ajay; Steele, Katherine M; Delp, Scott L
2013-06-21
Recent advances in computational technology have dramatically increased the use of muscle-driven simulation to study accelerations produced by muscles during gait. Accelerations computed from muscle-driven simulations are sensitive to the model used to represent contact between the foot and ground. A foot-ground contact model must be able to calculate ground reaction forces and moments that are consistent with experimentally measured ground reaction forces and moments. We show here that a rolling constraint can model foot-ground contact and reproduce measured ground reaction forces and moments in an induced acceleration analysis of muscle-driven simulations of walking, running, and crouch gait. We also illustrate that a point constraint and a weld constraint used to model foot-ground contact in previous studies produce inaccurate reaction moments and lead to contradictory interpretations of muscle function. To enable others to use and test these different constraint types (i.e., rolling, point, and weld constraints) we have included them as part of an induced acceleration analysis in OpenSim, a freely-available biomechanics simulation package. Copyright © 2013 Elsevier Ltd. All rights reserved.
Modeling Run Test Validity: A Meta-Analytic Approach
National Research Council Canada - National Science Library
Vickers, Ross
2002-01-01
.... This study utilized data from 166 samples (N = 5,757) to test the general hypothesis that differences in testing methods could account for the cross-situational variation in validity. Only runs >2 km...
Modelling of Muscle Force Distributions During Barefoot and Shod Running
Directory of Open Access Journals (Sweden)
Sinclair Jonathan
2015-09-01
Research interest in barefoot running has expanded considerably in recent years, based around the notion that running without shoes is associated with a reduced incidence of chronic injuries. The aim of the current investigation was to examine the differences in the forces produced by different skeletal muscles during barefoot and shod running. Fifteen male participants ran at 4.0 m·s⁻¹ (± 5%). Kinematics were measured using an eight-camera motion analysis system alongside ground reaction force parameters. Differences in sagittal plane kinematics and muscle forces between footwear conditions were examined using repeated measures or Friedman's ANOVA. The kinematic analysis showed that the shod condition was associated with significantly more hip flexion, whilst barefoot running was linked with significantly more flexion at the knee and plantarflexion at the ankle. The examination of muscle kinetics indicated that peak forces from Rectus femoris, Vastus medialis, Vastus lateralis and Tibialis anterior were significantly larger in the shod condition, whereas Gastrocnemius forces were significantly larger during barefoot running. These observations provide further insight into the mechanical alterations that runners make when running without shoes. Such findings may also deliver important information to runners regarding their susceptibility to chronic injuries in different footwear conditions.
Anhøj, Jacob; Olesen, Anne Vingaard
2014-01-01
A run chart is a line graph of a measure plotted over time with the median as a horizontal line. The main purpose of the run chart is to identify process improvement or degradation, which may be detected by statistical tests for non-random patterns in the data sequence. We studied the sensitivity to shifts and linear drifts in simulated processes using the shift, crossings and trend rules for detecting non-random variation in run charts. The shift and crossings rules are effective in detecting shifts and drifts in process centre over time while keeping the false signal rate constant around 5% and independent of the number of data points in the chart. The trend rule is virtually useless for detection of linear drift over time, the purpose it was intended for.
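The shift and crossings rules can be implemented in a few lines. The specific limits below (a log2-based bound on the longest run and a binomial lower bound on the number of crossings) are common run-chart conventions assumed here, not necessarily the exact cut-offs used in the study:

```python
import math
import statistics

def run_chart_signals(data):
    """Test a run chart for non-random variation with the shift and
    crossings rules (simplified sketch; the limits are assumptions)."""
    med = statistics.median(data)
    # "useful" observations: points not falling exactly on the median
    useful = [x for x in data if x != med]
    n = len(useful)
    sides = [x > med for x in useful]
    # Shift rule: an unusually long run on one side of the median
    longest = run = 1
    for a, b in zip(sides, sides[1:]):
        run = run + 1 if a == b else 1
        longest = max(longest, run)
    shift = longest >= round(math.log2(n) + 3)
    # Crossings rule: unusually few crossings of the median, judged
    # against the lower 5th percentile of Binomial(n - 1, 1/2)
    crossings = sum(a != b for a, b in zip(sides, sides[1:]))
    cdf, c = 0.0, -1
    while cdf < 0.05:
        c += 1
        cdf += math.comb(n - 1, c) * 0.5 ** (n - 1)
    return shift, crossings < c
```

A sustained shift in the process centre trips both rules, while a random sequence should leave both quiet roughly 95% of the time, consistent with the constant false-signal rate described in the abstract.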
Sif Gylfadóttir, Sigríður; Kim, Jihwan; Kristinn Helgason, Jón; Brynjólfsson, Sveinn; Höskuldsson, Ármann; Jóhannesson, Tómas; Bonnevie Harbitz, Carl; Løvholt, Finn
2016-04-01
The Askja central volcano is located in the Northern Volcanic Zone of Iceland. Within the main caldera an inner caldera was formed in an eruption in 1875, and over the next 40 years it gradually subsided and filled with water, forming Lake Askja. A large rockslide was released from the southeast margin of the inner caldera into Lake Askja on 21 July 2014. The release zone was located from 150 m to 350 m above the water level and measured 800 m across. The volume of the rockslide is estimated to have been 15-30 million m3, of which 10.5 million m3 was deposited in the lake, raising the water level by almost a meter. The rockslide caused a large tsunami that traveled across the lake and inundated the shores around the entire lake after 1-2 minutes. The vertical run-up varied typically between 10-40 m, but in some locations close to the impact area it ranged up to 70 m. Lake Askja is a popular destination visited by tens of thousands of tourists every year, but as luck would have it, the event occurred near midnight when no one was in the area. Field surveys conducted in the months following the event resulted in an extensive dataset. The dataset contains, e.g., maximum inundation, a high-resolution digital elevation model of the entire inner caldera, and high-resolution bathymetry of the lake displaying the landslide deposits. Using these data, a numerical model of the Lake Askja landslide and tsunami was developed using GeoClaw, a software package for numerical analysis of geophysical flow problems. Both the shallow-water version and an extension of GeoClaw that includes dispersion were employed to simulate the wave generation, propagation, and run-up due to the rockslide plunging into the lake. The rockslide was modeled as a block that was allowed to stretch during run-out after entering the lake. An optimization approach was adopted to constrain the landslide parameters through inverse modeling by comparing the calculated inundation with the observed run-up
Energy Technology Data Exchange (ETDEWEB)
Hofbauer, Gerhard [ALPINE-ENERGIE Oesterreich GmbH, Linz (Austria); Hofbauer, Werner
2009-07-01
Using the software FLTG, all planning steps for overhead contact lines can be carried out based on the parameters of the contact line type and the line data. Contact line supports and individual spans are presented graphically. The geometric interaction of pantograph and contact line can be simulated taking into account the pantograph type, its sway and the wind action. Thus, the suitability of a line for the interoperability of the trans-European rail system can be demonstrated. (orig.)
International Nuclear Information System (INIS)
Lee, M.J.; Sheppard, J.C.; Sullenberger, M.; Woodley, M.D.
1983-09-01
On-line mathematical models have been used successfully for computer controlled operation of SPEAR and PEP. The same model control concept is being implemented for the operation of the LINAC and for the Damping Ring, which will be part of the Stanford Linear Collider (SLC). The purpose of this paper is to describe the general relationships between models, simulations and the control system for any machine at SLAC. The work we have done on the development of the empirical model for the Damping Ring will be presented as an example
PSH Transient Simulation Modeling
Energy Technology Data Exchange (ETDEWEB)
Muljadi, Eduard [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2017-12-21
PSH Transient Simulation Modeling presentation from the WPTO FY14 - FY16 Peer Review. Transient effects are an important consideration when designing a PSH system, yet numerical techniques for hydraulic transient analysis still need improvements for adjustable-speed (AS) reversible pump-turbine applications.
Short Run and Long Run Reasons That Affect The Nauri Model in Turkey
Baydur, Cem Mehmet; Süslü, Bora
2015-01-01
In this article, the Nauri model is examined in light of Keynesian and Classical doctrine. According to the Classical doctrine, while the ratio of technology and mark-up is taken as given, the equilibrium level of unemployment is determined by the share of total wages that the employees receive. An aggressive wage policy increases the natural unemployment level. According to Nauri, in situations where inflation expectations are correct, the natural unemployment level is determi...
Regional model simulations of New Zealand climate
Renwick, James A.; Katzfey, Jack J.; Nguyen, Kim C.; McGregor, John L.
1998-03-01
Simulation of New Zealand climate is examined through the use of a regional climate model nested within the output of the Commonwealth Scientific and Industrial Research Organisation nine-level general circulation model (GCM). R21 resolution GCM output is used to drive a regional model run at 125 km grid spacing over the Australasian region. The 125 km run is used in turn to drive a simulation at 50 km resolution over New Zealand. Simulations with a full seasonal cycle are performed for 10 model years. The focus is on the quality of the simulation of present-day climate, but results of a doubled-CO2 run are discussed briefly. Spatial patterns of mean simulated precipitation and surface temperatures improve markedly as horizontal resolution is increased, through the better resolution of the country's orography. However, increased horizontal resolution leads to a positive bias in precipitation. At 50 km resolution, simulated frequency distributions of daily maximum/minimum temperatures are statistically similar to those of observations at many stations, while frequency distributions of daily precipitation appear to be statistically different to those of observations at most stations. Modeled daily precipitation variability at 125 km resolution is considerably less than observed, but is comparable to, or exceeds, observed variability at 50 km resolution. The sensitivity of the simulated climate to changes in the specification of the land surface is discussed briefly. Spatial patterns of the frequency of extreme temperatures and precipitation are generally well modeled. Under a doubling of CO2, the frequency of precipitation extremes changes only slightly at most locations, while air frosts become virtually unknown except at high-elevation sites.
DEFF Research Database (Denmark)
Larsen, Gunner Chr.; Madsen Aagaard, Helge; Larsen, Torben J.
We present a consistent, physically based theory for the wake meandering phenomenon, which we consider of crucial importance for the overall description of wind turbine loadings in wind farms. In its present version the model is confined to single wake situations. The model philosophy does, howev...... methodology has been implemented in the aeroelastic code HAWC2, and example simulations of wake situations, from the small Tjæreborg wind farm, have been performed showing satisfactory agreement between predictions and measurements...
Short-run and Current Analysis Model in Statistics
Directory of Open Access Journals (Sweden)
Constantin Anghelache
2006-01-01
Using short-run statistical indicators is a compulsory requirement of current analysis. Therefore, a system of EUROSTAT short-run indicators has been set up in this respect and is recommended for use by the member countries. On the basis of these indicators, regular, usually monthly, analyses are carried out in respect of: the determination of production dynamics; the evaluation of the short-run investment volume; the development of the turnover; the wage evolution; employment; the price indexes and the consumer price index (inflation); and the volume of exports and imports, the extent to which imports are covered by exports, and the trade balance. The EUROSTAT system of conjuncture indicators is conceived as an open system, so that it can at any moment be extended or restricted, allowing indicators to be amended or even removed, depending on domestic users' requirements as well as on the specific requirements of harmonization and integration. For short-run analysis there is also the World Bank system of conjuncture indicators, which relies on the data sources offered by the World Bank, the World Resources Institute and the statistics of other international organizations. The system comprises indicators of social and economic development and focuses on indicators for the following three fields: human resources, environment and economic performance. At the end of the paper, there is a case study on the situation of Romania, for which all these indicators were used.
Short-run and Current Analysis Model in Statistics
Directory of Open Access Journals (Sweden)
Constantin Mitrut
2006-03-01
International Nuclear Information System (INIS)
Andrieu, B.; Ban, J.; Barrelet, E.; Bergstein, H.; Bernardi, G.; Besancon, M.; Binder, E.; Blume, H.; Borras, K.; Boudry, V.; Brasse, F.; Braunschweig, W.; Brisson, V.; Campbell, A.J.; Carli, T.; Colombo, M.; Coutures, C.; Cozzika, G.; David, M.; Delcourt, B.; DelBuono, L.; Devel, M.; Dingus, P.; Drescher, A.; Duboc, J.; Duenger, O.; Ebbinghaus, R.; Egli, S.; Ellis, N.N.; Feltesse, J.; Feng, Y.; Ferrarotto, F.; Flauger, W.; Flieser, M.; Gamerdinger, K.; Gayler, J.; Godfrey, L.; Goerlich, L.; Goldberg, M.; Graessler, R.; Greenshaw, T.; Greif, H.; Haguenauer, M.; Hajduk, L.; Hamon, O.; Hartz, P.; Haustein, V.; Haydar, R.; Hildesheim, W.; Huot, N.; Jabiol, M.A.; Jacholkowska, A.; Jaffre, M.; Jung, H.; Just, F.; Kiesling, C.; Kirchhoff, T.; Kole, F.; Korbel, V.; Korn, M.; Krasny, W.; Kubenka, J.P.; Kuester, H.; Kurzhoefer, J.; Kuznik, B.; Lander, R.; Laporte, J.F.; Lenhardt, U.; Loch, P.; Lueers, D.; Marks, J.; Martyniak, J.; Merz, T.; Naroska, B.; Nau, A.; Nguyen, H.K.; Niebergall, F.; Oberlack, H.; Obrock, U.; Ould-Saada, F.; Pascaud, C.; Pyo, H.B.; Rauschnabel, K.; Ribarics, P.; Rietz, M.; Royon, C.; Rusinov, V.; Sahlmann, N.; Sanchez, E.; Schacht, P.; Schleper, P.; Schlippe, W. von; Schmidt, C.; Schmidt, D.; Shekelyan, V.; Shooshtari, H.; Sirois, Y.; Staroba, P.; Steenbock, M.; Steiner, H.; Stella, B.; Straumann, U.; Turnau, J.; Tutas, J.; Urban, L.; Vallee, C.; Vecko, M.; Verrecchia, P.; Villet, G.; Vogel, E.; Wagener, A.; Wegener, D.; Wegner, A.; Wellisch, H.P.; Yiou, T.P.; Zacek, J.; Zeitnitz, Ch.; Zomer, F.
1993-01-01
We present results on calibration runs performed with pions at CERN SPS for different modules of the H1 liquid argon calorimeter which consists of an electromagnetic section with lead absorbers and a hadronic section with steel absorbers. The data cover an energy range from 3.7 to 205 GeV. Detailed comparisons of the data and simulation with GHEISHA 8 in the framework of GEANT 3.14 are presented. The measured pion induced shower profiles are well described by the simulation. The total signal of pions on an energy scale determined from electron measurements is reproduced to better than 3% in various module configurations. After application of weighting functions, determined from Monte Carlo data and needed to achieve compensation, the reconstructed measured energies agree with simulation to about 3%. The energies of hadronic showers are reconstructed with a resolution of about 50%/√E + 2%. This result is achieved by inclusion of signals from an iron streamer tube tail catcher behind the liquid argon stacks. (orig.)
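The quoted stochastic-plus-constant resolution can be evaluated directly; this helper simply restates the formula from the abstract, with E in GeV:

```python
import math

def hadronic_energy_resolution(energy_gev):
    """Fractional energy resolution sigma_E / E for hadronic showers,
    using the form quoted above: sigma_E / E = 50% / sqrt(E) + 2%."""
    return 0.50 / math.sqrt(energy_gev) + 0.02
```

At 100 GeV this gives a 7% fractional resolution, with the 2% constant term dominating as the energy grows.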
Abad, Cesar C C; Barros, Ronaldo V; Bertuzzi, Romulo; Gagliardi, João F L; Lima-Silva, Adriano E; Lambert, Mike I; Pires, Flavio O
2016-06-01
The aim of this study was to verify the power of VO2max, peak treadmill running velocity (PTV), and running economy (RE), unadjusted or allometrically adjusted, in predicting 10 km running performance. Eighteen male endurance runners performed: 1) an incremental test to exhaustion to determine VO2max and PTV; 2) a constant submaximal run at 12 km·h⁻¹ on an outdoor track for RE determination; and 3) a 10 km running race. Unadjusted (VO2max, PTV and RE) and adjusted variables (VO2max^0.72, PTV^0.72 and RE^0.60) were investigated through independent multiple regression models to predict 10 km running race time. There were no significant correlations between 10 km running time and either the adjusted or unadjusted VO2max. Significant correlations (p < 0.05) were found for the remaining variables, with correlations > 0.84 and power > 0.88. The allometrically adjusted predictive model was composed of PTV^0.72 and RE^0.60 and explained 83% of the variance in 10 km running time with a standard error of the estimate (SEE) of 1.5 min. The unadjusted model, composed of PTV alone, accounted for 72% of the variance in 10 km running time (SEE of 1.9 min). Both regression models provided powerful estimates of 10 km running time; however, the unadjusted PTV may provide an uncomplicated estimation.
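The two competing regression models can be sketched with ordinary least squares. The runner data below are invented for illustration; only the exponents 0.72 and 0.60 are taken from the abstract, and this is a generic sketch, not the authors' exact procedure:

```python
import numpy as np

# Hypothetical data for six runners (illustrative values only):
# peak treadmill velocity (km/h), running economy, 10 km time (min)
ptv = np.array([17.5, 18.2, 19.0, 16.8, 20.1, 17.9])
re = np.array([205.0, 198.0, 190.0, 210.0, 185.0, 202.0])
time_10k = np.array([41.0, 39.5, 37.8, 43.2, 35.9, 40.3])

def fit_predictors(x_cols, y):
    """Ordinary least squares with an intercept; returns the
    coefficient vector and the R^2 of the fit."""
    X = np.column_stack([np.ones_like(y)] + list(x_cols))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1.0 - resid.var() / y.var()
    return beta, r2

# Allometrically adjusted model: PTV^0.72 and RE^0.60 as predictors
beta_adj, r2_adj = fit_predictors([ptv ** 0.72, re ** 0.60], time_10k)
# Unadjusted single-predictor model: PTV alone
beta_ptv, r2_ptv = fit_predictors([ptv], time_10k)
```

The comparison in the abstract is then simply between `r2_adj` and `r2_ptv` (0.83 vs 0.72 on the study's real data).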
Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond
International Nuclear Information System (INIS)
Bonacorsi, D; Neri, M; Boccali, T; Giordano, D; Girone, M; Magini, N; Kuznetsov, V; Wildish, T
2015-01-01
During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collisions data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the WorldWide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulation of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This is of great importance, given that the scale of the computing problem will increase far faster than the resources available to the experiments, for Run-2 and beyond. Studying data-access patterns involves the validation of the quality of the monitoring data collected on the "popularity" of each dataset, the analysis of the frequency and pattern of accesses to different datasets by analysis end-users, the exploration of different views of the popularity data (by physics activity, by region, by data type), the study of the evolution of Run-1 data exploitation over time, the evaluation of the impact of different data placement and distribution choices on the available network and storage resources and their impact on the computing operations. This work presents some insights from studies on the popularity data from the CMS experiment. We present the properties of a range of physics analysis activities as seen by the data popularity, and make recommendations for
Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond
Bonacorsi, D.; Boccali, T.; Giordano, D.; Girone, M.; Neri, M.; Magini, N.; Kuznetsov, V.; Wildish, T.
2015-12-01
Hoerning, Sebastian; Bardossy, Andras; du Plessis, Jaco
2017-04-01
Most geostatistical inverse groundwater flow and transport modelling approaches use a numerical solver to minimize the discrepancy between observed and simulated hydraulic heads and/or concentration values. The optimization procedure often requires many model runs, which for complex models leads to long run times. Random Mixing is a promising new geostatistical technique for inverse modelling. The method is an extension of the gradual deformation approach. It works by finding a field which preserves the covariance structure and reproduces the observed hydraulic conductivities. This field is then perturbed by mixing it with new fields that fulfill the homogeneous conditions. The mixing is expressed as an optimization problem which aims to minimize the difference between the observed and simulated hydraulic heads and/or concentration values. To preserve the spatial structure, the mixing weights must lie on the unit hyper-sphere. We present a modification to the Random Mixing algorithm which significantly reduces the number of model runs required. The approach involves taking n equally spaced points on the unit circle as weights for mixing conditional random fields. Each of these mixtures provides a solution to the forward model at the conditioning locations. For each of the locations, the solutions are then interpolated around the circle to provide solutions for additional mixing weights at very low computational cost. The interpolated solutions are used to search for a mixture which maximally reduces the objective function. This is in contrast to other approaches, which evaluate the objective function for the n mixtures and then interpolate the obtained values. Keeping the mixture on the unit circle makes it easy to generate equidistant sampling points in the space; however, this means that only two fields are mixed at a time. Once the optimal mixture of two fields has been found, they are combined to form the input to the next iteration of the algorithm. This
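A minimal numpy sketch of the interpolation idea described above, under the simplifying assumptions that two fields are mixed as cos(θ)·f1 + sin(θ)·f2 and that the forward model is linear (both are stand-ins for the real groundwater solver and fields):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two conditional random fields (stand-ins; in practice these honour the
# covariance structure and the conductivity observations).
f1, f2 = rng.standard_normal(50), rng.standard_normal(50)

# Hypothetical linear forward model: heads at 5 conditioning locations.
A = rng.standard_normal((5, 50)) * 0.1
observed = rng.standard_normal(5)

def forward(theta):
    """Forward-model response of the unit-circle mixture cos(t)*f1 + sin(t)*f2."""
    return A @ (np.cos(theta) * f1 + np.sin(theta) * f2)

# Run the forward model only at n equally spaced angles on the unit circle ...
n = 8
angles = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
samples = np.array([forward(t) for t in angles])          # shape (n, 5)

# ... then interpolate each location's solution around the circle.
fine = np.linspace(0.0, 2 * np.pi, 720, endpoint=False)
interp = np.array([
    np.interp(fine, np.append(angles, 2 * np.pi), np.append(col, col[0]))
    for col in samples.T
])                                                        # shape (5, 720)

# Evaluate the objective on the interpolated solutions, not the other way
# round, and pick the mixing angle that reduces it the most.
objective = ((interp - observed[:, None]) ** 2).sum(axis=0)
best_theta = fine[np.argmin(objective)]
```

Because the forward model is only run at the n sampled angles, candidate mixing weights on the fine grid come essentially for free; as in the abstract, it is the solutions that are interpolated, and the objective is then evaluated on them.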
Qi, Nathan R.
2018-01-01
High-capacity running (HCR) and low-capacity running (LCR) rats have been bred to represent two extremes of running endurance and have recently demonstrated disparities in fuel usage during transient aerobic exercise. HCR rats can maintain fatty acid (FA) utilization throughout the course of transient aerobic exercise, whereas LCR rats rely predominantly on glucose utilization. We hypothesized that the difference between HCR and LCR fuel utilization could be explained by a difference in mitochondrial density. To test this hypothesis and to investigate mechanisms of fuel selection, we used a constraint-based kinetic analysis of whole-body metabolism to analyze transient exercise data from these rats. Our model analysis used a thermodynamically constrained kinetic framework that accounts for glycolysis, the TCA cycle, and mitochondrial FA transport and oxidation. The model can effectively match the observed relative rates of oxidation of glucose versus FA as a function of ATP demand. In searching for the minimal differences required to explain metabolic function in HCR versus LCR rats, it was determined that the whole-body metabolic phenotype of LCR rats, compared to HCR rats, could be explained by a ~50% reduction in total mitochondrial activity with an additional 5-fold reduction in mitochondrial FA transport activity. Finally, we postulate that, over sustained periods of exercise, LCR rats can partly overcome the initial deficit in FA catabolic activity by upregulating FA transport and/or oxidation processes. PMID:29474500
Simulation - modeling - experiment
International Nuclear Information System (INIS)
2004-01-01
After two workshops held in 2001 on the same topics, and in order to make a status of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop and dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advance status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); methods of disturbances and sensitivity analysis of nuclear data in reactor physics, application to VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F
Desktop Modeling and Simulation: Parsimonious, yet Effective Discrete-Event Simulation Analysis
Bradley, James R.
2012-01-01
This paper evaluates how quickly students can be trained to construct useful discrete-event simulation models using Excel. The typical supply chain used by many large national retailers is described, and an Excel-based simulation model of it is constructed. The set of programming and simulation skills required to develop that model is then determined; we conclude that six hours of training are required to teach these skills to MBA students. The simulation presented here contains all the fundamental functionality of a simulation model, and so our result holds for any discrete-event simulation model. We argue, therefore, that industry workers with the same technical skill set as students who have completed one year of an MBA program can be quickly trained to construct simulation models. This result gives credence to the efficacy of Desktop Modeling and Simulation, whereby simulation analyses can be quickly developed, run, and analyzed with widely available software, namely Excel.
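As an illustration of the kind of discrete-event logic such a spreadsheet model encodes, here is a toy single-server queue in Python; the scenario is a hypothetical stand-in for the retailer supply-chain model, with deterministic inputs for clarity:

```python
import heapq
from collections import deque

def single_server_queue(arrivals, service_time):
    """Toy discrete-event simulation of a single-server queue: an event list
    ordered by time drives all state changes."""
    # priority 0 = departure, 1 = arrival, so a departure at time t frees
    # the server before an arrival at the same instant is examined
    events = [(t, 1, i) for i, t in enumerate(arrivals)]
    heapq.heapify(events)
    waiting = deque()          # FIFO queue of (arrival_time, customer_id)
    busy_until = None          # departure time of the customer in service
    waits = {}

    while events:
        t, priority, i = heapq.heappop(events)
        if priority == 1:                      # arrival
            if busy_until is None:
                waits[i] = 0.0                 # server idle: start at once
                busy_until = t + service_time
                heapq.heappush(events, (busy_until, 0, i))
            else:
                waiting.append((t, i))
        else:                                  # departure
            if waiting:
                arrived, j = waiting.popleft()
                waits[j] = t - arrived
                busy_until = t + service_time
                heapq.heappush(events, (busy_until, 0, j))
            else:
                busy_until = None
    return waits

# Customers arrive every 2 time units and need 3 units of service each,
# so waiting times grow linearly: 0, 1, 2, 3, 4.
waits = single_server_queue([0.0, 2.0, 4.0, 6.0, 8.0], 3.0)
```

The same future-event-list pattern, kept as sorted rows of times and formulas, is what an Excel implementation reproduces.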
Energy Technology Data Exchange (ETDEWEB)
Larsen, G.C.; Aagaard Madsen, H.; Larsen, T.J.; Troldborg, N.
2008-07-15
We present a consistent, physically based theory for the wake meandering phenomenon, which we consider of crucial importance for the overall description of wind turbine loadings in wind farms. In its present version the model is confined to single-wake situations, but the model philosophy has the potential to include mutual wake interaction phenomena as well. The basic conjecture behind the dynamic wake meandering (DWM) model is that wake transportation in the atmospheric boundary layer is driven by the large-scale lateral and vertical turbulence components. Based on this conjecture, a stochastic model of the downstream wake meandering is formulated. In addition to the kinematic formulation of the dynamics of the 'meandering frame of reference', models characterizing the mean wake deficit as well as the added wake turbulence, described in the meandering frame of reference, are an integrated part of the DWM model complex. For design applications, the computational efficiency of wake deficit prediction is a key issue, and a computationally low-cost model is developed for this purpose. Likewise, the character of the added wake turbulence, generated by the upstream turbine in the form of shed and trailed vorticity, is approached by a simple semi-empirical model essentially based on an eddy-viscosity philosophy. Contrary to previous attempts to model wake loading, the DWM approach opens for a unifying description in the sense that turbine power and load aspects can be treated simultaneously. This capability is a direct and attractive consequence of the model being based on the underlying physical process, and it potentially opens for optimization of wind farm topology and operation, as well as of control strategies for the individual turbine. To establish an integrated modeling tool, the DWM methodology has been implemented in the aeroelastic code HAWC2, and example simulations of wake situations, from the small Tjaereborg wind farm, have
A Run-Time Verification Framework for Smart Grid Applications Implemented on Simulation Frameworks
Energy Technology Data Exchange (ETDEWEB)
Ciraci, Selim; Sozer, Hasan; Tekinerdogan, Bedir
2013-05-18
Smart grid applications are implemented and tested with simulation frameworks, as the developers usually do not have access to large sensor networks to be used as a test bed. The developers are forced to map the implementation onto these frameworks, which results in a deviation between the architecture and the code. In turn, this deviation makes it hard to verify behavioral constraints that are described at the architectural level. We have developed the ConArch toolset to support the automated verification of architecture-level behavioral constraints. A key feature of ConArch is its programmable mapping from the architecture to the implementation. Here, developers implement queries to identify the points in the target program that correspond to architectural interactions. ConArch generates run-time observers that monitor the flow of execution between these points and verifies whether this flow conforms to the behavioral constraints. We illustrate how the programmable mappings can be exploited for verifying behavioral constraints of a smart grid application that is implemented with two simulation frameworks.
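The observer idea can be sketched as follows. This is a hypothetical Python analogue of a generated run-time observer (the real toolset targets compiled programs and derives observers from architectural queries); the protocol and function names are illustrative, not from the paper:

```python
class Observer:
    """Tracks the flow of execution between mapped program points and flags
    any event the architectural constraint (a state machine) does not allow."""
    def __init__(self, transitions, start="init"):
        self.transitions = transitions    # state -> {event: next_state}
        self.state = start
        self.violations = []

    def notify(self, event):
        nxt = self.transitions.get(self.state, {}).get(event)
        if nxt is None:
            self.violations.append((self.state, event))
        else:
            self.state = nxt

def observed(observer, event):
    """Decorator marking a function as an architectural interaction point
    (the analogue of the query-based mapping)."""
    def wrap(fn):
        def inner(*args, **kwargs):
            observer.notify(event)
            return fn(*args, **kwargs)
        return inner
    return wrap

# Constraint: a meter must connect before sending, and may close only
# while connected.
protocol = {"init": {"connect": "open"},
            "open": {"send": "open", "close": "init"}}
obs = Observer(protocol)

@observed(obs, "connect")
def connect(): pass

@observed(obs, "send")
def send(reading): pass

@observed(obs, "close")
def close(): pass
```

Calling `connect(); send(42); close()` conforms to the constraint; calling `send` while disconnected records a violation without stopping the program.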
da Silva, Arlindo M.; Putman, William; Nattala, J.
2014-01-01
details about variables listed in this file specification can be found in a separate document, the GEOS-5 File Specification Variable Definition Glossary. Documentation about the current access methods for products described in this document can be found on the GEOS-5 Nature Run portal: http://gmao.gsfc.nasa.gov/projects/G5NR. Information on the scientific quality of this simulation will appear in a forthcoming NASA Technical Report Series on Global Modeling and Data Assimilation to be available from http://gmao.gsfc.nasa.gov/pubs/tm/.
Biomolecular modelling and simulations
Karabencheva-Christova, Tatyana
2014-01-01
Published continuously since 1944, the Advances in Protein Chemistry and Structural Biology series is the essential resource for protein chemists. Each volume brings forth new information about protocols and analysis of proteins, and each thematically organized volume is guest-edited by leading experts in a broad range of protein-related topics. This volume describes advances in biomolecular modelling and simulations; its chapters are written by authorities in their field; and it is targeted at a wide audience of researchers, specialists, and students. The information provided in the volume is well supported by a number of high-quality illustrations, figures, and tables.
Modeling the Frequency of Cyclists’ Red-Light Running Behavior Using Bayesian PG Model and PLN Model
Directory of Open Access Journals (Sweden)
Yao Wu
2016-01-01
Red-light running behaviors of bicycles at signalized intersections lead to a large number of traffic conflicts and high collision potential. The primary objective of this study is to model cyclists' red-light running frequency within the framework of Bayesian statistics. Data were collected at twenty-five approaches at seventeen signalized intersections. Poisson-gamma (PG) and Poisson-lognormal (PLN) models were developed and compared. The models were validated using Bayesian p values based on posterior predictive checking indicators. It was found that both models fit the observed cyclists' red-light running frequency well, and that the PLN model outperformed the PG model. The model estimation results showed that the amount of cyclists' red-light running is significantly influenced by bicycle flow, conflicting traffic flow, pedestrian signal type, vehicle speed, and e-bike rate. The validation results demonstrated the reliability of the PLN model. These results can help transportation professionals to predict the expected amount of cyclists' red-light running and to develop effective guidelines or policies to reduce the red-light running frequency of bicycles at signalized intersections.
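The hierarchical structure of the PG model can be sketched by simulation: each site has a latent gamma-distributed rate, and counts are Poisson given that rate, which produces the overdispersion such count models are built to capture. The parameter values below are illustrative, not estimates from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Poisson-gamma (negative binomial) data-generating sketch.
shape, scale, n_sites = 2.0, 2.0, 10_000
rates = rng.gamma(shape, scale, n_sites)   # latent red-light-running rate per site
counts = rng.poisson(rates)                # observed violation counts

mean, var = counts.mean(), counts.var()
# The PG model implies overdispersion: Var = mean + mean^2/shape > mean,
# here 4 + 16/2 = 12 against a mean of 4.
```

The PLN variant replaces the gamma mixing distribution with a lognormal one; both break the Poisson's mean-equals-variance restriction, which is why they fit crash-like count data better than a plain Poisson model.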
Integration of control and building performance simulation software by run-time coupling
Yahiaoui, A.; Hensen, J.L.M.; Soethout, L.L.
2003-01-01
This paper presents the background, approach and initial results of a project, which aims to achieve better integrated building and systems control modeling in building performance simulation by runtime coupling of distributed computer programs. This paper focuses on one of the essential steps
Validation of the simulator neutronics model
International Nuclear Information System (INIS)
Gregory, M.V.
1984-01-01
The neutronics model in the SRP reactor training simulator computes the variation with time of the neutron population in the reactor core. The power output of a reactor is directly proportional to the neutron population, so in a very real sense the neutronics model determines the response of the simulator. The geometrical complexity of the reactor control system in SRP reactors requires the neutronics model to provide a detailed, 3D representation of the reactor core. Existing simulator technology does not allow such a detailed representation to run in real time in a minicomputer environment, so an entirely different approach to the problem was required. A prompt-jump method has been developed in answer to this need.
Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock
2017-01-01
The suites of numerical models used for simulating the climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach: carrying out climate model simulations in a commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Services (AWS) EC2, the cloud computing environment operated by Amazon.com, Inc. StarCluster is used to create a virtual computing cluster on AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent for the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50%, and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup becomes nearly unchanged.
Energy Technology Data Exchange (ETDEWEB)
Ott, C.
2000-07-01
In run-of-river plants with low discharge and under head control, changes of inflow lead to amplified changes of outflow. In this thesis a frequency-domain-based design procedure is introduced, which allows an inflow-dependent signal to be added to the head controller of conventional combined head and flow controllers. This efficiently minimizes the discharge amplification. The non-linearity of the channel reach is taken into consideration by adapting the settings of the controller to the actual discharge. The development of a time-domain-based program system, taking into account all non-linearities of a run-of-river plant, is described. Using different test functions, the capability of the improved combined head and flow control can be demonstrated. In both the time and the frequency domain it is shown that the quality of control is not influenced to a significant extent by the inevitable inaccuracies in the description of the channel reach and in the measurement of the actual inflow and outflow. (orig.) [Translated from the German original:] This thesis offers a solution to the problem that, at low discharge, head-controlled barrages pass inflow changes on to the downstream reach in amplified form. As the solution, a frequency-domain-based design procedure is presented with which the usual combined head and flow (OW-Q) control can be extended by an inflow-dependent feed-forward signal to the head controller. Together with feeding the inflow forward to the flow controller, this clearly reduces the discharge amplification. The non-linearity of the controlled reach (the impoundment) is taken into account by adapting the controller parameters to the barrage discharge. Furthermore, the development of a program system for the non-linear time-domain simulation of a chain of barrages is described. With it, the capability of the improved OW-Q control can be demonstrated for various load cases. It is shown in the time and frequency domains
Modelling of flexi-coil springs with rubber-metal pads in a locomotive running gear
Directory of Open Access Journals (Sweden)
Michálek T.
2015-06-01
Nowadays, flexi-coil springs are commonly used in the secondary suspension stage of railway vehicles. The lateral stiffness of these springs is influenced by their design parameters (number of coils, height, mean coil diameter, wire diameter, etc.), and it is often desirable to modify this stiffness in such a way that the suspension shows different lateral stiffnesses in different directions (i.e., longitudinally vs. laterally in the vehicle-related coordinate system). Therefore, these springs are often supplemented with some kind of rubber-metal pad. This paper deals with the modelling of flexi-coil springs supplemented with tilting rubber-metal pads applied in the running gear of an electric locomotive, as well as with the consequences of this secondary-suspension solution for the vehicle's running performance. The analysis is performed by means of multi-body simulations, and the description of the lateral stiffness characteristics of the springs is based on experimental measurements of these characteristics performed in the heavy laboratories of the Jan Perner Transport Faculty of the University of Pardubice.
Khowailed, Iman Akef; Petrofsky, Jerrold; Lohman, Everett; Daher, Noha
2015-01-01
Background: The aim of this study was to examine the effects of a 6-week training program of simulated barefoot running (SBR) on running kinetics in habitually shod (shoe-wearing) female recreational runners. Material/Methods: Twelve female runners aged 25.7±3.4 years gradually increased running distance in Vibram FiveFingers minimal shoes over a 6-week period. Kinetic analysis of treadmill running at 10 km/h was performed pre- and post-intervention in shod running, non-habituated SBR, and habituated SBR conditions. Spatiotemporal parameters, ground reaction force components, and electromyography (EMG) were measured in all conditions. Results: Post-intervention data indicated a significant decrease across time in habituated SBR in the EMG activity of the tibialis anterior (TA) in the pre-activation and absorptive phases of running (P<0.05) across shod running, unhabituated SBR, and habituated SBR. Six weeks of SBR was associated with a significant decrease in loading rates and impact forces. Additionally, SBR significantly decreased stride length, step duration, and flight time, while stride frequency was significantly higher compared to shod running. Conclusions: The findings of this study indicate that changes in motor patterns in previously habitually shod runners are possible and can be accomplished within 6 weeks. Non-habituated SBR did not show a significant neuromuscular adaptation in the EMG activity of the TA and gastrocnemius (GAS), as manifested after 6 weeks of habituated SBR. PMID:26166443
Energy Technology Data Exchange (ETDEWEB)
Riddick, Thomas [Univ. College London, Bloomsbury (United Kingdom)
2012-06-15
The calibration of the calorimeter energy scale is vital to measuring the mass of the W boson at CDF Run II. For the second measurement of the W boson mass at CDF Run II, two independent simulations were developed. This thesis presents a detailed description of the modification and validation of Bremsstrahlung and pair production modelling in one of these simulations, UCL Fast Simulation, comparing to both geant4 and real data where appropriate. The total systematic uncertainty on the measurement of the W boson mass in the W → eν_{e} channel from residual inaccuracies in Bremsstrahlung modelling is estimated as 6.2 ± 3.2 MeV/c^{2}, and the total systematic uncertainty from residual inaccuracies in pair production modelling is estimated as 2.8 ± 2.7 MeV/c^{2}. Two independent methods are used to calibrate the calorimeter energy scale in UCL Fast Simulation; the results of these two methods are compared to produce a measurement of the Z boson mass as a cross-check on the accuracy of the simulation.
Statistical Emulation of Climate Model Projections Based on Precomputed GCM Runs*
Castruccio, Stefano; McInerney, David J.; Stein, Michael L.; Liu Crouch, Feifei; Jacob, Robert L.; Moyer, Elisabeth J.
2014-01-01
functions of the past trajectory of atmospheric CO2 concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures
A conceptual framework to model long-run qualitative change in the energy system
Ebersberger, Bernd
2004-01-01
A conceptual framework to model long-run qualitative change in the energy system / A. Pyka, B. Ebersberger, H. Hanusch. - In: Evolution and economic complexity / ed. J. Stanley Metcalfe ... - Cheltenham [u.a.] : Elgar, 2004. - S. 191-213
Advanced training simulator models. Implementation and validation
International Nuclear Information System (INIS)
Borkowsky, Jeffrey; Judd, Jerry; Belblidia, Lotfi; O'farrell, David; Andersen, Peter
2008-01-01
Modern training simulators are required to replicate plant data for both thermal-hydraulic and neutronic response. Replication is required such that reactivity manipulation on the simulator properly trains the operator for reactivity manipulation at the plant. This paper discusses advanced models which perform this function in real time using the coupled code system THOR/S3R. This code system models all the fluid systems in detail using an advanced two-phase thermal-hydraulic model. The nuclear core is modeled using an advanced three-dimensional nodal method together with cycle-specific nuclear data. These models are configured to run interactively from a graphical instructor station or hardware operation panels. The simulator models are theoretically rigorous and are expected to replicate the physics of the plant. However, to verify replication, the models must be independently assessed. Plant data is the preferred validation method, but plant data is often not available for many important training scenarios. In the absence of data, validation may be obtained by slower-than-real-time transient analysis. This analysis can be performed by coupling a safety analysis code and a core design code. Such a coupling exists between the codes RELAP5 and SIMULATE-3K (S3K). RELAP5/S3K is used to validate the real-time model for several postulated plant events. (author)
Running and rotating: modelling the dynamics of migrating cell clusters
Copenhagen, Katherine; Gov, Nir; Gopinathan, Ajay
Collective motion of cells is a common occurrence in many biological systems, including tissue development and repair, and tumor formation. Recent experiments have shown that cells form clusters in a chemical gradient, which display three different phases of motion: translational, rotational, and random. We present a model for cell clusters, based loosely on other models in the literature, that involves a Vicsek-like alignment as well as physical collisions and adhesions between cells. With this model we show that a mechanism for driving rotational motion in this kind of system is an increased motility of rim cells. Further, we examine the details of the relationship between rim and core cells, and find that the phases of the cluster as a whole are correlated with the creation and annihilation of topological defects in the tangential component of the velocity field.
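A minimal sketch of the Vicsek-style alignment step such models build on, in numpy; this is illustrative only, since the authors' cluster model additionally includes collisions, adhesion, and differential rim-cell motility:

```python
import numpy as np

def vicsek_step(pos, theta, v=0.03, radius=1.0, eta=0.0, box=5.0, rng=None):
    """One update of a minimal Vicsek alignment model: each particle adopts
    the mean heading of its neighbours (plus optional angular noise) and
    moves forward at constant speed, with periodic boundaries."""
    n = len(theta)
    new_theta = np.empty(n)
    for i in range(n):
        d = pos - pos[i]
        d -= box * np.round(d / box)                  # minimum-image distances
        nbr = (d ** 2).sum(axis=1) <= radius ** 2     # includes particle i itself
        new_theta[i] = np.arctan2(np.sin(theta[nbr]).mean(),
                                  np.cos(theta[nbr]).mean())
        if eta > 0 and rng is not None:
            new_theta[i] += rng.uniform(-eta / 2, eta / 2)
    new_pos = (pos + v * np.c_[np.cos(new_theta), np.sin(new_theta)]) % box
    return new_pos, new_theta

rng = np.random.default_rng(1)
pos = rng.uniform(0, 5.0, size=(30, 2))
theta = rng.uniform(-np.pi, np.pi, size=30)
for _ in range(5):
    # all-to-all interaction, no noise: the flock aligns fully
    pos, theta = vicsek_step(pos, theta, radius=10.0)
order = np.abs(np.exp(1j * theta).mean())   # polar order parameter, 1 = aligned
```

With noise restored and a finite interaction radius, the same update produces the order-disorder transition the Vicsek model is known for; the cluster phases in the abstract emerge once cohesion and rim/core motility differences are added.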
Short-Run Asset Selection using a Logistic Model
Directory of Open Access Journals (Sweden)
Walter Gonçalves Junior
2011-06-01
Investors constantly look for significant predictors and accurate models to forecast future results, whose occasional efficacy ends up being neutralized by market efficiency. Regardless, such predictors are widely used in the search for better (and more unique) perceptions. This paper aims to investigate to what extent some of the most notorious indicators have discriminatory power to select stocks, and whether it is feasible to build models with such variables that could anticipate the stocks with good performance. To that end, logistic regressions were conducted on stocks traded at Bovespa, using the selected indicators as explanatory variables. Among the indicators investigated, the Bovespa Index return, liquidity, the Sharpe ratio, ROE, market-to-book (MB), size, and age proved to be significant predictors. Half-year logistic models were also fitted and adjusted in order to check whether their discriminatory power is acceptable for asset selection.
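The core fitting step can be sketched with a plain gradient-ascent logistic regression on synthetic data. The feature columns and data-generating process below are hypothetical stand-ins for the Bovespa indicators, not the paper's dataset:

```python
import numpy as np

rng = np.random.default_rng(42)

# One row per stock; columns stand in for indicators such as liquidity,
# Sharpe ratio, ROE, market-to-book, size, and age (synthetic data).
X = rng.standard_normal((500, 6))
true_w = np.array([1.0, -0.5, 0.8, 0.0, 0.3, -0.2])
# Label = 1 if the stock "outperformed" (indicator-driven, with noise).
y = (X @ true_w + 0.1 * rng.standard_normal(500) > 0).astype(float)

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Unregularised logistic regression by gradient ascent on the
    log-likelihood: w += lr * X'(y - p) / n."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
        w += lr * X.T @ (y - p) / len(y)
    return w

w = fit_logistic(X, y)
# In-sample classification: predict "good performer" when the score is positive.
accuracy = ((X @ w > 0) == (y == 1)).mean()
```

The fitted coefficients play the role of the paper's indicator weights: a significantly positive coefficient marks an indicator with discriminatory power for selecting stocks.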
A long run intertemporal model of the oil market with uncertainty and strategic interaction
International Nuclear Information System (INIS)
Lensberg, T.; Rasmussen, H.
1991-06-01
This paper describes a model of the long-run price uncertainty in the oil market. The main feature of the model is that the uncertainty about OPEC's price strategy is assumed to be generated not by irrational behavior on the part of OPEC, but by uncertainty about OPEC's size and time preference. The control of OPEC's pricing decision is assumed to shift among a set of OPEC types over time according to a stochastic process, with each type implementing the price strategy which best fits the interests of its supporters. The model is fully dynamic on the supply side, in the sense that all oil producers are assumed to understand the working of OPEC and the oil market; in particular, the non-OPEC producers base their investment decisions on rational price expectations. On the demand side, we assume that market insight is on average less developed, and model it by means of a long-run demand curve on current prices and a simple lag structure. The long-run demand curve for crude oil is generated by a fairly detailed static long-run equilibrium model of the product markets. Preliminary experience with the model indicates that prices are likely to stay below 20 dollars in the foreseeable future, but that prices around 30 dollars may occur if the present long-run time perspective of OPEC is abandoned in favor of a more short-run one. 26 refs., 4 figs., 7 tabs
Modeling, simulation and optimization of bipedal walking
Berns, Karsten
2013-01-01
The model-based investigation of motions of anthropomorphic systems is an important interdisciplinary research topic involving specialists from many fields such as Robotics, Biomechanics, Physiology, Orthopedics, Psychology, Neurosciences, Sports, Computer Graphics and Applied Mathematics. This book presents a study of basic locomotion forms such as walking and running, which are of particular interest due to the high demands on dynamic coordination, actuator efficiency and balance control. Mathematical models and numerical simulation and optimization techniques are explained, in combination with experimental data, which can help to better understand the basic underlying mechanisms of these motions and to improve them. Example topics treated in this book are: modeling techniques for anthropomorphic bipedal walking systems; optimized walking motions for different objective functions; identification of objective functions from measurements; simulation and optimization approaches for humanoid robots; and biologically inspired con...
A virtual laboratory notebook for simulation models.
Winfield, A J
1998-01-01
In this paper we describe how we have adopted the laboratory notebook as a metaphor for interacting with computer simulation models. This 'virtual' notebook stores the simulation output and meta-data (which is used to record the scientist's interactions with the simulation). The meta-data stored consists of annotations (equivalent to marginal notes in a laboratory notebook), a history tree and a log of user interactions. The history tree structure records when in 'simulation' time, and from what starting point in the tree changes are made to the parameters by the user. Typically these changes define a new run of the simulation model (which is represented as a new branch of the history tree). The tree shows the structure of the changes made to the simulation and the log is required to keep the order in which the changes occurred. Together they form a record which you would normally find in a laboratory notebook. The history tree is plotted in simulation parameter space. This shows the scientist's interactions with the simulation visually and allows direct manipulation of the parameter information presented, which in turn is used to control directly the state of the simulation. The interactions with the system are graphical and usually involve directly selecting or dragging data markers and other graphical control devices around in parameter space. If the graphical manipulators do not provide precise enough control then textual manipulation is still available which allows numerical values to be entered by hand. The Virtual Laboratory Notebook, by providing interesting interactions with the visual view of the history tree, provides a mechanism for giving the user complex and novel ways of interacting with biological computer simulation models.
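A minimal sketch of such a history tree, with the shared ordered log the paper pairs it with, might look as follows. This is hypothetical Python; the actual system is graphical and stores richer meta-data such as annotations:

```python
import itertools

class HistoryNode:
    """A point in simulation parameter space. Changing parameters from any
    node spawns a new branch (a new run), and every node creation is also
    appended to a single ordered log, as in a laboratory notebook."""
    _ids = itertools.count()

    def __init__(self, params, parent=None):
        self.id = next(HistoryNode._ids)
        self.params = dict(params)
        self.parent = parent
        self.children = []
        self.log = parent.log if parent is not None else []  # shared log
        self.log.append((self.id, self.params))

    def branch(self, **changes):
        """Start a new run from this node with some parameters changed."""
        child = HistoryNode({**self.params, **changes}, parent=self)
        self.children.append(child)
        return child

# Hypothetical simulation parameters for a population model.
root = HistoryNode({"growth_rate": 0.1, "capacity": 100})
runA = root.branch(growth_rate=0.2)    # new branch from the root
runB = root.branch(capacity=250)       # sibling branch, also from the root
runA2 = runA.branch(capacity=150)      # deeper change along run A
```

The tree records *where* each change was made (which starting point), while the log records *in what order* the changes occurred; together they reconstruct the scientist's session, as described above.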
Notes on modeling and simulation
Energy Technology Data Exchange (ETDEWEB)
Redondo, Antonio [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-03-10
These notes present a high-level overview of how modeling and simulation are carried out by practitioners. The discussion is of a general nature; no specific techniques are examined but the activities associated with all modeling and simulation approaches are briefly addressed. There is also a discussion of validation and verification and, at the end, a section on why modeling and simulation are useful.
Implementation of the ATLAS Run 2 event data model
Buckley, Andrew; Elsing, Markus; Gillberg, Dag Ingemar; Koeneke, Karsten; Krasznahorkay, Attila; Moyse, Edward; Nowak, Marcin; Snyder, Scott; van Gemmeren, Peter
2015-01-01
During the 2013--2014 shutdown of the Large Hadron Collider, ATLAS switched to a new event data model for analysis, called the xAOD. A key feature of this model is the separation of the object data from the objects themselves (the `auxiliary store'). Rather than being stored as member variables of the analysis classes, all object data are stored separately, as vectors of simple values. Thus, the data are stored in a `structure of arrays' format, while the user still can access it as an `array of structures'. This organization allows for on-demand partial reading of objects, the selective removal of object properties, and the addition of arbitrary user-defined properties in a uniform manner. It also improves performance by increasing the locality of memory references in typical analysis code. The resulting data structures can be written to ROOT files with data properties represented as simple ROOT tree branches. This talk will focus on the design and implementation of the auxiliary store and its interaction with RO...
Sargsyan, K.; Safta, C.; Debusschere, B.; Najm, H.
2010-12-01
Uncertainty quantification in complex climate models is challenged by the sparsity of available climate model predictions due to the high computational cost of model runs. Another feature that prevents classical uncertainty analysis from being readily applicable is bifurcative behavior in climate model response with respect to certain input parameters. A typical example is the Atlantic Meridional Overturning Circulation. The predicted maximum overturning stream function exhibits discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We outline a methodology for uncertainty quantification given discontinuous model response and a limited number of model runs. Our approach is twofold. First, we detect the discontinuity with Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve shape and location for arbitrarily distributed input parameter values. Then, we construct spectral representations of uncertainty, using Polynomial Chaos (PC) expansions on either side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification. The approach is enabled by a Rosenblatt transformation that maps each side of the discontinuity to regular domains where desirable orthogonality properties for the spectral bases hold. We obtain PC modes by either orthogonal projection or Bayesian inference, and argue for a hybrid approach that targets a balance between the accuracy provided by the orthogonal projection and the flexibility provided by the Bayesian inference, where the latter allows obtaining reasonable expansions without extra forward model runs. The model output, and its associated uncertainty at specific design points, are then computed by taking an ensemble average over PC expansions corresponding to possible realizations of the discontinuity curve. The methodology is tested on synthetic examples.
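A drastically simplified 1-D analogue of the two-step approach (locate the discontinuity, then build separate surrogates on each side) might look like the following. The toy model, the crude jump detector, and the polynomial fits below stand in for the paper's Bayesian inference and Polynomial Chaos machinery:

```python
import numpy as np

# Toy model response with a discontinuity at x = 0.4 (invented example).
rng = np.random.default_rng(0)
c_true = 0.4
x = np.sort(rng.uniform(0.0, 1.0, 40))              # sparse "model runs"
y = np.where(x < c_true, 1.0 + 0.5 * x, -1.0 + 0.2 * x)

# Crude discontinuity detection: largest jump between consecutive samples.
jumps = np.abs(np.diff(y))
k = np.argmax(jumps)
c_est = 0.5 * (x[k] + x[k + 1])

# Separate low-order fits on each side of the estimated discontinuity,
# mimicking side-by-side spectral expansions.
left, right = x < c_est, x >= c_est
fit_l = np.polyfit(x[left], y[left], 1)
fit_r = np.polyfit(x[right], y[right], 1)

def surrogate(xq):
    """Piecewise surrogate: evaluate the fit for the side xq falls on."""
    return np.polyval(fit_l if xq < c_est else fit_r, xq)
```

A global polynomial fit through all 40 points would oscillate badly across the jump; fitting each side separately is the basic idea that the paper's averaged-PC construction makes rigorous.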
Energy Technology Data Exchange (ETDEWEB)
1979-09-01
A listing of a CASY computer run is presented. It was initiated from a demand terminal and, therefore, contains the identification ST0952. This run also contains an INDEX listing of the subroutine UPDATE. The run includes a simulated scram transient at 30 seconds.
mr. A C++ library for the matching and running of the Standard Model parameters
International Nuclear Information System (INIS)
Kniehl, Bernd A.; Veretin, Oleg L.; Pikelner, Andrey F.; Joint Institute for Nuclear Research, Dubna
2016-01-01
We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library.
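The kind of renormalization group evolution mr performs can be illustrated, at one-loop order and for the strong coupling only, as follows. This is a toy sketch, not mr's API; mr itself works at three-loop precision with two-loop matching across the full Standard Model:

```python
import math

# One-loop running of the strong coupling alpha_s (toy stand-in for mr).
def b0(nf):
    """One-loop QCD beta coefficient for nf active quark flavours."""
    return 11.0 - 2.0 * nf / 3.0

def alpha_s(mu, mu0=91.1876, a0=0.1181, nf=5):
    """Analytic solution of mu * d(a)/d(mu) = -b0/(2*pi) * a**2."""
    return a0 / (1.0 + b0(nf) * a0 / (2.0 * math.pi) * math.log(mu / mu0))

def alpha_s_rk4(mu, mu0=91.1876, a0=0.1181, nf=5, steps=1000):
    """The same RGE integrated numerically (RK4) in t = ln(mu)."""
    t, a = math.log(mu0), a0
    h = (math.log(mu) - t) / steps
    f = lambda a: -b0(nf) / (2.0 * math.pi) * a * a
    for _ in range(steps):
        k1 = f(a); k2 = f(a + h / 2 * k1); k3 = f(a + h / 2 * k2); k4 = f(a + h * k3)
        a += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return a
```

The analytic and numerical results agree closely, and the coupling decreases toward higher scales (asymptotic freedom), which is the qualitative behaviour any such library must reproduce.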
The natural oscillation of two types of ENSO events based on analyses of CMIP5 model control runs
Xu, Kang; Su, Jingzhi; Zhu, Congwen
2014-07-01
The eastern- and central-Pacific El Niño-Southern Oscillation (EP- and CP-ENSO) have been found to be dominant in the tropical Pacific Ocean, and are characterized by interannual and decadal oscillation, respectively. In the present study, we defined the EP- and CP-ENSO modes by singular value decomposition (SVD) between SST and sea level pressure (SLP) anomalous fields. We evaluated the natural features of these two types of ENSO modes as simulated by the pre-industrial control runs of 20 models involved in phase five of the Coupled Model Intercomparison Project (CMIP5). The results suggested that all the models show good skill in simulating the SST and SLP anomaly dipolar structures for the EP-ENSO mode, but only 12 exhibit good performance in simulating the tripolar CP-ENSO modes. Wavelet analysis suggested that the ensemble principal components in these 12 models exhibit an interannual and multi-decadal oscillation related to the EP- and CP-ENSO, respectively. Since there are no changes in external forcing in the pre-industrial control runs, such a result implies that the decadal oscillation of CP-ENSO is possibly a result of natural climate variability rather than external forcing.
Running exercise protects the capillaries in white matter in a rat model of depression.
Chen, Lin-Mu; Zhang, Ai-Pin; Wang, Fei-Fei; Tan, Chuan-Xue; Gao, Yuan; Huang, Chun-Xia; Zhang, Yi; Jiang, Lin; Zhou, Chun-Ni; Chao, Feng-Lei; Zhang, Lei; Tang, Yong
2016-12-01
Running has been shown to improve depressive symptoms when used as an adjunct to medication. However, the mechanisms underlying the antidepressant effects of running are not fully understood. Changes of capillaries in white matter have been discovered in clinical patients and depression model rats. Considering the important role of white matter in depression, running may cause capillary structural changes in white matter. Chronic unpredictable stress (CUS) rats were provided with a 4-week running exercise (from the fifth week to the eighth week) for 20 minutes each day for 5 consecutive days each week. Anhedonia was measured by a behavior test. Furthermore, capillary changes were investigated in the control group, the CUS/Standard group, and the CUS/Running group using stereological methods. The 4-week running increased sucrose consumption significantly in the CUS/Running group and had significant effects on the total volume, total length, and total surface area of the capillaries in the white matter of depression rats. These results demonstrated that exercise-induced protection of the capillaries in white matter might be one of the structural bases for the exercise-induced treatment of depression. It might provide important parameters for further study of the vascular mechanisms of depression and a new research direction for the development of clinical antidepressant means. J. Comp. Neurol. 524:3577-3586, 2016. © 2016 Wiley Periodicals, Inc.
Simulation Model of a Transient
DEFF Research Database (Denmark)
Jauch, Clemens; Sørensen, Poul; Bak-Jensen, Birgitte
2005-01-01
This paper describes the simulation model of a controller that enables an active-stall wind turbine to ride through transient faults. The simulated wind turbine is connected to a simple model of a power system. Certain fault scenarios are specified, and the turbine shall be able to sustain operation.
Cognitive models embedded in system simulation models
International Nuclear Information System (INIS)
Siegel, A.I.; Wolf, J.J.
1982-01-01
If we are to discuss and consider cognitive models, we must first come to grips with two questions: (1) What is cognition; (2) What is a model. Presumably, the answers to these questions can provide a basis for defining a cognitive model. Accordingly, this paper first places these two questions into perspective. Then, cognitive models are set within the context of computer simulation models and a number of computer simulations of cognitive processes are described. Finally, pervasive issues are discussed vis-a-vis cognitive modeling in the computer simulation context
Towards a numerical run-out model for quick-clay slides
Issler, Dieter; L'Heureux, Jean-Sébastien; Cepeda, José M.; Luna, Byron Quan; Gebreslassie, Tesfahunegn A.
2015-04-01
quasi-three-dimensional codes with a choice of bed-friction laws. The findings of the simulations point strongly towards the need for a different modeling approach that incorporates the essential physical features of quick-clay slides. The major requirement is a realistic description of remolding. A two-layer model is needed to describe the non-sensitive topsoil that often is passively advected by the slide. In many cases, the topography is rather complex so that 3D or quasi-3D (depth-averaged) models are required for realistic modeling of flow heights and velocities. Finally, since many Norwegian quick-clay slides run-out in a fjord (and may generate a tsunami), it is also desirable to explicitly account for buoyancy and hydrodynamic drag.
General introduction to simulation models
DEFF Research Database (Denmark)
Hisham Beshara Halasa, Tariq; Boklund, Anette
2012-01-01
Monte Carlo simulation can be defined as a representation of real-life systems to gain insight into their functions and to investigate the effects of alternative conditions or actions on the modeled system. Models are a simplification of a system. Most often, it is best to use experiments and field trials to investigate the effect of alternative conditions or actions on a specific system. Nonetheless, field trials are expensive and sometimes not possible to conduct, as in the case of foot-and-mouth disease (FMD). Instead, simulation models can be a good and cheap substitute for experiments and field trials. However, if simulation models are to be used, good-quality input data must be available. To model FMD, several disease spread models are available. For this project, we chose three simulation models: Davis Animal Disease Spread (DADS), which has been upgraded to DTU-DADS, InterSpread Plus (ISP)…
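As a minimal illustration of the kind of stochastic disease-spread experiment such models run, here is a Reed-Frost chain-binomial sketch. This is a deliberately tiny toy, not DTU-DADS or InterSpread Plus, and all parameter values are invented:

```python
import random

# Reed-Frost chain-binomial model of spread between herds (toy parameters).
def reed_frost(n_herds=100, initial=1, p_contact=0.03, rng=None):
    rng = rng or random.Random(42)
    susceptible, infected = n_herds - initial, initial
    epidemic_size = initial
    while infected > 0 and susceptible > 0:
        # Each susceptible herd escapes infection from every infectious herd
        # independently with probability (1 - p_contact) per generation.
        p_inf = 1.0 - (1.0 - p_contact) ** infected
        new_cases = sum(1 for _ in range(susceptible) if rng.random() < p_inf)
        susceptible -= new_cases
        epidemic_size += new_cases
        infected = new_cases          # newly infected herds drive the next generation
    return epidemic_size

# Monte Carlo: distribution of final outbreak sizes over replicated runs.
sizes = [reed_frost(rng=random.Random(seed)) for seed in range(500)]
mean_size = sum(sizes) / len(sizes)
```

Running many replicates and summarising the distribution of outcomes, rather than a single run, is the essence of the Monte Carlo approach the abstract describes.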
Simulation - modeling - experiment; Simulation - modelisation - experience
Energy Technology Data Exchange (ETDEWEB)
NONE
2004-07-01
After two workshops held in 2001 on the same topics, and in order to make a status of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop and dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advance status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); methods of disturbances and sensitivity analysis of nuclear data in reactor physics, application to VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F
RMCgui: a new interface for the workflow associated with running Reverse Monte Carlo simulations
International Nuclear Information System (INIS)
Dove, Martin T; Rigg, Gary
2013-01-01
The Reverse Monte Carlo method enables construction and refinement of large atomic models of materials that are tuned to give best agreement with experimental data such as neutron and x-ray total scattering data, capturing both the average structure and fluctuations. The practical drawback with the current implementations of this approach is the relatively complex workflow required, from setting up the configuration and simulation details through to checking the final outputs and analysing the resultant configurations. In order to make this workflow more accessible to users, we have developed an end-to-end workflow wrapped within a graphical user interface—RMCgui—designed to make the Reverse Monte Carlo method more widely accessible. (paper)
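The core Reverse Monte Carlo acceptance loop that such a workflow wraps can be sketched in a 1-D toy. The observable, move size, and tolerance below are invented for illustration; real RMC refines 3-D atomic configurations against scattering data:

```python
import math
import random

# Bare-bones Reverse Monte Carlo loop: move atoms at random and accept moves
# that improve the fit to a target "experimental" observable (Metropolis-style).
def rmc_fit(target, n_atoms=20, sigma=0.05, steps=4000, rng=None):
    rng = rng or random.Random(7)
    pos = [rng.random() for _ in range(n_atoms)]          # atom positions in [0, 1]
    calc = lambda p: sum(p) / len(p)                      # toy observable: mean position
    chi2 = lambda p: ((calc(p) - target) / sigma) ** 2    # misfit to the "data"
    cur = chi2(pos)
    for _ in range(steps):
        i = rng.randrange(n_atoms)
        old = pos[i]
        pos[i] = min(1.0, max(0.0, old + rng.gauss(0.0, 0.1)))  # trial move
        new = chi2(pos)
        if new > cur and rng.random() >= math.exp(-(new - cur) / 2.0):
            pos[i] = old       # reject: restore the previous position
        else:
            cur = new          # accept: keep the move and update the misfit
    return calc(pos), cur

obs, final_chi2 = rmc_fit(target=0.3)
```

Everything around this loop in a real study (building the starting configuration, preparing the data, checking and analysing the refined configurations) is exactly the workflow that RMCgui is designed to streamline.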
Andrada, Emanuel; Rode, Christian; Blickhan, Reinhard
2013-10-21
Many birds use grounded running (running without aerial phases) in a wide range of speeds. Contrary to walking and running, numerical investigations of this gait based on the BSLIP (bipedal spring loaded inverted pendulum) template are rare. To obtain template related parameters of quails (e.g. leg stiffness) we used x-ray cinematography combined with ground reaction force measurements of quail grounded running. Interestingly, with speed the quails did not adjust the swing leg's angle of attack with respect to the ground but adapted the angle between legs (which we termed aperture angle), and fixed it about 30ms before touchdown. In simulations with the BSLIP we compared this swing leg alignment policy with the fixed angle of attack with respect to the ground typically used in the literature. We found symmetric periodic grounded running in a simply connected subset comprising one third of the investigated parameter space. The fixed aperture angle strategy revealed improved local stability and surprising tolerance with respect to large perturbations. Starting with the periodic solutions, after step-down step-up or step-up step-down perturbations of 10% leg rest length, in the vast majority of cases the bipedal SLIP could accomplish at least 50 steps to fall. The fixed angle of attack strategy was not feasible. We propose that, in small animals in particular, grounded running may be a common gait that allows highly compliant systems to exploit energy storage without the necessity of quick changes in the locomotor program when facing perturbations. © 2013 Elsevier Ltd. All rights reserved.
Stabilising the global greenhouse. A simulation model
International Nuclear Information System (INIS)
Michaelis, P.
1993-01-01
This paper investigates the economic implications of a comprehensive approach to greenhouse policies that strives to stabilise the atmospheric concentration of greenhouse gases at an ecologically determined threshold level. In a theoretical optimisation model, conditions for an efficient allocation of abatement effort among pollutants and over time are derived. The model is empirically specified and adapted to a dynamic GAMS algorithm. By various simulation runs for the period 1990 to 2110, the economics of greenhouse gas accumulation are explored. In particular, the long-run costs associated with the above stabilisation target are evaluated for three different policy scenarios: i) a comprehensive approach that covers all major greenhouse gases simultaneously, ii) a piecemeal approach that is limited to reducing CO2 emissions, and iii) a ten-year moratorium that postpones abatement effort until new scientific evidence on the greenhouse effect becomes available. Comparing the simulation results suggests that a piecemeal approach would considerably increase total cost, whereas a ten-year moratorium might be reasonable even if the probability of 'good news' is comparatively small. (orig.)
Quantitative assessment of changes in landslide risk using a regional scale run-out model
Hussin, Haydar; Chen, Lixia; Ciurean, Roxana; van Westen, Cees; Reichenbach, Paola; Sterlacchini, Simone
2015-04-01
The risk of landslide hazard continuously changes in time and space and is rarely a static or constant phenomenon in an affected area. However, one of the main challenges of quantitatively assessing changes in landslide risk is the availability of multi-temporal data for the different components of risk. Furthermore, a truly "quantitative" landslide risk analysis requires the modeling of the landslide intensity (e.g. flow depth, velocities or impact pressures) affecting the elements at risk. Such a quantitative approach is often lacking in medium to regional scale studies in the scientific literature or is left out altogether. In this research we modelled the temporal and spatial changes of debris flow risk in a narrow alpine valley in the North Eastern Italian Alps. The debris flow inventory from 1996 to 2011 and multi-temporal digital elevation models (DEMs) were used to assess the susceptibility of debris flow triggering areas and to simulate debris flow run-out using the Flow-R regional scale model. In order to determine debris flow intensities, we used a linear relationship that was found between back-calibrated, physically based Flo-2D simulations (local scale models of five debris flows from 2003) and the probability values of the Flow-R software. This gave us the possibility to assign flow depth to a total of 10 separate classes on a regional scale. Debris flow vulnerability curves from the literature, and one curve derived specifically for our case study area, were used to determine the damage for different material and building types associated with the elements at risk. The building values were obtained from the Italian Revenue Agency (Agenzia delle Entrate) and were classified per cadastral zone according to the Real Estate Observatory data (Osservatorio del Mercato Immobiliare, Agenzia Entrate - OMI). The minimum and maximum market value for each building was obtained by multiplying the corresponding land-use value (€/m²) with the building area and number of floors.
Wheel-running in a transgenic mouse model of Alzheimer's disease: protection or symptom?
Richter, Helene; Ambrée, Oliver; Lewejohann, Lars; Herring, Arne; Keyvani, Kathy; Paulus, Werner; Palme, Rupert; Touma, Chadi; Schäbitz, Wolf-Rüdiger; Sachser, Norbert
2008-06-26
Several studies on both humans and animals reveal benefits of physical exercise on brain function and health. A previous study on TgCRND8 mice, a transgenic model of Alzheimer's disease, reported beneficial effects of premorbid onset of long-term access to a running wheel on spatial learning and plaque deposition. Our study investigated the effects of access to a running wheel after the onset of Abeta pathology on behavioural, endocrinological, and neuropathological parameters. From day 80 of age, the time when Abeta deposition becomes apparent, TgCRND8 and wildtype mice were kept with or without running wheel. Home cage behaviour was analysed and cognitive abilities regarding object recognition memory and spatial learning in the Barnes maze were assessed. Our results show that, in comparison to Wt mice, Tg mice were characterised by impaired object recognition memory and spatial learning, increased glucocorticoid levels, hyperactivity in the home cage and high levels of stereotypic behaviour. Access to a running wheel had no effects on cognitive or neuropathological parameters, but reduced the amount of stereotypic behaviour in transgenics significantly. Furthermore, wheel-running was inversely correlated with stereotypic behaviour, suggesting that wheel-running may have stereotypic qualities. In addition, wheel-running positively correlated with plaque burden. Thus, in a phase when plaques are already present in the brain, it may be symptomatic of brain pathology, rather than protective. Whether or not access to a running wheel has beneficial effects on Alzheimer-like pathology and symptoms may therefore strongly depend on the exact time when the wheel is provided during development of the disease.
ECONOMIC MODELING STOCKS CONTROL SYSTEM: SIMULATION MODEL
Климак, М.С.; Войтко, С.В.
2016-01-01
Theoretical and applied aspects of the development of simulation models to predict the optimal development of production systems that create tangible products and services are considered. It is shown that the process of inventory control requires economic and mathematical modeling in view of the complexity of theoretical studies. A simulation model of stocks control is proposed that allows management decisions to be made with production logistics.
Progress in modeling and simulation.
Kindler, E
1998-01-01
For modeling systems, computers are more and more used while the other "media" (including the human intellect) carrying the models are abandoned. For modeling knowledge, i.e. more or less general concepts (possibly used to model systems composed of instances of such concepts), object-oriented programming is nowadays widely used. For modeling processes that exist and develop in time, computer simulation is used, the results of which are often presented by means of animation (graphical pictures moving and changing in time). Unfortunately, object-oriented programming tools are commonly not designed to be of great use for simulation, while programming tools for simulation do not enable their users to apply the advantages of object-oriented programming. Nevertheless, there are exceptions that enable general concepts represented at a computer to be used for constructing simulation models and for their easy modification. They are described in the present paper, together with precise definitions of modeling, simulation and object-oriented programming (including cases that do not satisfy the definitions but risk introducing misunderstanding), an outline of their applications, and of their further development. Since computing systems are being introduced as control components into a large spectrum of (technological, social and biological) systems, attention is oriented to models of systems containing modeling components.
Overcoming Microsoft Excel's Weaknesses for Crop Model Building and Simulations
Sung, Christopher Teh Boon
2011-01-01
Using spreadsheets such as Microsoft Excel for building crop models and running simulations can be beneficial. Excel is easy to use, powerful, and versatile, and it requires the least proficiency in computer programming compared to other programming platforms. Excel, however, has several weaknesses: it does not directly support loops for iterative…
Stochastic modeling analysis and simulation
Nelson, Barry L
1995-01-01
A coherent introduction to the techniques for modeling dynamic stochastic systems, this volume also offers a guide to the mathematical, numerical, and simulation tools of systems analysis. Suitable for advanced undergraduates and graduate-level industrial engineers and management science majors, it proposes modeling systems in terms of their simulation, regardless of whether simulation is employed for analysis. Beginning with a view of the conditions that permit a mathematical-numerical analysis, the text explores Poisson and renewal processes, Markov chains in discrete and continuous time, se
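The Poisson and renewal processes covered in such a text can be illustrated by the standard construction of a Poisson process from exponential inter-arrival times (the rate and horizon below are arbitrary example values):

```python
import random

# A homogeneous Poisson process built from Exp(rate) inter-arrival times,
# i.e. the renewal-process view of the Poisson process.
def poisson_arrivals(rate, horizon, rng):
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate)   # exponential inter-arrival time
        if t > horizon:
            return arrivals
        arrivals.append(t)

rng = random.Random(3)
# Long-run check: the count over [0, T] should average rate * T.
counts = [len(poisson_arrivals(rate=2.0, horizon=10.0, rng=rng)) for _ in range(2000)]
mean_count = sum(counts) / len(counts)
```

With rate 2 and horizon 10, the mean count over many replications should be close to 20, which is the kind of sanity check simulation analysis relies on.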
FASTBUS simulation models in VHDL
International Nuclear Information System (INIS)
Appelquist, G.
1992-11-01
Four hardware simulation models implementing the FASTBUS protocol are described. The models are written in the VHDL hardware description language to obtain portability, i.e. without relations to any specific simulator. They include two complete FASTBUS devices, a full-duplex segment interconnect and ancillary logic for the segment. In addition, master and slave models using a high level interface to describe FASTBUS operations, are presented. With these models different configurations of FASTBUS systems can be evaluated and the FASTBUS transactions of new devices can be verified. (au)
Model reduction for circuit simulation
Hinze, Michael; Maten, E Jan W Ter
2011-01-01
Simulation based on mathematical models plays a major role in computer aided design of integrated circuits (ICs). Decreasing structure sizes, increasing packing densities and driving frequencies require the use of refined mathematical models that take secondary, parasitic effects into account. This leads to very high-dimensional problems which nowadays require simulation times too large for the short time-to-market demands in industry. Modern Model Order Reduction (MOR) techniques present a way out of this dilemma by providing surrogate models which keep the main characteristics of the device…
The Model of the Software Running on a Computer Equipment Hardware Included in the Grid network
Directory of Open Access Journals (Sweden)
T. A. Mityushkina
2012-12-01
A new approach to building a cloud computing environment using Grid networks is proposed in this paper. The authors describe the functional capabilities, algorithm, and model of software running on computer equipment hardware included in the Grid network that will allow the implementation of a cloud computing environment using Grid technologies.
ASCHFLOW - A dynamic landslide run-out model for medium scale hazard analysis
Czech Academy of Sciences Publication Activity Database
Quan Luna, B.; Blahůt, Jan; van Asch, T.W.J.; van Westen, C.J.; Kappes, M.
2016-01-01
Vol. 3, 12 December (2016), Article No. 29. E-ISSN 2197-8670. Institutional support: RVO:67985891. Keywords: landslides * run-out models * medium scale hazard analysis * quantitative risk assessment. Subject RIV: DE - Earth Magnetism, Geodesy, Geography
Chonggang Xu; Hong S. He; Yuanman Hu; Yu Chang; Xiuzhen Li; Rencang Bu
2005-01-01
Geostatistical stochastic simulation is always combined with Monte Carlo method to quantify the uncertainty in spatial model simulations. However, due to the relatively long running time of spatially explicit forest models as a result of their complexity, it is always infeasible to generate hundreds or thousands of Monte Carlo simulations. Thus, it is of great...
Just Running Around: Some Reminiscences of Early Simulation/Gaming in the United Kingdom
van Ments, Morry
2011-01-01
The article begins with an abbreviated CV of the author and then recounts the formation of Society for the Advancement of Games and Simulation in Education and Training (SAGSET) and the early days of simulation and gaming in the United Kingdom. Four strands of elements of development are described together with the key events of the 1970s and…
Bot, G.P.A.
1989-01-01
A model is a representation of a real system to describe some properties, i.e. internal factors of that system (outputs), as a function of some external factors (inputs). It is impossible to describe the relation between all internal factors (if even all internal factors could be defined) and all external factors.
International Nuclear Information System (INIS)
Bergstroem, Johannes; Ohlsson, Tommy; Zhang He
2011-01-01
We show that, in the low-scale type-I seesaw model, renormalization group running of neutrino parameters may lead to significant modifications of the leptonic mixing angles in view of so-called seesaw threshold effects. Especially, we derive analytical formulas for radiative corrections to neutrino parameters in crossing the different seesaw thresholds, and show that there may exist enhancement factors efficiently boosting the renormalization group running of the leptonic mixing angles. We find that, as a result of the seesaw threshold corrections to the leptonic mixing angles, various flavor symmetric mixing patterns (e.g., bi-maximal and tri-bimaximal mixing patterns) can be easily accommodated at relatively low energy scales, which is well within the reach of running and forthcoming experiments (e.g., the LHC).
Elhenawy, Mohammed; Jahangiri, Arash; Rakha, Hesham A; El-Shawarby, Ihab
2015-10-01
The ability to model driver stop/run behavior at signalized intersections considering the roadway surface condition is critical in the design of advanced driver assistance systems. Such systems can reduce intersection crashes and fatalities by predicting driver stop/run behavior. The research presented in this paper uses data collected from two controlled field experiments on the Smart Road at the Virginia Tech Transportation Institute (VTTI) to model driver stop/run behavior at the onset of a yellow indication for different roadway surface conditions. The paper offers two contributions. First, it introduces a new predictor related to driver aggressiveness and demonstrates that this measure enhances the modeling of driver stop/run behavior. Second, it applies well-known artificial intelligence techniques, including adaptive boosting (AdaBoost), random forest, and support vector machine (SVM) algorithms, as well as traditional logistic regression techniques, to the data in order to develop a model that can be used by traffic signal controllers to predict driver stop/run decisions in a connected vehicle environment. The research demonstrates that by adding the proposed driver aggressiveness predictor to the model, there is a statistically significant increase in model accuracy. Moreover, the false alarm rate is reduced, but this reduction is not statistically significant. The study demonstrates that, for the subject data, the SVM machine learning algorithm performs the best in terms of optimum classification accuracy and false positive rates; however, the SVM model produces the best performance in terms of classification accuracy only. Copyright © 2015 Elsevier Ltd. All rights reserved.
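A stripped-down version of such a stop/run model can be sketched with plain logistic regression on synthetic data. The feature distributions and coefficients below are invented; the paper itself uses field data and additionally AdaBoost, random forest, and SVM classifiers:

```python
import numpy as np

# Synthetic stop/run data: time-to-intersection, speed, and an invented
# "aggressiveness" score mimicking the paper's extra predictor.
rng = np.random.default_rng(5)
n = 2000
tta = rng.uniform(0.5, 6.0, n)      # time to stop line at yellow onset (s)
speed = rng.uniform(10.0, 30.0, n)  # approach speed (m/s)
aggr = rng.uniform(0.0, 1.0, n)     # driver aggressiveness score

# Ground-truth rule (invented): close, fast, aggressive drivers tend to run.
logit = 3.0 - 1.2 * tta + 0.05 * speed + 2.0 * aggr
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Fit logistic regression by plain gradient descent on the log-loss.
X = np.column_stack([np.ones(n), tta, speed, aggr])
w = np.zeros(4)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.01 * X.T @ (p - y) / n

accuracy = np.mean(((1.0 / (1.0 + np.exp(-X @ w))) > 0.5) == (y > 0.5))
```

Adding or removing the `aggr` column and comparing accuracy on held-out data would be the simplest analogue of the paper's test that the aggressiveness predictor improves the model.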
A rapid estimation of tsunami run-up based on finite fault models
Campos, J.; Fuentes, M. A.; Hayes, G. P.; Barrientos, S. E.; Riquelme, S.
2014-12-01
Many efforts have been made to estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task, because of the time it takes to construct a tsunami model using real time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunami a priori. However, such models are generally based on uniform slip distributions and thus oversimplify our knowledge of the earthquake source. Instead, we can use finite fault models of earthquakes to give a more accurate prediction of the tsunami run-up. Here we show how to accurately predict tsunami run-up from any seismic source model using an analytic solution found by Fuentes et al. (2013) that was especially calculated for zones with a very well defined strike, e.g., Chile, Japan, Alaska, etc. The main idea of this work is to produce a tool for emergency response, trading off accuracy for quickness. Our solutions for three large earthquakes are promising. Here we compute models of the run-up for the 2010 Mw 8.8 Maule Earthquake, the 2011 Mw 9.0 Tohoku Earthquake, and the recent 2014 Mw 8.2 Iquique Earthquake. Our maximum run-up predictions are consistent with measurements made inland after each event, with a peak of 15 to 20 m for Maule, 40 m for Tohoku, and 2.1 m for the Iquique earthquake. Considering recent advances made in the analysis of real time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first five minutes after the occurrence of any such event. Such calculations will thus provide more accurate run-up information than is otherwise available from existing uniform-slip seismic source databases.
Component and system simulation models for High Flux Isotope Reactor
International Nuclear Information System (INIS)
Sozer, A.
1989-08-01
Component models for the High Flux Isotope Reactor (HFIR) have been developed. The models are HFIR core, heat exchangers, pressurizer pumps, circulation pumps, letdown valves, primary head tank, generic transport delay (pipes), system pressure, loop pressure-flow balance, and decay heat. The models were written in FORTRAN and can be run on different computers, including IBM PCs, as they do not use any specific simulation languages such as ACSL or CSMP. 14 refs., 13 figs
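The "generic transport delay (pipes)" component lends itself to a compact sketch. The function below is a hypothetical illustration of a pure transport delay, not the FORTRAN implementation used for HFIR; names and numbers are invented.

```python
# Pipe outlet temperature equals the inlet temperature delayed by the
# fluid transit time through the pipe (pure transport delay).
def pipe_outlet(inlet_history, dt, transit_time):
    """Return outlet samples given inlet samples taken every dt seconds."""
    lag = int(round(transit_time / dt))
    out = []
    for i in range(len(inlet_history)):
        # before any fluid has crossed the pipe, the outlet holds the
        # initial inlet value
        out.append(inlet_history[i - lag] if i >= lag else inlet_history[0])
    return out

inlet = [50.0] * 10 + [60.0] * 10        # step change in inlet temperature (deg C)
outlet = pipe_outlet(inlet, dt=1.0, transit_time=4.0)
```

The step at the inlet reappears at the outlet four samples later, which is the whole behavior such a component contributes to a loop model.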
Statistical Emulation of Climate Model Projections Based on Precomputed GCM Runs*
Castruccio, Stefano
2014-03-01
The authors describe a new approach for emulating the output of a fully coupled climate model under arbitrary forcing scenarios that is based on a small set of precomputed runs from the model. Temperature and precipitation are expressed as simple functions of the past trajectory of atmospheric CO2 concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures the nonlinear evolution of spatial patterns of climate anomalies inherent in transient climates. The approach does as well as pattern scaling in all circumstances and substantially better in many; it is not computationally demanding; and, once the statistical model is fit, it produces emulated climate output effectively instantaneously. It may therefore find wide application in climate impacts assessments and other policy analyses requiring rapid climate projections.
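The emulation idea can be sketched in a few lines. The toy "climate", the forcing paths, and the regression features below are all assumptions for illustration and are far simpler than the paper's statistical model (which works with spatial fields and an explicit error structure).

```python
import numpy as np

rng = np.random.default_rng(1)
years = 100

def forcing_path(growth):
    # CO2 trajectory -> log forcing relative to preindustrial 280 ppm
    co2 = 280.0 * np.exp(growth * np.arange(years))
    return np.log(co2 / 280.0)

def toy_climate(forcing):
    # stand-in climate model: lagged response via an exponential memory
    w = np.exp(-np.arange(years) / 30.0)
    resp = np.convolve(forcing, w, mode="full")[:years] / w.sum()
    return 2.5 * resp + 0.05 * rng.standard_normal(years)

def design(forcing):
    # emulator regressors: intercept, current forcing, past-forcing memory
    w = np.exp(-np.arange(years) / 30.0)
    mem = np.convolve(forcing, w, mode="full")[:years] / w.sum()
    return np.column_stack([np.ones(years), forcing, mem])

# fit the statistical model on a small set of "training runs"
train_g = (0.003, 0.006, 0.009)
X = np.vstack([design(forcing_path(g)) for g in train_g])
yv = np.concatenate([toy_climate(forcing_path(g)) for g in train_g])
coef, *_ = np.linalg.lstsq(X, yv, rcond=None)

# emulate an unseen scenario effectively instantaneously
f_new = forcing_path(0.005)
pred = design(f_new) @ coef
truth = toy_climate(f_new)
rmse = float(np.sqrt(np.mean((pred - truth) ** 2)))
```

Once `coef` is fit, evaluating a new scenario costs one matrix multiply, which is the computational appeal the abstract describes.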
Sensitivity Analysis of Simulation Models
Kleijnen, J.P.C.
2009-01-01
This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial
Computer Based Modelling and Simulation
Indian Academy of Sciences (India)
Computer Based Modelling and Simulation - Modelling Deterministic Systems. N K Srinivasan. General Article, Resonance – Journal of Science Education, Volume 6, Issue 3, March 2001, pp. 46-54.
Human and avian running on uneven ground: a model-based comparison
Müller, R.; Birn-Jeffery, A. V.; Blum, Y.
2016-01-01
Birds and humans are successful bipedal runners, who have individually evolved bipedalism, but the extent of the similarities and differences of their bipedal locomotion is unknown. In turn, the anatomical differences of their locomotor systems complicate direct comparisons. However, a simplifying mechanical model, such as the conservative spring–mass model, can be used to describe both avian and human running and thus, provides a way to compare the locomotor strategies that birds and humans ...
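A minimal stance-phase integration of the conservative spring-mass ("SLIP") running model illustrates the comparison tool the abstract refers to. All parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

# Point mass on a massless spring leg, foot fixed during stance.
m, k, L0, g = 80.0, 20000.0, 1.0, 9.81   # mass, leg stiffness, rest length, gravity
foot = np.array([0.0, 0.0])              # foot contact point
pos = np.array([-0.2, 0.97])             # mass position at touchdown
vel = np.array([3.0, -0.5])              # touchdown velocity (forward, downward)

dt = 1e-4
def total_energy(pos, vel):
    L = np.linalg.norm(pos - foot)
    return (0.5 * m * vel @ vel + m * g * pos[1]
            + 0.5 * k * (L0 - L) ** 2)

energy0 = total_energy(pos, vel)
for _ in range(5000):                    # up to 0.5 s of stance
    leg = pos - foot
    L = np.linalg.norm(leg)
    if L >= L0:                          # leg back at rest length: takeoff
        break
    acc = (k * (L0 - L) / (m * L)) * leg + np.array([0.0, -g])
    vel = vel + dt * acc                 # semi-implicit (symplectic) Euler
    pos = pos + dt * vel
energy = total_energy(pos, vel)
```

Because the model is conservative, total mechanical energy should be (numerically) constant through stance; fitting touchdown angle and stiffness to bird versus human data is what allows a like-for-like comparison of the two runners.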
Sludge batch 9 simulant runs using the nitric-glycolic acid flowsheet
Energy Technology Data Exchange (ETDEWEB)
Lambert, D. P. [Savannah River Site (SRS), Aiken, SC (United States); Williams, M. S. [Savannah River Site (SRS), Aiken, SC (United States); Brandenburg, C. H. [Savannah River Site (SRS), Aiken, SC (United States); Luther, M. C. [Savannah River Site (SRS), Aiken, SC (United States); Newell, J. D. [Savannah River Site (SRS), Aiken, SC (United States); Woodham, W. H. [Savannah River Site (SRS), Aiken, SC (United States)
2016-11-01
Testing was completed to develop a Sludge Batch 9 (SB9) nitric-glycolic acid chemical process flowsheet for the Defense Waste Processing Facility’s (DWPF) Chemical Process Cell (CPC). CPC simulations were completed using SB9 sludge simulant, Strip Effluent Feed Tank (SEFT) simulant, and Precipitate Reactor Feed Tank (PRFT) simulant. Ten sludge-only Sludge Receipt and Adjustment Tank (SRAT) cycles, four SRAT/Slurry Mix Evaporator (SME) cycles, and one SRAT/SME cycle with actual SB9 sludge were completed. As has been demonstrated in over 100 simulations, the replacement of formic acid with glycolic acid virtually eliminates the CPC’s largest flammability hazards, hydrogen and ammonia. Recommended processing conditions are summarized in Section 3.5.1. Testing demonstrated that the interim chemistry and Reduction/Oxidation (REDOX) equations are sufficient to predict the composition of DWPF SRAT product and SME product. Additional reports will finalize the chemistry and REDOX equations. Additional testing developed an antifoam strategy to minimize the hexamethyldisiloxane (HMDSO) peak at boiling, while controlling foam, based on testing with simulant and actual waste. Implementation of the nitric-glycolic acid flowsheet in DWPF is recommended. This flowsheet not only eliminates the hydrogen and ammonia hazards but will lead to shorter processing times, higher elemental mercury recovery, and more concentrated SRAT and SME products. The steady pH profile is expected to provide flexibility in processing the high volume of strip effluent expected once the Salt Waste Processing Facility starts up.
Cabass, Giovanni; Di Valentino, Eleonora; Melchiorri, Alessandro; Pajer, Enrico; Silk, Joseph
2016-01-01
We use the recent observations of Cosmic Microwave Background temperature and polarization anisotropies provided by the Planck satellite experiment to place constraints on the running $\alpha_\mathrm{s} = \mathrm{d}n_\mathrm{s} / \mathrm{d}\log k$ and the running of the running $\beta_\mathrm{s} = \mathrm{d}\alpha_\mathrm{s} / \mathrm{d}\log k$ of the spectral index $n_\mathrm{s}$ of primordial scalar fluctuations. We find $\alpha_\mathrm{s}=0.011\pm0.010$ and $\beta_\mathrm{s}=0.027$...
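For context, in the standard convention (also used in the Planck analyses, and assumed here) the running and the running of the running enter the primordial scalar power spectrum as

```latex
\mathcal{P}_{\zeta}(k) = A_{\mathrm{s}}
  \left(\frac{k}{k_{*}}\right)^{\,n_{\mathrm{s}} - 1
    + \frac{1}{2}\,\alpha_{\mathrm{s}} \ln(k/k_{*})
    + \frac{1}{6}\,\beta_{\mathrm{s}} \ln^{2}(k/k_{*})}
```

where $k_{*}$ is the pivot scale at which $n_{\mathrm{s}}$, $\alpha_{\mathrm{s}}$, and $\beta_{\mathrm{s}}$ are evaluated.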
Whole-Motion Model of Perception during Forward- and Backward-Facing Centrifuge Runs
Holly, Jan E.; Vrublevskis, Arturs; Carlson, Lindsay E.
2009-01-01
Illusory perceptions of motion and orientation arise during human centrifuge runs without vision. Asymmetries have been found between acceleration and deceleration, and between forward-facing and backward-facing runs. Perceived roll tilt has been studied extensively during upright fixed-carriage centrifuge runs, and other components have been studied to a lesser extent. Certain, but not all, perceptual asymmetries in acceleration-vs-deceleration and forward-vs-backward motion can be explained by existing analyses. The immediate acceleration-deceleration roll-tilt asymmetry can be explained by the three-dimensional physics of the external stimulus; in addition, longer-term data has been modeled in a standard way using physiological time constants. However, the standard modeling approach is shown in the present research to predict forward-vs-backward-facing symmetry in perceived roll tilt, contradicting experimental data, and to predict perceived sideways motion, rather than forward or backward motion, around a curve. The present work develops a different whole-motion-based model taking into account the three-dimensional form of perceived motion and orientation. This model predicts perceived forward or backward motion around a curve, and predicts additional asymmetries such as the forward-backward difference in roll tilt. This model is based upon many of the same principles as the standard model, but includes an additional concept of familiarity of motions as a whole. PMID:19208962
Vehicle dynamics modeling and simulation
Schramm, Dieter; Bardini, Roberto
2014-01-01
The authors examine in detail the fundamentals and mathematical descriptions of the dynamics of automobiles. In this context different levels of complexity will be presented, starting with basic single-track models up to complex three-dimensional multi-body models. A particular focus is on the process of establishing mathematical models on the basis of real cars and the validation of simulation results. The methods presented are explained in detail by means of selected application scenarios.
Numerical simulation of Higgs models
International Nuclear Information System (INIS)
Jaster, A.
1995-10-01
The SU(2) Higgs and the Schwinger model on the lattice were analysed. Numerical simulations of the SU(2) Higgs model were performed to study the finite temperature electroweak phase transition. With the help of the multicanonical method the distribution of an order parameter at the phase transition point was measured. This was used to obtain the order of the phase transition and the value of the interface tension with the histogram method. Numerical simulations were also performed at zero temperature to carry out the renormalization. The measured values for the Wilson loops were used to determine the static potential and from this the renormalized gauge coupling. The Schwinger model was simulated at different gauge couplings to analyse the properties of the Kaplan-Shamir fermions. The prediction that the mass parameter gets only multiplicative renormalization was tested and verified. (orig.)
NASA SPoRT Initialization Datasets for Local Model Runs in the Environmental Modeling System
Case, Jonathan L.; LaFontaine, Frank J.; Molthan, Andrew L.; Carcione, Brian; Wood, Lance; Maloney, Joseph; Estupinan, Jeral; Medlin, Jeffrey M.; Blottman, Peter; Rozumalski, Robert A.
2011-01-01
The NASA Short-term Prediction Research and Transition (SPoRT) Center has developed several products for its National Weather Service (NWS) partners that can be used to initialize local model runs within the Weather Research and Forecasting (WRF) Environmental Modeling System (EMS). These real-time datasets consist of surface-based information updated at least once per day, and produced in a composite or gridded product that is easily incorporated into the WRF EMS. The primary goal for making these NASA datasets available to the WRF EMS community is to provide timely and high-quality information at a spatial resolution comparable to that used in the local model configurations (i.e., convection-allowing scales). The current suite of SPoRT products supported in the WRF EMS includes a Sea Surface Temperature (SST) composite, a Great Lakes sea-ice extent, a Greenness Vegetation Fraction (GVF) composite, and Land Information System (LIS) gridded output. The SPoRT SST composite is a blend of primarily the Moderate Resolution Imaging Spectroradiometer (MODIS) infrared and Advanced Microwave Scanning Radiometer for Earth Observing System data for non-precipitation coverage over the oceans at 2-km resolution. The composite includes a special lake surface temperature analysis over the Great Lakes using contributions from the Remote Sensing Systems temperature data. The Great Lakes Environmental Research Laboratory Ice Percentage product is used to create a sea-ice mask in the SPoRT SST composite. The sea-ice mask is produced daily (in-season) at 1.8-km resolution and identifies ice percentage from 0 to 100% in 10% increments, with values above 90% flagged as ice.
Stochastic models: theory and simulation.
Energy Technology Data Exchange (ETDEWEB)
Field, Richard V., Jr.
2008-03-01
Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
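A minimal sketch of the sampling idea (illustrative, not an algorithm from the report): generate independent sample paths of a simple stochastic-process model, here an Ornstein-Uhlenbeck process via its exact one-step transition, which could then serve as random inputs to a deterministic simulation code.

```python
import numpy as np

theta, sigma, dt = 1.5, 0.8, 0.01        # mean-reversion rate, noise scale, step
n_steps, n_paths = 2000, 500
rng = np.random.default_rng(42)

x = np.zeros(n_paths)                    # start all paths at the mean (0)
decay = np.exp(-theta * dt)
# exact discretization of dX = -theta X dt + sigma dW
step_std = np.sqrt(sigma**2 / (2.0 * theta) * (1.0 - decay**2))
for _ in range(n_steps):
    x = decay * x + step_std * rng.standard_normal(n_paths)

# after many steps the ensemble reaches the stationary distribution,
# whose variance is sigma^2 / (2 theta)
stationary_var = sigma**2 / (2.0 * theta)
sample_var = float(x.var())
```

Checking the ensemble statistics against known closed-form moments, as done here with the stationary variance, is the usual sanity test for any such sample generator.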
Plasma modelling and numerical simulation
International Nuclear Information System (INIS)
Van Dijk, J; Kroesen, G M W; Bogaerts, A
2009-01-01
Plasma modelling is an exciting subject in which virtually all physical disciplines are represented. Plasma models combine the electromagnetic, statistical and fluid dynamical theories that have their roots in the 19th century with the modern insights concerning the structure of matter that were developed throughout the 20th century. The present cluster issue consists of 20 invited contributions, which are representative of the state of the art in plasma modelling and numerical simulation. These contributions provide an in-depth discussion of the major theories and modelling and simulation strategies, and their applications to contemporary plasma-based technologies. In this editorial review, we introduce and complement those papers by providing a bird's eye perspective on plasma modelling and discussing the historical context in which it has surfaced. (editorial review)
The long-run forecasting of energy prices using the model of shifting trend
International Nuclear Information System (INIS)
Radchenko, Stanislav
2005-01-01
Developing models for accurate long-term energy price forecasting is an important problem because these forecasts should be useful in determining both supply and demand of energy. On the supply side, long-term forecasts determine investment decisions of energy-related companies. On the demand side, investments in physical capital and durable goods depend on price forecasts of a particular energy type. Forecasting long-run trend movements in energy prices is very important on the macroeconomic level for several developing countries because energy prices have large impacts on their real output, the balance of payments, fiscal policy, etc. Pindyck (1999) argues that the dynamics of real energy prices is mean-reverting to trend lines with slopes and levels that are shifting unpredictably over time. The hypothesis of shifting long-term trend lines was statistically tested by Benard et al. (2004). The authors find statistically significant instabilities for coal and natural gas prices. I continue the research of energy prices in the framework of continuously shifting levels and slopes of trend lines started by Pindyck (1999). The examined model offers both a parsimonious approach and a perspective on the developments in energy markets. Using the model of depletable resource production, Pindyck (1999) argued that the forecast of energy prices in the model is based on the long-run total marginal cost. Because the model of a shifting trend is based on competitive behavior, one may examine deviations of oil producers from competitive behavior by studying the difference between actual prices and long-term forecasts. To construct the long-run forecasts (10-year-ahead and 15-year-ahead) of energy prices, I modify the univariate shifting trends model of Pindyck (1999). I relax some assumptions on model parameters and the assumption of a white noise error term, and propose a new Bayesian approach utilizing a Gibbs sampling algorithm to estimate the model with autocorrelation.
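The shifting-trend idea can be caricatured in a few lines. The dynamics and coefficients below are invented for intuition only; they are neither Pindyck's model nor the Bayesian estimator described above.

```python
import numpy as np

# Toy shifting-trend price process: the (log) price mean-reverts toward a
# trend line whose level and slope themselves drift as slow random walks.
rng = np.random.default_rng(7)
T = 200
level, slope = 3.0, 0.01
prices = np.empty(T)
p = level
for t in range(T):
    level += 0.02 * rng.standard_normal()    # unpredictable level shifts
    slope += 0.001 * rng.standard_normal()   # unpredictable slope shifts
    trend = level + slope * t
    p += 0.3 * (trend - p) + 0.05 * rng.standard_normal()  # mean reversion
    prices[t] = p

# a naive long-run forecast extrapolates the current trend-line estimate
forecast_10 = level + slope * (T + 10)
```

In this caricature, the long-run forecast tracks wherever the trend line currently sits, which is why unpredictable trend shifts dominate long-horizon forecast error.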
Biosensors for EVA: Muscle Oxygen and pH During Walking, Running and Simulated Reduced Gravity
Lee, S. M. C.; Ellerby, G.; Scott, P.; Stroud, L.; Norcross, J.; Pesholov, B.; Zou, F.; Gernhardt, M.; Soller, B.
2009-01-01
During lunar excursions in the EVA suit, real-time measurement of metabolic rate is required to manage consumables and guide activities to ensure safe return to the base. Metabolic rate, or oxygen consumption (VO2), is normally measured from pulmonary parameters but cannot be determined with standard techniques in the oxygen-rich environment of a spacesuit. Our group developed novel near infrared spectroscopic (NIRS) methods to calculate muscle oxygen saturation (SmO2), hematocrit, and pH, and we recently demonstrated that we can use our NIRS sensor to measure VO2 on the leg during cycling. Our NSBRI-funded project is looking to extend this methodology to activities which more appropriately represent EVA, such as walking and running, and to better understand the factors that determine the metabolic cost of exercise in both normal and lunar gravity. Our 4-year project specifically addresses two risks: ExMC 4.18, lack of adequate biomedical monitoring capability for Constellation EVA suits, and the EPSP risk of compromised EVA performance and crew health due to inadequate EVA suit systems.
Model for Simulation Atmospheric Turbulence
DEFF Research Database (Denmark)
Lundtang Petersen, Erik
1976-01-01
A method that produces realistic simulations of atmospheric turbulence is developed and analyzed. The procedure makes use of a generalized spectral analysis, often called a proper orthogonal decomposition or the Karhunen-Loève expansion. A set of criteria, emphasizing a realistic appearance ... eigenfunctions and estimates of the distributions of the corresponding expansion coefficients. The simulation method utilizes the eigenfunction expansion procedure to produce preliminary time histories of the three velocity components simultaneously. As a final step, a spectral shaping procedure is then applied. ... The method is unique in modeling the three velocity components simultaneously, and it is found that important cross-statistical features are reasonably well-behaved. It is concluded that the model provides a practical, operational simulator of atmospheric turbulence.
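The final spectral-shaping step can be sketched generically (the construction below is an assumption for illustration, not the paper's procedure): superpose harmonics with random phases and amplitudes set by a prescribed target spectrum, so the synthetic time history has the desired second-order statistics.

```python
import numpy as np

rng = np.random.default_rng(3)
n, dt = 2048, 0.1
t = np.arange(n) * dt
freqs = np.fft.rfftfreq(n, d=dt)[1:]              # skip the 0 Hz (mean) bin
# von Karman-like target spectrum shape (illustrative parameters)
S = (1.0 + (freqs / 0.5) ** 2) ** (-5.0 / 6.0)
S /= S.sum()                                      # normalize to unit variance

phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
# each harmonic contributes variance S_k (a cosine has variance amp^2 / 2)
series = np.sum(np.sqrt(2.0 * S)[:, None] *
                np.cos(2.0 * np.pi * freqs[:, None] * t[None, :] +
                       phases[:, None]), axis=0)
```

Because the harmonics are orthogonal over the record, the sample variance of `series` matches the integral of the target spectrum by construction, which is the property a spectral-shaping stage is meant to enforce.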
Constraints on running vacuum model with H(z) and fσ₈
Energy Technology Data Exchange (ETDEWEB)
Geng, Chao-Qiang [Chongqing University of Posts and Telecommunications, Chongqing, 400065 (China); Lee, Chung-Chi [DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Yin, Lu, E-mail: geng@phys.nthu.edu.tw, E-mail: lee.chungchi16@gmail.com, E-mail: yinlumail@foxmail.com [Department of Physics, National Tsing Hua University, Hsinchu, 300 Taiwan (China)
2017-08-01
We examine the running vacuum model with Λ(H) = 3νH² + Λ₀, where ν is the model parameter and Λ₀ is the cosmological constant. From the data of the cosmic microwave background radiation, weak lensing and baryon acoustic oscillation along with the time dependent Hubble parameter H(z) and weighted linear growth f(z)σ₈(z) measurements, we find that ν = (1.37^{+0.72}_{−0.95}) × 10⁻⁴ with the best fitted χ² value slightly smaller than that in the ΛCDM model.
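A sketch of the background expansion in a running vacuum model, under the common assumption that the vacuum exchanges energy with matter, which then dilutes as a^(-3(1-ν)). The parameter values are illustrative except ν, taken near the quoted best fit; this is a standard textbook-style solution, not necessarily the exact parameterization used in the paper.

```python
import numpy as np

def hubble_rvm(z, H0=67.0, Omega_m=0.30, nu=1.37e-4):
    """H(z) in km/s/Mpc for Lambda(H) = 3 nu H^2 + Lambda_0 (flat universe)."""
    z = np.asarray(z, dtype=float)
    h2 = 1.0 + (Omega_m / (1.0 - nu)) * ((1.0 + z) ** (3.0 * (1.0 - nu)) - 1.0)
    return H0 * np.sqrt(h2)

def hubble_lcdm(z, H0=67.0, Omega_m=0.30):
    """Flat LambdaCDM expansion history, for comparison."""
    z = np.asarray(z, dtype=float)
    return H0 * np.sqrt(1.0 - Omega_m + Omega_m * (1.0 + z) ** 3)
```

With ν ~ 10⁻⁴ the two histories differ only at the sub-percent level, which is why precise H(z) and growth data are needed to constrain ν.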
International Nuclear Information System (INIS)
Amend, Katharina; Klein, Markus
2017-01-01
The paper presents a three-dimensional numerical simulation for water running down inclined surfaces using OpenFOAM. This research project aims at developing a CFD model to describe the run-down behavior of liquids and the resulting wash-down of fission products on surfaces in the reactor containment. An empirical contact angle model with wetted history is introduced, as well as a filtered randomized initial contact angle field. Simulation results are in good agreement with the experiments.
A numerical study of tsunami wave impact and run-up on coastal cliffs using a CIP-based model
Zhao, Xizeng; Chen, Yong; Huang, Zhenhua; Hu, Zijun; Gao, Yangyang
2017-05-01
There is a general lack of understanding of tsunami wave interaction with complex geographies, especially the process of inundation. Numerical simulations are performed to understand the effects of several factors on tsunami wave impact and run-up in the presence of gentle submarine slopes and coastal cliffs, using an in-house code, a constrained interpolation profile (CIP)-based model. The model employs a high-order finite difference method, the CIP method, as the flow solver; utilizes a VOF-type method, the tangent of hyperbola for interface capturing/slope weighting (THINC/SW) scheme, to capture the free surface; and treats the solid boundary by an immersed boundary method. A series of incident waves are arranged to interact with varying coastal geographies. Numerical results are compared with experimental data and good agreement is obtained. The influences of the gentle submarine slope, the coastal cliff, and the incident wave height are discussed. It is found that the tsunami amplification factor, which varies with the incident wave, is affected by the gradient of the cliff slope, with a critical value of about 45°. The run-up on a toe-erosion cliff is smaller than that on a normal cliff. The run-up is also related to the length of the gentle submarine slope, with a critical value of about 2.292 m in the present model for most cases. The impact pressure on the cliff is extremely large and concentrated, and the backflow effect is non-negligible. These results are accurate and helpful for inverting the tsunami source and forecasting disasters.
Validation process of simulation model
International Nuclear Information System (INIS)
San Isidro, M. J.
1998-01-01
A methodology for the empirical validation of detailed simulation models is presented. This kind of validation is always tied to an experimental case, and it has a residual character: conclusions are based on comparisons between simulated outputs and experimental measurements. The methodology guides the detection of failures of the simulation model, and it can also be used as a guide in the design of subsequent experiments. Three steps can be clearly differentiated. Sensitivity analysis, which can be performed with DSA (differential sensitivity analysis) and with MCSA (Monte-Carlo sensitivity analysis). Search for the optimal domains of the input parameters; a procedure based on Monte-Carlo methods and cluster techniques has been developed to find these domains. Residual analysis, performed in both the time domain and the frequency domain using correlation analysis and spectral analysis. As an application of this methodology, the validation of a thermal simulation model of buildings is presented, studying the behavior of building components in a test cell of LECE at CIEMAT (Spain). (Author) 17 refs
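A minimal Monte-Carlo sensitivity analysis (MCSA) sketch on a stand-in "thermal model"; the real study applies this to a detailed building simulation, and both the model and the influence of each parameter below are assumptions for illustration. Inputs are sampled, the model is run for each sample, and input-output correlations rank parameter influence.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
conductivity = rng.uniform(0.5, 2.0, n)    # assumed strongly influential input
emissivity = rng.uniform(0.8, 0.95, n)     # assumed weakly influential input

def toy_model(k, e):
    # stand-in response: an indoor temperature dominated by conduction,
    # plus unmodeled noise
    return 20.0 + 5.0 * k + 0.5 * e + 0.1 * rng.standard_normal(n)

temp = toy_model(conductivity, emissivity)
corr_k = float(np.corrcoef(conductivity, temp)[0, 1])
corr_e = float(np.corrcoef(emissivity, temp)[0, 1])
```

Ranking |corr| per input is the crudest form of MCSA; the correlation-based ranking correctly flags conductivity as the parameter worth constraining first.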
Modeling and Simulation for Safeguards
International Nuclear Information System (INIS)
Swinhoe, Martyn T.
2012-01-01
The purpose of this talk is to give an overview of the role of modeling and simulation in Safeguards R and D and to introduce (some of) the tools used. Some definitions are: (1) Modeling - the representation, often mathematical, of a process, concept, or operation of a system, often implemented by a computer program; (2) Simulation - the representation of the behavior or characteristics of one system through the use of another system, especially a computer program designed for the purpose; and (3) Safeguards - the timely detection of diversion of significant quantities of nuclear material. The roles of modeling and simulation are: (1) calculating amounts of material (plant modeling); (2) calculating signatures of nuclear material (source terms); and (3) modeling detector performance (radiation transport and detection). Plant modeling software (e.g. FACSIM) gives the flows and amounts of material stored at all parts of the process. In safeguards this allows us to calculate the expected uncertainty of the mass and evaluate the expected MUF. We can then determine the measurement accuracy required to achieve a certain performance.
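The MUF bookkeeping behind that last point can be sketched directly (all numbers invented): material unaccounted for over a balance period, and its expected uncertainty when the measurement errors of the four balance terms are independent.

```python
import math

# Invented material balance for one period (kg of nuclear material)
beginning_inventory = 120.0
receipts = 45.0
shipments = 40.0
ending_inventory = 124.6

# MUF = (beginning + receipts) - (shipments + ending)
muf = beginning_inventory + receipts - shipments - ending_inventory

# If each term has an independent measurement error with these standard
# deviations, the MUF uncertainty adds in quadrature.
sigmas = [0.30, 0.10, 0.10, 0.30]
sigma_muf = math.sqrt(sum(s * s for s in sigmas))
```

Comparing `muf` against a multiple of `sigma_muf` is what turns the balance into a detection test, and it shows why tightening the largest component uncertainties (here the inventories) pays off most.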
Modeling and Simulation of Nanoindentation
Huang, Sixie; Zhou, Caizhi
2017-11-01
Nanoindentation is a hardness test method applied to small volumes of material which can provide some unique effects and spark many related research activities. To fully understand the phenomena observed during nanoindentation tests, modeling and simulation methods have been developed to predict the mechanical response of materials during nanoindentation. However, challenges remain with those computational approaches, because of their length scale, predictive capability, and accuracy. This article reviews recent progress and challenges for modeling and simulation of nanoindentation, including an overview of molecular dynamics, the quasicontinuum method, discrete dislocation dynamics, and the crystal plasticity finite element method, and discusses how to integrate multiscale modeling approaches seamlessly with experimental studies to understand the length-scale effects and microstructure evolution during nanoindentation tests, creating a unique opportunity to establish new calibration procedures for the nanoindentation technique.
Yan, Xuedong; Liu, Yang; Xu, Yongcun
2015-01-01
Drivers' incorrect decisions of crossing signalized intersections at the onset of the yellow change may lead to red light running (RLR), and RLR crashes result in substantial numbers of severe injuries and property damage. In recent years, some Intelligent Transport System (ITS) concepts have focused on reducing RLR by alerting drivers that they are about to violate the signal. The objective of this study is to conduct an experimental investigation on the effectiveness of the red light violation warning system using a voice message. In this study, the prototype concept of the RLR audio warning system was modeled and tested in a high-fidelity driving simulator. According to the concept, when a vehicle is approaching an intersection at the onset of yellow and the time to the intersection is longer than the yellow interval, the in-vehicle warning system can activate the following audio message "The red light is impending. Please decelerate!" The intent of the warning design is to encourage drivers who cannot clear an intersection during the yellow change interval to stop at the intersection. The experimental results showed that the warning message could decrease red light running violations by 84.3 percent. Based on the logistic regression analyses, drivers without a warning were about 86 times more likely to make go decisions at the onset of yellow and about 15 times more likely to run red lights than those with a warning. Additionally, it was found that the audio warning message could significantly reduce RLR severity because the RLR drivers' red-entry times without a warning were longer than those with a warning. This driving simulator study showed a promising effect of the audio in-vehicle warning message on reducing RLR violations and crashes. It is worthwhile to further develop the proposed technology in field applications.
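A back-of-envelope check (with invented counts, not the study's data) of how an odds ratio like the reported "about 15 times more likely to run red lights" is computed from outcome counts in the warning and no-warning groups:

```python
def odds_ratio(events_a, n_a, events_b, n_b):
    """Odds of the event in group A relative to group B."""
    odds_a = events_a / (n_a - events_a)
    odds_b = events_b / (n_b - events_b)
    return odds_a / odds_b

# e.g. 30 RLR violations in 100 approaches without the warning,
# versus 4 in 100 approaches with it:
ratio = odds_ratio(30, 100, 4, 100)   # (30/70) / (4/96)
```

The same quantity falls out of a fitted logistic regression as exp(coefficient) on the warning indicator, which is presumably how the study's reported odds ratios were obtained.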
arXiv Simulation Study of an LWFA-based Electron Injector for AWAKE Run 2
Williamson, B.; Doebert, S.; Karsch, S.; Muggli, P.
The AWAKE experiment aims to demonstrate preservation of injected electron beam quality during acceleration in proton-driven plasma waves. The short bunch duration required to correctly load the wakefield is challenging to meet with the current electron injector system, given the space available to the beamline. An LWFA readily provides short-duration electron beams with sufficient charge from a compact design, and provides a scalable option for future electron acceleration experiments at AWAKE. Simulations of a shock-front injected LWFA demonstrate a 43 TW laser system would be sufficient to produce the required charge over a range of energies beyond 100 MeV. LWFA beams typically have high peak current and large divergence on exiting their native plasmas, and optimisation of bunch parameters before injection into the proton-driven wakefields is required. Compact beam transport solutions are discussed.
The 2017 Xe run at CERN Linac3: measurements and beam dynamics simulations
Benedetti, Stefano; Kuchler, Detlef; Lombardi, Alessandra; Wenander, Fredrik John Carl; Toivanen, Ville Aleksi; CERN. Geneva. ATS Department
2018-01-01
At CERN, quark-gluon plasma and fixed-target ion experiments are performed thanks to the Heavy-Ion Facility, composed of different accelerators. The starting point is CERN Linac3, which delivers 4.2 MeV/u ion beams to the Low Energy Ion Ring (LEIR). In 2017 Linac3 accelerated Xe instead of the more usual Pb. Machine development (MD) time was allocated to adapt the accelerator to the new ion species. This article summarizes the measurements performed during the MD time allocated to characterize the line from the source to the filtering section. A parallel effort was devoted to matching those measurements to the beam dynamics simulations, and the second part of the article highlights the results achieved in this regard. Thanks to the improved understanding of the machine's critical areas, a list of possible improvements is proposed at the end.
Assessment of Molecular Modeling & Simulation
Energy Technology Data Exchange (ETDEWEB)
None
2002-01-03
This report reviews the development and applications of molecular and materials modeling in Europe and Japan in comparison to those in the United States. Topics covered include computational quantum chemistry, molecular simulations by molecular dynamics and Monte Carlo methods, mesoscale modeling of material domains, molecular-structure/macroscale property correlations like QSARs and QSPRs, and related information technologies like informatics and special-purpose molecular-modeling computers. The panel's findings include the following: The United States leads this field in many scientific areas. However, Canada has particular strengths in DFT methods and homogeneous catalysis; Europe in heterogeneous catalysis, mesoscale, and materials modeling; and Japan in materials modeling and special-purpose computing. Major government-industry initiatives are underway in Europe and Japan, notably in multi-scale materials modeling and in development of chemistry-capable ab-initio molecular dynamics codes.
Houghton, Laurence A; Dawson, Brian T; Rubenson, Jonas
2013-04-01
The aim of this study was to determine whether intermittent shuttle running times (during a prolonged, simulated cricket batting innings) and Achilles tendon properties were affected by 8 weeks of plyometric training (PLYO, n = 7) or a normal preseason (control [CON], n = 8). Turn (5-0-5-m agility) and 5-m sprint times were assessed using timing gates. Achilles tendon properties were determined using dynamometry, ultrasonography, and musculoskeletal geometry. Countermovement and squat jump heights were also assessed before and after training. Mean 5-0-5-m turn time did not significantly change in PLYO or CON (pre vs. post: 2.25 ± 0.08 vs. 2.22 ± 0.07 and 2.26 ± 0.06 vs. 2.25 ± 0.08 seconds, respectively). Mean 5-m sprint time did not significantly change in PLYO or CON (pre vs. post: 0.85 ± 0.02 vs. 0.84 ± 0.02 and 0.85 ± 0.03 vs. 0.85 ± 0.02 seconds, respectively). However, inferences from the smallest worthwhile change suggested that PLYO had a 51-72% chance of positive effects but only a 6-15% chance of detrimental effects on shuttle running times. Jump heights increased only in PLYO (9.1-11.0%, p < 0.05). Achilles tendon mechanical properties (force, stiffness, elastic energy, strain, modulus) did not change in PLYO or CON. However, Achilles tendon cross-sectional area increased in PLYO (pre vs. post: 70 ± 7 vs. 79 ± 8 mm², p < 0.050). In conclusion, plyometric training had possible benefits on intermittent shuttle running times and improved jump performance. Also, plyometric training increased tendon cross-sectional area, but further investigation is required to determine whether this translates to decreased injury risk.
NRTA simulation by modeling PFPF
International Nuclear Information System (INIS)
Asano, Takashi; Fujiwara, Shigeo; Takahashi, Saburo; Shibata, Junichi; Totsu, Noriko
2003-01-01
In PFPF, an NRTA system has been applied since 1991. Evaluation of the facility material accountancy data provided by the operator at each IIV has confirmed that no significant MUF was generated. At a throughput of PFPF scale, MUF can be evaluated with a sufficient detection probability by the present NRTA evaluation manner. However, as throughput increases, the uncertainty of the material accountancy will increase and the detection probability will decline. The relationship between increasing throughput and declining detection probability, and the maximum throughput at which the following measures still give a sufficient detection probability, were evaluated by simulation of the NRTA system. The simulation was performed by modeling PFPF. The measures for increasing the detection probability are: shortening the evaluation interval, and segmenting the evaluation area. This report shows the results of these simulations. (author)
SIMULATION RESULTS OF RUNNING THE AGS MMPS, BY STORING ENERGY IN CAPACITOR BANKS.
Energy Technology Data Exchange (ETDEWEB)
MARNERIS, I.
2006-09-01
The Brookhaven AGS is a strong-focusing accelerator which is used to accelerate protons and various heavy ion species to an equivalent maximum proton energy of 29 GeV. The AGS Main Magnet Power Supply (MMPS) is a thyristor-controlled supply rated at 5500 Amps, +/-9000 Volts. The peak magnet power is 49.5 MW. The power supply is fed from a motor/generator manufactured by Siemens. The motor is rated at 9 MW, with a 3-phase, 13.8 kV, 60 Hz input. The generator is rated at 50 MVA; its output voltage is 3-phase, 7500 Volts. Thus the peak power requirements come from the stored energy in the rotor of the motor/generator. The rotor changes speed by about +/-2.5% of its nominal speed of 1200 revolutions per minute. The reason the power supply is powered by the generator is that the local power company (LIPA) cannot sustain power swings of +/-50 MW in 0.5 sec if the power supply were interfaced directly with the AC lines. The motor/generator is about 45 years old and Siemens will not be manufacturing similar machines in the future. As a result we are looking at different ways of storing energy and being able to utilize it for our application. This paper will present simulations of a power supply where energy is stored in capacitor banks. The simulation program used is called PSIM Version 6.1. The control system of the power supply will also be presented. The average power from LIPA into the power supply will be kept constant during the +/-50 MW pulsing of the magnets. The reactive power will also be kept constant, below 1.5 MVAR. Waveforms will be presented.
Radius stabilization and brane running in the Randall-Sundrum type 1 model
International Nuclear Information System (INIS)
Brevik, Iver; Ghoroku, Kazuo; Yahiro, Masanobu
2004-01-01
We study the effective potential of a scalar field based on the 5D gauged supergravity for the Randall-Sundrum type 1 brane model in terms of the brane running method. The scalar couples to the brane such that the Bogomolnyi-Prasad-Sommerfield conditions are satisfied for the bulk configuration. The resulting effective potential implies that the interbrane distance is undetermined in this case, and we need a small Bogomolnyi-Prasad-Sommerfield breaking term on the brane to stabilize the interbrane distance at a finite length. We also discuss the relationship to the Goldberger-Wise model.
Debaere, Sofie; Delecluse, Christophe; Aerenhouts, Dirk; Hagman, Friso; Jonkers, Ilse
2015-01-01
The aim of this study was to relate the contribution of lower limb joint moments and individual muscle forces to the body centre of mass (COM) vertical and horizontal acceleration during the initial two steps of sprint running. Start performance of seven well-trained sprinters was recorded using an optoelectronic motion analysis system and two force plates. Participant-specific torque-driven and muscle-driven simulations were conducted in OpenSim to quantify, respectively, the contributions of the individual joints and muscles to body propulsion and lift. The ankle is the major contributor to both actions during the first two stances, with an even larger contribution in the second compared to the first stance. Biarticular gastrocnemius is the main muscle contributor to propulsion in the second stance. The contribution of the hip and knee depends highly on the position of the athlete: During the first stance, where the athlete runs in a forward bending position, the knee contributes primarily to body lift and the hip contributes to propulsion and body lift. In conclusion, a small increase in ankle power generation seems to affect the body COM acceleration, whereas increases in hip and knee power generation tend to affect acceleration less.
Repository simulation model: Final report
International Nuclear Information System (INIS)
1988-03-01
This report documents the application of computer simulation for the design analysis of the nuclear waste repository's waste handling and packaging operations. The Salt Repository Simulation Model was used to evaluate design alternatives during the conceptual design phase of the Salt Repository Project. Code development and verification were performed by the Office of Nuclear Waste Isolation (ONWI). The focus of this report is to relate the experience gained during the development and application of the Salt Repository Simulation Model to future repository design phases. Design of the repository's waste handling and packaging systems will require sophisticated analysis tools to evaluate complex operational and logistical design alternatives. Selection of these design alternatives in the Advanced Conceptual Design (ACD) and License Application Design (LAD) phases must be supported by analysis to demonstrate that the repository design will cost-effectively meet DOE's mandated emplacement schedule and that uncertainties in the performance of the repository's systems have been objectively evaluated. Computer simulation of repository operations will provide future repository designers with data and insights that no other analytical form of analysis can provide. 6 refs., 10 figs.
Weigel, Martin
2011-09-01
Over the last couple of years it has been realized that the vast computational power of graphics processing units (GPUs) could be harvested for purposes other than the video game industry. This power, which at least nominally exceeds that of current CPUs by large factors, results from the relative simplicity of the GPU architectures as compared to CPUs, combined with a large number of parallel processing units on a single chip. To benefit from this setup for general computing purposes, the problems at hand need to be prepared in a way to profit from the inherent parallelism and hierarchical structure of memory accesses. In this contribution I discuss the performance potential for simulating spin models, such as the Ising model, on GPU as compared to conventional simulations on CPU.
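The abstract above discusses the performance potential of simulating spin models such as the Ising model on GPUs. As a minimal illustration of the update being parallelized, here is a plain CPU Metropolis sweep for the 2D Ising model (a generic sketch, not the author's GPU code); on a GPU the same acceptance test is applied simultaneously to one colour of a checkerboard decomposition, since each spin interacts only with its four neighbours.

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep over a 2D Ising lattice with periodic boundaries.

    On a GPU this loop becomes a parallel kernel over one checkerboard
    sublattice, because a spin's acceptance test depends only on its
    four nearest neighbours.
    """
    L = spins.shape[0]
    for i in range(L):
        for j in range(L):
            # Energy change of flipping spin (i, j): dE = 2 * s_ij * sum(neighbours)
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nb
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] = -spins[i, j]
    return spins

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(16, 16))
for _ in range(50):
    metropolis_sweep(spins, beta=1.0, rng=rng)  # beta well above critical: ordering
m = abs(spins.mean())
```

The GPU speedups reported in such studies come from running many of these independent spin updates (and many independent replicas) concurrently.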
Standard for Models and Simulations
Steele, Martin J.
2016-01-01
This NASA Technical Standard establishes uniform practices in modeling and simulation to ensure essential requirements are applied to the design, development, and use of models and simulations (M&S), while ensuring acceptance criteria are defined by the program/project and approved by the responsible Technical Authority. It also provides an approved set of requirements, recommendations, and criteria with which M&S may be developed, accepted, and used in support of NASA activities. As the M&S disciplines employed and application areas involved are broad, the common aspects of M&S across all NASA activities are addressed. The discipline-specific details of a given M&S should be obtained from relevant recommended practices. The primary purpose is to reduce the risks associated with M&S-influenced decisions by ensuring the complete communication of the credibility of M&S results.
Energy Technology Data Exchange (ETDEWEB)
Nolan, M.; Lamont, A.; Chang, L.
1995-12-12
This document describes the implementation of the Simulation Builder developed as part of the Enterprise Modeling and Simulation (EM&S) portion of the Demand Activated Manufacturing Architecture (DAMA) project. The Simulation Builder software allows users to develop simulation models using pre-defined modules from a library. The Simulation Builder provides the machinery to allow the modules to link together and communicate information during the simulation run. This report describes the basic capabilities and structure of the Simulation Builder to assist a user in reviewing and using the code. It also describes the basic steps to follow when developing modules to take advantage of the capabilities provided by the Simulation Builder. The Simulation Builder software is written in C++. The discussion in this report assumes a sound understanding of the C++ language. Although this report describes the steps to follow when using the Simulation Builder, it is not intended to be a tutorial for a user unfamiliar with C++.
Do downscaled general circulation models reliably simulate historical climatic conditions?
Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight
2018-01-01
The accuracy of statistically downscaled (SD) general circulation model (GCM) simulations of monthly surface climate for historical conditions (1950–2005) was assessed for the conterminous United States (CONUS). The SD monthly precipitation (PPT) and temperature (TAVE) from 95 GCMs from phases 3 and 5 of the Coupled Model Intercomparison Project (CMIP3 and CMIP5) were used as inputs to a monthly water balance model (MWBM). Distributions of MWBM input (PPT and TAVE) and output [runoff (RUN)] variables derived from gridded station data (GSD) and historical SD climate were compared using the Kolmogorov–Smirnov (KS) test. For all three variables considered, the KS test results showed that variables simulated using CMIP5 generally are more reliable than those derived from CMIP3, likely due to improvements in PPT simulations. At most locations across the CONUS, the largest differences between GSD and SD PPT and RUN occurred in the lowest part of the distributions (i.e., low-flow RUN and low-magnitude PPT). Results indicate that for the majority of the CONUS, there are downscaled GCMs that can reliably simulate historical climatic conditions. But, in some geographic locations, none of the SD GCMs replicated historical conditions for two of the three variables (PPT and RUN) based on the KS test, with a significance level of 0.05. In these locations, improved GCM simulations of PPT are needed to more reliably estimate components of the hydrologic cycle. Simple metrics and statistical tests, such as those described here, can provide an initial set of criteria to help simplify GCM selection.
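The record above compares gridded-station and downscaled distributions with a KS test at a 0.05 significance level. A sketch of that kind of comparison using the two-sample KS test, with synthetic gamma-distributed "precipitation" standing in for the actual GSD and SD data (the distribution shapes and sample sizes below are illustrative assumptions, not CMIP values):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical observed and model-simulated monthly precipitation (mm);
# 672 months spans 1950-2005. Gamma parameters are illustrative only.
observed = rng.gamma(shape=2.0, scale=40.0, size=672)
simulated = rng.gamma(shape=2.2, scale=38.0, size=672)

# Two-sample KS test: are the two empirical distributions distinguishable?
stat, p = stats.ks_2samp(observed, simulated)

# "Reliable" in the sense used above: fail to reject at the 5% level.
reliable = p >= 0.05
```

Repeating this per grid cell and per GCM gives a simple reliability screen of the kind the authors propose for simplifying GCM selection.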
Spontaneous appetence for wheel-running: a model of dependency on physical activity in rat.
Ferreira, Anthony; Lamarque, Stéphanie; Boyer, Patrice; Perez-Diaz, Fernando; Jouvent, Roland; Cohen-Salmon, Charles
2006-12-01
Following human observations of a syndrome of physical activity dependence and its consequences, we examined whether running activity in a free-activity paradigm, in which rats had free access to an activity wheel, may provide a valuable animal model of physical activity dependence and, more generally, of behavioral dependence. The pertinence of reactivity to novelty, a well-known predictor of pharmacological dependence, was also tested. Given the close linkage observed in humans between physical activity and drug use and abuse, the influence of free activity in activity wheels on reactivity to amphetamine injection and reactivity to novelty was also assessed. It appeared that (1) free access to a wheel may be used as a valuable model of physical activity addiction; (2) two populations differing in activity amount also differed in dependence on wheel-running; (3) reactivity to novelty did not appear to be a predictive factor for physical activity dependence; (4) activity modified novelty reactivity; and (5) subjects who exhibited a high appetence for wheel-running presented a strong reactivity to amphetamine. These results propose a model of dependency on physical activity without any pharmacological intervention, and demonstrate the existence of individual differences in the development of this addiction. In addition, these data highlight the development of a likely vulnerability to pharmacological addiction after intense and sustained physical activity, as also described in humans. This model could therefore prove pertinent for studying behavioral dependencies and the underlying neurobiological mechanisms. These results may influence the way psychiatrists view behavioral dependencies and phenomena such as doping in sport or addiction to sport itself.
Tutorial: Parallel Computing of Simulation Models for Risk Analysis.
Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D
2016-10-01
Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
Statistical 3D damage accumulation model for ion implant simulators
Hernandez-Mangas, J M; Enriquez, L E; Bailon, L; Barbolla, J; Jaraiz, M
2003-01-01
A statistical 3D damage accumulation model, based on the modified Kinchin-Pease formula, for ion implant simulation has been included in our physically based ion implantation code. It has only one fitting parameter for electronic stopping and uses 3D electron density distributions for different types of targets including compound semiconductors. Also, a statistical noise reduction mechanism based on the dose division is used. The model has been adapted to be run under parallel execution in order to speed up the calculation in 3D structures. Sequential ion implantation has been modelled including previous damage profiles. It can also simulate the implantation of molecular and cluster projectiles. Comparisons of simulated doping profiles with experimental SIMS profiles are presented. Also comparisons between simulated amorphization and experimental RBS profiles are shown. An analysis of sequential versus parallel processing is provided.
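The damage model above is based on the modified Kinchin-Pease formula. A minimal sketch of that formula as commonly stated (displacements proportional to 0.8 times the damage energy over twice the displacement threshold, with the usual low-energy regimes); the 15 eV threshold below is an assumed illustrative value, not a parameter from the paper:

```python
def kinchin_pease_displacements(damage_energy_ev, e_d_ev=15.0):
    """Modified Kinchin-Pease estimate of displacements from one cascade.

    damage_energy_ev: energy deposited in nuclear collisions (eV).
    e_d_ev: displacement threshold energy; 15 eV is an assumed,
    illustrative value (typical order for silicon), not from the paper.
    """
    if damage_energy_ev < e_d_ev:
        return 0.0                                   # no stable displacement
    if damage_energy_ev < 2.0 * e_d_ev / 0.8:
        return 1.0                                   # single Frenkel pair
    return 0.8 * damage_energy_ev / (2.0 * e_d_ev)   # cascade regime
```

Accumulating such per-cascade estimates into a 3D histogram, with a noise-reduction scheme like the dose division mentioned above, is the kind of bookkeeping a statistical damage model performs.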
Statistical 3D damage accumulation model for ion implant simulators
International Nuclear Information System (INIS)
Hernandez-Mangas, J.M.; Lazaro, J.; Enriquez, L.; Bailon, L.; Barbolla, J.; Jaraiz, M.
2003-01-01
A statistical 3D damage accumulation model, based on the modified Kinchin-Pease formula, for ion implant simulation has been included in our physically based ion implantation code. It has only one fitting parameter for electronic stopping and uses 3D electron density distributions for different types of targets including compound semiconductors. Also, a statistical noise reduction mechanism based on the dose division is used. The model has been adapted to be run under parallel execution in order to speed up the calculation in 3D structures. Sequential ion implantation has been modelled including previous damage profiles. It can also simulate the implantation of molecular and cluster projectiles. Comparisons of simulated doping profiles with experimental SIMS profiles are presented. Also comparisons between simulated amorphization and experimental RBS profiles are shown. An analysis of sequential versus parallel processing is provided.
Modelling of thermalhydraulics and reactor physics in simulators
International Nuclear Information System (INIS)
Miettinen, J.
1994-01-01
The evolution of thermalhydraulic analysis methods for analysis and simulator purposes has brought the thermalhydraulic models in the two application areas closer together. In large analysis codes like RELAP5, TRAC, CATHARE and ATHLET, accuracy in calculating complicated phenomena has been emphasized, but in spite of large development efforts many generic problems remain unsolved. For simulator purposes fast-running codes have been developed, and these include only limited assessment efforts. But these codes have more simulator-friendly features than the large codes, such as portability and a modular code structure. In this respect the simulator experiences with the SMABRE code are discussed. Both large analysis codes and special simulator codes have their advantages in simulator applications. The evolution of reactor physics calculation methods in simulator applications started from simple point-kinetics models. For analysis purposes, accurate 1-D and 3-D codes have been developed that are capable of handling fast and complicated transients. For simulator purposes the capability to simulate instruments has been emphasized, but the dynamic simulation capability has been less significant. The approaches to 3-dimensionality in simulators still require considerable development before the accuracy of the analysis codes is reached. (orig.) (8 refs., 2 figs., 2 tabs.)
A Network Contention Model for the Extreme-scale Simulator
Energy Technology Data Exchange (ETDEWEB)
Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL
2015-01-01
The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim that eliminates the shortcomings of the existing network modeling capabilities. The approach takes a different path to implementing network contention and bandwidth capacity modeling, using a less synchronous but sufficiently accurate model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overheads.
Lundstrom, Christopher John; Biltz, George R.; Snyder, Eric M.; Ingraham, Stacy Jean
2017-01-01
The purpose of this study was to compare metabolic variables during submaximal running as predictors of marathon performance. Running economy (RE) and respiratory exchange ratio (RER) data were gathered during a 30 min incremental treadmill run completed within 2 weeks prior to running a 42.2-km marathon. Paces during the treadmill run progressed every 5 min from 75-100% of 10-km race velocity. Variables at each stage were analyzed as predictors of relative marathon performance (RMP) in compe...
MACC regional multi-model ensemble simulations of birch pollen dispersion in Europe
Sofiev, M.; Berger, U.; Prank, M.; Vira, J.; Arteta, J.; Belmonte, J.; Bergmann, K.C.; Chéroux, F.; Elbern, H.; Friese, E.; Galan, C.; Gehrig, R.; Khvorostyanov, D.; Kranenburg, R.; Kumar, U.; Marécal, V.; Meleux, F.; Menut, L.; Pessi, A.M.; Robertson, L.; Ritenberga, O.; Rodinkova, V.; Saarto, A.; Segers, A.; Severova, E.; Sauliene, I.; Siljamo, P.; Steensen, B.M.; Teinemaa, E.; Thibaudon, M.; Peuch, V.H.
2015-01-01
This paper presents the first ensemble modelling experiment in relation to birch pollen in Europe. The seven-model European ensemble of MACC-ENS, tested in trial simulations over the flowering season of 2010, was run through the flowering season of 2013. The simulations have been compared with
Luna, Byron Quan; Remaître, Alexandre; van Asch, Theo; Malet, Jean-Philippe; van Westen, Cees
2010-05-01
Estimating the magnitude and intensity of rapid landslides like debris flows is fundamental to quantitatively evaluating the hazard at a specific location. Intensity varies along the travelled course of the flow and can be described by physical features such as deposited volume, velocity, flow height, impact forces and pressures. Dynamic run-out models are able to characterize the distribution of the material and its intensity, and to define the zone where the elements at risk will experience an impact. These models can provide valuable inputs for vulnerability and risk calculations. However, most dynamic run-out models assume a constant volume during the motion of the flow, ignoring the important role of material entrained along its path. Consequently, they neglect that the increase in volume enhances the mobility of the flow and can significantly influence the size of the potential impact area. An appropriate erosion mechanism needs to be established in the analysis of debris flows to improve the results of dynamic modeling and, consequently, the quantitative evaluation of risk. The objective is to present and test a simple 1D debris flow model with a material entrainment concept based on limit equilibrium considerations and the generation of excess pore water pressure through undrained loading of the in situ bed material. The debris flow propagation model is based on a one-dimensional finite difference solution of a depth-averaged form of the Navier-Stokes equations of fluid motion. The flow is treated as a laminar one-phase material whose behavior is controlled by a visco-plastic Coulomb-Bingham rheology. The model parameters are evaluated and the model performance is tested on a debris flow event that occurred in 2003 in the Faucon torrent (Southern French Alps).
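The propagation model described above is a 1D finite-difference solution of depth-averaged flow equations with entrainment. The full Coulomb-Bingham rheology is beyond a short sketch, but the role of entrainment in growing the flow volume can be illustrated with a toy upwind finite-difference continuity update (all velocities, rates and geometry below are invented illustrative values, not the authors' model):

```python
import numpy as np

def step(h, u, dx, dt, entrain_rate):
    """Upwind finite-difference update of depth-averaged continuity,
    dh/dt + d(h*u)/dx = E, with constant velocity u > 0 and a uniform
    entrainment rate E applied wherever the flow is present."""
    flux = h * u                                   # depth-averaged discharge
    dflux = np.zeros_like(h)
    dflux[1:] = (flux[1:] - flux[:-1]) / dx        # upwind difference (u > 0)
    h_new = h - dt * dflux
    h_new += dt * entrain_rate * (h_new > 0.01)    # bed erosion adds volume
    return np.maximum(h_new, 0.0)

dx, dt, u = 10.0, 0.5, 4.0                         # CFL number u*dt/dx = 0.2
h = np.zeros(200)
h[5:15] = 2.0                                      # initial released mass (m deep)
volume0 = h.sum() * dx
for _ in range(100):
    h = step(h, u, dx, dt, entrain_rate=0.005)
volume1 = h.sum() * dx                             # grows through entrainment
```

Without the entrainment term the upwind scheme conserves the released volume; with it, the flowing mass grows along its path, which is exactly the mobility-enhancing effect the abstract argues most run-out models neglect.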
Finite element modelling of Plantar Fascia response during running on different surface types
Razak, A. H. A.; Basaruddin, K. S.; Salleh, A. F.; Rusli, W. M. R.; Hashim, M. S. M.; Daud, R.
2017-10-01
The plantar fascia is a ligament in the human foot, located beneath the skin, that functions to stabilize the longitudinal arch of the foot during standing and normal gait. Performing direct experiments on the plantar fascia is very difficult since the structure is located underneath the soft tissue. The aim of this study is to develop a finite element (FE) model of the foot with plantar fascia and investigate the effect of surface hardness on the biomechanical response of the plantar fascia during running. The plantar fascia model was developed using Solidworks 2015 according to the bone structure of a foot model obtained from the Turbosquid database. Boundary conditions were set based on data obtained from experiments on the ground reaction force response during running on surfaces of different hardness. The finite element analysis was performed using Ansys 14. The results showed that the peaks of the stress and strain distributions occurred at the insertion of the plantar fascia into the bone, especially in the calcaneal area. The plantar fascia became stiffer as the Young's modulus value increased and was able to resist greater loads. The strain in the plantar fascia decreased as the Young's modulus increased under the same loading.
Verifying and Validating Simulation Models
Energy Technology Data Exchange (ETDEWEB)
Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-02-23
This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state-of-knowledge in the discipline considered. In particular, comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack-of-knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.
Advances in Intelligent Modelling and Simulation Simulation Tools and Applications
Oplatková, Zuzana; Carvalho, Marco; Kisiel-Dorohinicki, Marek
2012-01-01
The human capacity to abstract complex systems and phenomena into simplified models has played a critical role in the rapid evolution of our modern industrial processes and scientific research. As a science and an art, Modelling and Simulation have been one of the core enablers of this remarkable human trait, and have become a topic of great importance for researchers and practitioners. This book was created to compile some of the most recent concepts, advances, challenges and ideas associated with Intelligent Modelling and Simulation frameworks, tools and applications. The first chapter discusses the important aspects of human interaction and the correct interpretation of results during simulations. The second chapter gets to the heart of the analysis of entrepreneurship by means of agent-based modelling and simulations. The following three chapters bring together the central theme of simulation frameworks, first describing an agent-based simulation framework, then a simulator for electrical machines, and...
MODELLING, SIMULATING AND OPTIMIZING BOILERS
DEFF Research Database (Denmark)
Sørensen, Kim; Condra, Thomas Joseph; Houbak, Niels
2004-01-01
In the present work a framework for optimizing the design of boilers for dynamic operation has been developed. A cost function to be minimized during the optimization has been formulated, and for the present, design variables related to the Boiler Volume and the Boiler Load Gradient (i.e. firing rate...... on the boiler) have been defined. Furthermore a number of constraints related to: minimum and maximum boiler load gradient, minimum boiler size, Shrinking and Swelling and Steam Space Load have been defined. For defining the constraints related to the required boiler volume, a dynamic model for simulating the boiler...... performance has been developed. Outputs from the simulations are the shrinking and swelling of the water level in the drum during, for example, a start-up of the boiler; these figures combined with the requirements with respect to allowable water level fluctuations in the drum define the requirements with respect to drum...
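The abstract describes minimizing a cost function over design variables (boiler volume, load gradient) subject to constraints. A hedged sketch of such a constrained minimization with `scipy.optimize.minimize`; the cost function, coefficients and constraint forms below are invented stand-ins, not the paper's actual model:

```python
from scipy.optimize import minimize

# Hypothetical cost: larger drums cost more; slower allowable load
# gradients cost operating flexibility. Coefficients are illustrative only.
def cost(x):
    volume, gradient = x
    return 2.0 * volume + 30.0 / gradient

constraints = [
    # minimum boiler size
    {"type": "ineq", "fun": lambda x: x[0] - 4.0},   # volume >= 4
    # minimum and maximum boiler load gradient
    {"type": "ineq", "fun": lambda x: x[1] - 1.0},   # gradient >= 1
    {"type": "ineq", "fun": lambda x: 8.0 - x[1]},   # gradient <= 8
    # stand-in for the dynamic shrink/swell requirement: bigger drums
    # tolerate faster load gradients (purely illustrative coupling)
    {"type": "ineq", "fun": lambda x: x[0] - 0.5 * x[1]},
]

res = minimize(cost, x0=[6.0, 2.0], constraints=constraints, method="SLSQP")
volume_opt, gradient_opt = res.x
```

In the paper's framework, the shrink/swell constraint would come from running the dynamic boiler simulation rather than from a closed-form inequality like the one assumed here.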
SEMI Modeling and Simulation Roadmap
Energy Technology Data Exchange (ETDEWEB)
Hermina, W.L.
2000-10-02
With the exponential growth in the power of computing hardware and software, modeling and simulation is becoming a key enabler for the rapid design of reliable Microsystems. One vision of the future microsystem design process would include the following primary software capabilities: (1) The development of 3D part design, through standard CAD packages, with automatic design rule checks that guarantee the manufacturability and performance of the microsystem. (2) Automatic mesh generation, for 3D parts as manufactured, that permits computational simulation of the process steps, and the performance and reliability analysis for the final microsystem. (3) Computer generated 2D layouts for process steps that utilize detailed process models to generate the layout and process parameter recipe required to achieve the desired 3D part. (4) Science-based computational tools that can simulate the process physics, and the coupled thermal, fluid, structural, solid mechanics, electromagnetic and material response governing the performance and reliability of the microsystem. (5) Visualization software that permits the rapid visualization of 3D parts including cross-sectional maps, performance and reliability analysis results, and process simulation results. In addition to these desired software capabilities, a desired computing infrastructure would include massively parallel computers that enable rapid high-fidelity analysis, coupled with networked compute servers that permit computing at a distance. We now discuss the individual computational components that are required to achieve this vision. There are three primary areas of focus: design capabilities, science-based capabilities and computing infrastructure. Within each of these areas, there are several key capability requirements.
Institute of Scientific and Technical Information of China (English)
张雅彬; 李伯虎; 柴旭东; 杨晨
2012-01-01
In order to enable a cloud simulation platform (CSP) to support users in obtaining individualized simulation services quickly, effectively and flexibly, a technology for dynamically building the cloud simulation running environment is researched based on virtualization technology. A virtualization-based model for dynamically building the cloud simulation running environment is designed, and a multi-user-oriented three-layer mapping algorithm for dynamically building the cloud simulation running environment according to the requirements of the simulation models is researched. Finally, an application example demonstrates the feasibility and effectiveness of the virtualization-based dynamic building technology for the cloud simulation running environment.
Photovoltaic array performance simulation models
Energy Technology Data Exchange (ETDEWEB)
Menicucci, D. F.
1986-09-15
The experience of the solar industry confirms that, despite recent cost reductions, the profitability of photovoltaic (PV) systems is often marginal and the configuration and sizing of a system is a critical problem for the design engineer. Construction and evaluation of experimental systems are expensive and seldom justifiable. A mathematical model or computer-simulation program is a desirable alternative, provided reliable results can be obtained. Sandia National Laboratories, Albuquerque (SNLA), has been studying PV-system modeling techniques in an effort to develop an effective tool to be used by engineers and architects in the design of cost-effective PV systems. This paper reviews two of the sources of error found in previous PV modeling programs, presents the remedies developed to correct these errors, and describes a new program that incorporates these improvements.
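As an illustration of the kind of calculation such PV-system models perform, the sketch below implements the textbook single-diode cell equation and scans the I-V curve for the maximum power point. All parameter values are illustrative assumptions; this is not the Sandia program described above.

```python
import math

def pv_current(v, i_ph=5.0, i_0=1e-9, n=1.3, t=298.15, n_cells=36):
    """Textbook single-diode model: I = I_ph - I_0*(exp(V/V_t) - 1).
    Series and shunt resistances are omitted for brevity."""
    k_over_q = 8.617e-5                  # Boltzmann constant / electron charge (V/K)
    v_t = n * k_over_q * t * n_cells     # thermal voltage for the whole module
    return i_ph - i_0 * (math.exp(v / v_t) - 1.0)

def max_power_point(v_max=30.0, steps=1000):
    """Brute-force scan of the I-V curve for the maximum power point."""
    return max((v * pv_current(v), v)
               for v in (v_max * i / steps for i in range(steps + 1)))

p_mp, v_mp = max_power_point()
```

In practice such a model is wrapped in an hourly loop over irradiance and temperature data; the errors discussed in the paper arise in exactly those environmental corrections.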
A High-Speed Train Operation Plan Inspection Simulation Model
Directory of Open Access Journals (Sweden)
Yang Rui
2018-01-01
Full Text Available We developed a train operation simulation tool to inspect a train operation plan. Applying an improved Petri net, the train was regarded as a token, and lines and stations were regarded as places, in accordance with high-speed train operation characteristics and network function. Location changes and running-information transfer of the high-speed train were realized by customizing a variety of transitions. The model was built on the concept of component combination, considering random disturbances in the process of train running. The simulation framework can be generated quickly, and the system operation can be completed according to different test requirements and the required network data. We tested the simulation tool on the real-world Wuhan to Guangzhou high-speed line. The results showed that the simulation results basically coincide with the objective reality, and that the tool can not only test the feasibility of a high-speed train operation plan but also serve as a support model for developing a simulation platform with more capabilities.
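The token/place mechanics described above reduce to a few lines. The sketch below is a deliberately minimal Petri-net fragment with hypothetical place names and none of the paper's timing or disturbance logic: a transition fires only when its source place holds the train token and the destination place is free, which is how track-occupation conflicts are excluded.

```python
def fire(marking, transition):
    """Fire a (src, dst) transition if src holds a token and dst is empty."""
    src, dst = transition
    if marking.get(src, 0) >= 1 and marking.get(dst, 0) == 0:
        marking[src] -= 1
        marking[dst] = marking.get(dst, 0) + 1
        return True
    return False

# A train (one token) advancing from station A over a track segment to station B.
marking = {"station_A": 1, "segment_1": 0, "station_B": 0}
for transition in [("station_A", "segment_1"), ("segment_1", "station_B")]:
    fire(marking, transition)
```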
Simulated annealing model of acupuncture
Shang, Charles; Szu, Harold
2015-05-01
The growth control singularity model suggests that acupuncture points (acupoints) originate from organizers in embryogenesis. Organizers are singular points in growth control. Acupuncture can cause perturbation of a system with effects similar to simulated annealing. In a clinical trial, the goal of a treatment is to relieve a certain disorder, which corresponds to reaching a certain local optimum in simulated annealing. The self-organizing effect of the system is limited and related to the person's general health and age. Perturbation at acupoints can lead to a stronger local excitation (analogous to a higher annealing temperature) compared to perturbation at non-singular points (placebo control points). Such differences diminish as the number of perturbed points increases, due to the wider distribution of the limited self-organizing activity. This model explains the following facts from systematic reviews of acupuncture trials: (1) properly chosen single-acupoint treatment for certain disorders can lead to highly repeatable efficacy above placebo; (2) when multiple acupoints are used, the result can be highly repeatable if the patients are relatively healthy and young, but is usually mixed if the patients are old, frail, and have multiple disorders at the same time, as the number of local optima or comorbidities increases; (3) as the number of acupoints used increases, the efficacy difference between sham and real acupuncture often diminishes. The model predicts that the efficacy of acupuncture is negatively correlated with disease chronicity, severity, and patient age. This is the first biological-physical model of acupuncture that can predict and guide clinical acupuncture research.
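For readers unfamiliar with the optimization metaphor used here, the following is a standard simulated-annealing loop on a toy one-dimensional energy landscape with several local minima (the landscape and cooling schedule are arbitrary choices for illustration): a higher temperature permits stronger excursions, and gradual cooling settles the system into a low optimum.

```python
import math, random

def simulated_annealing(f, x0, t0=2.0, cooling=0.995, steps=4000, seed=1):
    """Standard simulated annealing on a 1-D energy landscape."""
    rng = random.Random(seed)
    x, t = x0, t0
    best_x = x
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)            # local perturbation
        delta = f(cand) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand                                  # accept move
            if f(x) < f(best_x):
                best_x = x
        t *= cooling                                  # gradual cooling
    return best_x

# Energy landscape with many local minima; global minimum at x = 0.
energy = lambda x: x * x + 2.0 * math.sin(5.0 * x) ** 2
x_opt = simulated_annealing(energy, x0=4.0)
```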
Operations planning simulation: Model study
1974-01-01
The use of simulation modeling for the identification of system sensitivities to internal and external forces and variables is discussed. The technique provides a means of exploring alternative system procedures and processes, so that these alternatives may be considered on a mutually comparative basis, permitting the selection of a mode or modes of operation with potential advantages to the system user and operator. These advantages are measured in terms of system efficiency as: (1) the ability to meet specific schedules for operations, mission or mission-readiness requirements, or performance standards, and (2) the ability to accomplish the objectives within cost-effective limits.
Directory of Open Access Journals (Sweden)
Bisri Mohammad
2017-12-01
Full Text Available This study intended to illustrate the distribution of surface run-off. The methodology used the KINEROS (KINematic runoff and EROSion) model. This model is part of the AGWA program, an extension of the ESRI ArcView GIS software, which serves as a tool for analysing hydrological phenomena in watershed research, simulating infiltration, run-off depth, and erosion in small watersheds (≤100 km2). The procedure was as follows: the run-off depth in the Brantas sub-watershed, Klojen District, was analysed with the KINEROS model for the observed land-use changes, under rainfall simulations with return periods of 2, 5, 10, and 25 years. Results show that differences in land use affect the surface run-off; that is, there is a correlation between land use and surface run-off depth. The maximum surface run-off depth in the year 2000 was 134.26 mm; in 2005 it was 139.36 mm; and in 2010 it was 142.76 mm. There was no significant difference between the KINEROS model and field observations; the relative error was only 9.09%.
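KINEROS itself routes runoff kinematically over model elements, which is beyond a short sketch, but the land-use effect reported above (more impervious cover, more run-off depth) can be illustrated with the widely used SCS Curve Number relation, shown below as a stand-in. The CN values in the usage example are assumed, not taken from the study.

```python
def scs_runoff_depth(p_mm, cn):
    """SCS Curve Number runoff depth (mm) for a storm of depth p_mm.
    Higher CN (more impervious land use) yields more direct run-off."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = 0.2 * s                      # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

# Urbanization effect for an assumed 150 mm design storm:
q_rural = scs_runoff_depth(150.0, 75)   # pervious catchment
q_urban = scs_runoff_depth(150.0, 90)   # largely impervious catchment
```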
Minkowski space pion model inspired by lattice QCD running quark mass
Energy Technology Data Exchange (ETDEWEB)
Mello, Clayton S. [Instituto Tecnológico de Aeronáutica, DCTA, 12.228-900 São José dos Campos, SP (Brazil); Melo, J.P.B.C. de [Laboratório de Física Teórica e Computacional – LFTC, Universidade Cruzeiro do Sul, 01506-000 São Paulo, SP (Brazil); Frederico, T., E-mail: tobias@ita.br [Instituto Tecnológico de Aeronáutica, DCTA, 12.228-900 São José dos Campos, SP (Brazil)
2017-03-10
The pion structure in Minkowski space is described in terms of an analytic model of the Bethe–Salpeter amplitude combined with Euclidean Lattice QCD results. The model is physically motivated to take into account the running quark mass, which is fitted to Lattice QCD data. The pion pseudoscalar vertex is associated to the quark mass function, as dictated by dynamical chiral symmetry breaking requirements in the limit of vanishing current quark mass. The quark propagator is analyzed in terms of a spectral representation, and it shows a violation of the positivity constraints. The integral representation of the pion Bethe–Salpeter amplitude is also built. The pion space-like electromagnetic form factor is calculated with a quark electromagnetic current, which satisfies the Ward–Takahashi identity to ensure current conservation. The results for the form factor and weak decay constant are found to be consistent with the experimental data.
Minkowski space pion model inspired by lattice QCD running quark mass
Directory of Open Access Journals (Sweden)
Clayton S. Mello
2017-03-01
Full Text Available The pion structure in Minkowski space is described in terms of an analytic model of the Bethe–Salpeter amplitude combined with Euclidean Lattice QCD results. The model is physically motivated to take into account the running quark mass, which is fitted to Lattice QCD data. The pion pseudoscalar vertex is associated to the quark mass function, as dictated by dynamical chiral symmetry breaking requirements in the limit of vanishing current quark mass. The quark propagator is analyzed in terms of a spectral representation, and it shows a violation of the positivity constraints. The integral representation of the pion Bethe–Salpeter amplitude is also built. The pion space-like electromagnetic form factor is calculated with a quark electromagnetic current, which satisfies the Ward–Takahashi identity to ensure current conservation. The results for the form factor and weak decay constant are found to be consistent with the experimental data.
International Nuclear Information System (INIS)
Ribeiro, Rafael S.; Hermes, Christian J.L.
2014-01-01
In this study, the method of entropy generation minimization (i.e., design aimed at facilitating heat, mass, and fluid flows) is used to assess the evaporator design (aspect ratio and fin density), considering the thermodynamic losses due to heat and mass transfer and viscous flow processes. A fully algebraic model was put forward to simulate the thermal-hydraulic behavior of tube-fin evaporator coils running under frosting conditions. The model predictions were validated against experimental data, showing good agreement between calculated and measured counterparts. The optimization exercise pointed out that high-aspect-ratio heat exchanger designs lead to lower entropy generation in cases of fixed cooling capacity and air flow rate constrained by the characteristic curve of the fan. - Highlights: • An algebraic model for frost accumulation on tube-fin heat exchangers was advanced. • Model predictions for cooling capacity and air flow rate were compared with experimental data, with errors within a ±5% band. • The minimum entropy generation criterion was used to optimize the evaporator geometry. • Thermodynamic analysis led to slender designs for fixed cooling capacity and fan characteristics.
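The entropy generation minimization idea can be made concrete with a lumped sketch: losses from finite-temperature-difference heat transfer plus viscous pumping power, each referred to the air temperature. The functional form and all numbers below are illustrative assumptions, not the paper's algebraic evaporator model.

```python
def entropy_generation(q_w, ua, t_air, dp_pa, vdot):
    """Lumped entropy generation rate (W/K): a finite-dT heat transfer
    term plus a fan pumping-power dissipation term."""
    dt = q_w / ua                                  # air-to-coil temperature difference
    s_heat = q_w * dt / (t_air * (t_air - dt))     # heat transfer across dt
    s_flow = vdot * dp_pa / t_air                  # viscous pressure-drop loss
    return s_heat + s_flow

# For fixed cooling duty and air flow, a design with larger UA (assumed here
# to represent a slender, high-aspect-ratio coil) generates less entropy.
s_slender = entropy_generation(q_w=100.0, ua=500.0, t_air=270.0, dp_pa=30.0, vdot=0.05)
s_compact = entropy_generation(q_w=100.0, ua=200.0, t_air=270.0, dp_pa=30.0, vdot=0.05)
```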
Energy Technology Data Exchange (ETDEWEB)
Hong, Tianzhen; Buhl, Fred; Haves, Philip
2008-09-20
EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations, integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than current-generation simulation programs, which has become a major barrier to its widespread adoption by the industry. This paper analyzes EnergyPlus run time from several perspectives to identify key issues and challenges in speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on advances in computers and improvements to the EnergyPlus code, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. The paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and through adequate computing platforms. Suggestions for software code and architecture changes to improve EnergyPlus run time, based on the code profiling results, are also discussed.
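The code-profiling step mentioned above is routine to reproduce on any simulation code. In Python, for instance, the standard-library profiler ranks subroutines by consumed time; the component functions here are stand-ins for simulation subroutines, not EnergyPlus code.

```python
import cProfile, io, pstats

def slow_component():
    """Stand-in for an expensive simulation subroutine."""
    return sum(i * i for i in range(200000))

def fast_component():
    """Stand-in for a cheap subroutine."""
    return sum(range(1000))

def simulation_step():
    slow_component()
    fast_component()

pr = cProfile.Profile()
pr.enable()
simulation_step()
pr.disable()

# Report the top entries by cumulative time, as one would to find hot spots.
buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
```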
A description of the FAMOUS (version XDBUA) climate model and control run
Directory of Open Access Journals (Sweden)
A. Osprey
2008-12-01
Full Text Available FAMOUS is an ocean-atmosphere general circulation model of low resolution, capable of simulating approximately 120 years of model climate per wallclock day using current high performance computing facilities. It uses most of the same code as HadCM3, a widely used climate model of higher resolution and computational cost, and has been tuned to reproduce the same climate reasonably well. FAMOUS is useful for climate simulations where the computational cost makes the application of HadCM3 unfeasible, either because of the length of simulation or the size of the ensemble desired. We document a number of scientific and technical improvements to the original version of FAMOUS. These improvements include changes to the parameterisations of ozone and sea-ice which alleviate a significant cold bias from high northern latitudes and the upper troposphere, and the elimination of volume-averaged drifts in ocean tracers. A simple model of the marine carbon cycle has also been included. A particular goal of FAMOUS is to conduct millennial-scale paleoclimate simulations of Quaternary ice ages; to this end, a number of useful changes to the model infrastructure have been made.
Impulse pumping modelling and simulation
International Nuclear Information System (INIS)
Pierre, B; Gudmundsson, J S
2010-01-01
Impulse pumping is a new pumping method based on propagation of pressure waves. Of particular interest is the application of impulse pumping to artificial lift situations, where fluid is transported from wellbore to wellhead using pressure waves generated at wellhead. The motor driven element of an impulse pumping apparatus is therefore located at wellhead and can be separated from the flowline. Thus operation and maintenance of an impulse pump are facilitated. The paper describes the different elements of an impulse pumping apparatus, reviews the physical principles and details the modelling of the novel pumping method. Results from numerical simulations of propagation of pressure waves in water-filled pipelines are then presented for illustrating impulse pumping physical principles, and validating the described modelling with experimental data.
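Two classical relations behind such pressure-wave analyses are the wave speed in a fluid-filled elastic pipe and the Joukowsky surge for a sudden velocity change. The sketch below evaluates both for assumed water/steel-pipe values; it is background physics, not the paper's impulse-pump model.

```python
import math

def wave_speed(k_bulk, rho, d, e_pipe, thickness):
    """Pressure-wave speed in a fluid-filled elastic pipe: the rigid-pipe
    value sqrt(K/rho) reduced by the pipe-wall elasticity correction."""
    return math.sqrt((k_bulk / rho) / (1.0 + k_bulk * d / (e_pipe * thickness)))

def joukowsky(rho, a, dv):
    """Joukowsky pressure surge for an instantaneous velocity change dv."""
    return rho * a * dv

# Water (K = 2.2 GPa) in a 100 mm steel pipe (E = 210 GPa, 5 mm wall):
a = wave_speed(2.2e9, 1000.0, 0.1, 210e9, 0.005)
dp = joukowsky(1000.0, a, 1.0)   # surge from a 1 m/s velocity step
```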
International Nuclear Information System (INIS)
Jonsson, Karin; Elert, Mark
2006-08-01
In this report, further investigations of the model concept for radionuclide transport in streams, developed in SKB report TR-05-03, are presented. Three issues in particular have been the focus of the model investigations. The first was to investigate the influence of the assumed channel geometry on the simulation results. The second was to reconsider the applicability of the equation for bed-load transport in the stream model, and the last was to investigate how the model discretisation influences the simulation results. The simulations showed relatively small differences in results when applying different cross-sections in the model. Including the exact shape of the cross-section in the model is therefore not crucial; however, if cross-sectional data exist, the overall shape of the cross-section should be used in the model formulation. This could, e.g., be accomplished by using measured values of the stream width and of the depth in the middle of the stream and by assuming a triangular shape. The bed-load transport was in this study determined for different sediment characteristics, which can be used as an order-of-magnitude estimate if no exact determinations of the bed-load are available. The difference in the calculated bed-load transport for the different materials was, however, found to be limited. The investigation of model discretisation showed that a fine model discretisation to account for numerical effects is probably not important for the performed simulations. However, it can be necessary for accounting for different conditions along a stream. For example, the application of mean slopes instead of individual values in the different stream reaches can result in very different predicted concentrations.
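The triangular-section simplification suggested above is easy to state in code: from a measured top width and mid-stream depth one obtains the flow area and hydraulic radius. Variable names and the example numbers are mine.

```python
import math

def triangular_section(width, depth):
    """Approximate a stream cross-section as a triangle from surveyed
    top width and mid-stream depth, returning (area, hydraulic radius)."""
    area = 0.5 * width * depth
    side = math.hypot(width / 2.0, depth)    # length of each sloping bank
    wetted_perimeter = 2.0 * side
    return area, area / wetted_perimeter

# Example: a 4 m wide stream, 1 m deep at the centre line.
area, rh = triangular_section(4.0, 1.0)
```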
Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F
2018-01-01
Mathematical models simulating different and representative engineering problems (atomic dry friction, moving-front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear differential equations, coupled or uncoupled. For the different parameter values that influence the solution, each problem is numerically solved by the network method, which provides all the variables of the problem. Although the models are extremely sensitive to these parameters, no assumptions are made regarding linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or with experimental data published in the scientific literature to show the reliability of the models.
2018-01-01
Mathematical models simulating different and representative engineering problems (atomic dry friction, moving-front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear differential equations, coupled or uncoupled. For the different parameter values that influence the solution, each problem is numerically solved by the network method, which provides all the variables of the problem. Although the models are extremely sensitive to these parameters, no assumptions are made regarding linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or with experimental data published in the scientific literature to show the reliability of the models. PMID:29518121
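The network method rests on the classical electro-mechanical analogy: a mass-spring-damper m·x'' + c·x' + k·x = 0 maps onto a series RLC circuit with L = m, R = c, C = 1/k, so a circuit simulator can integrate the mechanical equations. The toy semi-implicit Euler integration below (my own sketch, not the authors' software) shows the two parameterizations coincide.

```python
def simulate(m, c, k, x0, steps=20000, dt=1e-4):
    """Integrate m x'' + c x' + k x = 0 with semi-implicit Euler."""
    x, v = x0, 0.0
    for _ in range(steps):
        a = -(c * v + k * x) / m
        v += a * dt
        x += v * dt
    return x

x_mech = simulate(m=1.0, c=0.5, k=4.0, x0=1.0)         # mechanical form
x_circ = simulate(m=1.0, c=0.5, k=1.0 / 0.25, x0=1.0)  # L = 1, R = 0.5, C = 0.25
```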
Construction and simulation of a novel continuous traffic flow model
International Nuclear Information System (INIS)
Hwang, Yao-Hsin; Yu, Jui-Ling
2017-01-01
In this paper, we aim to propose a novel mathematical model for traffic flow and apply a newly developed characteristic particle method to solve the associated governing equations. Compared with existing non-equilibrium higher-order traffic flow models, the present one is put forward to satisfy the following three conditions: 1. Preserve the equilibrium state in the smooth region. 2. Yield an anisotropic propagation of traffic flow information. 3. Be expressed in conservation-law form for traffic momentum. These conditions ensure a more realistic simulation of traffic flow physics: the current traffic is not influenced by conditions behind it, and the state across a traffic shock is unambiguous. Through analyses of the characteristics, stability condition, and steady-state solution adherent to the equation system, it is shown that the proposed model actually conforms to these conditions. Furthermore, this model can be cast into its characteristic form which, incorporated with the Rankine-Hugoniot relation, is appropriate for simulation by the characteristic particle method to obtain accurate computational results. - Highlights: • The traffic model is expressed with the momentum conservation law. • Traffic flow information propagates anisotropically and preserves the equilibrium state in the smooth region. • Computational particles of two families are invented to mimic forward-running and backward-running characteristics. • Formation of shocks is naturally detected by the intersection of computational particles of the same family. • A newly developed characteristic particle method is used to simulate the traffic flow model equations.
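A toy version of the characteristic-particle idea, using the classic first-order LWR model with a Greenshields flux q = v_f·ρ·(1 − ρ/ρ_max) as a stand-in (the paper's higher-order model differs): each particle carries a density value and moves at the characteristic speed dq/dρ, and a shock is flagged where particles of the same family cross.

```python
def advance(particles, v_f=30.0, rho_max=0.2, dt=1.0, steps=60):
    """Move (position, density) particles at the LWR characteristic
    speed dq/drho = v_f * (1 - 2*rho/rho_max)."""
    for _ in range(steps):
        particles = [(x + v_f * (1.0 - 2.0 * rho / rho_max) * dt, rho)
                     for x, rho in particles]
    return particles

# Light traffic upstream of heavy traffic: characteristics converge, and the
# particles crossing signals shock formation.
ptcls = advance([(0.0, 0.05), (500.0, 0.15)])
```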
Foguelman, Daniel Jacob; The ATLAS collaboration
2016-01-01
Complex networked computer systems are usually subjected to upgrades and enhancements on a continuous basis. Modeling and simulation of such systems helps with guiding their engineering processes, in particular when testing candi- date design alternatives directly on the real system is not an option. Models are built and simulation exercises are run guided by specific research and/or design questions. A vast amount of operational conditions for the real system need to be assumed in order to focus on the relevant questions at hand. A typical boundary condition for computer systems is the exogenously imposed workload. Meanwhile, in typical projects huge amounts of monitoring information are logged and stored with the purpose of studying the system’s performance in search for improvements. Also research questions change as systems’ operational conditions vary throughout its lifetime. This context poses many challenges to determine the validity of simulation models. As the behavioral empirical base of the sys...
Galaxy Alignments: Theory, Modelling & Simulations
Kiessling, Alina; Cacciato, Marcello; Joachimi, Benjamin; Kirk, Donnacha; Kitching, Thomas D.; Leonard, Adrienne; Mandelbaum, Rachel; Schäfer, Björn Malte; Sifón, Cristóbal; Brown, Michael L.; Rassat, Anais
2015-11-01
The shapes of galaxies are not randomly oriented on the sky. During the galaxy formation and evolution process, environment has a strong influence, as tidal gravitational fields in the large-scale structure tend to align nearby galaxies. Additionally, events such as galaxy mergers affect the relative alignments of both the shapes and angular momenta of galaxies throughout their history. These "intrinsic galaxy alignments" are known to exist, but are still poorly understood. This review will offer a pedagogical introduction to the current theories that describe intrinsic galaxy alignments, including the apparent difference in intrinsic alignment between early- and late-type galaxies and the latest efforts to model them analytically. It will then describe the ongoing efforts to simulate intrinsic alignments using both N-body and hydrodynamic simulations. Due to the relative youth of this field, there is still much to be done to understand intrinsic galaxy alignments and this review summarises the current state of the field, providing a solid basis for future work.
Implementation of angular response function modeling in SPECT simulations with GATE
International Nuclear Information System (INIS)
Descourt, P; Visvikis, D; Carlier, T; Bardies, M; Du, Y; Song, X; Frey, E C; Tsui, B M W; Buvat, I
2010-01-01
Among Monte Carlo simulation codes in medical imaging, the GATE simulation platform is widely used today given its flexibility and accuracy, despite long run times, which in SPECT simulations are mostly spent in tracking photons through the collimators. In this work, a tabulated model of the collimator/detector response was implemented within the GATE framework to significantly reduce the simulation times in SPECT. This implementation uses the angular response function (ARF) model. The performance of the implemented ARF approach has been compared to standard SPECT GATE simulations in terms of the ARF tables' accuracy, overall SPECT system performance and run times. Considering the simulation of the Siemens Symbia T SPECT system using high-energy collimators, differences of less than 1% were measured between the ARF-based and the standard GATE-based simulations, while considering the same noise level in the projections, acceleration factors of up to 180 were obtained when simulating a planar 364 keV source seen with the same SPECT system. The ARF-based and the standard GATE simulation results also agreed very well when considering a four-head SPECT simulation of a realistic Jaszczak phantom filled with iodine-131, with a resulting acceleration factor of 100. In conclusion, the implementation of an ARF-based model of collimator/detector response for SPECT simulations within GATE significantly reduces the simulation run times without compromising accuracy. (note)
Implementation of angular response function modeling in SPECT simulations with GATE
Energy Technology Data Exchange (ETDEWEB)
Descourt, P; Visvikis, D [INSERM, U650, LaTIM, IFR SclnBioS, Universite de Brest, CHU Brest, Brest, F-29200 (France); Carlier, T; Bardies, M [CRCNA INSERM U892, Nantes (France); Du, Y; Song, X; Frey, E C; Tsui, B M W [Department of Radiology, J Hopkins University, Baltimore, MD (United States); Buvat, I, E-mail: dimitris@univ-brest.f [IMNC-UMR 8165 CNRS Universites Paris 7 et Paris 11, Orsay (France)
2010-05-07
Among Monte Carlo simulation codes in medical imaging, the GATE simulation platform is widely used today given its flexibility and accuracy, despite long run times, which in SPECT simulations are mostly spent in tracking photons through the collimators. In this work, a tabulated model of the collimator/detector response was implemented within the GATE framework to significantly reduce the simulation times in SPECT. This implementation uses the angular response function (ARF) model. The performance of the implemented ARF approach has been compared to standard SPECT GATE simulations in terms of the ARF tables' accuracy, overall SPECT system performance and run times. Considering the simulation of the Siemens Symbia T SPECT system using high-energy collimators, differences of less than 1% were measured between the ARF-based and the standard GATE-based simulations, while considering the same noise level in the projections, acceleration factors of up to 180 were obtained when simulating a planar 364 keV source seen with the same SPECT system. The ARF-based and the standard GATE simulation results also agreed very well when considering a four-head SPECT simulation of a realistic Jaszczak phantom filled with iodine-131, with a resulting acceleration factor of 100. In conclusion, the implementation of an ARF-based model of collimator/detector response for SPECT simulations within GATE significantly reduces the simulation run times without compromising accuracy. (note)
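The acceleration mechanism is easy to caricature: instead of tracking each photon through the collimator, the detection probability is read from a precomputed table indexed by incidence angle. The Gaussian-shaped toy table below is an assumption for illustration, not GATE's actual ARF tables.

```python
import math, random

# Toy ARF table: detection probability vs. incidence angle, tabulated in
# 0.05-degree bins from 0 to 10 degrees (narrow Gaussian acceptance).
table = [math.exp(-(math.radians(a) / 0.02) ** 2)
         for a in (0.05 * i for i in range(200))]

def detect_prob(angle_deg):
    """O(1) table lookup in place of full collimator photon transport."""
    i = min(int(angle_deg / 0.05), len(table) - 1)
    return table[i]

# Score 10,000 photons with random incidence angles via the lookup table.
rng = random.Random(0)
hits = sum(rng.random() < detect_prob(rng.uniform(0.0, 10.0))
           for _ in range(10000))
```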
A comparison and update of direct kinematic-kinetic models of leg stiffness in human running.
Liew, Bernard X W; Morris, Susan; Masters, Ashleigh; Netto, Kevin
2017-11-07
Direct kinematic-kinetic modelling currently represents the "Gold-standard" in leg stiffness quantification during three-dimensional (3D) motion capture experiments. However, the medial-lateral components of ground reaction force and leg length have been neglected in current leg stiffness formulations. It is unknown if accounting for all 3D would alter healthy biologic estimates of leg stiffness, compared to present direct modelling methods. This study compared running leg stiffness derived from a new method (multiplanar method) which includes all three Cartesian axes, against current methods which either only include the vertical axis (line method) or only the plane of progression (uniplanar method). Twenty healthy female runners performed shod overground running at 5.0 m/s. Three-dimensional motion capture and synchronised in-ground force plates were used to track the change in length of the leg vector (hip joint centre to centre of pressure) and resultant projected ground reaction force. Leg stiffness was expressed as dimensionless units, as a percentage of an individual's bodyweight divided by standing leg length (BW/LL). Leg stiffness using the line method was larger than the uniplanar method by 15.6%BW/LL (P method by 24.2%BW/LL (P stiffness from the uniplanar method was larger than the multiplanar method by 8.5%BW/LL (6.5 kN/m) (P stiffness estimate with the multiplanar method. Given that limb movements typically occur in 3D, the new multiplanar method provides the most complete accounting of all force and length components in leg stiffness calculation. Copyright © 2017 Elsevier Ltd. All rights reserved.
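A minimal version of the multiplanar idea, with made-up marker and force-plate samples: leg length is the full 3-D hip-to-centre-of-pressure distance, and stiffness is peak resultant ground reaction force over peak leg compression. This is a sketch of the general approach, not the authors' exact formulation (e.g., it omits force projection onto the leg axis and the bodyweight/leg-length normalization).

```python
import math

def leg_stiffness(hips, cops, forces):
    """Peak resultant GRF divided by the change in 3-D leg length
    (hip-to-centre-of-pressure distance) over stance."""
    lengths = [math.dist(h, c) for h, c in zip(hips, cops)]
    compression = max(lengths) - min(lengths)
    peak_force = max(math.hypot(*f) for f in forces)
    return peak_force / compression

# Made-up stance-phase samples: hip marker (m), centre of pressure (m), GRF (N).
hips = [(0.00, 0.00, 1.00), (0.01, 0.00, 0.95), (0.02, 0.00, 1.00)]
cops = [(0.00, 0.00, 0.00)] * 3
forces = [(50.0, 20.0, 800.0), (80.0, 30.0, 1600.0), (40.0, 10.0, 700.0)]
k = leg_stiffness(hips, cops, forces)   # N/m
```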
cellGPU: Massively parallel simulations of dynamic vertex models
Sussman, Daniel M.
2017-10-01
Vertex models represent confluent tissue by polygonal or polyhedral tilings of space, with the individual cells interacting via force laws that depend on both the geometry of the cells and the topology of the tessellation. This dependence on the connectivity of the cellular network introduces several complications to performing molecular-dynamics-like simulations of vertex models, and in particular makes parallelizing the simulations difficult. cellGPU addresses this difficulty and lays the foundation for massively parallelized, GPU-based simulations of these models. This article discusses its implementation for a pair of two-dimensional models, and compares the typical performance that can be expected between running cellGPU entirely on the CPU versus its performance when running on a range of commercial and server-grade graphics cards. By implementing the calculation of topological changes and forces on cells in a highly parallelizable fashion, cellGPU enables researchers to simulate time- and length-scales previously inaccessible via existing single-threaded CPU implementations.
Program files doi: http://dx.doi.org/10.17632/6j2cj29t3r.1
Licensing provisions: MIT
Programming language: CUDA/C++
Nature of problem: Simulations of off-lattice "vertex models" of cells, in which the interaction forces depend on both the geometry and the topology of the cellular aggregate.
Solution method: Highly parallelized GPU-accelerated dynamical simulations in which the force calculations and the topological features can be handled on either the CPU or GPU.
Additional comments: The code is hosted at https://gitlab.com/dmsussman/cellGPU, with documentation additionally maintained at http://dmsussman.gitlab.io/cellGPUdocumentation
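The force laws referred to above typically derive from the standard 2-D vertex-model energy, quadratic in each cell's area and perimeter deviations from target values. A minimal single-cell sketch (target values a0 and p0, and the moduli, are assumed illustration parameters):

```python
import math

def cell_energy(vertices, a0=1.0, p0=3.8, ka=1.0, kp=1.0):
    """Standard 2-D vertex-model energy for one cell:
    E = ka*(A - a0)^2 + kp*(P - p0)^2, with shoelace area A and perimeter P."""
    n = len(vertices)
    area = 0.5 * abs(sum(vertices[i][0] * vertices[(i + 1) % n][1]
                         - vertices[(i + 1) % n][0] * vertices[i][1]
                         for i in range(n)))
    perim = sum(math.dist(vertices[i], vertices[(i + 1) % n]) for i in range(n))
    return ka * (area - a0) ** 2 + kp * (perim - p0) ** 2

# A regular hexagonal cell of circumradius 1 (a common vertex-model motif):
hexagon = [(math.cos(2 * math.pi * k / 6), math.sin(2 * math.pi * k / 6))
           for k in range(6)]
e = cell_energy(hexagon)
```

In a full simulation this energy is summed over all cells and differentiated to obtain vertex forces, which is the part cellGPU parallelizes on the GPU.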
International Nuclear Information System (INIS)
Orban, Chris
2013-01-01
In setting up initial conditions for ensembles of cosmological N-body simulations there are, fundamentally, two choices: either maximizing the correspondence of the initial density field to the assumed Fourier-space clustering or, instead, matching to real-space statistics and allowing the DC mode (i.e. overdensity) to vary from box to box as it would in the real universe. As a stringent test of both approaches, I perform ensembles of simulations using power-law and a "power law times a bump" model inspired by baryon acoustic oscillations (BAO), exploiting the self-similarity of these initial conditions to quantify the accuracy of the matter-matter two-point correlation results. The real-space method, which was originally proposed by Pen 1997 [1] and implemented by Sirko 2005 [2], performed well in producing the expected self-similar behavior and corroborated the non-linear evolution of the BAO feature observed in conventional simulations, even in the strongly-clustered regime (σ_8 ≳ 1). In revisiting the real-space method championed by [2], it was also noticed that this earlier study overlooked an important integral-constraint correction to the correlation function in results from the conventional approach, which can be important in ΛCDM simulations with L_box ∼ h^−1 Gpc and on scales r ≳ L_box/10. Rectifying this issue shows that the Fourier-space and real-space methods are about equally accurate and efficient for modeling the evolution and growth of the correlation function, contrary to previous claims. An appendix provides a useful independent-of-epoch analytic formula for estimating the importance of the integral-constraint bias on correlation function measurements in ΛCDM simulations.
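Self-similarity is what makes these power-law tests stringent: for P(k) ∝ k^n in an Einstein-de Sitter background, the nonlinear scale must grow as a^(2/(3+n)), so any departure of the measured correlation function from this rescaling exposes numerical error. As a one-line reminder of the scaling used in such tests:

```python
def self_similar_scale(a, n):
    """Nonlinear-scale growth factor for a power-law spectrum P(k) ~ k^n
    in an Einstein-de Sitter universe: R_nl grows as a**(2/(3+n))."""
    return a ** (2.0 / (3.0 + n))
```

For example, with n = -1 the nonlinear scale grows in direct proportion to the scale factor, so correlation functions measured at different epochs should collapse onto one curve after rescaling r by this factor.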
THE MARK I BUSINESS SYSTEM SIMULATION MODEL
of a large-scale business simulation model as a vehicle for doing research in management controls. The major results of the program were the development of the Mark I business simulation model and the Simulation Package (SIMPAC). SIMPAC is a method and set of programs facilitating the construction of large simulation models. The object of this document is to describe the Mark I Corporation model, state why parts of the business were modeled as they were, and indicate the research applications of the model. (Author)
Topographic filtering simulation model for sediment source apportionment
Cho, Se Jong; Wilcock, Peter; Hobbs, Benjamin
2018-05-01
We propose a Topographic Filtering simulation model (Topofilter) that can be used to identify those locations that are likely to contribute most of the sediment load delivered from a watershed. The reduced complexity model links spatially distributed estimates of annual soil erosion, high-resolution topography, and observed sediment loading to determine the distribution of sediment delivery ratio across a watershed. The model uses two simple two-parameter topographic transfer functions based on the distance and change in elevation from upland sources to the nearest stream channel and then down the stream network. The approach does not attempt to find a single best-calibrated solution of sediment delivery, but uses a model conditioning approach to develop a large number of possible solutions. For each model run, locations that contribute to 90% of the sediment loading are identified and those locations that appear in this set in most of the 10,000 model runs are identified as the sources that are most likely to contribute to most of the sediment delivered to the watershed outlet. Because the underlying model is quite simple and strongly anchored by reliable information on soil erosion, topography, and sediment load, we believe that the ensemble of simulation outputs provides a useful basis for identifying the dominant sediment sources in the watershed.
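The two-parameter transfer-function idea above can be sketched as follows. The exponential-decay form, the parameter names (`k_hill`, `k_channel`), and the example numbers are illustrative assumptions, not the paper's calibrated functions (which use both distance and change in elevation):

```python
import math

def delivery_ratio(dist_to_channel, dist_downstream, k_hill, k_channel):
    """Fraction of eroded soil delivered to the watershed outlet, as the
    product of a hillslope transfer function and a channel-network
    transfer function. Exponential decay with distance is an
    illustrative assumption, not the paper's fitted form."""
    return (math.exp(-dist_to_channel / k_hill) *
            math.exp(-dist_downstream / k_channel))

# Ensemble-conditioning idea: sample many (k_hill, k_channel) pairs and
# keep those whose predicted watershed load matches the observed load.
print(round(delivery_ratio(100.0, 5000.0, k_hill=200.0, k_channel=10000.0), 3))
```

Each conditioned parameter pair yields a different spatial map of delivery; sources appearing in the top-90% set across most of the runs are flagged as dominant.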
Beck, Owen N; Taboga, Paolo; Grabowski, Alena M
2017-07-01
Running-specific prostheses enable athletes with lower limb amputations to run by emulating the spring-like function of biological legs. Current prosthetic stiffness and height recommendations aim to mitigate kinematic asymmetries for athletes with unilateral transtibial amputations. However, it is unclear how different prosthetic configurations influence the biomechanics and metabolic cost of running. Consequently, we investigated how prosthetic model, stiffness, and height affect the biomechanics and metabolic cost of running. Ten athletes with unilateral transtibial amputations each performed 15 running trials at 2.5 or 3.0 m/s while we measured ground reaction forces and metabolic rates. Athletes ran using three different prosthetic models with five different stiffness category and height combinations per model. Use of an Ottobock 1E90 Sprinter prosthesis reduced metabolic cost by 4.3 and 3.4% compared with use of Freedom Innovations Catapult [fixed effect (β) = -0.177; P Run (β = -0.139; P = 0.002) prostheses, respectively. Neither prosthetic stiffness ( P ≥ 0.180) nor height ( P = 0.062) affected the metabolic cost of running. The metabolic cost of running was related to lower peak (β = 0.649; P = 0.001) and stance average (β = 0.772; P = 0.018) vertical ground reaction forces, prolonged ground contact times (β = -4.349; P = 0.012), and decreased leg stiffness (β = 0.071; P running. Instead, an optimal prosthetic model, which improves overall biomechanics, minimizes the metabolic cost of running for athletes with unilateral transtibial amputations. NEW & NOTEWORTHY The metabolic cost of running for athletes with unilateral transtibial amputations depends on prosthetic model and is associated with lower peak and stance average vertical ground reaction forces, longer contact times, and reduced leg stiffness. Metabolic cost is unrelated to prosthetic stiffness, height, and stride kinematic symmetry. Unlike nonamputees who decrease leg stiffness with
Uterus models for use in virtual reality hysteroscopy simulators.
Niederer, Peter; Weiss, Stephan; Caduff, Rosmarie; Bajka, Michael; Szekély, Gabor; Harders, Matthias
2009-05-01
Virtual reality models of human organs are needed in surgery simulators which are developed for educational and training purposes. A simulation can only be useful, however, if the mechanical performance of the system in terms of force-feedback for the user as well as the visual representation is realistic. We therefore aim at developing a mechanical computer model of the organ in question which yields realistic force-deformation behavior under virtual instrument-tissue interactions and which, in particular, runs in real time. The modeling of the human uterus is described as it is to be implemented in a simulator for minimally invasive gynecological procedures. To this end, anatomical information which was obtained from specially designed computed tomography and magnetic resonance imaging procedures as well as constitutive tissue properties recorded from mechanical testing were used. In order to achieve real-time performance, the combination of mechanically realistic numerical uterus models of various levels of complexity with a statistical deformation approach is suggested. In view of mechanical accuracy of such models, anatomical characteristics including the fiber architecture along with the mechanical deformation properties are outlined. In addition, an approach to make this numerical representation potentially usable in an interactive simulation is discussed. The numerical simulation of hydrometra is shown in this communication. The results were validated experimentally. In order to meet the real-time requirements and to accommodate the large biological variability associated with the uterus, a statistical modeling approach is demonstrated to be useful.
Distributed simulation a model driven engineering approach
Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent
2016-01-01
Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.
Benchmark simulation models, quo vadis?
Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D
2013-01-01
As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.
Simulation Model of Mobile Detection Systems
International Nuclear Information System (INIS)
Edmunds, T.; Faissol, D.; Yao, Y.
2009-01-01
In this paper, we consider a mobile source that we attempt to detect with man-portable, vehicle-mounted or boat-mounted radiation detectors. The source is assumed to transit an area populated with these mobile detectors, and the objective is to detect the source before it reaches a perimeter. We describe a simulation model developed to estimate the probability that one of the mobile detectors will come into close proximity of the moving source and detect it. We illustrate with a maritime simulation example. Our simulation takes place in a 10 km by 5 km rectangular bay patrolled by boats equipped with 2-inch x 4-inch x 16-inch NaI detectors. Boats to be inspected enter the bay and randomly proceed to one of seven harbors on the shore. A source-bearing boat enters the mouth of the bay and proceeds to a pier on the opposite side. We wish to determine the probability that the source is detected and its range from the target when detected. Patrol boats select the nearest in-bound boat for inspection and initiate an intercept course. Once within an operational range for the detection system, a detection algorithm is started. If the patrol boat confirms the source is not present, it selects the next nearest boat for inspection. Each run of the simulation ends either when a patrol successfully detects a source or when the source reaches its target. Several statistical detection algorithms have been implemented in the simulation model. First, a simple k-sigma algorithm, which alarms when the counts in a time window exceed the mean background plus k times the standard deviation of the background, is available to the user. The time window used is optimized with respect to the signal-to-background ratio for that range and relative speed. Second, a sequential probability ratio test [Wald 1947] is available, and is configured in this simulation with a target false positive probability of 0.001 and false negative probability of 0.1. This test is utilized when the mobile detector maintains
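The k-sigma rule described above can be sketched as follows. The Poisson assumption (sigma equals the square root of the mean background) and the example numbers are illustrative, not taken from the paper:

```python
import math

def k_sigma_alarm(window_counts, mean_bg, k=3.0):
    """Alarm if the total counts in the time window exceed the mean
    background plus k times the standard deviation of the background.
    A Poisson background is assumed, so sigma = sqrt(mean_bg); that
    assumption is illustrative, not stated in the abstract."""
    threshold = mean_bg + k * math.sqrt(mean_bg)
    return sum(window_counts) > threshold

# Example: 100 background counts expected in the window (sigma = 10),
# so the 3-sigma threshold is 130 total counts.
print(k_sigma_alarm([40, 45, 50], 100.0, k=3.0))  # 135 > 130 -> True
print(k_sigma_alarm([30, 35, 40], 100.0, k=3.0))  # 105 < 130 -> False
```

In practice the window length would be tuned, as the abstract notes, to the signal-to-background ratio for the given range and relative speed.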
RG running in a minimal UED model in light of recent LHC Higgs mass bounds
International Nuclear Information System (INIS)
Blennow, Mattias; Melbéus, Henrik; Ohlsson, Tommy; Zhang, He
2012-01-01
We study how the recent ATLAS and CMS Higgs mass bounds affect the renormalization group running of the physical parameters in universal extra dimensions. Using the running of the Higgs self-coupling constant, we derive bounds on the cutoff scale of the extra-dimensional theory itself. We show that the running of physical parameters, such as the fermion masses and the CKM mixing matrix, is significantly restricted by these bounds. In particular, we find that the running of the gauge couplings cannot be sufficient to allow gauge unification at the cutoff scale.
Simulation modelling of fynbos ecosystems: Systems analysis and conceptual models
CSIR Research Space (South Africa)
Kruger, FJ
1985-03-01
Full Text Available -animal interactions. An additional two models, which expand aspects of the FYNBOS model, are described: a model for simulating canopy processes; and a Fire Recovery Simulator. The canopy process model will simulate ecophysiological processes in more detail than FYNBOS...
Directory of Open Access Journals (Sweden)
Seiji Shimomura
2018-06-01
Full Text Available We analyzed the influence of treadmill running on rheumatoid arthritis (RA) joints using a collagen-induced arthritis (CIA) rat model. Eight-week-old male Dark Agouti rats were randomly divided into four groups: the control group, treadmill group (30 min/day for 4 weeks from 10 weeks old), CIA group (CIA induced at 8 weeks old), and CIA + treadmill group. Destruction of the ankle joint was evaluated by histological analyses. Morphological changes of subchondral bone were analyzed by μ-CT. CIA treatment induced synovial membrane invasion, articular cartilage destruction, and bone erosion; treadmill running improved these changes. The synovial membrane in CIA rats produced a large amount of tumor necrosis factor-α and Connexin 43; production was significantly suppressed by treadmill running. On μ-CT of the talus, the bone volume fraction (BV/TV) was significantly decreased in the CIA group, and the marrow star volume (MSV), an index of bone loss, was significantly increased. These changes were significantly improved by treadmill running. Bone destruction in the talus was significantly increased with CIA and was suppressed by treadmill running. On tartrate-resistant acid phosphatase and alkaline phosphatase (TRAP/ALP) staining, the number of osteoclasts around the pannus was decreased by treadmill running. These findings indicate that treadmill running in CIA rats inhibited synovial hyperplasia and joint destruction.
Directory of Open Access Journals (Sweden)
Linmu Chen
2017-11-01
Full Text Available Running exercise is an effective method to improve depressive symptoms when combined with drugs. However, the underlying mechanisms are not fully clear. Cerebral blood flow perfusion in depressed patients is significantly lower in the hippocampus. Physical activity can achieve cerebrovascular benefits. The purpose of this study was to evaluate the impact of running exercise on capillaries in the hippocampal CA1 and dentate gyrus (DG) regions. The chronic unpredictable stress (CUS) depression model was used in this study. CUS rats were given 4 weeks of running exercise from the fifth week to the eighth week (20 min every day from Monday to Friday each week). The sucrose consumption test was used to measure anhedonia. Furthermore, stereological methods were used to investigate the capillary changes among the control group, CUS/Standard group and CUS/Running group. Sucrose consumption significantly increased in the CUS/Running group. Running exercise had positive effects on capillary parameters in the hippocampal CA1 and DG regions, such as the total volume, total length and total surface area. These results demonstrate that the protection of capillaries by running exercise in the hippocampal CA1 and DG regions might be one of the structural bases for the exercise-induced treatment of depression-like behavior. These results suggest that drugs and behavior influence capillaries and may be considered as a new means for depression treatment in the future.
An introduction to enterprise modeling and simulation
Energy Technology Data Exchange (ETDEWEB)
Ostic, J.K.; Cannon, C.E. [Los Alamos National Lab., NM (United States). Technology Modeling and Analysis Group
1996-09-01
As part of an ongoing effort to continuously improve productivity, quality, and efficiency of both industry and Department of Energy enterprises, Los Alamos National Laboratory is investigating various manufacturing and business enterprise simulation methods. A number of enterprise simulation software models are being developed to enable engineering analysis of enterprise activities. In this document the authors define the scope of enterprise modeling and simulation efforts, and review recent work in enterprise simulation at Los Alamos National Laboratory as well as at other industrial, academic, and research institutions. References of enterprise modeling and simulation methods and a glossary of enterprise-related terms are provided.
Simulation and Modeling Methodologies, Technologies and Applications
Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno
2014-01-01
This book includes extended and revised versions of a set of selected papers from the 2012 International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2012) which was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and held in Rome, Italy. SIMULTECH 2012 was technically co-sponsored by the Society for Modeling & Simulation International (SCS), GDR I3, Lionphant Simulation, Simulation Team and IFIP and held in cooperation with AIS Special Interest Group of Modeling and Simulation (AIS SIGMAS) and the Movimento Italiano Modellazione e Simulazione (MIMOS).
Structured building model reduction toward parallel simulation
Energy Technology Data Exchange (ETDEWEB)
Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University
2013-08-26
Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any interprocessor communication.
An Iterative Algorithm to Determine the Dynamic User Equilibrium in a Traffic Simulation Model
Gawron, C.
An iterative algorithm to determine the dynamic user equilibrium with respect to link costs defined by a traffic simulation model is presented. Each driver's route choice is modeled by a discrete probability distribution which is used to select a route in the simulation. After each simulation run, the probability distribution is adapted to minimize the travel costs. Although the algorithm does not depend on the simulation model, a queuing model is used for performance reasons. The stability of the algorithm is analyzed for a simple example network. As an application example, a dynamic version of Braess's paradox is studied.
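A minimal sketch of the iteration described above, assuming a logit-style probability update and a toy two-route "simulation" in which each link's cost grows with its expected load; the paper's actual adaptation rule and queuing model are more elaborate:

```python
import math

def update_probs(probs, costs, beta=0.5):
    """One adaptation step: reweight the route-choice distribution
    toward cheaper routes (an illustrative logit-style rule, not the
    paper's exact update)."""
    w = [p * math.exp(-beta * c) for p, c in zip(probs, costs)]
    total = sum(w)
    return [x / total for x in w]

def route_costs(probs, n_drivers=100):
    """Toy stand-in for one simulation run: each route's travel cost
    grows linearly with its expected load."""
    return [1.0 + 0.02 * probs[0] * n_drivers,
            2.0 + 0.01 * probs[1] * n_drivers]

probs = [0.5, 0.5]
for _ in range(200):                       # repeated simulation runs
    probs = update_probs(probs, route_costs(probs))

costs = route_costs(probs)
print([round(p, 3) for p in probs], [round(c, 3) for c in costs])
# at the fixed point, both used routes carry (nearly) equal cost
```

The fixed point of the update is exactly the user-equilibrium condition: probability mass stops shifting only when all routes in use have equal cost.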
Large Scale Model Test Investigation on Wave Run-Up in Irregular Waves at Slender Piles
DEFF Research Database (Denmark)
Ramirez, Jorge Robert Rodriguez; Frigaard, Peter; Andersen, Thomas Lykke
2013-01-01
An experimental large scale study on wave run-up generated loads on entrance platforms for offshore wind turbines was performed. The experiments were performed at GrosserWellenkanal (GWK), Forschungszentrum Küste (FZK) in Hannover, Germany. The present paper deals with the run-up heights determin...
A physiological production model for cacao : results of model simulations
Zuidema, P.A.; Leffelaar, P.A.
2002-01-01
CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.
Simulation modeling and analysis with Arena
Altiok, Tayfur
2007-01-01
Simulation Modeling and Analysis with Arena is a highly readable textbook which treats the essentials of the Monte Carlo discrete-event simulation methodology, and does so in the context of a popular Arena simulation environment. It treats simulation modeling as an in-vitro laboratory that facilitates the understanding of complex systems and experimentation with what-if scenarios in order to estimate their performance metrics. The book contains chapters on the simulation modeling methodology and the underpinnings of discrete-event systems, as well as the relevant underlying probability, statistics, stochastic processes, input analysis, model validation and output analysis. All simulation-related concepts are illustrated in numerous Arena examples, encompassing production lines, manufacturing and inventory systems, transportation systems, and computer information systems in networked settings.· Introduces the concept of discrete event Monte Carlo simulation, the most commonly used methodology for modeli...
Weather model performance on extreme rainfall events simulation's over Western Iberian Peninsula
Pereira, S. C.; Carvalho, A. C.; Ferreira, J.; Nunes, J. P.; Kaiser, J. J.; Rocha, A.
2012-08-01
This study evaluates the performance of the WRF-ARW numerical weather model in simulating the spatial and temporal patterns of an extreme rainfall period over a complex orographic region in north-central Portugal. The analysis was performed for December 2009, during the Portugal Mainland rainy season. The heavy to extreme rainfall periods were due to several low-pressure systems associated with frontal surfaces. The total amount of precipitation for December exceeded the climatological mean for the 1971-2000 period by 89 mm on average, varying from 190 mm (southern part of the country) to 1175 mm (northern part of the country). Three model runs were conducted to assess possible improvements in model performance: (1) the WRF-ARW is forced with the initial fields from a global domain model (RunRef); (2) data assimilation for a specific location is included (RunObsN); (3) nudging is used to adjust the analysis field (RunGridN). Model performance was evaluated against an observed hourly precipitation dataset of 15 rainfall stations using several statistical parameters. The WRF-ARW model reproduced the temporal rainfall patterns well but tended to overestimate precipitation amounts. The RunGridN simulation provided the best results, but the performance of the other two runs was also good, so the selected extreme rainfall episode was successfully reproduced.
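Evaluation against station data of the kind described above typically rests on a few paired-series statistics. A minimal sketch, assuming bias, RMSE, and Pearson correlation as the metrics (the paper's exact parameter set is not specified here):

```python
import math

def verification_stats(model, obs):
    """Bias, RMSE, and Pearson correlation between paired model and
    observation series (generic skill measures; illustrative only)."""
    n = len(model)
    bias = sum(m - o for m, o in zip(model, obs)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
    mm = sum(model) / n
    mo = sum(obs) / n
    cov = sum((m - mm) * (o - mo) for m, o in zip(model, obs))
    var = math.sqrt(sum((m - mm) ** 2 for m in model) *
                    sum((o - mo) ** 2 for o in obs))
    return bias, rmse, cov / var

# A positive bias with high correlation would match the paper's finding:
# good temporal patterns, but overestimated precipitation amounts.
b, r, c = verification_stats([2.0, 4.0, 6.0], [1.0, 3.0, 5.0])
print(b, r, c)
```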
Debris flow run off simulation and verification ‒ case study of Chen-You-Lan Watershed, Taiwan
Directory of Open Access Journals (Sweden)
M.-L. Lin
2005-01-01
Full Text Available In 1996, typhoon Herb struck the central Taiwan area, causing severe debris flows in many subwatersheds of the Chen-You-Lan river watershed. More severe cases of debris flow occurred following the 1999 Chi-Chi earthquake. In order to identify the potentially affected area and its severity, the ability to simulate the flow route of debris is desirable. In this research, numerical simulation of the debris flow deposition process was carried out using FLO-2D, adopting the Chui-Sue river watershed as the study area. A sensitivity study of the parameters used in the numerical model was conducted and adjustments were made empirically. The micro-geomorphic database of the Chui-Sue river watershed was generated and analyzed to understand the terrain variations caused by the debris flow. Based on the micro-geomorphic analysis, the position and volume of debris deposition in the downstream area of the Chui-Sue river watershed were determined. The simulated results appeared to agree fairly well with the results of the micro-geomorphic study of the area when not affected by other inflow rivers, and the trends of debris distribution in the study area appeared to be fairly consistent.
Network Modeling and Simulation A Practical Perspective
Guizani, Mohsen; Khan, Bilal
2010-01-01
Network Modeling and Simulation is a practical guide to using modeling and simulation to solve real-life problems. The authors give a comprehensive exposition of the core concepts in modeling and simulation, and then systematically address the many practical considerations faced by developers in modeling complex large-scale systems. The authors provide examples from computer and telecommunication networks and use these to illustrate the process of mapping generic simulation concepts to domain-specific problems in different industries and disciplines. Key features: Provides the tools and strate
On lumped models for thermodynamic properties of simulated annealing problems
International Nuclear Information System (INIS)
Andresen, B.; Pedersen, J.M.; Salamon, P.; Hoffmann, K.H.; Mosegaard, K.; Nulton, J.
1987-01-01
The paper describes a new method for the estimation of thermodynamic properties for simulated annealing problems using data obtained during a simulated annealing run. The method works by estimating energy-to-energy transition probabilities and is well adapted to simulations such as simulated annealing, in which the system is never in equilibrium. (orig.)
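The counting step behind such an estimate can be sketched as follows: bin the energies visited during a run and row-normalize the observed transitions. This is only the empirical-estimation step; the thermodynamic quantities the paper derives from these probabilities are not reproduced here:

```python
from collections import defaultdict

def transition_matrix(energies):
    """Estimate energy-to-energy transition probabilities from the
    sequence of energies visited during an annealing run: count the
    observed (from, to) pairs, then normalize each row."""
    counts = defaultdict(lambda: defaultdict(int))
    for e_from, e_to in zip(energies, energies[1:]):
        counts[e_from][e_to] += 1
    probs = {}
    for e_from, row in counts.items():
        total = sum(row.values())
        probs[e_from] = {e_to: c / total for e_to, c in row.items()}
    return probs

# Toy run over integer energy levels; real runs would use energy bins.
p = transition_matrix([2, 1, 2, 1, 1, 0])
print(p[1])  # from energy 1: one-third each to energies 2, 1, and 0
```

Because the counts come from the actual (non-equilibrium) trajectory, the estimate does not presuppose equilibrium, which is the point made in the abstract.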
Evolution Model and Simulation of Profit Model of Agricultural Products Logistics Financing
Yang, Bo; Wu, Yan
2018-03-01
The agricultural products logistics financing and warehousing business mainly involves three parties: agricultural production and processing enterprises, third-party logistics enterprises, and financial institutions. To enable the three parties to achieve a win-win outcome, the article first gives the replicator dynamics and evolutionarily stable strategies governing the three parties' decisions to participate in the business. It then uses the NetLogo simulation platform and a multi-agent modeling and simulation method to establish an evolutionary game simulation model, runs the model under different revenue parameters, and finally analyzes the simulation results. The goal is to make participation in the agricultural products logistics financing warehouse business mutually beneficial for all three parties, thus promoting the smooth operation of the agricultural products logistics business.
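The replicator-dynamics component mentioned above can be sketched for a single two-strategy population; the payoff values and step size are illustrative, and the paper's tripartite payoff structure is model-specific:

```python
def replicator_step(x, payoff_in, payoff_out, dt=0.01):
    """One Euler step of the replicator equation
        dx/dt = x * (1 - x) * (payoff_in - payoff_out)
    for the share x of a population choosing to participate in the
    financing business (generic two-strategy form)."""
    return x + dt * x * (1 - x) * (payoff_in - payoff_out)

x = 0.5
for _ in range(2000):
    x = replicator_step(x, payoff_in=1.2, payoff_out=1.0)
print(round(x, 3))  # the participating share grows toward 1 when it pays more
```

In the tripartite game each of the three populations has its own such equation, with payoffs that depend on the other two populations' current strategy shares.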
Modelling and simulation of a heat exchanger
Xia, Lei; Deabreu-Garcia, J. Alex; Hartley, Tom T.
1991-01-01
Two models for two different control systems are developed for a parallel heat exchanger. First, by spatially lumping a heat exchanger model, a good approximate model of high system order is produced. Model reduction techniques are then applied to obtain low-order models suitable for dynamic analysis and control design. The simulation method is discussed to ensure a valid simulation result.
Modeling and simulation of large HVDC systems
Energy Technology Data Exchange (ETDEWEB)
Jin, H.; Sood, V.K.
1993-01-01
This paper addresses the complexity and amount of work involved in preparing simulation data and implementing various converter control schemes, as well as the excessive simulation times encountered, in the modelling and simulation of large HVDC systems. The Power Electronic Circuit Analysis program (PECAN) is used to address these problems, and a large HVDC system with two dc links is simulated using PECAN. A benchmark HVDC system is studied to compare the simulation results with those from other packages. The simulation times and results are provided in the paper.
Shin, Sun-Hee; Kim, Ok-Yeon; Kim, Dongmin; Lee, Myong-In
2017-07-01
Using 32 CMIP5 (Coupled Model Intercomparison Project Phase 5) models, this study examines the fidelity of simulated cloud amounts and their radiative effects (CREs) in the historical runs driven by observed external radiative forcing for 1850-2005, and their future changes in the RCP (Representative Concentration Pathway) 4.5 scenario runs for 2006-2100. Validation metrics for the historical run are designed to examine the accuracy of the representation of spatial patterns for the climatological mean, and of annual and interannual variations of clouds and CREs. The models show large spread in the simulation of cloud amounts, specifically in the low cloud amount. The observed relationships between cloud amount and the controlling large-scale environment are also reproduced diversely by the various models. Based on the validation metrics, four models (ACCESS1.0, ACCESS1.3, HadGEM2-CC, and HadGEM2-ES) are selected as the best models, and the average of these four models performs more skillfully than the multimodel ensemble average. All models project global-mean SST warming with the increase of greenhouse gases, but the magnitude varies across the simulations between 1 and 2 K, which is largely attributable to differences in the change of cloud amount and distribution. The models that simulate more SST warming show a greater increase in the net CRE due to reduced low cloud and increased incoming shortwave radiation, particularly over marine boundary layer regions in the subtropics. The selected best-performing models project a significant reduction in global-mean cloud amount of about -0.99% K^-1 and a net radiative warming of 0.46 W m^-2 K^-1, suggesting a role of positive cloud feedback in global warming.
Simulation-optimization framework for multi-site multi-season hybrid stochastic streamflow modeling
Srivastav, Roshan; Srinivasan, K.; Sudheer, K. P.
2016-11-01
A simulation-optimization (S-O) framework is developed for the hybrid stochastic modeling of multi-site multi-season streamflows. The multi-objective optimization model formulated is the driver and the multi-site, multi-season hybrid matched block bootstrap model (MHMABB) is the simulation engine within this framework. The multi-site multi-season simulation model is an extension of the existing single-site multi-season simulation model. A robust and efficient evolutionary search based technique, namely the non-dominated sorting based genetic algorithm (NSGA-II), is employed as the solution technique for the multi-objective optimization within the S-O framework. The objective functions employed are related to the preservation of the multi-site critical deficit run sum, and the constraints introduced are concerned with the hybrid model parameter space and the preservation of certain statistics (such as inter-annual dependence and/or skewness of aggregated annual flows). The efficacy of the proposed S-O framework is brought out through a case example from the Colorado River basin. The proposed multi-site multi-season model AMHMABB (whose parameters are obtained from the proposed S-O framework) preserves the temporal as well as the spatial statistics of the historical flows. Also, the other multi-site deficit run characteristics, namely the number of runs, the maximum run length, the mean run sum and the mean run length, are well preserved by the AMHMABB model. Overall, the proposed AMHMABB model shows better streamflow modeling performance than the simulation-based SMHMABB model, plausibly due to the significant role played by: (i) the objective functions related to the preservation of the multi-site critical deficit run sum; (ii) the huge hybrid model parameter space available for the evolutionary search; and (iii) the constraint on the preservation of the inter-annual dependence. Split-sample validation results indicate that the AMHMABB model is
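The deficit-run characteristics listed above (number of runs, maximum run length, mean run sum, mean run length) can be computed from a flow series in a single pass; a sketch, assuming a fixed threshold defines a deficit:

```python
def deficit_run_stats(flows, threshold):
    """Deficit-run statistics of a flow series relative to a threshold:
    a 'run' is a maximal stretch of flows below the threshold, and its
    'sum' is the accumulated deficit (threshold - flow) over the run."""
    runs = []
    length = deficit = 0
    for q in flows:
        if q < threshold:
            length += 1
            deficit += threshold - q
        elif length:
            runs.append((length, deficit))
            length = deficit = 0
    if length:
        runs.append((length, deficit))
    n = len(runs)
    return {
        "number_of_runs": n,
        "max_run_length": max(l for l, _ in runs) if n else 0,
        "mean_run_length": sum(l for l, _ in runs) / n if n else 0.0,
        "mean_run_sum": sum(d for _, d in runs) / n if n else 0.0,
    }

# Two deficit runs below a threshold of 4: flows (3, 2) and flow (1).
stats = deficit_run_stats([5, 3, 2, 6, 7, 1, 4, 6], threshold=4)
print(stats)
```

In the S-O framework such statistics, evaluated at multiple sites, feed the objective functions that the evolutionary search tries to preserve.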
Biocellion: accelerating computer simulation of multicellular biological system models.
Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya
2014-11-01
Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Choi, Eun-Young; Lee, Jeong; Heo, Dong Hyun; Lee, Sang Kwon; Jeon, Min Ku; Hong, Sun Seok; Kim, Sung-Wook; Kang, Hyun Woo; Jeon, Sang-Chae; Hur, Jin-Mok
2017-06-01
Ten electrolytic reduction or oxide reduction (OR) runs of a 0.6 kg scale simulated oxide fuel in a Li2O-LiCl molten salt at 650 °C were conducted using metal anode shrouds. During this procedure, an anode shroud surrounds a platinum anode and discharges hot oxygen gas from the salt to the outside of the OR apparatus, thereby preventing corrosion of the apparatus. In this study, a number of anode shrouds made of various metals were tested. Each metallic anode shroud consisted of a lower porous shroud for the salt phase and an upper nonporous shroud for the gas phase. A stainless steel (STS) wire mesh with a five-ply layer was the material commonly used for the lower porous shroud in the OR runs. The metals tested for the upper nonporous shroud in the different OR runs were STS, nickel, and platinum- or silver-lined nickel. The lower porous shroud showed no significant damage during two consecutive OR runs, but exhibited signs of damage after three or more runs due to thermal stress. The upper nonporous shrouds made of either platinum- or silver-lined nickel showed excellent corrosion resistance to hot oxygen gas, while STS or nickel without any platinum or silver lining exhibited poor corrosion resistance.
Pearson, A.; Pizzuto, J. E.
2015-12-01
Previous work at run-of-river (ROR) dams in northern Delaware has shown that bedload supplied to ROR impoundments can be transported over the dam when impoundments remain unfilled. Transport is facilitated by high levels of sand in the impoundment that lower the critical shear stresses for particle entrainment, and by an inversely sloping sediment ramp connecting the impoundment bed (where the water depth is typically equal to the dam height) with the top of the dam (Pearson and Pizzuto, in press). We demonstrate with one-dimensional bed material transport modeling that bed material can move through impoundments and that equilibrium transport (i.e., a balance between supply to and export from the impoundment, with a constant bed elevation) is possible even when the bed elevation is below the top of the dam. Based on our field work and previous HEC-RAS modeling, we assess bed material transport capacity at the base of the sediment ramp (and ignore the detailed processes carrying sediment up the ramp and over the dam). The hydraulics at the base of the ramp are computed using a weir equation, providing estimates of water depth, velocity, and friction based on the discharge and the sediment grain size distribution of the impoundment. Bedload transport rates are computed using the Wilcock-Crowe equation, and changes in the impoundment's bed elevation are determined by sediment continuity. Our results indicate that impoundments pass the gravel supplied from upstream with deep pools when the gravel supply rate is low, gravel grain sizes are relatively small, sand supply is high, and discharge is high. Conversely, impoundments will tend to fill their pools when the gravel supply rate is high, gravel grain sizes are relatively large, sand supply is low, and discharge is low. The rate of bedload supplied to an impoundment is the primary control on how fast equilibrium transport is reached, with discharge having almost no influence on the timing of equilibrium.
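The weir-equation step described above can be illustrated with a toy calculation. This sketch assumes a simple rectangular weir rating, Q = Cd·b·H^(3/2); the coefficient value and function names are illustrative assumptions, not the authors' HEC-RAS configuration.

```python
def weir_head(Q, b, Cd=1.7):
    """Head H (m) over a rectangular weir crest, inverted from the
    rating Q = Cd * b * H**1.5 (Cd ~ 1.7 in SI units)."""
    return (Q / (Cd * b)) ** (2.0 / 3.0)

def mean_velocity(Q, b, H):
    """Mean flow velocity (m/s) through a section of width b, depth H."""
    return Q / (b * H)
```

With depth and velocity in hand, shear stress and a bedload formula (here Wilcock-Crowe in the paper) can be evaluated at the base of the ramp.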
A Lookahead Behavior Model for Multi-Agent Hybrid Simulation
Directory of Open Access Journals (Sweden)
Mei Yang
2017-10-01
In the military field, multi-agent simulation (MAS) plays an important role in studying wars statistically. For a military simulation system, which involves large-scale entities and generates a very large number of interactions during the runtime, the issue of how to improve the running efficiency is of great concern for researchers. Current solutions mainly use hybrid simulation to obtain fewer updates and synchronizations, where some important continuous models are maintained implicitly to keep the system dynamics, and partial resynchronization (PR) is chosen as the preferable state update mechanism. However, problems such as resynchronization interval selection and cyclic dependency remain unsolved in PR, which easily lead to low update efficiency and infinite looping of the state update process. To address these problems, this paper proposes a lookahead behavior model (LBM) to implement a PR-based hybrid simulation. In LBM, a minimal safe time window is used to predict the interactions between implicit models, upon which the resynchronization interval can be efficiently determined. Moreover, the LBM gives an estimated state value in the lookahead process so as to break the state-dependent cycle. The simulation results show that, compared with traditional mechanisms, LBM requires fewer updates and synchronizations.
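The "minimal safe time window" idea can be sketched abstractly: bound how fast each implicitly maintained continuous state can change, and take the shortest time in which any state could reach its interaction threshold. This is a hypothetical illustration of the principle, not the paper's LBM algorithm.

```python
def safe_time_window(states, thresholds, max_rates):
    """Lower-bound the time until any implicitly maintained continuous
    state could cross its interaction threshold, assuming |dx/dt| <= rate.
    The minimum over all models is a safe resynchronization interval."""
    window = float("inf")
    for x, thr, rate in zip(states, thresholds, max_rates):
        if rate > 0:
            window = min(window, abs(thr - x) / rate)
    return window
```

Resynchronizing no more often than this window guarantees no interaction is missed between updates, which is what lets PR skip intermediate state updates.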
Directory of Open Access Journals (Sweden)
Xuedong Yan
2014-02-01
The collision avoidance warning system is an emerging technology designed to assist drivers in avoiding red-light running (RLR) collisions at intersections. The aim of this paper is to evaluate the effect of auditory warning information on collision avoidance behaviors in RLR pre-crash scenarios and further to examine the causal relationships among the relevant factors. A driving-simulator-based experiment was designed and conducted with 50 participants. The data from the experiments were analyzed by ANOVA and structural equation modeling (SEM). The collision avoidance related variables were measured in terms of brake reaction time (BRT), maximum deceleration and lane deviation in this study. It was found that the collision avoidance warning system can result in lower collision rates compared to the without-warning condition and lead to shorter reaction times, larger maximum deceleration and less lane deviation. Furthermore, the SEM analysis illustrates that the audio warning information in fact has both direct and indirect effects on the occurrence of collisions, and the indirect effect plays a more important role in collision avoidance than the direct effect. Essentially, the auditory warning information can assist drivers in detecting RLR vehicles in a timely manner, thus providing drivers more adequate time and space to decelerate to avoid collisions with conflicting vehicles.
A new synoptic scale resolving global climate simulation using the Community Earth System Model
Small, R. Justin; Bacmeister, Julio; Bailey, David; Baker, Allison; Bishop, Stuart; Bryan, Frank; Caron, Julie; Dennis, John; Gent, Peter; Hsu, Hsiao-ming; Jochum, Markus; Lawrence, David; Muñoz, Ernesto; diNezio, Pedro; Scheitlin, Tim; Tomas, Robert; Tribbia, Joseph; Tseng, Yu-heng; Vertenstein, Mariana
2014-12-01
High-resolution global climate modeling holds the promise of capturing planetary-scale climate modes and small-scale (regional and sometimes extreme) features simultaneously, including their mutual interaction. This paper discusses a new state-of-the-art high-resolution Community Earth System Model (CESM) simulation that was performed with these goals in mind. The atmospheric component was at 0.25° grid spacing, and ocean component at 0.1°. One hundred years of "present-day" simulation were completed. Major results were that annual mean sea surface temperature (SST) in the equatorial Pacific and El-Niño Southern Oscillation variability were well simulated compared to standard resolution models. Tropical and southern Atlantic SST also had much reduced bias compared to previous versions of the model. In addition, the high resolution of the model enabled small-scale features of the climate system to be represented, such as air-sea interaction over ocean frontal zones, mesoscale systems generated by the Rockies, and Tropical Cyclones. Associated single component runs and standard resolution coupled runs are used to help attribute the strengths and weaknesses of the fully coupled run. The high-resolution run employed 23,404 cores, costing 250 thousand processor-hours per simulated year and made about two simulated years per day on the NCAR-Wyoming supercomputer "Yellowstone."
Modeling and Simulation of Low Voltage Arcs
Ghezzi, L.; Balestrero, A.
2010-01-01
Modeling and Simulation of Low Voltage Arcs is an attempt to improve the physical understanding, mathematical modeling and numerical simulation of the electric arcs that are found during current interruptions in low voltage circuit breakers. An empirical description is gained by refined electrical
Model improvements to simulate charging in SEM
Arat, K. T.; Klimpel, T.; Hagen, C. W.
2018-03-01
Charging of insulators is a complex phenomenon to simulate, since the accuracy of the simulations is very sensitive to the interaction of electrons with matter and electric fields. In this study, we report model improvements for a previously developed Monte Carlo simulator to more accurately simulate samples that charge. The improvements include both the modelling of low-energy electron scattering and the charging of insulators. The new first-principle scattering models provide a more realistic charge distribution cloud in the material, and a better match between non-charging simulations and experimental results. Improvements to the charging models mainly focus on the redistribution of charge carriers in the material through electron-beam-induced conductivity (EBIC) and a breakdown model, leading to a smoother distribution of the charges. Combined with a more accurate tracing of low-energy electrons in the electric field, we managed to reproduce the dynamically changing charging contrast due to an induced positive surface potential.
Modeling the short-run effect of fiscal stimuli on GDP : A new semi-closed input-output model
Chen, Quanrun; Dietzenbacher, Erik; Los, Bart; Yang, Cuihong
2016-01-01
In this study, we propose a new semi-closed input-output model, which reconciles input-output analysis with modern consumption theories. It can simulate changes in household consumption behavior when exogenous stimulus policies lead to higher disposable income levels. It is useful for quantifying
Ngada, Narcisse
2015-06-15
The complexity and cost of building and running high-power electrical systems make the use of simulations unavoidable. The simulations available today provide great understanding about how systems really operate. This paper helps the reader to gain an insight into simulation in the field of power converters for particle accelerators. Starting with the definition and basic principles of simulation, two simulation types, as well as their leading tools, are presented: analog and numerical simulations. Some practical applications of each simulation type are also considered. The final conclusion then summarizes the main important items to keep in mind before opting for a simulation tool or before performing a simulation.
Real-time model for simulating a tracked vehicle on deformable soils
Directory of Open Access Journals (Sweden)
Martin Meywerk
2016-05-01
Simulation is one way to gain insight into the behaviour of tracked vehicles on deformable soils. Many publications are available on this topic, but most of the simulations described there cannot be run in real time. The ability to run a simulation in real time is necessary for driving simulators. This article describes an approach for the real-time simulation of a tracked vehicle on deformable soils. The components of the real-time model are as follows: a conventional wheeled vehicle simulated in the Multi Body System software TRUCKSim, a geometric description of the landscape, a track model, and an interaction model based on the Bekker theory and the Janosi–Hanamoto equation between the track and the deformable soil on one hand, and between the track and the vehicle wheels on the other. The landscape, track model, soil model and the interaction are implemented in MATLAB/Simulink. The details of the real-time model are described in this article; a detailed description of the Multi Body System part is omitted. Simulations with the real-time model are compared to measurements and to a detailed Multi Body System–finite element method model of a tracked vehicle. An application of the real-time model in a driving simulator is presented, in which 13 drivers assess the comfort of a passive and an active suspension of a tracked vehicle.
International Nuclear Information System (INIS)
Huang, Ke-Jung; Huang, Chun-Kai; Lin, Pei-Chun
2014-01-01
We report on the development of a robot's dynamic locomotion based on a template which fits the robot's natural dynamics. The developed template is a low degree-of-freedom planar model for running with rolling contact, which we call the rolling spring loaded inverted pendulum (R-SLIP). Originating from a reduced-order model of the RHex-style robot with compliant circular legs, the R-SLIP model also acts as the template for general dynamic running. The model has a torsional spring and a large circular arc as the distributed foot, so during locomotion it rolls on the ground with varied equivalent linear stiffness. This differs from the well-known spring loaded inverted pendulum (SLIP) model with fixed stiffness and ground contact points. Through dimensionless steps-to-fall and return map analysis, within a wide range of parameter spaces, the R-SLIP model is revealed to have self-stable gaits and a larger stability region than that of the SLIP model. The R-SLIP model is then embedded as the reduced-order 'template' in a more complex 'anchor', the RHex-style robot, via various mapping definitions between the template and the anchor. Experimental validation confirms that by merely deploying the stable running gaits of the R-SLIP model on the empirical robot with a simple open-loop control strategy, the robot can easily initiate its dynamic running behaviors with a flight phase and can move with similar body state profiles to those of the model, at all five test speeds. The robot, embedded with the SLIP model but performing walking locomotion, further confirms the importance of finding an adequate template of the robot for dynamic locomotion.
Dark Matter Benchmark Models for Early LHC Run-2 Searches. Report of the ATLAS/CMS Dark Matter Forum
Energy Technology Data Exchange (ETDEWEB)
Abercrombie, Daniel [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States). et al.
2015-07-06
One of the guiding principles of this report is to channel the efforts of the ATLAS and CMS collaborations towards a minimal basis of dark matter models that should influence the design of the early Run-2 searches. At the same time, a thorough survey of realistic collider signals of Dark Matter is a crucial input to the overall design of the search program.
Czech Academy of Sciences Publication Activity Database
Quan Luna, B.; Blahůt, Jan; Camera, C.; Van Westen, C.; Apuani, T.; Jetten, V.; Sterlacchini, S.
2014-01-01
Vol. 72, No. 3 (2014), pp. 645-661. ISSN 1866-6280. Institutional support: RVO:67985891. Keywords: debris flow * FLO-2D * run-out * quantitative hazard and risk assessment * vulnerability * numerical modelling. Subject RIV: DB - Geology; Mineralogy. Impact factor: 1.765, year: 2014
Modeling and simulation of normal and hemiparetic gait
Luengas, Lely A.; Camargo, Esperanza; Sanchez, Giovanni
2015-09-01
Gait is the collective term for the two types of bipedal locomotion, walking and running. This paper focuses on walking. The analysis of human gait is of interest to many different disciplines, including biomechanics, human-movement science, rehabilitation and medicine in general. Here we present a new model that is capable of reproducing the properties of walking, both normal and pathological. The aim of this paper is to establish the biomechanical principles that underlie human walking by using the Lagrange method. Constraint forces and a Rayleigh dissipation function, through which the effects on the tissues during gait are considered, are included. Depending on the value of the factor in the Rayleigh dissipation function, both normal and pathological gait can be simulated. We first apply the model to normal gait and then to permanent hemiparetic gait. Anthropometric data for an adult are used in the simulation; anthropometric data for children could also be used, provided the appropriate anthropometric tables are consulted. Validation of these models includes simulations of passive dynamic gait on level ground. The dynamic walking approach provides a new perspective on gait analysis, focusing on the kinematics and kinetics of gait. There have been studies and simulations of normal human gait, but few have focused on abnormal, especially hemiparetic, gait. Quantitative comparisons of the model predictions with gait measurements show that the model can reproduce the significant characteristics of normal gait.
Harvey, Jason; Moore, Michael
2013-01-01
The General-Use Nodal Network Solver (GUNNS) is a modeling software package that combines nodal analysis and the hydraulic-electric analogy to simulate fluid, electrical, and thermal flow systems. GUNNS is developed by L-3 Communications under the TS21 (Training Systems for the 21st Century) project for NASA Johnson Space Center (JSC), primarily for use in space vehicle training simulators at JSC. It has sufficient compactness and fidelity to model the fluid, electrical, and thermal aspects of space vehicles in real-time simulations running on commodity workstations, for vehicle crew and flight controller training. It has a reusable and flexible component and system design, and a Graphical User Interface (GUI), providing capability for rapid GUI-based simulator development, ease of maintenance, and associated cost savings. GUNNS is optimized for NASA's Trick simulation environment, but can be run independently of Trick.
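The nodal analysis that GUNNS-style solvers combine with the hydraulic-electric analogy reduces to assembling a conductance matrix from the network's links and solving for node potentials (voltages, pressures, or temperatures, depending on the aspect). A self-contained sketch under that analogy; the function names, the ground-node convention, and the toy network are assumptions, not GUNNS's API:

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting (pure Python)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def nodal_solve(n_nodes, links, sources):
    """Node potentials of a linear network.
    links: (i, j, conductance) tuples, j = -1 meaning ground;
    sources: external flow injected at each node."""
    G = [[0.0] * n_nodes for _ in range(n_nodes)]
    for i, j, g in links:
        G[i][i] += g
        if j >= 0:
            G[j][j] += g
            G[i][j] -= g
            G[j][i] -= g
    return solve_linear(G, sources)
```

For example, two unit conductances in series to ground with a unit flow injected at the far node yield potentials of 2 and 1 at the two nodes, exactly as Kirchhoff's laws require; the same assembly works whether the conductances model resistors, pipes, or thermal paths.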
Whole-building Hygrothermal Simulation Model
DEFF Research Database (Denmark)
Rode, Carsten; Grau, Karl
2003-01-01
An existing integrated simulation tool for dynamic thermal simulation of building was extended with a transient model for moisture release and uptake in building materials. Validation of the new model was begun with comparison against measurements in an outdoor test cell furnished with single...... materials. Almost quasi-steady, cyclic experiments were used to compare the indoor humidity variation and the numerical results of the integrated simulation tool with the new moisture model. Except for the case with chipboard as furnishing, the predictions of indoor humidity with the detailed model were...
Search for non-standard model signatures in the WZ/ZZ final state at CDF run II
Energy Technology Data Exchange (ETDEWEB)
Norman, Matthew [Univ. of California, San Diego, CA (United States)
2009-01-01
This thesis discusses a search for non-Standard Model physics in heavy diboson production in the dilepton-dijet final state, using 1.9 fb^{-1} of data from the CDF Run II detector. New limits are set on the anomalous coupling parameters for ZZ and WZ production based on limiting the production cross-section at high ŝ. Additionally, limits are set on the direct decay of new physics to ZZ and WZ diboson pairs. The nature and parameters of the CDF Run II detector are discussed, as are the influences that it has on the methods of our analysis.
Search for non-standard model signatures in the WZ/ZZ final state at CDF Run II
International Nuclear Information System (INIS)
Norman, Matthew
2009-01-01
This thesis discusses a search for non-Standard Model physics in heavy diboson production in the dilepton-dijet final state, using 1.9 fb^{-1} of data from the CDF Run II detector. New limits are set on the anomalous coupling parameters for ZZ and WZ production based on limiting the production cross-section at high ŝ. Additionally, limits are set on the direct decay of new physics to ZZ and WZ diboson pairs. The nature and parameters of the CDF Run II detector are discussed, as are the influences that it has on the methods of our analysis.
van Valkenhoef, Gert; Tervonen, Tommi; Postmus, Douwe
2014-01-01
In our previous work published in this journal, we showed how the Hit-And-Run (HAR) procedure enables efficient sampling of criteria weights from a space formed by restricting a simplex with arbitrary linear inequality constraints. In this short communication, we note that the method for generating
Simulation modeling for the health care manager.
Kennedy, Michael H
2009-01-01
This article addresses the use of simulation software to solve administrative problems faced by health care managers. Spreadsheet add-ins, process simulation software, and discrete event simulation software are available at a range of costs and levels of complexity. All use the Monte Carlo method to realistically integrate probability distributions into models of the health care environment. Problems typically addressed by health care simulation modeling are facility planning, resource allocation, staffing, patient flow and wait time, routing and transportation, supply chain management, and process improvement.
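The Monte Carlo approach mentioned above can be illustrated with a toy single-server wait-time model of the kind used for patient-flow questions; the clinic parameters and exponential distributions here are hypothetical, not drawn from the article.

```python
import random

def simulate_clinic_wait(n_patients, mean_interarrival, mean_service, seed=1):
    """Monte Carlo estimate of the mean patient wait (minutes) at a
    single-server clinic with exponential interarrival and service times."""
    rng = random.Random(seed)
    t_arrive = 0.0      # arrival time of the current patient
    server_free = 0.0   # time at which the server next becomes free
    total_wait = 0.0
    for _ in range(n_patients):
        t_arrive += rng.expovariate(1.0 / mean_interarrival)
        start = max(t_arrive, server_free)
        total_wait += start - t_arrive
        server_free = start + rng.expovariate(1.0 / mean_service)
    return total_wait / n_patients
```

Running such a model many times with different staffing or arrival assumptions is exactly how the simulation packages the article surveys answer staffing and wait-time questions.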
Protein Simulation Data in the Relational Model.
Simms, Andrew M; Daggett, Valerie
2012-10-01
High performance computing is leading to unprecedented volumes of data. Relational databases offer a robust and scalable model for storing and analyzing scientific data. However, these features do not come without a cost: significant design effort is required to build a functional and efficient repository. Modeling protein simulation data in a relational database presents several challenges: the data captured from individual simulations are large, multi-dimensional, and must integrate with both simulation software and external data sites. Here we present the dimensional design and relational implementation of a comprehensive data warehouse for storing and analyzing molecular dynamics simulations using SQL Server.
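A miniature version of such a dimensional design can be sketched with one dimension table for simulations and one fact table for per-frame measurements. This uses SQLite rather than SQL Server, and the table, column, and sample values are illustrative assumptions, not the warehouse's actual schema:

```python
import sqlite3

# Star-schema-style sketch: simulation dimension + per-frame fact table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE simulation (
    sim_id        INTEGER PRIMARY KEY,
    protein       TEXT NOT NULL,
    temperature_k REAL NOT NULL
);
CREATE TABLE frame_fact (
    sim_id INTEGER REFERENCES simulation(sim_id),
    frame  INTEGER NOT NULL,
    rmsd   REAL,
    energy REAL,
    PRIMARY KEY (sim_id, frame)
);
""")
conn.execute("INSERT INTO simulation VALUES (1, 'ubiquitin', 298.0)")
conn.executemany("INSERT INTO frame_fact VALUES (1, ?, ?, ?)",
                 [(i, 0.1 * i, -100.0 - i) for i in range(5)])
# Analysis queries then become joins/aggregations over the fact table.
row = conn.execute("""
    SELECT s.protein, COUNT(*), AVG(f.rmsd)
    FROM frame_fact f JOIN simulation s ON s.sim_id = f.sim_id
    GROUP BY s.sim_id
""").fetchone()
```

The payoff of the dimensional layout is that cross-simulation aggregates like the one above stay simple even as the fact table grows to billions of frames.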
Matta, R.; Perotti, E.
2016-01-01
Can the risk of losses upon premature liquidation produce bank runs? We show how a unique run equilibrium driven by asset liquidity risk arises even under minimal fundamental risk. To study the role of illiquidity we introduce realistic norms on bank default, such that mandatory stay is triggered
Modeling and simulation of blood collection systems.
Alfonso, Edgar; Xie, Xiaolan; Augusto, Vincent; Garraud, Olivier
2012-03-01
This paper addresses the modeling and simulation of blood collection systems in France for both fixed site and mobile blood collection with walk in whole blood donors and scheduled plasma and platelet donors. Petri net models are first proposed to precisely describe different blood collection processes, donor behaviors, their material/human resource requirements and relevant regulations. Petri net models are then enriched with quantitative modeling of donor arrivals, donor behaviors, activity times and resource capacity. Relevant performance indicators are defined. The resulting simulation models can be straightforwardly implemented with any simulation language. Numerical experiments are performed to show how the simulation models can be used to select, for different walk in donor arrival patterns, appropriate human resource planning and donor appointment strategies.
Modeling and Simulation of Matrix Converter
DEFF Research Database (Denmark)
Liu, Fu-rong; Klumpner, Christian; Blaabjerg, Frede
2005-01-01
This paper discusses the modeling and simulation of the matrix converter. Two models of the matrix converter are presented: one is based on indirect space vector modulation and the other is based on the power balance equation. The basis of these two models is given and the modeling process is introduced...
Modeling and simulation of dust behaviors behind a moving vehicle
Wang, Jingfang
Simulation of physically realistic complex dust behaviors is a difficult and attractive problem in computer graphics. A fast, interactive and visually convincing model of dust behaviors behind moving vehicles is very useful in computer simulation, training, education, art, advertising, and entertainment. In my dissertation, an experimental interactive system has been implemented for the simulation of dust behaviors behind moving vehicles. The system includes physically-based models, particle systems, rendering engines and a graphical user interface (GUI). I have employed several vehicle models including tanks, cars, and jeeps to test and simulate in different scenarios and conditions: calm weather, windy conditions, the vehicle turning left or right, and vehicle simulation controlled by users from the GUI. I have also tested the factors that affect the physical behavior and graphical appearance of the dust particles through the GUI or off-line scripts. The simulations are done on a Silicon Graphics Octane station. The animation of dust behaviors is achieved by physically-based modeling and simulation. The flow around a moving vehicle is modeled using computational fluid dynamics (CFD) techniques. I implement a primitive-variable, pressure-correction approach to solve the three-dimensional incompressible Navier-Stokes equations in a volume covering the moving vehicle. An alternating-direction implicit (ADI) method is used for the solution of the momentum equations, with a successive-over-relaxation (SOR) method for the solution of the Poisson pressure equation. Boundary conditions are defined and simplified according to their dynamic properties. The dust particle dynamics is modeled using particle systems, statistics, and procedural modeling techniques. Graphics and real-time simulation techniques, such as dynamics synchronization, motion blur, blending, and clipping have been employed in the rendering to achieve realistic appearing dust
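The SOR step used for the Poisson pressure equation can be shown in isolation. This is a generic textbook SOR solver on a 2-D unit square with zero boundary values, not the dissertation's 3-D implementation; grid size and relaxation factor are illustrative.

```python
def sor_poisson(rhs, n, omega=1.5, iters=2000):
    """Solve the 2-D Poisson equation  laplacian(p) = rhs  on an n x n
    unit-square grid with p = 0 on the boundary, via successive
    over-relaxation (SOR) with relaxation factor omega."""
    h = 1.0 / (n - 1)
    p = [[0.0] * n for _ in range(n)]
    for _ in range(iters):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # Gauss-Seidel update from the 5-point stencil...
                gs = 0.25 * (p[i + 1][j] + p[i - 1][j] +
                             p[i][j + 1] + p[i][j - 1] - h * h * rhs[i][j])
                # ...over-relaxed toward the new value.
                p[i][j] += omega * (gs - p[i][j])
    return p
```

In a pressure-correction scheme, this solve (in 3-D, with the velocity divergence as the right-hand side) is what enforces incompressibility at each time step.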
Simulation models for tokamak plasmas
International Nuclear Information System (INIS)
Dimits, A.M.; Cohen, B.I.
1992-01-01
Two developments in the nonlinear simulation of tokamak plasmas are described: (A) Simulation algorithms that use quasiballooning coordinates have been implemented in a 3D fluid code and a 3D partially linearized (Δf) particle code. In quasiballooning coordinates, one of the coordinate directions is closely aligned with that of the magnetic field, allowing both optimal use of the grid resolution for structures highly elongated along the magnetic field and implementation of the correct periodicity conditions with no discontinuities in the toroidal direction. (B) Progress on the implementation of a like-particle collision operator suitable for use in partially linearized particle codes is reported. The binary collision approach is shown to be unusable for this purpose. The algorithm under development is a complete version of the test-particle plus source-field approach that was suggested and partially implemented by Xu and Rosenbluth
Richardson, A. D.; Nacp Interim Site Synthesis Participants
2010-12-01
Phenology represents a critical intersection point between organisms and their growth environment. It is for this reason that phenology is a sensitive and robust integrator of the biological impacts of year-to-year climate variability and longer-term climate change on natural systems. However, it is perhaps equally important that phenology, by controlling the seasonal activity of vegetation on the land surface, plays a fundamental role in regulating ecosystem processes, competitive interactions, and feedbacks to the climate system. Unfortunately, the phenological sub-models implemented in most state-of-the-art ecosystem models and land surface schemes are overly simplified. We quantified model errors in the representation of the seasonal cycles of leaf area index (LAI), gross ecosystem photosynthesis (GEP), and net ecosystem exchange of CO2. Our analysis was based on site-level model runs (14 different models) submitted to the North American Carbon Program (NACP) Interim Synthesis, and long-term measurements from 10 forested (5 evergreen conifer, 5 deciduous broadleaf) sites within the AmeriFlux and Fluxnet-Canada networks. Model predictions of the seasonality of LAI and GEP were unacceptable, particularly in spring, and especially for deciduous forests. This is despite an historical emphasis on deciduous forest phenology, and the perception that controls on spring phenology are better understood than autumn phenology. Errors of up to 25 days in predicting “spring onset” transition dates were common, and errors of up to 50 days were observed. For deciduous sites, virtually every model was biased towards spring onset being too early, and autumn senescence being too late. Thus, models predicted growing seasons that were far too long for deciduous forests. For most models, errors in the seasonal representation of deciduous forest LAI were highly correlated with errors in the seasonality of both GPP and NEE, indicating the importance of getting the underlying
Energy Technology Data Exchange (ETDEWEB)
Wilke, Jeremiah J [Sandia National Laboratories (SNL-CA), Livermore, CA (United States); Kenny, Joseph P. [Sandia National Laboratories (SNL-CA), Livermore, CA (United States)
2015-02-01
Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, a design can first iterate through a simulator. This is particularly useful when test beds cannot be used, i.e. to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the structural simulation toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics, such as call graphs, to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.
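The discrete event core that such simulators build on reduces to a time-ordered event queue: pop the earliest event, advance virtual time, run its callback, which may schedule further events. A minimal sketch; the class name and the toy message-latency example are assumptions, not SST's API.

```python
import heapq

class Simulator:
    """Minimal discrete event core: events are (time, seq, callback)."""
    def __init__(self):
        self.now = 0.0
        self._seq = 0      # tie-breaker so callbacks are never compared
        self._queue = []

    def schedule(self, delay, callback):
        heapq.heappush(self._queue, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self):
        while self._queue:
            self.now, _, cb = heapq.heappop(self._queue)
            cb(self)

# Toy model: at t=1.0 a "rank" sends a message with 0.5 latency;
# the receiver logs the virtual arrival time.
log = []
sim = Simulator()
sim.schedule(1.0, lambda s: s.schedule(0.5, lambda s2: log.append(s2.now)))
sim.run()
```

SST's contribution is layering real application code (via user-space threads) on top of exactly this kind of core, so virtual time advances only at simulated communication and compute events.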
Nieto, Paulino José García; García-Gonzalo, Esperanza; Vilán, José Antonio Vilán; Robleda, Abraham Segade
2015-12-01
The main aim of this research work is to build a new practical hybrid regression model to predict the milling tool wear in a regular cut as well as the entry cut and exit cut of a milling tool. The model was based on Particle Swarm Optimization (PSO) in combination with support vector machines (SVMs). This optimization mechanism involved kernel parameter setting in the SVM training procedure, which significantly influences the regression accuracy. Bearing this in mind, a PSO-SVM-based model, which is based on statistical learning theory, was successfully used here to predict the milling tool flank wear (output variable) as a function of the following input variables: the time duration of the experiment, depth of cut, feed, type of material, etc. To accomplish the objective of this study, the experimental dataset represents experiments from runs on a milling machine under various operating conditions. In this way, data sampled by three different types of sensors (acoustic emission sensor, vibration sensor and current sensor) were acquired at several positions. A second aim is to determine the factors with the greatest bearing on the milling tool flank wear with a view to proposing improvements to the milling machine. Firstly, this hybrid PSO-SVM-based regression model captures the main insight of statistical learning theory in order to obtain a good prediction of the dependence between the flank wear (output variable) and the input variables (time, depth of cut, feed, etc.). Indeed, regression with optimal hyperparameters was performed and a determination coefficient of 0.95 was obtained. The agreement of this model with the experimental data confirmed its good performance. Secondly, the main advantages of this PSO-SVM-based model are its capacity to produce a simple, easy-to-interpret model, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, the main conclusions of this study are presented.
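The PSO mechanism used above for kernel parameter setting can be shown on a stand-in objective. This generic PSO sketch minimizes a simple quadratic in place of the SVM cross-validation error; the inertia and acceleration constants are common textbook choices, not the study's values.

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100, seed=0):
    """Basic particle swarm optimization of a continuous objective f.
    bounds: list of (lo, hi) per dimension. Returns (best_x, best_f)."""
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]               # personal bests
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # global best
    w, c1, c2 = 0.7, 1.5, 1.5                # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(max(xs[i][d] + vs[i][d], bounds[d][0]),
                               bounds[d][1])
            fi = f(xs[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = xs[i][:], fi
    return gbest, gbest_f
```

In the PSO-SVM pairing, `f` would be the cross-validated SVM regression error as a function of the kernel hyperparameters (e.g. C, epsilon, gamma), with the swarm searching that hyperparameter space.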
LHV prediction models and LHV effect on the performance of CI engine running with biodiesel blends
International Nuclear Information System (INIS)
Tesfa, B.; Gu, F.; Mishra, R.; Ball, A.D.
2013-01-01
Highlights: • Lower heating values of neat biodiesel and its blends were measured experimentally. • Lower heating value prediction models were developed based on the density and viscosity values of the fuel. • The prediction models were validated against measured values and previous models. • The prediction models were used to predict the lower heating value of 24 biodiesel feedstock types produced globally. • The effects of lower heating value on brake specific fuel consumption and thermal efficiency were investigated. - Abstract: The heating value of a fuel is one of its most important physical properties, and is used for the design and numerical simulation of combustion processes within internal combustion (IC) engines. Recently, there has been a significant increase in the use of dual fuel and blended fuels in compression ignition (CI) engines. Most of the blended fuels include biodiesel as one of the constituents, and hence the objective of this study is to investigate the effect of biodiesel content on the lower heating value (LHV) and to develop new LHV prediction models that correlate the LHV with biodiesel fraction, density and viscosity. Furthermore, this study also investigated the effects of the LHV on CI engine performance parameters experimentally. To achieve the above-mentioned objectives, the density, viscosity and LHV of rapeseed oil biodiesel, corn oil biodiesel and waste oil biodiesel at different blend fraction values (B0, B5, B10, B20, B50, B75, and B100, where ‘B5’ denotes a blend of 5% biodiesel and 95% mineral diesel, etc.) were measured as per the EN ISO 3675:1998, EN ISO 3104:1996 and DIN 51900 standards. The engine experimental work was conducted on a four-cylinder, four-stroke, direct injection (DI) and turbocharged diesel engine by using rapeseed oil and normal diesel blends. Based on the experimental results, models were developed which have the capability to predict the LHV corresponding to different fractions, densities and viscosities of
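The simplest form such an LHV-versus-blend-fraction model can take is linear mixing fitted by least squares. The sketch below uses invented measurements (not the paper's data) and assumes the hypothetical form LHV(x) = a + b·x, where x is the biodiesel mass fraction:

```python
import numpy as np

# Hypothetical lower heating values (MJ/kg) for diesel/biodiesel blends;
# fractions follow the B0..B100 naming used in the abstract. Illustrative only.
frac = np.array([0.00, 0.05, 0.10, 0.20, 0.50, 0.75, 1.00])
lhv = np.array([42.8, 42.6, 42.4, 42.0, 40.6, 39.4, 38.2])

# Fit LHV(x) = a + b*x: a estimates the mineral-diesel LHV,
# a + b estimates the neat-biodiesel LHV.
A = np.column_stack([np.ones_like(frac), frac])
(a, b), *_ = np.linalg.lstsq(A, lhv, rcond=None)
print(f"LHV(x) ≈ {a:.2f} {b:+.2f}·x  MJ/kg")
print("predicted B30:", round(a + b * 0.30, 2), "MJ/kg")
```

The paper's actual models additionally bring in density and viscosity as regressors; extending the design matrix `A` with those columns follows the same least-squares pattern.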
A model management system for combat simulation
Dolk, Daniel R.
1986-01-01
The design and implementation of a model management system to support combat modeling is discussed. Structured modeling is introduced as a formalism for representing mathematical models. A relational information resource dictionary system is developed which can accommodate structured models. An implementation is described. Structured modeling is then compared to Jackson System Development (JSD) as a methodology for facilitating discrete event simulation. JSD is currently better at representin...
SAPS simulation with GITM/UCLA-RCM coupled model
Lu, Y.; Deng, Y.; Guo, J.; Zhang, D.; Wang, C. P.; Sheng, C.
2017-12-01
Ion velocity in the sub-auroral region observed by satellites during storm time often shows a significant westward component. This high-speed westward stream is distinct from the normal convection pattern; such events are called Sub-Auroral Polarization Streams (SAPS). In the March 17th, 2013 storm, the DMSP F18 satellite observed several SAPS cases when crossing the sub-auroral region. In this study, the Global Ionosphere Thermosphere Model (GITM) has been coupled to the UCLA-RCM model to simulate the impact of SAPS during the March 2013 event on the ionosphere/thermosphere. The particle precipitation and electric field from RCM have been used to drive GITM, and the conductance calculated from GITM is fed back to RCM to make the coupling self-consistent. Comparisons of GITM simulations with different SAPS specifications will be conducted. The neutral wind from the simulation will be compared with GOCE satellite data. The comparison between runs with and without SAPS will separate the effect of SAPS from other effects and illustrate the impact on TIDs/TADs propagating in both poleward and equatorward directions.
HVDC System Characteristics and Simulation Models
Energy Technology Data Exchange (ETDEWEB)
Moon, S.I.; Han, B.M.; Jang, G.S. [Electric Engineering and Science Research Institute, Seoul (Korea)
2001-07-01
This report deals with the AC-DC power system simulation method by PSS/E and EUROSTAG for the development of a strategy for the reliable operation of the Cheju-Haenam interconnected system. The simulation using both programs is performed to analyze HVDC simulation models. In addition, the control characteristics of the Cheju-Haenam HVDC system as well as Cheju AC system characteristics are described in this work. (author). 104 figs., 8 tabs.
Physically realistic modeling of maritime training simulation
Cieutat, Jean-Marc
2003-01-01
Maritime training simulation is an important part of maritime teaching, and it requires a great deal of scientific and technical skill. In this framework, where the real-time constraint has to be maintained, not all physical phenomena can be studied; only the most visually significant phenomena, relating to the natural elements and the ship's behaviour, are reproduced. Our swell model, based on a surface-wave simulation approach, makes it possible to simulate the shape and propagation of a regular train of waves f...
Software-Engineering Process Simulation (SEPS) model
Lin, C. Y.; Abdel-Hamid, T.; Sherif, J. S.
1992-01-01
The Software Engineering Process Simulation (SEPS) model, developed at JPL, is described. SEPS is a dynamic simulation model of the software project development process. It uses the feedback principles of system dynamics to simulate the dynamic interactions among various software life cycle development activities and management decision-making processes. The model is designed to be a planning tool to examine tradeoffs of cost, schedule, and functionality, and to test the implications of different managerial policies on a project's outcome. Furthermore, SEPS will enable software managers to gain a better understanding of the dynamics of software project development and perform postmortem assessments.
Systematic modelling and simulation of refrigeration systems
DEFF Research Database (Denmark)
Rasmussen, Bjarne D.; Jakobsen, Arne
1998-01-01
The task of developing a simulation model of a refrigeration system can be very difficult and time consuming. In order for this process to be effective, a systematic method for developing the system model is required. This method should aim at guiding the developer to clarify the purpose of the simulation, to select appropriate component models and to set up the equations in a well-arranged way. In this paper the outline of such a method is proposed and examples showing the use of this method for simulation of refrigeration systems are given.
Hulme, A; Salmon, P M; Nielsen, R O; Read, G J M; Finch, C F
2017-11-01
There is a need for an ecological and complex systems approach to better understand the development and prevention of running-related injury (RRI). In a previous article, we proposed a prototype model of the Australian recreational distance running system based on the Systems Theoretic Accident Mapping and Processes (STAMP) method. That model included the influence of political, organisational, managerial, and sociocultural determinants alongside individual-level factors in relation to RRI development. The purpose of this study was to validate that prototype model by drawing on the expertise of both systems thinking and distance running experts. This study used a modified Delphi technique involving a series of online surveys (December 2016 - March 2017). The initial survey was divided into four sections containing a total of seven questions pertaining to different features associated with the prototype model. Consensus in opinion about the validity of the prototype model was reached when the number of experts who agreed or disagreed with a survey statement was ≥75% of the total number of respondents. A total of two Delphi rounds was needed to validate the prototype model. Out of a total of 51 experts who were initially contacted, 50.9% (n = 26) completed the first round of the Delphi, and 92.3% (n = 24) of those in the first round participated in the second. Most of the 24 full participants considered themselves to be running experts (66.7%), and approximately a third indicated their expertise as systems thinkers (33.3%). After the second round, 91.7% of the experts agreed that the prototype model was a valid description of the Australian distance running system. This is the first study to formally examine the development and prevention of RRI from an ecological and complex systems perspective. The validated model of the Australian distance running system facilitates theoretical advancement in terms of identifying practical system
CMB constraints on running non-Gaussianity
Oppizzi, Filippo; Liguori, Michele; Renzi, Alessandro; Arroja, Frederico; Bartolo, Nicola
2017-01-01
We develop a complete set of tools for CMB forecasting, simulation and estimation of primordial running bispectra, arising from a variety of curvaton and single-field (DBI) models of Inflation. We validate our pipeline using mock CMB running non-Gaussianity realizations and test it on real data by obtaining experimental constraints on the $f_{\rm NL}$ running spectral index, $n_{\rm NG}$, using WMAP 9-year data. Our final bounds (68% C.L.) read $-0.3 < n_{\rm NG}
Deriving simulators for hybrid Chi models
Beek, van D.A.; Man, K.L.; Reniers, M.A.; Rooda, J.E.; Schiffelers, R.R.H.
2006-01-01
The hybrid Chi language is a formalism for modeling, simulation and verification of hybrid systems. The formal semantics of hybrid Chi allows the definition of provably correct implementations for simulation, verification and real-time control. This paper discusses the principles of deriving an
Modeling and simulation for RF system design
Frevert, Ronny; Jancke, Roland; Knöchel, Uwe; Schwarz, Peter; Kakerow, Ralf; Darianian, Mohsen
2005-01-01
Focusing on RF-specific modeling and simulation methods, and system and circuit level descriptions, this work contains application-oriented training material. Accompanied by a CD-ROM, it combines the presentation of a mixed-signal design flow, an introduction to VHDL-AMS and Verilog-A, and the application of commercially available simulators.
Climate simulations for 1880-2003 with GISS modelE
International Nuclear Information System (INIS)
Hansen, J.; Lacis, A.; Miller, R.; Schmidt, G.A.; Russell, G.; Canuto, V.; Del Genio, A.; Hall, T.; Hansen, J.; Sato, M.; Kharecha, P.; Nazarenko, L.; Aleinov, I.; Bauer, S.; Chandler, M.; Faluvegi, G.; Jonas, J.; Ruedy, R.; Lo, K.; Cheng, Y.; Lacis, A.; Schmidt, G.A.; Del Genio, A.; Miller, R.; Cairns, B.; Hall, T.; Baum, E.; Cohen, A.; Fleming, E.; Jackman, C.; Friend, A.; Kelley, M.
2007-01-01
We carry out climate simulations for 1880-2003 with GISS modelE driven by ten measured or estimated climate forcings. An ensemble of climate model runs is carried out for each forcing acting individually and for all forcing mechanisms acting together. We compare side-by-side simulated climate change for each forcing, all forcings, observations, unforced variability among model ensemble members, and, if available, observed variability. Discrepancies between observations and simulations with all forcings are due to model deficiencies, inaccurate or incomplete forcings, and imperfect observations. Although there are notable discrepancies between model and observations, the fidelity is sufficient to encourage use of the model for simulations of future climate change. By using a fixed, well-documented model and accurately defining the 1880-2003 forcings, we aim to provide a benchmark against which the effect of improvements in the model, climate forcings, and observations can be tested. Principal model deficiencies include unrealistically weak tropical El Nino-like variability and a poor distribution of sea ice, with too much sea ice in the Northern Hemisphere and too little in the Southern Hemisphere. The greatest uncertainties in the forcings are the temporal and spatial variations of anthropogenic aerosols and their indirect effects on clouds. (authors)
Magnetosphere Modeling: From Cartoons to Simulations
Gombosi, T. I.
2017-12-01
Over the last half-century, physics-based global computer simulations have become a bridge between experiment and basic theory, and now represent the "third pillar" of geospace research. Today, many of our scientific publications utilize large-scale simulations to interpret observations, test new ideas, plan campaigns, or design new instruments. Realistic simulations of the complex Sun-Earth system have been made possible by the dramatically increased power of both computing hardware and numerical algorithms. Early magnetosphere models were based on simple E&M concepts (like the Chapman-Ferraro cavity) and hydrodynamic analogies (bow shock). At the beginning of the space age, current system models were developed, culminating in the sophisticated Tsyganenko-type description of the magnetic configuration. The first 3D MHD simulations of the magnetosphere were published in the early 1980s. A decade later there were several competing global models that were able to reproduce many fundamental properties of the magnetosphere. The leading models included the impact of the ionosphere by using a height-integrated electric potential description. Dynamic coupling of global and regional models started in the early 2000s by integrating a ring current and a global magnetosphere model. It has been recognized for quite some time that plasma kinetic effects play an important role. Presently, global hybrid simulations of the dynamic magnetosphere are expected to be possible on exascale supercomputers, while fully kinetic simulations with realistic mass ratios are still decades away. In the 2010s several groups started to experiment with PIC simulations embedded in large-scale 3D MHD models. Presently this integrated MHD-PIC approach is at the forefront of magnetosphere simulations, and this technique is expected to lead to some important advances in our understanding of magnetospheric physics. This talk will review the evolution of magnetosphere modeling from cartoons to current systems
Siegfried, Robert
2014-01-01
Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing structure and dynamics of agent-based models in detail. Based on this evaluation the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore he presents parallel and distributed simulation approaches for execution of agent-based models -from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize underlying hard
Directory of Open Access Journals (Sweden)
Macide ÇİÇEK
2005-01-01
In this study, an Expectations-Augmented Phillips Curve model is employed to investigate the link between inflation and unit labor costs, the output gap (a proxy for demand shocks), the real exchange rate (a proxy for supply shocks) and price expectations for Turkey, using monthly data from 2000:01 to 2004:12. The methodology uses unit root tests, the Johansen cointegration test to examine the existence of possible long-run relationships among the variables included in the model, and a single-equation error correction model for the inflation equation, estimated by OLS, to examine the short-run dynamics of inflation. It is found that in the long run, the mark-up behaviour of output prices over unit labor costs is the main cause of inflation, the real exchange rate has a rather large impact on reducing inflation, and demand shocks do not lead to an increase in prices. The short-run dynamics of the inflation equation indicate that supply shocks are the determinant of inflation in the short run. It is also found that the exchange rate is the variable that triggers an inflation adjustment most rapidly in the short run.
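The two-step logic described here (a long-run cointegrating regression plus a single-equation error-correction model estimated by OLS) can be sketched on synthetic data. The data-generating process below is invented and far simpler than the Turkish dataset; it exists only to show the mechanics, with prices tracking unit labour costs through a mark-up:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 400
# Synthetic cointegrated pair: I(1) unit labour costs, prices = mark-up + noise.
ulc = np.cumsum(rng.standard_normal(T))
p = 1.2 * ulc + rng.standard_normal(T)

def ols(y, X):
    """OLS with an intercept; returns coefficients and residuals."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

# Step 1: long-run (cointegrating) regression, Engle-Granger style.
beta_lr, resid = ols(p, ulc)

# Step 2: short-run error-correction model,
#   Δp_t = c + φ·Δulc_t + λ·resid_{t-1} + ε_t,  expecting λ < 0
# (deviations from the long-run mark-up relation are corrected).
dp, dulc = np.diff(p), np.diff(ulc)
beta_ecm, _ = ols(dp, np.column_stack([dulc, resid[:-1]]))

print("long-run mark-up coefficient:", beta_lr[1])
print("error-correction coefficient:", beta_ecm[2])
```

A negative error-correction coefficient is what signals that inflation adjusts back toward the long-run relation, which is the role the exchange-rate and mark-up terms play in the paper's estimated equation.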
NUMERICAL SIMULATION AND MODELING OF UNSTEADY FLOW ...
African Journals Online (AJOL)
2014-06-30
... objective of this study is to control the simulation of unsteady flows around structures. ... Aerospace, our results were in good agreement with experimental ... Two-Equation Eddy-Viscosity Turbulence Models for Engineering.
SEIR model simulation for Hepatitis B
Side, Syafruddin; Irwan, Mulbar, Usman; Sanusi, Wahidah
2017-09-01
Mathematical modelling and simulation of Hepatitis B are discussed in this paper. The population is divided into four compartments, namely: Susceptible, Exposed, Infected and Recovered (SEIR). Several factors affecting the population in this model are vaccination, immigration and emigration occurring in the population. The SEIR model yields a non-linear 4-D system of Ordinary Differential Equations (ODEs), which is then reduced to 3-D. A SEIR model simulation was undertaken to predict the number of Hepatitis B cases. The results of the simulation indicate that the number of Hepatitis B cases will increase and then decrease over several months. The simulation using the number of cases in Makassar also found a basic reproduction number less than one, which means that Makassar city is not an endemic area for Hepatitis B.
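A minimal sketch of how such a SEIR system is integrated numerically is given below (forward Euler). The parameters are illustrative, not the paper's Makassar estimates (here R0 > 1, so the run shows the rise-then-fall shape the abstract describes); vaccination is folded in as a constant rate v:

```python
import numpy as np

# Illustrative rates: transmission, incubation, recovery, vaccination.
beta, sigma, gamma_r, v = 0.4, 0.1, 0.05, 0.0

def seir_step(state, dt):
    """One forward-Euler step of the 4-D SEIR ODE system."""
    S, E, I, R = state
    N = S + E + I + R
    dS = -beta * S * I / N - v * S
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma_r * I
    dR = gamma_r * I + v * S
    return state + dt * np.array([dS, dE, dI, dR])

state = np.array([0.99, 0.0, 0.01, 0.0])   # fractions of the population
peak_I = state[2]
for _ in range(4000):                       # dt = 0.1 → 400 time units
    state = seir_step(state, 0.1)
    peak_I = max(peak_I, state[2])

R0 = beta / gamma_r   # basic reproduction number for this parameterisation
print("R0 =", R0, " peak infected fraction =", round(peak_I, 3))
```

Note that the four derivatives sum to zero, so total population is conserved exactly; that invariant is also what allows the 4-D system to be reduced to 3-D, as the abstract mentions.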
Maintenance Personnel Performance Simulation (MAPPS) model
International Nuclear Information System (INIS)
Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Knee, H.E.; Haas, P.M.
1984-01-01
A stochastic computer model for simulating the actions and behavior of nuclear power plant maintenance personnel is described. The model considers personnel, environmental, and motivational variables to yield predictions of maintenance performance quality and time to perform. The model has been fully developed and sensitivity tested. Additional evaluation of the model is now taking place.
Computer simulations of the random barrier model
DEFF Research Database (Denmark)
Schrøder, Thomas; Dyre, Jeppe
2002-01-01
A brief review of experimental facts regarding ac electronic and ionic conduction in disordered solids is given, followed by a discussion of what is perhaps the simplest realistic model, the random barrier model (symmetric hopping model). Results from large-scale computer simulations are presented...
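A toy version of such a hopping simulation can convey the idea: a single carrier on a 1-D ring with quenched random barriers and Arrhenius jump rates. The 1-D geometry, the uniform barrier distribution, and all parameter values are illustrative simplifications, not the paper's large-scale model:

```python
import numpy as np

rng = np.random.default_rng(2)
L = 200                                   # ring of L sites
# Quenched random energy barriers between neighbouring sites
# (symmetric: the same barrier is crossed in either direction).
barriers = rng.uniform(0.0, 5.0, size=L)
T_red = 1.0                               # reduced temperature k_B*T
rates = np.exp(-barriers / T_red)         # Arrhenius acceptance per barrier

def walk(attempts):
    """Hop attempts of one carrier; returns net (unwrapped) displacement."""
    x, unwrapped = 0, 0
    for _ in range(attempts):
        step = 1 if rng.random() < 0.5 else -1
        barrier_idx = x if step == 1 else (x - 1) % L   # barrier being crossed
        if rng.random() < rates[barrier_idx]:
            x = (x + step) % L
            unwrapped += step
    return unwrapped

# Mean-square displacement over an ensemble of independent carriers.
msd = np.mean([walk(2000) ** 2 for _ in range(200)])
print("mean-square displacement after 2000 attempts:", msd)
```

Low-barrier valleys act as traps, so the measured MSD falls well below the free-walk value; extracting the frequency-dependent conductivity from such trajectories is where the large-scale simulations of the paper come in.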
Turbine modelling for real time simulators
International Nuclear Information System (INIS)
Oliveira Barroso, A.C. de; Araujo Filho, F. de
1992-01-01
A model for steam turbines and their peripherals has been developed. All the important variables have been included, and emphasis has been placed on computational efficiency to obtain a model able to simulate all the modeled equipment. (A.C.A.S.)
Theory, modeling, and simulation annual report, 1992
Energy Technology Data Exchange (ETDEWEB)
1993-05-01
This report briefly discusses research on the following topics: development of electronic structure methods; modeling molecular processes in clusters; modeling molecular processes in solution; modeling molecular processes in separations chemistry; modeling interfacial molecular processes; modeling molecular processes in the atmosphere; methods for periodic calculations on solids; chemistry and physics of minerals; graphical user interfaces for computational chemistry codes; visualization and analysis of molecular simulations; integrated computational chemistry environment; and benchmark computations.
Modeling and simulation with operator scaling
Cohen, Serge; Meerschaert, Mark M.; Rosiński, Jan
2010-01-01
Self-similar processes are useful in modeling diverse phenomena that exhibit scaling properties. Operator scaling allows a different scale factor in each coordinate. This paper develops practical methods for modeling and simulating stochastic processes with operator scaling. A simulation method for operator stable Levy processes is developed, based on a series representation, along with a Gaussian approximation of the small jumps. Several examples are given to illustrate practical application...
Law, R. M.; Peters, W.; Roedenbeck, C.; Aulagnier, C.; Baker, I.; Bergmann, D. J.; Bousquet, P.; Brandt, J.; Bruhwiler, L.; Cameron-Smith, P. J.; Christensen, J. H.; Delage, F.; Denning, A. S.; Fan, S.; Geels, C.; Houweling, S.; Imasu, R.; Karstens, U.; Kawa, S. R.; Kleist, J.; Krol, M. C.; Lin, S. -J.; Lokupitiya, R.; Maki, T.; Maksyutov, S.; Niwa, Y.; Onishi, R.; Parazoo, N.; Patra, P. K.; Pieterse, G.; Rivier, L.; Satoh, M.; Serrar, S.; Taguchi, S.; Takigawa, M.; Vautard, R.; Vermeulen, A. T.; Zhu, Z.
2008-01-01
A forward atmospheric transport modeling experiment has been coordinated by the TransCom group to investigate synoptic and diurnal variations in CO2. Model simulations were run for biospheric, fossil, and air-sea exchange of CO2 and for SF6 and radon for 2000-2003. Twenty-five models or model
Modeling of magnetic particle suspensions for simulations
Satoh, Akira
2017-01-01
The main objective of the book is to highlight the modeling of magnetic particles with different shapes and magnetic properties, to provide graduate students and young researchers information on the theoretical aspects and actual techniques for the treatment of magnetic particles in particle-based simulations. In simulation, we focus on the Monte Carlo, molecular dynamics, Brownian dynamics, lattice Boltzmann and stochastic rotation dynamics (multi-particle collision dynamics) methods. The latter two simulation methods can simulate both the particle motion and the ambient flow field simultaneously. In general, specialized knowledge can only be obtained in an effective manner under the supervision of an expert. The present book is written to play such a role for readers who wish to develop the skill of modeling magnetic particles and develop a computer simulation program using their own ability. This book is therefore a self-learning book for graduate students and young researchers. Armed with this knowledge,...
Water desalination price from recent performances: Modelling, simulation and analysis
International Nuclear Information System (INIS)
Metaiche, M.; Kettab, A.
2005-01-01
The subject of the present article is the technical simulation of seawater desalination by a one-stage reverse osmosis system. The objectives are an updated estimate of the cost price based on recent membrane and permeator performances, the use of new means of simulation and modelling of desalination parameters, and identification of the main parameters influencing the cost price. We have taken as the simulation example the seawater desalting centre of Djannet (Boumerdes, Algeria). Present performances allow water desalting at a price of 0.5 $/m³, which is an interesting and promising price, corresponding to a very acceptable product water quality of about 269 ppm. It is important to run desalting systems by reverse osmosis under high pressure, resulting in a further decrease of the desalting cost and the production of good-quality water. A poor choice of operating conditions produces high prices and unacceptable quality; however, the price can be decreased further by relaxing the requirement on product quality. The seawater temperature has an effect on the cost price and quality. The installation of large desalting centres contributes to the decrease in prices. A very long and tedious calculation is involved, which is impossible to conduct without programming and informatics tools. The use of the simulation model has been very effective in the design of desalination centres that can perform at much improved prices. (author)
Modelling and Simulation of Wave Loads
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Thoft-Christensen, Palle
A simple model of the wave load on slender members of offshore structures is described. The wave elevation of the sea state is modelled by a stationary Gaussian process. A new procedure to simulate realizations of the wave loads is developed. The simulation method assumes that the wave particle velocity can be approximated by a Gaussian Markov process. Known approximate results for the first-passage density or, equivalently, the distribution of the extremes of wave loads are presented and compared with rather precise simulation results. It is demonstrated that the approximate results...
Modelling and Simulation of Wave Loads
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Thoft-Christensen, Palle
1985-01-01
A simple model of the wave load on slender members of offshore structures is described. The wave elevation of the sea state is modelled by a stationary Gaussian process. A new procedure to simulate realizations of the wave loads is developed. The simulation method assumes that the wave particle velocity can be approximated by a Gaussian Markov process. Known approximate results for the first-passage density or, equivalently, the distribution of the extremes of wave loads are presented and compared with rather precise simulation results. It is demonstrated that the approximate results...
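The simulation setup both records describe — realizations of a stationary Gaussian process driving member loads — can be sketched with the standard sum-of-cosines spectral method. This is not the papers' actual procedure: the spectral shape, the drag coefficients, and the omission of the Morison inertia term are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Discretised one-sided spectrum for the wave particle velocity
# (made-up spectral shape, not the papers' sea-state spectrum).
omega = np.linspace(0.2, 2.0, 60)
S = omega ** -1 * np.exp(-1.0 / omega ** 2)
domega = omega[1] - omega[0]
amp = np.sqrt(2 * S * domega)                 # component amplitudes
phase = rng.uniform(0, 2 * np.pi, size=omega.size)

# Stationary Gaussian velocity realisation: sum of cosines, random phases.
t = np.linspace(0, 600, 6000)
u = (amp * np.cos(np.outer(t, omega) + phase)).sum(axis=1)

# Morison-type drag load per unit length on a slender member
# (inertia term omitted for brevity; rho, Cd, D are illustrative).
rho, Cd, D = 1025.0, 1.0, 0.5
F = 0.5 * rho * Cd * D * u * np.abs(u)

print("velocity std:", u.std().round(3), " extreme load:", np.abs(F).max().round(1))
```

Feeding many such realisations through the u|u| nonlinearity and recording the maxima is one way to build the empirical extreme-load distribution that the papers compare against their first-passage approximations.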
Modeling and simulation of discrete event systems
Choi, Byoung Kyu
2013-01-01
Computer modeling and simulation (M&S) allows engineers to study and analyze complex systems. Discrete-event system (DES)-M&S is used in modern management, industrial engineering, computer science, and the military. As computer speeds and memory capacity increase, so DES-M&S tools become more powerful and more widely used in solving real-life problems. Based on over 20 years of evolution within a classroom environment, as well as on decades-long experience in developing simulation-based solutions for high-tech industries, Modeling and Simulation of Discrete-Event Systems is the only book on
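As a concrete illustration of the event-scheduling world view that DES-M&S tools implement, here is a minimal single-server queue driven by a `heapq` event list. The arrival and service rates are arbitrary, and the model is far simpler than the book's examples:

```python
import heapq
import itertools
import random

random.seed(42)

def mm1_des(n_customers, lam=0.8, mu=1.0):
    """Minimal discrete-event simulation of a single-server (M/M/1) queue."""
    counter = itertools.count()            # tie-breaker for simultaneous events
    events = []                            # priority queue of (time, seq, kind)
    t = 0.0
    for _ in range(n_customers):           # pre-schedule all arrival events
        t += random.expovariate(lam)
        heapq.heappush(events, (t, next(counter), "arrival"))

    queue, busy, done, wait_sum = [], False, 0, 0.0
    while events:
        now, _, kind = heapq.heappop(events)   # advance clock to next event
        if kind == "arrival":
            if not busy:                       # server idle: start service now
                busy = True
                heapq.heappush(events, (now + random.expovariate(mu),
                                        next(counter), "departure"))
            else:                              # server busy: join the queue
                queue.append(now)
        else:                                  # departure: customer finished
            done += 1
            if queue:                          # pull next customer, record wait
                wait_sum += now - queue.pop(0)
                heapq.heappush(events, (now + random.expovariate(mu),
                                        next(counter), "departure"))
            else:
                busy = False
    return done, wait_sum / done               # mean delay over all customers

served, avg_wait = mm1_des(5000)
print("served:", served, " mean queueing delay:", round(avg_wait, 2))
```

The three ingredients visible here — a future-event list ordered by time, a clock that jumps from event to event, and state updated only at events — are exactly what distinguishes DES from time-stepped simulation.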
Minimum-complexity helicopter simulation math model
Heffley, Robert K.; Mnich, Marc A.
1988-01-01
An example of a minimal-complexity simulation helicopter math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling-qualities features which are apparent to the simulator pilot. The technical approach begins with specification of the features which are to be modeled, followed by a build-up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinements are discussed. Math model computer programs are defined and listed.
Dual-use tools and systematics-aware analysis workflows in the ATLAS Run-II analysis model
FARRELL, Steven; The ATLAS collaboration
2015-01-01
The ATLAS analysis model has been overhauled for the upcoming run of data collection in 2015 at 13 TeV. One key component of this upgrade was the Event Data Model (EDM), which now allows for greater flexibility in the choice of analysis software framework and provides powerful new features that can be exploited by analysis software tools. A second key component of the upgrade is the introduction of a dual-use tool technology, which provides abstract interfaces for analysis software tools to run in either the Athena framework or a ROOT-based framework. The tool interfaces, including a new interface for handling systematic uncertainties, have been standardized for the development of improved analysis workflows and consolidation of high-level analysis tools. This presentation will cover the details of the dual-use tool functionality, the systematics interface, and how these features fit into a centrally supported analysis environment.
Dual-use tools and systematics-aware analysis workflows in the ATLAS Run-2 analysis model
FARRELL, Steven; The ATLAS collaboration; Calafiura, Paolo; Delsart, Pierre-Antoine; Elsing, Markus; Koeneke, Karsten; Krasznahorkay, Attila; Krumnack, Nils; Lancon, Eric; Lavrijsen, Wim; Laycock, Paul; Lei, Xiaowen; Strandberg, Sara Kristina; Verkerke, Wouter; Vivarelli, Iacopo; Woudstra, Martin
2015-01-01
The ATLAS analysis model has been overhauled for the upcoming run of data collection in 2015 at 13 TeV. One key component of this upgrade was the Event Data Model (EDM), which now allows for greater flexibility in the choice of analysis software framework and provides powerful new features that can be exploited by analysis software tools. A second key component of the upgrade is the introduction of a dual-use tool technology, which provides abstract interfaces for analysis software tools to run in either the Athena framework or a ROOT-based framework. The tool interfaces, including a new interface for handling systematic uncertainties, have been standardized for the development of improved analysis workflows and consolidation of high-level analysis tools. This paper will cover the details of the dual-use tool functionality, the systematics interface, and how these features fit into a centrally supported analysis environment.
Computer Based Modelling and Simulation
Indian Academy of Sciences (India)
Computer Based ... universities, and later did system analysis, ... personal computers (PC) and low cost software packages and tools. They can serve as useful learning experience through student projects. Models are ... Let us consider a numerical example: to calculate the velocity of a trainer aircraft ...
Thermal unit availability modeling in a regional simulation model
International Nuclear Information System (INIS)
Yamayee, Z.A.; Port, J.; Robinett, W.
1983-01-01
The System Analysis Model (SAM), developed under the umbrella of PNUCC's System Analysis Committee, is capable of simulating the operation of a given load/resource scenario. This model employs a Monte-Carlo simulation to incorporate uncertainties. Among the uncertainties modeled is thermal unit availability, both for energy simulations (seasonal) and capacity simulations (hourly). This paper presents the availability modeling in the capacity and energy models. The use of regional and national data in deriving the two availability models, the interaction between the two, and modifications made to the capacity model in order to reflect regional practices are presented. A sample problem is presented to show the modification process. Results for modeling a nuclear unit using NERC-GADS are presented.
Modelling and simulation of floods in alpine catchments equipped with complex hydropower schemes
Bieri, Martin; Schleiss, Anton; Frankhauser, A.
2010-01-01
The simulation of run-off in an alpine catchment area equipped with complex hydropower schemes is presented with the help of a specially developed tool, called Routing System, which can combine hydrological modelling and the operation of hydraulic elements. In the hydrological forecasting tool, three-dimensional rainfall, temperature and evapotranspiration distributions are taken into account for simulating the dominant hydrological processes, such as glacier melt, snow-pack constitution and melt, soil in...
Plasma disruption modeling and simulation
International Nuclear Information System (INIS)
Hassanein, A.
1994-01-01
Disruptions in tokamak reactors are considered a limiting factor to successful operation and reliable design. The behavior of plasma-facing components during a disruption is critical to the overall integrity of the reactor. Erosion of plasma-facing material (PFM) surfaces due to the thermal energy dump during a disruption can severely limit the lifetime of these components and thus diminish the economic feasibility of the reactor. A comprehensive understanding of the interplay of various physical processes during a disruption is essential for determining component lifetime and potentially improving the performance of such components. There are three principal stages in modeling the behavior of PFM during a disruption. Initially, the incident plasma particles deposit their energy directly on the PFM surface, heating it to a very high temperature where ablation occurs. Models for plasma-material interactions have been developed and used to predict material thermal evolution during the disruption. Within a few microseconds after the start of the disruption, enough material is vaporized to intercept most of the incoming plasma particles. Models for plasma-vapor interactions are necessary to predict vapor cloud expansion and hydrodynamics. Continuous heating of the vapor cloud above the material surface by the incident plasma particles will excite, ionize, and cause vapor atoms to emit thermal radiation. Accurate models for radiation transport in the vapor are essential for calculating the net radiated flux to the material surface, which determines the final erosion thickness and consequently component lifetime. A comprehensive model that takes into account the various stages of plasma-material interaction has been developed and used to predict erosion rates during reactor disruptions, as well as during induced disruptions in laboratory experiments.
Application of data assimilation technique for flow field simulation for Kaiga site using TAPM model
International Nuclear Information System (INIS)
Shrivastava, R.; Oza, R.B.; Puranik, V.D.; Hegde, M.N.; Kushwaha, H.S.
2008-01-01
Data assimilation techniques are becoming popular as a means of obtaining realistic flow-field simulations for the site under consideration. The present paper describes a data assimilation technique for flow-field simulation for the Kaiga site using the air pollution model (TAPM) developed by CSIRO, Australia. The TAPM model was run for the Kaiga site for a period of one month (Nov. 2004) using the analysed meteorological data supplied with the model for the Central Asian (CAS) region, and the model solutions were nudged with the observed wind speed and wind direction data available for the site. The model was run with 4 nested grids, with grid spacings of 30 km, 10 km, 3 km and 1 km. The model-generated results, with and without nudging, are statistically compared with the observations. (author)
Modelling and simulating fire tube boiler performance
DEFF Research Database (Denmark)
Sørensen, K.; Condra, T.; Houbak, Niels
2003-01-01
A model for a flue gas boiler covering the flue gas and the water-/steam side has been formulated. The model has been formulated as a number of sub-models that are merged into an overall model for the complete boiler. Sub-models have been defined for the furnace, the convection zone (split in two: a zone submerged in water and a zone covered by steam), a model for the material in the boiler (the steel) and two models for, respectively, the water/steam zone (the boiling) and the steam. The dynamic model has been developed as a number of differential-algebraic equation (DAE) systems. Subsequently MatLab/Simulink has been applied for carrying out the simulations. To be able to verify the simulated results, experiments have been carried out on a full-scale boiler plant.
Dark Matter Benchmark Models for Early LHC Run-2 Searches: Report of the ATLAS/CMS Dark Matter Forum
Abercrombie, Daniel; Akilli, Ece; Alcaraz Maestre, Juan; Allen, Brandon; Alvarez Gonzalez, Barbara; Andrea, Jeremy; Arbey, Alexandre; Azuelos, Georges; Azzi, Patrizia; Backovic, Mihailo; Bai, Yang; Banerjee, Swagato; Beacham, James; Belyaev, Alexander; Boveia, Antonio; Brennan, Amelia Jean; Buchmueller, Oliver; Buckley, Matthew R.; Busoni, Giorgio; Buttignol, Michael; Cacciapaglia, Giacomo; Caputo, Regina; Carpenter, Linda; Filipe Castro, Nuno; Gomez Ceballos, Guillelmo; Cheng, Yangyang; Chou, John Paul; Cortes Gonzalez, Arely; Cowden, Chris; D'Eramo, Francesco; De Cosa, Annapaola; De Gruttola, Michele; De Roeck, Albert; De Simone, Andrea; Deandrea, Aldo; Demiragli, Zeynep; DiFranzo, Anthony; Doglioni, Caterina; du Pree, Tristan; Erbacher, Robin; Erdmann, Johannes; Fischer, Cora; Flaecher, Henning; Fox, Patrick J.; Fuks, Benjamin; Genest, Marie-Helene; Gomber, Bhawna; Goudelis, Andreas; Gramling, Johanna; Gunion, John; Hahn, Kristian; Haisch, Ulrich; Harnik, Roni; Harris, Philip C.; Hoepfner, Kerstin; Hoh, Siew Yan; Hsu, Dylan George; Hsu, Shih-Chieh; Iiyama, Yutaro; Ippolito, Valerio; Jacques, Thomas; Ju, Xiangyang; Kahlhoefer, Felix; Kalogeropoulos, Alexis; Kaplan, Laser Seymour; Kashif, Lashkar; Khoze, Valentin V.; Khurana, Raman; Kotov, Khristian; Kovalskyi, Dmytro; Kulkarni, Suchita; Kunori, Shuichi; Kutzner, Viktor; Lee, Hyun Min; Lee, Sung-Won; Liew, Seng Pei; Lin, Tongyan; Lowette, Steven; Madar, Romain; Malik, Sarah; Maltoni, Fabio; Martinez Perez, Mario; Mattelaer, Olivier; Mawatari, Kentarou; McCabe, Christopher; Megy, Theo; Morgante, Enrico; Mrenna, Stephen; Narayanan, Siddharth M.; Nelson, Andy; Novaes, Sergio F.; Padeken, Klaas Ole; Pani, Priscilla; Papucci, Michele; Paulini, Manfred; Paus, Christoph; Pazzini, Jacopo; Penning, Bjorn; Peskin, Michael E.; Pinna, Deborah; Procura, Massimiliano; Qazi, Shamona F.; Racco, Davide; Re, Emanuele; Riotto, Antonio; Rizzo, Thomas G.; Roehrig, Rainer; Salek, David; Sanchez Pineda, Arturo; Sarkar, Subir; Schmidt, 
Alexander; Schramm, Steven Randolph; Shepherd, William; Singh, Gurpreet; Soffi, Livia; Srimanobhas, Norraphat; Sung, Kevin; Tait, Tim M.P.; Theveneaux-Pelzer, Timothee; Thomas, Marc; Tosi, Mia; Trocino, Daniele; Undleeb, Sonaina; Vichi, Alessandro; Wang, Fuquan; Wang, Lian-Tao; Wang, Ren-Jie; Whallon, Nikola; Worm, Steven; Wu, Mengqing; Wu, Sau Lan; Yang, Hongtao; Yang, Yong; Yu, Shin-Shan; Zaldivar, Bryan; Zanetti, Marco; Zhang, Zhiqing; Zucchetta, Alberto
2015-01-01
This document is the final report of the ATLAS-CMS Dark Matter Forum, a forum organized by the ATLAS and CMS collaborations with the participation of experts on theories of Dark Matter, to select a minimal basis set of dark matter simplified models that should support the design of the early LHC Run-2 searches. A prioritized, compact set of benchmark models is proposed, accompanied by studies of the parameter space of these models and a repository of generator implementations. This report also addresses how to apply the Effective Field Theory formalism for collider searches and presents the results of such interpretations.
Bridging experiments, models and simulations
DEFF Research Database (Denmark)
Carusi, Annamaria; Burrage, Kevin; Rodríguez, Blanca
2012-01-01
Computational models in physiology often integrate functional and structural information from a large range of spatiotemporal scales, from the ionic to the whole-organ level. Their sophistication raises both expectations and skepticism concerning how computational methods can improve our understanding of living organisms and also how they can reduce, replace, and refine animal experiments. A fundamental requirement to fulfill these expectations and achieve the full potential of computational physiology is a clear understanding of what models represent and how they can be validated. The present … that contributes to defining the specific aspects of cardiac electrophysiology the MSE system targets, rather than being only an external test, and that this is driven by advances in experimental and computational methods and the combination of both.
Enhanced Contact Graph Routing (ECGR) MACHETE Simulation Model
Segui, John S.; Jennings, Esther H.; Clare, Loren P.
2013-01-01
Contact Graph Routing (CGR) for Delay/Disruption Tolerant Networking (DTN) space-based networks makes use of the predictable nature of node contacts to make real-time routing decisions given unpredictable traffic patterns. The contact graph will have been disseminated to all nodes before the start of route computation. CGR was designed for space-based networking environments where future contact plans are known or are independently computable (e.g., using known orbital dynamics). For each data item (known as a bundle in DTN), a node independently performs route selection by examining possible paths to the destination. Route computation could conceivably run thousands of times a second, so computational load is important. This work describes the simulation software model of Enhanced Contact Graph Routing (ECGR) for the DTN Bundle Protocol in JPL's MACHETE simulation tool. The simulation model was used for performance analysis of CGR and led to several performance enhancements. It was also used to demonstrate the improvements of ECGR over CGR, as well as over other routing methods, in space network scenarios. ECGR moved to using earliest arrival time because it is a global, monotonically increasing metric that guarantees the safety properties needed for the solution's correctness, since route re-computation occurs at each node to accommodate unpredicted changes (e.g., in traffic pattern or link quality). Furthermore, using earliest arrival time enabled the use of the standard Dijkstra algorithm, with its well-known and inexpensive computational cost, for path selection. These enhancements have been integrated into the open source CGR implementation. The ECGR model is also useful for route-metric experimentation and comparisons with other DTN routing protocols, particularly when combined with MACHETE's space networking models and the Delay Tolerant Link State Routing (DTLSR) model.
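The earliest-arrival-time search credited to ECGR above can be sketched as a Dijkstra pass over a contact plan. The tuple-based contact format and the values below are hypothetical simplifications, not JPL's actual data structures:

```python
import heapq

def earliest_arrival(contacts, source, dest, t0=0.0):
    """Earliest-arrival-time route search over a DTN contact plan.

    `contacts` is a list of (frm, to, start, end, owlt) tuples, where owlt is
    the one-way light time.  Because arrival time is a global, monotonically
    non-decreasing metric, plain Dijkstra applies.
    """
    best = {source: t0}
    pq = [(t0, source)]
    while pq:
        t, node = heapq.heappop(pq)
        if node == dest:
            return t
        if t > best.get(node, float("inf")):
            continue  # stale queue entry
        for frm, to, start, end, owlt in contacts:
            if frm != node:
                continue
            depart = max(t, start)   # may have to wait for the contact to open
            if depart >= end:        # contact closes before we can transmit
                continue
            arrive = depart + owlt
            if arrive < best.get(to, float("inf")):
                best[to] = arrive
                heapq.heappush(pq, (arrive, to))
    return None  # destination unreachable under this contact plan

plan = [
    ("A", "B", 0, 10, 1.0),
    ("B", "C", 20, 30, 1.0),   # must wait at B until t=20
    ("A", "C", 40, 50, 1.0),   # direct but later contact
]
# Via B: depart A at t=0, arrive B at 1, wait, depart B at 20, arrive C at 21,
# beating the direct contact that would only deliver at t=41.
```

A real CGR implementation also tracks contact capacity and bundle expiry; this sketch shows only the metric and the search.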
Simulation and analysis of a model dinoflagellate predator-prey system
Mazzoleni, M. J.; Antonelli, T.; Coyne, K. J.; Rossi, L. F.
2015-12-01
This paper analyzes the dynamics of a model dinoflagellate predator-prey system and uses simulations to validate theoretical and experimental studies. A simple model for predator-prey interactions is derived by drawing upon analogies from chemical kinetics. This model is then modified to account for inefficiencies in predation. Simulation results are shown to closely match the model predictions. Additional simulations are then run which are based on experimental observations of predatory dinoflagellate behavior, and this study specifically investigates how the predatory dinoflagellate Karlodinium veneficum uses toxins to immobilize its prey and increase its feeding rate. These simulations account for complex dynamics that were not included in the basic models, and the results from these computational simulations closely match the experimentally observed predatory behavior of K. veneficum and reinforce the notion that predatory dinoflagellates utilize toxins to increase their feeding rate.
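Deriving predator-prey interactions from mass-action (chemical-kinetics-style) terms, as described above, leads to Lotka-Volterra-type equations. A minimal sketch follows, with illustrative parameter values that are not taken from the paper:

```python
def simulate(prey0, pred0, a, b, c, d, dt, steps):
    """Forward-Euler integration of Lotka-Volterra predator-prey dynamics.

    Mass-action terms: prey grow at rate a and are consumed at rate b*x*y;
    predators grow at rate c*x*y and die at rate d.  (Inefficient predation,
    as in the paper, could be modeled by taking c < b.)
    """
    x, y = prey0, pred0
    hist = [(x, y)]
    for _ in range(steps):
        dx = a * x - b * x * y
        dy = c * x * y - d * y
        x += dt * dx
        y += dt * dy
        hist.append((x, y))
    return hist

# Illustrative run: populations oscillate around the fixed point (d/c, a/b).
hist = simulate(prey0=10.0, pred0=5.0, a=1.0, b=0.1, c=0.05, d=0.5,
                dt=0.01, steps=2000)
```

A small time step is needed here because forward Euler slowly inflates the closed Lotka-Volterra orbits; a higher-order integrator would be the usual choice for longer runs.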
Hamlin, Michael J; Lizamore, Catherine A; Hopkins, Will G
2018-02-01
While adaptation to hypoxia at natural or simulated altitude has long been used with endurance athletes, it has only recently gained popularity for team-sport athletes. Our objective was to analyse the effect of hypoxic interventions on high-intensity intermittent running performance in team-sport athletes. A systematic literature search of five journal databases was performed. Percent change in performance (distance covered) in the Yo-Yo intermittent recovery test (level 1 and level 2 were used without differentiation) in hypoxic (natural or simulated altitude) and control (sea level or normoxic placebo) groups was meta-analyzed with a mixed model. The modifying effects of study characteristics (type and dose of hypoxic exposure, training duration, post-altitude duration) were estimated with fixed effects, random effects allowed for repeated measurement within studies and residual real differences between studies, and the standard-error weighting factors were derived or imputed via standard deviations of change scores. Effects and their uncertainty were assessed with magnitude-based inference, with a smallest important improvement of 4% estimated via between-athlete standard deviations of performance at baseline. Ten studies qualified for inclusion, but two were excluded owing to small sample size and risk of publication bias. Hypoxic interventions occurred over a period of 7-28 days, and the range of total hypoxic exposure (in effective altitude-hours) was 4.5-33 km·h in the intermittent-hypoxia studies and 180-710 km·h in the live-high studies. There were 11 control and 15 experimental study-estimates in the final meta-analysis. Training effects were moderate and very likely beneficial in the control groups at 1 week (20 ± 14%, percent estimate ± 90% confidence limits) and 4 weeks post-intervention (25 ± 23%). The intermittent and live-high hypoxic groups experienced additional likely beneficial gains at 1 week (13 ± 16%; 13 ± 15%) and 4 weeks post…
Defining epidemics in computer simulation models: How do definitions influence conclusions?
Directory of Open Access Journals (Sweden)
Carolyn Orbann
2017-06-01
Computer models have proven to be useful tools in studying epidemic disease in human populations. Such models are being used by a broader base of researchers, and it has become more important to ensure that descriptions of model construction and data analyses are clear and communicate important features of model structure. Papers describing computer models of infectious disease often lack a clear description of how the data are aggregated and whether or not non-epidemic runs are excluded from analyses. Given that there is no concrete quantitative definition of what constitutes an epidemic within the public health literature, each modeler must decide on a strategy for identifying epidemics during simulation runs. Here, an SEIR model was used to test the effects of how varying the cutoff for considering a run an epidemic changes potential interpretations of simulation outcomes. Varying the cutoff from 0% to 15% of the model population ever infected with the illness generated significant differences in numbers of dead and timing variables. These results are important for those who use models to form public health policy, in which questions of timing or implementation of interventions might be answered using findings from computer simulation models.
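The sensitivity to the epidemic cutoff can be illustrated with a minimal stochastic compartmental sketch. This uses an SIR chain-binomial model rather than the paper's SEIR model, and all parameter values are illustrative:

```python
import random

def final_size(n, beta, gamma, i0, rng):
    """One stochastic SIR run (discrete time); returns the fraction of the
    population ever infected (attack rate)."""
    s, i = n - i0, i0
    while i > 0:
        # Each susceptible escapes all i infectives independently per step.
        p_inf = 1 - (1 - beta / n) ** i
        new_inf = sum(1 for _ in range(s) if rng.random() < p_inf)
        new_rec = sum(1 for _ in range(i) if rng.random() < gamma)
        s -= new_inf
        i += new_inf - new_rec
    return (n - s) / n

def epidemic_fraction(runs, cutoff):
    """Share of runs counted as 'epidemics' under a given attack-rate cutoff."""
    return sum(1 for f in runs if f > cutoff) / len(runs)

rng = random.Random(42)
runs = [final_size(n=200, beta=0.4, gamma=0.2, i0=1, rng=rng)
        for _ in range(200)]
# Every run infects at least the index case, so a 0% cutoff keeps all runs,
# while a 15% cutoff discards the stochastic die-outs.
f0 = epidemic_fraction(runs, 0.0)
f15 = epidemic_fraction(runs, 0.15)
```

Any statistic computed over "epidemic runs only" (deaths, peak timing) inherits this dependence on the cutoff, which is the paper's central point.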
MODELLING, SIMULATING AND OPTIMIZING BOILERS
DEFF Research Database (Denmark)
Sørensen, K.; Condra, T.; Houbak, Niels
2003-01-01
… and the total stress level (i.e. stresses introduced due to internal pressure plus stresses introduced due to temperature gradients) must always be kept below the allowable stress level. In this way, the increased water-/steam space that should allow for better dynamic performance in the end causes limited freedom with respect to dynamic operation of the plant. By means of an objective function including both the price of the plant and a quantification of the value of dynamic operation of the plant, an optimization is carried out. The dynamic model of the boiler plant is applied to define parts …
Up and running with AutoCAD 2014 2D and 3D drawing and modeling
Gindis, Elliot
2013-01-01
Get "Up and Running" with AutoCAD using Gindis's combination of step-by-step instruction, examples, and insightful explanations. The emphasis from the beginning is on core concepts and practical application of AutoCAD in architecture, engineering and design. Equally useful in instructor-led classroom training, self-study, or as a professional reference, the book is written with the user in mind by a long-time AutoCAD professional and instructor, based on what works in the industry and the classroom. Strips away complexities, both real and perceived, and reduces AutoCAD t…
Regularization modeling for large-eddy simulation
Geurts, Bernardus J.; Holm, D.D.
2003-01-01
A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of
Analytical system dynamics modeling and simulation
Fabien, Brian C
2008-01-01
This book, offering a modeling technique based on Lagrange's energy method, includes 125 worked examples. Using this technique enables one to model and simulate systems as diverse as a six-link, closed-loop mechanism or a transistor power amplifier.
Hybrid simulation models of production networks
Kouikoglou, Vassilis S
2001-01-01
This book is concerned with a most important area of industrial production, that of analysis and optimization of production lines and networks using discrete-event models and simulation. The book introduces a novel approach that combines analytic models and discrete-event simulation. Unlike conventional piece-by-piece simulation, this method observes a reduced number of events between which the evolution of the system is tracked analytically. Using this hybrid approach, several models are developed for the analysis of production lines and networks. The hybrid approach combines speed and accuracy for exceptional analysis of most practical situations. A number of optimization problems, involving buffer design, workforce planning, and production control, are solved through the use of hybrid models.
Dynamic modeling and simulation of wind turbines
International Nuclear Information System (INIS)
Ghafari Seadat, M.H.; Kheradmand Keysami, M.; Lari, H.R.
2002-01-01
Using wind energy to generate electricity with wind turbines is a good way to exploit renewable energy, and it can also help to protect the environment. The main objective of this paper is computer-aided dynamic modeling, by the energy method, and simulation of a wind turbine. In this paper, the equations of motion are extracted for simulating the wind turbine system, and the behavior of the system then becomes apparent by solving the equations. For the simulation, the turbine is considered to have a three-blade rotor facing the wind direction, an induction generator connected to the network, and constant rotational speed. Every major part of the wind turbine must be simulated: the main parts are the blades, gearbox, shafts and generator
Landscape Modelling and Simulation Using Spatial Data
Directory of Open Access Journals (Sweden)
Amjed Naser Mohsin AL-Hameedawi
2017-08-01
In this paper a procedure is presented for generating a spatial model of landscape suited to realistic simulation. This procedure is based on combining spatial data and field measurements with computer graphics reproduced using Blender software. Thereafter it is possible to form a 3D simulation based on VIS ALL packages. The objective was to make a model utilising GIS, including inputs to the feature attribute data. These efforts concentrated on assembling a suitable spatial prototype, defining the facilitation scheme and outlining the intended framework; the eventual result was then utilized in simulation form. The performed procedure contains not only data gathering, fieldwork and paradigm providing, but extends to supply a new method for producing the respective 3D simulation mapping, which gives decision makers as well as investors an independent navigation system for Geoscience applications.
Using Active Learning for Speeding up Calibration in Simulation Models.
Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan
2016-07-01
Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm, such as an artificial neural network, to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential for guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
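The active-learning loop described above can be sketched as follows. The toy `simulator` and nearest-neighbour surrogate are illustrative stand-ins for the UWBCS model and the neural-network learner used in the study:

```python
import random

def simulator(p):
    """Stand-in for one expensive simulation run: returns a goodness-of-fit
    score for parameter combination p (here, negative distance to a hidden
    'well-calibrated' point)."""
    target = (0.3, 0.7)
    return -((p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2)

def predict(p, evaluated):
    """Tiny surrogate: score of the nearest already-evaluated combination."""
    nearest = min(evaluated,
                  key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)
    return evaluated[nearest]

rng = random.Random(1)
candidates = [(rng.random(), rng.random()) for _ in range(400)]

# Seed with a few random evaluations, then let the surrogate pick the rest,
# so only a small fraction of the candidate pool is ever simulated.
evaluated = {p: simulator(p) for p in candidates[:10]}
pool = set(candidates[10:])
for _ in range(30):
    best = max(pool, key=lambda p: predict(p, evaluated))
    evaluated[best] = simulator(best)   # only the chosen combo is simulated
    pool.remove(best)

best_param, best_score = max(evaluated.items(), key=lambda kv: kv[1])
```

In this sketch only 40 of 400 combinations are evaluated; a practical loop would also mix in exploration so the surrogate is not purely greedy.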
Quantitative interface models for simulating microstructure evolution
International Nuclear Information System (INIS)
Zhu, J.Z.; Wang, T.; Zhou, S.H.; Liu, Z.K.; Chen, L.Q.
2004-01-01
To quantitatively simulate microstructural evolution in real systems, we investigated three different interface models: a sharp-interface model implemented by the software DICTRA and two diffuse-interface models which use either physical order parameters or artificial order parameters. A particular example is considered: the diffusion-controlled growth of a γ′ precipitate in a supersaturated γ matrix in Ni-Al binary alloys. All three models use thermodynamic and kinetic parameters from the same databases. The temporal evolution profiles of composition from the different models are shown to agree with each other. The focus is on examining the advantages and disadvantages of each model as applied to microstructure evolution in alloys.
A queuing model for road traffic simulation
International Nuclear Information System (INIS)
Guerrouahane, N.; Aissani, D.; Bouallouche-Medjkoune, L.; Farhi, N.
2015-01-01
We present in this article a stochastic queuing model for road traffic. The model is based on the M/G/c/c state-dependent queuing model and is inspired by the deterministic Godunov scheme for road traffic simulation. We first propose a variant of the M/G/c/c state-dependent model that works with density-flow fundamental diagrams rather than density-speed relationships. We then extend this model in order to consider upstream traffic demand as well as downstream traffic supply. Finally, we show how to model a whole road by concatenating road sections as in the deterministic Godunov scheme
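The stationary distribution of an M/G/c/c state-dependent queue has a closed product form. A sketch follows, assuming a standard formulation in which f(i) is the relative travel speed with i vehicles present; the linear f below is only an example, not the article's calibrated fundamental diagram:

```python
def stationary_dist(lam, mu, c, f):
    """Stationary probabilities of an M/G/c/c state-dependent queue.

    p_n is proportional to (lam/mu)**n / (n! * prod_{i=1..n} f(i)), where
    f(i) is the relative service (travel) speed when i vehicles occupy the
    road section; arrivals finding n == c vehicles are blocked.
    """
    weights = []
    for n in range(c + 1):
        w = (lam / mu) ** n
        for i in range(1, n + 1):
            w /= i * f(i)
        weights.append(w)
    z = sum(weights)                 # normalizing constant
    return [w / z for w in weights]

# Road section holding at most c = 20 vehicles; speed falls roughly linearly
# with occupancy (Greenshields-like), floored at 5% of free-flow speed.
c = 20
def f(i):
    return max(1.0 - (i - 1) / c, 0.05)

p = stationary_dist(lam=4.0, mu=5.0, c=c, f=f)
blocking_probability = p[-1]   # probability an arriving vehicle is refused
```

The blocking probability p_c is the quantity that couples adjacent sections when a whole road is built by concatenation, since blocked arrivals correspond to spillback.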
Clock error models for simulation and estimation
International Nuclear Information System (INIS)
Meditch, J.S.
1981-10-01
Mathematical models for the simulation and estimation of errors in precision oscillators used as time references in satellite navigation systems are developed. The results, based on all currently known oscillator error sources, are directly implementable on a digital computer. The simulation formulation is sufficiently flexible to allow for the inclusion or exclusion of individual error sources as desired. The estimation algorithms, following from Kalman filter theory, provide directly for the error analysis of clock errors in both filtering and prediction
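A common two-state oscillator error model (phase offset driven by a frequency offset, each with white process noise) illustrates both the simulation formulation and the include/exclude flexibility mentioned above. The noise intensities below are illustrative, not drawn from the report:

```python
import random

def simulate_clock(steps, tau, q1, q2, seed=0):
    """Two-state clock error model: phase offset x and frequency offset y.

    x[k+1] = x[k] + tau*y[k] + w1,   y[k+1] = y[k] + w2,
    with white noises w1 ~ N(0, q1*tau) and w2 ~ N(0, q2*tau), i.e. white
    frequency noise plus random-walk frequency noise.  Individual error
    sources can be switched off by zeroing q1 or q2, mirroring the
    include/exclude option in the text.
    """
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    phase = []
    for _ in range(steps):
        x += tau * y + rng.gauss(0.0, (q1 * tau) ** 0.5)
        y += rng.gauss(0.0, (q2 * tau) ** 0.5)
        phase.append(x)
    return phase

phase = simulate_clock(steps=1000, tau=1.0, q1=1e-22, q2=1e-26)
# With q2 = 0 the model degenerates to a pure white-noise phase random walk.
drift_free = simulate_clock(steps=1000, tau=1.0, q1=1e-22, q2=0.0)
```

The same linear state model is what a Kalman filter would use for the estimation side: the two-state transition matrix serves directly as the filter's prediction step.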
Modeling and simulation goals and accomplishments
International Nuclear Information System (INIS)
Turinsky, P.
2013-01-01
The CASL (Consortium for Advanced Simulation of Light Water Reactors) mission is to develop and apply the Virtual Reactor simulator (VERA) to optimise nuclear power in terms of capital and operating costs, nuclear waste production and nuclear safety. An efficient and reliable virtual reactor simulator relies on 3-dimensional calculations, accurate physics models and code coupling. Advances in computer hardware, along with comparable advances in numerical solvers, make the VERA project achievable. This series of slides details the VERA project, presents the specifics and performance of the codes involved, and ends by listing the computing needs
Crash simulation: an immersive learning model.
Wenham, John; Bennett, Paul; Gleeson, Wendy
2017-12-26
The Far West New South Wales Local Emergency Management Committee runs an annual crash simulation exercise to assess the operational readiness of all local emergency services to coordinate and manage a multi-casualty exercise. Since 2009, the Broken Hill University Department of Rural Health (BHUDRH) has collaborated with the committee, enabling the inclusion of health students in this exercise. It is an immersive interprofessional learning experience that evaluates teamwork, communication and safe, effective clinical trauma management outside the hospital setting. After 7 years of modifying and developing the exercise, we set out to evaluate its impact on the students' learning, and sought ethics approval from the University of Sydney for this study. At the start of this year's crash simulation, students were given information sheets and consent forms with regards to the research. Once formal debriefing had finished, the researchers conducted a semi-structured focus-group interview with the health students to gain insight into their experience and their perceived value of the training. Students also completed short-answer questionnaires, and the anonymised responses were analysed. IMPLICATIONS: Participants identified that this multidisciplinary learning opportunity in a pre-hospital mass casualty situation was of value to them. It has taken them outside of their usually protected hospital or primary care setting and tested their critical thinking and communication skills. We recommend this learning concept to other educational institutions. Further research will assess the learning value of the simulated event to the other agencies involved. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
Dalheimer, Matthias Kalle
2006-01-01
The fifth edition of Running Linux is greatly expanded, reflecting the maturity of the operating system and the teeming wealth of software available for it. Hot consumer topics such as audio and video playback applications, groupware functionality, and spam filtering are covered, along with the basics in configuration and management that always made the book popular.
C. Delaere
2013-01-01
Since the LHC ceased operations in February, a lot has been going on at Point 5, and Run Coordination continues to monitor closely the advance of maintenance and upgrade activities. In the last months, the Pixel detector was extracted and is now stored in the pixel lab in SX5; the beam pipe has been removed and ME1/1 removal has started. We regained access to the vactank and some work on the RBX of HB has started. Since mid-June, electricity and cooling are back in S1 and S2, allowing us to turn equipment back on, at least during the day. 24/7 shifts are not foreseen in the next weeks, and safety tours are mandatory to keep equipment on overnight, but re-commissioning activities are slowly being resumed. Given the (slight) delays accumulated in LS1, it was decided to merge the two global runs initially foreseen into a single exercise during the week of 4 November 2013. The aim of the global run is to check that we can run (parts of) CMS after several months switched off, with the new VME PCs installed, th...
Global ice volume variations through the last glacial cycle simulated by a 3-D ice-dynamical model
Bintanja, R.; Wal, R.S.W. van de; Oerlemans, J.
2002-01-01
A coupled ice sheet—ice shelf—bedrock model was run at 20 km resolution to simulate the evolution of global ice cover during the last glacial cycle. The mass balance model uses monthly mean temperature and precipitation as input and incorporates the albedo—mass balance feedback. The model is forced …
International Nuclear Information System (INIS)
Jejcic, A.; Maillard, J.; Silva, J.; Auguin, M.; Boeri, F.
1989-01-01
Results obtained on a strongly coupled parallel computer are reported. They concern Monte-Carlo simulation and pattern recognition. Though the calculations were made on an experimental computer of rather low processing power, it is believed that the quoted figures could give useful indications on architectural choices for dedicated computers. (orig.)
Simulation Modeling of Software Development Processes
Calavaro, G. F.; Basili, V. R.; Iazeolla, G.
1996-01-01
A simulation modeling approach is proposed for the prediction of software process productivity indices, such as cost and time-to-market, and the sensitivity analysis of such indices to changes in the organization parameters and user requirements. The approach uses a timed Petri Net and Object Oriented top-down model specification. Results demonstrate the model representativeness, and its usefulness in verifying process conformance to expectations, and in performing continuous process improvement and optimization.
Sarathy, Mani; Atef, Nour; Alfazazi, Adamu; Badra, Jihad; Zhang, Yu; Tzanetakis, Tom; Pei, Yuanjiang
2018-01-01
Toluene primary reference fuel (TPRF) (mixture of toluene, iso-octane and heptane) is a suitable surrogate to represent a wide spectrum of real fuels with varying octane sensitivity. Investigating different surrogates in engine simulations is a prerequisite to identify the best matching mixture. However, running 3D engine simulations using detailed models is currently impossible and reduction of detailed models is essential. This work presents an AramcoMech reduced kinetic model developed at King Abdullah University of Science and Technology (KAUST) for simulating complex TPRF surrogate blends. A semi-decoupling approach was used together with species and reaction lumping to obtain a reduced kinetic model. The model was widely validated against experimental data including shock tube ignition delay times and premixed laminar flame speeds. Finally, the model was utilized to simulate the combustion of a low reactivity gasoline fuel under partially premixed combustion conditions.
Wave Run-Up on Offshore Wind Turbines
DEFF Research Database (Denmark)
Ramirez, Jorge Robert Rodriguez
This study has investigated the interaction of water waves with a circular structure, known as the wave run-up phenomenon. This run-up phenomenon has been simulated by the use of computational fluid dynamic models. The numerical model (NS3) used in this study has been verified rigorously against a number of cases. Regular and freak waves have been generated in a numerical wave tank with a gentle slope in order to address the study of the wave run-up on a circular cylinder. From the computational side it can be said that it is inexpensive. Furthermore, the comparison of the current numerical model ... to the cylinder. Based on appropriate analysis, the collected data has been analysed with the stream function theory to obtain the relevant parameters for the use of the predicted wave run-up formula. An analytical approach has been pursued and solved for individual waves. Maximum run-up and 2% run-up were studied...
Analyzing Strategic Business Rules through Simulation Modeling
Orta, Elena; Ruiz, Mercedes; Toro, Miguel
Service Oriented Architecture (SOA) holds promise for business agility since it allows business processes to change to meet new customer demands or market needs without causing a cascade effect of changes in the underlying IT systems. Business rules are the instrument chosen to help business and IT to collaborate. In this paper, we propose the utilization of simulation models to model and simulate strategic business rules that are then disaggregated at different levels of an SOA architecture. Our proposal aims to help find a good configuration for strategic business objectives and IT parameters. The paper includes a case study where a simulation model is built to help business decision-making in a context where finding a good configuration for different business parameters and performance is too complex to analyze by trial and error.
Reliable low precision simulations in land surface models
Dawson, Andrew; Düben, Peter D.; MacLeod, David A.; Palmer, Tim N.
2017-12-01
Weather and climate models must continue to increase in both resolution and complexity in order that forecasts become more accurate and reliable. Moving to lower numerical precision may be an essential tool for coping with the demand for ever increasing model complexity in addition to increasing computing resources. However, there have been some concerns in the weather and climate modelling community over the suitability of lower precision for climate models, particularly for representing processes that change very slowly over long time-scales. These processes are difficult to represent using low precision due to time increments being systematically rounded to zero. Idealised simulations are used to demonstrate that a model of deep soil heat diffusion that fails when run in single precision can be modified to work correctly using low precision, by splitting up the model into a small higher precision part and a low precision part. This strategy retains the computational benefits of reduced precision whilst preserving accuracy. This same technique is also applied to a full complexity land surface model, resulting in rounding errors that are significantly smaller than initial condition and parameter uncertainties. Although lower precision will present some problems for the weather and climate modelling community, many of the problems can likely be overcome using a straightforward and physically motivated application of reduced precision.
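The splitting strategy the abstract describes can be illustrated with a small sketch. The quantizer below emulates an 11-bit-significand (half-precision-like) float; the 280 K soil temperature and the 0.01 K per-step increments are invented for illustration and are not values from the paper:

```python
import math

def q11(x):
    """Quantize to an 11-bit significand, emulating low-precision arithmetic."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                      # x = m * 2**e with 0.5 <= |m| < 1
    return math.ldexp(round(m * 1024) / 1024, e)

STEPS, INCREMENT = 1000, 0.01                  # tiny per-step heating of deep soil

# Naive low-precision integration: near 280 K the quantizer's spacing is
# about 0.25 K, so every 0.01 K increment rounds away to nothing.
t_low = 280.0
for _ in range(STEPS):
    t_low = q11(t_low + INCREMENT)

# Split scheme: a high-precision reference plus a low-precision anomaly.
# Near zero the spacing is tiny, so the small increments survive.
t_ref, anomaly = 280.0, 0.0
for _ in range(STEPS):
    anomaly = q11(anomaly + INCREMENT)
t_split = t_ref + anomaly

print(t_low)    # stuck at 280.0
print(t_split)  # recovers the warming, approximately 290
```

The naive state never warms because each time increment is systematically rounded to zero, exactly the failure mode the abstract describes; carrying only the slowly varying anomaly in low precision preserves the signal.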
New exploration on TMSR: modelling and simulation
Energy Technology Data Exchange (ETDEWEB)
Si, S.; Chen, Q.; Bei, H.; Zhao, J., E-mail: ssy@snerdi.com.cn [Shanghai Nuclear Engineering Research & Design Inst., Shanghai (China)
2015-07-01
A tightly coupled multi-physics model for MSR (Molten Salt Reactor) system involving the reactor core and the rest of the primary loop has been developed and employed in an in-house developed computer code TANG-MSR. In this paper, the computer code is used to simulate the behavior of steady state operation and transient for our redesigned TMSR. The presented simulation results demonstrate that the models employed in TANG-MSR can capture major physics phenomena in MSR and the redesigned TMSR has excellent performance of safety and sustainability. (author)
Nuclear reactor core modelling in multifunctional simulators
International Nuclear Information System (INIS)
Puska, E.K.
1999-01-01
The thesis concentrates on the development of nuclear reactor core models for the APROS multifunctional simulation environment and the use of the core models in various kinds of applications. The work was started in 1986 as a part of the development of the entire APROS simulation system. The aim was to create core models that would serve in a reliable manner in an interactive, modular and multifunctional simulator/plant analyser environment. One-dimensional and three-dimensional core neutronics models have been developed. Both models have two energy groups and six delayed neutron groups. The three-dimensional finite difference type core model is able to describe both BWR- and PWR-type cores with quadratic fuel assemblies and VVER-type cores with hexagonal fuel assemblies. The one- and three-dimensional core neutronics models can be connected with the homogeneous, the five-equation or the six-equation thermal hydraulic models of APROS. The key feature of APROS is that the same physical models can be used in various applications. The nuclear reactor core models of APROS have been built in such a manner that the same models can be used in simulator and plant analyser applications, as well as in safety analysis. In the APROS environment the user can select the number of flow channels in the three-dimensional reactor core and either the homogeneous, the five- or the six-equation thermal hydraulic model for these channels. The thermal hydraulic model and the number of flow channels have a decisive effect on the calculation time of the three-dimensional core model and thus, at present, these particular selections make the major difference between a safety analysis core model and a training simulator core model. The emphasis of this thesis is on the three-dimensional core model and its capability to analyse symmetric and asymmetric events in the core. The factors affecting the calculation times of various three-dimensional BWR, PWR and WWER-type APROS core models have been
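The finite-difference neutronics described here is two-group and three-dimensional; a much smaller analogue conveys the numerical core of such a model. The sketch below solves the one-group, one-dimensional diffusion eigenvalue problem for a bare slab by power iteration, with invented cross sections (this is not the APROS model, only a minimal illustration of the method class):

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a = sub-, b = main, c = super-diagonal)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        den = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / den
        dp[i] = (d[i] - a[i] * dp[i - 1]) / den
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# One-group constants for a bare homogeneous slab (illustrative values only).
D, SIG_A, NU_SIG_F, LENGTH, N = 1.0, 0.07, 0.08, 100.0, 199
h = LENGTH / (N + 1)                      # zero-flux boundaries at x = 0 and x = L
sub = [-D / h**2] * N
main = [2 * D / h**2 + SIG_A] * N
sup = [-D / h**2] * N

phi, k = [1.0] * N, 1.0
for _ in range(200):                      # power iteration on the fission source
    src = [NU_SIG_F * p / k for p in phi]
    phi_new = thomas(sub, main, sup, src)
    k *= sum(phi_new) / sum(phi)          # eigenvalue update (uniform nu-Sigma-f)
    phi = phi_new

k_analytic = NU_SIG_F / (SIG_A + D * (math.pi / LENGTH)**2)
print(round(k, 4), round(k_analytic, 4))
```

The converged multiplication factor should match the analytic bare-slab value, and the flux shape converges to the fundamental sine mode.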
Forecasting Lightning Threat using Cloud-resolving Model Simulations
McCaul, E. W., Jr.; Goodman, S. J.; LaCasse, K. M.; Cecil, D. J.
2009-01-01
quantitatively realistic fields of lightning threat. However, because models tend to have more difficulty in correctly predicting the instantaneous placement of storms, forecasts of the detailed location of the lightning threat based on single simulations can be in error. Although these model shortcomings presently limit the precision of lightning threat forecasts from individual runs of current generation models, the techniques proposed herein should continue to be applicable as newer and more accurate physically-based model versions, physical parameterizations, initialization techniques and ensembles of cloud-allowing forecasts become available.
Tsunami Simulators in Physical Modelling - Concept to Practical Solutions
Chandler, Ian; Allsop, William; Robinson, David; Rossetto, Tiziana; McGovern, David; Todd, David
2017-04-01
Whilst many researchers have conducted simple 'tsunami impact' studies, few engineering tools are available to assess the onshore impacts of tsunami, with no agreed methods available to predict loadings on coastal defences, buildings or related infrastructure. Most previous impact studies have relied upon unrealistic waveforms (solitary or dam-break waves and bores) rather than full-duration tsunami waves, or have used simplified models of nearshore and over-land flows. Over the last 10+ years, pneumatic Tsunami Simulators for the hydraulic laboratory have been developed into an exciting and versatile technology, allowing the forces of real-world tsunami to be reproduced and measured in a laboratory environment for the first time. These devices have been used to model generic elevated and N-wave tsunamis up to and over simple shorelines, and at example coastal defences and infrastructure. They have also reproduced full-duration tsunamis including Mercator 2004 and Tohoku 2011, both at 1:50 scale. Engineering scale models of these tsunamis have measured wave run-up on simple slopes, forces on idealised sea defences, pressures / forces on buildings, and scour at idealised buildings. This presentation will describe how these Tsunami Simulators work, demonstrate how they have generated tsunami waves longer than the facilities within which they operate, and will present research results from three generations of Tsunami Simulators. Highlights of direct importance to natural hazard modellers and coastal engineers include measurements of wave run-up levels, forces on single and multiple buildings and comparison with previous theoretical predictions. Multiple buildings have two malign effects. The density of buildings to flow area (blockage ratio) increases water depths and flow velocities in the 'streets'. But the increased building densities themselves also increase the cost of flow per unit area (both personal and monetary). The most recent study with the Tsunami
Directory of Open Access Journals (Sweden)
Naziruddin Abdullah
2004-06-01
This study adopts the error correction model to empirically investigate the role of real stock prices in long-run money demand in the Malaysian financial or money market for the period 1977:Q1-1997:Q2. Specifically, an attempt is made to check whether real narrow money (M1/P) is cointegrated with selected variables such as the industrial production index (IPI), one-year T-Bill rates (TB12), and real stock prices (RSP). If cointegration is found among these variables, i.e., the dependent and independent variables, it may imply that there exists a long-run co-movement among them in the Malaysian money market. From the empirical results it is found that the cointegration between money demand and real stock prices (RSP) is positive, implying that in the long run there is a positive association between real stock prices (RSP) and demand for real narrow money (M1/P). The policy implication that can be extracted from this study is that an increase in stock prices is likely to necessitate an expansionary monetary policy to prevent nominal income or the inflation target from undershooting.
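The two-step logic behind such an error correction study can be sketched on synthetic data. Everything below is fabricated for illustration: the random-walk "RSP" series, the cointegrating slope of 0.5, and the AR(1) deviation are stand-ins, not the Malaysian data:

```python
import random

random.seed(0)
T = 500
# Synthetic data: "real stock prices" follow a random walk, and "real money
# demand" is cointegrated with them through a stationary deviation.
rsp = [0.0]
for _ in range(T - 1):
    rsp.append(rsp[-1] + random.gauss(0, 1))
beta_true, m, u = 0.5, [], 0.0
for t in range(T):
    u = 0.7 * u + random.gauss(0, 0.2)        # stationary AR(1) deviation
    m.append(beta_true * rsp[t] + u)

def ols(y, x):
    """One-regressor OLS; returns (slope, intercept)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx)**2 for xi in x))
    return b, my - b * mx

# Step 1: long-run (cointegrating) regression  m_t = a + b*rsp_t + u_t
b, a = ols(m, rsp)
resid = [m[t] - a - b * rsp[t] for t in range(T)]

# Step 2: error-correction regression  dm_t = gamma*resid_{t-1} + e_t
dm = [m[t] - m[t - 1] for t in range(1, T)]
gamma, _ = ols(dm, resid[:-1])
print(round(b, 2), round(gamma, 2))
```

A significantly negative error-correction coefficient (gamma) is what signals that deviations from the long-run relation are corrected over time, which is the evidence of co-movement the abstract refers to.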
Kanban simulation model for production process optimization
Directory of Open Access Journals (Sweden)
Golchev Riste
2015-01-01
A long time has passed since the KANBAN system was established as an efficient method for coping with excessive inventory. Still, the possibilities for its improvement through integration with other approaches should be investigated further. The basic research challenge of this paper is to present the benefits of KANBAN implementation supported with Discrete Event Simulation (DES). In that direction, the basics of the KANBAN system are first presented, with emphasis on the information and material flow, together with a methodology for implementation of the KANBAN system. An analysis of combining simulation with this methodology is presented. The paper concludes with a practical example which shows that, through understanding the philosophy of the KANBAN implementation methodology and the simulation methodology, a simulation model can be created which can serve as a basis for a variety of experiments that can be conducted within a short period of time, resulting in production process optimization.
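A minimal discrete-event sketch of a kanban-controlled line shows the core mechanism: the card count caps work-in-progress between two stations. The two deterministic cycle times and the card count below are invented, not taken from the paper:

```python
import heapq

def simulate_kanban(n_cards, t_prod=1.0, t_cons=1.5, horizon=1000.0):
    """Deterministic two-station kanban line (a minimal DES sketch).

    The upstream station may only start a part when a kanban card is free;
    the downstream station releases the card when it finishes the part.
    """
    buffer, free_cards, finished, peak_wip = 0, n_cards, 0, 0
    prod_busy = cons_busy = False
    events = []                                    # (time, kind) priority queue

    def try_start(now):
        nonlocal prod_busy, cons_busy, free_cards, buffer
        if not prod_busy and free_cards > 0:       # a card authorizes production
            free_cards -= 1
            prod_busy = True
            heapq.heappush(events, (now + t_prod, "prod"))
        if not cons_busy and buffer > 0:           # downstream pulls from buffer
            buffer -= 1
            cons_busy = True
            heapq.heappush(events, (now + t_cons, "cons"))

    try_start(0.0)
    while events:
        now, kind = heapq.heappop(events)
        if now > horizon:
            break
        if kind == "prod":
            prod_busy = False
            buffer += 1
            peak_wip = max(peak_wip, buffer)
        else:
            cons_busy = False
            finished += 1
            free_cards += 1                        # kanban card returns upstream
        try_start(now)
    return finished, peak_wip

finished, peak = simulate_kanban(n_cards=3)
print(finished, peak)
```

With the slower downstream station as the bottleneck, throughput settles to roughly one part per 1.5 time units, while buffered inventory never exceeds the card count, which is exactly the inventory cap kanban is meant to enforce.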
Vermont Yankee simulator BOP model upgrade
International Nuclear Information System (INIS)
Alejandro, R.; Udbinac, M.J.
2006-01-01
The Vermont Yankee simulator has undergone significant changes in the 20 years since the original order was placed. After the move from the original Unix to MS Windows environment, and upgrade to the latest version of SimPort, now called MASTER, the platform was set for an overhaul and replacement of major plant system models. Over a period of a few months, the VY simulator team, in partnership with WSC engineers, replaced outdated legacy models of the main steam, condenser, condensate, circulating water, feedwater and feedwater heaters, and main turbine and auxiliaries. The timing was ideal, as the plant was undergoing a power up-rate, so the opportunity was taken to replace the legacy models with industry-leading, true on-line object oriented graphical models. Due to the efficiency of design and ease of use of the MASTER tools, VY staff performed the majority of the modeling work themselves with great success, with only occasional assistance from WSC, in a relatively short time-period, despite having to maintain all of their 'regular' simulator maintenance responsibilities. This paper will provide a more detailed view of the VY simulator, including how it is used and how it has benefited from the enhancements and upgrades implemented during the project. (author)
Langenbrunner, B.; Neelin, J.; Meyerson, J.
2011-12-01
The accurate representation of precipitation is a recurring issue in global climate models, especially in the tropics. Poor skill in modeling the variability and climate teleconnections associated with El Niño/Southern Oscillation (ENSO) also persisted in the latest Coupled Model Intercomparison Project (CMIP) campaigns. Observed ENSO precipitation teleconnections provide a standard by which we can judge a given model's ability to reproduce precipitation and dynamic feedback processes originating in the tropical Pacific. Using CMIP3 Atmospheric Model Intercomparison Project (AMIP) runs as a baseline, we compare precipitation teleconnections between models and observations, and we evaluate these results against available CMIP5 historical and AMIP runs. Using AMIP simulations restricts evaluation to the atmospheric response, as sea surface temperatures (SSTs) in AMIP are prescribed by observations. We use a rank correlation between ENSO SST indices and precipitation to define teleconnections, since this method is robust to outliers and appropriate for non-Gaussian data. Spatial correlations of the modeled and observed teleconnections are then evaluated. We look at these correlations in regions of strong precipitation teleconnections, including equatorial S. America, the "horseshoe" region in the western tropical Pacific, and southern N. America. For each region and season, we create a "normalized projection" of a given model's teleconnection pattern onto that of the observations, a metric that assesses the quality of regional pattern simulations while rewarding signals of correct sign over the region. Comparing this to an area-averaged (i.e., more generous) metric suggests models do better when restrictions on exact spatial dependence are loosened and conservation constraints apply. Model fidelity in regional measures remains far from perfect, suggesting intrinsic issues with the models' regional sensitivities in moist processes.
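The rank correlation used to define teleconnections is Spearman's coefficient: a Pearson correlation computed on ranks, which is what makes it robust to outliers and non-Gaussian data. A self-contained sketch with toy numbers (the SST index and rainfall values are invented, not observational data):

```python
def ranks(xs):
    """Ranks (1-based); tied values share the average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx)**2 for a in rx) * sum((b - my)**2 for b in ry))**0.5
    return num / den

# A toy ENSO SST index vs. seasonal rainfall at one grid point:
nino34 = [26.2, 27.9, 25.1, 28.4, 26.8, 27.3]
precip = [3.1, 7.2, 2.0, 9.5, 4.4, 5.0]
print(round(spearman(nino34, precip), 3))   # monotone association gives 1.0
```

Because only the ordering of values matters, a single extreme rainfall season cannot dominate the teleconnection estimate the way it would with an ordinary Pearson correlation.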
Mathematical model simulation of a diesel spill in the Potomac River
International Nuclear Information System (INIS)
Feng, S.S.; Nicolette, J.P.; Markarian, R.K.
1995-01-01
A mathematical modeling technique was used to simulate the transport and fate of approximately 400,000 gallons of spilled diesel fuel and its impact on the aquatic biota in the Potomac River and Sugarland Run. Sugarland Run is a tributary about 21 miles upstream from Washington, DC. The mass balance model predicted the dynamic (spatial and temporal) distribution of spilled oil. The distributions were presented in terms of surface oil slick and sheen, dissolved and undissolved total petroleum hydrocarbons (TPH) in the water surface, water column, river sediments, shoreline and atmosphere. The processes simulated included advective movement, dispersion, dissolution, evaporation, volatilization, sedimentation, shoreline deposition, biodegradation, and removal of oil from cleanup operations. The model predicted that the spill resulted in a water column dissolved TPH concentration range of 0.05 to 18.6 ppm in Sugarland Run. The spilled oil traveled 10 miles along Sugarland Run before it reached the Potomac River. At the Potomac River, the water column TPH concentration was predicted to have decreased to the range of 0.0 to 0.43 ppm. These levels were consistent with field samples. To assess biological injury, the model used 4, 8, 24, 48, and 96-hr LC values in computing the fish injury caused by the fuel oil. The model used the maximum running average of dissolved TPH and exposure time to predict levels of fish mortality in the range of 38 to 40% in Sugarland Run. This prediction was consistent with field fisheries surveys. The model also computed the amount of spilled oil that adsorbed and settled into the river sediments
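The injury computation rests on a maximum running average of dissolved TPH over a fixed exposure window, which is easy to sketch. The hourly concentrations below are invented illustration values, not data from the Sugarland Run spill:

```python
def max_running_average(series, window):
    """Maximum running (moving) average over a fixed exposure window."""
    if len(series) < window:
        raise ValueError("series shorter than the averaging window")
    s = sum(series[:window])
    best = s
    for i in range(window, len(series)):
        s += series[i] - series[i - window]   # slide the window in O(1)
        best = max(best, s)
    return best / window

# Hourly dissolved-TPH concentrations (ppm) at one station, toy numbers:
tph = [0.2, 1.5, 6.0, 18.6, 12.3, 7.1, 3.0, 1.1, 0.6, 0.3]
peak_exposure = max_running_average(tph, 4)   # worst 4-hour exposure
print(peak_exposure)
```

Comparing this peak windowed exposure against the LC value for the matching duration (4, 8, 24, 48 or 96 hr) is the kind of lookup the model's mortality estimate implies.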
Running Club
2011-01-01
The cross country running season has started well this autumn with two events: the traditional CERN Road Race organized by the Running Club, which took place on Tuesday 5th October, followed by the ‘Cross Interentreprises’, a team event at the Evaux Sports Center, which took place on Saturday 8th October. Participation in the CERN Road Race was slightly down on last year, with 65 runners; however, the participants maintained the tradition of a competitive yet friendly atmosphere. An ample supply of refreshments before the prize-giving was appreciated by all after the race. Many thanks to all the runners and volunteers who ensured another successful race. The results can be found here: https://espace.cern.ch/Running-Club/default.aspx CERN participated successfully in the Cross Interentreprises with very good results. The teams succeeded in obtaining 2nd and 6th place in the Men's category, and 2nd place in the Mixed category. Congratulations to all. See results here: http://www.c...
Christophe Delaere
2013-01-01
The focus of Run Coordination during LS1 is to monitor closely the advance of maintenance and upgrade activities, to smooth interactions between subsystems and to ensure that all are ready in time to resume operations in 2015 with a fully calibrated and understood detector. After electricity and cooling were restored to all equipment, at about the time of the last CMS week, recommissioning activities were resumed for all subsystems. On 7 October, DCS shifts began 24/7 to allow subsystems to remain on to facilitate operations. That culminated with the Global Run in November (GriN), which took place as scheduled during the week of 4 November. The GriN has been the first centrally managed operation since the beginning of LS1, and involved all subdetectors but the Pixel Tracker, which is presently in a lab upstairs. All nights were therefore dedicated to long stable runs with as many subdetectors as possible. Among the many achievements in that week, three items may be highlighted. First, the Strip...
M. Chamizo
2012-01-01
On 17th January, as soon as the services were restored after the technical stop, sub-systems started powering on. Since then, we have been running 24/7 with reduced shift crew — Shift Leader and DCS shifter — to allow sub-detectors to perform calibration, noise studies, test software upgrades, etc. On 15th and 16th February, we had the first Mid-Week Global Run (MWGR) with the participation of most sub-systems. The aim was to bring CMS back to operation and to ensure that we could run after the winter shutdown. All sub-systems participated in the readout and the trigger was provided by a fraction of the muon systems (CSC and the central RPC wheel). The calorimeter triggers were not available due to work on the optical link system. Initial checks of different distributions from Pixels, Strips, and CSC confirmed things look all right (signal/noise, number of tracks, phi distribution…). High-rate tests were done to test the new CSC firmware to cure the low efficiency ...
Simulation modeling and analysis in safety. II
International Nuclear Information System (INIS)
Ayoub, M.A.
1981-01-01
The paper introduces and illustrates simulation modeling as a viable approach for dealing with complex issues and decisions in safety and health. The author details two studies: evaluation of employee exposure to airborne radioactive materials and effectiveness of the safety organization. The first study seeks to define a policy to manage a facility used in testing employees for radiation contamination. An acceptable policy is one that would permit the testing of all employees as defined under regulatory requirements, while not exceeding available resources. The second study evaluates the relationship between safety performance and the characteristics of the organization, its management, its policy, and communication patterns among various functions and levels. Both studies use models where decisions are reached based on the prevailing conditions and occurrence of key events within the simulation environment. Finally, several problem areas suitable for simulation studies are highlighted. (Auth.)
Modeling salmonella Dublin into the dairy herd simulation model Simherd
DEFF Research Database (Denmark)
Kudahl, Anne Braad
2010-01-01
Infection with Salmonella Dublin in the dairy herd, effects of the infection, and relevant control measures are currently being modeled into the dairy herd simulation model called Simherd. The aim is to compare, by simulation, the effects of different control strategies against Salmonella Dublin on both within-herd prevalence and economy. The project is part of a larger national project, "Salmonella 2007 - 2011", with the main objective of reducing the prevalence of Salmonella Dublin in Danish dairy herds. Results of the simulations will therefore be used for decision support in the national surveillance and eradication program against Salmonella Dublin. Basic structures of the model are programmed and will be presented at the workshop. The model is in a phase of face-validation by a group of Salmonella...
An Advanced HIL Simulation Battery Model for Battery Management System Testing
DEFF Research Database (Denmark)
Barreras, Jorge Varela; Fleischer, Christian; Christensen, Andreas Elkjær
2016-01-01
Developers and manufacturers of battery management systems (BMSs) require extensive testing of controller hardware (HW) and software (SW), such as the analog front-end and the performance of generated control code. In comparison with tests conducted on real batteries, tests conducted on a state-of-the-art hardware-in-the-loop (HIL) simulator can be more cost and time effective, easier to reproduce, and safer beyond the normal range of operation, especially at early stages in the development process or during fault insertion. In this paper, an HIL simulation battery model is developed for purposes of BMS testing on a commercial HIL simulator. A multicell electrothermal Li-ion battery (LIB) model is integrated in a system-level simulation. Then, the LIB system model is converted to C code and run in real time with the HIL simulator. Finally, in order to demonstrate the capabilities of the setup...
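The kind of electrothermal cell model that gets compiled and run in real time can be sketched in a few lines. This is a single-cell, first-order equivalent-circuit toy, not the paper's multicell model, and every parameter (capacity, internal resistance, OCV curve, thermal constants) is an illustrative assumption:

```python
def simulate_cell(current_a, dt_s, steps, capacity_ah=2.5, r_int=0.02):
    """First-order equivalent-circuit cell: OCV(SOC) minus IR drop, plus a
    crude lumped thermal state. All parameter values are illustrative."""
    soc, temp_c = 1.0, 25.0
    log = []
    for _ in range(steps):
        soc = max(0.0, soc - current_a * dt_s / (capacity_ah * 3600.0))
        ocv = 3.0 + 1.2 * soc                   # toy linear OCV curve
        v_term = ocv - current_a * r_int        # terminal voltage under load
        heat_w = current_a**2 * r_int           # Joule heating only
        temp_c += dt_s * (heat_w - 0.01 * (temp_c - 25.0)) / 50.0  # C_th = 50 J/K
        log.append((v_term, soc, temp_c))
    return log

log = simulate_cell(current_a=2.5, dt_s=1.0, steps=1800)   # 1C discharge, 30 min
v, soc, t = log[-1]
print(round(v, 3), round(soc, 3), round(t, 2))
```

Stepping a model like this at a fixed real-time rate, while the BMS under test reads the simulated voltage, current and temperature, is the essence of the HIL arrangement the abstract describes.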
A mathematical model for the simulation of thermal transients in the water loop of IPEN
International Nuclear Information System (INIS)
Pontedeiro, A.C.
1980-01-01
A mathematical model for simulation of thermal transients in the water loop at the Instituto de Pesquisas Energeticas e Nucleares, Sao Paulo, Brazil, is developed. The model is based on energy equations applied to the components of the experimental water loop. The resulting system of non-linear first-order differential equations and non-linear algebraic equations is solved with the IBM 'System/360 Continuous System Modeling Program' (CSMP). The computer running time is optimized and a typical simulation of the water loop is executed. (Author) [pt]
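CSMP-style component energy balances translate directly into a small explicit integration loop. The sketch below models one lumped loop component with an assumed energy balance, M*cp*dT/dt = W*cp*(T_in - T) - UA*(T - T_amb), integrated by explicit Euler; the flow rate, mass, UA and the step change in inlet temperature are all invented values:

```python
def loop_transient(t_in_of_time, flow_kg_s=2.0, mass_kg=500.0, cp=4186.0,
                   ua=800.0, t_amb=25.0, dt=1.0, steps=3600):
    """Lumped energy balance for one water-loop component:
         M*cp*dT/dt = W*cp*(T_in - T) - UA*(T - T_amb)
    integrated with explicit Euler. All values are illustrative."""
    t = t_amb
    hist = [t]
    for n in range(steps):
        t_in = t_in_of_time(n * dt)
        dTdt = (flow_kg_s * cp * (t_in - t) - ua * (t - t_amb)) / (mass_kg * cp)
        t += dt * dTdt
        hist.append(t)
    return hist

# Thermal transient: a step change in inlet temperature to 80 C at t = 0.
hist = loop_transient(lambda time: 80.0)
print(round(hist[-1], 2))
```

After a few time constants the outlet temperature settles at the steady-state balance between advected heat and losses, slightly below the inlet temperature; chaining several such components gives the loop-wide transient model.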
A universal simulator for ecological models
DEFF Research Database (Denmark)
Holst, Niels
2013-01-01
Software design is an often neglected issue in ecological models, even though bad software design often becomes a hindrance for re-using, sharing and even grasping an ecological model. In this paper, the methodology of agile software design was applied to the domain of ecological models, and thus the principles for a universal design of ecological models were arrived at. To exemplify this design, the open-source software Universal Simulator was constructed using C++ and XML and is provided as a resource for inspiration.
Biological transportation networks: Modeling and simulation
Albi, Giacomo
2015-09-15
We present a model for biological network formation originally introduced by Cai and Hu [Adaptation and optimization of biological transport networks, Phys. Rev. Lett. 111 (2013) 138701]. The modeling of fluid transportation (e.g., leaf venation and angiogenesis) and ion transportation networks (e.g., neural networks) is explained in detail and basic analytical features like the gradient flow structure of the fluid transportation network model and the impact of the model parameters on the geometry and topology of network formation are analyzed. We also present a numerical finite-element based discretization scheme and discuss sample cases of network formation simulations.
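The adaptation dynamics can be sketched on the smallest possible network: two parallel edges between one source and one sink, with flow dividing in proportion to conductance. The reinforcement rule used here, dC/dt = alpha*Q^2 - b*C, is a simplified stand-in for the published adaptation law, and all parameter values are invented:

```python
def adapt_network(c1, c2, current=1.0, alpha=1.0, decay=0.2, dt=0.01, steps=5000):
    """Two parallel edges between a source and a sink. Flow divides in
    proportion to conductance; each edge adapts by an assumed rule
    dC/dt = alpha*Q**2 - decay*C (quadratic reinforcement, linear decay)."""
    for _ in range(steps):
        total = c1 + c2
        q1 = current * c1 / total       # Kirchhoff split of the total current
        q2 = current * c2 / total
        c1 += dt * (alpha * q1**2 - decay * c1)
        c2 += dt * (alpha * q2**2 - decay * c2)
    return c1, c2

c1, c2 = adapt_network(1.0, 0.9)        # slight initial asymmetry
print(round(c1, 3), round(c2, 3))
```

The slightly stronger edge carries slightly more flow, is reinforced more, and eventually wins all the flow while the weaker edge decays away. This winner-take-all pruning of near-redundant edges is the basic topology-selection effect analyzed in the network formation model.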
Reproducibility in Computational Neuroscience Models and Simulations
McDougal, Robert A.; Bulanova, Anna S.; Lytton, William W.
2016-01-01
Objective: Like all scientific research, computational neuroscience research must be reproducible. Big data science, including simulation research, cannot depend exclusively on journal articles as the method to provide the sharing and transparency required for reproducibility. Methods: Ensuring model reproducibility requires the use of multiple standard software practices and tools, including version control, strong commenting and documentation, and code modularity. Results: Building on these standard practices, model sharing sites and tools have been developed that fit into several categories: (1) standardized neural simulators; (2) shared computational resources; (3) declarative model descriptors, ontologies and standardized annotations; (4) model sharing repositories and sharing standards. Conclusion: A number of complementary innovations have been proposed to enhance sharing, transparency and reproducibility. The individual user can be encouraged to make use of version control, commenting, documentation and modularity in development of models. The community can help by requiring model sharing as a condition of publication and funding. Significance: Model management will become increasingly important as multiscale models become larger, more detailed and correspondingly more difficult to manage by any single investigator or single laboratory. Additional big data management complexity will come as the models become more useful in interpreting experiments, thus increasing the need to ensure clear alignment between modeling data, both parameters and results, and experiment. PMID:27046845
A SIMULATION MODEL OF THE GAS COMPLEX
Directory of Open Access Journals (Sweden)
Sokolova G. E.
2016-06-01
The article considers the dynamics of gas production in Russia, the structure of sales in different market segments, and the comparative dynamics of selling prices in these segments. It addresses the problem of planning a gas complex with a simulation model that makes it possible to estimate the efficiency of the project and to determine the stability region of the obtained solutions. The presented model takes into account repayment of the loan, making it possible to determine, from the first year of simulation, whether the loan can be repaid. The model object is a group of gas fields, characterized by the minimum flow rate above which the project is cost-effective. In determining the minimum flow rate, the discount rate is taken as the weighted average cost of debt and equity, including risk premiums; it also serves as the lower barrier for the internal rate of return, below which the project is rejected as ineffective. Analysis of the dynamics, together with expert evaluation, determines the intervals of variation of the simulated parameters, such as the gas price and the time at which the gas complex reaches projected capacity. Monte Carlo simulation over these parameters yields, for each random realization of the model, the minimum cost-effective well flow rate, and also allows the stability region of the solution to be determined.
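The Monte Carlo screening logic can be sketched with a toy cash-flow model. Everything here is hypothetical: the capex, margin, price distribution and discount rate are invented numbers standing in for the article's project data:

```python
import random

def npv(rate, cashflows):
    """Net present value of yearly cash flows; cashflows[0] falls at year 0."""
    return sum(cf / (1 + rate)**t for t, cf in enumerate(cashflows))

def project_cashflows(flow, price, capex=1000.0, opex_per_unit=1.5, years=10):
    """Toy gas project: fixed capex at year 0, then a margin on annual output."""
    return [-capex] + [flow * (price - opex_per_unit)] * years

def success_probability(flow, discount=0.12, trials=2000, seed=1):
    """Monte Carlo over an uncertain gas price (triangular, illustrative)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        price = rng.triangular(2.0, 5.0, 3.0)    # low, high, mode
        if npv(discount, project_cashflows(flow, price)) >= 0.0:
            wins += 1
    return wins / trials

p_low, p_high = success_probability(flow=80.0), success_probability(flow=200.0)
print(p_low, p_high)
```

Sweeping the flow rate and locating where the success probability crosses an acceptable threshold gives exactly the "minimum cost-effective flow rate" and a feel for the stability region of the decision; using the weighted cost of capital as the discount rate makes NPV >= 0 equivalent to IRR clearing that lower barrier.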
Object Oriented Modelling and Dynamical Simulation
DEFF Research Database (Denmark)
Wagner, Falko Jens; Poulsen, Mikael Zebbelin
1998-01-01
This report with appendix describes the work done in a master project at DTU. The goal of the project was to develop a concept for simulation of dynamical systems based on object-oriented methods. The result was a library of C++ classes, for use both when building component-based models and when...
Advanced feeder control using fast simulation models
Verheijen, O.S.; Op den Camp, O.M.G.C.; Beerkens, R.G.C.; Backx, A.C.P.M.; Huisman, L.; Drummond, C.H.
2005-01-01
For the automatic control of glass quality in glass production, the relation between process variables and product (glass) quality and process conditions/process input parameters must be known in detail. So far, detailed 3-D glass melting simulation models were used to predict the effect of process
Modeling and Simulating Virtual Anatomical Humans
Madehkhaksar, Forough; Luo, Zhiping; Pronost, Nicolas; Egges, Arjan
2014-01-01
This chapter presents human musculoskeletal modeling and simulation as a challenging field that lies between biomechanics and computer animation. One of the main goals of computer animation research is to develop algorithms and systems that produce plausible motion. On the other hand, the main
Agent Based Modelling for Social Simulation
Smit, S.K.; Ubink, E.M.; Vecht, B. van der; Langley, D.J.
2013-01-01
This document is the result of an exploratory project looking into the status of, and opportunities for Agent Based Modelling (ABM) at TNO. The project focussed on ABM applications containing social interactions and human factors, which we termed ABM for social simulation (ABM4SS). During the course
Thermohydraulic modeling and simulation of breeder reactors
International Nuclear Information System (INIS)
Agrawal, A.K.; Khatib-Rahbar, M.; Curtis, R.T.; Hetrick, D.L.; Girijashankar, P.V.
1982-01-01
This paper deals with the modeling and simulation of system-wide transients in LMFBRs. Unprotected events (i.e., the presumption of failure of the plant protection system) leading to core-melt are not considered in this paper. The existing computational capabilities in the area of protected transients in the US are noted. Various physical and numerical approximations that are made in these codes are discussed. Finally, the future direction in the area of model verification and improvements is discussed
International Nuclear Information System (INIS)
Francescone, David; Akula, Sujeet; Altunkaynak, Baris; Nath, Pran
2015-01-01
Sparticle mass hierarchies contain significant information regarding the origin and nature of supersymmetry breaking. The hierarchical patterns are severely constrained by electroweak symmetry breaking as well as by the astrophysical and particle physics data. They are further constrained by the Higgs boson mass measurement. The sparticle mass hierarchies can be used to generate simplified models consistent with the high scale models. In this work we consider supergravity models with universal boundary conditions for soft parameters at the unification scale as well as supergravity models with nonuniversalities and delineate the list of sparticle mass hierarchies for the five lightest sparticles. Simplified models can be obtained by a truncation of these, retaining a smaller set of lightest particles. The mass hierarchies and their truncated versions enlarge significantly the list of simplified models currently being used in the literature. Benchmarks for a variety of supergravity unified models appropriate for SUSY searches at future colliders are also presented. The signature analysis of two benchmark models has been carried out and a discussion of the searches needed for their discovery at LHC Run-II is given. An analysis of the spin-independent neutralino-proton cross section exhibiting the Higgs boson mass dependence and the hierarchical patterns is also carried out. It is seen that a knowledge of the spin-independent neutralino-proton cross section and the neutralino mass will narrow down the list of the allowed sparticle mass hierarchies. Thus dark matter experiments along with analyses for the LHC Run-II will provide strong clues to the nature of symmetry breaking at the unification scale.
Parallel runs of a large air pollution model on a grid of Sun computers
DEFF Research Database (Denmark)
Alexandrov, V.N.; Owczarz, W.; Thomsen, Per Grove
2004-01-01
Large-scale air pollution models can successfully be used in different environmental studies. These models are described mathematically by systems of partial differential equations. Splitting procedures followed by discretization of the spatial derivatives lead to several large systems...
1979-12-01
An econometric model is developed which provides long-run policy analysis and forecasting of annual trends, for U.S. auto stock, new sales, and their composition by auto size-class. The concept of "desired" (equilibrium) stock is introduced. "Desired...
Modeling Supermassive Black Holes in Cosmological Simulations
Tremmel, Michael
My thesis work has focused on improving the implementation of supermassive black hole (SMBH) physics in cosmological hydrodynamic simulations. SMBHs are ubiquitous in massive galaxies, as well as bulge-less galaxies and dwarfs, and are thought to be a critical component to massive galaxy evolution. Still, much is unknown about how SMBHs form, grow, and affect their host galaxies. Cosmological simulations are an invaluable tool for understanding the formation of galaxies, self-consistently tracking their evolution with realistic merger and gas accretion histories. SMBHs are often modeled in these simulations (generally as a necessity to produce realistic massive galaxies), but their implementations are commonly simplified in ways that can limit what can be learned. Current and future observations are opening new windows into the lifecycle of SMBHs and their host galaxies, but require more detailed, physically motivated simulations. Within the novel framework I have developed, SMBHs 1) are seeded at early times without a priori assumptions of galaxy occupation, 2) grow in a way that accounts for the angular momentum of gas, and 3) experience realistic orbital evolution. I show how this model, properly tuned with a novel parameter optimization technique, results in realistic galaxies and SMBHs. Utilizing the unique ability of these simulations to capture the dynamical evolution of SMBHs, I present the first self-consistent prediction for the formation timescales of close SMBH pairs, precursors to SMBH binaries and merger events potentially detected by future gravitational wave experiments.
Good Models Gone Bad: Quantifying and Predicting Parameter-Induced Climate Model Simulation Failures
Lucas, D. D.; Klein, R.; Tannahill, J.; Brandon, S.; Covey, C. C.; Domyancic, D.; Ivanova, D. P.
2012-12-01
Simulations using IPCC-class climate models are subject to fail or crash for a variety of reasons. Statistical analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation failures of the Parallel Ocean Program (POP2). About 8.5% of our POP2 runs failed for numerical reasons at certain combinations of parameter values. We apply support vector machine (SVM) classification from the fields of pattern recognition and machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. The SVM classifiers readily predict POP2 failures in an independent validation ensemble, and are subsequently used to determine the causes of the failures via a global sensitivity analysis. Four parameters related to ocean mixing and viscosity are identified as the major sources of POP2 failures. Our method can be used to improve the robustness of complex scientific models to parameter perturbations and to better steer UQ ensembles. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013 (UCRL LLNL-ABS-569112).
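The failure-prediction idea generalises well beyond POP2. The sketch below substitutes a plain logistic classifier for the paper's SVM (simpler, but the same principle: learn P(failure | parameters) from past runs and use it to steer future ensembles) and uses two synthetic parameters standing in for the 18 real ones; nothing here reproduces the actual POP2 study.

```python
import math, random

random.seed(1)

def sigmoid(z):
    z = max(-60.0, min(60.0, z))          # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic stand-in for the UQ ensemble: two "mixing/viscosity" parameters per
# run; a run crashes when their combination crosses a threshold (plus noise).
def run_crashes(x1, x2):
    return 1 if x1 + 2.0 * x2 + random.gauss(0.0, 0.1) > 1.5 else 0

params = [(random.random(), random.random()) for _ in range(500)]
crashed = [run_crashes(x1, x2) for x1, x2 in params]

# Fit a linear decision boundary to the crash labels with plain SGD.
w1 = w2 = b = 0.0
for _ in range(500):
    for (x1, x2), y in zip(params, crashed):
        g = sigmoid(w1 * x1 + w2 * x2 + b) - y
        w1 -= 0.1 * g * x1
        w2 -= 0.1 * g * x2
        b  -= 0.1 * g

acc = sum((sigmoid(w1 * x1 + w2 * x2 + b) > 0.5) == bool(y)
          for (x1, x2), y in zip(params, crashed)) / len(crashed)
print(f"training accuracy: {acc:.2f}")
```

The fitted weights also play the role of the paper's sensitivity analysis: their relative magnitudes indicate which parameters drive failures.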
Advances in NLTE Modeling for Integrated Simulations
Energy Technology Data Exchange (ETDEWEB)
Scott, H A; Hansen, S B
2009-07-08
The last few years have seen significant progress in constructing the atomic models required for non-local thermodynamic equilibrium (NLTE) simulations. Along with this has come an increased understanding of the requirements for accurately modeling the ionization balance, energy content and radiative properties of different elements for a wide range of densities and temperatures. Much of this progress is the result of a series of workshops dedicated to comparing the results from different codes and computational approaches applied to a series of test problems. The results of these workshops emphasized the importance of atomic model completeness, especially in doubly excited states and autoionization transitions, to calculating ionization balance, and the importance of accurate, detailed atomic data to producing reliable spectra. We describe a simple screened-hydrogenic model that calculates NLTE ionization balance with surprising accuracy, at a low enough computational cost for routine use in radiation-hydrodynamics codes. The model incorporates term splitting, Δn = 0 transitions, and approximate UTA widths for spectral calculations, with results comparable to those of much more detailed codes. Simulations done with this model have been increasingly successful at matching experimental data for laser-driven systems and hohlraums. Accurate and efficient atomic models are just one requirement for integrated NLTE simulations. Coupling the atomic kinetics to hydrodynamics and radiation transport constrains both discretizations and algorithms to retain energy conservation, accuracy and stability. In particular, the strong coupling between radiation and populations can require either very short timesteps or significantly modified radiation transport algorithms to account for NLTE material response. Considerations such as these continue to provide challenges for NLTE simulations.
Speeding up N -body simulations of modified gravity: chameleon screening models
Energy Technology Data Exchange (ETDEWEB)
Bose, Sownak; Li, Baojiu; He, Jian-hua; Llinares, Claudio [Institute for Computational Cosmology, Department of Physics, Durham University, Durham DH1 3LE (United Kingdom); Barreira, Alexandre [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany); Hellwing, Wojciech A.; Koyama, Kazuya [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom); Zhao, Gong-Bo, E-mail: sownak.bose@durham.ac.uk, E-mail: baojiu.li@durham.ac.uk, E-mail: barreira@mpa-garching.mpg.de, E-mail: jianhua.he@durham.ac.uk, E-mail: wojciech.hellwing@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk, E-mail: claudio.llinares@durham.ac.uk, E-mail: gbzhao@nao.cas.cn [National Astronomy Observatories, Chinese Academy of Science, Beijing, 100012 (China)
2017-02-01
We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
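The key speed-up is replacing iterative Newton-Gauss-Seidel sweeps with a discretised equation written in a form that is analytically solvable per grid cell. As a toy illustration only (not the paper's actual f(R) discretisation), a depressed cubic u³ + pu + q = 0 can be solved in closed form where an iterative solver would need many sweeps:

```python
import math

def solve_cubic(p, q):
    """One real root of u**3 + p*u + q = 0 in closed form (Cardano), the kind
    of per-cell analytic solve that can replace Newton-Gauss-Seidel sweeps."""
    disc = (q / 2.0) ** 2 + (p / 3.0) ** 3
    if disc >= 0.0:
        s = math.sqrt(disc)
        cbrt = lambda x: math.copysign(abs(x) ** (1.0 / 3.0), x)  # real cube root
        return cbrt(-q / 2.0 + s) + cbrt(-q / 2.0 - s)
    # three real roots: trigonometric form (one branch shown)
    r = math.sqrt(-p / 3.0)
    theta = math.acos(3.0 * q / (2.0 * p * r))
    return 2.0 * r * math.cos(theta / 3.0 - 2.0 * math.pi / 3.0)

def newton(p, q, u=1.0, iters=50):
    """The iterative alternative that the closed form avoids."""
    for _ in range(iters):
        u -= (u**3 + p * u + q) / (3.0 * u**2 + p)
    return u

print(round(solve_cubic(1.0, -2.0), 6), round(newton(1.0, -2.0), 6))  # both → 1.0
```

In a simulation, the per-cell closed form removes the convergence-rate bottleneck entirely, which is where the quoted 5x-20x speed-ups come from.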
Speeding up N-body simulations of modified gravity: chameleon screening models
Bose, Sownak; Li, Baojiu; Barreira, Alexandre; He, Jian-hua; Hellwing, Wojciech A.; Koyama, Kazuya; Llinares, Claudio; Zhao, Gong-Bo
2017-02-01
We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
Speeding up N -body simulations of modified gravity: chameleon screening models
International Nuclear Information System (INIS)
Bose, Sownak; Li, Baojiu; He, Jian-hua; Llinares, Claudio; Barreira, Alexandre; Hellwing, Wojciech A.; Koyama, Kazuya; Zhao, Gong-Bo
2017-01-01
We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
Mesoscopic modelling and simulation of soft matter.
Schiller, Ulf D; Krüger, Timm; Henrich, Oliver
2017-12-20
The deformability of soft condensed matter often requires modelling of hydrodynamical aspects to gain quantitative understanding. This, however, requires specialised methods that can resolve the multiscale nature of soft matter systems. We review a number of the most popular simulation methods that have emerged, such as Langevin dynamics, dissipative particle dynamics, multi-particle collision dynamics, sometimes also referred to as stochastic rotation dynamics, and the lattice-Boltzmann method. We conclude this review with a short glance at current compute architectures for high-performance computing and community codes for soft matter simulation.
Possibilities of water run-off models by using geological information systems
International Nuclear Information System (INIS)
Oeverland, H.; Kleeberg, H.B.
1992-01-01
The movement of water in a given region is determined by a number of regional factors, e.g. land use and topography. However, the available precipitation-runoff models take little account of this regional information. Geological information systems, on the other hand, are instruments for efficient management, presentation and evaluation of local information, so the best approach would be a combination of the two types of models. The requirements to be met by such a system are listed; they result from the processes to be modelled (continuous runoff, high-water runoff, mass transfer) but also from the available data and their acquisition and processing. Ten of the best-known precipitation-runoff models are presented and evaluated on the basis of the requirements listed. The basic concept of an integrated model is outlined, and additional modules required for modelling are defined. (orig./BBR) [de
Numerical model simulation of atmospheric coolant plumes
International Nuclear Information System (INIS)
Gaillard, P.
1980-01-01
The effect of humid atmospheric coolants on the atmosphere is simulated by means of a three-dimensional numerical model. The atmosphere is defined by its natural vertical profiles of horizontal velocity, temperature, pressure and relative humidity. Effluent discharge is characterised by its vertical velocity and the temperature of air saturated with water vapour. The subject of investigation is the area in the vicinity of the point of discharge, with due allowance for the wake effect of the tower and buildings and, where applicable, wind veer with altitude. The model equations express the conservation relationships for momentum, energy, total mass and water mass, for an incompressible fluid behaving in accordance with the Boussinesq assumptions. Condensation is represented by a simple thermodynamic model, and turbulent fluxes are simulated by introduction of turbulent viscosity and diffusivity data based on in-situ and experimental water model measurements. The three-dimensional problem expressed in terms of the primitive variables (u, v, w, p) is governed by an elliptic equation system which is solved numerically by application of an explicit time-marching algorithm in order to predict the steady-flow velocity distribution, temperature, water vapour concentration and the liquid-water concentration defining the visible plume. Windstill conditions are simulated by a program processing the elliptic equations in an axisymmetrical revolution coordinate system. The calculated visible plumes are compared with plumes observed on site with a view to validating the models [fr
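The explicit time-marching idea, advancing the fields step by step until they settle into the steady flow, can be sketched in one dimension (the actual model is three-dimensional and couples momentum, humidity and buoyancy; the grid and diffusivity values here are made up):

```python
# Toy 1D analogue of marching a diffusion equation to its steady state:
# a temperature excess spreading from a fixed source at the discharge point.
nx, dx, dt, kappa = 50, 1.0, 0.2, 1.0     # kappa*dt/dx**2 = 0.2 <= 0.5 (stable)
T = [0.0] * nx
T[0] = 1.0                                 # source held fixed; far end held at 0
for _ in range(20000):                     # march until transients have decayed
    Tn = T[:]
    for i in range(1, nx - 1):
        Tn[i] = T[i] + kappa * dt / dx**2 * (T[i + 1] - 2.0 * T[i] + T[i - 1])
    T = Tn
print(round(T[25], 3))  # → 0.49, the linear steady profile 1 - i/(nx - 1)
```

The stability constraint on the explicit step (kappa·dt/dx² ≤ 1/2) is the same kind of restriction the model's time-marching algorithm must respect.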
Multiphase reacting flows modelling and simulation
Marchisio, Daniele L
2007-01-01
The papers in this book describe the most widely applicable modeling approaches and are organized in six groups covering from fundamentals to relevant applications. In the first part, some fundamentals of multiphase turbulent reacting flows are covered. In particular the introduction focuses on basic notions of turbulence theory in single-phase and multi-phase systems as well as on the interaction between turbulence and chemistry. In the second part, models for the physical and chemical processes involved are discussed. Among other things, particular emphasis is given to turbulence modeling strategies for multiphase flows based on the kinetic theory for granular flows. Next, the different numerical methods based on Lagrangian and/or Eulerian schemes are presented. In particular the most popular numerical approaches of computational fluid dynamics codes are described (i.e., Direct Numerical Simulation, Large Eddy Simulation, and Reynolds-Averaged Navier-Stokes approach). The book will cover particle-based meth...
Software development infrastructure for the HYBRID modeling and simulation project
International Nuclear Information System (INIS)
Epiney, Aaron S.; Kinoshita, Robert A.; Kim, Jong Suk; Rabiti, Cristian; Greenwood, M. Scott
2016-01-01
One of the goals of the HYBRID modeling and simulation project is to assess the economic viability of hybrid systems in a market that contains renewable energy sources like wind. The idea is that it is possible for the nuclear plant to sell non-electric energy cushions, which absorb (at least partially) the volatility introduced by the renewable energy sources. This system is currently modeled in the Modelica programming language. To assess the economics of the system, an optimization procedure tries to find the minimal cost of electricity production. The RAVEN code is used as a driver for the whole problem. It is assumed that at this stage, the HYBRID modeling and simulation framework can be classified as non-safety “research and development” software. The associated quality level is Quality Level 3 software. This imposes low requirements on quality control, testing and documentation. The quality level could change as the application development continues. Despite the low quality requirement level, a workflow for the HYBRID developers has been defined that includes a coding standard and some documentation and testing requirements. The repository performs automated unit testing of contributed models. The automated testing is achieved via an open-source Python package called BuildingsPy from Lawrence Berkeley National Lab. BuildingsPy runs Modelica simulation tests using Dymola in an automated manner and generates and runs unit tests from Modelica scripts written by developers. In order to assure effective communication between the different national laboratories a biweekly videoconference has been set up, where developers can report their progress and issues. In addition, periodic face-to-face meetings are organized to discuss high-level strategy decisions with management. A second means of communication is the developer email list. This is a list to which everybody can send emails that will be received by the collective of the developers and managers
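The automated testing described (BuildingsPy driving Dymola) cannot run without those tools, but its core, re-running a model and comparing its trajectories against stored reference results within a tolerance, can be sketched generically. The model, parameters, and tolerance below are hypothetical stand-ins for a real Modelica regression test:

```python
def simulate(model, t_end=1.0, n=100):
    """Stand-in for a Modelica/Dymola run: integrate dy/dt = -k*y with explicit
    Euler and return the sampled trajectory."""
    dt = t_end / n
    y, out = model["y0"], []
    for _ in range(n):
        y += dt * (-model["k"] * y)
        out.append(y)
    return out

def check_against_reference(result, reference, rtol=1e-2):
    """Fail the regression test if any sample drifts beyond the tolerance."""
    return all(abs(a - b) <= rtol * max(abs(b), 1e-12)
               for a, b in zip(result, reference))

reference = simulate({"y0": 1.0, "k": 2.0})           # stored "golden" trajectory
candidate = simulate({"y0": 1.0, "k": 2.0})           # contributed model revision
print(check_against_reference(candidate, reference))  # True
```

A contributed change that alters the model's behaviour (e.g. a different `k`) would push samples outside the tolerance and fail the check, which is exactly what the repository's automated unit tests are for.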
Software development infrastructure for the HYBRID modeling and simulation project
Energy Technology Data Exchange (ETDEWEB)
Epiney, Aaron S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kinoshita, Robert A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kim, Jong Suk [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Greenwood, M. Scott [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2016-09-01
One of the goals of the HYBRID modeling and simulation project is to assess the economic viability of hybrid systems in a market that contains renewable energy sources like wind. The idea is that it is possible for the nuclear plant to sell non-electric energy cushions, which absorb (at least partially) the volatility introduced by the renewable energy sources. This system is currently modeled in the Modelica programming language. To assess the economics of the system, an optimization procedure tries to find the minimal cost of electricity production. The RAVEN code is used as a driver for the whole problem. It is assumed that at this stage, the HYBRID modeling and simulation framework can be classified as non-safety “research and development” software. The associated quality level is Quality Level 3 software. This imposes low requirements on quality control, testing and documentation. The quality level could change as the application development continues. Despite the low quality requirement level, a workflow for the HYBRID developers has been defined that includes a coding standard and some documentation and testing requirements. The repository performs automated unit testing of contributed models. The automated testing is achieved via an open-source Python package called BuildingsPy from Lawrence Berkeley National Lab. BuildingsPy runs Modelica simulation tests using Dymola in an automated manner and generates and runs unit tests from Modelica scripts written by developers. In order to assure effective communication between the different national laboratories a biweekly videoconference has been set up, where developers can report their progress and issues. In addition, periodic face-to-face meetings are organized to discuss high-level strategy decisions with management. A second means of communication is the developer email list. This is a list to which everybody can send emails that will be received by the collective of the developers and managers
Energy Technology Data Exchange (ETDEWEB)
Unterguggenberger, Peter; Salbrechter, Sebastian; Jauk, Thomas; Wimmer, Andreas [Technische Univ. Graz (Austria). Inst. fuer Verbrennungskraftmaschinen und Thermodynamik (IVT)
2012-11-01
Currently, all potential must be tapped in order to reach the increasingly tight CO₂ limits for vehicles. From the variety of possible options for reducing fuel consumption, the contribution of improved heat management should not be ignored, since increased friction during warm-up results in greater fuel consumption. Engine warm-up models that calculate thermal behavior and fuel consumption are a relatively inexpensive alternative to empirical measurements. In order to achieve satisfactory simulation results, exact modeling of thermal behavior as well as friction conditions is necessary. This paper identifies the demands placed on the individual submodels based on the requirements for precision that thermal warm-up models must meet. Before treating the friction model, it explains the development of the heat input model in detail. In addition, it presents the test program needed to establish and validate the simulation model with the required measurement accuracy. (orig.)
Advancing Material Models for Automotive Forming Simulations
International Nuclear Information System (INIS)
Vegter, H.; An, Y.; Horn, C.H.L.J. ten; Atzema, E.H.; Roelofsen, M.E.
2005-01-01
Simulations in the automotive industry need more advanced material models to achieve highly reliable forming and springback predictions. Conventional material models implemented in FEM simulation codes are not capable of describing the plastic material behaviour during monotonic strain paths with sufficient accuracy. Recently, ESI and Corus have co-operated on the implementation of an advanced material model in the FEM code PAMSTAMP 2G. This applies to the strain hardening model, the influence of strain rate, and the description of the yield locus in these models. A subsequent challenge is the description of the material after a change of strain path. The use of advanced high strength steels in the automotive industry requires a description of the plastic material behaviour of multiphase steels. The simplest variant is dual phase steel, consisting of a ferritic and a martensitic phase. Multiphase materials also contain a bainitic phase in addition to the ferritic and martensitic phases. More physical descriptions of strain hardening than simply fitted Ludwik/Nadai curves are necessary. Methods to predict the plastic behaviour of single-phase materials use a simple dislocation interaction model based only on the formed cell structures. At Corus, a new method is proposed for predicting the plastic behaviour of multiphase materials, which has to take into account hard phases that deform less easily. The resulting deformation gradients create geometrically necessary dislocations. Additional microstructural information, such as the morphology and size of hard-phase particles or grains, is necessary to derive strain hardening models for this type of material. Measurements available from the Numisheet benchmarks allow these models to be validated. At Corus, additional measured values are available from cross-die tests. This laboratory test can attain critical deformations through large variations in blank size and processing conditions. The tests are a powerful tool in optimising forming simulations prior
Integration of MATLAB Simulink(Registered Trademark) Models with the Vertical Motion Simulator
Lewis, Emily K.; Vuong, Nghia D.
2012-01-01
This paper describes the integration of MATLAB Simulink(Registered Trademark) models into the Vertical Motion Simulator (VMS) at NASA Ames Research Center. The VMS is a high-fidelity, large motion flight simulator that is capable of simulating a variety of aerospace vehicles. Integrating MATLAB Simulink models into the VMS needed to retain the development flexibility of the MATLAB environment and allow rapid deployment of model changes. The process developed at the VMS was used successfully in a number of recent simulation experiments. This accomplishment demonstrated that the model integrity was preserved, while working within the hard real-time run environment of the VMS architecture, and maintaining the unique flexibility of the VMS to meet diverse research requirements.
Modelling and simulation of thermal power plants
Energy Technology Data Exchange (ETDEWEB)
Eborn, J.
1998-02-01
Mathematical modelling and simulation are important tools when dealing with engineering systems that today are becoming increasingly more complex. Integrated production and recycling of materials are trends that give rise to heterogeneous systems, which are difficult to handle within one area of expertise. Model libraries are an excellent way to package engineering knowledge of systems and units to be reused by those who are not experts in modelling. Many commercial packages provide good model libraries, but they are usually domain-specific and closed. Heterogeneous, multi-domain systems require open model libraries written in general purpose modelling languages. This thesis describes a model database for thermal power plants written in the object-oriented modelling language OMOLA. The models are based on first principles. Subunits describe volumes with pressure and enthalpy dynamics and flows of heat or different media. The subunits are used to build basic units such as pumps, valves and heat exchangers which can be used to build system models. Several applications are described; a heat recovery steam generator, equipment for juice blending, steam generation in a sulphuric acid plant and a condensing steam plate heat exchanger. Model libraries for industrial use must be validated against measured data. The thesis describes how parameter estimation methods can be used for model validation. Results from a case-study on parameter optimization of a non-linear drum boiler model show how the technique can be used. 32 refs, 21 figs
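Parameter estimation for model validation amounts to minimising the error between simulated and measured outputs over the model parameters. A one-parameter sketch with made-up step-response "measurements" (the drum boiler study fits many parameters of a nonlinear model, but the principle is the same):

```python
import math

# Synthetic "measured" step response of a first-order system with tau = 2.0.
t = [0.1 * i for i in range(50)]
measured = [1.0 - math.exp(-ti / 2.0) for ti in t]

def sse(tau):
    """Sum of squared errors between the model output and the measurements."""
    return sum((1.0 - math.exp(-ti / tau) - m) ** 2 for ti, m in zip(t, measured))

# Golden-section search over the single parameter (any optimiser would do).
lo, hi = 0.5, 10.0
phi = (math.sqrt(5.0) - 1.0) / 2.0
for _ in range(60):
    a, b = hi - phi * (hi - lo), lo + phi * (hi - lo)
    if sse(a) < sse(b):
        hi = b
    else:
        lo = a
tau_est = 0.5 * (lo + hi)
print(round(tau_est, 3))  # recovers tau ≈ 2.0
```

With real plant data the residual error after fitting is what validates (or falsifies) the library model.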
Validity of microgravity simulation models on earth
DEFF Research Database (Denmark)
Regnard, J; Heer, M; Drummer, C
2001-01-01
Many studies have used water immersion and head-down bed rest as experimental models to simulate responses to microgravity. However, some data collected during space missions are at variance or in contrast with observations collected from experimental models. These discrepancies could reflect...... incomplete knowledge of the characteristics inherent to each model. During water immersion, the hydrostatic pressure lowers the peripheral vascular capacity and causes increased thoracic blood volume and high vascular perfusion. In turn, these changes lead to high urinary flow, low vasomotor tone, and a high...
Sperber, K. R.; Palmer, T. N.
1996-11-01
The interannual variability of rainfall over the Indian subcontinent, the African Sahel, and the Nordeste region of Brazil has been evaluated in 32 models for the period 1979-88 as part of the Atmospheric Model Intercomparison Project (AMIP). The interannual variations of Nordeste rainfall are the most readily captured, owing to the intimate link with Pacific and Atlantic sea surface temperatures. The precipitation variations over India and the Sahel are less well simulated. Additionally, an Indian monsoon wind shear index was calculated for each model. Evaluation of the interannual variability of a wind shear index over the summer monsoon region indicates that the models exhibit greater fidelity in capturing the large-scale dynamic fluctuations than the regional-scale rainfall variations. A rainfall/SST teleconnection quality control was used to objectively stratify model performance. Skill scores improved for those models that qualitatively simulated the observed rainfall/El Niño-Southern Oscillation SST correlation pattern. This subset of models also had a rainfall climatology that was in better agreement with observations, indicating a link between systematic model error and the ability to simulate interannual variations. A suite of six European Centre for Medium-Range Weather Forecasts (ECMWF) AMIP runs (differing only in their initial conditions) has also been examined. As observed, all-India rainfall was enhanced in 1988 relative to 1987 in each of these realizations. All-India rainfall variability during other years showed little or no predictability, possibly due to internal chaotic dynamics associated with intraseasonal monsoon fluctuations and/or unpredictable land surface process interactions. The interannual variations of Nordeste rainfall were best represented. The State University of New York at Albany/National Center for Atmospheric Research Genesis model was run in five initial condition realizations. In this model, the Nordeste rainfall
Co-simulation of dynamic systems in parallel and serial model configurations
International Nuclear Information System (INIS)
Sweafford, Trevor; Yoon, Hwan Sik
2013-01-01
Recent advancements in simulation software and computation hardware make it feasible to simulate complex dynamic systems comprised of multiple submodels developed in different modeling languages. This so-called co-simulation enables one to study various aspects of a complex dynamic system with heterogeneous submodels in a cost-effective manner. Among the different model configurations for co-simulation, the synchronized parallel configuration is regarded as expediting the simulation process by simulating multiple submodels concurrently on a multi-core processor. In this paper, computational accuracy as well as computation time are studied for three different co-simulation frameworks: integrated, serial, and parallel. For this purpose, analytical evaluations of the three different methods are made using the explicit Euler method, and they are then applied to two-DOF mass-spring systems. The results show that while the parallel simulation configuration produces the same accurate results as the integrated configuration, results of the serial configuration show a slight deviation. It is also shown that computation time can be reduced by running the simulation in the parallel configuration. Therefore, it can be concluded that the synchronized parallel simulation methodology is best for both simulation accuracy and time efficiency.
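The three configurations compared in the abstract can be sketched for a two-DOF mass-spring system stepped with the explicit Euler method. In this minimal reconstruction (parameter values and function names are assumptions, not the paper's), the synchronized parallel step exchanges coupling data only at step boundaries and so reproduces the integrated result exactly, while the serial step feeds subsystem 2 the already-updated state of subsystem 1 and therefore deviates slightly:

```python
import numpy as np

def integrated_step(x, dt, k1=10.0, k2=5.0, m1=1.0, m2=1.0):
    # monolithic explicit Euler step for the state [p1, v1, p2, v2]
    p1, v1, p2, v2 = x
    a1 = (-k1 * p1 + k2 * (p2 - p1)) / m1
    a2 = -k2 * (p2 - p1) / m2
    return np.array([p1 + dt * v1, v1 + dt * a1, p2 + dt * v2, v2 + dt * a2])

def parallel_step(x, dt, **kw):
    # synchronized parallel co-simulation: both subsystems advance
    # concurrently from the same old state, so the data flow (and the
    # result) is identical to the integrated step
    return integrated_step(x, dt, **kw)

def serial_step(x, dt, k1=10.0, k2=5.0, m1=1.0, m2=1.0):
    # serial co-simulation: subsystem 1 steps first; subsystem 2 then
    # sees the *updated* p1, which introduces a small coupling error
    p1, v1, p2, v2 = x
    a1 = (-k1 * p1 + k2 * (p2 - p1)) / m1
    p1n, v1n = p1 + dt * v1, v1 + dt * a1
    a2 = -k2 * (p2 - p1n) / m2
    return np.array([p1n, v1n, p2 + dt * v2, v2 + dt * a2])

def run(step, n=1000, dt=1e-3):
    x = np.array([1.0, 0.0, 0.0, 0.0])   # mass 1 displaced, system at rest
    for _ in range(n):
        x = step(x, dt)
    return x

x_int, x_par, x_ser = run(integrated_step), run(parallel_step), run(serial_step)
deviation = float(np.max(np.abs(x_int - x_ser)))
```

The nonzero but small `deviation` mirrors the paper's finding that the serial configuration drifts slightly from the integrated reference while the synchronized parallel one does not.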
Mathematical models and numerical simulation in electromagnetism
Bermúdez, Alfredo; Salgado, Pilar
2014-01-01
The book represents a basic support for a master course in electromagnetism oriented to numerical simulation. The main goal of the book is for the reader to know the boundary-value problems of partial differential equations that should be solved in order to perform computer simulation of electromagnetic processes. Moreover, it includes a part devoted to electric circuit theory based on ordinary differential equations. The book is mainly oriented to electric engineering applications, going from the general to the specific, namely, from the full Maxwell’s equations to the particular cases of electrostatics, direct current, magnetostatics and eddy currents models. Apart from standard exercises related to analytical calculus, the book includes some others oriented to real-life applications solved with the MaxFEM free simulation software.
Modeling and simulation of economic processes
Directory of Open Access Journals (Sweden)
Bogdan Brumar
2010-12-01
Full Text Available In general, any activity extends over time and is characterized by a degree of uncertainty or insecurity regarding the size of the objective pursued. Because of the complexity of real economic systems and the stochastic dependencies between the variables and parameters considered, not all systems can be adequately represented by a model that can be solved by analytical methods and that covers all issues relevant to economic management decision analysis over a realistic horizon. Often in such cases, the simulation technique is considered the only available alternative. Using simulation techniques to study real-world systems often requires laborious work. Carrying out a simulation experiment is a process that takes place in several stages.
Simulation as a surgical teaching model.
Ruiz-Gómez, José Luis; Martín-Parra, José Ignacio; González-Noriega, Mónica; Redondo-Figuero, Carlos Godofredo; Manuel-Palazuelos, José Carlos
2018-01-01
Teaching of surgery has been affected by many factors in recent years, such as the reduction of working hours, the optimization of operating room use, or patient safety. Traditional teaching methodology fails to reduce the impact of these factors on surgeons' training. Simulation as a teaching model minimizes such impact and is more effective than traditional teaching methods for integrating knowledge and clinical-surgical skills. Simulation complements clinical assistance with training, creating a safe learning environment where patient safety is not affected and ethical or legal conflicts are avoided. Simulation uses learning methodologies that allow teaching individualization, adapting it to the learning needs of each student. It also allows training of all kinds of technical, cognitive or behavioural skills. Copyright © 2017 AEC. Published by Elsevier España, S.L.U. All rights reserved.
G. Rakness.
2013-01-01
After three years of running, in February 2013 the era of sub-10-TeV LHC collisions drew to an end. Recall that the 2012 run had been extended by about three months to achieve the full complement of high-energy and heavy-ion physics goals prior to the start of Long Shutdown 1 (LS1), which is now underway. The LHC performance during these exciting years was excellent, delivering a total of 23.3 fb–1 of proton-proton collisions at a centre-of-mass energy of 8 TeV, 6.2 fb–1 at 7 TeV, and 5.5 pb–1 at 2.76 TeV. They also delivered 170 μb–1 of lead-lead collisions at 2.76 TeV/nucleon and 32 nb–1 of proton-lead collisions at 5 TeV/nucleon. During these years the CMS operations teams and shift crews made tremendous strides to commission the detector, repeatedly stepping up to meet the challenges at every increase of instantaneous luminosity and energy. Although it does not fully cover the achievements of the teams, one way to quantify their success is the fact that...
Jung, Yihwan; Jung, Moonki; Ryu, Jiseon; Yoon, Sukhoon; Park, Sang-Kyoon; Koo, Seungbum
2016-03-01
Human dynamic models have been used to estimate joint kinetics during various activities. Kinetics estimation is in demand in sports and clinical applications where data on external forces, such as the ground reaction force (GRF), are not available. The purpose of this study was to estimate the GRF during gait by utilizing distance- and velocity-dependent force models between the foot and ground in an inverse-dynamics-based optimization. Ten males were tested as they walked at four different speeds on a force plate-embedded treadmill system. The full-GRF model whose foot-ground reaction elements were dynamically adjusted according to vertical displacement and anterior-posterior speed between the foot and ground was implemented in a full-body skeletal model. The model estimated the vertical and shear forces of the GRF from body kinematics. The shear-GRF model with dynamically adjustable shear reaction elements according to the input vertical force was also implemented in the foot of a full-body skeletal model. Shear forces of the GRF were estimated from body kinematics, vertical GRF, and center of pressure. The estimated full GRF had the lowest root mean square (RMS) errors at the slow walking speed (1.0m/s) with 4.2, 1.3, and 5.7% BW for anterior-posterior, medial-lateral, and vertical forces, respectively. The estimated shear forces were not significantly different between the full-GRF and shear-GRF models, but the RMS errors of the estimated knee joint kinetics were significantly lower for the shear-GRF model. Providing COP and vertical GRF with sensors, such as an insole-type pressure mat, can help estimate shear forces of the GRF and increase accuracy for estimation of joint kinetics. Copyright © 2016 Elsevier B.V. All rights reserved.
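The distance- and velocity-dependent foot-ground elements described in the abstract can be sketched as a nonlinear spring-damper for the vertical force and a saturating, velocity-dependent friction element for the shear force. All gains, exponents, and the smoothing constant below are hypothetical illustrations, not the paper's identified values:

```python
import math

def vertical_grf(penetration, v_vertical, k=2.0e5, c=1.0e3, n=1.5):
    """Vertical foot-ground force: nonlinear spring in penetration depth
    with velocity-dependent damping; zero when the element is airborne."""
    if penetration <= 0.0:
        return 0.0
    f = k * penetration ** n - c * penetration * v_vertical
    return max(f, 0.0)

def shear_grf(f_vertical, v_ap, mu=0.8, v_ref=0.1):
    """Anterior-posterior shear force: regularized Coulomb friction that
    saturates at mu * Fz and opposes the foot's sliding velocity."""
    return -mu * f_vertical * math.tanh(v_ap / v_ref)
```

Embedded under each foot of a skeletal model, elements of this kind let an inverse-dynamics optimization produce GRF estimates from body kinematics alone, which is the role they play in the study.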
Modeling and simulation of photovoltaic solar panel
International Nuclear Information System (INIS)
Belarbi, M.; Haddouche, K.; Midoun, A.
2006-01-01
In this article, we present a new approach for estimating the model parameters of a photovoltaic solar panel according to irradiance and temperature. The parameters of the one-diode model are obtained from the knowledge of three operating points: short-circuit, open-circuit, and maximum power. In the first step, the system of equations defined by the three operating points is solved to express all model parameters as functions of the series resistance. Second, an iterative solution at the optimal operating point using the Newton-Raphson method yields the series resistance value as well as the model parameters. Once the panel model is identified, we consider other equations to take the irradiance and temperature effects into account. The simulation results show the convergence speed of the model parameters and the possibility of visualizing the electrical behaviour of the panel according to irradiance and temperature. Note that a sensitivity of the algorithm at the optimal operating point was observed, owing to the fact that a small variation of the optimal voltage value leads to a very large variation of the identified parameter values. With the identified model, we can develop maximum power point tracking algorithms and run simulations of a solar water pumping system. (Author)
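The Newton-Raphson iteration at the heart of such identification schemes can be illustrated by solving the implicit one-diode equation for the panel current (shunt resistance neglected). All parameter values below are hypothetical, not taken from the article:

```python
import math

def panel_current(V, Iph, I0, Rs, a, tol=1e-9, max_iter=50):
    """Solve I = Iph - I0 * (exp((V + I*Rs) / a) - 1) for I by
    Newton-Raphson, where a is the modified ideality factor of the
    panel (n * Ns * Vt).  f(I) is monotone decreasing, so the
    iteration converges from the photocurrent starting guess."""
    I = Iph
    for _ in range(max_iter):
        e = math.exp((V + I * Rs) / a)
        f = Iph - I0 * (e - 1.0) - I
        df = -I0 * e * Rs / a - 1.0        # always negative
        step = f / df
        I -= step
        if abs(step) < tol:
            break
    return I

# hypothetical 36-cell panel: current at short circuit and near open circuit
Isc = panel_current(0.0, 8.2, 1e-8, 0.3, 1.2)
I_late = panel_current(24.0, 8.2, 1e-8, 0.3, 1.2)
```

The same Newton-Raphson machinery, applied at the maximum-power point instead of at a fixed voltage, is what the article uses to pin down the series resistance.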
Deep Drawing Simulations With Different Polycrystalline Models
Duchêne, Laurent; de Montleau, Pierre; Bouvier, Salima; Habraken, Anne Marie
2004-06-01
The goal of this research is to study anisotropic material behavior during forming processes, represented by both complex yield loci and kinematic-isotropic hardening models. The first part of this paper describes the main concepts of the 'Stress-strain interpolation' model that has been implemented in the non-linear finite element code Lagamine. This model consists of a local description of the yield locus based on the texture of the material through the full-constraints Taylor model. The texture evolution due to plastic deformation is computed throughout the FEM simulations. This 'local yield locus' approach was initially linked to the classical isotropic Swift hardening law. Recently, a more complex hardening model was implemented: the physically based microstructural model of Teodosiu. It takes into account intergranular heterogeneity due to the evolution of dislocation structures, which affects both isotropic and kinematic hardening. The influence of the hardening model is compared to the influence of texture evolution by means of deep drawing simulations.
Facebook's personal page modelling and simulation
Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.
2015-02-01
In this paper we attempt to define the utility of Facebook's Personal Page marketing method. This tool that Facebook provides is modelled and simulated using iThink in the context of a Facebook marketing agency. The paper leverages the system dynamics paradigm to model Facebook marketing tools and methods, using the iThink™ system to implement them. It uses the design science research methodology for the proof of concept of the models and modelling processes. The model has been developed for a social media marketing agent/company, oriented to the Facebook platform and tested in real circumstances, and is finalized through a number of revisions and iterations of the design, development, simulation, testing and evaluation processes. The validity and usefulness of this Facebook marketing model for day-to-day decision making are authenticated by the management of the company organization. Facebook's Personal Page method can be adjusted, depending on the situation, in order to maximize the total profit of the company, which means bringing in new customers, keeping the interest of old customers and delivering traffic to its website.
A simulation model for material accounting systems
International Nuclear Information System (INIS)
Coulter, C.A.; Thomas, K.E.
1987-01-01
A general-purpose model that was developed to simulate the operation of a chemical processing facility for nuclear materials has been extended to describe material measurement and accounting procedures as well. The model now provides descriptors for material balance areas, a large class of measurement instrument types and their associated measurement errors for various classes of materials, the measurement instruments themselves with their individual calibration schedules, and material balance closures. Delayed receipt of measurement results (as for off-line analytical chemistry assay), with interim use of a provisional measurement value, can be accurately represented. The simulation model can be used to estimate inventory difference variances for processing areas that do not operate at steady state, to evaluate the timeliness of measurement information, to determine process impacts of measurement requirements, and to evaluate the effectiveness of diversion-detection algorithms. Such information is usually difficult to obtain by other means. Use of the measurement simulation model is illustrated by applying it to estimate inventory difference variances for two material balance area structures of a fictitious nuclear material processing line.
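The kind of inventory-difference variance estimate such a model produces can be sketched with a Monte Carlo over a single, steady-state material balance. The error structure (independent multiplicative Gaussian measurement errors) and the numbers below are illustrative assumptions, not the model's actual descriptors:

```python
import random

def id_statistics(true_input, true_output, true_inventory,
                  rel_error, n_trials=20000, seed=1):
    """Monte Carlo estimate of the mean and variance of the inventory
    difference ID = input - output - (ending - beginning inventory)
    for one steady-state balance period, where every quantity is
    observed through an independent multiplicative measurement error."""
    rng = random.Random(seed)
    def measure(x):
        return x * (1.0 + rng.gauss(0.0, rel_error))
    ids = []
    for _ in range(n_trials):
        ids.append(measure(true_input) - measure(true_output)
                   - (measure(true_inventory) - measure(true_inventory)))
    mean = sum(ids) / n_trials
    var = sum((d - mean) ** 2 for d in ids) / (n_trials - 1)
    return mean, var

# hypothetical balance: 100 kg in, 100 kg out, 40 kg held, 1% relative errors
id_mean, id_var = id_statistics(100.0, 100.0, 40.0, 0.01)
```

The sample variance matches the propagation-of-errors value, here about 2.3 kg², which is the threshold-setting quantity a safeguards analyst would take from such a run.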
Theory, modeling and simulation: Annual report 1993
Energy Technology Data Exchange (ETDEWEB)
Dunning, T.H. Jr.; Garrett, B.C.
1994-07-01
Developing the knowledge base needed to address the environmental restoration issues of the US Department of Energy requires a fundamental understanding of molecules and their interactions in isolation and in liquids, on surfaces, and at interfaces. To meet these needs, the PNL has established the Environmental and Molecular Sciences Laboratory (EMSL) and will soon begin construction of a new, collaborative research facility devoted to advancing the understanding of environmental molecular science. Research in the Theory, Modeling, and Simulation program (TMS), which is one of seven research directorates in the EMSL, will play a critical role in understanding molecular processes important in restoring DOE's research, development and production sites, including understanding the migration and reactions of contaminants in soils and groundwater, the development of separation processes for the isolation of pollutants, the development of improved materials for waste storage, understanding the enzymatic reactions involved in the biodegradation of contaminants, and understanding the interaction of hazardous chemicals with living organisms. The research objectives of the TMS program are to apply available techniques to study fundamental molecular processes involved in natural and contaminated systems; to extend current techniques to treat molecular systems of future importance and to develop techniques for addressing problems that are computationally intractable at present; to apply molecular modeling techniques to simulate molecular processes occurring in the multispecies, multiphase systems characteristic of natural and polluted environments; and to extend current molecular modeling techniques to treat complex molecular systems and to improve the reliability and accuracy of such simulations. The program contains three research activities: Molecular Theory/Modeling, Solid State Theory, and Biomolecular Modeling/Simulation. Extended abstracts are presented for 89 studies.
Theory, modeling and simulation: Annual report 1993
International Nuclear Information System (INIS)
Dunning, T.H. Jr.; Garrett, B.C.
1994-07-01
Developing the knowledge base needed to address the environmental restoration issues of the US Department of Energy requires a fundamental understanding of molecules and their interactions in isolation and in liquids, on surfaces, and at interfaces. To meet these needs, the PNL has established the Environmental and Molecular Sciences Laboratory (EMSL) and will soon begin construction of a new, collaborative research facility devoted to advancing the understanding of environmental molecular science. Research in the Theory, Modeling, and Simulation program (TMS), which is one of seven research directorates in the EMSL, will play a critical role in understanding molecular processes important in restoring DOE's research, development and production sites, including understanding the migration and reactions of contaminants in soils and groundwater, the development of separation processes for the isolation of pollutants, the development of improved materials for waste storage, understanding the enzymatic reactions involved in the biodegradation of contaminants, and understanding the interaction of hazardous chemicals with living organisms. The research objectives of the TMS program are to apply available techniques to study fundamental molecular processes involved in natural and contaminated systems; to extend current techniques to treat molecular systems of future importance and to develop techniques for addressing problems that are computationally intractable at present; to apply molecular modeling techniques to simulate molecular processes occurring in the multispecies, multiphase systems characteristic of natural and polluted environments; and to extend current molecular modeling techniques to treat complex molecular systems and to improve the reliability and accuracy of such simulations. The program contains three research activities: Molecular Theory/Modeling, Solid State Theory, and Biomolecular Modeling/Simulation. Extended abstracts are presented for 89 studies.
Alsaadi, Ahmad S.; Francis, Lijo; Maab, Husnul; Amy, Gary L.; Ghaffour, NorEddine
2015-01-01
The importance of removing non-condensable gases from air gap membrane distillation (AGMD) modules in improving the water vapor flux is presented in this paper. Additionally, a previously developed AGMD mathematical model is used to predict the degree of flux enhancement under sub-atmospheric pressure conditions. Since the mathematical model prediction is expected to be very sensitive to the membrane distillation (MD) membrane resistance when the mass diffusion resistance is eliminated, the permeability of the membrane was carefully measured with two different methods (a gas permeance test and a vacuum MD permeability test). The mathematical model prediction was found to agree closely with the experimental data, which showed that the removal of non-condensable gases increased the flux by more than three-fold when the gap pressure was maintained at the saturation pressure of the feed temperature. The importance of staging the sub-atmospheric AGMD process and how this could give better control over the gap pressure as the feed temperature decreases are also highlighted in this paper. The effect of staging on the sub-atmospheric AGMD flux and its relation to membrane capital cost are briefly discussed.
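The reported three-fold flux gain follows naturally from a series-resistance picture of vapor transport: the flux is the vapor-pressure driving force divided by the sum of the membrane resistance and the air-gap diffusion resistance, and degassing removes the latter. The numerical values below are illustrative only, not the paper's measured resistances:

```python
def agmd_flux(dP, R_membrane, R_air):
    """Series-resistance sketch of AGMD vapor transport: flux equals the
    vapor-pressure difference divided by the total transport resistance."""
    return dP / (R_membrane + R_air)

flux_with_air = agmd_flux(1000.0, 50.0, 120.0)
flux_degassed = agmd_flux(1000.0, 50.0, 0.0)   # non-condensables removed
enhancement = flux_degassed / flux_with_air
```

With an air-gap resistance a few times the membrane resistance, eliminating it yields the more-than-three-fold enhancement the experiments show, and it also explains why the prediction becomes so sensitive to the membrane resistance once the diffusion resistance is gone.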
Alsaadi, Ahmad S.
2015-04-16
The importance of removing non-condensable gases from air gap membrane distillation (AGMD) modules in improving the water vapor flux is presented in this paper. Additionally, a previously developed AGMD mathematical model is used to predict the degree of flux enhancement under sub-atmospheric pressure conditions. Since the mathematical model prediction is expected to be very sensitive to the membrane distillation (MD) membrane resistance when the mass diffusion resistance is eliminated, the permeability of the membrane was carefully measured with two different methods (a gas permeance test and a vacuum MD permeability test). The mathematical model prediction was found to agree closely with the experimental data, which showed that the removal of non-condensable gases increased the flux by more than three-fold when the gap pressure was maintained at the saturation pressure of the feed temperature. The importance of staging the sub-atmospheric AGMD process and how this could give better control over the gap pressure as the feed temperature decreases are also highlighted in this paper. The effect of staging on the sub-atmospheric AGMD flux and its relation to membrane capital cost are briefly discussed.
Comparing the performance of SIMD computers by running large air pollution models
DEFF Research Database (Denmark)
Brown, J.; Hansen, Per Christian; Wasniewski, J.
1996-01-01
To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on these computers. Using a realistic large-scale model, we gained detailed insight about the performance of the computers involved when used to solve large-scale scientific...... problems that involve several types of numerical computations. The computers used in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216...
NUMERICAL MODEL APPLICATION IN ROWING SIMULATOR DESIGN
Directory of Open Access Journals (Sweden)
Petr Chmátal
2016-04-01
Full Text Available The aim of the research was to carry out a hydraulic design of a rowing/sculling and paddling simulator. Nowadays there are two main approaches in simulator design. The first one includes static water with no artificial movement and counts on specially cut oars to provide the same resistance in the water. The second approach, on the other hand, uses pumps or similar devices to force the water to circulate, but both designs share many problems. Such problems affect already-built facilities and can be summarized as an unrealistic feel, unwanted turbulent flow and a poor velocity profile. Therefore, the goal was to design a new rowing simulator that would provide nature-like conditions for the racers and an unmatched experience. In order to accomplish this challenge, it was decided to use in-depth numerical modeling to solve the hydraulic problems. The general measures for the design were taken in accordance with the space availability of the simulator's housing. The entire research was coordinated with other stages of the construction using BIM. The detailed geometry was designed using a numerical model in Ansys Fluent and parametric auto-optimization tools, which led to minimal negative hydraulic phenomena and decreased investment and operational costs due to the decreased hydraulic losses in the system.
eShopper modeling and simulation
Petrushin, Valery A.
2001-03-01
The advent of e-commerce gives an opportunity to shift the paradigm of customer communication into a highly interactive mode. The new generation of commercial Web servers, such as the Blue Martini's server, combines the collection of data on customer behavior with real-time processing and dynamic tailoring of a feedback page. New opportunities for direct product marketing and cross-selling are arriving. The key problem is what kind of information we need to achieve these goals, or in other words, how we model the customer. The paper is devoted to customer modeling and simulation. The focus is on modeling an individual customer. The model is based on the customer's transaction data, click stream data, and demographics. The model includes the hierarchical profile of a customer's preferences for different types of products and brands; consumption models for the different types of products; the current focus, trends, and stochastic models for time intervals between purchases; product affinity models; and some generalized features, such as purchasing power, sensitivity to advertising, price sensitivity, etc. This type of model is used for predicting the date of the next visit, overall spending, and spending for different types of products and brands. For some types of stores (for example, a supermarket) and stable customers, it is possible to forecast shopping lists rather accurately. The forecasting techniques are discussed. The forecasting results can be used for on-line direct marketing, customer retention, and inventory management. The customer model can also be used as a generative model for simulating the customer's purchasing behavior in different situations and for estimating the customer's features.
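A minimal version of the next-visit prediction can be sketched by modelling inter-purchase intervals with their empirical mean. The real model uses richer stochastic interval models; the function and the data below are assumptions for illustration only:

```python
import statistics

def predict_next_visit(purchase_days):
    """Predict the day of the next visit as the last purchase day plus
    the mean of the observed inter-purchase intervals (a crude stand-in
    for the paper's stochastic models of time between purchases)."""
    gaps = [b - a for a, b in zip(purchase_days, purchase_days[1:])]
    return purchase_days[-1] + statistics.mean(gaps)
```

For a hypothetical weekly shopper seen on days 0, 7, 14, and 21, the predicted next visit is day 28; a production system would replace the mean with a fitted interval distribution and update it per customer.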
Surrogate model approach for improving the performance of reactive transport simulations
Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris
2016-04-01
Reactive transport models serve a large number of important geoscientific applications involving underground resources in industry and scientific research. A reactive transport simulation commonly consists of at least two coupled simulation models. The first is a hydrodynamics simulator responsible for simulating the flow of groundwater and the transport of solutes. Hydrodynamics simulators are well-established technology and can be very efficient; when hydrodynamics simulations are performed without coupled geochemistry, their spatial geometries can span millions of elements even when running on desktop workstations. The second is a geochemical simulation model coupled to the hydrodynamics simulator. Geochemical simulation models are much more computationally costly, which makes reactive transport simulations spanning millions of spatial elements very difficult to achieve. To address this problem we propose to replace the coupled geochemical simulation model with a surrogate model. A surrogate is a statistical model created to include only the necessary subset of simulator complexity for a particular scenario. To demonstrate the viability of such an approach we tested it on a popular reactive transport benchmark problem involving 1D calcite transport. This is a published benchmark problem (Kolditz, 2012) for simulation models, and for this reason we use it to test the surrogate model approach. To do this we tried a number of statistical models available through the caret and DiceEval packages for R as surrogate models. These were trained on a randomly sampled subset of the input-output data from the geochemical simulation model used in the original reactive transport simulation. For validation we use the surrogate model to predict the simulator output using the part of the sampled input data that was not used for training the statistical model. For this scenario we find that the multivariate adaptive regression splines
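The surrogate workflow, training a cheap statistical model on sampled input-output pairs of the expensive geochemical simulator and validating on held-out samples, can be sketched as follows. The stand-in "simulator" and the polynomial surrogate are illustrative assumptions; the study itself used model families from R's caret and DiceEval packages:

```python
import numpy as np

def expensive_geochem(c):
    """Stand-in for the costly geochemical solver: a smooth nonlinear
    response of output concentration to input concentration
    (purely illustrative, not the benchmark's chemistry)."""
    return np.exp(-2.0 * c) * np.sin(3.0 * c) + c

rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, 200)
train, test = samples[:100], samples[100:]

# fit a cheap surrogate (a degree-5 polynomial) on the training half ...
coef = np.polyfit(train, expensive_geochem(train), 5)

# ... and validate it on the held-out half, as in the study's workflow
pred = np.polyval(coef, test)
rmse = float(np.sqrt(np.mean((pred - expensive_geochem(test)) ** 2)))
```

Once the held-out RMSE is acceptable, every geochemistry call inside the coupled transport loop is replaced by the surrogate evaluation, which is orders of magnitude cheaper per spatial element.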
Weather regimes in past climate atmospheric general circulation model simulations
Energy Technology Data Exchange (ETDEWEB)
Kageyama, M.; Ramstein, G. [CEA Saclay, Gif-sur-Yvette (France). Lab. des Sci. du Climat et de l' Environnement; D' Andrea, F.; Vautard, R. [Laboratoire de Meteorologie Dynamique, Ecole Normale Superieure, Paris (France); Valdes, P.J. [Department of Meteorology, University of Reading (United Kingdom)
1999-10-01
We investigate the climates of the present-day, inception of the last glaciation (115000 y ago) and last glacial maximum (21000 y ago) in the extratropical north Atlantic and Europe, as simulated by the Laboratoire de Meteorologie Dynamique atmospheric general circulation model. We use these simulations to investigate the low-frequency variability of the model in different climates. The aim is to evaluate whether changes in the intraseasonal variability, which we characterize using weather regimes, can help describe the impact of different boundary conditions on climate and give a better understanding of climate change processes. Weather regimes are defined as the most recurrent patterns in the 500 hPa geopotential height, using a clustering algorithm method. The regimes found in the climate simulations of the present day and the inception of the last glaciation are similar in their number and their structure. It is the regimes' populations which are found to be different for these climates, with an increase of the model's blocked regime and a decrease in the zonal regime at the inception of the last glaciation. This description reinforces the conclusions from a study of the differences between the climatological averages of the different runs and confirms the northeastward shift of the tail of the Atlantic storm-track, which would favour more precipitation over the site of growth of the Fennoscandian ice-sheet. On the other hand, the last glacial maximum results over this sector are not found to be classifiable, showing that the change in boundary conditions can be responsible for severe changes in the weather regimes and low-frequency dynamics. The LGM Atlantic low-frequency variability appears to be dominated by a large-scale retrogressing wave with a period of 40 to 50 days. (orig.)
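The regime definition used above, the most recurrent patterns in 500 hPa geopotential height found by a clustering algorithm, can be sketched with a minimal k-means on synthetic two-regime data. The four-point "fields", the zonal/blocked prototypes, and the 60/40 populations are fabricated for illustration only:

```python
import numpy as np

def two_regimes(X, n_iter=20):
    """Minimal k-means (k=2) for finding the two most recurrent
    patterns; initial centers are the first field and the field
    farthest from it, which makes this sketch deterministic."""
    centers = np.array([X[0], X[((X - X[0]) ** 2).sum(1).argmax()]])
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)                        # nearest-center assignment
        centers = np.array([X[labels == j].mean(0) for j in range(2)])
    return labels, centers

# synthetic 500 hPa anomaly "fields" built from two fabricated prototypes
rng = np.random.default_rng(1)
zonal = np.array([1.0, 0.0, -1.0, 0.0])
blocked = np.array([-1.0, 1.0, 1.0, -1.0])
X = np.vstack([p + 0.1 * rng.standard_normal(4)
               for p in [zonal] * 60 + [blocked] * 40])
labels, centers = two_regimes(X)
populations = np.bincount(labels, minlength=2)
```

Comparing `populations` between two such datasets is the analogue of the paper's finding that the regime *structures* stay similar between climates while their *populations* shift.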
Throop, David R.
1992-01-01
The paper examines the requirements for the reuse of computational models employed in model-based reasoning (MBR) to support automated inference about mechanisms. Areas in which the theory of MBR is not yet completely adequate for using the information that simulations can yield are identified, and recent work in these areas is reviewed. It is argued that using MBR along with simulations forces the use of specific fault models. Fault models are used so that a particular fault can be instantiated into the model and run. This in turn implies that the component specification language needs to be capable of encoding any fault that might need to be sensed or diagnosed. It also means that the simulation code must anticipate all these faults at the component level.
Boggs, Katelyn N; Kakalec, Peter A; Smith, Meghann L; Howell, Stefanie N; Flinn, Jane M
2017-12-01
Circadian rhythms are altered in several diseases associated with aging, one of which is Alzheimer's disease (AD). One example of a circadian rhythm is the rest-activity cycle, which can be measured in mice by monitoring their wheel-running. The present study sought to investigate differences in light-phase/dark-phase activity between a mouse model of late-onset AD (APP/E4) and control (C57Bl6J) mice, in both the pre-plaque and post-plaque stages of the disease. To assess activity level, 24-h wheel-running behavior was monitored at six months (pre-plaque) and twelve months (post-plaque) for a period of nine days. The following measures were analyzed: counts (wheel rotations) during the dark phase, counts during the light phase, hour of activity onset, and hour of activity offset. Key findings indicate that activity onset is delayed in APP/E4 mice at six and twelve months, and activity profiles for APP/E4 and C57Bl6J mice differ during the light and dark phases in such a way that APP/E4 mice run less in the early hours of the dark phase and more in the later hours of the dark phase than C57Bl6J mice. These findings imply that the rest-activity cycle is altered in the pre-plaque stages of AD in APP/E4 mice, as they show impairments as early as six months of age. Copyright © 2017 Elsevier Inc. All rights reserved.
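The activity-onset measure can be illustrated with a toy criterion: the first hour of the 24-h record whose wheel-rotation count reaches a threshold. The threshold and the count profiles are hypothetical; the paper's exact onset definition is not given in the abstract:

```python
def activity_onset(hourly_counts, threshold=50):
    """Return the first hour (0-23) whose wheel count reaches the
    threshold, or None if the animal never becomes active."""
    for hour, count in enumerate(hourly_counts):
        if count >= threshold:
            return hour
    return None

# hypothetical profiles: dark phase starts at hour 12; the AD-model
# mouse shows the kind of delayed onset reported in the study
control = [5] * 12 + [300] * 12
app_e4 = [5] * 14 + [280] * 10
```

Comparing the onset hours of the two profiles reproduces, in miniature, the group difference the study reports at both six and twelve months.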
Fate of pesticides in field ditches: the TOXSWA simulation model
Adriaanse, P.I.
1996-01-01
The TOXSWA model describes the fate of pesticides entering field ditches by spray drift, atmospheric deposition, surface run-off, drainage or leaching. It considers four processes: transport, transformation, sorption and volatilization. Analytical and numerical solutions corresponded well. A sample
A collision model in plasma particle simulations
International Nuclear Information System (INIS)
Ma Yanyun; Chang Wenwei; Yin Yan; Yue Zongwu; Cao Lihua; Liu Daqing
2000-01-01
In order to offset the reduction of collisional effects caused by using finite-size particles, β particle clouds are used in particle simulation codes (β is the ratio of the charge or mass of the modeling particles to that of the real ones). The method of impulse approximation (straight-line orbit approximation) is used to analyze the scattering cross section of β-particle-cloud plasmas, yielding the relation between the values of a and β and the scattering cross section (a is the radius of a β particle cloud). Using this relation, the values of a and β can be chosen so that the collisional effects of the modeling system correspond to those of the real one; the values of a and β can also be adjusted to enhance or reduce the collisional effects fictitiously. The results of simulation are in good agreement with the theoretical ones.
Macro Level Simulation Model Of Space Shuttle Processing
2000-01-01
The contents include: 1) Space Shuttle Processing Simulation Model; 2) Knowledge Acquisition; 3) Simulation Input Analysis; 4) Model Applications in Current Shuttle Environment; and 5) Model Applications for Future Reusable Launch Vehicles (RLV's). This paper is presented in viewgraph form.
High-Fidelity Roadway Modeling and Simulation
Wang, Jie; Papelis, Yiannis; Shen, Yuzhong; Unal, Ozhan; Cetin, Mecit
2010-01-01
Roads are an essential feature of our daily lives. With the advances in computing technologies, 2D and 3D road models are employed in many applications, such as computer games and virtual environments. Traditional road models were generated manually by professional artists using modeling software tools such as Maya and 3ds Max. This approach requires highly specialized and sophisticated skills as well as massive manual labor. Automatic road generation based on procedural modeling can create road models using specially designed computer algorithms or procedures, dramatically reducing the tedious manual editing needed for road modeling. But most existing procedural modeling methods for road generation put emphasis on the visual effects of the generated roads, not their geometrical and architectural fidelity. This limitation seriously restricts the applicability of the generated road models. To address this problem, this paper proposes a high-fidelity roadway generation method that takes into account road design principles practiced by civil engineering professionals; as a result, the generated roads can support not only general applications such as games and simulations, in which roads are used as 3D assets, but also demanding civil engineering applications, which require accurate geometrical models of roads. The inputs to the proposed method include road specifications, civil engineering road design rules, terrain information, and the surrounding environment. The proposed method then generates, in real time, 3D roads that have both high visual and geometrical fidelity. This paper discusses in detail the procedures that convert 2D roads specified in shape files into 3D roads, as well as the civil engineering road design principles involved. The proposed method can be used in many applications that have stringent requirements on high-precision 3D models, such as driving simulations and road design prototyping. Preliminary results demonstrate the effectiveness of the proposed method.
Difficulties with True Interoperability in Modeling & Simulation
2011-12-01
Standards in M&S cover multiple layers of technical abstraction. There are middleware specifications, such as the High Level Architecture (HLA) (IEEE Std 1516-2010, IEEE Standard for Modeling and Simulation (M&S) High Level Architecture (HLA) – Framework and Rules), … using different communication protocols being able to allow da…
Agent Based Modelling for Social Simulation
Smit, S.K.; Ubink, E.M.; Vecht, B. van der; Langley, D.J.
2013-01-01
This document is the result of an exploratory project looking into the status of, and opportunities for Agent Based Modelling (ABM) at TNO. The project focussed on ABM applications containing social interactions and human factors, which we termed ABM for social simulation (ABM4SS). During the course of this project two workshops were organized. At these workshops, a wide range of experts, both ABM experts and domain experts, worked on several potential applications of ABM. The results and ins...
Mathematical models for photovoltaic solar panel simulation
Energy Technology Data Exchange (ETDEWEB)
Santos, Jose Airton A. dos; Gnoatto, Estor; Fischborn, Marcos; Kavanagh, Edward [Universidade Tecnologica Federal do Parana (UTFPR), Medianeira, PR (Brazil)], Emails: airton@utfpr.edu.br, gnoatto@utfpr.edu.br, fisch@utfpr.edu.br, kavanagh@utfpr.edu.br
2008-07-01
A photovoltaic generator is subject to variations in solar intensity, ambient temperature and load, which change its operating point; its behavior under such variations should therefore be analyzed in order to optimize its operation. The present work sought to simulate a photovoltaic generator of polycrystalline silicon from the characteristics supplied by the manufacturer, and to compare the results of two mathematical models with values measured in the field, in the city of Cascavel, over a period of one year. (author)
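One common candidate for such a mathematical model (the abstract does not name the two that were compared) is the single-diode equation, which is implicit in the current. A minimal sketch solving it by damped fixed-point iteration, with illustrative parameter values rather than any manufacturer's data:

```python
import math

def pv_current(v, i_ph=8.21, i_0=1e-9, r_s=0.01, r_sh=100.0, n=1.3,
               n_cells=36, t_cell=298.15):
    """Single-diode PV model: solve
    I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh
    for I by damped fixed-point iteration. All parameter values here
    (photocurrent, saturation current, resistances) are illustrative."""
    vt = 1.380649e-23 * t_cell / 1.602176634e-19  # thermal voltage kT/q
    i = i_ph  # initial guess: the photocurrent
    for _ in range(200):
        i_new = (i_ph
                 - i_0 * (math.exp((v + i * r_s) / (n * n_cells * vt)) - 1.0)
                 - (v + i * r_s) / r_sh)
        if abs(i_new - i) < 1e-9:
            break
        i = 0.5 * (i + i_new)  # damped update for stability
    return i

# at short circuit (V = 0) the current is close to the photocurrent
print(round(pv_current(0.0), 2))  # -> 8.21
```

Sweeping `v` from zero upward traces the familiar I–V curve; fitting the five parameters to a manufacturer's datasheet is what distinguishes one model variant from another.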
Energy Technology Data Exchange (ETDEWEB)
Caruso, Gianfranco, E-mail: gianfranco.caruso@uniroma1.it [Sapienza University of Rome – DIAEE, Corso Vittorio Emanuele II, 244, 00186 Roma (Italy); Giannetti, Fabio [Sapienza University of Rome – DIAEE, Corso Vittorio Emanuele II, 244, 00186 Roma (Italy); Porfiri, Maria Teresa [ENEA FUS C.R. Frascati, Via Enrico Fermi, 45, 00044 Frascati, Roma (Italy)
2013-12-15
Highlights: • The CONSEN code for thermal-hydraulic transients in fusion plants is introduced. • A magnet induced confinement bypass accident in ITER has been simulated. • A comparison with previous MELCOR results for the accident is presented. -- Abstract: The CONSEN (CONServation of ENergy) code is a fast running code to simulate thermal-hydraulic transients, specifically developed for fusion reactors. In order to demonstrate CONSEN capabilities, the paper deals with the accident analysis of the magnet induced confinement bypass for the ITER 1996 design. During a plasma pulse, a poloidal field magnet experiences an over-voltage condition or an electrical insulation fault that results in two intense electrical arcs. It is assumed that this event produces two one-square-meter ruptures, resulting in a pathway that connects the interior of the vacuum vessel to the cryostat air space. The rupture also results in a break of a single cooling channel within the wall of the vacuum vessel and a breach of the magnet cooling line, causing the blowdown of a steam/water mixture into the vacuum vessel and the cryostat and the release of 4 K helium into the cryostat. In the meantime, all the magnet coils are discharged through actuation of the magnet protection system. This postulated event creates the simultaneous failure of two radioactive confinement barriers and envelopes all types of smaller LOCAs into the cryostat. Ice formation on the cryogenic walls is also involved. The accident has been simulated with the CONSEN code up to 32 h. The accident evolution and the phenomena involved are discussed in the paper and the results are compared with available results obtained using the MELCOR code.
Modelling interplanetary CMEs using magnetohydrodynamic simulations
Directory of Open Access Journals (Sweden)
P. J. Cargill
The dynamics of Interplanetary Coronal Mass Ejections (ICMEs) are discussed from the viewpoint of numerical modelling. Hydrodynamic models are shown to give a good zero-order picture of the plasma properties of ICMEs, but they cannot model the important magnetic field effects. Results from MHD simulations are shown for a number of cases of interest. It is demonstrated that the strong interaction of the ICME with the solar wind leads to the ICME and solar wind velocities being close to each other at 1 AU, despite their having very different speeds near the Sun. It is also pointed out that this interaction leads to a distortion of the ICME geometry, making cylindrical symmetry a dubious assumption for the CME field at 1 AU. In the presence of a significant solar wind magnetic field, the magnetic fields of the ICME and solar wind can reconnect with each other, leading to an ICME that has solar wind-like field lines. This effect is especially important when an ICME with the right sense of rotation propagates down the heliospheric current sheet. It is also noted that a lack of knowledge of the coronal magnetic field makes such simulations of little use in space weather forecasts that require knowledge of the ICME magnetic field strength.
Key words. Interplanetary physics (interplanetary magnetic fields); Solar physics, astrophysics, and astronomy (flares and mass ejections); Space plasma physics (numerical simulation studies)
Interactive Modelling and Simulation of Human Motion
DEFF Research Database (Denmark)
Engell-Nørregård, Morten Pol
Danish summary (translated): This Ph.D. dissertation deals with modelling and simulation of human motion. The topics of the dissertation have at least two things in common. First, they deal with human motion: although the developed models can also be used for other purposes, the primary focus is on modelling the human body. Second, they all employ simulation as a tool for synthesizing motion and thereby creating animations. This is an important point, since it means that we do not only create tools for animators, which they can use to make amusing … Contributions include: a model of human joints that exhibits both non-convexity and multiple degrees of freedom, and a general and versatile model for the activation of soft bodies, which can be used as an animation tool but is equally well suited for simulating human muscles, since it satisfies the fundamental physical principles …
Renormalization group running of fermion observables in an extended non-supersymmetric SO(10) model
Energy Technology Data Exchange (ETDEWEB)
Meloni, Davide [Dipartimento di Matematica e Fisica, Università di Roma Tre,Via della Vasca Navale 84, 00146 Rome (Italy); Ohlsson, Tommy; Riad, Stella [Department of Physics, School of Engineering Sciences,KTH Royal Institute of Technology - AlbaNova University Center,Roslagstullsbacken 21, 106 91 Stockholm (Sweden)
2017-03-08
We investigate the renormalization group evolution of fermion masses, mixings and quartic scalar Higgs self-couplings in an extended non-supersymmetric SO(10) model, where the Higgs sector contains the 10{sub H}, 120{sub H}, and 126{sub H} representations. The group SO(10) is spontaneously broken at the GUT scale to the Pati-Salam group and subsequently to the Standard Model (SM) at an intermediate scale M{sub I}. We explicitly take into account the effects of the change of gauge groups in the evolution. In particular, we derive the renormalization group equations for the different Yukawa couplings. We find that the computed physical fermion observables can be successfully matched to the experimentally measured values at the electroweak scale. Using the same Yukawa couplings at the GUT scale, the measured values of the fermion observables cannot be reproduced with a SM-like evolution, leading to differences in the numerical values of up to around 80%. Furthermore, a similar evolution can be performed for a minimal SO(10) model, where the Higgs sector consists of the 10{sub H} and 126{sub H} representations only, showing an equally good potential to describe the low-energy fermion observables. Finally, for both the extended and the minimal SO(10) models, we present predictions for the three Dirac and Majorana CP-violating phases as well as three effective neutrino mass parameters.
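At one loop a gauge coupling's running has a closed form, which gives a feel for the kind of evolution involved. The sketch below is a drastic simplification of the paper's full system of Yukawa and quartic RGEs with intermediate-scale matching: it only runs the three SM gauge couplings from approximate MZ-scale inputs, with no intermediate Pati-Salam stage:

```python
import math

# One-loop SM beta coefficients (GUT-normalised hypercharge) and
# approximate coupling strengths alpha_i at the Z mass
B = {"g1": 41.0 / 10.0, "g2": -19.0 / 6.0, "g3": -7.0}
ALPHA_MZ = {"g1": 0.0169, "g2": 0.0338, "g3": 0.1179}
MZ = 91.19  # GeV

def alpha(i, mu):
    """Closed-form one-loop solution:
    1/alpha_i(mu) = 1/alpha_i(MZ) - (b_i / 2*pi) * ln(mu / MZ)."""
    inv = 1.0 / ALPHA_MZ[i] - B[i] / (2.0 * math.pi) * math.log(mu / MZ)
    return 1.0 / inv

# asymptotic freedom: the strong coupling shrinks toward the GUT scale,
# while the hypercharge coupling grows
print(alpha("g3", 2e16) < alpha("g3", MZ), alpha("g1", 2e16) > alpha("g1", MZ))
# -> True True
```

In the paper's setting the coefficients b_i themselves change at M{sub I}, where the gauge group changes, and the Yukawa RGEs couple all the fermion observables together.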
Is running away right? The behavioral activation-behavioral inhibition model of anterior asymmetry.
Wacker, Jan; Chavanon, Mira-Lynn; Leue, Anja; Stemmler, Gerhard
2008-04-01
The measurement of anterior electroencephalograph (EEG) asymmetries has become an important standard paradigm for the investigation of affective states and traits. Findings in this area are typically interpreted within the motivational direction model, which suggests a lateralization of approach and withdrawal motivational systems to the left and right anterior region, respectively. However, efforts to compare this widely adopted model with an alternative account, which relates the left anterior region to behavioral activation independent of the direction of behavior (approach or withdrawal) and the right anterior region to goal-conflict-induced behavioral inhibition, are rare and inconclusive. Therefore, the authors measured the EEG in a sample of 93 young men during emotional imagery designed to provide a critical test between the two models. The results (e.g., a correlation between left anterior activation and withdrawal motivation) favor the alternative model based on the concepts of behavioral activation and behavioral inhibition. In addition, the present study also supports an association of right parietal activation with physiological arousal and the conceptualization of parietal EEG asymmetry as a mediator of emotion-related physiological arousal. Copyright 2008 APA.
MODELING AND SIMULATION OF A HYDROCRACKING UNIT
Directory of Open Access Journals (Sweden)
HASSAN A. FARAG
2016-06-01
Hydrocracking is used in the petroleum industry to convert low quality feedstocks into high-valued transportation fuels such as gasoline, diesel, and jet fuel. The aim of the present work is to develop a rigorous steady-state two-dimensional mathematical model, including conservation equations of mass and energy, for simulating the operation of a hydrocracking unit. Both the catalyst bed and the quench zone have been included in this integrated model. The model equations were solved numerically in both the axial and radial directions using Matlab software. The presented model was tested against real plant data from Egypt. The results indicated very good agreement between the model predictions and industrial values for temperature profiles, concentration profiles, and conversion in both the radial and axial directions of the hydrocracking unit. Simulation of the conversion and temperature profiles in the quench zone was also included and showed low deviation from the actual ones. In the concentration profiles, the percentage deviation was found to be 9.28% for the first reactor and 9.6% for the second reactor. The effect of several parameters has been included in this study: pellet heat transfer coefficient, effective radial thermal conductivity, wall heat transfer coefficient, effective radial diffusivity, and cooling medium (quench zone). Varying the wall heat transfer coefficient and the effective radial diffusivity for the near-wall region gave no remarkable changes in the temperature profiles. On the other hand, even small variations of the effective radial thermal conductivity affected the simulated temperature profiles significantly, and this effect could not be compensated by variations of the other model parameters.
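A drastically simplified, axial-only caricature of such a reactor model can convey the structure of the coupled mass and energy balances; every kinetic and thermal parameter below is invented, not taken from the paper:

```python
import math

def plug_flow_profile(n_steps=1000, length=10.0, k0=2.0e5, ea=6.0e4,
                      t_in=640.0, c_in=1.0, dh=-5.0e4, rho_cp=2.0e3, u=0.5):
    """1D (axial-only) sketch of a catalytic bed: a single first-order
    exothermic reaction in plug flow, integrated by explicit Euler along
    the bed. Returns (conversion, outlet temperature in K)."""
    R = 8.314                                  # J/(mol K)
    dz = length / n_steps
    c, t = c_in, t_in
    for _ in range(n_steps):
        r = k0 * math.exp(-ea / (R * t)) * c   # Arrhenius first-order rate
        c = max(c - dz / u * r, 0.0)           # species balance
        t += dz / u * (-dh) * r / rho_cp       # energy balance (adiabatic)
    return 1.0 - c / c_in, t

conversion, t_out = plug_flow_profile()
print(round(conversion, 2), round(t_out, 1))
```

The paper's model adds the radial dimension, radial heat and mass dispersion, wall heat transfer, and the quench zone, which is why parameters like the effective radial thermal conductivity matter there.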
On Improving 4-km Mesoscale Model Simulations
Deng, Aijun; Stauffer, David R.
2006-03-01
A previous study showed that use of analysis-nudging four-dimensional data assimilation (FDDA) and improved physics in the fifth-generation Pennsylvania State University National Center for Atmospheric Research Mesoscale Model (MM5) produced the best overall performance on a 12-km-domain simulation, based on the 18-19 September 1983 Cross-Appalachian Tracer Experiment (CAPTEX) case. However, reducing the simulated grid length to 4 km had detrimental effects. The primary cause was likely the explicit representation of convection accompanying a cold-frontal system. Because no convective parameterization scheme (CPS) was used, the convective updrafts were forced on coarser-than-realistic scales, and the rainfall and the atmospheric response to the convection were too strong. The evaporative cooling and downdrafts were too vigorous, causing widespread disruption of the low-level winds and spurious advection of the simulated tracer. In this study, a series of experiments was designed to address this general problem involving 4-km model precipitation and gridpoint storms and associated model sensitivities to the use of FDDA, planetary boundary layer (PBL) turbulence physics, grid-explicit microphysics, a CPS, and enhanced horizontal diffusion. Some of the conclusions include the following: 1) Enhanced parameterized vertical mixing in the turbulent kinetic energy (TKE) turbulence scheme has shown marked improvements in the simulated fields. 2) Use of a CPS on the 4-km grid improved the precipitation and low-level wind results. 3) Use of the Hong and Pan Medium-Range Forecast PBL scheme showed larger model errors within the PBL and a clear tendency to predict much deeper PBL heights than the TKE scheme. 4) Combining observation-nudging FDDA with a CPS produced the best overall simulations. 5) Finer horizontal resolution does not always produce better simulations, especially in convectively unstable environments, and a new CPS suitable for 4-km resolution is needed. 6
Reactive transport models and simulation with ALLIANCES
International Nuclear Information System (INIS)
Leterrier, N.; Deville, E.; Bary, B.; Trotignon, L.; Hedde, T.; Cochepin, B.; Stora, E.
2009-01-01
Many chemical processes influence the evolution of nuclear waste storage. As a result, simulations based only upon transport and hydraulic processes fail to describe adequately some industrial scenarios. We need to take into account complex chemical models (mass action laws, kinetics...) which are highly non-linear. In order to simulate the coupling of these chemical reactions with transport, we use a classical Sequential Iterative Approach (SIA), with a fixed point algorithm, within the mainframe of the ALLIANCES platform. This approach allows us to use the various transport and chemical modules available in ALLIANCES, via an operator-splitting method based upon the structure of the chemical system. We present five different applications of reactive transport simulations in the context of nuclear waste storage: 1. A 2D simulation of the lixiviation by rain water of an underground polluted zone high in uranium oxide; 2. The degradation of the steel envelope of a package in contact with clay. Corrosion of the steel creates corrosion products and the altered package becomes a porous medium. We follow the degradation front through kinetic reactions and the coupling with transport; 3. The degradation of a cement-based material by the injection of an aqueous solution of zinc and sulphate ions. In addition to the reactive transport coupling, we take into account in this case the hydraulic retroaction of the porosity variation on the Darcy velocity; 4. The decalcification of a concrete beam in an underground storage structure. In this case, in addition to the reactive transport simulation, we take into account the interaction between chemical degradation and the mechanical forces (cracks...), and the retroactive influence on the structure changes on transport; 5. The degradation of the steel envelope of a package in contact with a clay material under a temperature gradient. In this case the reactive transport simulation is entirely directed by the temperature changes and
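The SIA coupling can be sketched on a toy system. The loop below splits 1D upwind advection (the transport operator) from a local Langmuir-sorption equilibrium (the chemistry operator) and iterates them to a fixed point within each time step; the chemistry is illustrative, not an ALLIANCES module:

```python
import numpy as np

def sia_step(c, s, dt, dx, vel, k_l, s_max, tol=1e-10, max_iter=50):
    """One time step of a Sequential Iterative Approach (SIA): operator-split
    transport and chemistry, iterated to a fixed point. c is the aqueous
    concentration, s the sorbed mass (toy Langmuir chemistry, dimensionless)."""
    s_k = s.copy()
    for _ in range(max_iter):
        # transport operator: explicit upwind advection of the aqueous phase
        c_tr = c.copy()
        c_tr[1:] -= vel * dt / dx * (c[1:] - c[:-1])
        # chemical sink from the current chemistry iterate
        c_tr = np.maximum(c_tr - (s_k - s), 0.0)
        # chemistry operator: local Langmuir equilibrium s = s_max*K*c/(1+K*c)
        s_new = s_max * k_l * c_tr / (1.0 + k_l * c_tr)
        if np.max(np.abs(s_new - s_k)) < tol:
            return c_tr, s_new
        s_k = s_new
    return c_tr, s_new

c0, s0 = np.full(10, 0.5), np.zeros(10)
c1, s1 = sia_step(c0, s0, dt=0.1, dx=1.0, vel=0.5, k_l=2.0, s_max=0.2)
print(bool(np.all(c1 >= 0.0) and np.all(s1 <= 0.2)))  # -> True
```

Real reactive-transport platforms replace the Langmuir law with full mass-action and kinetic systems, and may feed porosity or temperature changes back into the transport operator, as in the applications listed above.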
Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff
2016-01-01
The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter, Hubble-sized telescope donated from elsewhere in the federal government for near-infrared sky surveys and exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity, leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes, reducing the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time steps than desired, which greatly increased the overall run time. The analysis of this fluid model required mitigation to drive the run time down by solving portions of the model at different time scales. Lastly, investigations were made into the impact of the removal of small radiation couplings on run time and accuracy. Use of these techniques allowed the models to produce meaningful results within reasonable run times to meet project schedule deadlines.
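The slew-screening idea can be sketched as a simple cache policy; the 5-degree threshold, the unit-vector representation of pointing, and the function names below are all invented for illustration:

```python
import math

def angle_between(u, v):
    """Angular separation in degrees between two pointing unit vectors."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return math.degrees(math.acos(dot))

def plan_flux_calcs(orientations, threshold_deg=5.0):
    """Schedule radiation (environmental flux) calculations only for slews
    larger than a threshold; smaller slews reuse the last computed fluxes.
    The default threshold and the data model are illustrative assumptions."""
    calc_indices, last = [], None
    for i, q in enumerate(orientations):
        if last is None or angle_between(last, q) > threshold_deg:
            calc_indices.append(i)   # orientation differs enough: recompute
            last = q
    return calc_indices

# four pointings: the 2nd and 4th differ from their predecessors by ~2.6 deg,
# so only the 1st and 3rd trigger a flux calculation
seq = [(1.0, 0.0, 0.0), (0.999, 0.045, 0.0),
       (0.0, 1.0, 0.0), (0.045, 0.999, 0.0)]
print(plan_flux_calcs(seq))  # -> [0, 2]
```

Choosing the threshold trades run time against flux accuracy, which is the balance the paper describes.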
Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System
He, Qing; Li, Hong
Belt conveyors are among the most important devices for transporting bulk solid material over long distances. Dynamic analysis is the key to deciding whether a design is technically sound, safe and reliable in operation, and economically feasible. It is very important to study dynamic properties in order to improve efficiency and productivity and to guarantee safe, reliable and stable running of the conveyor. The dynamic research on, and applications of, large scale belt conveyors are discussed, and the main research topics and the state of the art of dynamic research on belt conveyors are analyzed. The main future work focuses on dynamic analysis, modeling and simulation of the main components and the whole system, and on nonlinear modeling, simulation and vibration analysis of large scale conveyor systems.
Computer Models Simulate Fine Particle Dispersion
2010-01-01
Through a NASA Seed Fund partnership with DEM Solutions Inc., of Lebanon, New Hampshire, scientists at Kennedy Space Center refined existing software to study the electrostatic phenomena of granular and bulk materials as they apply to planetary surfaces. The software, EDEM, allows users to import particles and obtain accurate representations of their shapes for modeling purposes, such as simulating bulk solids behavior, and was enhanced to be able to more accurately model fine, abrasive, cohesive particles. These new EDEM capabilities can be applied in many industries unrelated to space exploration and have been adopted by several prominent U.S. companies, including John Deere, Pfizer, and Procter & Gamble.
Consolidation modelling for thermoplastic composites forming simulation
Xiong, H.; Rusanov, A.; Hamila, N.; Boisse, P.
2016-10-01
Pre-impregnated thermoplastic composites are widely used in the aerospace industry for their excellent mechanical properties. Thermoforming thermoplastic prepregs is a fast manufacturing process, and the automotive industry has shown increasing interest in this process, in which reconsolidation is an essential stage. The intimate-contact model is investigated as the consolidation model; compression experiments were conducted to identify the material parameters, and several numerical tests show the influence of the temperature and pressure applied during processing. Finally, a new solid-shell prismatic element is presented for the simulation of the consolidation step in the thermoplastic composite forming process.
Quantification of uncertainties of modeling and simulation
International Nuclear Information System (INIS)
Ma Zhibo; Yin Jianwei
2012-01-01
The principles of Modeling and Simulation (M&S) are interpreted through a functional relation, from which the total uncertainties of M&S are identified and sorted into three parts, considered to vary along with the conceptual models' parameters. Following the idea of verification and validation, the parameter space is partitioned into verified and applied domains; uncertainties in the verified domain are quantified by comparison between numerical and standard results, and those in the applied domain are quantified by a newly developed extrapolation method. Examples are presented to demonstrate and qualify the ideas, aimed at building a framework for quantifying the uncertainties of M&S. (authors)
Simulation models generator. Applications in scheduling
Directory of Open Access Journals (Sweden)
Omar Danilo Castrillón
2013-08-01
… in order to approach reality when evaluating decisions, so that they can be taken more assertively. To test the prototype, a production system with 9 machines and 5 jobs in a job-shop configuration was used as the modeling example, with stochastic processing times and machine stops, measuring machine utilization rates and the average time of jobs in the system as measures of system performance. This test shows the soundness of the prototype, saving the user the work of building the simulation model.
Modeling and simulation of reactive flows
Bortoli, De AL; Pereira, Felipe
2015-01-01
Modelling and Simulation of Reactive Flows presents information on modeling and how to numerically solve reactive flows. The book offers a distinctive approach that combines diffusion flames and geochemical flow problems, providing users with a comprehensive resource that bridges the gap for scientists, engineers, and the industry. Specifically, the book looks at the basic concepts related to reaction rates, chemical kinetics, and the development of reduced kinetic mechanisms. It considers the most common methods used in practical situations, along with equations for reactive flows, and va
Nonlinear friction model for servo press simulation
Ma, Ninshu; Sugitomo, Nobuhiko; Kyuno, Takunori; Tamura, Shintaro; Naka, Tetsuo
2013-12-01
The friction coefficient was measured under an idealized condition for a pulse servo motion. The measured friction coefficient and its variation with both sliding distance and the pulse motion showed that the friction resistance can be reduced by re-lubrication during the unloading process of the pulse servo motion. Based on the measured friction coefficient and its changes with sliding distance and re-lubrication of the oil, a nonlinear friction model was developed. Using the newly developed nonlinear friction model, a deep-draw simulation was performed and the formability was evaluated. The results were compared with experimental ones and the effectiveness was verified.
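A toy version of such a friction law captures the two reported effects: the coefficient grows toward a dry value as sliding thins the oil film, and unloading during a pulse re-lubricates the interface. The functional form and every number below are invented, not the paper's fitted model:

```python
import math

def mu_of_distance(d, mu_lub=0.05, mu_dry=0.15, d0=5.0):
    """mu rises from a lubricated value toward a dry value as the sliding
    distance d (mm) thins the oil film; illustrative saturating law."""
    return mu_dry - (mu_dry - mu_lub) * math.exp(-d / d0)

def average_mu(total_dist, pulse_len=None):
    """Average friction over total_dist; a pulse servo motion re-lubricates
    the interface (the distance counter resets) every pulse_len of travel."""
    step, d_since_lub, travelled, acc, n = 0.1, 0.0, 0.0, 0.0, 0
    while travelled < total_dist:
        acc += mu_of_distance(d_since_lub)
        n += 1
        travelled += step
        d_since_lub += step
        if pulse_len is not None and d_since_lub >= pulse_len:
            d_since_lub = 0.0   # unloading phase re-lubricates the contact
    return acc / n

# pulsed servo motion should see lower average friction than continuous sliding
print(average_mu(20.0, pulse_len=2.0) < average_mu(20.0))  # -> True
```

In a forming simulation such a law would replace the constant Coulomb coefficient at each contact point, which is how the re-lubrication benefit reaches the predicted formability.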
Directory of Open Access Journals (Sweden)
Juan P. Pérez Monsalve
2014-12-01
This work analyzed the relationship between the two main price indicators in the Colombian economy, the IPP and the IPC. For this purpose, we reviewed the theory behind both indexes and then developed a vector autoregressive (VAR) model, which shows the reaction of each variable to shocks both in itself and in the other variable, whose impact continues propagating in the long term. Additionally, the work presents a simulation of the VAR model through the Monte Carlo method, verifying the coincidence in probability distributions and volatility levels, as well as the existence of correlation over time.
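The VAR-plus-Monte-Carlo exercise can be sketched with a bivariate VAR(1); the coefficient values below are invented for illustration, not the paper's estimates for the IPP and IPC:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative VAR(1) for two price indices; both eigenvalues of A lie
# inside the unit circle, so the process is stationary
A = np.array([[0.5, 0.2],
              [0.3, 0.4]])          # lag matrix: each series reacts to both
c = np.array([0.1, 0.1])            # intercepts
sigma = np.array([[0.04, 0.01],
                  [0.01, 0.02]])    # shock covariance (correlated errors)

def simulate_var(n_steps, n_paths):
    """Monte Carlo simulation of y_t = c + A @ y_{t-1} + e_t."""
    y = np.zeros((n_paths, n_steps, 2))
    shocks = rng.multivariate_normal(np.zeros(2), sigma,
                                     size=(n_paths, n_steps))
    for t in range(1, n_steps):
        y[:, t] = c + y[:, t - 1] @ A.T + shocks[:, t]
    return y

paths = simulate_var(n_steps=100, n_paths=2000)
# stationarity: the cross-path mean approaches the long-run mean (I-A)^-1 c
long_run = np.linalg.solve(np.eye(2) - A, c)
print(bool(np.allclose(paths[:, -1].mean(axis=0), long_run, atol=0.05)))
```

Impulse responses (the "reaction to shocks" in the abstract) come from iterating the same recursion with a single unit shock and zero noise.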
Directory of Open Access Journals (Sweden)
Rahadian Yodha Bhakti
2016-02-01
The purpose of this study was to determine the products of the development of the "Colorful Balls Run" learning model for the motion reaction of mentally disabled children of SLB Negeri Semarang grade V in the academic year of 2015. This is development research (research and development, R&D), which consists of 10 steps: identifying potential and problems, data collection, product design, design validation, design revision, product testing, product revision, usage trial, product testing, and mass production. The average obtained from the physical education teacher experts was 80% (good) and from the learning experts 92% (very good). The results of trial I (small group) were 83.53% (good) on cognitive aspects, 82.10% (good) on affective aspects, and 81.39% (good) on psychomotor aspects, for an average of 82.34% (good). The results of trial II (large group) were 85.14% (good) on cognitive aspects, 83.76% (good) on affective aspects, and 83.07% (good) on psychomotor aspects, for an average of 83.99% (good). It was concluded that the developed colorful balls run game model can be used as an alternative for learning sport, especially small-ball games, for grade V students of SLB Negeri Semarang.
Quark flavour observables in the Littlest Higgs model with T-parity after LHC Run 1.
Blanke, Monika; Buras, Andrzej J; Recksiegel, Stefan
2016-01-01
The Littlest Higgs model with T-parity (LHT) belongs to the simplest new physics scenarios with new sources of flavour and CP violation. The latter originate in the interactions of ordinary quarks and leptons with heavy mirror quarks and leptons that are mediated by new heavy gauge bosons. Also a heavy fermionic top partner is present in this model which communicates with the SM fermions by means of standard [Formula: see text] and [Formula: see text] gauge bosons. We present a new analysis of quark flavour observables in the LHT model in view of the oncoming flavour precision era. We use all available information on the CKM parameters, lattice QCD input and experimental data on quark flavour observables and corresponding theoretical calculations, taking into account new lower bounds on the symmetry breaking scale and the mirror quark masses from the LHC. We investigate by how much the branching ratios for a number of rare K and B decays are still allowed to depart from their SM values. This includes [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text]. Taking into account the constraints from [Formula: see text] processes, significant departures from the SM predictions for [Formula: see text] and [Formula: see text] are possible, while the effects in B decays are much smaller. In particular, the LHT model favours [Formula: see text], which is not supported by the data, and the present anomalies in [Formula: see text] decays cannot be explained in this model. With the recent lattice and large N input the imposition of the [Formula: see text] constraint implies a significant suppression of the branching ratio for [Formula: see text] with respect to its SM value while allowing only for small modifications of [Formula: see text]. Finally, we investigate how the LHT physics could be distinguished from other models by means of indirect measurements and
Modelling long run strategic behaviour on the liberalised European gas market
International Nuclear Information System (INIS)
Mulder, Machiel; Zwart, Gijsbert
2005-01-01
In gas markets, intertemporal constraints are of particular importance due to the finiteness of gas resources. In the UK and the Netherlands in particular, gas resources are expected to dry up in the medium term, giving rise to a positive resource rent on the gas. On shorter time scales, decisions on investments in production, transmission, storage and LNG terminal capacities affect short-term output decisions in following years, while within the year prices across seasons are related through storage decisions. We develop a model of strategic behaviour on the European gas markets that incorporates such intertemporal relations. We take into account interactions between strategic producers of gas, price-taking transmission companies, and traders arbitraging the markets by transporting gas across borders, storing gas across seasons, and importing LNG. As a case study, we use the model to explore the impacts of a cap on production from a large gas field in the Netherlands on producer behaviour and infrastructure investments. (Author)
Quark flavour observables in the Littlest Higgs model with T-parity after LHC Run 1
Blanke, Monika; Recksiegel, Stefan
2016-04-02
The Littlest Higgs Model with T-parity (LHT) belongs to the simplest new physics scenarios with new sources of flavour and CP violation. We present a new analysis of quark observables in the LHT model in view of the oncoming flavour precision era. We use all available information on the CKM parameters, lattice QCD input and experimental data on quark flavour observables and corresponding theoretical calculations, taking into account new lower bounds on the symmetry breaking scale and the mirror quark masses from the LHC. We investigate by how much the branching ratios for a number of rare $K$ and $B$ decays are still allowed to depart from their SM values. This includes $K^+\\to\\pi^+\
TMS modeling toolbox for realistic simulation.
Cho, Young Sun; Suh, Hyun Sang; Lee, Won Hee; Kim, Tae-Seong
2010-01-01
Transcranial magnetic stimulation (TMS) is a technique for brain stimulation using rapidly changing magnetic fields generated by coils. It has been established as an effective stimulation technique for treating patients suffering from damaged brain functions. Although TMS is known to be painless and noninvasive, it can also be harmful to the brain through incorrect focusing or excessive stimulation, which might result in seizures. There is therefore an ongoing research effort to elucidate and better understand the effect and mechanism of TMS. Lately, the boundary element method (BEM) and the finite element method (FEM) have been used to simulate the electromagnetic phenomena of TMS. However, there is a lack of general tools for generating TMS models, owing to the difficulty of realistically modeling the human head and TMS coils. In this study, we have developed a toolbox through which one can generate high-resolution FE TMS models. The toolbox allows the creation of FE head models with isotropic or anisotropic electrical conductivities for five different head tissues, together with 3D coil models. The generated TMS model can be imported into FE software packages such as ANSYS for efficient electromagnetic analysis. We present a set of demonstrative results of realistic simulation of TMS with our toolbox.
Biomedical Simulation Models of Human Auditory Processes
Bicak, Mehmet M. A.
2012-01-01
Detailed acoustic engineering models explore the noise propagation mechanisms associated with the attenuation and transmission paths created when hearing protectors such as earplugs and headsets are used in high-noise environments. Biomedical finite element (FE) models are developed based on volume Computed Tomography scan data, which provide explicit external ear, ear canal, middle-ear ossicular bone and cochlea geometry. Results from these studies have enabled a greater understanding of hearing-protector-to-flesh dynamics as well as the prioritization of noise propagation mechanisms. Prioritization of noise mechanisms can form an essential framework for the exploration of new design principles and methods in both earplug and earcup applications. These models are currently being used in the development of a novel hearing protection evaluation system that can provide experimentally correlated psychoacoustic noise attenuation. Moreover, these FE models can be used to simulate the effects of blast-related impulse noise on human auditory mechanisms and brain tissue.
Ren, Yilong; Wang, Yunpeng; Wu, Xinkai; Yu, Guizhen; Ding, Chuan
2016-10-01
Red light running (RLR) has become a major safety concern at signalized intersections. To prevent RLR-related crashes, it is critical to identify the factors that significantly influence drivers' RLR behavior and to predict potential RLR in real time. In this research, nine months of RLR events, extracted from high-resolution traffic data collected by loop detectors at three signalized intersections, were used to identify the factors that significantly affect RLR behavior. The data analysis indicated that occupancy time, time gap, used yellow time, time left to yellow start, whether the preceding vehicle ran through the intersection during yellow, and whether a vehicle passed through the intersection on the adjacent lane were significant factors for RLR behavior. Furthermore, because RLR events are rare, a modified rare events logistic regression model was developed for RLR prediction. The rare events logistic regression method has been applied in many fields and shows impressive performance, but no previous research has applied it to RLR. The results showed that the rare events logistic regression model performed significantly better than the standard logistic regression model. More importantly, the proposed RLR prediction method is based purely on data from a single advance loop detector located 400 feet upstream of the stop bar. This offers great potential for future field applications, since loops are already widely deployed at many intersections and can collect data in real time. This research is expected to contribute significantly to the improvement of intersection safety. Copyright © 2016 Elsevier Ltd. All rights reserved.
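The prior-correction idea behind rare events logistic regression (in the spirit of King and Zeng's estimator; whether the paper's modification matches this exactly is an assumption) can be sketched as follows, with the coefficients and sampling fractions being invented toy values:

```python
import math

def prior_corrected_intercept(beta0, ybar, tau):
    """King-Zeng style prior correction: adjust a logit intercept fitted on
    an event-oversampled sample.  ybar is the event fraction in the
    estimation sample, tau the true population event rate (here, the RLR
    rate among approaching vehicles)."""
    return beta0 - math.log(((1 - tau) / tau) * (ybar / (1 - ybar)))

def rlr_probability(beta0, betas, x):
    """Predicted red-light-running probability for a feature vector x
    (e.g. occupancy time, time gap, used yellow time)."""
    z = beta0 + sum(b * xi for b, xi in zip(betas, x))
    return 1.0 / (1.0 + math.exp(-z))

# Toy example: a 50/50 case-control sample drawn from a population where
# only 2% of approaching vehicles actually run the red light.
b0 = prior_corrected_intercept(beta0=0.4, ybar=0.5, tau=0.02)
p = rlr_probability(b0, [0.8, -1.2], [0.6, 0.3])
```

The slope coefficients are left untouched by this correction; only the intercept is shifted to undo the oversampling of events.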
Comparing Productivity Simulated with Inventory Data Using Different Modelling Technologies
Klopf, M.; Pietsch, S. A.; Hasenauer, H.
2009-04-01
The Lime Stone National Park in Austria was established in 1997 to protect sensitive limestone soils from degradation due to heavy forest management. Since 1997 the management activities have been successively reduced; standing volume and coarse woody debris (CWD) have increased, and degraded soils have begun to recover. One option for studying the rehabilitation process towards a natural virgin-forest state is the use of modelling technology. In this study we test two different modelling approaches for their applicability to the Lime Stone National Park. We compare standing tree volume simulated with (i) the individual tree growth model MOSES and (ii) the species- and management-sensitive adaptation of the biogeochemical-mechanistic model Biome-BGC. The results from the two models are compared with field observations from repeated permanent forest inventory plots of the Lime Stone National Park in Austria. The simulated CWD predictions of the BGC model were compared with dead-wood measurements (standing and lying dead wood) recorded at the permanent inventory plots. The inventory was established between 1994 and 1996 and remeasured from 2004 to 2005. For this analysis, 40 plots of the inventory were selected which comprise the required dead-wood components and are dominated by a single tree species. First we used the distance-dependent individual tree growth model MOSES to derive the standing timber and the amount of mortality per hectare. MOSES is initialized with the inventory data at plot establishment, and each sampling plot is treated as a forest stand. Biome-BGC is a process-based biogeochemical model with extensions for Austrian tree species, a self-initialization and a forest management tool. The initialization for the actual simulations with the BGC model was done as follows: we first used spin-up runs to derive a balanced forest vegetation, similar to an undisturbed forest. Next we considered the management history of the past centuries (heavy clear cuts
Modeling and simulation of gamma camera
International Nuclear Information System (INIS)
Singh, B.; Kataria, S.K.; Samuel, A.M.
2002-08-01
Simulation techniques play a vital role in the design of sophisticated instruments and also in the training of operating and maintenance staff. Gamma camera systems have been used for functional imaging in nuclear medicine. Functional images are derived from external counting of a gamma-emitting radioactive tracer that, after introduction into the body, mimics the behavior of a native biochemical compound. The position-sensitive detector yields the coordinates of the gamma-ray interaction with the detector, which are used to estimate the point of gamma-ray emission within the tracer distribution space. This advanced imaging device is thus dependent on the performance of its algorithms for coordinate computation, estimation of the point of emission, image generation and display of the image data. Contemporary systems also have protocols for quality control and clinical evaluation of imaging studies. Simulation of this processing leads to an understanding of the basic camera design problems. This report describes a PC-based package for the design and simulation of a gamma camera, along with options for simulating data acquisition and quality control of imaging studies. Image display and data processing, the other options implemented in SIMCAM, will be described in separate reports (under preparation). Gamma camera modeling and simulation in SIMCAM has preset configurations of the design parameters for various sizes of crystal detector, with the option to pack the PMTs on a hexagonal or square lattice. Different algorithms for coordinate computation and spatial-distortion removal are allowed, in addition to simulation of the energy-correction circuit. The user can simulate different static, dynamic, MUGA and SPECT studies. The acquired/simulated data are processed for quality control and clinical evaluation of the imaging studies. Results show that the program can be used to assess these performances. Also the variations in performance parameters can be assessed due to the induced
Model for Simulating a Spiral Software-Development Process
Mizell, Carolyn; Curley, Charles; Nayak, Umanath
2010-01-01
A discrete-event simulation model, and a computer program that implements the model, have been developed as means of analyzing a spiral software-development process. This model can be tailored to specific development environments for use by software project managers in making quantitative cases for deciding among different software-development processes, courses of action, and cost estimates. A spiral process can be contrasted with a waterfall process, which is a traditional process that consists of a sequence of activities that include analysis of requirements, design, coding, testing, and support. A spiral process is an iterative process that can be regarded as a repeating modified waterfall process. Each iteration includes assessment of risk, analysis of requirements, design, coding, testing, delivery, and evaluation. A key difference between a spiral and a waterfall process is that a spiral process can accommodate changes in requirements at each iteration, whereas in a waterfall process, requirements are considered to be fixed from the beginning and, therefore, a waterfall process is not flexible enough for some projects, especially those in which requirements are not known at the beginning or may change during development. For a given project, a spiral process may cost more and take more time than does a waterfall process, but may better satisfy a customer's expectations and needs. Models for simulating various waterfall processes have been developed previously, but until now, there have been no models for simulating spiral processes. The present spiral-process-simulating model and the software that implements it were developed by extending a discrete-event simulation process model of the IEEE 12207 Software Development Process, which was built using commercially available software known as the Process Analysis Tradeoff Tool (PATT). Typical inputs to PATT models include industry-average values of product size (expressed as number of lines of code
Fast Atmosphere-Ocean Model Runs with Large Changes in CO2
Russell, Gary L.; Lacis, Andrew A.; Rind, David H.; Colose, Christopher; Opstbaum, Roger F.
2013-01-01
How does climate sensitivity vary with the magnitude of climate forcing? This question was investigated with the use of a modified coupled atmosphere-ocean model, whose stability was improved so that the model would accommodate large radiative forcings yet be fast enough to reach rapid equilibrium. Experiments were performed in which atmospheric CO2 was multiplied by powers of 2, from 1/64 to 256 times the 1950 value. From 8 to 32 times the 1950 CO2 value, the climate sensitivity for doubling CO2 reaches 8 °C due to increases in water vapor absorption and cloud top height and to reductions in low-level cloud cover. As the CO2 amount increases further, sensitivity drops as cloud cover and planetary albedo stabilize. No water-vapor-induced runaway greenhouse caused by increased CO2 was found for the range of CO2 examined. With CO2 at or below 1/8 of the 1950 value, runaway sea ice does occur as the planet cascades to a snowball Earth climate with fully ice-covered oceans and global mean surface temperatures near −30 °C.
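A rough sense of why such power-of-two CO2 experiments span a wide but evenly spaced range of forcing comes from the simplified logarithmic fit F ≈ 5.35 ln(C/C0) W/m² (Myhre et al. 1998); the sketch below uses that textbook approximation, not the paper's radiation code:

```python
import math

def co2_forcing(c_ratio, alpha=5.35):
    """Simplified radiative forcing (W/m^2) for CO2 at c_ratio times a
    reference concentration, using the Myhre et al. logarithmic fit."""
    return alpha * math.log(c_ratio)

# CO2 multiplied by powers of 2, from 1/64x to 256x the reference value,
# mirroring the range of experiments described above.
ratios = [2 ** k for k in range(-6, 9)]
forcings = [co2_forcing(r) for r in ratios]
```

Because the fit is logarithmic, each doubling adds the same increment of roughly 3.7 W/m², which is why the experiments are spaced in powers of two rather than linearly.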
Configuring a Graphical User Interface for Managing Local HYSPLIT Model Runs Through AWIPS
Wheeler, mark M.; Blottman, Peter F.; Sharp, David W.; Hoeth, Brian; VanSpeybroeck, Kurt M.
2009-01-01
Responding to incidents involving the release of harmful airborne pollutants is a continual challenge for Weather Forecast Offices in the National Weather Service. When such incidents occur, current protocol recommends forecaster-initiated requests of NOAA's Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model output through the National Centers for Environmental Prediction to obtain critical dispersion guidance. Individual requests are submitted manually through a secured web site, with multiple requests submitted in sequence as needed, for the purpose of obtaining useful trajectory and concentration forecasts associated with the significant release of harmful chemical gases, radiation, wildfire smoke, etc., into the local atmosphere. To help manage local HYSPLIT runs for both routine and emergency use, a graphical user interface was designed for operational efficiency. The interface allows forecasters to quickly determine the current HYSPLIT configuration for the list of predefined sites (e.g., fixed sites and floating sites), and to make any necessary adjustments to key parameters such as Input Model, Number of Forecast Hours, etc. When using the interface, forecasters will obtain desired output more confidently and without the danger of corrupting essential configuration files.
Systematic simulations of modified gravity: chameleon models
International Nuclear Information System (INIS)
Brax, Philippe; Davis, Anne-Christine; Li, Baojiu; Winther, Hans A.; Zhao, Gong-Bo
2013-01-01
In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 h Mpc⁻¹, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as a guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future.
Systematic simulations of modified gravity: chameleon models
Energy Technology Data Exchange (ETDEWEB)
Brax, Philippe [Institut de Physique Theorique, CEA, IPhT, CNRS, URA 2306, F-91191Gif/Yvette Cedex (France); Davis, Anne-Christine [DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Li, Baojiu [Institute for Computational Cosmology, Department of Physics, Durham University, Durham DH1 3LE (United Kingdom); Winther, Hans A. [Institute of Theoretical Astrophysics, University of Oslo, 0315 Oslo (Norway); Zhao, Gong-Bo, E-mail: philippe.brax@cea.fr, E-mail: a.c.davis@damtp.cam.ac.uk, E-mail: baojiu.li@durham.ac.uk, E-mail: h.a.winther@astro.uio.no, E-mail: gong-bo.zhao@port.ac.uk [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom)
2013-04-01
In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 h Mpc⁻¹, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as a guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future.
[Modeling and Simulation of Spectral Polarimetric BRDF].
Ling, Jin-jiang; Li, Gang; Zhang, Ren-bin; Tang, Qian; Ye, Qiu
2016-01-01
Under polarized-light conditions, the reflection from an object's surface is affected by many factors: the refractive index, the surface roughness, and the angle of incidence. Because a rough surface exhibits different polarized reflection characteristics at different wavelengths, a spectral polarimetric BRDF based on Kirchhoff theory is proposed. A spectral model of the complex refractive index is built by combining spectral models of the refractive index and the extinction coefficient, which were obtained from known complex refractive index values at different wavelengths. A spectral model of surface roughness is then derived from the classical surface-roughness measurement method combined with the Fresnel reflection function. Inserting the spectral models of refractive index and roughness into the BRDF model yields the spectral polarimetric BRDF model. Comparing the simulation results for the cases where the refractive index varies with wavelength while roughness is constant, where both refractive index and roughness vary with wavelength, and for the original model from other papers shows that the spectral polarimetric BRDF model represents the polarization characteristics of the surface accurately and can provide a reliable basis for applications in polarization remote sensing and the classification of substances.
A realistic intersecting D6-brane model after the first LHC run
Li, Tianjun; Nanopoulos, D. V.; Raza, Shabbar; Wang, Xiao-Chuan
2014-08-01
With the Higgs boson mass around 125 GeV and the LHC supersymmetry search constraints, we revisit a three-family Pati-Salam model from intersecting D6-branes in Type IIA string theory on the T⁶/(ℤ2 × ℤ2) orientifold which has a realistic phenomenology. We systematically scan the parameter space for both signs of μ, and find that the gravitino mass is generically heavier than about 2 TeV in both cases due to the Higgs mass lower bound of 123 GeV. In particular, we identify a region of parameter space with electroweak fine-tuning as small as Δ_EW ~ 24-32 (3-4%). In the viable parameter space consistent with all current constraints, the mass ranges for the gluino, the first two-generation squarks and the sleptons are [3, 18] TeV, [3, 16] TeV, and [2, 7] TeV respectively. For the third-generation sfermions, the light stop satisfying the 5σ WMAP bounds via neutralino-stop coannihilation has a mass from 0.5 to 1.2 TeV, and the light stau can be as light as 800 GeV. We also show various coannihilation and resonance scenarios through which the observed dark matter relic density is achieved. Interestingly, certain portions of the parameter space exhibit excellent t-b-τ and b-τ Yukawa coupling unification. Three regions of parameter space are highlighted as well where the dominant component of the lightest neutralino is a bino, wino or higgsino. We discuss various scenarios in which such solutions may evade recent astrophysical bounds in case they saturate or exceed the observed relic density bounds. Prospects for finding a higgsino-like neutralino in direct and indirect searches are also studied, and we display six tables of benchmark points depicting various interesting features of our model. Noting that the lightest neutralino can be as heavy as 2.8 TeV, and that there exists a natural region of parameter space under the low-energy fine-tuning definition with heavy gluino and first two-generation squarks/sleptons, we point out that the 33 TeV and 100 TeV proton-proton colliders are indeed
Hybrid simulation models for data-intensive systems
AUTHOR|(INSPIRE)INSPIRE-00473067
Data-intensive systems are used to access and store massive amounts of data by combining the storage resources of multiple data centers, usually deployed all over the world, in one system. This enables users to utilize these massive storage capabilities in a simple and efficient way. However, with the growth of these systems it becomes a hard problem to estimate the effects of modifications to the system, such as data placement algorithms or hardware upgrades, and to validate these changes for potential side effects. This thesis addresses the modeling of operational data-intensive systems and presents a novel simulation model which estimates the performance of system operations. The running example used throughout this thesis is the data-intensive system Rucio, which is used as the data management system of the ATLAS experiment at CERN's Large Hadron Collider. Existing system models in literature are not applicable to data-intensive workflows, as they only consider computational workflows or make assumpti...
Daily House Price Indices: Construction, Modeling, and Longer-Run Predictions
DEFF Research Database (Denmark)
Bollerslev, Tim; Patton, Andrew J.; Wang, Wenjing
We construct daily house price indices for ten major U.S. metropolitan areas. Our calculations are based on a comprehensive database of several million residential property transactions and a standard repeat-sales method that closely mimics the methodology of the popular monthly Case-Shiller house price indices. Our new daily house price indices exhibit dynamic features similar to those of other daily asset prices, with mild autocorrelation and strong conditional heteroskedasticity of the corresponding daily returns. A relatively simple multivariate time series model for the daily house price index returns, explicitly allowing for commonalities across cities and GARCH effects, produces forecasts of monthly house price changes that are superior to various alternative forecast procedures based on lower frequency data.
Running Club
2010-01-01
The 2010 edition of the annual CERN Road Race will be held on Wednesday 29th September at 18h. The 5.5km race takes place over 3 laps of a 1.8 km circuit in the West Area of the Meyrin site, and is open to everyone working at CERN and their families. There are runners of all speeds, with times ranging from under 17 to over 34 minutes, and the race is run on a handicap basis, by staggering the starting times so that (in theory) all runners finish together. Children (< 15 years) have their own race over 1 lap of 1.8km. As usual, there will be a “best family” challenge (judged on best parent + best child). Trophies are awarded in the usual men’s, women’s and veterans’ categories, and there is a challenge for the best age/performance. Every adult will receive a souvenir prize, financed by a registration fee of 10 CHF. Children enter free (each child will receive a medal). More information, and the online entry form, can be found at http://cern.ch/club...
Christophe Delaere
2012-01-01
On Wednesday 14 March, the machine group successfully injected beams into the LHC for the first time this year. Within 48 hours they managed to ramp the beams to 4 TeV and proceeded to squeeze to β* = 0.6 m, settings that have been used routinely ever since. This brought to an end the CMS Cosmic Run at ~Four Tesla (CRAFT), during which we collected 800k cosmic-ray events with a track crossing the central Tracker. That sample has since been topped up to two million, allowing further refinements of the Tracker alignment. The LHC started delivering the first collisions on 5 April with two bunches colliding in CMS, giving a pile-up of ~27 interactions per crossing at the beginning of the fill. Since then the machine has increased the number of colliding bunches to 1380 and reached peak instantaneous luminosities around 6.5E33 at the beginning of fills. The average bunch charges reached ~1.5E11 protons per bunch, which results in an initial pile-up of ~30 interactions per crossing. During the ...
C. Delaere
2012-01-01
With the analysis of the first 5 fb–1 culminating in the announcement of the observation of a new particle with mass of around 126 GeV/c2, the CERN directorate decided to extend the LHC run until February 2013. This adds three months to the original schedule. Since then the LHC has continued to perform extremely well, and the total luminosity delivered so far this year is 22 fb–1. CMS also continues to perform excellently, recording data with efficiency higher than 95% for fills with the magnetic field at nominal value. The highest instantaneous luminosity achieved by LHC to date is 7.6x1033 cm–2s–1, which translates into 35 interactions per crossing. On the CMS side there has been a lot of work to handle these extreme conditions, such as a new DAQ computer farm and trigger menus to handle the pile-up, automation of recovery procedures to minimise the lost luminosity, better training for the shift crews, etc. We did suffer from a couple of infrastructure ...
Directory of Open Access Journals (Sweden)
Rafaa Saaidia
2017-12-01
This article reports on a simulation based on Computational Fluid Dynamics (CFD) and an empirical investigation of in-cylinder flow characteristics; in addition, it assesses the performance and emission levels of a commercial spark-ignited engine running on CNG and hydrogen blends in different ratios. The main objective was to determine the optimum hydrogen ratio that would yield the best brake torque and release the least polluting gases. The in-cylinder flow velocity and turbulence were investigated during the intake stroke in order to analyze the intake flow behavior; to this end, a 3D CFD code was adopted. Various engine speeds were investigated for gasoline-, CNG- and hydrogen-CNG blend (HCNG)-fueled engines via external mixtures, and the variations of brake torque (BT) and of NOX and CO emissions were examined. A series of tests was conducted on the engine within the speed range of 1000 to 5000 rpm. For this purpose, a commercial Hyundai Sonata S.I. engine was modified to operate on a blend of CNG and hydrogen in different ratios. The experiments attempted to determine the optimum allowable hydrogen ratio with CNG for normal engine operation; the engine performance and the emission levels were also analyzed. At an engine speed of 4200 rpm, the results revealed that beyond a ratio of 50% hydrogen by volume added to CNG, a backfire phenomenon appeared. Below this ratio (0~40% hydrogen by volume), the CNG and hydrogen blend proved beneficial for engine performance and for curtailing emission levels. However, at low engine speeds the NOX concentration increased with hydrogen content, whereas at high engine speeds the NOX concentration decreased to its lowest level compared with that reached with gasoline as the running fuel. The concentration levels of HC, CO2 and CO decreased with increasing hydrogen percentage.
Tokamak Simulation Code modeling of NSTX
International Nuclear Information System (INIS)
Jardin, S.C.; Kaye, S.; Menard, J.; Kessel, C.; Glasser, A.H.
2000-01-01
The Tokamak Simulation Code [TSC] is widely used for the design of new axisymmetric toroidal experiments. In particular, TSC was used extensively in the design of the National Spherical Torus eXperiment [NSTX]. The authors have now benchmarked TSC against initial NSTX results and find excellent agreement for plasma and vessel currents and magnetic flux loops when the experimental coil currents are used in the simulations. TSC has also been coupled with a ballooning stability code and with DCON to provide stability predictions for NSTX operation. TSC has also been used to model initial CHI experiments, in which a large poloidal voltage is applied to the NSTX vacuum vessel, causing a force-free current to appear in the plasma. This is a phenomenon similar to the plasma halo current that sometimes develops during a plasma disruption.
DynaSim: A MATLAB Toolbox for Neural Modeling and Simulation.
Sherfey, Jason S; Soplata, Austin E; Ardid, Salva; Roberts, Erik A; Stanley, David A; Pittman-Polletta, Benjamin R; Kopell, Nancy J
2018-01-01
DynaSim is an open-source MATLAB/GNU Octave toolbox for rapid prototyping of neural models and batch simulation management. It is designed to speed up and simplify the process of generating, sharing, and exploring network models of neurons with one or more compartments. Models can be specified by equations directly (similar to XPP or the Brian simulator) or by lists of predefined or custom model components. The higher-level specification supports arbitrarily complex population models and networks of interconnected populations. DynaSim also includes a large set of features that simplify exploring model dynamics over parameter spaces, running simulations in parallel using both multicore processors and high-performance computer clusters, and analyzing and plotting large numbers of simulated data sets in parallel. It also includes a graphical user interface (DynaSim GUI) that supports full functionality without requiring user programming. The software has been implemented in MATLAB to enable advanced neural modeling using MATLAB, given its popularity and a growing interest in modeling neural systems. The design of DynaSim incorporates a novel schema for model specification to facilitate future interoperability with other specifications (e.g., NeuroML, SBML), simulators (e.g., NEURON, Brian, NEST), and web-based applications (e.g., Geppetto) outside MATLAB. DynaSim is freely available at http://dynasimtoolbox.org. This tool promises to reduce barriers for investigating dynamics in large neural models, facilitate collaborative modeling, and complement other tools being developed in the neuroinformatics community.
Testing Lorentz Invariance Emergence in the Ising Model using Monte Carlo simulations
Dias Astros, Maria Isabel
2017-01-01
In the context of Lorentz invariance as an emergent phenomenon at low energy scales in the study of quantum gravity, a system composed of two interacting 3D Ising models (one with an anisotropy in one direction) was proposed. Two Monte Carlo simulations were run: one for the 2D Ising model and one for the target model. In both cases the observables (energy, magnetization, heat capacity and magnetic susceptibility) were computed for different lattice sizes, and a Binder cumulant was introduced in order to estimate the critical temperature of the systems. Moreover, the correlation function was calculated for the 2D Ising model.
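A minimal Metropolis simulation of the 2D Ising model, of the kind run here, can be sketched as follows (lattice size, temperature and sweep count are toy values, not those of the thesis):

```python
import math
import random

def metropolis_sweep(spins, L, T, rng):
    """One Metropolis sweep: attempt L*L single-spin flips at temperature T,
    with J = kB = 1 and periodic boundary conditions."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * spins[i][j] * nb  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i][j] = -spins[i][j]

def magnetization(spins, L):
    """Mean magnetization per spin; samples of m feed the Binder cumulant
    U = 1 - <m^4> / (3 <m^2>^2) used to locate the critical temperature."""
    return sum(sum(row) for row in spins) / (L * L)

rng = random.Random(42)
L, T = 8, 1.0                        # well below Tc ~ 2.27
spins = [[1] * L for _ in range(L)]  # ordered start
for _ in range(200):
    metropolis_sweep(spins, L, T, rng)
m = magnetization(spins, L)
```

Below the critical temperature the lattice stays strongly magnetized, so |m| remains close to 1 in this run; near Tc one would instead accumulate m over many sweeps and lattice sizes to compute the Binder cumulant crossing.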
BOREAS RSS-8 BIOME-BGC Model Simulations at Tower Flux Sites in 1994
Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Kimball, John
2000-01-01
BIOME-BGC is a general ecosystem process model designed to simulate biogeochemical and hydrologic processes across multiple scales (Running and Hunt, 1993). In this investigation, BIOME-BGC was used to estimate daily water and carbon budgets for the BOREAS tower flux sites for 1994. Carbon variables estimated by the model include gross primary production (i.e., net photosynthesis), maintenance and heterotrophic respiration, net primary production, and net ecosystem carbon exchange. Hydrologic variables estimated by the model include snowcover, evaporation, transpiration, evapotranspiration, soil moisture, and outflow. The information provided by the investigation includes input initialization and model output files for various sites in tabular ASCII format.
Forecasting Lightning Threat using Cloud-Resolving Model Simulations
McCaul, Eugene W., Jr.; Goodman, Steven J.; LaCasse, Katherine M.; Cecil, Daniel J.
2008-01-01
simulations can be in error. Although these model shortcomings presently limit the precision of lightning threat forecasts from individual runs of current-generation models, the techniques proposed herein should remain applicable as newer and more accurate physically based model versions, physical parameterizations, initialization techniques, and forecast ensembles become available.
Frey, H Christopher; Zhai, Haibo; Rouphail, Nagui M
2009-11-01
This study presents a methodology for estimating high-resolution, regional on-road vehicle emissions and the associated reductions in air pollutant emissions from vehicles that utilize alternative fuels or propulsion technologies. The fuels considered are gasoline, diesel, ethanol, biodiesel, compressed natural gas, hydrogen, and electricity. The technologies considered are internal combustion or compression engines, hybrids, fuel cell, and electric. Road link-based emission models are developed using modal fuel use and emission rates applied to facility- and speed-specific driving cycles. For an urban case study, passenger cars were found to be the largest sources of HC, CO, and CO(2) emissions, whereas trucks contributed the largest share of NO(x) emissions. When alternative fuel and propulsion technologies were introduced in the fleet at a modest market penetration level of 27%, their emission reductions were found to be 3-14%. Emissions for all pollutants generally decreased with an increase in the market share of alternative vehicle technologies. Turnover of the light duty fleet to newer Tier 2 vehicles reduced emissions of HC, CO, and NO(x) substantially. However, modest improvements in fuel economy may be offset by VMT growth and reductions in overall average speed.
George, D. L.; Iverson, R. M.
2012-12-01
much higher resolution grids evolve with the flow. The reduction in computational cost due to AMR makes very large-scale problems tractable on personal computers. Model accuracy can be tested by comparing numerical predictions with empirical data. These comparisons utilize controlled experiments conducted at the USGS debris-flow flume, which provide detailed data about flow mobilization and dynamics. Additionally, we have simulated historical large-scale debris flows, such as the (≈50 million m^3) debris flow that originated on Mt. Meager, British Columbia, in 2010. This flow took a very complex route through highly variable topography and provides a valuable benchmark for testing. Maps of the debris-flow deposit and data from seismic stations provide evidence regarding flow initiation, transit times, and deposition. Our simulations reproduce many of the complex patterns of the event, such as run-out geometry and extent; the large scale of the flow and the complexity of the topography demonstrate the utility of AMR in flow simulations.
Simulations, evaluations and models. Vol. 1
International Nuclear Information System (INIS)
Brehmer, B.; Leplat, J.
1992-01-01
Papers presented at the Fourth MOHAWC (Models of Human Activities in Work Context) workshop. The general theme was simulations, evaluations and models, with emphasis on time in relation to the modelling of human activities in modern, high-tech work. Such work often requires people to control dynamic systems, and the behaviour and misbehaviour of these systems over time is a principal focus of work in, for example, a modern process plant. The papers report on microworlds and their innovative uses, both in experiments and in a new application: testing a program that performs diagnostic reasoning. They present new perspectives on the problem of time in process control, showing the importance of considering the time scales of dynamic tasks, in both individual and distributed decision making, and providing new formalisms both for the representation of time and for reasoning about time in diagnosis. (AB)
Process model simulations of the divergence effect
Anchukaitis, K. J.; Evans, M. N.; D'Arrigo, R. D.; Smerdon, J. E.; Hughes, M. K.; Kaplan, A.; Vaganov, E. A.
2007-12-01
We explore the extent to which the Vaganov-Shashkin (VS) model of conifer tree-ring formation can explain evidence for changing relationships between climate and tree growth over recent decades. The VS model is driven by daily environmental forcing (temperature, soil moisture, and solar radiation), and simulates tree-ring growth cell-by-cell as a function of the most limiting environmental control. This simplified representation of tree physiology allows us to examine using a selection of case studies whether instances of divergence may be explained in terms of changes in limiting environmental dependencies or transient climate change. Identification of model-data differences permits further exploration of the effects of tree-ring standardization, atmospheric composition, and additional non-climatic factors.
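The limiting-factor principle at the heart of the VS model is easy to state in code. In the sketch below (an illustration, not the actual VS implementation), the daily growth rate is the product of a solar-radiation modulation and the minimum of piecewise-linear temperature and soil-moisture responses; all threshold parameters are invented for the example:

```python
def ramp(x, x1, x2, x3, x4):
    """Piecewise-linear response: 0 at or below x1, rising to 1 on [x2, x3],
    falling back to 0 at x4 and above."""
    if x <= x1 or x >= x4:
        return 0.0
    if x < x2:
        return (x - x1) / (x2 - x1)
    if x <= x3:
        return 1.0
    return (x4 - x) / (x4 - x3)

def vs_growth(T, W, E, t_pars=(0.0, 10.0, 20.0, 30.0),
              w_pars=(0.0, 0.2, 0.6, 0.9)):
    """Daily growth rate: solar modulation E (0..1) times the response of
    the more limiting of temperature T (deg C) and soil moisture W (v/v).
    Threshold tuples are hypothetical demonstration values."""
    return E * min(ramp(T, *t_pars), ramp(W, *w_pars))
```

Whichever of the temperature or moisture responses is smaller controls growth on a given day, which is why a shift in the limiting environmental dependency shows up directly as a change in the simulated climate-growth relationship.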
Radiation Modeling with Direct Simulation Monte Carlo
Carlson, Ann B.; Hassan, H. A.
1991-01-01
Improvements in the modeling of radiation in low density shock waves with direct simulation Monte Carlo (DSMC) are the subject of this study. A new scheme to determine the relaxation collision numbers for excitation of electronic states is proposed. This scheme attempts to move the DSMC programs toward a more detailed modeling of the physics and more reliance on available rate data. The new method is compared with the current modeling technique and both techniques are compared with available experimental data. The differences in the results are evaluated. The test case is based on experimental measurements from the AVCO-Everett Research Laboratory electric arc-driven shock tube of a normal shock wave in air at 10 km/s and 0.1 Torr. The new method agrees with the available data as well as the results from the earlier scheme and is more easily extrapolated to different flow conditions.
Traffic flow dynamics data, models and simulation
Treiber, Martin
2013-01-01
This textbook provides a comprehensive and instructive coverage of vehicular traffic flow dynamics and modeling. It makes this fascinating interdisciplinary topic, which to date was only documented in parts by specialized monographs, accessible to a broad readership. Numerous figures and problems with solutions help the reader to quickly understand and practice the presented concepts. This book is targeted at students of physics and traffic engineering and, more generally, also at students and professionals in computer science, mathematics, and interdisciplinary topics. It also offers material for project work in programming and simulation at college and university level. The main part, after presenting different categories of traffic data, is devoted to a mathematical description of the dynamics of traffic flow, covering macroscopic models which describe traffic in terms of density, as well as microscopic many-particle models in which each particle corresponds to a vehicle and its driver. Focus chapters on ...
Biomechanics trends in modeling and simulation
Ogden, Ray
2017-01-01
The book presents a state-of-the-art overview of biomechanical and mechanobiological modeling and simulation of soft biological tissues. Seven well-known scientists working in that particular field discuss topics such as biomolecules, networks and cells as well as failure, multi-scale, agent-based, bio-chemo-mechanical and finite element models appropriate for computational analysis. Applications include arteries, the heart, vascular stents and valve implants as well as adipose, brain, collagenous and engineered tissues. The mechanics of the whole cell and sub-cellular components as well as the extracellular matrix structure and mechanotransduction are described. In particular, the formation and remodeling of stress fibers, cytoskeletal contractility, cell adhesion and the mechanical regulation of fibroblast migration in healing myocardial infarcts are discussed. The essential ingredients of continuum mechanics are provided. Constitutive models of fiber-reinforced materials with an emphasis on arterial walls ...
Simulations and cosmological inference: A statistical model for power spectra means and covariances
International Nuclear Information System (INIS)
Schneider, Michael D.; Knox, Lloyd; Habib, Salman; Heitmann, Katrin; Higdon, David; Nakhleh, Charles
2008-01-01
We describe an approximate statistical model for the sample variance distribution of the nonlinear matter power spectrum that can be calibrated from limited numbers of simulations. Our model retains the common assumption of a multivariate normal distribution for the power spectrum band powers but takes full account of the (parameter-dependent) power spectrum covariance. The model is calibrated using an extension of the framework in Habib et al. (2007) to train Gaussian processes for the power spectrum mean and covariance given a set of simulation runs over a hypercube in parameter space. We demonstrate the performance of this machinery by estimating the parameters of a power-law model for the power spectrum. Within this framework, our calibrated sample variance distribution is robust to errors in the estimated covariance and shows rapid convergence of the posterior parameter constraints with the number of training simulations.
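As a toy illustration of the inference ingredient described above (with a fixed rather than parameter-dependent covariance, and invented numbers), the multivariate-normal likelihood of observed band powers under a power-law model P(k) = A k^n can be written as:

```python
import numpy as np

def log_likelihood(theta, k, p_hat, cov):
    """Multivariate-normal log-likelihood of observed band powers p_hat,
    with the mean given by a power-law model P(k) = A * k**n and a fixed
    band-power covariance matrix cov."""
    A, n = theta
    mu = A * k ** n           # model prediction for each band power
    r = p_hat - mu            # residual vector
    _, logdet = np.linalg.slogdet(cov)
    chi2 = r @ np.linalg.solve(cov, r)
    return -0.5 * (chi2 + logdet + len(k) * np.log(2.0 * np.pi))
```

In the paper's framework the mean and covariance entering such a likelihood would themselves come from Gaussian-process emulators trained on simulation runs over a hypercube in parameter space, rather than from a closed-form power law.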
Qualitative simulation in formal process modelling
International Nuclear Information System (INIS)
Sivertsen, Elin R.
1999-01-01
In relation to several different research activities at the OECD Halden Reactor Project, the usefulness of formal process models has been identified. Being represented in some appropriate representation language, the purpose of these models is to model process plants and plant automatics in a unified way to allow verification and computer aided design of control strategies. The present report discusses qualitative simulation and the tool QSIM as one approach to formal process models. In particular, the report aims at investigating how recent improvements of the tool facilitate the use of the approach in areas like process system analysis, procedure verification, and control software safety analysis. An important long term goal is to provide a basis for using qualitative reasoning in combination with other techniques to facilitate the treatment of embedded programmable systems in Probabilistic Safety Analysis (PSA). This is motivated by the potential of such a combination in safety analysis based on models comprising software, hardware, and operator. It is anticipated that the research results from this activity will benefit V and V in a wide variety of applications where formal process models can be utilized. Examples are operator procedures, intelligent decision support systems, and common model repositories (author) (ml)
Evaluation of Three Models for Simulating Pesticide Runoff from Irrigated Agricultural Fields.
Zhang, Xuyang; Goh, Kean S
2015-11-01
Three models were evaluated for their accuracy in simulating pesticide runoff at the edge of agricultural fields: Pesticide Root Zone Model (PRZM), Root Zone Water Quality Model (RZWQM), and OpusCZ. Modeling results on runoff volume, sediment erosion, and pesticide loss were compared with measurements taken from field studies. Models were also compared on their theoretical foundations and ease of use. For runoff events generated by sprinkler irrigation and rainfall, all models performed equally well, with small errors in simulating water, sediment, and pesticide runoff. The mean absolute percentage errors (MAPEs) were between 3 and 161%. For flood irrigation, OpusCZ simulated runoff and pesticide mass with the highest accuracy, followed by RZWQM and PRZM, likely owing to its unique hydrological algorithm for runoff simulation during flood irrigation. Simulation results from cold model runs by OpusCZ and RZWQM, using measured values for model inputs, matched the observed values closely. The MAPE ranged from 28 to 384% and from 42 to 168% for OpusCZ and RZWQM, respectively. These satisfactory model outputs showed the models' abilities in mimicking reality. Theoretical evaluations indicated that OpusCZ and RZWQM use mechanistic approaches for hydrology simulation, output data on a subdaily time-step, and were able to simulate management practices and subsurface flow via tile drainage. In contrast, PRZM operates at a daily time-step and simulates surface runoff using the USDA Soil Conservation Service's curve number method. Among the three models, OpusCZ and RZWQM were suitable for simulating pesticide runoff in semiarid areas where agriculture is heavily dependent on irrigation. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
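The mean absolute percentage error used to score the three models is simple to compute; a minimal version (assuming paired per-event observations and predictions, with no zero observations) is:

```python
def mape(observed, predicted):
    """Mean absolute percentage error, in percent, over paired events.
    Assumes no observed value is zero."""
    pairs = list(zip(observed, predicted))
    return 100.0 * sum(abs(p - o) / abs(o) for o, p in pairs) / len(pairs)
```

For example, mape([10, 20], [11, 18]) gives 10.0, i.e., the predictions deviate from the observations by 10% on average.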
Traffic flow dynamics. Data, models and simulation
Energy Technology Data Exchange (ETDEWEB)
Treiber, Martin [Technische Univ. Dresden (Germany). Inst. fuer Wirtschaft und Verkehr; Kesting, Arne [TomTom Development Germany GmbH, Berlin (Germany)
2013-07-01
First comprehensive textbook of this fascinating interdisciplinary topic which explains advances in a way that is easily accessible to engineering, physics and math students. Presents practical applications of traffic theory such as driving behavior, stability analysis, stop-and-go waves, and travel time estimation. Presents the topic in a novel and systematic way by addressing both microscopic and macroscopic models with a focus on traffic instabilities. Revised and extended edition of the German textbook "Verkehrsdynamik und -simulation". This textbook provides a comprehensive and instructive coverage of vehicular traffic flow dynamics and modeling. It makes this fascinating interdisciplinary topic, which to date was only documented in parts by specialized monographs, accessible to a broad readership. Numerous figures and problems with solutions help the reader to quickly understand and practice the presented concepts. This book is targeted at students of physics and traffic engineering and, more generally, also at students and professionals in computer science, mathematics, and interdisciplinary topics. It also offers material for project work in programming and simulation at college and university level. The main part, after presenting different categories of traffic data, is devoted to a mathematical description of the dynamics of traffic flow, covering macroscopic models which describe traffic in terms of density, as well as microscopic many-particle models in which each particle corresponds to a vehicle and its driver. Focus chapters on traffic instabilities and model calibration/validation present these topics in a novel and systematic way. Finally, the theoretical framework is shown at work in selected applications such as traffic-state and travel-time estimation, intelligent transportation systems, traffic operations management, and a detailed physics-based model for fuel consumption and emissions.
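Among the microscopic many-particle models the book covers are car-following models of the Intelligent Driver Model type, in which each vehicle's acceleration depends on its own speed, the gap to its leader, and the approach rate. A minimal sketch (with typical but arbitrary parameter values, not taken from the book) is:

```python
import math

def idm_accel(v, dv, s, v0=33.3, T=1.6, a=1.0, b=1.5, s0=2.0, delta=4):
    """Intelligent Driver Model acceleration (m/s^2).

    v: own speed (m/s), dv: approach rate v - v_lead (m/s), s: gap (m).
    Parameters (demonstration values): desired speed v0, time headway T,
    maximum acceleration a, comfortable deceleration b, minimum gap s0."""
    # Desired dynamic gap: minimum gap plus headway and braking terms.
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a * b)))
    return a * (1.0 - (v / v0) ** delta - (s_star / s) ** 2)
```

Coupling this acceleration to a simple Euler update of each vehicle's speed and position yields a runnable many-particle traffic simulation: on a free road the model accelerates toward v0, while a rapidly shrinking gap produces strong braking.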