WorldWideScience

Sample records for model simulations run

  1. Running Parallel Discrete Event Simulators on Sierra

    Energy Technology Data Exchange (ETDEWEB)

    Barnes, P. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Jefferson, D. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-12-03

    In this proposal we consider porting the ROSS/Charm++ simulator and the discrete event models that run under its control so that they run on the Sierra architecture and make efficient use of the Volta GPUs.

  2. Debris flow run-out simulation and analysis using a dynamic model

    Science.gov (United States)

    Melo, Raquel; van Asch, Theo; Zêzere, José L.

    2018-02-01

    Only two months after a huge forest fire occurred in the upper part of a valley located in central Portugal, several debris flows were triggered by intense rainfall. The event caused infrastructural and economic damage, although no lives were lost. The present research aims to simulate the run-out of two debris flows that occurred during the event, as well as to calculate via back-analysis the rheological parameters and the excess rain involved. Thus, a dynamic model was used, which integrates surface runoff, concentrated erosion along the channels, and propagation and deposition of flow material. Afterwards, the model was validated using 32 debris flows triggered during the same event that were not considered for calibration. The rheological and entrainment parameters obtained for the most accurate simulation were then used to perform three scenarios of debris flow run-out at the basin scale. The results were compared with the existing buildings exposed in the study area, and the worst-case scenario showed a potential inundation that may affect 345 buildings. In addition, six streams where debris flow occurred in the past and caused material damage and loss of life were identified.

  3. Simulation of nonlinear wave run-up with a high-order Boussinesq model

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Madsen, Per A.

    2008-01-01

    This paper considers the numerical simulation of nonlinear wave run-up within a highly accurate Boussinesq-type model. Moving wet–dry boundary algorithms based on so-called extrapolating boundary techniques are utilized, and a new variant of this approach is proposed in two horizontal dimensions. As validation, computed results involving the nonlinear run-up of periodic as well as transient waves on a sloping beach are considered in a single horizontal dimension, demonstrating excellent agreement with analytical solutions for both the free surface and horizontal velocity. In two horizontal dimensions, cases involving long wave resonance in a parabolic basin, solitary wave evolution in a triangular channel, and solitary wave run-up on a circular conical island are considered. In each case the computed results compare well against available analytical solutions or experimental measurements. The ability…

  4. Design and Development of a Model to Simulate 0-G Treadmill Running Using the European Space Agency's Subject Loading System

    Science.gov (United States)

    Caldwell, E. C.; Cowley, M. S.; Scott-Pandorf, M. M.

    2010-01-01

    Develop a model that simulates a human running in 0 G using the European Space Agency's (ESA) Subject Loading System (SLS). The model provides ground reaction forces (GRF) based on speed and pull-down forces (PDF). DESIGN: The theoretical basis for the Running Model was a simple spring-mass model. The dynamic properties of the spring-mass model express theoretical vertical GRF (GRFv) and shear GRF in the posterior-anterior direction (GRFsh) during running gait. ADAMS/View software was used to build the model, which has a pelvis, thigh segment, shank segment, and a spring foot (see Figure 1). The model's movement simulates the joint kinematics of a human running at Earth gravity with the aim of generating GRF data. DEVELOPMENT & VERIFICATION: ESA provided parabolic flight data of subjects running while using the SLS, for further characterization of the model's GRF. Peak GRF data were fit to a linear regression dependent on PDF and speed. Interpolation and extrapolation of the regression equation provided a theoretical data matrix, which is used to drive the model's motion equations. Verification of the model was conducted by running the model at 4 different speeds, with each speed accounting for 3 different PDF levels. The model's GRF data fell within a 1-standard-deviation boundary derived from the empirical ESA data. CONCLUSION: The Running Model aids in conducting various simulations (potential scenarios include a fatigued runner or a powerful runner generating high loads at a fast cadence) to determine limitations for the T2 vibration isolation system (VIS) aboard the International Space Station. This model can predict how running with the ESA SLS affects the T2 VIS and may be used for other exercise analyses in the future.
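
    The regression-driven core of this approach can be illustrated compactly. The sketch below is not the authors' ADAMS model; it is a minimal Python illustration, with made-up regression coefficients and grid values, of how peak vertical GRF might be interpolated from speed and pull-down force to build the theoretical data matrix that drives a simulation.

        import numpy as np

        # Hypothetical regression coefficients (NOT the paper's fitted values):
        # peak vertical GRF modeled as a linear function of speed and pull-down force.
        B0, B_SPEED, B_PDF = 150.0, 80.0, 1.1  # N, N/(m/s), dimensionless

        def peak_grfv(speed_mps, pdf_n):
            """Interpolated/extrapolated peak vertical GRF (N) for one footfall."""
            return B0 + B_SPEED * speed_mps + B_PDF * pdf_n

        # Build the theoretical data matrix over a grid of speeds and PDF levels,
        # mirroring the 4-speed x 3-PDF verification grid described above.
        speeds = np.array([2.0, 2.5, 3.0, 3.5])  # m/s (assumed values)
        pdfs = np.array([400.0, 600.0, 800.0])   # N (assumed values)
        matrix = np.array([[peak_grfv(s, p) for p in pdfs] for s in speeds])
        print(matrix)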

  5. Numerical simulation of transoceanic propagation and run-up of tsunami

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Yong-Sik; Yoon, Sung-Bum [Hanyang University, Seoul (Korea)]

    2001-04-30

    The propagation and associated run-up process of tsunami are numerically investigated in this study. A transoceanic propagation model is first used to simulate the distant propagation of tsunamis. An inundation model is then employed to simulate the subsequent run-up process near the coastline. A case study is done for the 1960 Chilean tsunami. A detailed maximum inundation map of Hilo Bay is obtained and compared with field observations and other numerical model predictions. A very reasonable agreement is observed.

  6. The Trick Simulation Toolkit: A NASA/Opensource Framework for Running Time Based Physics Models

    Science.gov (United States)

    Penn, John M.

    2016-01-01

    The Trick Simulation Toolkit is a simulation development environment used to create high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. Its purpose is to generate a simulation executable from a collection of user-supplied models and a simulation definition file. For each Trick-based simulation, Trick automatically provides job scheduling, numerical integration, the ability to write and restore human readable checkpoints, data recording, interactive variable manipulation, a run-time interpreter, and many other commonly needed capabilities. This allows simulation developers to concentrate on their domain expertise and on the algorithms and equations of their models. Also included in Trick are tools for plotting recorded data and various other supporting utilities and libraries. Trick is written in C/C++ and Java and supports both the Linux and Mac OS X operating systems. This paper describes Trick's design and use at NASA Johnson Space Center.

  7. COMPARISON OF METHODS FOR SIMULATING TSUNAMI RUN-UP THROUGH COASTAL FORESTS

    Directory of Open Access Journals (Sweden)

    Benazir

    2017-09-01

    The research is aimed at reviewing two numerical methods for modeling the effect of coastal forest on tsunami run-up and at proposing an alternative approach. Two methods for modeling the effect of coastal forest, namely the Constant Roughness Model (CRM) and the Equivalent Roughness Model (ERM), simulate the effect of the forest by using an artificial Manning roughness coefficient. An alternative approach that simulates each of the trees as a vertical square column is introduced. Simulations were carried out with variations of forest density and layout pattern of the trees. The numerical model was validated using an existing data series of tsunami run-up without forest protection. The study indicated that the alternative method is in good agreement with the ERM method for low forest density. At higher density, and when the trees were planted in a zigzag pattern, the ERM produced significantly higher run-up. For a zigzag pattern at 50% forest density, which represents a watertight wall, both the ERM and CRM methods produced relatively high run-up, which should not happen theoretically. The alternative method, on the other hand, reflected the entire tsunami. In reality, a housing complex can be considered and simulated as a forest with various sizes and layouts of obstacles, where the alternative approach is applicable. The alternative method is more accurate than the existing methods for simulating a coastal forest for tsunami mitigation but consumes considerably more computational time.

  8. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code

    Directory of Open Access Journals (Sweden)

    Susanne Kunkel

    2017-06-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it were part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
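
    The dry-run idea generalizes well beyond NEST: a single process executes the per-rank workload for an assumed total number of ranks and stubs out the communication step, so per-rank memory and runtime can be profiled locally. The Python sketch below illustrates this concept generically; it is not NEST code, and the neuron "update" is a placeholder.

        import time
        import tracemalloc

        def simulate_rank(rank, total_ranks, n_neurons, steps):
            # Each rank owns a slice of the neuron population (round-robin here).
            local = [i for i in range(n_neurons) if i % total_ranks == rank]
            state = {i: 0.0 for i in local}
            for _ in range(steps):
                for i in local:
                    state[i] += 0.1   # placeholder neuron update
                communicate_stub()    # dry run: communication is skipped
            return state

        def communicate_stub():
            pass  # in a real parallel run this would be an MPI exchange

        tracemalloc.start()
        t0 = time.perf_counter()
        simulate_rank(rank=0, total_ranks=1024, n_neurons=1_000_000, steps=10)
        print(f"runtime {time.perf_counter() - t0:.2f} s, "
              f"peak memory {tracemalloc.get_traced_memory()[1] / 1e6:.1f} MB")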

  9. Towards a complex systems approach in sports injury research: simulating running-related injury development with agent-based modelling.

    Science.gov (United States)

    Hulme, Adam; Thompson, Jason; Nielsen, Rasmus Oestergaard; Read, Gemma J M; Salmon, Paul M

    2018-06-18

    There have been recent calls for the application of the complex systems approach in sports injury research. However, beyond theoretical description and static models of complexity, little progress has been made towards formalising this approach in a way that is practical for sports injury scientists and clinicians. Therefore, our objective was to use a computational modelling method to develop a dynamic simulation in sports injury research. Agent-based modelling (ABM) was used to model the occurrence of sports injury in a synthetic athlete population. The ABM was developed based on sports injury causal frameworks and was applied in the context of distance running-related injury (RRI). Using the acute:chronic workload ratio (ACWR), we simulated the dynamic relationship between changes in weekly running distance and RRI through the manipulation of various 'athlete management tools'. The findings confirmed that building weekly running distances over time, even within the reported ACWR 'sweet spot', will eventually result in RRI as athletes reach and surpass their individual physical workload limits. Introducing training-related error into the simulation and modelling a 'hard ceiling' dynamic resulted in a higher RRI incidence proportion across the population at higher absolute workloads. The presented simulation offers a practical starting point from which to apply more sophisticated computational models that can account for the complex nature of sports injury aetiology. Alongside traditional forms of scientific inquiry, the use of ABM and other simulation-based techniques could be considered as a complementary and alternative methodological approach in sports injury research.
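
    For readers unfamiliar with the ACWR, the ratio is commonly computed as the most recent week's load divided by the rolling mean load of the last four weeks (the "coupled" definition). The Python sketch below is a hedged illustration of that bookkeeping, not the authors' agent-based model; the 0.8-1.3 "sweet spot" bounds are widely cited values, used here as an assumption.

        def acwr(weekly_km):
            """Coupled acute:chronic workload ratio: last week vs. 4-week rolling mean."""
            acute = weekly_km[-1]
            chronic = sum(weekly_km[-4:]) / len(weekly_km[-4:])
            return acute / chronic

        history = [30.0]
        for week in range(20):
            history.append(history[-1] * 1.10)   # build distance 10% per week
            ratio = acwr(history)
            in_sweet_spot = 0.8 <= ratio <= 1.3  # assumed bounds
            print(f"week {week}: {history[-1]:.1f} km, ACWR={ratio:.2f}, "
                  f"sweet spot={in_sweet_spot}")

    With a steady 10% weekly build, the ACWR stays flat inside the sweet spot even as the absolute distance grows without bound, which is exactly the dynamic the abstract highlights.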

  10. Facilitating arrhythmia simulation: the method of quantitative cellular automata modeling and parallel running

    Directory of Open Access Journals (Sweden)

    Mondry Adrian

    2004-08-01

    Background: Many arrhythmias are triggered by abnormal electrical activity at the ionic channel and cell level, and then evolve spatio-temporally within the heart. To understand arrhythmias better and to diagnose them more precisely by their ECG waveforms, a whole-heart model is required to explore the association between the massively parallel activities at the channel/cell level and the integrative electrophysiological phenomena at the organ level. Methods: We have developed a method to build large-scale electrophysiological models by using extended cellular automata, and to run such models on a cluster of shared memory machines. We describe here the method, including the extension of a language-based cellular automaton to implement quantitative computing, the building of a whole-heart model with Visible Human Project data, the parallelization of the model on a cluster of shared memory computers with OpenMP and MPI hybrid programming, and a simulation algorithm that links cellular activity with the ECG. Results: We demonstrate that electrical activities at the channel, cell, and organ levels can be traced and captured conveniently in our extended cellular automaton system. Examples of some ECG waveforms simulated with a 2-D slice are given to support the ECG simulation algorithm. A performance evaluation of the 3-D model on a four-node cluster is also given. Conclusions: Quantitative multicellular modeling with extended cellular automata is a highly efficient and widely applicable method to weave experimental data at different levels into computational models. This process can be used to investigate complex and collective biological activities that can be described neither by their governing differential equations nor by discrete parallel computation. Transparent cluster computing is a convenient and effective method to make time-consuming simulation feasible. Arrhythmias, as a typical case, can be effectively simulated with these methods.

  11. Coupling methods for parallel running RELAPSim codes in nuclear power plant simulation

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yankai; Lin, Meng, E-mail: linmeng@sjtu.edu.cn; Yang, Yanhua

    2016-02-15

    When the plant is modeled in detail for high precision, it is hard to achieve real-time calculation with a single RELAP5 instance in a large-scale simulation. To improve the speed and ensure the precision of the simulation at the same time, coupling methods for parallel running RELAPSim codes were proposed in this study. An explicit coupling method via coupling boundaries was realized based on a data-exchange and procedure-control environment. A compromise on synchronization frequency was carefully considered to improve the precision of the simulation while guaranteeing real-time performance. The coupling methods were assessed using both single-phase flow models and two-phase flow models, and good agreements were obtained between the splitting–coupling models and the integrated model. The mitigation of SGTR was performed as an integral application of the coupling models. A large-scope NPP simulator was developed adopting six splitting–coupling models of RELAPSim and other simulation codes. The coupling models could improve the speed of simulation significantly and make real-time calculation possible. In this paper, the coupling of the models in the engineering simulator is taken as an example to expound the coupling methods, i.e., coupling between parallel running RELAPSim codes, and coupling between RELAPSim code and other types of simulation codes. However, the coupling methods are also applicable to other simulators, for example, a simulator employing ATHLET instead of RELAP5, or another logic code instead of SIMULINK. It is believed the coupling method is generally applicable to NPP simulators regardless of the specific codes chosen in this paper.
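
    The explicit boundary coupling described above can be sketched abstractly: two solver instances advance independently and exchange boundary conditions only every N internal steps (the synchronization frequency). The Python code below is a generic illustration under those assumptions, with placeholder physics; it is not the RELAPSim implementation.

        # Two subdomain "solvers" exchange boundary values every sync_every steps.
        class Solver:
            def __init__(self, value):
                self.value = value        # stand-in for the subdomain state
                self.boundary_in = value  # boundary condition from the neighbour

            def advance(self, dt):
                # Relax toward the last received boundary value (placeholder physics).
                self.value += dt * (self.boundary_in - self.value)

        a, b = Solver(500.0), Solver(300.0)  # e.g., two loop temperatures (K)
        dt, sync_every = 0.01, 10            # exchange every 10 internal steps

        for step in range(1000):
            a.advance(dt)
            b.advance(dt)
            if step % sync_every == 0:       # explicit, infrequent data exchange
                a.boundary_in, b.boundary_in = b.value, a.value

        print(f"a={a.value:.1f}, b={b.value:.1f}")  # the two states converge

    Raising sync_every speeds up the run but lets the subdomains drift between exchanges; that is the precision/real-time compromise the abstract refers to.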

  12. Humans running in place on water at simulated reduced gravity.

    Directory of Open Access Journals (Sweden)

    Alberto E Minetti

    BACKGROUND: On Earth only a few legged species, such as water strider insects, some aquatic birds and lizards, can run on water. For most other species, including humans, this is precluded by body size and proportions, lack of appropriate appendages, and limited muscle power. However, if gravity is reduced to less than Earth's gravity, running on water should require less muscle power. Here we use a hydrodynamic model to predict the gravity levels at which humans should be able to run on water. We test these predictions in the laboratory using a reduced gravity simulator. METHODOLOGY/PRINCIPAL FINDINGS: We adapted a model equation, previously used by Glasheen and McMahon to explain the dynamics of the basilisk lizard, to predict the body mass, stride frequency and gravity necessary for a person to run on water. Progressive body-weight unloading of a person running in place in a wading pool confirmed the theoretical predictions that a person could run on water, at lunar (or lower) gravity levels, using relatively small rigid fins. Three-dimensional motion capture of reflective markers on major joint centers showed that humans, similarly to the basilisk lizard and the western grebe, keep the head-trunk segment at a nearly constant height, despite the high stride frequency and the intensive locomotor effort. Trunk stabilization at a nearly constant height differentiates running on water from other, more usual human gaits. CONCLUSIONS/SIGNIFICANCE: The results showed that a hydrodynamic model of lizards running on water can also be applied to humans, despite the enormous difference in body size and morphology.

  13. Development of a simulation model for compression ignition engine running with ignition improved blend

    Directory of Open Access Journals (Sweden)

    Sudeshkumar Ponnusamy Moranahalli

    2011-01-01

    The present work describes the thermodynamic and heat transfer models used in a computer program which simulates a diesel fuel and ignition improver blend, to predict the combustion and emission characteristics of a direct injection compression ignition engine fuelled with an ignition improver blend using a classical two-zone approach. One zone consists of pure air and is called the non-burning zone; the other zone consists of fuel and combustion products and is called the burning zone. The first law of thermodynamics and state equations are applied in each of the two zones to yield cylinder temperature and cylinder pressure histories. Using the two-zone combustion model, the combustion parameters and the chemical equilibrium composition were determined. To validate the model, an experimental investigation was conducted on a single cylinder direct injection diesel engine fuelled with 12% by volume of 2-ethoxyethanol blended with diesel fuel. Addition of the ignition improver blend to diesel fuel decreases the exhaust smoke and increases the thermal efficiency across the power outputs. It was observed that there is good agreement between simulated and experimental results, and the proposed model requires low computational time for a complete run.

  14. ATLAS simulation of boson plus jets processes in Run 2

    CERN Document Server

    The ATLAS collaboration

    2017-01-01

    This note describes the ATLAS simulation setup used to model the production of single electroweak bosons ($W$, $Z/\gamma^\ast$ and prompt $\gamma$) in association with jets in proton--proton collisions at centre-of-mass energies of 8 and 13 TeV. Several Monte Carlo generator predictions are compared in regions of phase space relevant for data analyses during LHC Run 2, or compared to unfolded data distributions measured in previous Run 1 or early Run 2 ATLAS analyses. Comparisons are made for regions of phase space with or without additional requirements on the heavy-flavour content of the accompanying jets, as well as for electroweak $Vjj$ production processes. Higher-order corrections and systematic uncertainties are also discussed.

  15. Simulating three dimensional wave run-up over breakwaters covered by antifer units

    Science.gov (United States)

    Najafi-Jilani, A.; Niri, M. Zakiri; Naderi, Nader

    2014-06-01

    The paper presents a numerical analysis of wave run-up over rubble-mound breakwaters covered by antifer units, using a technique integrating Computer-Aided Design (CAD) and Computational Fluid Dynamics (CFD) software. Direct application of the Navier-Stokes equations within the armour blocks is used to provide a more reliable approach to simulating wave run-up over breakwaters. A well-tested Reynolds-averaged Navier-Stokes (RANS) Volume of Fluid (VOF) code (Flow-3D) was adopted for the CFD computations. The computed results were compared with experimental data to check the validity of the model. Numerical results showed that the direct three dimensional (3D) simulation method can deliver accurate results for wave run-up over rubble mound breakwaters. The results also showed that the placement pattern of antifer units has a great impact on wave run-up: changing the placement pattern from regular to double pyramid reduced the wave run-up by approximately 30%. Analysis was done to investigate the influences of surface roughness, energy dissipation in the pores of the armour layer, and reduced wave run-up due to inflow into the armour and stone layers.

  16. A Novel Technique for Running the NASA Legacy Code LAPIN Synchronously With Simulations Developed Using Simulink

    Science.gov (United States)

    Vrnak, Daniel R.; Stueber, Thomas J.; Le, Dzu K.

    2012-01-01

    This report presents a method for running a dynamic legacy inlet simulation in concert with another dynamic simulation that uses a graphical interface. The legacy code, NASA's LArge Perturbation INlet (LAPIN) model, was written in FORTRAN 77 (The Portland Group, Lake Oswego, OR) to run in a command shell, similar to other applications that used the Microsoft Disk Operating System (MS-DOS) (Microsoft Corporation, Redmond, WA). Simulink (MathWorks, Natick, MA) is a dynamic simulation environment that runs on a modern graphical operating system. The product of this work has both simulations, LAPIN and Simulink, running synchronously on the same computer with periodic data exchanges. Implementing the method described in this paper avoided extensive changes to the legacy code and preserved its basic operating procedure. This paper presents a novel method that promotes inter-task data communication between the synchronously running processes.
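
    The general mechanism, lockstep co-simulation with periodic data exchange, can be sketched with two processes (here, threads) trading one value per time step over a localhost socket. The Python code below is a generic illustration of synchronous inter-task communication, not the actual LAPIN/Simulink interface.

        import socket
        import struct
        import threading

        def legacy_side(port, steps):
            with socket.create_connection(("127.0.0.1", port)) as s:
                state = 1.0
                for _ in range(steps):
                    s.sendall(struct.pack("d", state))       # send this step's output
                    (peer,) = struct.unpack("d", s.recv(8))  # block until the peer replies
                    state = 0.9 * state + 0.1 * peer         # placeholder dynamics

        def modern_side(server, steps):
            conn, _ = server.accept()
            with conn:
                state = 0.0
                for _ in range(steps):
                    (peer,) = struct.unpack("d", conn.recv(8))
                    state = 0.5 * (state + peer)             # placeholder dynamics
                    conn.sendall(struct.pack("d", state))
            print(f"final modern-side state: {state:.3f}")

        server = socket.create_server(("127.0.0.1", 0))
        t = threading.Thread(target=legacy_side, args=(server.getsockname()[1], 100))
        t.start()
        modern_side(server, 100)
        t.join()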

  17. How Many Times Should One Run a Computational Simulation?

    DEFF Research Database (Denmark)

    Seri, Raffaello; Secchi, Davide

    2017-01-01

    This chapter is an attempt to answer the question “how many runs of a computational simulation should one do,” and it gives an answer by means of statistical analysis. After defining the nature of the problem and which types of simulation are mostly affected by it, the article introduces statistical…
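
    The record's abstract is truncated, but the standard reasoning behind such an answer is worth recalling: if runs are independent and the mean of an output is to be estimated within a margin of error $E$ at confidence level $1-\alpha$, the central limit theorem gives the familiar sample-size bound (a textbook result, offered here as context rather than as the chapter's exact formula)

        $n \ge \left( \dfrac{z_{1-\alpha/2}\,\sigma}{E} \right)^{2},$

    where $\sigma$ is the standard deviation of the output across runs and $z_{1-\alpha/2}$ the standard normal quantile. For example, $\sigma = 10$, $E = 1$ and 95% confidence give $n \ge (1.96 \cdot 10)^2 \approx 385$ runs.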

  18. Polarization simulations in the RHIC run 15 lattice

    Energy Technology Data Exchange (ETDEWEB)

    Meot, F. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Huang, H. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Luo, Y. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Ranjbar, V. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Robert-Demolaize, G. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; White, S. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.

    2015-05-03

    RHIC polarized proton Run 15 uses a new acceleration ramp optics, compared to RHIC Run 13 and earlier runs, in relation to electron-lens beam-beam compensation developments. The new optics induces different strengths in the depolarizing snake resonance sequence, from injection to top energy. As a consequence, polarization transport along the new ramp has been investigated, based on spin tracking simulations. Sample results are reported and discussed.

  19. EnergyPlus Run Time Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation of simulation programs, which has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges in speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective, together with adequate computing platforms. Suggestions for software code and architecture changes to improve EnergyPlus run time, based on the code profiling results, are also discussed.

  1. Development of Fast-Running Simulation Methodology Using Neural Networks for Load Follow Operation

    International Nuclear Information System (INIS)

    Seong, Seung-Hwan; Park, Heui-Youn; Kim, Dong-Hoon; Suh, Yong-Suk; Hur, Seop; Koo, In-Soo; Lee, Un-Chul; Jang, Jin-Wook; Shin, Yong-Chul

    2002-01-01

    A new fast-running analytic model has been developed for analyzing load follow operation. The new model is based on neural network theory, which has the capability of modeling the input/output relationships of a nonlinear system. The new model is made up of two error back-propagation neural networks and procedures to calculate core parameters, such as the distribution and density of xenon, in a quasi-steady-state core during load follow operation. One neural network is designed to retrieve the axial offset of the power distribution, and the other the reactivity corresponding to a given core condition. The training data sets for the neural networks in the new model were generated with a three-dimensional nodal code together with measured data from the first-day test of load follow operation. Using the new model, the simulation results of a 5-day load follow test in a pressurized water reactor show good agreement between the simulation data and the actual measured data. The computing time required to simulate a load follow operation is comparable to that of a fast-running lumped model. Moreover, the new model does not require additional engineering factors to compensate for the difference between actual measurements and analysis results, because neural networks have an inherent capability to learn new situations.
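
    As a rough illustration of the two-network arrangement, the numpy sketch below trains two small multilayer perceptrons by plain error back-propagation, one mapping a core-condition vector to axial offset and the other to reactivity. The toy inputs, targets, and network sizes are assumptions for illustration only, not the authors' data or architecture.

        import numpy as np

        rng = np.random.default_rng(0)

        def make_mlp(n_in, n_hidden):
            return {"W1": rng.normal(0, 0.5, (n_in, n_hidden)), "b1": np.zeros(n_hidden),
                    "W2": rng.normal(0, 0.5, (n_hidden, 1)), "b2": np.zeros(1)}

        def forward(net, X):
            h = np.tanh(X @ net["W1"] + net["b1"])
            return h, h @ net["W2"] + net["b2"]

        def train(net, X, y, lr=0.05, epochs=2000):
            for _ in range(epochs):  # gradient descent on 0.5 * mean squared error
                h, out = forward(net, X)
                err = out - y
                dh = (err @ net["W2"].T) * (1 - h**2)  # tanh derivative
                net["W2"] -= lr * h.T @ err / len(X)
                net["b2"] -= lr * err.mean(0)
                net["W1"] -= lr * X.T @ dh / len(X)
                net["b1"] -= lr * dh.mean(0)

        # Toy "core condition" inputs (e.g., power, boron, rod position -- assumed).
        X = rng.uniform(-1, 1, (200, 3))
        axial_offset = (0.3 * X[:, 0] - 0.2 * X[:, 1])[:, None]  # synthetic target
        reactivity = (0.1 * X[:, 1] + 0.4 * X[:, 2])[:, None]    # synthetic target

        net_ao, net_rho = make_mlp(3, 8), make_mlp(3, 8)
        train(net_ao, X, axial_offset)  # one network for axial offset
        train(net_rho, X, reactivity)   # the other for reactivity
        print(forward(net_ao, X[:3])[1].ravel(), axial_offset[:3].ravel())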

  2. Aviation Safety Simulation Model

    Science.gov (United States)

    Houser, Scott; Yackovetsky, Robert (Technical Monitor)

    2001-01-01

    The Aviation Safety Simulation Model is a software tool that enables users to configure a terrain, a flight path, and an aircraft and simulate the aircraft's flight along the path. The simulation monitors the aircraft's proximity to terrain obstructions, and reports when the aircraft violates accepted minimum distances from an obstruction. This model design facilitates future enhancements to address other flight safety issues, particularly air and runway traffic scenarios. This report shows the user how to build a simulation scenario and run it. It also explains the model's output.

  3. Experience gained in running the EPRI MMS code with an in-house simulation language

    International Nuclear Information System (INIS)

    Weber, D.S.

    1987-01-01

    The EPRI Modular Modeling System (MMS) code represents a collection of component models and a steam/water properties package. This code has undergone extensive verification and validation testing. Currently, the code requires a commercially available simulation language to run. The Philadelphia Electric Company (PECO) has been modeling power plant systems for over sixteen years. As a result, an extensive number of models have been developed, and a great deal of experience has been gained using an in-house simulation language. The objective of this study was to explore the possibility of developing an MMS pre-processor which would allow the use of the MMS package with other simulation languages, such as the PECO in-house simulation language.

  4. Tsunami generation, propagation, and run-up with a high-order Boussinesq model

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Madsen, Per A.

    2009-01-01

    In this work we extend a high-order Boussinesq-type (finite difference) model, capable of simulating waves out to wavenumber times depth kh…, to landslide-induced tsunamis. The extension is straightforward, requiring only… The Boussinesq-type model is then used to simulate numerous tsunami-type events generated from submerged landslides, in both one and two horizontal dimensions. The results again compare well against previous experiments and/or numerical simulations. The new extension complements recently developed run-up…

  5. Simulating run-up on steep slopes with operational Boussinesq models; capabilities, spurious effects and instabilities

    Directory of Open Access Journals (Sweden)

    F. Løvholt

    2013-06-01

    Tsunamis induced by rock slides plunging into fjords constitute a severe threat to local coastal communities. The rock slide impact may give rise to highly non-linear waves in the near field, and because the wave lengths are relatively short, frequency dispersion comes into play. Fjord systems are rugged with steep slopes, and modeling non-linear dispersive waves in this environment with simultaneous run-up is demanding. We have run an operational Boussinesq-type TVD (total variation diminishing) model using different run-up formulations. Two different tests are considered: inundation on steep slopes and propagation in a trapezoidal channel. In addition, a set of Lagrangian models serves as reference models. Demanding test cases with solitary waves with amplitudes ranging from 0.1 to 0.5 were applied, and slopes ranged from 10 to 50°. Different run-up formulations yielded clearly different accuracy and stability, and only some provided accuracy similar to the reference models. The test cases revealed that the model was prone to instabilities for large non-linearity and fine resolution. Some of the instabilities were linked with false breaking during the first positive inundation, which was not observed for the reference models. None of the models were able to handle the bore forming during drawdown, however. The instabilities are linked to short-crested undulations on the grid scale, and appear at fine resolution during inundation. As a consequence, convergence was not always obtained. There is reason to believe that this instability may be a general problem for Boussinesq models in fjords.

  6. Fast Running Urban Dispersion Model for Radiological Dispersal Device (RDD) Releases: Model Description and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Gowardhan, Akshay [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Neuscamman, Stephanie [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Donetti, John [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Walker, Hoyt [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Belles, Rich [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Eme, Bill [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Homann, Steven [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Simpson, Matthew [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Nasstrom, John [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC)

    2017-05-24

    Aeolus is an efficient three-dimensional computational fluid dynamics code based on the finite volume method, developed for predicting transport and dispersion of contaminants in a complex urban area. It solves the time dependent incompressible Navier-Stokes equation on a regular Cartesian staggered grid using a fractional step method. It also solves a scalar transport equation for temperature, using the Boussinesq approximation. The model further includes a Lagrangian dispersion model for predicting the transport and dispersion of atmospheric contaminants. The model can be run in an efficient Reynolds Average Navier-Stokes (RANS) mode with a run time of several minutes, or in a more detailed Large Eddy Simulation (LES) mode with a run time of hours for a typical simulation. This report describes the model components, including details on the physics models used in the code, as well as several model validation efforts. Aeolus wind and dispersion predictions are compared to field data from the Joint Urban Field Trials 2003 conducted in Oklahoma City (Allwine et al 2004), including both continuous and instantaneous releases. Newly implemented Aeolus capabilities include a decay chain model and an explosive Radiological Dispersal Device (RDD) source term; these capabilities are described. Aeolus predictions using the buoyant explosive RDD source are validated against two experimental data sets: the Green Field explosive cloud rise experiments conducted in Israel (Sharon et al 2012) and the Full-Scale RDD Field Trials conducted in Canada (Green et al 2016).

  7. Integrated building and system simulation using run-time coupled distributed models

    NARCIS (Netherlands)

    Trcka, M.; Hensen, J.L.M.; Wijsman, A.J.T.M.

    2006-01-01

    In the modeling and simulation of real building and heating, ventilating, and air-conditioning (HVAC) system configurations, it is frequently found that certain parts can be represented in one simulation software package, while models for other parts of the configuration are only available in other software.

  8. Simulation model of a PWR power plant

    International Nuclear Information System (INIS)

    Larsen, N.

    1987-03-01

    A simulation model of a hypothetical PWR power plant is described. A large number of disturbances and failures in plant function can be simulated. The model is written as seven modules for DYSIM, the modular simulation system for continuous processes, and also serves as a user example of this system. The model runs in Fortran 77 on the IBM PC-AT. (author)

  9. Regional model simulations of New Zealand climate

    Science.gov (United States)

    Renwick, James A.; Katzfey, Jack J.; Nguyen, Kim C.; McGregor, John L.

    1998-03-01

    Simulation of New Zealand climate is examined through the use of a regional climate model nested within the output of the Commonwealth Scientific and Industrial Research Organisation nine-level general circulation model (GCM). R21 resolution GCM output is used to drive a regional model run at 125 km grid spacing over the Australasian region. The 125 km run is used in turn to drive a simulation at 50 km resolution over New Zealand. Simulations with a full seasonal cycle are performed for 10 model years. The focus is on the quality of the simulation of present-day climate, but results of a doubled-CO2 run are discussed briefly. Spatial patterns of mean simulated precipitation and surface temperatures improve markedly as horizontal resolution is increased, through the better resolution of the country's orography. However, increased horizontal resolution leads to a positive bias in precipitation. At 50 km resolution, simulated frequency distributions of daily maximum/minimum temperatures are statistically similar to those of observations at many stations, while frequency distributions of daily precipitation appear to be statistically different to those of observations at most stations. Modeled daily precipitation variability at 125 km resolution is considerably less than observed, but is comparable to, or exceeds, observed variability at 50 km resolution. The sensitivity of the simulated climate to changes in the specification of the land surface is discussed briefly. Spatial patterns of the frequency of extreme temperatures and precipitation are generally well modeled. Under a doubling of CO2, the frequency of precipitation extremes changes only slightly at most locations, while air frosts become virtually unknown except at high-elevation sites.

  10. Dynamics of the in-run in ski jumping: a simulation study.

    Science.gov (United States)

    Ettema, Gertjan J C; Bråten, Steinar; Bobbert, Maarten F

    2005-08-01

    A ski jumper tries to maintain an aerodynamic position in the in-run despite changing environmental forces. The purpose of this study was to analyze the mechanical demands on a ski jumper taking the in-run in a static position. We simulated the in-run in ski jumping with a 4-segment forward dynamic model (foot, leg, thigh, and upper body). The curved path of the in-run was used as a kinematic constraint, and drag, lift, and snow friction were incorporated. Drag and snow friction created a forward rotating moment that had to be counteracted by a plantar flexion moment, and caused the line of action of the normal force to pass anteriorly to the center of mass continuously. The normal force increased from 0.88 G on the first straight to 1.65 G in the curve. The required knee joint moment increased more because of an altered center of pressure. During the transition from the straight to the curve there was a rapid forward shift of the center of pressure under the foot, reflecting a short but high angular acceleration. Because unrealistically high rates of change of moment are required, an athlete cannot do this without changing body configuration, which reduces the required rate of moment change.
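
    The reported rise from 0.88 G on the straight to 1.65 G in the curve is consistent with simple quasi-static circular-motion reasoning: on a curved track segment the normal load is roughly N/mg = cos(theta) + v^2/(g r), with theta the local slope, v the skier's speed, and r the curve radius. The Python snippet below runs this back-of-the-envelope check with assumed values for speed, radius, and slope; it is not the paper's 4-segment model.

        import math

        g = 9.81  # m/s^2

        def normal_load_in_g(v_mps, radius_m, slope_deg):
            """Quasi-static normal force, in units of body weight, on a curved in-run."""
            return math.cos(math.radians(slope_deg)) + v_mps**2 / (g * radius_m)

        # Assumed illustrative numbers, not taken from the study:
        print(f"{normal_load_in_g(v_mps=25.0, radius_m=90.0, slope_deg=35.0):.2f} G")  # ~1.53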

  11. A VRLA battery simulation model

    International Nuclear Information System (INIS)

    Pascoe, Phillip E.; Anbuky, Adnan H.

    2004-01-01

    A valve regulated lead acid (VRLA) battery simulation model is an invaluable tool for the standby power system engineer. The obvious use for such a model is to allow the assessment of battery performance. This may involve determining the influence of cells suffering from state of health (SOH) degradation on the performance of the entire string, or running test scenarios to ascertain the most suitable battery size for the application. In addition, it enables the engineer to assess the performance of the overall power system, for example by running test scenarios to determine the benefits of various load shedding schemes. It also allows the assessment of other power system components, either for determining their requirements and/or their vulnerabilities. Finally, a VRLA battery simulation model is vital as a stand-alone tool for educational purposes. Despite the fundamentals of the VRLA battery having been established for over 100 years, its operating behaviour is often poorly understood. An accurate simulation model enables the engineer to gain a better understanding of VRLA battery behaviour. A system-level multipurpose VRLA battery simulation model is presented. It allows an arbitrary battery (capacity, SOH, number of cells and number of strings) to be simulated under arbitrary operating conditions (discharge rate, ambient temperature, end voltage, charge rate and initial state of charge). The model accurately reflects the VRLA battery discharge and recharge behaviour, including the complex start-of-discharge region known as the coup de fouet.

  12. runDM: Running couplings of Dark Matter to the Standard Model

    Science.gov (United States)

    D'Eramo, Francesco; Kavanagh, Bradley J.; Panci, Paolo

    2018-02-01

    runDM calculates the running of the couplings of Dark Matter (DM) to the Standard Model (SM) in simplified models with vector mediators. By specifying the mass of the mediator and the couplings of the mediator to SM fields at high energy, the code can calculate the couplings at low energy, taking into account the mixing of all dimension-6 operators. runDM can also extract the operator coefficients relevant for direct detection, namely low energy couplings to up, down and strange quarks and to protons and neutrons.

  13. Running climate model on a commercial cloud computing environment: A case study using Community Earth System Model (CESM) on Amazon AWS

    Science.gov (United States)

    Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock

    2017-01-01

    The suites of numerical models used for simulating the climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach, i.e., carrying out climate model simulations in a commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Services (AWS) EC2, the cloud computing environment of Amazon.com, Inc. StarCluster is used to create a virtual computing cluster on AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent for the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50% and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup becomes nearly unchanged.
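
    The scaling claim can be restated with standard strong-scaling bookkeeping: speedup S(p) = T(p0)/T(p) relative to the smallest core count p0, and parallel efficiency E = S / (p/p0). The Python snippet below computes both from wall-clock times; the timings are placeholders chosen to mimic the reported behaviour (near-linear to 64 cores, flat beyond), not the paper's measurements.

        # Speedup/efficiency bookkeeping for a strong-scaling test.
        times = {16: 10.0, 32: 5.4, 64: 2.9, 128: 2.7}  # hours/model-year (placeholders)

        base = min(times)  # smallest core count is the reference
        for cores in sorted(times):
            speedup = times[base] / times[cores]
            efficiency = speedup / (cores / base)
            print(f"{cores:4d} cores: speedup x{speedup:.2f}, efficiency {efficiency:.0%}")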

  14. Safety evaluation of the ITP filter/stripper test runs and quiet time runs using simulant solution

    International Nuclear Information System (INIS)

    Gupta, M.K.

    1993-10-01

    In-Tank Precipitation (ITP) is a process for removing radioactivity from the salt stored in the Waste Management Tank Farm at Savannah River. The process involves precipitation of cesium and potassium with sodium tetraphenylborate (STPB) and adsorption of strontium and actinides on insoluble sodium titanate (ST) particles. The purpose of this report is to provide the technical bases for the evaluation of the Unreviewed Safety Question for the ITP Filter/Stripper Test Runs and Quiet Time Runs Program. The primary objective of the filter/stripper test runs and quiet time runs program is to ensure that the facility will fulfill its design basis function prior to the introduction of radioactive feed. Risks associated with the program are identified and include hazards, both personnel and environmental, associated with handling the chemical simulants; the presence of flammable materials; and the potential for damage to the permanent ITP and Tank Farm facilities. The risks, potential accident scenarios, and safeguards either in place or planned are discussed at length.

  15. Desktop Modeling and Simulation: Parsimonious, yet Effective Discrete-Event Simulation Analysis

    Science.gov (United States)

    Bradley, James R.

    2012-01-01

    This paper evaluates how quickly students can be trained to construct useful discrete-event simulation models using Excel. The typical supply chain used by many large national retailers is described, and an Excel-based simulation model of it is constructed. The set of programming and simulation skills required for development of that model is then determined; we conclude that six hours of training are required to teach the skills to MBA students. The simulation presented here contains all the fundamental functionality of a simulation model, and so our result holds for any discrete-event simulation model. We argue, therefore, that industry workers with the same technical skill set as students who have completed one year in an MBA program can be quickly trained to construct simulation models. This result gives credence to the efficacy of Desktop Modeling and Simulation, whereby simulation analyses can be quickly developed, run, and analyzed with widely available software, namely Excel.
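
    To make "fundamental functionality of a simulation model" concrete, the sketch below is a minimal discrete-event simulation of a single-server queue, written in Python rather than Excel: a future-event list ordered by time, arrival and departure events, and simple statistics. It is a generic textbook illustration, not the paper's spreadsheet model.

        import heapq
        import random

        random.seed(1)
        ARRIVAL_MEAN, SERVICE_MEAN, HORIZON = 4.0, 3.0, 10_000.0

        events = [(random.expovariate(1 / ARRIVAL_MEAN), "arrival")]  # future-event list
        queue, busy, served, total_wait = [], False, 0, 0.0

        while events:
            t, kind = heapq.heappop(events)
            if t > HORIZON:
                break
            if kind == "arrival":
                heapq.heappush(events, (t + random.expovariate(1 / ARRIVAL_MEAN), "arrival"))
                if busy:
                    queue.append(t)   # server occupied: wait in line
                else:
                    busy = True       # server idle: start service immediately
                    heapq.heappush(events, (t + random.expovariate(1 / SERVICE_MEAN), "departure"))
                    served += 1
            else:  # departure: pull the next customer from the queue, if any
                if queue:
                    total_wait += t - queue.pop(0)
                    heapq.heappush(events, (t + random.expovariate(1 / SERVICE_MEAN), "departure"))
                    served += 1
                else:
                    busy = False

        print(f"customers served: {served}, mean queue wait: {total_wait / served:.2f}")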

  16. Semantic 3D City Model to Raster Generalisation for Water Run-Off Modelling

    Science.gov (United States)

    Verbree, E.; de Vries, M.; Gorte, B.; Oude Elberink, S.; Karimlou, G.

    2013-09-01

    Water run-off modelling applied within urban areas requires an appropriately detailed surface model represented by a raster height grid. Accurate simulations at this scale level have to take into account small but important water barriers and flow channels given by the large-scale map definitions of buildings, street infrastructure, and other terrain objects. Thus, these 3D features have to be rasterised such that each cell represents the height of the object class as well as possible given the cell size limitations. Small grid cells will result in realistic run-off modelling but with unacceptable computation times; larger grid cells with averaged height values will result in less realistic run-off modelling but fast computation times. This paper introduces a height grid generalisation approach in which the surface characteristics that most influence the water run-off flow are preserved. The first step is to create a detailed surface model (1:1.000), combining high-density laser data with a detailed topographic base map. The topographic map objects are triangulated to a set of TIN objects by taking into account the semantics of the different map object classes. These TIN objects are then rasterised to two grids with a 0.5 m cell spacing: one grid for the object class labels and the other for the TIN-interpolated height values. The next step is to generalise both raster grids to a lower resolution using a procedure that considers the class label of each cell and that of its neighbours. The results of this approach are tested and validated by water run-off model runs for different cell-spaced height grids in a pilot area in Amersfoort (the Netherlands). Two national datasets were used in this study: the large-scale Topographic Base map (BGT, map scale 1:1.000) and the national height model of the Netherlands, AHN2 (10 points per square meter on average). Comparison between the original AHN2 height grid and the semantically enriched and then generalised height grids shows…
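
    The class-aware generalisation step lends itself to a compact illustration: when aggregating blocks of fine cells into one coarse cell, a barrier class (e.g., a building or kerb) should win over terrain, and its height should be taken as the block maximum so the water barrier survives coarsening. The numpy sketch below encodes that rule under assumed class codes; it illustrates the idea, not the paper's exact procedure.

        import numpy as np

        BARRIER = 1  # assumed class code for barrier objects (buildings, kerbs, ...)
        TERRAIN = 0

        def generalise(height, label, f=2):
            """Coarsen a height/label grid by factor f, preserving barrier cells."""
            h, w = height.shape
            hb = height.reshape(h // f, f, w // f, f).swapaxes(1, 2).reshape(-1, f * f)
            lb = label.reshape(h // f, f, w // f, f).swapaxes(1, 2).reshape(-1, f * f)
            out_h = np.empty(hb.shape[0])
            out_l = np.empty(hb.shape[0], dtype=int)
            for k, (hs, ls) in enumerate(zip(hb, lb)):
                if (ls == BARRIER).any():               # barrier present: keep it,
                    out_h[k] = hs[ls == BARRIER].max()  # with its maximum height
                    out_l[k] = BARRIER
                else:                                   # pure terrain: average height
                    out_h[k] = hs.mean()
                    out_l[k] = TERRAIN
            return out_h.reshape(h // f, w // f), out_l.reshape(h // f, w // f)

        height = np.array([[1.0, 1.1, 5.0, 5.2],
                           [1.0, 1.2, 5.1, 5.3],
                           [1.0, 1.0, 1.0, 1.1],
                           [1.0, 1.0, 1.0, 1.0]])
        label = np.array([[0, 0, 1, 1],
                          [0, 0, 1, 1],
                          [0, 0, 0, 0],
                          [0, 0, 0, 0]])
        gh, gl = generalise(height, label)
        print(gh); print(gl)  # the building block keeps its 5.3 m roof height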

  17. 1-D blood flow modelling in a running human body.

    Science.gov (United States)

    Szabó, Viktor; Halász, Gábor

    2017-07-01

    In this paper an attempt was made to simulate blood flow in a mobile human arterial network, specifically, in a running human subject. In order to simulate the effect of motion, a previously published immobile 1-D model was modified by including an inertial force term in the momentum equation. To calculate the inertial force, gait analysis was performed at different levels of speed. Our results show that motion has a significant effect on the amplitudes of the blood pressure and flow rate, but the average values are not affected significantly.
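
    For orientation, 1-D arterial models are typically written in terms of the cross-sectional area $A$ and flow rate $Q$, and adding body motion amounts to a forcing term proportional to the axial acceleration of the vessel segment. A generic form of such a modified momentum equation (a standard 1-D formulation with an added inertial term, given here as an assumption about the structure rather than the paper's exact notation) is

        $\dfrac{\partial Q}{\partial t} + \dfrac{\partial}{\partial x}\left( \alpha \dfrac{Q^2}{A} \right) + \dfrac{A}{\rho} \dfrac{\partial p}{\partial x} = -\,\dfrac{f Q}{A} - A\, a_{\mathrm{ext}}(t),$

    where $\rho$ is the blood density, $\alpha$ the momentum correction factor, $f$ a friction coefficient, and $a_{\mathrm{ext}}(t)$ the axial acceleration obtained from gait analysis.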

  18. 3D Finite Element Simulation of Micro End-Milling by Considering the Effect of Tool Run-Out

    DEFF Research Database (Denmark)

    Davoudinejad, Ali; Tosello, Guido; Parenti, Paolo

    2017-01-01

    Understanding the micro milling phenomena involved in the process is critical, yet difficult to achieve through physical experiments. This study presents a 3D finite element modeling (3D FEM) approach for the micro end-milling process on Al6082-T6. The proposed model employs a Lagrangian explicit finite element formulation to perform coupled thermo-mechanical transient analyses. FE simulations were performed at different cutting conditions to obtain realistic numerical predictions of chip formation, temperature distribution, and cutting forces by considering the effect of tool run-out in the model. The predicted results of the model, involving the run-out influence, showed a good correlation with experimental chip formation and the signal shape of the cutting forces.

  19. Simulation-optimization framework for multi-site multi-season hybrid stochastic streamflow modeling

    Science.gov (United States)

    Srivastav, Roshan; Srinivasan, K.; Sudheer, K. P.

    2016-11-01

    A simulation-optimization (S-O) framework is developed for the hybrid stochastic modeling of multi-site multi-season streamflows. The multi-objective optimization model formulated is the driver, and the multi-site, multi-season hybrid matched-block bootstrap model (MHMABB) is the simulation engine within this framework. The multi-site multi-season simulation model is an extension of the existing single-site multi-season simulation model. A robust and efficient evolutionary-search-based technique, namely the non-dominated sorting genetic algorithm (NSGA-II), is employed as the solution technique for the multi-objective optimization within the S-O framework. The objective functions employed are related to the preservation of the multi-site critical deficit run sum, and the constraints introduced concern the hybrid model parameter space and the preservation of certain statistics (such as inter-annual dependence and/or skewness of aggregated annual flows). The efficacy of the proposed S-O framework is brought out through a case example from the Colorado River basin. The proposed multi-site multi-season model AMHMABB (whose parameters are obtained from the proposed S-O framework) preserves the temporal as well as the spatial statistics of the historical flows. The other multi-site deficit run characteristics, namely the number of runs, the maximum run length, the mean run sum and the mean run length, are also well preserved by the AMHMABB model. Overall, the proposed AMHMABB model shows better streamflow modeling performance than the simulation-based SMHMABB model, plausibly due to the significant role played by: (i) the objective functions related to the preservation of the multi-site critical deficit run sum; (ii) the huge hybrid model parameter space available for the evolutionary search; and (iii) the constraint on the preservation of the inter-annual dependence. Split-sample validation results indicate that the AMHMABB model is…

  1. How to help CERN to run more simulations

    CERN Multimedia

    The LHC@home team

    2016-01-01

    With LHC@home you can actively contribute to the computing capacity of the Laboratory!   You may think that CERN's large Data Centre and the Worldwide LHC Computing Grid have enough computing capacity for all the Laboratory’s users. However, given the massive amount of data coming from LHC experiments and other sources, additional computing resources are always needed, notably for simulations of physics events, or accelerator and detector upgrades. This is an area where you can help, by installing BOINC and running simulations from LHC@home on your office PC or laptop. These background simulations will not disturb your work, as BOINC can be configured to automatically stop computing when your PC is in use. As mentioned in earlier editions of the Bulletin (see here and here), contributions from LHC@home volunteers have played a major role in LHC beam simulation studies. The computing capacity they made available corresponds to about half the capacity of the CERN...

  2. Implementation of angular response function modeling in SPECT simulations with GATE

    International Nuclear Information System (INIS)

    Descourt, P; Visvikis, D; Carlier, T; Bardies, M; Du, Y; Song, X; Frey, E C; Tsui, B M W; Buvat, I

    2010-01-01

    Among Monte Carlo simulation codes in medical imaging, the GATE simulation platform is widely used today given its flexibility and accuracy, despite long run times, which in SPECT simulations are mostly spent in tracking photons through the collimators. In this work, a tabulated model of the collimator/detector response was implemented within the GATE framework to significantly reduce the simulation times in SPECT. This implementation uses the angular response function (ARF) model. The performance of the implemented ARF approach has been compared to standard SPECT GATE simulations in terms of the accuracy of the ARF tables, overall SPECT system performance, and run times. Considering the simulation of the Siemens Symbia T SPECT system using high-energy collimators, differences of less than 1% were measured between the ARF-based and the standard GATE-based simulations; at the same noise level in the projections, acceleration factors of up to 180 were obtained when simulating a planar 364 keV source seen with the same SPECT system. The ARF-based and the standard GATE simulation results also agreed very well when considering a four-head SPECT simulation of a realistic Jaszczak phantom filled with iodine-131, with a resulting acceleration factor of 100. In conclusion, the implementation of an ARF-based model of collimator/detector response for SPECT simulations within GATE significantly reduces the simulation run times without compromising accuracy.
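
    The speedup comes from replacing photon tracking through the collimator septa with a table lookup: for each photon reaching the collimator plane, the detection probability is read from a precomputed table indexed by incidence angle and energy. The Python sketch below is a generic illustration of that lookup under an assumed table layout, not GATE's implementation.

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical precomputed ARF table: detection probability as a function
        # of polar incidence angle (rows) and photon energy (columns).
        angles = np.linspace(0.0, 5.0, 50)   # degrees from the collimator axis
        energies = np.array([140.0, 364.0])  # keV
        arf_table = np.exp(-angles[:, None] / 1.5) * np.array([0.02, 0.01])

        def detection_probability(theta_deg, energy_kev):
            """Nearest-neighbour lookup in the tabulated ARF (illustrative)."""
            i = np.abs(angles - theta_deg).argmin()
            j = np.abs(energies - energy_kev).argmin()
            return arf_table[i, j]

        # Instead of tracking each photon through the septa, weight it by the ARF:
        thetas = np.abs(rng.normal(0.0, 2.0, 10_000))  # incidence angles (deg)
        weights = np.array([detection_probability(t, 364.0) for t in thetas])
        print(f"expected detected fraction: {weights.mean():.4f}")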

  3. Implementation of angular response function modeling in SPECT simulations with GATE

    Energy Technology Data Exchange (ETDEWEB)

    Descourt, P; Visvikis, D [INSERM, U650, LaTIM, IFR SclnBioS, Universite de Brest, CHU Brest, Brest, F-29200 (France); Carlier, T; Bardies, M [CRCNA INSERM U892, Nantes (France); Du, Y; Song, X; Frey, E C; Tsui, B M W [Department of Radiology, J Hopkins University, Baltimore, MD (United States); Buvat, I, E-mail: dimitris@univ-brest.f [IMNC-UMR 8165 CNRS Universites Paris 7 et Paris 11, Orsay (France)

    2010-05-07

    Among Monte Carlo simulation codes in medical imaging, the GATE simulation platform is widely used today given its flexibility and accuracy, despite long run times, which in SPECT simulations are mostly spent in tracking photons through the collimators. In this work, a tabulated model of the collimator/detector response was implemented within the GATE framework to significantly reduce the simulation times in SPECT. This implementation uses the angular response function (ARF) model. The performance of the implemented ARF approach has been compared to standard SPECT GATE simulations in terms of the ARF tables' accuracy, overall SPECT system performance and run times. Considering the simulation of the Siemens Symbia T SPECT system using high-energy collimators, differences of less than 1% were measured between the ARF-based and the standard GATE-based simulations, while considering the same noise level in the projections, acceleration factors of up to 180 were obtained when simulating a planar 364 keV source seen with the same SPECT system. The ARF-based and the standard GATE simulation results also agreed very well when considering a four-head SPECT simulation of a realistic Jaszczak phantom filled with iodine-131, with a resulting acceleration factor of 100. In conclusion, the implementation of an ARF-based model of collimator/detector response for SPECT simulations within GATE significantly reduces the simulation run times without compromising accuracy. (note)

  4. Giving students the run of sprinting models

    Science.gov (United States)

    Heck, André; Ellermeijer, Ton

    2009-11-01

    A biomechanical study of sprinting is an interesting task for students who have a background in mechanics and calculus. These students can work with real data and do practical investigations similar to the way sports scientists do research. Student research activities are viable when the students are familiar with tools to collect and work with data from sensors and video recordings and with modeling tools for comparing simulation and experimental results. This article describes a multipurpose system, named COACH, that offers a versatile integrated set of tools for learning, doing, and teaching mathematics and science in a computer-based inquiry approach. Automated tracking of reference points and correction of perspective distortion in videos, state-of-the-art algorithms for data smoothing and numerical differentiation, and graphical system dynamics based modeling are some of the built-in techniques that are suitable for motion analysis. Their implementation and their application in student activities involving models of running are discussed.
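
    Data smoothing and numerical differentiation of the kind mentioned above can be sketched with a Savitzky-Golay filter; the frame rate, window length and polynomial order below are illustrative choices, not those used by COACH:

```python
import numpy as np
from scipy.signal import savgol_filter

fps = 30.0                                   # assumed video frame rate
t = np.arange(0, 3, 1 / fps)
rng = np.random.default_rng(1)
x = 4.0 * t + 0.5 * t**2 + rng.normal(0, 0.01, t.size)   # noisy tracked position (m)

# Smooth the position and differentiate it in the same pass:
x_smooth = savgol_filter(x, window_length=11, polyorder=3)
v = savgol_filter(x, window_length=11, polyorder=3, deriv=1, delta=1 / fps)  # velocity (m/s)
a = savgol_filter(x, window_length=11, polyorder=3, deriv=2, delta=1 / fps)  # acceleration (m/s^2)
```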

  5. Defining epidemics in computer simulation models: How do definitions influence conclusions?

    Directory of Open Access Journals (Sweden)

    Carolyn Orbann

    2017-06-01

    Computer models have proven to be useful tools in studying epidemic disease in human populations. Such models are being used by a broader base of researchers, and it has become more important to ensure that descriptions of model construction and data analyses are clear and communicate important features of model structure. Papers describing computer models of infectious disease often lack a clear description of how the data are aggregated and whether or not non-epidemic runs are excluded from analyses. Given that there is no concrete quantitative definition of what constitutes an epidemic within the public health literature, each modeler must decide on a strategy for identifying epidemics during simulation runs. Here, an SEIR model was used to test the effects of how varying the cutoff for considering a run an epidemic changes potential interpretations of simulation outcomes. Varying the cutoff from 0% to 15% of the model population ever infected with the illness generated significant differences in numbers of dead and timing variables. These results are important for those who use models to form public health policy, in which questions of timing or implementation of interventions might be answered using findings from computer simulation models.
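
    The cutoff effect described here is easy to reproduce with a small stochastic SEIR experiment; the rates, population size and cutoffs below are illustrative, not those of the cited model:

```python
import numpy as np

def seir_attack_rate(n=1000, beta=0.3, sigma=0.2, gamma=0.1, i0=1, days=365, rng=None):
    """Chain-binomial SEIR; returns the fraction of the population ever infected."""
    rng = rng if rng is not None else np.random.default_rng()
    s, e, i = n - i0, 0, i0
    ever = i0
    for _ in range(days):
        new_e = rng.binomial(s, 1 - np.exp(-beta * i / n))   # S -> E
        new_i = rng.binomial(e, 1 - np.exp(-sigma))          # E -> I
        new_r = rng.binomial(i, 1 - np.exp(-gamma))          # I -> R
        s, e, i = s - new_e, e + new_e - new_i, i + new_i - new_r
        ever += new_i
    return ever / n

rng = np.random.default_rng(42)
attack = np.array([seir_attack_rate(rng=rng) for _ in range(500)])
for cutoff in (0.0, 0.05, 0.15):   # a run "counts" as an epidemic above the cutoff
    kept = attack[attack > cutoff]
    print(f"cutoff {cutoff:4.0%}: {kept.size:3d} epidemic runs, mean attack rate {kept.mean():.2f}")
```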

  6. Validation of the simulator neutronics model

    International Nuclear Information System (INIS)

    Gregory, M.V.

    1984-01-01

    The neutronics model in the SRP reactor training simulator computes the variation with time of the neutron population in the reactor core. The power output of a reactor is directly proportional to the neutron population, thus in a very real sense the neutronics model determines the response of the simulator. The geometrical complexity of the reactor control system in SRP reactors requires the neutronics model to provide a detailed, 3D representation of the reactor core. Existing simulator technology does not allow such a detailed representation to run in real-time in a minicomputer environment, thus an entirely different approach to the problem was required. A prompt jump method has been developed in answer to this need
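
    The prompt jump method referred to here eliminates the stiff prompt-neutron time scale of point kinetics by setting the prompt term to zero, so the neutron level responds instantaneously to reactivity while only the slow delayed-neutron precursor equations are integrated. A one-delayed-group sketch, with illustrative parameters:

```python
import numpy as np

beta, lam, Lam = 0.0065, 0.08, 1e-4   # delayed fraction, precursor decay (1/s), generation time (s)

def rho(t):
    return 0.001 if t > 1.0 else 0.0  # small step reactivity insertion at t = 1 s

# Prompt jump approximation: with Lam*dn/dt ~ 0, n(t) = Lam*lam*C(t) / (beta - rho(t)),
# and only dC/dt = (beta/Lam)*n - lam*C needs to be integrated.
dt, T = 1e-3, 10.0
C = beta / (Lam * lam)                # precursor level giving equilibrium n(0) = 1
n = 1.0
for t in np.arange(0.0, T, dt):
    n = Lam * lam * C / (beta - rho(t))
    C += dt * (beta / Lam * n - lam * C)
print("relative power at t = 10 s:", round(n, 3))  # prompt jump, then slow stable-period rise
```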

  7. 12 weeks of simulated barefoot running changes foot-strike patterns in female runners.

    Science.gov (United States)

    McCarthy, C; Fleming, N; Donne, B; Blanksby, B

    2014-05-01

    To investigate the effect of a transition program of simulated barefoot running (SBR) on running kinematics and foot-strike patterns, female recreational athletes (n=9, age 29 ± 3 yrs) without SBR experience gradually increased running distance in Vibram FiveFingers SBR footwear over 12 weeks. Matched controls (n=10, age 30 ± 4 yrs) continued running in standard footwear. A 3-D motion analysis of treadmill running at 12 km·h⁻¹ was performed by both groups, barefoot and shod, pre- and post-intervention. Post-intervention data indicated a more-forefoot strike pattern in the SBR group compared to controls, both running barefoot (P>0.05) and shod (P<0.05); following the transition, the SBR group demonstrated a more-forefoot strike pattern and "barefoot" kinematics, regardless of preferred footwear. © Georg Thieme Verlag KG Stuttgart · New York.

  8. A High-Speed Train Operation Plan Inspection Simulation Model

    Directory of Open Access Journals (Sweden)

    Yang Rui

    2018-01-01

    We developed a train operation simulation tool to inspect a train operation plan. Applying an improved Petri net, the train was regarded as a token and the lines and stations as places, in accordance with high-speed train operation characteristics and network function. Location changes and running-information transfer of the high-speed train were realized by customizing a variety of transitions. The model was built on the concept of component combination, considering random disturbances in the process of train running. The simulation framework can be generated quickly, and the system operation completed, according to the different test requirements and the required network data. We tested the simulation tool on the real-world Wuhan to Guangzhou high-speed line; a minimal sketch of the underlying token/place mechanics follows below. The results showed that the proposed model is workable, that the simulation results largely coincide with observed operations, and that the tool can not only test the feasibility of a high-speed train operation plan, but also serve as a support model for developing a simulation platform with more capabilities.
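
    The token/place/transition mapping described above reduces to a very small Petri-net kernel; the station names and one-token firing rule below are illustrative, not the paper's customized transitions:

```python
class PetriNet:
    """Minimal Petri net: places hold tokens (trains); a transition fires when
    all of its input places are occupied, moving one token downstream."""
    def __init__(self, marking):
        self.marking = dict(marking)              # place -> token count
        self.transitions = []                     # (input places, output places)

    def add_transition(self, inputs, outputs):
        self.transitions.append((inputs, outputs))

    def step(self):
        for inputs, outputs in self.transitions:
            if all(self.marking.get(p, 0) > 0 for p in inputs):
                for p in inputs:
                    self.marking[p] -= 1
                for p in outputs:
                    self.marking[p] = self.marking.get(p, 0) + 1
                return True                       # fire at most one transition per step
        return False

# A train (token) moving station -> line section -> station (places):
net = PetriNet({"Wuhan": 1, "section_1": 0, "Guangzhou": 0})
net.add_transition(["Wuhan"], ["section_1"])      # departure
net.add_transition(["section_1"], ["Guangzhou"])  # arrival
while net.step():
    print(net.marking)
```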

  9. Effects of running with backpack loads during simulated gravitational transitions: Improvements in postural control

    Science.gov (United States)

    Brewer, Jeffrey David

    The National Aeronautics and Space Administration is planning for long-duration manned missions to the Moon and Mars. For feasible long-duration space travel, improvements in exercise countermeasures are necessary to maintain cardiovascular fitness, bone mass throughout the body and the ability to perform coordinated movements in a constant gravitational environment that is six orders of magnitude higher than the "near weightlessness" condition experienced during transit to and/or orbit of the Moon, Mars, and Earth. In such gravitational transitions feedback and feedforward postural control strategies must be recalibrated to ensure optimal locomotion performance. In order to investigate methods of improving postural control adaptation during these gravitational transitions, a treadmill based precision stepping task was developed to reveal changes in neuromuscular control of locomotion following both simulated partial gravity exposure and post-simulation exercise countermeasures designed to speed lower extremity impedance adjustment mechanisms. The exercise countermeasures included a short period of running with or without backpack loads immediately after partial gravity running. A novel suspension type partial gravity simulator incorporating spring balancers and a motor-driven treadmill was developed to facilitate body weight off loading and various gait patterns in both simulated partial and full gravitational environments. Studies have provided evidence that suggests: the environmental simulator constructed for this thesis effort does induce locomotor adaptations following partial gravity running; the precision stepping task may be a helpful test for illuminating these adaptations; and musculoskeletal loading in the form of running with or without backpack loads may improve the locomotor adaptation process.

  10. A new synoptic scale resolving global climate simulation using the Community Earth System Model

    Science.gov (United States)

    Small, R. Justin; Bacmeister, Julio; Bailey, David; Baker, Allison; Bishop, Stuart; Bryan, Frank; Caron, Julie; Dennis, John; Gent, Peter; Hsu, Hsiao-ming; Jochum, Markus; Lawrence, David; Muñoz, Ernesto; diNezio, Pedro; Scheitlin, Tim; Tomas, Robert; Tribbia, Joseph; Tseng, Yu-heng; Vertenstein, Mariana

    2014-12-01

    High-resolution global climate modeling holds the promise of capturing planetary-scale climate modes and small-scale (regional and sometimes extreme) features simultaneously, including their mutual interaction. This paper discusses a new state-of-the-art high-resolution Community Earth System Model (CESM) simulation that was performed with these goals in mind. The atmospheric component was at 0.25° grid spacing, and ocean component at 0.1°. One hundred years of "present-day" simulation were completed. Major results were that annual mean sea surface temperature (SST) in the equatorial Pacific and El-Niño Southern Oscillation variability were well simulated compared to standard resolution models. Tropical and southern Atlantic SST also had much reduced bias compared to previous versions of the model. In addition, the high resolution of the model enabled small-scale features of the climate system to be represented, such as air-sea interaction over ocean frontal zones, mesoscale systems generated by the Rockies, and Tropical Cyclones. Associated single component runs and standard resolution coupled runs are used to help attribute the strengths and weaknesses of the fully coupled run. The high-resolution run employed 23,404 cores, costing 250 thousand processor-hours per simulated year and made about two simulated years per day on the NCAR-Wyoming supercomputer "Yellowstone."

  11. Do downscaled general circulation models reliably simulate historical climatic conditions?

    Science.gov (United States)

    Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight

    2018-01-01

    The accuracy of statistically downscaled (SD) general circulation model (GCM) simulations of monthly surface climate for historical conditions (1950–2005) was assessed for the conterminous United States (CONUS). The SD monthly precipitation (PPT) and temperature (TAVE) from 95 GCMs from phases 3 and 5 of the Coupled Model Intercomparison Project (CMIP3 and CMIP5) were used as inputs to a monthly water balance model (MWBM). Distributions of MWBM input (PPT and TAVE) and output [runoff (RUN)] variables derived from gridded station data (GSD) and historical SD climate were compared using the Kolmogorov–Smirnov (KS) test. For all three variables considered, the KS test results showed that variables simulated using CMIP5 generally are more reliable than those derived from CMIP3, likely due to improvements in PPT simulations. At most locations across the CONUS, the largest differences between GSD and SD PPT and RUN occurred in the lowest part of the distributions (i.e., low-flow RUN and low-magnitude PPT). Results indicate that for the majority of the CONUS, there are downscaled GCMs that can reliably simulate historical climatic conditions. But, in some geographic locations, none of the SD GCMs replicated historical conditions for two of the three variables (PPT and RUN) based on the KS test, with a significance level of 0.05. In these locations, improved GCM simulations of PPT are needed to more reliably estimate components of the hydrologic cycle. Simple metrics and statistical tests, such as those described here, can provide an initial set of criteria to help simplify GCM selection.
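
    The KS comparison used here amounts to a two-sample test between the gridded-station and downscaled distributions at each location; with SciPy (the arrays below are placeholders for GSD and SD monthly series):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
ppt_gsd = rng.gamma(2.0, 40.0, size=672)   # placeholder: observed monthly PPT, 1950-2005
ppt_sd  = rng.gamma(2.2, 38.0, size=672)   # placeholder: downscaled-GCM monthly PPT

stat, p = ks_2samp(ppt_gsd, ppt_sd)
reliable = p >= 0.05   # fail to reject -> the SD GCM reproduces the observed distribution
print(f"KS D = {stat:.3f}, p = {p:.3f}, reliable: {reliable}")
```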

  12. Running scenarios using the Waste Tank Safety and Operations Hanford Site model

    International Nuclear Information System (INIS)

    Stahlman, E.J.

    1995-11-01

    Management of the Waste Tank Safety and Operations (WTS&O) at Hanford is a large and complex task encompassing 177 tanks and having a budget of over $500 million per year. To assist managers in this task, a model based on system dynamics was developed by the Massachusetts Institute of Technology. The model simulates the WTS&O at the Hanford Tank Farms by modeling the planning, control, and flow of work conducted by Managers, Engineers, and Crafts. The model is described in Policy Analysis of Hanford Tank Farm Operations with System Dynamics Approach (Kwak 1995b) and Management Simulator for Hanford Tank Farm Operations (Kwak 1995a). This document provides guidance for users of the model in developing, running, and analyzing results of management scenarios. The reader is assumed to have an understanding of the model and its operation. Important parameters and variables in the model are described, and two scenarios are formulated as examples

  13. Weather model performance on extreme rainfall event simulations over the Western Iberian Peninsula

    Science.gov (United States)

    Pereira, S. C.; Carvalho, A. C.; Ferreira, J.; Nunes, J. P.; Kaiser, J. J.; Rocha, A.

    2012-08-01

    This study evaluates the performance of the WRF-ARW numerical weather model in simulating the spatial and temporal patterns of an extreme rainfall period over a complex orographic region in north-central Portugal. The analysis was performed for December 2009, during the Portugal mainland rainy season. The periods of heavy to extremely heavy rainfall were due to several low-pressure systems associated with frontal surfaces. The total amount of precipitation for December exceeded, on average, the climatological mean for the 1971-2000 period by +89 mm, varying from 190 mm (southern part of the country) to 1175 mm (northern part of the country). Three model runs were conducted to assess possible improvements in model performance: (1) the WRF-ARW is forced with the initial fields from a global domain model (RunRef); (2) data assimilation for a specific location (RunObsN) is included; (3) nudging is used to adjust the analysis field (RunGridN). Model performance was evaluated against an observed hourly precipitation dataset of 15 rainfall stations using several statistical parameters. The WRF-ARW model reproduced the temporal rainfall patterns well but tended to overestimate precipitation amounts. The RunGridN simulation provided the best results, but model performance of the other two runs was good too, so that the selected extreme rainfall episode was successfully reproduced.

  14. Real-time model for simulating a tracked vehicle on deformable soils

    Directory of Open Access Journals (Sweden)

    Martin Meywerk

    2016-05-01

    Simulation is one possibility to gain insight into the behaviour of tracked vehicles on deformable soils. Many publications exist on this topic, but most of the simulations described there cannot be run in real time. The ability to run a simulation in real time is necessary for driving simulators. This article describes an approach for real-time simulation of a tracked vehicle on deformable soils. The components of the real-time model are as follows: a conventional wheeled vehicle simulated in the Multi Body System software TRUCKSim, a geometric description of the landscape, a track model, and an interaction model based on Bekker theory and Janosi–Hanamoto between track and deformable soil, on one hand, and between track and vehicle wheels, on the other. Landscape, track model, soil model and the interaction are implemented in MATLAB/Simulink. The details of the real-time model are described in this article; a detailed description of the Multi Body System part is omitted. Simulations with the real-time model are compared to measurements and to a detailed Multi Body System–finite element method model of a tracked vehicle. An application of the real-time model in a driving simulator is presented, in which 13 drivers assess the comfort of a passive and an active suspension of a tracked vehicle.

  15. Development of a fast running accident analysis computer program for use in a simulator

    International Nuclear Information System (INIS)

    Cacciabue, P.C.

    1985-01-01

    This paper describes how a reactor safety nuclear computer program can be modified and improved with the aim of reaching a very fast running tool to be used as a physical model in a plant simulator, without penalizing the accuracy of results. It also discusses some ideas on how the physical theoretical model can be combined with a driving statistical tool for the build-up of the entire package of software to be implemented in the simulator for risk and reliability analysis. The approach to the problem, although applied to a specific computer program, can be considered quite general if an already existing and well-tested code is being used for the purpose. The computer program considered is ALMOD, originally developed for the analysis of the thermohydraulic and neutronic behaviour of the reactor core, primary circuit and steam generator during operational and special transients. (author)

  16. Evolution Model and Simulation of Profit Model of Agricultural Products Logistics Financing

    Science.gov (United States)

    Yang, Bo; Wu, Yan

    2018-03-01

    The agricultural products logistics financial warehousing business mainly involves three parties: agricultural production and processing enterprises, third-party logistics enterprises, and financial institutions. To enable the three parties to achieve a win-win situation, the article first gives the replication dynamics and evolutionary stability strategies of the three parties' business participation. It then uses the NetLogo simulation platform and the Multi-Agent modeling and simulation method to establish an evolutionary game simulation model, runs the model under different revenue parameters, and finally analyzes the simulation results. Mutually beneficial participation of all three parties in the logistics financing warehouse business promotes the smooth flow of agricultural products logistics.
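
    Replicator dynamics of the kind simulated here can be sketched for a single party's binary participation choice; the payoff functions below are illustrative assumptions, not the paper's revenue parameters:

```python
import numpy as np

def replicator(x, payoff_in, payoff_out, dt=0.01, steps=5000):
    """Integrate dx/dt = x(1-x)(payoff_in(x) - payoff_out(x)), where x is the
    share of (say) logistics enterprises choosing to participate."""
    for _ in range(steps):
        x += dt * x * (1 - x) * (payoff_in(x) - payoff_out(x))
    return x

# Illustrative payoffs: participating pays off once enough partners also join.
x_star = replicator(0.4,
                    payoff_in=lambda x: 2.0 * x + 0.5,
                    payoff_out=lambda x: 1.0)
print("evolutionarily stable participation share:", round(x_star, 3))
```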

  17. CMB constraints on running non-Gaussianity

    OpenAIRE

    Oppizzi, Filippo; Liguori, Michele; Renzi, Alessandro; Arroja, Frederico; Bartolo, Nicola

    2017-01-01

    We develop a complete set of tools for CMB forecasting, simulation and estimation of primordial running bispectra, arising from a variety of curvaton and single-field (DBI) models of Inflation. We validate our pipeline using mock CMB running non-Gaussianity realizations and test it on real data by obtaining experimental constraints on the $f_{\rm NL}$ running spectral index, $n_{\rm NG}$, using WMAP 9-year data. Our final bounds (68% C.L.) read $-0.3 < n_{\rm NG}$…

  18. Latin hypercube sampling and geostatistical modeling of spatial uncertainty in a spatially explicit forest landscape model simulation

    Science.gov (United States)

    Chonggang Xu; Hong S. He; Yuanman Hu; Yu Chang; Xiuzhen Li; Rencang Bu

    2005-01-01

    Geostatistical stochastic simulation is usually combined with the Monte Carlo method to quantify the uncertainty in spatial model simulations. However, due to the relatively long running time of spatially explicit forest models as a result of their complexity, it is often infeasible to generate hundreds or thousands of Monte Carlo simulations. Thus, it is of great...
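
    Latin hypercube sampling itself is compact to implement: each dimension is split into n equal-probability strata, one draw is taken per stratum, and the strata are randomly paired across dimensions. A sketch:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Return an (n_samples, n_dims) LHS design on the unit hypercube."""
    rng = rng if rng is not None else np.random.default_rng()
    jitter = rng.random((n_samples, n_dims))                  # position within each stratum
    strata = np.array([rng.permutation(n_samples) for _ in range(n_dims)]).T
    return (strata + jitter) / n_samples

design = latin_hypercube(10, 2, np.random.default_rng(7))
# Each column has exactly one point in each of its 10 equal-width bins:
assert all(len(set((design[:, j] * 10).astype(int))) == 10 for j in range(2))
```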

  19. Running-mass inflation model and WMAP

    International Nuclear Information System (INIS)

    Covi, Laura; Lyth, David H.; Melchiorri, Alessandro; Odman, Carolina J.

    2004-01-01

    We consider the observational constraints on the running-mass inflationary model, and, in particular, on the scale dependence of the spectral index, from the new cosmic microwave background (CMB) anisotropy measurements performed by WMAP and from new clustering data from the SLOAN survey. We find that the data strongly constraints a significant positive scale dependence of n, and we translate the analysis into bounds on the physical parameters of the inflaton potential. Looking deeper into specific types of interaction (gauge and Yukawa) we find that the parameter space is significantly constrained by the new data, but that the running-mass model remains viable

  20. Using Discrete Event Simulation for Programming Model Exploration at Extreme-Scale: Macroscale Components for the Structural Simulation Toolkit (SST).

    Energy Technology Data Exchange (ETDEWEB)

    Wilke, Jeremiah J [Sandia National Laboratories (SNL-CA), Livermore, CA (United States); Kenny, Joseph P. [Sandia National Laboratories (SNL-CA), Livermore, CA (United States)

    2015-02-01

    Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, design can first iterate through a simulator. This is particularly useful when test beds cannot be used, i.e. to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the structural simulation toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics like call graphs to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.
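
    At its core, a discrete event engine of this kind is an event queue ordered by virtual time; a minimal sketch (not SST's actual API):

```python
import heapq

class Simulator:
    """Minimal discrete event engine: events are (time, seq, action) tuples."""
    def __init__(self):
        self.now, self._seq, self._queue = 0.0, 0, []

    def schedule(self, delay, action):
        self._seq += 1                            # tie-breaker keeps ordering stable
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))

    def run(self):
        while self._queue:
            self.now, _, action = heapq.heappop(self._queue)
            action()

sim = Simulator()

def hop(node, remaining):
    print(f"t={sim.now:.1f}: message at node {node}")
    if remaining:
        sim.schedule(1.5, lambda: hop(node + 1, remaining - 1))  # modeled link delay

sim.schedule(0.0, lambda: hop(0, 3))
sim.run()
```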

  1. Run charts revisited: a simulation study of run chart rules for detection of non-random variation in health care processes.

    Science.gov (United States)

    Anhøj, Jacob; Olesen, Anne Vingaard

    2014-01-01

    A run chart is a line graph of a measure plotted over time with the median as a horizontal line. The main purpose of the run chart is to identify process improvement or degradation, which may be detected by statistical tests for non-random patterns in the data sequence. We studied the sensitivity to shifts and linear drifts in simulated processes using the shift, crossings and trend rules for detecting non-random variation in run charts. The shift and crossings rules are effective in detecting shifts and drifts in process centre over time while keeping the false signal rate constant around 5% and independent of the number of data points in the chart. The trend rule is virtually useless for detection of linear drift over time, the purpose it was intended for.
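
    The shift and crossings rules are simple to state in code. In the sketch below the signal limits are illustrative simplifications (the published rules derive them from the number of useful observations):

```python
import numpy as np

def run_chart_signals(y, shift_limit=8):
    """Shift rule: a run of >= shift_limit consecutive points on the same side
    of the median. Crossings rule: unusually few crossings of the median.
    Points exactly on the median are ignored, as is usual for run charts."""
    y = np.asarray(y, dtype=float)
    sides = np.sign(y - np.median(y))
    sides = sides[sides != 0]

    longest, current = 1, 1
    for a, b in zip(sides, sides[1:]):
        current = current + 1 if a == b else 1
        longest = max(longest, current)
    crossings = int(np.sum(sides[1:] != sides[:-1]))

    too_few = crossings < (len(sides) - 1) / 3    # illustrative limit only
    return {"shift_signal": longest >= shift_limit,
            "crossings_signal": too_few,
            "longest_run": longest, "crossings": crossings}

print(run_chart_signals([3, 4, 2, 5, 6, 7, 8, 9, 9, 10, 11, 12]))
```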

  2. Modelling Energy Loss Mechanisms and a Determination of the Electron Energy Scale for the CDF Run II W Mass Measurement

    Energy Technology Data Exchange (ETDEWEB)

    Riddick, Thomas [Univ. College London, Bloomsbury (United Kingdom)

    2012-06-15

    The calibration of the calorimeter energy scale is vital to measuring the mass of the W boson at CDF Run II. For the second measurement of the W boson mass at CDF Run II, two independent simulations were developed. This thesis presents a detailed description of the modification and validation of Bremsstrahlung and pair production modelling in one of these simulations, UCL Fast Simulation, comparing to both geant4 and real data where appropriate. The total systematic uncertainty on the measurement of the W boson mass in the W → eν_e channel from residual inaccuracies in Bremsstrahlung modelling is estimated as 6.2 ± 3.2 MeV/c², and the total systematic uncertainty from residual inaccuracies in pair production modelling is estimated as 2.8 ± 2.7 MeV/c². Two independent methods are used to calibrate the calorimeter energy scale in UCL Fast Simulation; the results of these two methods are compared to produce a measurement of the Z boson mass as a cross-check on the accuracy of the simulation.

  3. Mathematical model simulation of a diesel spill in the Potomac River

    International Nuclear Information System (INIS)

    Feng, S.S.; Nicolette, J.P.; Markarian, R.K.

    1995-01-01

    A mathematical modeling technique was used to simulate the transport and fate of approximately 400,000 gallons of spilled diesel fuel and its impact on the aquatic biota in the Potomac River and Sugarland Run. Sugarland Run is a tributary about 21 miles upstream from Washington, DC. The mass balance model predicted the dynamic (spatial and temporal) distribution of spilled oil. The distributions were presented in terms of surface oil slick and sheen, dissolved and undissolved total petroleum hydrocarbons (TPH) in the water surface, water column, river sediments, shoreline and atmosphere. The processes simulated included advective movement, dispersion, dissolution, evaporation, volatilization, sedimentation, shoreline deposition, biodegradation, and removal of oil from cleanup operations. The model predicted that the spill resulted in a water column dissolved TPH concentration range of 0.05 to 18.6 ppm in Sugarland Run. The spilled oil traveled 10 miles along Sugarland Run before it reached the Potomac River. At the Potomac River, the water column TPH concentration was predicted to have decreased to the range of 0.0 to 0.43 ppm. These levels were consistent with field samples. To assess biological injury, the model used 4, 8, 24, 48, and 96-hr LC values in computing the fish injury caused by the fuel oil. The model used the maximum running average of dissolved TPH and exposure time to predict levels of fish mortality in the range of 38 to 40% in Sugarland Run. This prediction was consistent with field fisheries surveys. The model also computed the amount of spilled oil that adsorbed and settled into the river sediments

  4. Massively parallel Monte Carlo. Experiences running nuclear simulations on a large condor cluster

    International Nuclear Information System (INIS)

    Tickner, James; O'Dwyer, Joel; Roach, Greg; Uher, Josef; Hitchen, Greg

    2010-01-01

    The trivially parallel nature of Monte Carlo (MC) simulations makes them ideally suited for running on a distributed, heterogeneous computing environment. We report on the setup and operation of a large, cycle-harvesting Condor computer cluster, used to run MC simulations of nuclear instruments ('jobs') on approximately 4,500 desktop PCs. Successful operation must balance the competing goals of maximizing the availability of machines for running jobs whilst minimizing the impact on users' PC performance. This requires classification of jobs according to anticipated run-time and priority and careful optimization of the parameters used to control job allocation to host machines. To maximize use of a large Condor cluster, we have created a powerful suite of tools to handle job submission and analysis, as the manual creation, submission and evaluation of large numbers (hundreds to thousands) of jobs would be too arduous. We describe some of the key aspects of this suite, which has been interfaced to the well-known MCNP and EGSnrc nuclear codes and our in-house PHOTON optical MC code. We report on our practical experiences of operating our Condor cluster and present examples of several large-scale instrument design problems that have been solved using this tool. (author)

  5. A rolling constraint reproduces ground reaction forces and moments in dynamic simulations of walking, running, and crouch gait.

    Science.gov (United States)

    Hamner, Samuel R; Seth, Ajay; Steele, Katherine M; Delp, Scott L

    2013-06-21

    Recent advances in computational technology have dramatically increased the use of muscle-driven simulation to study accelerations produced by muscles during gait. Accelerations computed from muscle-driven simulations are sensitive to the model used to represent contact between the foot and ground. A foot-ground contact model must be able to calculate ground reaction forces and moments that are consistent with experimentally measured ground reaction forces and moments. We show here that a rolling constraint can model foot-ground contact and reproduce measured ground reaction forces and moments in an induced acceleration analysis of muscle-driven simulations of walking, running, and crouch gait. We also illustrate that a point constraint and a weld constraint used to model foot-ground contact in previous studies produce inaccurate reaction moments and lead to contradictory interpretations of muscle function. To enable others to use and test these different constraint types (i.e., rolling, point, and weld constraints) we have included them as part of an induced acceleration analysis in OpenSim, a freely-available biomechanics simulation package. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Modeling the Frequency of Cyclists’ Red-Light Running Behavior Using Bayesian PG Model and PLN Model

    Directory of Open Access Journals (Sweden)

    Yao Wu

    2016-01-01

    Red-light running behaviors of bicycles at signalized intersections lead to a large number of traffic conflicts and high collision potential. The primary objective of this study is to model cyclists' red-light running frequency within the framework of Bayesian statistics. Data were collected at twenty-five approaches at seventeen signalized intersections. The Poisson-gamma (PG) and Poisson-lognormal (PLN) models were developed and compared. The models were validated using Bayesian p values based on posterior predictive checking indicators. It was found that the two models fit the observed cyclists' red-light running frequency well. Furthermore, the PLN model outperformed the PG model. The model estimation results showed that the amount of cyclists' red-light running is significantly influenced by bicycle flow, conflicting traffic flow, pedestrian signal type, vehicle speed, and e-bike rate. The validation result demonstrated the reliability of the PLN model. The research results can help transportation professionals predict the expected amount of cyclists' red-light running and develop effective guidelines or policies to reduce the red-light running frequency of bicycles at signalized intersections.
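
    In its simplest intercept-only form, the Poisson-gamma structure compared here is conjugate, so a small sketch needs no MCMC; the counts and prior below are illustrative:

```python
import numpy as np

# Observed red-light-running counts per observation session at one approach:
counts = np.array([4, 7, 2, 5, 9, 3, 6])

# Gamma(a0, b0) prior on the Poisson rate; the posterior is Gamma(a0 + sum, b0 + n).
a0, b0 = 1.0, 0.1                                    # weakly informative prior (assumed)
a_post, b_post = a0 + counts.sum(), b0 + counts.size

rng = np.random.default_rng(3)
lam = rng.gamma(a_post, 1.0 / b_post, size=10_000)   # posterior draws of the rate
y_rep = rng.poisson(lam)                             # posterior predictive draws

# A crude posterior predictive check in the spirit of a Bayesian p value:
p_val = np.mean(y_rep >= counts.max())
print(f"posterior mean rate: {lam.mean():.2f}, P(y_rep >= observed max): {p_val:.3f}")
```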

  7. Wave Run-Up on Offshore Wind Turbines

    DEFF Research Database (Denmark)

    Ramirez, Jorge Robert Rodriguez

    This study has investigated the interaction of water waves with a circular structure, known as the wave run-up phenomenon. This run-up phenomenon has been simulated by the use of computational fluid dynamic models. The numerical model (NS3) used in this study has been verified rigorously against a number of cases. Regular and freak waves have been generated in a numerical wave tank with a gentle slope in order to address the study of the wave run-up on a circular cylinder. Based on appropriate analysis, the collected data has been analysed with the stream function theory to obtain the relevant parameters for the use of the predicted wave run-up formula. An analytical approach has been pursued and solved for individual waves; maximum run-up and 2% run-up were studied. From the computational side it can be said that it is inexpensive. Furthermore, the comparison of the current numerical model…

  8. Wave Run-Up on Offshore Wind Turbines

    DEFF Research Database (Denmark)

    Ramirez, Jorge Robert Rodriguez

    This study has investigated the interaction of water waves with a circular structure, known as the wave run-up phenomenon. This run-up phenomenon has been simulated by the use of computational fluid dynamic models. The numerical model (NS3) used in this study has been verified rigorously against a number of cases. Regular and freak waves have been generated in a numerical wave tank with a gentle slope in order to address the study of the wave run-up on a circular cylinder. Based on appropriate analysis, the collected data has been analysed with the stream function theory to obtain the relevant parameters for the use of the predicted wave run-up formula. An analytical approach has been pursued and solved for individual waves; maximum run-up and 2% run-up were studied. From the computational side it can be said that it is inexpensive. Furthermore, the comparison of the current numerical model…

  9. Minimization of required model runs in the Random Mixing approach to inverse groundwater flow and transport modeling

    Science.gov (United States)

    Hoerning, Sebastian; Bardossy, Andras; du Plessis, Jaco

    2017-04-01

    Most geostatistical inverse groundwater flow and transport modelling approaches utilize a numerical solver to minimize the discrepancy between observed and simulated hydraulic heads and/or hydraulic concentration values. The optimization procedure often requires many model runs, which for complex models lead to long run times. Random Mixing is a promising new geostatistical technique for inverse modelling. The method is an extension of the gradual deformation approach. It works by finding a field which preserves the covariance structure and maintains observed hydraulic conductivities. This field is perturbed by mixing it with new fields that fulfill the homogeneous conditions. This mixing is expressed as an optimization problem which aims to minimize the difference between the observed and simulated hydraulic heads and/or concentration values. To preserve the spatial structure, the mixing weights must lie on the unit hyper-sphere. We present a modification to the Random Mixing algorithm which significantly reduces the number of model runs required. The approach involves taking n equally spaced points on the unit circle as weights for mixing conditional random fields. Each of these mixtures provides a solution to the forward model at the conditioning locations. For each of the locations the solutions are then interpolated around the circle to provide solutions for additional mixing weights at very low computational cost. The interpolated solutions are used to search for a mixture which maximally reduces the objective function. This is in contrast to other approaches which evaluate the objective function for the n mixtures and then interpolate the obtained values. Keeping the mixture on the unit circle makes it easy to generate equidistant sampling points in the space; however, this means that only two fields are mixed at a time. Once the optimal mixture for two fields has been found, they are combined to form the input to the next iteration of the algorithm. This
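
    The unit-circle construction described above can be sketched directly: two conditional fields are mixed with weights (cos θ, sin θ), the forward model is run at n equally spaced angles, and the model responses at the observation points are interpolated around the circle before picking the next mixture. The forward model below is a stand-in, not a groundwater solver:

```python
import numpy as np

def forward(field):
    """Stand-in for the groundwater model: 'heads' at two observation points."""
    return np.array([field.sum(), (field ** 2).sum()]) * 1e-3

def best_mixture(f1, f2, h_obs, n_coarse=8, n_fine=721):
    thetas = np.linspace(0.0, 2 * np.pi, n_coarse, endpoint=False)
    # Only n_coarse forward-model runs, one per sampled mixing angle:
    responses = np.array([forward(np.cos(t) * f1 + np.sin(t) * f2) for t in thetas])
    # Interpolate each observation's response around the circle (cheap),
    # instead of running the model at every candidate angle:
    fine = np.linspace(0.0, 2 * np.pi, n_fine)
    interp = np.array([np.interp(fine, thetas, responses[:, k], period=2 * np.pi)
                       for k in range(responses.shape[1])]).T
    objective = ((interp - h_obs) ** 2).sum(axis=1)
    t_best = fine[np.argmin(objective)]
    return np.cos(t_best) * f1 + np.sin(t_best) * f2   # weights stay on the unit circle

rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=100), rng.normal(size=100)
mixed = best_mixture(f1, f2, h_obs=np.array([0.05, 0.08]))
```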

  10. A Network Contention Model for the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2015-01-01

    The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads, while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim, eliminating the shortcomings of the existing network modeling capabilities. The approach takes a different path, implementing network contention and bandwidth capacity modeling with a less synchronous but sufficiently accurate model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overheads.

  11. Climate simulations for 1880-2003 with GISS modelE

    International Nuclear Information System (INIS)

    Hansen, J.; Lacis, A.; Miller, R.; Schmidt, G.A.; Russell, G.; Canuto, V.; Del Genio, A.; Hall, T.; Hansen, J.; Sato, M.; Kharecha, P.; Nazarenko, L.; Aleinov, I.; Bauer, S.; Chandler, M.; Faluvegi, G.; Jonas, J.; Ruedy, R.; Lo, K.; Cheng, Y.; Lacis, A.; Schmidt, G.A.; Del Genio, A.; Miller, R.; Cairns, B.; Hall, T.; Baum, E.; Cohen, A.; Fleming, E.; Jackman, C.; Friend, A.; Kelley, M.

    2007-01-01

    We carry out climate simulations for 1880-2003 with GISS modelE driven by ten measured or estimated climate forcings. An ensemble of climate model runs is carried out for each forcing acting individually and for all forcing mechanisms acting together. We compare side-by-side simulated climate change for each forcing, all forcings, observations, unforced variability among model ensemble members, and, if available, observed variability. Discrepancies between observations and simulations with all forcings are due to model deficiencies, inaccurate or incomplete forcing, and imperfect observations. Although there are notable discrepancies between model and observations, the fidelity is sufficient to encourage use of the model for simulations of future climate change. By using a fixed, well-documented model and accurately defining the 1880-2003 forcings, we aim to provide a benchmark against which the effect of improvements in the model, climate forcings, and observations can be tested. Principal model deficiencies include unrealistically weak tropical El Nino-like variability and a poor distribution of sea ice, with too much sea ice in the Northern Hemisphere and too little in the Southern Hemisphere. Greatest uncertainties in the forcings are the temporal and spatial variations of anthropogenic aerosols and their indirect effects on clouds. (authors)

  12. Running-mass inflation model and primordial black holes

    International Nuclear Information System (INIS)

    Drees, Manuel; Erfani, Encieh

    2011-01-01

    We revisit the question whether the running-mass inflation model allows the formation of Primordial Black Holes (PBHs) that are sufficiently long-lived to serve as candidates for Dark Matter. We incorporate recent cosmological data, including the WMAP 7-year results. Moreover, we include "the running of the running" of the spectral index of the power spectrum, as well as the renormalization group "running of the running" of the inflaton mass term. Our analysis indicates that formation of sufficiently heavy, and hence long-lived, PBHs still remains possible in this scenario. As a by-product, we show that the additional term in the inflaton potential still does not allow significant negative running of the spectral index

  13. MACC regional multi-model ensemble simulations of birch pollen dispersion in Europe

    NARCIS (Netherlands)

    Sofiev, M.; Berger, U.; Prank, M.; Vira, J.; Arteta, J.; Belmonte, J.; Bergmann, K.C.; Chéroux, F.; Elbern, H.; Friese, E.; Galan, C.; Gehrig, R.; Khvorostyanov, D.; Kranenburg, R.; Kumar, U.; Marécal, V.; Meleux, F.; Menut, L.; Pessi, A.M.; Robertson, L.; Ritenberga, O.; Rodinkova, V.; Saarto, A.; Segers, A.; Severova, E.; Sauliene, I.; Siljamo, P.; Steensen, B.M.; Teinemaa, E.; Thibaudon, M.; Peuch, V.H.

    2015-01-01

    This paper presents the first ensemble modelling experiment in relation to birch pollen in Europe. The seven-model European ensemble of MACC-ENS, tested in trial simulations over the flowering season of 2010, was run through the flowering season of 2013. The simulations have been compared with…

  14. Cloud radiative effects and changes simulated by the Coupled Model Intercomparison Project Phase 5 models

    Science.gov (United States)

    Shin, Sun-Hee; Kim, Ok-Yeon; Kim, Dongmin; Lee, Myong-In

    2017-07-01

    Using 32 CMIP5 (Coupled Model Intercomparison Project Phase 5) models, this study examines the veracity in the simulation of cloud amount and their radiative effects (CREs) in the historical run driven by observed external radiative forcing for 1850-2005, and their future changes in the RCP (Representative Concentration Pathway) 4.5 scenario runs for 2006-2100. Validation metrics for the historical run are designed to examine the accuracy in the representation of spatial patterns for climatological mean, and annual and interannual variations of clouds and CREs. The models show large spread in the simulation of cloud amounts, specifically in the low cloud amount. The observed relationship between cloud amount and the controlling large-scale environment are also reproduced diversely by various models. Based on the validation metrics, four models—ACCESS1.0, ACCESS1.3, HadGEM2-CC, and HadGEM2-ES—are selected as best models, and the average of the four models performs more skillfully than the multimodel ensemble average. All models project global-mean SST warming at the increase of the greenhouse gases, but the magnitude varies across the simulations between 1 and 2 K, which is largely attributable to the difference in the change of cloud amount and distribution. The models that simulate more SST warming show a greater increase in the net CRE due to reduced low cloud and increased incoming shortwave radiation, particularly over the regions of marine boundary layer in the subtropics. Selected best-performing models project a significant reduction in global-mean cloud amount of about -0.99% K⁻¹ and net radiative warming of 0.46 W m⁻² K⁻¹, suggesting a role of positive feedback to global warming.

  15. The natural oscillation of two types of ENSO events based on analyses of CMIP5 model control runs

    Science.gov (United States)

    Xu, Kang; Su, Jingzhi; Zhu, Congwen

    2014-07-01

    The eastern- and central-Pacific El Niño-Southern Oscillation (EP- and CP-ENSO) have been found to be dominant in the tropical Pacific Ocean, and are characterized by interannual and decadal oscillation, respectively. In the present study, we defined the EP- and CP-ENSO modes by singular value decomposition (SVD) between SST and sea level pressure (SLP) anomaly fields. We evaluated the natural features of these two types of ENSO modes as simulated by the pre-industrial control runs of 20 models involved in phase five of the Coupled Model Intercomparison Project (CMIP5). The results suggested that all the models show good skill in simulating the SST and SLP anomaly dipolar structures for the EP-ENSO mode, but only 12 exhibit good performance in simulating the tripolar CP-ENSO modes. Wavelet analysis suggested that the ensemble principal components in these 12 models exhibit an interannual and multi-decadal oscillation related to the EP- and CP-ENSO, respectively. Since there are no changes in external forcing in the pre-industrial control runs, such a result implies that the decadal oscillation of CP-ENSO is possibly a result of natural climate variability rather than external forcing.
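
    The SVD-based mode definition used here pairs the two anomaly fields through their cross-covariance matrix; a minimal sketch with placeholder anomaly data:

```python
import numpy as np

rng = np.random.default_rng(1)
nt, ns_sst, ns_slp = 600, 500, 400      # months x grid points (placeholder sizes)
sst = rng.normal(size=(nt, ns_sst))     # SST anomalies (time x space)
slp = rng.normal(size=(nt, ns_slp))     # SLP anomalies (time x space)

cov = sst.T @ slp / (nt - 1)            # cross-covariance between the two fields
u, s, vt = np.linalg.svd(cov, full_matrices=False)

sst_pattern, slp_pattern = u[:, 0], vt[0]               # leading coupled spatial patterns
pc_sst, pc_slp = sst @ sst_pattern, slp @ slp_pattern   # expansion coefficients
frac = s[0] ** 2 / (s ** 2).sum()                       # squared covariance fraction of mode 1
```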

  16. cellGPU: Massively parallel simulations of dynamic vertex models

    Science.gov (United States)

    Sussman, Daniel M.

    2017-10-01

    Vertex models represent confluent tissue by polygonal or polyhedral tilings of space, with the individual cells interacting via force laws that depend on both the geometry of the cells and the topology of the tessellation. This dependence on the connectivity of the cellular network introduces several complications to performing molecular-dynamics-like simulations of vertex models, and in particular makes parallelizing the simulations difficult. cellGPU addresses this difficulty and lays the foundation for massively parallelized, GPU-based simulations of these models. This article discusses its implementation for a pair of two-dimensional models, and compares the typical performance that can be expected between running cellGPU entirely on the CPU versus its performance when running on a range of commercial and server-grade graphics cards. By implementing the calculation of topological changes and forces on cells in a highly parallelizable fashion, cellGPU enables researchers to simulate time- and length-scales previously inaccessible via existing single-threaded CPU implementations. Program Files doi:http://dx.doi.org/10.17632/6j2cj29t3r.1 Licensing provisions: MIT Programming language: CUDA/C++ Nature of problem: Simulations of off-lattice "vertex models" of cells, in which the interaction forces depend on both the geometry and the topology of the cellular aggregate. Solution method: Highly parallelized GPU-accelerated dynamical simulations in which the force calculations and the topological features can be handled on either the CPU or GPU. Additional comments: The code is hosted at https://gitlab.com/dmsussman/cellGPU, with documentation additionally maintained at http://dmsussman.gitlab.io/cellGPUdocumentation

  17. AEGIS geologic simulation model

    International Nuclear Information System (INIS)

    Foley, M.G.

    1982-01-01

    The Geologic Simulation Model (GSM) is used by the AEGIS (Assessment of Effectiveness of Geologic Isolation Systems) program at the Pacific Northwest Laboratory to simulate the dynamic geology and hydrology of a geologic nuclear waste repository site over a million-year period following repository closure. The GSM helps to organize geologic/hydrologic data; to focus attention on active natural processes by requiring their simulation; and, through interactive simulation and calibration, to reduce subjective evaluations of the geologic system. During each computer run, the GSM produces a million-year geologic history that is possible for the region and the repository site. In addition, the GSM records in permanent history files everything that occurred during that time span. Statistical analyses of data in the history files of several hundred simulations are used to classify typical evolutionary paths, to establish the probabilities associated with deviations from the typical paths, and to determine which types of perturbations of the geologic/hydrologic system, if any, are most likely to occur. These simulations will be evaluated by geologists familiar with the repository region to determine validity of the results. Perturbed systems that are determined to be the most realistic, within whatever probability limits are established, will be used for the analyses that involve radionuclide transport and dose models. The GSM is designed to be continuously refined and updated. Simulation models are site specific, and, although the submodels may have limited general applicability, the input data requirements necessitate detailed characterization of each site before application

  18. Modeling and simulation of water flow on containment walls with inhomogeneous contact angle distribution

    International Nuclear Information System (INIS)

    Amend, Katharina; Klein, Markus

    2017-01-01

    The paper presents a three-dimensional numerical simulation for water running down inclined surfaces using OpenFOAM. This research project aims at developing a CFD model to describe the run down behavior of liquids and the resulting wash down of fission products on surfaces in the reactor containment. An empirical contact angle model with wetted history is introduced as well as a filtered randomized initial contact angle field. Simulation results are in good agreement with the experiments.

  19. Modeling and simulation of water flow on containment walls with inhomogeneous contact angle distribution

    Energy Technology Data Exchange (ETDEWEB)

    Amend, Katharina; Klein, Markus [Univ. der Bundeswehr Muenchen, Neubiberg (Germany). Inst. for Numerical Methods in Aerospace Engineering

    2017-07-15

    The paper presents a three-dimensional numerical simulation for water running down inclined surfaces using OpenFOAM. This research project aims at developing a CFD model to describe the run down behavior of liquids and the resulting wash down of fission products on surfaces in the reactor containment. An empirical contact angle model with wetted history is introduced as well as a filtered randomized initial contact angle field. Simulation results are in good agreement with the experiments.

  20. Nonhydrostatic and surfbeat model predictions of extreme wave run-up in fringing reef environments

    Science.gov (United States)

    Lashley, Christopher H.; Roelvink, Dano; van Dongeren, Ap R.; Buckley, Mark L.; Lowe, Ryan J.

    2018-01-01

    The accurate prediction of extreme wave run-up is important for effective coastal engineering design and coastal hazard management. While run-up processes on open sandy coasts have been reasonably well-studied, very few studies have focused on understanding and predicting wave run-up at coral reef-fronted coastlines. This paper applies the short-wave resolving, Nonhydrostatic (XB-NH) and short-wave averaged, Surfbeat (XB-SB) modes of the XBeach numerical model to validate run-up using data from two 1D (alongshore uniform) fringing-reef profiles without roughness elements, with two objectives: i) to provide insight into the physical processes governing run-up in such environments; and ii) to evaluate the performance of both modes in accurately predicting run-up over a wide range of conditions. XBeach was calibrated by optimizing the maximum wave steepness parameter (maxbrsteep) in XB-NH and the dissipation coefficient (alpha) in XB-SB using the first dataset, and then applied to the second dataset for validation. XB-NH and XB-SB predictions of extreme wave run-up (Rmax and R2%) and its components, infragravity- and sea-swell band swash (SIG and SSS) and shoreline setup, were compared to observations. XB-NH more accurately simulated wave transformation but under-predicted shoreline setup due to its exclusion of parameterized wave-roller dynamics. XB-SB under-predicted sea-swell band swash but overestimated shoreline setup due to an over-prediction of wave heights on the reef flat. Run-up (swash) spectra were dominated by infragravity motions, allowing the short-wave (but not wave group) averaged model (XB-SB) to perform comparably well to its more complete, short-wave resolving (XB-NH) counterpart. Despite their respective limitations, both modes were able to accurately predict Rmax and R2%.
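
    R2% as used above is the run-up level exceeded by only 2% of the individual run-up maxima. Given a vertical shoreline-excursion time series, its extraction can be sketched as follows (the local-peak definition of a run-up event is a common convention assumed here):

```python
import numpy as np

def runup_stats(eta):
    """Return (Rmax, R2%) from individual run-up maxima, taken as local peaks
    of the vertical shoreline excursion."""
    eta = np.asarray(eta, dtype=float)
    interior = eta[1:-1]
    peaks = interior[(interior > eta[:-2]) & (interior > eta[2:])]
    return peaks.max(), np.percentile(peaks, 98)   # R2%: exceeded by 2% of events

# Synthetic swash dominated by an infragravity band, as in the spectra above:
t = np.linspace(0, 600, 12000)
rng = np.random.default_rng(5)
eta = 0.3 * np.sin(2 * np.pi * t / 90) + 0.5 * np.sin(2 * np.pi * t / 12) \
      + 0.05 * rng.normal(size=t.size)
r_max, r_2pct = runup_stats(eta)
```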

  1. Modeling and simulation of different and representative engineering problems using Network Simulation Method.

    Science.gov (United States)

    Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F

    2018-01-01

    Mathematical models simulating different and representative engineering problems (atomic dry friction, moving front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear, coupled or uncoupled differential equations. For different values of the parameters that influence the solution, the problem is numerically solved by the network method, which provides all the variables of the problems. Although the model is extremely sensitive to the above parameters, no assumptions are made as regards linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data, published in the scientific literature, to show the reliability of the model.

  2. Development of the core-model implementation technology for YGN1 simulator

    International Nuclear Information System (INIS)

    Hong, J. H.; Lee, M. S.; Lee, Y. K.; Su, I. Y.

    2004-01-01

    The existing core models for the domestic nuclear power plant simulators for PWRs are entirely imported from foreign simulator vendors. To cope with the time-accuracy trade-off imposed by the poor computing capabilities of the early 1990s, several simplifications and assumptions to the neutronics governing equations were indispensable for real-time calculation of nuclear phenomena in the core region. To overcome these shortcomings, a new core model based on the MASTER code certified by the domestic regulatory body (KINS) is now being developed to replace the existing core models, especially as a realtime core solver for the YGN-1 simulator. This code is named R-MASTER (Realtime MASTER code). Because of the limited capacity of the host computer, the R-MASTER code must run on a separate high-performance computer, while all models other than the core model run on the host computer. This paper deals with the protocols and procedures applied to guarantee realtime communication and calculation of the R-MASTER code

  3. Comparison of minimalist footwear strategies for simulating barefoot running: a randomized crossover study.

    Directory of Open Access Journals (Sweden)

    Karsten Hollander

    Possible benefits of barefoot running have been widely discussed in recent years. Uncertainty exists about which footwear strategy adequately simulates barefoot running kinematics. The objective of this study was to investigate the effects of athletic footwear with different minimalist strategies on running kinematics. Thirty-five distance runners (22 males, 13 females, 27.9 ± 6.2 years, 179.2 ± 8.4 cm, 73.4 ± 12.1 kg, 24.9 ± 10.9 km·week⁻¹) performed a treadmill protocol at three running velocities (2.22, 2.78 and 3.33 m·s⁻¹) using four footwear conditions: barefoot, uncushioned minimalist shoes, cushioned minimalist shoes, and standard running shoes. 3D kinematic analysis was performed to determine ankle and knee angles at initial foot-ground contact, rate of rear-foot strikes, stride frequency and step length. Ankle angle at foot strike, step length and stride frequency were significantly influenced by footwear conditions (p<0.001) at all running velocities. Post-hoc pairwise comparisons showed significant differences (p<0.001) between running barefoot and all shod situations as well as between the uncushioned minimalist shoe and both cushioned shoe conditions. The rate of rear-foot strikes was lowest during barefoot running (58.6% at 3.33 m·s⁻¹), followed by running with uncushioned minimalist shoes (62.9%), cushioned minimalist (88.6%) and standard shoes (94.3%). Aside from showing the influence of shod conditions on running kinematics, this study helps to elucidate differences between footwear marketed as minimalist shoes and their ability to mimic barefoot running adequately. These findings have implications for the use of footwear applied in future research debating the topic of barefoot or minimalist shoe running.

  4. The 2014 Lake Askja rockslide tsunami - optimization of landslide parameters comparing numerical simulations with observed run-up

    Science.gov (United States)

    Sif Gylfadóttir, Sigríður; Kim, Jihwan; Kristinn Helgason, Jón; Brynjólfsson, Sveinn; Höskuldsson, Ármann; Jóhannesson, Tómas; Bonnevie Harbitz, Carl; Løvholt, Finn

    2016-04-01

    The Askja central volcano is located in the Northern Volcanic Zone of Iceland. Within the main caldera an inner caldera was formed in an eruption in 1875 and over the next 40 years it gradually subsided and filled up with water, forming Lake Askja. A large rockslide was released from the southeast margin of the inner caldera into Lake Askja on 21 July 2014. The release zone was located from 150 m to 350 m above the water level and measured 800 m across. The volume of the rockslide is estimated to have been 15-30 million m³, of which 10.5 million m³ was deposited in the lake, raising the water level by almost a meter. The rockslide caused a large tsunami that traveled across the lake and inundated the shores around the entire lake after 1-2 minutes. The vertical run-up varied typically between 10-40 m, but in some locations close to the impact area it ranged up to 70 m. Lake Askja is a popular destination visited by tens of thousands of tourists every year but, as luck would have it, the event occurred near midnight when no one was in the area. Field surveys conducted in the months following the event resulted in an extensive dataset. The dataset contains e.g. maximum inundation, a high-resolution digital elevation model of the entire inner caldera, as well as a high-resolution bathymetry of the lake displaying the landslide deposits. Using these data, a numerical model of the Lake Askja landslide and tsunami was developed using GeoClaw, a software package for numerical analysis of geophysical flow problems. Both the shallow water version and an extension of GeoClaw that includes dispersion were employed to simulate the wave generation, propagation, and run-up due to the rockslide plunging into the lake. The rockslide was modeled as a block that was allowed to stretch during run-out after entering the lake. An optimization approach was adopted to constrain the landslide parameters through inverse modeling by comparing the calculated inundation with the observed run-up.

  5. Numerical Modelling of Wave Run-Up

    DEFF Research Database (Denmark)

    Ramirez, Jorge Robert Rodriguez; Frigaard, Peter; Andersen, Thomas Lykke

    2011-01-01

    Wave loads are important in problems related to offshore structures, such as wave run-up and slamming. Computations of such wave problems are carried out by CFD models. This paper presents one model, NS3, which solves the 3D Navier-Stokes equations and uses the Volume of Fluid (VOF) method to treat the free...

  6. Stabilising the global greenhouse. A simulation model

    International Nuclear Information System (INIS)

    Michaelis, P.

    1993-01-01

    This paper investigates the economic implications of a comprehensive approach to greenhouse policies that strives to stabilise the atmospheric concentration of greenhouse gases at an ecologically determined threshold level. In a theoretical optimisation model, conditions for an efficient allocation of abatement effort among pollutants and over time are derived. The model is empirically specified and adapted to a dynamic GAMS algorithm. By various simulation runs for the period 1990 to 2110, the economics of greenhouse gas accumulation are explored. In particular, the long-run costs associated with the above stabilisation target are evaluated for three different policy scenarios: i) a comprehensive approach that covers all major greenhouse gases simultaneously, ii) a piecemeal approach that is limited to reducing CO2 emissions, and iii) a ten-year moratorium that postpones abatement effort until new scientific evidence on the greenhouse effect becomes available. Comparing the simulation results suggests that a piecemeal approach would considerably increase total cost, whereas a ten-year moratorium might be reasonable even if the probability of 'good news' is comparatively small. (orig.)

  7. Topographic filtering simulation model for sediment source apportionment

    Science.gov (United States)

    Cho, Se Jong; Wilcock, Peter; Hobbs, Benjamin

    2018-05-01

    We propose a Topographic Filtering simulation model (Topofilter) that can be used to identify the locations that are likely to contribute most of the sediment load delivered from a watershed. The reduced-complexity model links spatially distributed estimates of annual soil erosion, high-resolution topography, and observed sediment loading to determine the distribution of the sediment delivery ratio across a watershed. The model uses two simple two-parameter topographic transfer functions based on the distance and change in elevation from upland sources to the nearest stream channel and then down the stream network. The approach does not attempt to find a single best-calibrated solution of sediment delivery, but uses a model-conditioning approach to develop a large number of possible solutions. For each model run, the locations that contribute 90% of the sediment loading are identified, and those locations that appear in this set in most of the 10,000 model runs are identified as the sources most likely to contribute most of the sediment delivered to the watershed outlet. Because the underlying model is quite simple and strongly anchored by reliable information on soil erosion, topography, and sediment load, we believe that the ensemble of simulation outputs provides a useful basis for identifying the dominant sediment sources in the watershed.
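
    As a rough illustration of this conditioning idea (not the authors' code), the sketch below samples two transfer-function parameters repeatedly, keeps only parameter sets that roughly reproduce an observed outlet load, and counts how often each source cell appears in the set contributing 90% of delivered sediment. The exponential transfer form, the acceptance threshold, and all data are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical watershed: per-cell annual erosion, distance and drop to channel
n_cells = 1000
erosion = rng.lognormal(mean=0.0, sigma=1.0, size=n_cells)  # t/yr per cell
dist = rng.uniform(10, 2000, size=n_cells)                  # m to nearest channel
drop = rng.uniform(1, 100, size=n_cells)                    # m elevation change
observed_load = 0.2 * erosion.sum()                         # assumed outlet load

n_runs = 10_000
hits = np.zeros(n_cells)   # how often a cell lands in the "top 90%" set
accepted = 0

for _ in range(n_runs):
    a, b = rng.uniform(1e-4, 5e-3, size=2)   # transfer-function parameters
    sdr = np.exp(-a * dist - b * drop)       # assumed 2-parameter delivery ratio
    delivered = erosion * sdr
    total = delivered.sum()
    if abs(total - observed_load) / observed_load > 0.25:
        continue                             # condition on the observed load
    accepted += 1
    order = np.argsort(delivered)[::-1]      # cells sorted by delivered load
    cum = np.cumsum(delivered[order]) / total
    top = order[: np.searchsorted(cum, 0.9) + 1]
    hits[top] += 1

likely_sources = np.flatnonzero(hits > 0.8 * accepted)
print(f"{accepted} accepted runs; {likely_sources.size} cells flagged as dominant sources")
```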

  8. Comparison of minimalist footwear strategies for simulating barefoot running: a randomized crossover study.

    Science.gov (United States)

    Hollander, Karsten; Argubi-Wollesen, Andreas; Reer, Rüdiger; Zech, Astrid

    2015-01-01

    Possible benefits of barefoot running have been widely discussed in recent years. Uncertainty exists about which footwear strategy adequately simulates barefoot running kinematics. The objective of this study was to investigate the effects of athletic footwear with different minimalist strategies on running kinematics. Thirty-five distance runners (22 males, 13 females, 27.9 ± 6.2 years, 179.2 ± 8.4 cm, 73.4 ± 12.1 kg, 24.9 ± 10.9 km x week(-1)) performed a treadmill protocol at three running velocities (2.22, 2.78 and 3.33 m x s(-1)) using four footwear conditions: barefoot, uncushioned minimalist shoes, cushioned minimalist shoes, and standard running shoes. 3D kinematic analysis was performed to determine ankle and knee angles at initial foot-ground contact, rate of rear-foot strikes, stride frequency and step length. Ankle angle at foot strike, step length and stride frequency were significantly influenced by footwear conditions (p<0.001) at all running velocities. Post-hoc pairwise comparisons showed significant differences (p<0.001) between running barefoot and all shod situations as well as between the uncushioned minimalistic shoe and both cushioned shoe conditions. The rate of rear-foot strikes was lowest during barefoot running (58.6% at 3.33 m x s(-1)), followed by running with uncushioned minimalist shoes (62.9%), cushioned minimalist (88.6%) and standard shoes (94.3%). Aside from showing the influence of shod conditions on running kinematics, this study helps to elucidate differences between footwear marketed as minimalist shoes and their ability to mimic barefoot running adequately. These findings have implications for the use of footwear applied in future research debating the topic of barefoot or minimalist shoe running.

  9. MASADA: A MODELING AND SIMULATION AUTOMATED DATA ANALYSIS FRAMEWORK FOR CONTINUOUS DATA-INTENSIVE VALIDATION OF SIMULATION MODELS

    CERN Document Server

    Foguelman, Daniel Jacob; The ATLAS collaboration

    2016-01-01

    Complex networked computer systems are usually subjected to upgrades and enhancements on a continuous basis. Modeling and simulation of such systems helps with guiding their engineering processes, in particular when testing candidate design alternatives directly on the real system is not an option. Models are built and simulation exercises are run guided by specific research and/or design questions. A vast amount of operational conditions for the real system need to be assumed in order to focus on the relevant questions at hand. A typical boundary condition for computer systems is the exogenously imposed workload. Meanwhile, in typical projects huge amounts of monitoring information are logged and stored with the purpose of studying the system's performance in search of improvements. Also, research questions change as the system's operational conditions vary throughout its lifetime. This context poses many challenges to determining the validity of simulation models. As the behavioral empirical base of the sys...

  10. MASADA: A Modeling and Simulation Automated Data Analysis framework for continuous data-intensive validation of simulation models

    CERN Document Server

    Foguelman, Daniel Jacob; The ATLAS collaboration

    2016-01-01

    Complex networked computer systems are usually subjected to upgrades and enhancements on a continuous basis. Modeling and simulation of such systems helps with guiding their engineering processes, in particular when testing candidate design alternatives directly on the real system is not an option. Models are built and simulation exercises are run guided by specific research and/or design questions. A vast amount of operational conditions for the real system need to be assumed in order to focus on the relevant questions at hand. A typical boundary condition for computer systems is the exogenously imposed workload. Meanwhile, in typical projects huge amounts of monitoring information are logged and stored with the purpose of studying the system's performance in search of improvements. Also, research questions change as the system's operational conditions vary throughout its lifetime. This context poses many challenges to determining the validity of simulation models. As the behavioral empirical base of the sys...

  11. Simulation and analysis of a model dinoflagellate predator-prey system

    Science.gov (United States)

    Mazzoleni, M. J.; Antonelli, T.; Coyne, K. J.; Rossi, L. F.

    2015-12-01

    This paper analyzes the dynamics of a model dinoflagellate predator-prey system and uses simulations to validate theoretical and experimental studies. A simple model for predator-prey interactions is derived by drawing upon analogies from chemical kinetics. This model is then modified to account for inefficiencies in predation. Simulation results are shown to closely match the model predictions. Additional simulations are then run which are based on experimental observations of predatory dinoflagellate behavior, and this study specifically investigates how the predatory dinoflagellate Karlodinium veneficum uses toxins to immobilize its prey and increase its feeding rate. These simulations account for complex dynamics that were not included in the basic models, and the results from these computational simulations closely match the experimentally observed predatory behavior of K. veneficum and reinforce the notion that predatory dinoflagellates utilize toxins to increase their feeding rate.
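
    The chemical-kinetics analogy mentioned here amounts to mass-action rate laws for predator-prey encounters. Below is a minimal sketch of such a system with an efficiency factor representing inefficient predation; the functional form and all rate constants are invented for illustration and are not from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Mass-action analogy: predator P "reacts" with prey N at encounter rate k,
# with capture efficiency e < 1 accounting for inefficient predation.
k, e, mu = 0.02, 0.3, 0.05   # encounter rate, efficiency, predator mortality

def rhs(t, y):
    N, P = y
    encounters = k * N * P
    dN = -encounters                 # prey immobilized/consumed
    dP = e * encounters - mu * P     # growth from captures minus mortality
    return [dN, dP]

sol = solve_ivp(rhs, (0.0, 100.0), [100.0, 5.0])
print("final prey, predator:", sol.y[:, -1])
```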

  12. Modeling and simulation of different and representative engineering problems using Network Simulation Method

    Science.gov (United States)

    2018-01-01

    Mathematical models simulating different and representative engineering problems (atomic dry friction, moving front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear, coupled or uncoupled differential equations. For different values of the parameters that influence the solution, the problem is numerically solved by the network method, which provides all the variables of the problem. Although the model is extremely sensitive to the above parameters, no assumptions are made regarding linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data published in the scientific literature to show the reliability of the model. PMID:29518121

  13. Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System

    Science.gov (United States)

    He, Qing; Li, Hong

    Belt conveyors are among the most important devices for transporting bulk-solid material over long distances. Dynamic analysis is key to deciding whether a design is technically rational, safe and reliable in running, and economically feasible. It is very important to study dynamic properties, improve efficiency and productivity, and guarantee safe, reliable and stable conveyor running. The dynamic research and applications of large-scale belt conveyors are discussed. The main research topics and the state of the art of dynamic research on belt conveyors are analyzed. The main future work focuses on dynamic analysis, modeling and simulation of the main components and the whole system, nonlinear modeling, and simulation and vibration analysis of large-scale conveyor systems.

  14. Modelling surface run-off and trends analysis over India

    Indian Academy of Sciences (India)

    exponential model was developed between the rainfall and the run-off that predicted the run-off with an R² of … precipitation and other climate parameters is well documented … Sen P K 1968 Estimates of the regression coefficient based …

  15. Evaluation of Tsunami Run-Up on Coastal Areas at Regional Scale

    Science.gov (United States)

    González, M.; Aniel-Quiroga, Í.; Gutiérrez, O.

    2017-12-01

    Tsunami hazard assessment is tackled by means of numerical simulations, giving as a result the areas flooded by the tsunami wave inland. To get this, some input data are required, i.e., the high-resolution topobathymetry of the study area, the earthquake focal mechanism parameters, etc. The computational cost of these kinds of simulations is still excessive. An important restriction for the elaboration of large-scale maps at national or regional scale is the reconstruction of high-resolution topobathymetry in the coastal zone. An alternative, traditional method consists of the application of empirical-analytical formulations to calculate run-up at several coastal profiles (i.e. Synolakis, 1987), combined with numerical simulations offshore that do not include coastal inundation. In this case, the numerical simulations are faster, but some limitations are added, as the coastal bathymetric profiles are very simply idealized. In this work, we present a complementary methodology based on a hybrid numerical model, formed by 2 models that were coupled ad hoc for this work: a non-linear shallow water equations (NLSWE) model for the offshore part of the propagation and a Volume of Fluid (VOF) model for the areas near the coast and inland, applying each numerical scheme where it better reproduces the tsunami wave. The run-up of a tsunami scenario is obtained by applying the coupled model to an ad-hoc numerical flume. To design this methodology, hundreds of worldwide topobathymetric profiles have been parameterized, using 5 parameters (2 depths and 3 slopes). In addition, tsunami waves have also been parameterized, by their height and period. As an application of the numerical flume methodology, the coastal parameterized profiles and tsunami waves have been combined to build a populated database of run-up calculations. The combination was tackled by means of numerical simulations in the numerical flume. The result is a tsunami run-up database that considers real profile shapes
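
    For context, the kind of empirical-analytical run-up formulation cited above (Synolakis, 1987) can be evaluated directly. The sketch below uses the solitary-wave run-up law as commonly quoted, R/d = 2.831 sqrt(cot β) (H/d)^(5/4); the coefficient is reproduced from memory and should be checked against the original before any real use.

```python
import math

def synolakis_runup(H, d, beta_deg):
    """Maximum run-up R of a non-breaking solitary wave on a plane beach.

    Run-up law attributed to Synolakis (1987), quoted here from memory:
    R/d = 2.831 * sqrt(cot(beta)) * (H/d)**1.25.
    H: offshore wave height (m), d: offshore depth (m), beta_deg: beach angle (deg).
    """
    cot_beta = 1.0 / math.tan(math.radians(beta_deg))
    return d * 2.831 * math.sqrt(cot_beta) * (H / d) ** 1.25

# Example: a 1 m wave in 10 m depth approaching a 1-degree plane beach
print(f"run-up ≈ {synolakis_runup(1.0, 10.0, 1.0):.1f} m")
```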

  16. Simulations of flow and prediction of sediment movement in Wymans Run, Cochranton Borough, Crawford County, Pennsylvania

    Science.gov (United States)

    Hittle, Elizabeth

    2011-01-01

    In small watersheds, runoff entering local waterways from large storms can cause rapid and profound changes in the streambed that can contribute to flooding. Wymans Run, a small stream in Cochranton Borough, Crawford County, experienced a large rain event in June 2008 that caused sediment to be deposited at a bridge. A hydrodynamic model, Flow and Sediment Transport and Morphological Evolution of Channels (FaSTMECH), which is incorporated into the U.S. Geological Survey Multi-Dimensional Surface-Water Modeling System (MD_SWMS), was constructed to predict boundary shear stress and velocity in Wymans Run using data from the June 2008 event. Shear stress and velocity values can be used to indicate areas of a stream where sediment transported downstream can be deposited on the streambed. Because of the short duration of the June 2008 rain event, streamflow was not directly measured but was estimated using the U.S. Army Corps of Engineers' one-dimensional Hydrologic Engineering Center River Analysis System (HEC-RAS). Scenarios examining possible engineering solutions to decrease the amount of sediment at the bridge, including bridge expansion, channel expansion, and dredging upstream from the bridge, were simulated using the FaSTMECH model. Each scenario was evaluated for potential effects on water-surface elevation, boundary shear stress, and velocity.

  17. Tropical Cyclones in the 7km NASA Global Nature Run for use in Observing System Simulation Experiments

    Science.gov (United States)

    Reale, Oreste; Achuthavarier, Deepthi; Fuentes, Marangelly; Putman, William M.; Partyka, Gary

    2018-01-01

    The National Aeronautics and Space Administration (NASA) Nature Run (NR), released for use in Observing System Simulation Experiments (OSSEs), is a 2-year-long global non-hydrostatic free-running simulation at a horizontal resolution of 7 km, forced by observed sea-surface temperatures (SSTs) and sea ice, and inclusive of interactive aerosols and trace gases. This article evaluates the NR with respect to tropical cyclone (TC) activity. It is emphasized that to serve as a NR, a long-term simulation must be able to produce realistic TCs, which arise out of realistic large-scale forcings. The presence in the NR of the realistic, relevant dynamical features over the African Monsoon region and the tropical Atlantic is confirmed, along with realistic African Easterly Wave activity. The NR Atlantic TC seasons, produced with 2005 and 2006 SSTs, show interannual variability consistent with observations, with much stronger activity in 2005. An investigation of TC activity over all the other basins (eastern and western North Pacific, North and South Indian Ocean, and Australian region), together with relevant elements of the atmospheric circulation, such as the Somali Jet and westerly bursts, reveals that the model captures the fundamental aspects of TC seasons in every basin, producing realistic numbers of TCs with realistic tracks, life spans and structures. This confirms that the NASA NR is a very suitable tool for OSSEs targeting TCs and represents an improvement with respect to previous long simulations that have served the global atmospheric OSSE community. PMID:29674806

  18. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    Science.gov (United States)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computation time cost (> several hours); 2. Landslide model outputs are not scalar but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model with a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters that trigger the occurrence of a turning point marking a shift between a regime of low landslide displacements and one of high values.
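
    A compact sketch of the "basis set expansion - meta-model - Sobol' indices" chain described here, with a toy time-series model standing in for the landslide simulator. PCA is done via SVD, a random forest stands in for the projection pursuit regression surrogate, and the SALib package is assumed for Saltelli sampling and Sobol' analysis; parameter names and bounds are made up.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# A few tens of "long-running" model evaluations: inputs X, time series Y(t)
n_train, n_t = 40, 200
X = rng.uniform(0.0, 1.0, size=(n_train, 3))   # 3 slip-surface parameters (toy)
t = np.linspace(0, 1, n_t)
Y = np.sin(6 * t) * X[:, [0]] + X[:, [1]] ** 2 + 0.1 * X[:, [2]]  # toy displacements

# 1) Basis-set expansion: PCA of the output time series via SVD
Yc = Y - Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
scores = U[:, :2] * s[:2]          # first two modes of temporal variation

# 2) Meta-model: a cheap regressor per mode (surrogate for the landslide model)
surrogates = [RandomForestRegressor(n_estimators=200, random_state=0).fit(X, scores[:, k])
              for k in range(2)]

# 3) Sobol' indices of each mode, computed on the surrogate, not the full model
problem = {"num_vars": 3, "names": ["c", "phi", "k_w"], "bounds": [[0, 1]] * 3}
Xs = saltelli.sample(problem, 1024)
for k, sur in enumerate(surrogates):
    Si = sobol.analyze(problem, sur.predict(Xs))
    print(f"mode {k}: first-order Sobol indices {np.round(Si['S1'], 2)}")
```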

  19. CASY: a dynamic simulation of the gas-cooled fast breeder reactor core auxiliary cooling system. Volume II. Example computer run

    Energy Technology Data Exchange (ETDEWEB)

    1979-09-01

    A listing of a CASY computer run is presented. It was initiated from a demand terminal and, therefore, contains the identification ST0952. This run also contains an INDEX listing of the subroutine UPDATE. The run includes a simulated scram transient at 30 seconds.

  20. CASY: a dynamic simulation of the gas-cooled fast breeder reactor core auxiliary cooling system. Volume II. Example computer run

    International Nuclear Information System (INIS)

    1979-09-01

    A listing of a CASY computer run is presented. It was initiated from a demand terminal and, therefore, contains the identification ST0952. This run also contains an INDEX listing of the subroutine UPDATE. The run includes a simulated scram transient at 30 seconds.

  1. Modeling, simulation and optimization of bipedal walking

    CERN Document Server

    Berns, Karsten

    2013-01-01

    The model-based investigation of motions of anthropomorphic systems is an important interdisciplinary research topic involving specialists from many fields such as Robotics, Biomechanics, Physiology, Orthopedics, Psychology, Neurosciences, Sports, Computer Graphics and Applied Mathematics. This book presents a study of basic locomotion forms such as walking and running, which are of particular interest due to the high demands on dynamic coordination, actuator efficiency and balance control. Mathematical models and numerical simulation and optimization techniques are explained, in combination with experimental data, which can help to better understand the basic underlying mechanisms of these motions and to improve them. Example topics treated in this book are: modeling techniques for anthropomorphic bipedal walking systems; optimized walking motions for different objective functions; identification of objective functions from measurements; simulation and optimization approaches for humanoid robots; biologically inspired con...

  2. Speeding up N-body simulations of modified gravity: chameleon screening models

    Science.gov (United States)

    Bose, Sownak; Li, Baojiu; Barreira, Alexandre; He, Jian-hua; Hellwing, Wojciech A.; Koyama, Kazuya; Llinares, Claudio; Zhao, Gong-Bo

    2017-02-01

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
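
    To make the baseline concrete: Newton-Gauss-Seidel relaxation sweeps through the grid, applying a local Newton update to each node of the discretised nonlinear equation. The sketch below applies it to a 1D toy analogue (u'' = e^u − ρ), not to the actual f(R) equation or to the paper's new analytically solvable scheme.

```python
import numpy as np

# Toy 1D analogue of a nonlinear scalar-field equation: u'' = exp(u) - rho(x),
# with u = 0 at both boundaries, solved by Newton-Gauss-Seidel relaxation
# (the slow baseline method the abstract refers to; purely illustrative).
n = 129
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
rho = 1.0 + 0.5 * np.sin(2 * np.pi * x)
u = np.zeros(n)

for sweep in range(500):
    max_du = 0.0
    for i in range(1, n - 1):
        # Local residual L(u_i) and its derivative dL/du_i for Newton's update
        L = (u[i - 1] - 2 * u[i] + u[i + 1]) / h**2 - (np.exp(u[i]) - rho[i])
        dL = -2.0 / h**2 - np.exp(u[i])
        du = -L / dL
        u[i] += du
        max_du = max(max_du, abs(du))
    if max_du < 1e-10:
        break

print(f"finished after {sweep + 1} sweeps, largest last update {max_du:.1e}")
```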

  3. Speeding up N -body simulations of modified gravity: chameleon screening models

    International Nuclear Information System (INIS)

    Bose, Sownak; Li, Baojiu; He, Jian-hua; Llinares, Claudio; Barreira, Alexandre; Hellwing, Wojciech A.; Koyama, Kazuya; Zhao, Gong-Bo

    2017-01-01

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.

  4. Speeding up N -body simulations of modified gravity: chameleon screening models

    Energy Technology Data Exchange (ETDEWEB)

    Bose, Sownak; Li, Baojiu; He, Jian-hua; Llinares, Claudio [Institute for Computational Cosmology, Department of Physics, Durham University, Durham DH1 3LE (United Kingdom); Barreira, Alexandre [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany); Hellwing, Wojciech A.; Koyama, Kazuya [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom); Zhao, Gong-Bo, E-mail: sownak.bose@durham.ac.uk, E-mail: baojiu.li@durham.ac.uk, E-mail: barreira@mpa-garching.mpg.de, E-mail: jianhua.he@durham.ac.uk, E-mail: wojciech.hellwing@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk, E-mail: claudio.llinares@durham.ac.uk, E-mail: gbzhao@nao.cas.cn [National Astronomy Observatories, Chinese Academy of Science, Beijing, 100012 (China)

    2017-02-01

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.

  5. The General-Use Nodal Network Solver (GUNNS) Modeling Package for Space Vehicle Flow System Simulation

    Science.gov (United States)

    Harvey, Jason; Moore, Michael

    2013-01-01

    The General-Use Nodal Network Solver (GUNNS) is a modeling software package that combines nodal analysis and the hydraulic-electric analogy to simulate fluid, electrical, and thermal flow systems. GUNNS is developed by L-3 Communications under the TS21 (Training Systems for the 21st Century) project for NASA Johnson Space Center (JSC), primarily for use in space vehicle training simulators at JSC. It has sufficient compactness and fidelity to model the fluid, electrical, and thermal aspects of space vehicles in real-time simulations running on commodity workstations, for vehicle crew and flight controller training. It has a reusable and flexible component and system design, and a Graphical User Interface (GUI), providing capability for rapid GUI-based simulator development, ease of maintenance, and associated cost savings. GUNNS is optimized for NASA's Trick simulation environment, but can be run independently of Trick.

  6. Simulation of accelerated strip cooling on the hot rolling mill run-out roller table

    International Nuclear Information System (INIS)

    Muhin, U.; Belskij, S.; Makarov, E.; Koinov, T.

    2013-01-01

    Full text: A mathematical model of the thermal state of the metal on the run-out roller table of a continuous wide hot-strip mill is presented. The mathematical model takes into account the heat generation during the polymorphic γ → α transformation of the supercooled austenite phase and the influence of chemical composition on the physical properties of the steel. The model allows the calculation of modes of accelerated cooling of strips on the run-out roller table of a continuous wide hot-strip mill. The coiling temperature calculation error does not exceed 20 °C for 98.5% of the strips of low-carbon and low-alloyed steels. Key words: hot rolled, wide strip, accelerated cooling, run-out roller table, polymorphic transformation, mathematical modeling

  7. Modelling of thermalhydraulics and reactor physics in simulators

    International Nuclear Information System (INIS)

    Miettinen, J.

    1994-01-01

    The evolution of thermalhydraulic analysis methods for analysis and simulator purposes has brought the thermohydraulic models in both application areas closer together. In large analysis codes like RELAP5, TRAC, CATHARE and ATHLET, accuracy in calculating complicated phenomena has been emphasized, but in spite of large development efforts many generic problems remain unsolved. For simulator purposes, fast-running codes have been developed, and these have undergone only limited assessment efforts. But these codes have more simulator-friendly features than the large codes, like portability and modular code structure. In this respect, the simulator experiences with the SMABRE code are discussed. Both large analysis codes and special simulator codes have their advantages in simulator applications. The evolution of reactor physics calculation methods in simulator applications started from simple point kinetics models. For analysis purposes, accurate 1-D and 3-D codes have been developed that are capable of handling fast and complicated transients. For simulator purposes, the capability to simulate instruments has been emphasized, while dynamic simulation capability has been less significant. The approaches to 3-dimensionality in simulators still require quite a lot of development before analysis accuracy is reached. (orig.) (8 refs., 2 figs., 2 tabs.)

  8. Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond

    CERN Document Server

    Bonacorsi, D; Giordano, D; Girone, M; Neri, M; Magini, N; Kuznetsov, V; Wildish, T

    2015-01-01

    During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collisions data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the WorldWide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulation of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This...

  9. Integration of MATLAB Simulink(Registered Trademark) Models with the Vertical Motion Simulator

    Science.gov (United States)

    Lewis, Emily K.; Vuong, Nghia D.

    2012-01-01

    This paper describes the integration of MATLAB Simulink(Registered TradeMark) models into the Vertical Motion Simulator (VMS) at NASA Ames Research Center. The VMS is a high-fidelity, large motion flight simulator that is capable of simulating a variety of aerospace vehicles. Integrating MATLAB Simulink models into the VMS needed to retain the development flexibility of the MATLAB environment and allow rapid deployment of model changes. The process developed at the VMS was used successfully in a number of recent simulation experiments. This accomplishment demonstrated that the model integrity was preserved, while working within the hard real-time run environment of the VMS architecture, and maintaining the unique flexibility of the VMS to meet diverse research requirements.

  10. Comparing internal and external run-time coupling of CFD and building energy simulation software

    NARCIS (Netherlands)

    Djunaedy, E.; Hensen, J.L.M.; Loomans, M.G.L.C.

    2004-01-01

    This paper describes a comparison between internal and external run-time coupling of CFD and building energy simulation software. Internal coupling can be seen as the "traditional" way of developing software, i.e. the capabilities of existing software are expanded by merging codes. With external

  11. Advanced training simulator models. Implementation and validation

    International Nuclear Information System (INIS)

    Borkowsky, Jeffrey; Judd, Jerry; Belblidia, Lotfi; O'farrell, David; Andersen, Peter

    2008-01-01

    Modern training simulators are required to replicate plant data for both thermal-hydraulic and neutronic response. Replication is required such that reactivity manipulation on the simulator properly trains the operator for reactivity manipulation at the plant. This paper discusses advanced models which perform this function in real time using the coupled code system THOR/S3R. This code system models all fluid systems in detail using an advanced two-phase thermal-hydraulic model. The nuclear core is modeled using an advanced three-dimensional nodal method and cycle-specific nuclear data. These models are configured to run interactively from a graphical instructor station or hardware operation panels. The simulator models are theoretically rigorous and are expected to replicate the physics of the plant. However, to verify replication, the models must be independently assessed. Plant data is the preferred validation method, but plant data is often not available for many important training scenarios. In the absence of data, validation may be obtained by slower-than-real-time transient analysis. This analysis can be performed by coupling a safety analysis code and a core design code. Such a coupling exists between the codes RELAP5 and SIMULATE-3K (S3K). RELAP5/S3K is used to validate the real-time model for several postulated plant events. (author)

  12. Knowledge representation requirements for model sharing between model-based reasoning and simulation in process flow domains

    Science.gov (United States)

    Throop, David R.

    1992-01-01

    The paper examines the requirements for the reuse of computational models employed in model-based reasoning (MBR) to support automated inference about mechanisms. Areas in which the theory of MBR is not yet completely adequate for using the information that simulations can yield are identified, and recent work in these areas is reviewed. It is argued that using MBR along with simulations forces the use of specific fault models. Fault models are used so that a particular fault can be instantiated into the model and run. This in turn implies that the component specification language needs to be capable of encoding any fault that might need to be sensed or diagnosed. It also means that the simulation code must anticipate all these faults at the component level.

  13. Design of simulation builder software to support the enterprise modeling and simulation task of the AMTEX program

    Energy Technology Data Exchange (ETDEWEB)

    Nolan, M.; Lamont, A.; Chang, L.

    1995-12-12

    This document describes the implementation of the Simulation Builder developed as part of the Enterprise Modeling and Simulation (EM&S) portion of the Demand Activated Manufacturing Architecture (DAMA) project. The Simulation Builder software allows users to develop simulation models using pre-defined modules from a library. The Simulation Builder provides the machinery to allow the modules to link together and communicate information during the simulation run. This report describes the basic capabilities and structure of the Simulation Builder to assist a user in reviewing and using the code. It also describes the basic steps to follow when developing modules to take advantage of the capabilities provided by the Simulation Builder. The Simulation Builder software is written in C++. The discussion in this report assumes a sound understanding of the C++ language. Although this report describes the steps to follow when using the Simulation Builder, it is not intended to be a tutorial for a user unfamiliar with C++.

  14. An Iterative Algorithm to Determine the Dynamic User Equilibrium in a Traffic Simulation Model

    Science.gov (United States)

    Gawron, C.

    An iterative algorithm to determine the dynamic user equilibrium with respect to link costs defined by a traffic simulation model is presented. Each driver's route choice is modeled by a discrete probability distribution which is used to select a route in the simulation. After each simulation run, the probability distribution is adapted to minimize the travel costs. Although the algorithm does not depend on the simulation model, a queuing model is used for performance reasons. The stability of the algorithm is analyzed for a simple example network. As an application example, a dynamic version of Braess's paradox is studied.
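
    A schematic version of such an iteration is sketched below: each driver keeps a discrete probability distribution over routes, samples a route per run, and the distributions are re-weighted toward cheaper routes. A softmax re-weighting stands in for Gawron's exact update rule, and a BPR-style congestion function stands in for the queuing simulation; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_drivers, n_routes, beta = 500, 3, 0.5
base = np.array([10.0, 12.0, 15.0])       # free-flow travel times (assumed)
cap = np.array([150.0, 200.0, 400.0])     # route capacities (assumed)
prob = np.full((n_drivers, n_routes), 1.0 / n_routes)

for it in range(50):
    # Each driver samples a route from its own probability distribution
    choice = np.array([rng.choice(n_routes, p=p) for p in prob])
    flow = np.bincount(choice, minlength=n_routes)
    cost = base * (1.0 + (flow / cap) ** 2)   # congestion cost (BPR-like, assumed)
    # Adapt every distribution toward cheaper routes; each row stays a valid
    # distribution because it is a convex combination of two distributions.
    weights = np.exp(-beta * cost)
    prob = 0.9 * prob + 0.1 * (weights / weights.sum())

print("final route costs:", np.round(cost, 1), "flows:", flow)
```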

  15. Application of data assimilation technique for flow field simulation for Kaiga site using TAPM model

    International Nuclear Information System (INIS)

    Shrivastava, R.; Oza, R.B.; Puranik, V.D.; Hegde, M.N.; Kushwaha, H.S.

    2008-01-01

    The data assimilation techniques are becoming popular nowadays for obtaining realistic flow field simulations for the site under consideration. The present paper describes a data assimilation technique for flow field simulation for the Kaiga site using the air pollution model (TAPM) developed by CSIRO, Australia. In this, the TAPM model was run for the Kaiga site for a period of one month (November 2004) using the analysed meteorological data supplied with the model for the Central Asian (CAS) region, and the model solutions were nudged with the observed wind speed and wind direction data available for the site. The model was run with 4 nested grids with grid spacings of 30 km, 10 km, 3 km and 1 km, respectively. The model-generated results with and without nudging are statistically compared with the observations. (author)
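
    Nudging of this kind is, in essence, Newtonian relaxation: the model field is pulled toward the observed value on a chosen time scale. A generic illustration of the idea (not TAPM's internal scheme; the time scale and values are arbitrary):

```python
def nudge(model_wind, obs_wind, dt, tau=3600.0):
    """Newtonian relaxation of a model field toward an observation.

    The model tendency gets an extra term (obs - model) / tau, so the field
    relaxes toward the observed value on time scale tau (seconds).
    """
    return model_wind + (obs_wind - model_wind) * dt / tau

u = 5.0                      # model wind speed, m/s
for _ in range(6):           # six 10-minute steps with an 8 m/s observation
    u = nudge(u, 8.0, dt=600.0)
print(f"nudged wind after 1 h: {u:.2f} m/s")
```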

  16. Towards a numerical run-out model for quick-clay slides

    Science.gov (United States)

    Issler, Dieter; L'Heureux, Jean-Sébastien; Cepeda, José M.; Luna, Byron Quan; Gebreslassie, Tesfahunegn A.

    2015-04-01

    quasi-three-dimensional codes with a choice of bed-friction laws. The findings of the simulations point strongly towards the need for a different modeling approach that incorporates the essential physical features of quick-clay slides. The major requirement is a realistic description of remolding. A two-layer model is needed to describe the non-sensitive topsoil that often is passively advected by the slide. In many cases, the topography is rather complex so that 3D or quasi-3D (depth-averaged) models are required for realistic modeling of flow heights and velocities. Finally, since many Norwegian quick-clay slides run out into a fjord (and may generate a tsunami), it is also desirable to explicitly account for buoyancy and hydrodynamic drag.

  17. Construction and simulation of a novel continuous traffic flow model

    International Nuclear Information System (INIS)

    Hwang, Yao-Hsin; Yu, Jui-Ling

    2017-01-01

    In this paper, we aim to propose a novel mathematical model for traffic flow and apply a newly developed characteristic particle method to solve the associated governing equations. As compared with existing non-equilibrium higher-order traffic flow models, the present one is put forward to satisfy the following three conditions: 1. preserve the equilibrium state in the smooth region; 2. yield an anisotropic propagation of traffic flow information; 3. be expressed in conservation-law form for traffic momentum. These conditions ensure a more practical simulation of traffic flow physics: the current traffic will not be influenced by conditions behind it, and conditions across a traffic shock are unambiguous. Through analyses of the characteristics, the stability condition and the steady-state solution adherent to the equation system, it is shown that the proposed model actually conforms to these conditions. Furthermore, this model can be cast into its characteristic form which, incorporated with the Rankine-Hugoniot relation, is appropriate for simulation by the characteristic particle method to obtain accurate computational results. - Highlights: • The traffic model is expressed with the momentum conservation law. • Traffic flow information propagates anisotropically and preserves the equilibrium state in the smooth region. • Computational particles of two families are invented to mimic forward-running and backward-running characteristics. • Formation of shocks is naturally detected by the intersection of computational particles of the same family. • A newly developed characteristic particle method is used to simulate the traffic flow model equations.

  18. Running vacuum cosmological models: linear scalar perturbations

    Energy Technology Data Exchange (ETDEWEB)

    Perico, E.L.D. [Instituto de Física, Universidade de São Paulo, Rua do Matão 1371, CEP 05508-090, São Paulo, SP (Brazil); Tamayo, D.A., E-mail: elduartep@usp.br, E-mail: tamayo@if.usp.br [Departamento de Astronomia, Universidade de São Paulo, Rua do Matão 1226, CEP 05508-900, São Paulo, SP (Brazil)

    2017-08-01

    In cosmology, phenomenologically motivated expressions for running vacuum are commonly parameterized as linear functions, typically denoted by Λ(H²) or Λ(R). Such models assume an equation of state for the vacuum given by P̄_Λ = −ρ̄_Λ, relating its background pressure P̄_Λ with its mean energy density ρ̄_Λ ≡ Λ/8πG. This equation of state suggests that the vacuum dynamics is due to an interaction with the matter content of the universe. Most of the approaches studying the observational impact of these models only consider the interaction between the vacuum and the transient dominant matter component of the universe. We extend such models by assuming that the running vacuum is the sum of independent contributions, namely ρ̄_Λ = Σ_i ρ̄_Λi. Each vacuum component Λ_i is associated and interacting with one of the matter components at both the background and perturbation levels. We derive the evolution equations for the linear scalar vacuum and matter perturbations in those two scenarios, and identify the running-vacuum imprints on the cosmic microwave background anisotropies as well as on the matter power spectrum. In the Λ(H²) scenario the vacuum is coupled with every matter component, whereas the Λ(R) description only leads to a coupling between vacuum and non-relativistic matter, producing different effects on the matter power spectrum.

  19. Effects of Yaw Error on Wind Turbine Running Characteristics Based on the Equivalent Wind Speed Model

    Directory of Open Access Journals (Sweden)

    Shuting Wan

    2015-06-01

    Full Text Available Natural wind is stochastic, characterized by speed and direction that change randomly and frequently. Because of a certain lag in the control system and the yaw body itself, wind turbines cannot be accurately aligned with the wind direction when the wind speed and wind direction change frequently. Thus, wind turbines often suffer from a series of engineering issues during operation, including frequent yaw, vibration overruns and downtime. This paper aims to study the effects of yaw error on wind turbine running characteristics at different wind speeds and control stages by establishing a wind turbine model, a yaw error model and an equivalent wind speed model that includes the wind shear and tower shadow effects. Formulas for the relevant effect coefficients Tc, Sc and Pc were derived. The simulation results indicate that the effects on the aerodynamic torque, rotor speed and power output due to yaw error differ between running stages and that the effect rules for each coefficient are not identical when the yaw error varies. These results may provide theoretical support for optimizing the yaw control strategies for each stage to increase the running stability of wind turbines and the utilization rate of wind energy.

  20. Statistical 3D damage accumulation model for ion implant simulators

    CERN Document Server

    Hernandez-Mangas, J M; Enriquez, L E; Bailon, L; Barbolla, J; Jaraiz, M

    2003-01-01

    A statistical 3D damage accumulation model, based on the modified Kinchin-Pease formula, for ion implant simulation has been included in our physically based ion implantation code. It has only one fitting parameter for electronic stopping and uses 3D electron density distributions for different types of targets including compound semiconductors. Also, a statistical noise reduction mechanism based on the dose division is used. The model has been adapted to be run under parallel execution in order to speed up the calculation in 3D structures. Sequential ion implantation has been modelled including previous damage profiles. It can also simulate the implantation of molecular and cluster projectiles. Comparisons of simulated doping profiles with experimental SIMS profiles are presented. Also comparisons between simulated amorphization and experimental RBS profiles are shown. An analysis of sequential versus parallel processing is provided.

  1. Statistical 3D damage accumulation model for ion implant simulators

    International Nuclear Information System (INIS)

    Hernandez-Mangas, J.M.; Lazaro, J.; Enriquez, L.; Bailon, L.; Barbolla, J.; Jaraiz, M.

    2003-01-01

    A statistical 3D damage accumulation model, based on the modified Kinchin-Pease formula, for ion implant simulation has been included in our physically based ion implantation code. It has only one fitting parameter for electronic stopping and uses 3D electron density distributions for different types of targets including compound semiconductors. Also, a statistical noise reduction mechanism based on the dose division is used. The model has been adapted to be run under parallel execution in order to speed up the calculation in 3D structures. Sequential ion implantation has been modelled including previous damage profiles. It can also simulate the implantation of molecular and cluster projectiles. Comparisons of simulated doping profiles with experimental SIMS profiles are presented. Also comparisons between simulated amorphization and experimental RBS profiles are shown. An analysis of sequential versus parallel processing is provided
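
    For reference, the modified Kinchin-Pease (NRT) formula underlying both records above gives the number of displacements produced by a primary knock-on of damage energy T: zero below the displacement threshold E_d, one up to 2.5 E_d, and 0.8 T / (2 E_d) above. A direct transcription follows; the 15 eV threshold is an illustrative value, not the papers' calibration.

```python
def nrt_displacements(T_dam_eV, E_d_eV=15.0):
    """Displacements per primary knock-on, modified Kinchin-Pease (NRT) form.

    T_dam_eV: damage energy (eV); E_d_eV: displacement threshold energy
    (15 eV assumed here for illustration).
    """
    if T_dam_eV < E_d_eV:
        return 0.0               # sub-threshold: no stable displacement
    if T_dam_eV < 2.5 * E_d_eV:
        return 1.0               # single displacement regime
    return 0.8 * T_dam_eV / (2.0 * E_d_eV)   # cascade regime

for T in (10, 30, 300, 3000):
    print(T, "eV ->", round(nrt_displacements(T), 1), "displacements")
```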

  2. Modeling driver stop/run behavior at the onset of a yellow indication considering driver run tendency and roadway surface conditions.

    Science.gov (United States)

    Elhenawy, Mohammed; Jahangiri, Arash; Rakha, Hesham A; El-Shawarby, Ihab

    2015-10-01

    The ability to model driver stop/run behavior at signalized intersections considering the roadway surface condition is critical in the design of advanced driver assistance systems. Such systems can reduce intersection crashes and fatalities by predicting driver stop/run behavior. The research presented in this paper uses data collected from two controlled field experiments on the Smart Road at the Virginia Tech Transportation Institute (VTTI) to model driver stop/run behavior at the onset of a yellow indication for different roadway surface conditions. The paper offers two contributions. First, it introduces a new predictor related to driver aggressiveness and demonstrates that this measure enhances the modeling of driver stop/run behavior. Second, it applies well-known artificial intelligence techniques including: adaptive boosting (AdaBoost), random forest, and support vector machine (SVM) algorithms as well as traditional logistic regression techniques on the data in order to develop a model that can be used by traffic signal controllers to predict driver stop/run decisions in a connected vehicle environment. The research demonstrates that by adding the proposed driver aggressiveness predictor to the model, there is a statistically significant increase in the model accuracy. Moreover the false alarm rate is significantly reduced but this reduction is not statistically significant. The study demonstrates that, for the subject data, the SVM machine learning algorithm performs the best in terms of optimum classification accuracy and false positive rates. However, the SVM model produces the best performance in terms of the classification accuracy only. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Application of the Kineros model for predicting the effect of land use on the surface run-off Case study in Brantas sub-watershed, Klojen District, Malang City, East Java Province of Indonesia

    Directory of Open Access Journals (Sweden)

    Bisri Mohammad

    2017-12-01

    Full Text Available This study intended to illustrate the distribution of surface run-off. The methodology used the Kineros model (kinematic run-off and erosion model). This model is part of the AGWA program, an extension of the ESRI ArcView GIS software, which serves as a tool for analysing hydrological phenomena in watershed research, simulating the processes of infiltration, run-off depth and erosion in small-scale watersheds (≤100 km2). The procedure was as follows: the run-off depth in the Brantas sub-watershed, Klojen District, was analysed using the Kineros model, based on land use change, for rainfall simulations with return periods of 2, 5, 10 and 25 years. Results show that differences in land use affect the surface run-off, i.e. there is a correlation between land use and surface run-off depth. The maximum surface run-off depth in the year 2000 was 134.26 mm; in 2005 it was 139.36 mm; and in 2010 it was 142.76 mm. There was no significant difference between the Kineros model and the field observations; the relative error was only 9.09%.

  4. Tutorial: Parallel Computing of Simulation Models for Risk Analysis.

    Science.gov (United States)

    Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D

    2016-10-01

    Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
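
    The core pattern the tutorial advocates (independent, embarrassingly parallel replications distributed over CPU cores) can also be written with Python's standard library; the "risk model" below is a trivial placeholder, and the replication count is arbitrary.

```python
import multiprocessing as mp
import random

def one_replication(seed):
    """One independent simulation run: the embarrassingly parallel unit."""
    rng = random.Random(seed)
    # Placeholder risk model: fraction of 10,000 components that fail
    return sum(rng.random() < 0.03 for _ in range(10_000)) / 10_000

if __name__ == "__main__":
    with mp.Pool() as pool:                   # one worker per CPU core
        results = pool.map(one_replication, range(1_000))
    mean = sum(results) / len(results)
    print(f"mean failure fraction over 1000 runs: {mean:.4f}")
```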

  5. Parametric model of ventilators simulated in OpenFOAM and Elmer

    Science.gov (United States)

    Čibera, Václav; Matas, Richard; Sedláček, Jan

    2016-03-01

    The main goal of the presented work was to develop a parametric model of a ventilator for CFD and structural analysis. The whole model was designed and scripted in freely available open-source programmes, in particular OpenFOAM and Elmer. The main script, which runs or generates the other scripts and controls the course of the simulation, was written in the bash scripting language in a Linux environment. Further, the scripts needed for mesh generation and for running a simulation were prepared using the m4 word pre-processor. The use of m4 allowed comfortable set-up of the large number of scripts. Consequently, the mesh was generated for the fluid and solid parts of the ventilator within OpenFOAM. Although OpenFOAM also offers a few tools for structural analysis, the mesh of the solid parts was transferred into the Elmer mesh format with the aim of performing the structural analysis in this software. This paper deals mainly with the part concerning fluid flow through the parametrized geometry with different initial conditions. As an example, two simulations were conducted for the same geometric parameters and mesh but for different angular velocities of ventilator rotation.

  6. Parametric model of ventilators simulated in OpenFOAM and Elmer

    Directory of Open Access Journals (Sweden)

    Čibera Václav

    2016-01-01

    Full Text Available The main goal of the presented work was to develop a parametric model of a ventilator for CFD and structural analysis. The whole model was designed and scripted in freely available open-source programmes, in particular OpenFOAM and Elmer. The main script, which runs or generates the other scripts and controls the course of the simulation, was written in the bash scripting language in a Linux environment. Further, the scripts needed for mesh generation and for running a simulation were prepared using the m4 word pre-processor. The use of m4 allowed comfortable set-up of the large number of scripts. Consequently, the mesh was generated for the fluid and solid parts of the ventilator within OpenFOAM. Although OpenFOAM also offers a few tools for structural analysis, the mesh of the solid parts was transferred into the Elmer mesh format with the aim of performing the structural analysis in this software. This paper deals mainly with the part concerning fluid flow through the parametrized geometry with different initial conditions. As an example, two simulations were conducted for the same geometric parameters and mesh but for different angular velocities of ventilator rotation.

  7. Statistics for long irregular wave run-up on a plane beach from direct numerical simulations

    Science.gov (United States)

    Didenkulova, Ira; Senichev, Dmitry; Dutykh, Denys

    2017-04-01

    Very often for global and transoceanic events, due to the initial wave transformation, refraction, diffraction and multiple reflections from coastal topography and underwater bathymetry, the tsunami approaches the beach as a very long wave train, which can be considered as an irregular wave field. The prediction of possible flooding and of the properties of the water flow on the coast in this case should be done statistically, taking into account the formation of extreme (rogue) tsunami waves on a beach. When it comes to tsunami run-up on a beach, the most used mathematical model is the nonlinear shallow water model. For a beach of constant slope, the nonlinear shallow water equations have a rigorous analytical solution, which substantially simplifies the mathematical formulation. In (Didenkulova et al. 2011) we used this solution to study statistical characteristics of the vertical displacement of the moving shoreline and its horizontal velocity. The influence of wave nonlinearity was approached by considering modifications of the probability distribution of the moving shoreline and its horizontal velocity for waves of different amplitudes. It was shown that wave nonlinearity did not affect the probability distribution of the velocity of the moving shoreline, while the vertical displacement of the moving shoreline was affected substantially, demonstrating the longer duration of coastal floods with an increase in wave nonlinearity. However, this analysis did not take into account the actual transformation of the irregular wave field offshore into oscillations of the moving shoreline on a sloping beach. In this study we cover this gap by means of extensive numerical simulations. The modeling is performed in the framework of the nonlinear shallow water equations, which are solved using a modern shock-capturing finite volume method. Although the shallow water model does not pursue the wave breaking and bore formation in a general sense (including the water surface

  8. Component and system simulation models for High Flux Isotope Reactor

    International Nuclear Information System (INIS)

    Sozer, A.

    1989-08-01

    Component models for the High Flux Isotope Reactor (HFIR) have been developed. The models are HFIR core, heat exchangers, pressurizer pumps, circulation pumps, letdown valves, primary head tank, generic transport delay (pipes), system pressure, loop pressure-flow balance, and decay heat. The models were written in FORTRAN and can be run on different computers, including IBM PCs, as they do not use any specific simulation languages such as ACSL or CSMP. 14 refs., 13 figs

  9. Overcoming Microsoft Excel's Weaknesses for Crop Model Building and Simulations

    Science.gov (United States)

    Sung, Christopher Teh Boon

    2011-01-01

    Using spreadsheets such as Microsoft Excel for building crop models and running simulations can be beneficial. Excel is easy to use, powerful, and versatile, and it requires the least proficiency in computer programming compared to other programming platforms. Excel, however, has several weaknesses: it does not directly support loops for iterative…
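
    The loop support that the abstract says Excel lacks is trivial in a scripting language; a minimal sketch of the kind of daily iteration a crop model needs (all names and values below are illustrative, not from the article):

        # Daily time-step biomass accumulation: the sort of iterative loop
        # that is awkward in plain Excel formulas but natural in code.
        daily_par_mj = [8.0, 7.5, 9.1, 6.8]   # intercepted PAR per day, MJ/m^2 (made up)
        rue = 1.2                             # radiation-use efficiency, g/MJ (made up)
        biomass = 0.0                         # accumulated biomass, g/m^2
        for day, par in enumerate(daily_par_mj, start=1):
            biomass += rue * par
            print(day, round(biomass, 2))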

  10. An Advanced HIL Simulation Battery Model for Battery Management System Testing

    DEFF Research Database (Denmark)

    Barreras, Jorge Varela; Fleischer, Christian; Christensen, Andreas Elkjær

    2016-01-01

    Developers and manufacturers of battery management systems (BMSs) require extensive testing of controller hardware (HW) and software (SW), such as the analog front-end and the performance of generated control code. In comparison with tests conducted on real batteries, tests conducted on a state-of-the-art hardware-in-the-loop (HIL) simulator can be more cost and time effective, easier to reproduce, and safer beyond the normal range of operation, especially at early stages in the development process or during fault insertion. In this paper, an HIL simulation battery model is developed for purposes of BMS testing on a commercial HIL simulator. A multicell electrothermal Li-ion battery (LIB) model is integrated in a system-level simulation. Then, the LIB system model is converted to C code and run in real time with the HIL simulator. Finally, in order to demonstrate the capabilities of the setup
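
    A minimal electrothermal cell sketch in the spirit of the multicell LIB model described above: a first-order Thevenin equivalent circuit with a lumped thermal node, integrated with explicit Euler. All parameter values are placeholders, not the paper's model.

        def cell_step(soc, v_rc, temp, current, dt,
                      q_ah=2.5, r0=0.02, r1=0.015, c1=2000.0,
                      c_th=60.0, h_amb=0.5, t_amb=25.0):
            """One Euler step of a Thevenin-circuit cell with a thermal node.
            current > 0 means discharge; all parameters are illustrative."""
            soc -= current * dt / (q_ah * 3600.0)            # coulomb counting
            v_rc += dt * (current / c1 - v_rc / (r1 * c1))   # RC polarization voltage
            ocv = 3.0 + 1.2 * soc                            # crude linear OCV(SOC)
            v_term = ocv - current * r0 - v_rc               # terminal voltage
            heat = current ** 2 * r0 + current * v_rc        # ohmic + polarization losses
            temp += dt * (heat - h_amb * (temp - t_amb)) / c_th
            return soc, v_rc, temp, v_term

        state = (0.9, 0.0, 25.0)                             # SOC, V_RC, temperature
        for _ in range(600):                                 # 10 minutes at 1 s steps
            *state, v = cell_step(*state, current=2.5, dt=1.0)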

  11. Advanced char burnout models for the simulation of pulverized coal fired boilers

    Energy Technology Data Exchange (ETDEWEB)

    T. Severin; S. Wirtz; V. Scherer [Ruhr-University, Bochum (Germany). Institute of Energy Plant Technology (LEAT)

    2005-07-01

    The numerical simulation of coal combustion processes is widely used as an efficient means to predict burner or system behaviour. In this paper an approach to improve CFD simulations of pulverized coal fired boilers with advanced coal combustion models is presented. In simple coal combustion models, first-order Arrhenius rate equations are used for devolatilization and char burnout. The accuracy of such simple models is sufficient for the basic aspects of heat release. The prediction of carbon-in-ash is one aspect of special interest in the simulation of pulverized coal fired boilers. To determine the carbon-in-ash levels in the fly ash of coal fired furnaces, the char burnout model has to be more detailed. It was tested to what extent changing operating conditions affect the carbon-in-ash prediction of the simulation. To run several test cases in a short time, a simplified cellnet model was applied. To use a cellnet model for simulations of pulverized coal fired boilers, it was coupled with a Lagrangian particle model of the kind used in CFD simulations. 18 refs., 5 figs., 5 tabs.
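
    For contrast with the advanced models, the simple first-order Arrhenius char burnout that the abstract refers to can be written in a few lines; the rate constants below are illustrative placeholders, not fitted values.

        import math

        R_GAS = 8.314  # J/(mol K)

        def char_burnout_rate(m_char, t_particle, a=450.0, e_act=9.0e4):
            """First-order Arrhenius char consumption: dm/dt = -A exp(-E/(R T)) m."""
            return -a * math.exp(-e_act / (R_GAS * t_particle)) * m_char

        # Explicit Euler integration of the remaining char mass of one particle
        m, dt = 1.0e-6, 1.0e-3            # kg and s (illustrative)
        for _ in range(1000):
            m += dt * char_burnout_rate(m, t_particle=1600.0)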

  12. Simulations of the Mid-Pliocene Warm Period Using Two Versions of the NASA-GISS ModelE2-R Coupled Model

    Science.gov (United States)

    Chandler, M. A.; Sohl, L. E.; Jonas, J. A.; Dowsett, H. J.; Kelley, M.

    2013-01-01

    The mid-Pliocene Warm Period (mPWP) bears many similarities to aspects of future global warming as projected by the Intergovernmental Panel on Climate Change (IPCC, 2007). Both marine and terrestrial data point to high-latitude temperature amplification, including large decreases in sea ice and land ice, as well as expansion of warmer climate biomes into higher latitudes. Here we present our most recent simulations of the mid-Pliocene climate using the CMIP5 version of the NASA-GISS Earth System Model (ModelE2-R). We describe the substantial impact associated with a recent correction made in the implementation of the Gent-McWilliams ocean mixing scheme (GM), which has a large effect on the simulation of ocean surface temperatures, particularly in the North Atlantic Ocean. The effect of this correction on the Pliocene climate results would not have been easily determined from examining its impact on the preindustrial runs alone, a useful demonstration of how the consequences of code improvements as seen in modern climate control runs do not necessarily portend the impacts in extreme climates. Both the GM-corrected and GM-uncorrected simulations were contributed to the Pliocene Model Intercomparison Project (PlioMIP) Experiment 2. Many findings presented here corroborate results from other PlioMIP multi-model ensemble papers, but we also emphasize features in the ModelE2-R simulations that are unlike the ensemble means. The corrected version yields results that more closely resemble the ocean core data as well as the PRISM3D reconstructions of the mid-Pliocene, especially the dramatic warming in the North Atlantic and Greenland-Iceland-Norwegian Sea, which in the new simulation appears to be far more realistic than previously found with older versions of the GISS model. Our belief is that continued development of key physical routines in the atmospheric model, along with higher resolution and recent corrections to mixing parameterisations in the ocean model, have led

  13. How reliable is the offline couple of WRF and VIC model? And how does high quality land cover data impact the VIC model simulation?

    Science.gov (United States)

    Tang, C.; Dennis, R. L.

    2012-12-01

    First, the ability of the offline coupling of the Weather Research & Forecasting (WRF) model and the Variable Infiltration Capacity (VIC) model to produce hydrological and climate variables was evaluated. The performance of the offline coupling of WRF and VIC was assessed with respect to key simulated variables through a comparison with the calibrated VIC model simulation. A spatiotemporal comparison of the simulated evaporation (ET), soil moisture (SM), runoff, and baseflow produced by the VIC calibrated run (base data) and by the offline coupling run was conducted. The results showed that the offline coupling of VIC with WRF was able to achieve good agreement in the simulation of monthly and daily soil moisture and monthly evaporation. This suggests the VIC coupling should function without causing a large change in the moisture budget. However, the offline coupling showed most disagreement in daily and monthly runoff and baseflow, which is related to errors in WRF precipitation. Second, the sensitivity of the VIC model to land cover was assessed by performing a sensitivity simulation using the National Land Cover Database (NLCD) instead of the older NLDAS/AVHRR data. The improved land cover is shown to achieve a more accurate simulation of streamflow.

  14. A virtual laboratory notebook for simulation models.

    Science.gov (United States)

    Winfield, A J

    1998-01-01

    In this paper we describe how we have adopted the laboratory notebook as a metaphor for interacting with computer simulation models. This 'virtual' notebook stores the simulation output and meta-data (which is used to record the scientist's interactions with the simulation). The meta-data stored consists of annotations (equivalent to marginal notes in a laboratory notebook), a history tree and a log of user interactions. The history tree structure records when in 'simulation' time, and from what starting point in the tree changes are made to the parameters by the user. Typically these changes define a new run of the simulation model (which is represented as a new branch of the history tree). The tree shows the structure of the changes made to the simulation and the log is required to keep the order in which the changes occurred. Together they form a record which you would normally find in a laboratory notebook. The history tree is plotted in simulation parameter space. This shows the scientist's interactions with the simulation visually and allows direct manipulation of the parameter information presented, which in turn is used to control directly the state of the simulation. The interactions with the system are graphical and usually involve directly selecting or dragging data markers and other graphical control devices around in parameter space. If the graphical manipulators do not provide precise enough control then textual manipulation is still available which allows numerical values to be entered by hand. The Virtual Laboratory Notebook, by providing interesting interactions with the visual view of the history tree, provides a mechanism for giving the user complex and novel ways of interacting with biological computer simulation models.
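
    A skeletal rendering of the history tree described above (our own minimal reading, not the authors' implementation) could look like this:

        import time

        class HistoryNode:
            """One node of the notebook's history tree: the parameter set in
            force from a given simulation time, branching from a parent node."""
            def __init__(self, params, sim_time, parent=None, note=""):
                self.params = dict(params)    # simulation parameters at this node
                self.sim_time = sim_time      # 'simulation' time of the change
                self.note = note              # annotation (marginal note)
                self.parent = parent
                self.children = []
                self.logged_at = time.time()  # wall-clock log preserves the order
                                              # in which changes occurred

            def branch(self, changes, sim_time, note=""):
                """A parameter change defines a new run, i.e. a new branch."""
                child = HistoryNode({**self.params, **changes}, sim_time,
                                    parent=self, note=note)
                self.children.append(child)
                return child

        root = HistoryNode({"growth_rate": 0.10}, sim_time=0.0, note="baseline run")
        run2 = root.branch({"growth_rate": 0.15}, sim_time=30.0, note="faster growth")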

  15. Connectionist agent-based learning in bank-run decision making

    Science.gov (United States)

    Huang, Weihong; Huang, Qiao

    2018-05-01

    It is of utter importance for policy makers, bankers, and investors to thoroughly understand the probability of bank run (PBR), which was often neglected in the classical models. Bank runs are not merely due to miscoordination (Diamond and Dybvig, 1983) or deterioration of bank assets (Allen and Gale, 1998) but to various factors. This paper presents the simulation results of the nonlinear dynamic probabilities of bank runs based on the global games approach, with the distinct assumption that heterogeneous agents hold highly correlated but unidentical beliefs about the true payoffs. The specific technique used in the simulation is to let agents have an integrated cognitive-affective network. It is observed that, even when the economy is good, agents are significantly affected by the cognitive-affective network to react to bad news, which might lead to a bank run. Rises in both the late payoff, R, and the early payoff, r, will decrease the effect of the affective process. Increased risk sharing might or might not increase the PBR, and an increase in the late payoff is beneficial for preventing a bank run. This paper is one of the pioneering works linking agent-based computational economics and behavioral economics.

  16. Building and Running the Yucca Mountain Total System Performance Model in a Quality Environment

    International Nuclear Information System (INIS)

    D.A. Kalinich; K.P. Lee; J.A. McNeish

    2005-01-01

    A Total System Performance Assessment (TSPA) model has been developed to support the Safety Analysis Report (SAR) for the Yucca Mountain High-Level Waste Repository. The TSPA model forecasts repository performance over a 20,000-year simulation period. It has a high degree of complexity due to the complexity of its underlying process and abstraction models. This is reflected in the size of the model (a 27,000 element GoldSim file), its use of dynamic-linked libraries (14 DLLs), the number and size of its input files (659 files totaling 4.7 GB), and the number of model input parameters (2541 input database entries). TSPA model development and subsequent simulations with the final version of the model were performed to a set of Quality Assurance (QA) procedures. Due to the complexity of the model, comments on previous TSPAs, and the number of analysts involved (22 analysts in seven cities across four time zones), additional controls for the entire life-cycle of the TSPA model, including management, physical, model change, and input controls, were developed and documented. These controls did not replace the QA procedures; rather, they provided guidance for implementing the requirements of the QA procedures with the specific intent of ensuring that the model development process and the simulations performed with the final version of the model had sufficient checking, traceability, and transparency. Management controls were developed to ensure that only management-approved changes were implemented into the TSPA model and that only management-approved model runs were performed. Physical controls were developed to track the use of prototype software and preliminary input files, and to ensure that only qualified software and inputs were used in the final version of the TSPA model. In addition, a system was developed to name, file, and track development versions of the TSPA model as well as simulations performed with the final version of the model

  17. Run-time calibration of simulation models by integrating remote sensing estimates of leaf area index and canopy nitrogen

    NARCIS (Netherlands)

    Jongschaap, R.E.E.

    2006-01-01

    Dynamic simulation models may enable farmers to evaluate crop and soil management strategies, or may trigger crop and soil management actions if they are used as warning systems, e.g. for drought risk and for nutrient shortage. Predictions by simulation models may differ from field

  18. An improved ENSO simulation by representing chlorophyll-induced climate feedback in the NCAR Community Earth System Model.

    Science.gov (United States)

    Kang, Xianbiao; Zhang, Rong-Hua; Gao, Chuan; Zhu, Jieshun

    2017-12-07

    The El Niño-Southern Oscillation (ENSO) simulated in the Community Earth System Model of the National Center for Atmospheric Research (NCAR CESM) is much stronger than in reality. Here, satellite data are used to derive a statistical relationship between interannual variations in oceanic chlorophyll (CHL) and sea surface temperature (SST), which is then incorporated into the CESM to represent oceanic chlorophyll-induced climate feedback in the tropical Pacific. Numerical runs with and without the feedback (referred to as feedback and non-feedback runs) are performed and compared with each other. The ENSO amplitude simulated in the feedback run is more accurate than that in the non-feedback run; quantitatively, the Niño3 SST index is reduced by 35% when the feedback is included. The underlying processes are analyzed and the results show that interannual CHL anomalies exert a systematic modulating effect on the solar radiation penetrating into the subsurface layers, which induces differential heating in the upper ocean that affects vertical mixing and thus SST. The statistical modeling approach proposed in this work offers an effective and economical way for improving climate simulations.

  19. Computing Models of CDF and D0 in Run II

    International Nuclear Information System (INIS)

    Lammel, S.

    1997-05-01

    The next collider run of the Fermilab Tevatron, Run II, is scheduled for autumn of 1999. Both experiments, the Collider Detector at Fermilab (CDF) and the D0 experiment are being modified to cope with the higher luminosity and shorter bunch spacing of the Tevatron. New detector components, higher event complexity, and an increased data volume require changes from the data acquisition systems up to the analysis systems. In this paper we present a summary of the computing models of the two experiments for Run II

  20. Computing Models of CDF and D0 in Run II

    International Nuclear Information System (INIS)

    Lammel, S.

    1997-01-01

    The next collider run of the Fermilab Tevatron, Run II, is scheduled for autumn of 1999. Both experiments, the Collider Detector at Fermilab (CDF) and the D0 experiment are being modified to cope with the higher luminosity and shorter bunch spacing of the Tevatron. New detector components, higher event complexity, and an increased data volume require changes from the data acquisition systems up to the analysis systems. In this paper we present a summary of the computing models of the two experiments for Run II

  1. SAPS simulation with GITM/UCLA-RCM coupled model

    Science.gov (United States)

    Lu, Y.; Deng, Y.; Guo, J.; Zhang, D.; Wang, C. P.; Sheng, C.

    2017-12-01

    Ion velocity in the sub-auroral region observed by satellites in storm time often shows a significant westward component. This high-speed westward stream is distinct from the convection pattern. Such events are called sub-auroral polarization streams (SAPS). In the March 17th 2013 storm, the DMSP F18 satellite observed several SAPS cases when crossing the sub-auroral region. In this study, the Global Ionosphere Thermosphere Model (GITM) has been coupled to the UCLA-RCM model to simulate the impact of SAPS during the March 2013 event on the ionosphere/thermosphere. The particle precipitation and electric field from RCM have been used to drive GITM. The conductance calculated from GITM is fed back to RCM to make the coupling self-consistent. A comparison of GITM simulations with different SAPS specifications will be conducted. The neutral wind from the simulation will be compared with the GOCE satellite. The comparison between runs with SAPS and without SAPS will separate the effect of SAPS from others and illustrate the impact on the TIDs/TADs propagating in both poleward and equatorward directions.

  2. Virtual Systems Pharmacology (ViSP) software for mechanistic system-level model simulations

    Directory of Open Access Journals (Sweden)

    Sergey eErmakov

    2014-10-01

    Full Text Available Multiple software programs are available for designing and running large-scale system-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs and so on. It is therefore desirable to have a single platform that allows setting up and running large-scale simulations for models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time the full model specifics are preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database has been implemented to store and manage all aspects of the system, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients.

  3. WRF Mesoscale Pre-Run for the Wind Atlas of Mexico

    OpenAIRE

    Hahmann, Andrea N.; Pena Diaz, Alfredo; Hansen, Jens Carsten

    2016-01-01

    This report documents the work performed by DTU Wind Energy for the project “Atlas Eólico Mexicano”, or the Wind Atlas of Mexico. This document reports on the methods used in the “Pre-run” of the wind-mapping project for Mexico. The interim mesoscale modeling results were calculated from the output of simulations using the Weather Research and Forecasting (WRF) model. We document the method used to run the mesoscale simulations and to generalize the WRF model wind climatologies. A separate section...

  4. Co-simulation of dynamic systems in parallel and serial model configurations

    International Nuclear Information System (INIS)

    Sweafford, Trevor; Yoon, Hwan Sik

    2013-01-01

    Recent advancements in simulation software and computation hardware make it feasible to simulate complex dynamic systems comprised of multiple submodels developed in different modeling languages. This so-called co-simulation enables one to study various aspects of a complex dynamic system with heterogeneous submodels in a cost-effective manner. Among several different model configurations for co-simulation, the synchronized parallel configuration is regarded to expedite the simulation process by simulating multiple submodels concurrently on a multicore processor. In this paper, computational accuracy as well as computation time are studied for three different co-simulation frameworks: integrated, serial, and parallel. For this purpose, analytical evaluations of the three different methods are made using the explicit Euler method, and then they are applied to two-DOF mass-spring systems. The results show that while the parallel simulation configuration produces the same accurate results as the integrated configuration, the results of the serial configuration show a slight deviation. It is also shown that the computation time can be reduced by running the simulation in the parallel configuration. Therefore, it can be concluded that the synchronized parallel simulation methodology is the best for both simulation accuracy and time efficiency.

  5. Neural network-based run-to-run controller using exposure and resist thickness adjustment

    Science.gov (United States)

    Geary, Shane; Barry, Ronan

    2003-06-01

    This paper describes the development of a run-to-run control algorithm using a feedforward neural network, trained using the backpropagation training method. The algorithm is used to predict the critical dimension of the next lot using previous lot information. It is compared to a common prediction algorithm - the exponentially weighted moving average (EWMA) and is shown to give superior prediction performance in simulations. The manufacturing implementation of the final neural network showed significantly improved process capability when compared to the case where no run-to-run control was utilised.
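
    For reference, the EWMA baseline that the neural network is compared against is a one-line update; a minimal sketch with an illustrative smoothing weight:

        def ewma_predict(prev_prediction, measured_cd, weight=0.3):
            """Run-to-run EWMA prediction: the next-lot critical dimension
            estimate blends the latest measurement with the running history."""
            return weight * measured_cd + (1.0 - weight) * prev_prediction

        prediction = 250.0                       # nm, initial CD estimate (made up)
        for cd in (252.0, 249.5, 251.0):         # measured CDs of successive lots
            prediction = ewma_predict(prediction, cd)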

  6. Simple Urban Simulation Atop Complicated Models: Multi-Scale Equation-Free Computing of Sprawl Using Geographic Automata

    Directory of Open Access Journals (Sweden)

    Yu Zou

    2013-07-01

    Full Text Available Reconciling competing desires to build urban models that can be simple and complicated is something of a grand challenge for urban simulation. It also prompts difficulties in many urban policy situations, such as urban sprawl, where simple, actionable ideas may need to be considered in the context of the messily complex and complicated urban processes and phenomena that work within cities. In this paper, we present a novel architecture for achieving both simple and complicated realizations of urban sprawl in simulation. Fine-scale simulations of sprawl geography are run using geographic automata to represent the geographical drivers of sprawl in intricate detail and over fine resolutions of space and time. We use Equation-Free computing to deploy population as a coarse observable of sprawl, which can be leveraged to run automata-based models as short-burst experiments within a meta-simulation framework.
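
    The Equation-Free recipe named above (run a short burst of the detailed model, estimate the time derivative of the coarse observable, then leap forward) has a generic form; in this sketch the micro-model is a toy stand-in, not the geographic automata of the paper.

        import numpy as np

        def coarse_projective_step(pop, micro_step, dt_micro, n_burst, dt_jump):
            """Equation-Free projective integration on a coarse observable
            (here, population per zone)."""
            burst = [pop]
            for _ in range(n_burst):                    # short fine-scale burst
                burst.append(micro_step(burst[-1], dt_micro))
            dpop_dt = (burst[-1] - burst[0]) / (n_burst * dt_micro)
            return burst[-1] + dt_jump * dpop_dt        # one large projective leap

        micro = lambda p, dt: p + dt * 0.02 * p * (1 - p / 1000.0)  # toy dynamics
        pop = np.array([120.0, 300.0, 80.0])
        pop = coarse_projective_step(pop, micro, dt_micro=0.1, n_burst=10, dt_jump=5.0)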

  7. Intercomparison of Streamflow Simulations between WRF-Hydro and Hydrology Laboratory-Research Distributed Hydrologic Model Frameworks

    Science.gov (United States)

    KIM, J.; Smith, M. B.; Koren, V.; Salas, F.; Cui, Z.; Johnson, D.

    2017-12-01

    The National Oceanic and Atmospheric Administration (NOAA)-National Weather Service (NWS) developed the Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM) framework as an initial step towards spatially distributed modeling at River Forecast Centers (RFCs). Recently, the NOAA/NWS worked with the National Center for Atmospheric Research (NCAR) to implement the National Water Model (NWM) for nationally consistent water resources prediction. The NWM is based on the WRF-Hydro framework and is run at a 1 km spatial resolution and 1-hour time step over the contiguous United States (CONUS) and contributing areas in Canada and Mexico. In this study, we compare streamflow simulations from HL-RDHM and WRF-Hydro to observations from 279 USGS stations. For streamflow simulations, HL-RDHM is run on 4 km grids with a temporal resolution of 1 hour for a 5-year period (Water Years 2008-2012), using a priori parameters provided by NOAA-NWS. The WRF-Hydro streamflow simulations for the same time period are extracted from NCAR's 23-year retrospective run of the NWM (version 1.0) over CONUS based on 1 km grids. We chose 279 USGS stations which are relatively less affected by dams or reservoirs, in the domains of six different RFCs. We use the daily average values of simulations and observations for the convenience of comparison. The main purpose of this research is to evaluate how HL-RDHM and WRF-Hydro perform at USGS gauge stations. We compare daily time series of observations and both simulations, and calculate the error values using a variety of error functions. Using these plots and error values, we evaluate the performances of the HL-RDHM and WRF-Hydro models. Our results show a mix of model performance across geographic regions.
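
    The abstract does not name its error functions; one standard choice for such streamflow comparisons is the Nash-Sutcliffe efficiency, sketched below as an illustration only.

        import numpy as np

        def nash_sutcliffe(obs, sim):
            """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better
            than the observed mean, and negative values are worse than it."""
            obs, sim = np.asarray(obs), np.asarray(sim)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        nse = nash_sutcliffe([12.0, 30.0, 22.0], [10.5, 28.0, 25.0])  # made-up flows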

  8. Real-time modelling and simulation of an active power filter

    Energy Technology Data Exchange (ETDEWEB)

    Beaulieu, S.; Ouhrouche, M. [Quebec Univ., Chicoutimi, PQ (Canada); Dufour, C.; Allaire, P.F. [Opal RT Technologies Inc., Montreal, PQ (Canada)

    2007-07-01

    Power electronics converters generate harmonics and cause electromagnetic compatibility problems. Active power filter (APF) technology has advanced to the point that it can compensate for harmonics in electrical networks and provide reactive power and neutral current in AC networks. This paper presented a contribution in the design of a shunt APF for harmonics compensation in real-time simulation using the RT-LAB software package running on a simple personal computer. Real-time simulations were performed to validate the effectiveness of the proposed model. Several high-tech industries have adopted this tool for rapid control prototyping and for Hardware-in-the-Loop applications. The switching signals of the APF are determined by the hysteresis band current controller. The suitable current reference signals were determined by the algorithm based on synchronous reference frame. Real-time simulation runs showed good performance in harmonics compensation, thus satisfying the requirements of IEEE Standard 519-1992. The rate of total harmonic distortion for the source current decreased from 30 to 5 per cent. 12 refs., 1 tab., 9 figs.
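
    The hysteresis band current controller mentioned above reduces to a simple per-phase switching rule; a schematic sketch with an illustrative band width, not the paper's implementation:

        def hysteresis_switch(i_ref, i_actual, switch_state, band=0.5):
            """Hysteresis-band current control: toggle the inverter leg only
            when the current error leaves the tolerance band."""
            error = i_ref - i_actual
            if error > band:         # current too low: switch the leg high
                return True
            if error < -band:        # current too high: switch the leg low
                return False
            return switch_state      # inside the band: keep the previous state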

  9. Simulation of accelerated strip cooling on the hot rolling mill run-out roller table

    Directory of Open Access Journals (Sweden)

    E. Makarov

    2016-07-01

    Full Text Available A mathematical model of the thermal state of the metal on the run-out roller table of a continuous wide hot strip mill is presented. The mathematical model takes into account heat generation due to the polymorphic γ → α transformation of supercooled austenite and the influence of the chemical composition of the steel on the physical properties of the metal. The model allows the calculation of accelerated cooling modes for strips on the run-out roller table of a continuous wide hot strip mill. The coiling temperature calculation error does not exceed 20°C for 98.5% of strips of low-carbon and low-alloy steels

  10. Next-Generation Climate Modeling Science Challenges for Simulation, Workflow and Analysis Systems

    Science.gov (United States)

    Koch, D. M.; Anantharaj, V. G.; Bader, D. C.; Krishnan, H.; Leung, L. R.; Ringler, T.; Taylor, M.; Wehner, M. F.; Williams, D. N.

    2016-12-01

    We will present two examples of current and future high-resolution climate-modeling research that are challenging existing simulation run-time I/O, model-data movement, storage and publishing, and analysis. In each case, we will consider lessons learned as current workflow systems are broken by these large-data science challenges, as well as strategies to repair or rebuild the systems. First we consider the science and workflow challenges to be posed by the CMIP6 multi-model HighResMIP, involving around a dozen modeling groups performing quarter-degree simulations, in 3-member ensembles for 100 years, with high-frequency (1-6 hourly) diagnostics, which is expected to generate over 4PB of data. An example of science derived from these experiments will be to study how resolution affects the ability of models to capture extreme-events such as hurricanes or atmospheric rivers. Expected methods to transfer (using parallel Globus) and analyze (using parallel "TECA" software tools) HighResMIP data for such feature-tracking by the DOE CASCADE project will be presented. A second example will be from the Accelerated Climate Modeling for Energy (ACME) project, which is currently addressing challenges involving multiple century-scale coupled high resolution (quarter-degree) climate simulations on DOE Leadership Class computers. ACME is anticipating production of over 5PB of data during the next 2 years of simulations, in order to investigate the drivers of water cycle changes, sea-level-rise, and carbon cycle evolution. The ACME workflow, from simulation to data transfer, storage, analysis and publication will be presented. Current and planned methods to accelerate the workflow, including implementing run-time diagnostics, and implementing server-side analysis to avoid moving large datasets will be presented.

  11. Do we need full mesoscale models to simulate the urban heat island? A study over the city of Barcelona.

    Science.gov (United States)

    García-Díez, Markel; Ballester, Joan; De Ridder, Koen; Hooyberghs, Hans; Lauwaet, Dirk; Rodó, Xavier

    2016-04-01

    As most of the population lives in urban environments, the simulation of the urban climate has become an important part of global climate change impact assessment. However, due to the high resolution required, these simulations demand a large amount of computational resources. Here we present a comparison between a simplified fast urban climate model (UrbClim) and a widely used full mesoscale model, the Weather Research and Forecasting (WRF) model, over the city of Barcelona. In order to check the advantages and disadvantages of each approach, both simulations were compared with station data and with land surface temperature observations retrieved by satellites, focusing on the urban heat island. The effect of changing the UrbClim boundary conditions was studied too, by using low-resolution global reanalysis data (70 km) and a higher-resolution forecast model (15 km). Finally, a strict comparison of the computational resources consumed by both models was carried out. Results show that, generally, the performance of the simple model is comparable to or better than that of the mesoscale model. The exceptions are the winds and the day-to-day correlation in the reanalysis-driven run, but these problems disappear when taking the boundary conditions from a higher-resolution global model. UrbClim was found to run 133 times faster than WRF, at 4 times higher resolution, and thus it is an efficient solution for running long climate change simulations over large city ensembles.

  12. Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond

    International Nuclear Information System (INIS)

    Bonacorsi, D; Neri, M; Boccali, T; Giordano, D; Girone, M; Magini, N; Kuznetsov, V; Wildish, T

    2015-01-01

    During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collisions data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the WorldWide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulation of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This is of great importance, given that the scale of the computing problem will increase far faster than the resources available to the experiments, for Run-2 and beyond. Studying data-access patterns involves the validation of the quality of the monitoring data collected on the “popularity” of each dataset, the analysis of the frequency and pattern of accesses to different datasets by analysis end-users, the exploration of different views of the popularity data (by physics activity, by region, by data type), the study of the evolution of Run-1 data exploitation over time, the evaluation of the impact of different data placement and distribution choices on the available network and storage resources and their impact on the computing operations. This work presents some insights from studies on the popularity data from the CMS experiment. We present the properties of a range of physics analysis activities as seen by the data popularity, and make recommendations for

  13. Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond

    Science.gov (United States)

    Bonacorsi, D.; Boccali, T.; Giordano, D.; Girone, M.; Neri, M.; Magini, N.; Kuznetsov, V.; Wildish, T.

    2015-12-01

    During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collisions data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the WorldWide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulation of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This is of great importance, given that the scale of the computing problem will increase far faster than the resources available to the experiments, for Run-2 and beyond. Studying data-access patterns involves the validation of the quality of the monitoring data collected on the “popularity” of each dataset, the analysis of the frequency and pattern of accesses to different datasets by analysis end-users, the exploration of different views of the popularity data (by physics activity, by region, by data type), the study of the evolution of Run-1 data exploitation over time, the evaluation of the impact of different data placement and distribution choices on the available network and storage resources and their impact on the computing operations. This work presents some insights from studies on the popularity data from the CMS experiment. We present the properties of a range of physics analysis activities as seen by the data popularity, and make recommendations for

  14. A regional climate model for northern Europe: model description and results from the downscaling of two GCM control simulations

    Science.gov (United States)

    Rummukainen, M.; Räisänen, J.; Bringfelt, B.; Ullerstig, A.; Omstedt, A.; Willén, U.; Hansson, U.; Jones, C.

    This work presents a regional climate model, the Rossby Centre regional Atmospheric model (RCA1), recently developed from the High Resolution Limited Area Model (HIRLAM). The changes in the HIRLAM parametrizations, necessary for climate-length integrations, are described. A regional Baltic Sea ocean model and a modeling system for the Nordic inland lake systems have been coupled with RCA1. The coupled system has been used to downscale 10-year time slices from two different general circulation model (GCM) simulations to provide high-resolution regional interpretation of large-scale modeling. A selection of the results from the control runs, i.e. the present-day climate simulations, are presented: large-scale free atmospheric fields, the surface temperature and precipitation results and results for the on-line simulated regional ocean and lake surface climates. The regional model modifies the surface climate description compared to the GCM simulations, but it is also substantially affected by the biases in the GCM simulations. The regional model also improves the representation of the regional ocean and the inland lakes, compared to the GCM results.

  15. A regional climate model for northern Europe: model description and results from the downscaling of two GCM control simulations

    Energy Technology Data Exchange (ETDEWEB)

    Rummukainen, M.; Raeisaenen, J.; Bringfelt, B.; Ullerstig, A.; Omstedt, A.; Willen, U.; Hansson, U.; Jones, C. [Rossby Centre, Swedish Meteorological and Hydrological Inst., Norrkoeping (Sweden)

    2001-03-01

    This work presents a regional climate model, the Rossby Centre regional Atmospheric model (RCA1), recently developed from the High Resolution Limited Area Model (HIRLAM). The changes in the HIRLAM parametrizations, necessary for climate-length integrations, are described. A regional Baltic Sea ocean model and a modeling system for the Nordic inland lake systems have been coupled with RCA1. The coupled system has been used to downscale 10-year time slices from two different general circulation model (GCM) simulations to provide high-resolution regional interpretation of large-scale modeling. A selection of the results from the control runs, i.e. the present-day climate simulations, are presented: large-scale free atmospheric fields, the surface temperature and precipitation results and results for the on-line simulated regional ocean and lake surface climates. The regional model modifies the surface climate description compared to the GCM simulations, but it is also substantially affected by the biases in the GCM simulations. The regional model also improves the representation of the regional ocean and the inland lakes, compared to the GCM results. (orig.)

  16. Thermally-aware composite run-time CPU power models

    OpenAIRE

    Walker, Matthew J.; Diestelhorst, Stephan; Hansson, Andreas; Balsamo, Domenico; Merrett, Geoff V.; Al-Hashimi, Bashir M.

    2016-01-01

    Accurate and stable CPU power modelling is fundamental in modern system-on-chips (SoCs) for two main reasons: 1) they enable significant online energy savings by providing a run-time manager with reliable power consumption data for controlling CPU energy-saving techniques; 2) they can be used as accurate and trusted reference models for system design and exploration. We begin by showing the limitations in typical performance monitoring counter (PMC) based power modelling approaches and illust...
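
    A typical PMC-based power model of the kind this abstract examines is a linear regression from counter rates to measured power; a minimal sketch with invented counters and readings:

        import numpy as np

        # Rows are observations; columns are illustrative PMC event rates
        # (instructions retired, cache misses) plus a voltage-frequency term.
        pmc = np.array([[1.2e9, 3.0e6, 1.1],
                        [2.0e9, 8.0e6, 1.3],
                        [0.7e9, 1.0e6, 0.9],
                        [1.6e9, 5.0e6, 1.2],
                        [0.9e9, 2.0e6, 1.0]])
        power_w = np.array([1.9, 3.1, 1.2, 2.5, 1.5])   # measured watts (made up)

        # Ordinary least-squares fit of per-event weights plus an intercept.
        X = np.hstack([pmc, np.ones((len(pmc), 1))])
        weights, *_ = np.linalg.lstsq(X, power_w, rcond=None)
        predicted_w = X @ weights                        # run-time power estimate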

  17. 10 km running performance predicted by a multiple linear regression model with allometrically adjusted variables.

    Science.gov (United States)

    Abad, Cesar C C; Barros, Ronaldo V; Bertuzzi, Romulo; Gagliardi, João F L; Lima-Silva, Adriano E; Lambert, Mike I; Pires, Flavio O

    2016-06-01

    The aim of this study was to verify the power of VO2max, peak treadmill running velocity (PTV), and running economy (RE), unadjusted or allometrically adjusted, in predicting 10 km running performance. Eighteen male endurance runners performed: 1) an incremental test to exhaustion to determine VO2max and PTV; 2) a constant submaximal run at 12 km·h⁻¹ on an outdoor track for RE determination; and 3) a 10 km running race. Unadjusted (VO2max, PTV and RE) and adjusted variables (VO2max^0.72, PTV^0.72 and RE^0.60) were investigated through independent multiple regression models to predict 10 km running race time. There were no significant correlations between 10 km running time and either the adjusted or unadjusted VO2max. Significant correlations (p < 0.05) were found for PTV and RE, with r > 0.84 and power > 0.88. The allometrically adjusted predictive model was composed of PTV^0.72 and RE^0.60 and explained 83% of the variance in 10 km running time with a standard error of the estimate (SEE) of 1.5 min. The unadjusted model, composed of PTV alone, accounted for 72% of the variance in 10 km running time (SEE of 1.9 min). Both regression models provided powerful estimates of 10 km running time; however, the unadjusted PTV may provide an uncomplicated estimation.
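
    The allometrically adjusted model above amounts to ordinary least squares on power-transformed predictors; a sketch using the exponents reported in the abstract but entirely made-up runner data:

        import numpy as np

        ptv = np.array([19.0, 20.5, 18.0, 21.0, 19.8, 17.5])     # km/h (made up)
        re = np.array([210.0, 195.0, 220.0, 190.0, 205.0, 225.0])
        t10k = np.array([38.5, 35.9, 40.8, 34.7, 37.2, 42.0])    # minutes (made up)

        # Allometric adjustment as in the abstract: PTV^0.72 and RE^0.60.
        X = np.column_stack([ptv ** 0.72, re ** 0.60, np.ones_like(ptv)])
        coef, *_ = np.linalg.lstsq(X, t10k, rcond=None)
        predicted = X @ coef    # the paper reports 83% explained variance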

  18. DynaSim: A MATLAB Toolbox for Neural Modeling and Simulation.

    Science.gov (United States)

    Sherfey, Jason S; Soplata, Austin E; Ardid, Salva; Roberts, Erik A; Stanley, David A; Pittman-Polletta, Benjamin R; Kopell, Nancy J

    2018-01-01

    DynaSim is an open-source MATLAB/GNU Octave toolbox for rapid prototyping of neural models and batch simulation management. It is designed to speed up and simplify the process of generating, sharing, and exploring network models of neurons with one or more compartments. Models can be specified by equations directly (similar to XPP or the Brian simulator) or by lists of predefined or custom model components. The higher-level specification supports arbitrarily complex population models and networks of interconnected populations. DynaSim also includes a large set of features that simplify exploring model dynamics over parameter spaces, running simulations in parallel using both multicore processors and high-performance computer clusters, and analyzing and plotting large numbers of simulated data sets in parallel. It also includes a graphical user interface (DynaSim GUI) that supports full functionality without requiring user programming. The software has been implemented in MATLAB to enable advanced neural modeling using MATLAB, given its popularity and a growing interest in modeling neural systems. The design of DynaSim incorporates a novel schema for model specification to facilitate future interoperability with other specifications (e.g., NeuroML, SBML), simulators (e.g., NEURON, Brian, NEST), and web-based applications (e.g., Geppetto) outside MATLAB. DynaSim is freely available at http://dynasimtoolbox.org. This tool promises to reduce barriers for investigating dynamics in large neural models, facilitate collaborative modeling, and complement other tools being developed in the neuroinformatics community.

  19. Modelling the long-run supply of coal

    International Nuclear Information System (INIS)

    Steenblik, R.P.

    1992-01-01

    There are many issues facing policy-makers in the fields of energy and the environment that require knowledge of coal supply and cost. Such questions arise in relation to decisions concerning, for example, the discontinuation of subsidies, or the effects of new environmental laws. The very complexity of these questions makes them suitable for analysis by models. Indeed, models have been used for analysing the behaviour of coal markets and the effects of public policies on them for many years. For estimating short-term responses econometric models are the most suitable. For estimating the supply of coal over the longer term, however - i.e., coal that would come from mines as yet not developed - depletion has to be taken into account. Underlying the normal supply curve relating cost to the rate of production is a curve that increases with cumulative production - what mineral economists refer to as the potential supply curve. To derive such a curve requires at some point in the analysis using process-oriented modelling techniques. Because coal supply curves can convey so succinctly information about the resource's long-run supply potential and costs, they have been influential in several major public debates on energy policy. And, within the coal industry itself, they have proved to be powerful tools for undertaking market research and long-range planning. The purpose of this paper is to describe in brief the various approaches that have been used to model long-run coal supply, to highlight their strengths, and to identify areas in which further progress is needed. (author)

  20. Automated Object-Oriented Simulation Framework for Modelling of Superconducting Magnets at CERN

    CERN Document Server

    Maciejewski, Michał; Bartoszewicz, Andrzej

    The thesis aims at designing a flexible, extensible, user-friendly interface to model electrothermal transients occurring in superconducting magnets. Simulations are a fundamental tool for assessing the performance of a magnet and its protection system against the effects of a quench. The application is created using a scalable and modular architecture based on the object-oriented programming paradigm, which opens an easy way for future extensions. Moreover, each model, composed of thousands of blocks, is automatically created in MATLAB/Simulink. Additionally, the user is able to automatically run sets of simulations with varying parameters. Due to its scalability and modularity, the framework can be easily used to simulate a wide range of materials and magnet configurations.

  1. Uterus models for use in virtual reality hysteroscopy simulators.

    Science.gov (United States)

    Niederer, Peter; Weiss, Stephan; Caduff, Rosmarie; Bajka, Michael; Szekély, Gabor; Harders, Matthias

    2009-05-01

    Virtual reality models of human organs are needed in surgery simulators which are developed for educational and training purposes. A simulation can only be useful, however, if the mechanical performance of the system in terms of force-feedback for the user as well as the visual representation is realistic. We therefore aim at developing a mechanical computer model of the organ in question which yields realistic force-deformation behavior under virtual instrument-tissue interactions and which, in particular, runs in real time. The modeling of the human uterus is described as it is to be implemented in a simulator for minimally invasive gynecological procedures. To this end, anatomical information which was obtained from specially designed computed tomography and magnetic resonance imaging procedures as well as constitutive tissue properties recorded from mechanical testing were used. In order to achieve real-time performance, the combination of mechanically realistic numerical uterus models of various levels of complexity with a statistical deformation approach is suggested. In view of mechanical accuracy of such models, anatomical characteristics including the fiber architecture along with the mechanical deformation properties are outlined. In addition, an approach to make this numerical representation potentially usable in an interactive simulation is discussed. The numerical simulation of hydrometra is shown in this communication. The results were validated experimentally. In order to meet the real-time requirements and to accommodate the large biological variability associated with the uterus, a statistical modeling approach is demonstrated to be useful.

  2. Simulation of Mercury's magnetosheath with a combined hybrid-paraboloid model

    Science.gov (United States)

    Parunakian, David; Dyadechkin, Sergey; Alexeev, Igor; Belenkaya, Elena; Khodachenko, Maxim; Kallio, Esa; Alho, Markku

    2017-08-01

    In this paper we introduce a novel approach for modeling planetary magnetospheres that involves a combination of the hybrid model and the paraboloid magnetosphere model (PMM); we further refer to it as the combined hybrid model. While both of these individual models have been successfully applied in the past, their combination enables us both to overcome the traditional difficulties of hybrid models to develop a self-consistent magnetic field and to compensate the lack of plasma simulation in the PMM. We then use this combined model to simulate Mercury's magnetosphere and investigate the geometry and configuration of Mercury's magnetosheath controlled by various conditions in the interplanetary medium. The developed approach provides a unique comprehensive view of Mercury's magnetospheric environment for the first time. Using this setup, we compare the locations of the bow shock and the magnetopause as determined by simulations with the locations predicted by stand-alone PMM runs and also verify the magnetic and dynamic pressure balance at the magnetopause. We also compare the results produced by these simulations with observational data obtained by the magnetometer on board the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft along a dusk-dawn orbit and discuss the signatures of the magnetospheric features that appear in these simulations. Overall, our analysis suggests that combining the semiempirical PMM with a self-consistent global kinetic model creates new modeling possibilities which individual models cannot provide on their own.

  3. A Lookahead Behavior Model for Multi-Agent Hybrid Simulation

    Directory of Open Access Journals (Sweden)

    Mei Yang

    2017-10-01

    Full Text Available In the military field, multi-agent simulation (MAS) plays an important role in studying wars statistically. For a military simulation system, which involves large-scale entities and generates a very large number of interactions during the runtime, the issue of how to improve the running efficiency is of great concern for researchers. Current solutions mainly use hybrid simulation to gain fewer updates and synchronizations, where some important continuous models are maintained implicitly to keep the system dynamics, and partial resynchronization (PR) is chosen as the preferable state update mechanism. However, problems, such as resynchronization interval selection and cyclic dependency, remain unsolved in PR, which easily lead to low update efficiency and infinite looping of the state update process. To address these problems, this paper proposes a lookahead behavior model (LBM) to implement a PR-based hybrid simulation. In LBM, a minimal safe time window is used to predict the interactions between implicit models, upon which the resynchronization interval can be efficiently determined. Moreover, the LBM gives an estimated state value in the lookahead process so as to break the state-dependent cycle. The simulation results show that, compared with traditional mechanisms, LBM requires fewer updates and synchronizations.
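
    The paper does not spell out its window formula; one generic reading of a "minimal safe time window" for rate-bounded implicit models (our assumption, for illustration only) is the earliest time at which any model could reach an interaction threshold:

        def safe_time_window(states, thresholds, max_rates):
            """No implicit model can reach its interaction threshold sooner
            than this, so no resynchronization is needed before it."""
            return min(abs(th - s) / r
                       for s, th, r in zip(states, thresholds, max_rates))

        dt_resync = safe_time_window(states=[0.2, 5.0], thresholds=[1.0, 8.0],
                                     max_rates=[0.4, 1.5])   # made-up values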

  4. Modelling and simulation of floods in alpine catchments equipped with complex hydropower schemes

    OpenAIRE

    Bieri, Martin; Schleiss, Anton; Frankhauser, A.

    2010-01-01

    The simulation of run-off in an alpine catchment area equipped with complex hydropower schemes is presented with the help of a specially developed tool called Routing System, which can combine hydrological modelling and the operation of hydraulic elements. In the hydrological forecasting tool, three-dimensional rainfall, temperature and evapotranspiration distributions are taken into account for simulating the dominant hydrological processes, such as glacier melt, snow pack constitution and melt, soil in...

  5. Analysis of the Automobile Market : Modeling the Long-Run Determinants of the Demand for Automobiles : Volume 2. Simulation Analysis Using the Wharton EFA Automobile Demand Model

    Science.gov (United States)

    1979-12-01

    An econometric model is developed which provides long-run policy analysis and forecasting of annual trends, for U.S. auto stock, new sales, and their composition by auto size-class. The concept of "desired" (equilibrium) stock is introduced. "Desired...

  6. The Point Lepreau Desktop Simulator

    International Nuclear Information System (INIS)

    MacLean, M.; Hogg, J.; Newman, H.

    1997-01-01

    The Point Lepreau Desktop Simulator runs plant process modeling software on a 266 MHz single CPU DEC Alpha computer. This same Alpha also runs the plant control computer software on an SSCI 125 emulator. An adjacent Pentium PC runs the simulator's Instructor Facility software, and communicates with the Alpha through an Ethernet. The Point Lepreau Desktop simulator is constructed to be as similar as possible to the Point Lepreau full scope training simulator. This minimizes total maintenance costs and enhances the benefits of the desktop simulator. Both simulators have the same modeling running on a single CPU in the same schedule of calculations. Both simulators have the same Instructor Facility capable of developing and executing the same lesson plans, doing the same monitoring and control of simulations, inserting all the same malfunctions, performing all the same overrides, capable of making and restoring all the same storepoints. Both simulators run the same plant control computer software - the same assembly language control programs as the power plant uses for reactor control, heat transport control, annunciation, etc. This is a higher degree of similarity between a desktop simulator and a full scope training simulator than previously reported for a computer controlled nuclear plant. The large quantity of control room hardware missing from the desktop simulator is replaced by software. The Instructor Facility panel override software of the training simulator provides the means by which devices (switches, controllers, windows, etc.) on the control room panels can be controlled and monitored in the desktop simulator. The CRT of the Alpha provides a mouse operated DCC keyboard mimic for controlling the plant control computer emulation. Two emulated RAMTEK display channels appear as windows for monitoring anything of interest on plant DCC displays, including one channel for annunciation. (author)

  7. Modelling of flexi-coil springs with rubber-metal pads in a locomotive running gear

    Directory of Open Access Journals (Sweden)

    Michálek T.

    2015-06-01

    Full Text Available Nowadays, flexi-coil springs are commonly used in the secondary suspension stage of railway vehicles. The lateral stiffness of these springs is influenced by their design parameters (number of coils, height, mean diameter of coils, wire diameter, etc.) and it is often desirable to modify this stiffness in such a way that the suspension shows different lateral stiffness in different directions (i.e., longitudinally vs. laterally in the vehicle-related coordinate system). Therefore, these springs are often supplemented with some kind of rubber-metal pad. This paper deals with the modelling of flexi-coil springs supplemented with rubber-metal tilting pads applied in the running gear of an electric locomotive, as well as with the consequences of applying that solution of the secondary suspension from the point of view of the vehicle running performance. The analysis is performed by means of multi-body simulations, and the description of the lateral stiffness characteristics of the springs is based on the results of experimental measurements of these characteristics performed in the heavy laboratories of the Jan Perner Transport Faculty of the University of Pardubice.

  8. Short-run and long-run elasticities of import demand for crude oil in Turkey

    International Nuclear Information System (INIS)

    Altinay, Galip

    2007-01-01

    The aim of this study is to estimate the short-run and long-run elasticities of demand for crude oil in Turkey using the recent autoregressive distributed lag (ARDL) bounds testing approach to cointegration. As a developing country, Turkey meets its growing demand for oil principally through foreign suppliers. Thus, the study focuses on modelling the demand for imported crude oil using annual data covering the period 1980-2005. The bounds test results reveal that a long-run cointegration relationship exists between crude oil imports and the explanatory variables nominal price and income, but not in the model that includes the real price in domestic currency. The long-run parameters are estimated through a long-run static solution of the estimated ARDL model, and the short-run dynamics are then estimated with the error correction model. The estimated models pass the diagnostic tests successfully. The findings reveal that the income and price elasticities of import demand for crude oil are inelastic both in the short run and in the long run
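
    For readers who want to reproduce this kind of analysis, recent versions of statsmodels ship an ARDL implementation (an assumption about the toolchain; the paper itself does not use it, and the series below are invented):

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.ardl import ARDL   # available in statsmodels >= 0.13

        # Illustrative annual series in logs: oil imports, income, nominal price.
        df = pd.DataFrame({
            "imports": np.log([14.5, 15.2, 16.0, 15.5, 16.8, 17.4,
                               18.0, 17.6, 18.9, 19.5, 20.2, 21.0]),
            "income":  np.log([100, 104, 109, 107, 114, 120,
                               126, 124, 132, 139, 146, 154]),
            "price":   np.log([28.0, 25.0, 27.5, 30.0, 26.0, 24.5,
                               29.0, 31.0, 27.0, 25.5, 28.5, 30.5]),
        })

        # ARDL(1, 1): one lag of the dependent variable and of each regressor.
        res = ARDL(df["imports"], lags=1,
                   exog=df[["income", "price"]], order=1).fit()
        print(res.summary())   # long-run elasticities follow from the levels terms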

  9. Statistical Emulator for Expensive Classification Simulators

    Science.gov (United States)

    Ross, Jerret; Samareh, Jamshid A.

    2016-01-01

    Expensive simulators prevent meaningful analysis from being performed on the phenomena they model. To get around this problem, the concept of using a statistical emulator as a surrogate representation of the simulator was introduced in the 1980s. Simulators have since become more and more complex, and as a result a single run can be very expensive, taking days, weeks, or even months. Many new techniques, termed criteria, have been introduced that sequentially select the next best (most informative to the emulator) point to run on the simulator. These criteria allow an emulator to be built from only a small number of simulator runs. We follow and extend this framework to expensive classification simulators.
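
    The sequential "criteria" idea sketched above can be illustrated in a few lines (my own minimal example, not the authors' code): fit a probabilistic classifier as the emulator, then run the simulator next at the candidate point where the emulator is most uncertain, taken here as the predicted probability closest to 0.5.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessClassifier

        def expensive_simulator(x):
            # Stand-in for the real classification simulator (hypothetical rule)
            return int(x[0] ** 2 + x[1] ** 2 < 2.0)

        rng = np.random.default_rng(1)
        # Seed the design with one point of each class plus a few random points
        X = np.vstack([[0.0, 0.0], [2.0, 2.0], rng.uniform(-2, 2, size=(8, 2))])
        y = np.array([expensive_simulator(x) for x in X])

        candidates = rng.uniform(-2, 2, size=(500, 2))
        for _ in range(20):  # each iteration buys one more simulator run
            emulator = GaussianProcessClassifier().fit(X, y)
            p = emulator.predict_proba(candidates)[:, 1]
            i = int(np.argmin(np.abs(p - 0.5)))  # most ambiguous candidate
            X = np.vstack([X, candidates[i]])
            y = np.append(y, expensive_simulator(candidates[i]))
        print("total simulator runs used:", len(y))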

  10. Effect of audio in-vehicle red light-running warning message on driving behavior based on a driving simulator experiment.

    Science.gov (United States)

    Yan, Xuedong; Liu, Yang; Xu, Yongcun

    2015-01-01

    Drivers' incorrect decisions to cross signalized intersections at the onset of the yellow change may lead to red light running (RLR), and RLR crashes result in substantial numbers of severe injuries and extensive property damage. In recent years, some Intelligent Transport System (ITS) concepts have focused on reducing RLR by alerting drivers that they are about to violate the signal. The objective of this study is to conduct an experimental investigation of the effectiveness of a red light violation warning system using a voice message. The prototype concept of the RLR audio warning system was modeled and tested in a high-fidelity driving simulator. According to the concept, when a vehicle is approaching an intersection at the onset of yellow and the time to the intersection is longer than the yellow interval, the in-vehicle warning system activates the audio message "The red light is impending. Please decelerate!" The intent of the warning design is to encourage drivers who cannot clear an intersection during the yellow change interval to stop at the intersection. The experimental results showed that the warning message could decrease red light running violations by 84.3 percent. Based on the logistic regression analyses, drivers without a warning were about 86 times more likely to make go decisions at the onset of yellow and about 15 times more likely to run red lights than those with a warning. Additionally, the audio warning message significantly reduced RLR severity, because the RLR drivers' red-entry times without a warning were longer than those with a warning. This driving simulator study showed a promising effect of the audio in-vehicle warning message on reducing RLR violations and crashes. It is worthwhile to further develop the proposed technology in field applications.
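
    The odds ratios quoted above come from standard logistic regression; the sketch below (synthetic data and made-up violation rates, purely illustrative) shows how such a ratio is computed.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        warning = rng.integers(0, 2, 400)               # 1 = audio warning present
        p_violate = np.where(warning == 1, 0.02, 0.25)  # hypothetical RLR rates
        rlr = rng.binomial(1, p_violate)                # observed violations

        X = sm.add_constant(1 - warning)                # predictor: "no warning"
        res = sm.Logit(rlr, X).fit(disp=False)
        print("odds ratio, no-warning vs warning:", np.exp(res.params[1]))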

  11. A long run intertemporal model of the oil market with uncertainty and strategic interaction

    International Nuclear Information System (INIS)

    Lensberg, T.; Rasmussen, H.

    1991-06-01

    This paper describes a model of long-run price uncertainty in the oil market. The main feature of the model is that uncertainty about OPEC's price strategy is assumed to be generated not by irrational behavior on the part of OPEC, but by uncertainty about OPEC's size and time preference. Control of OPEC's pricing decision is assumed to shift among a set of OPEC types over time according to a stochastic process, with each type implementing the price strategy that best fits the interests of its supporters. The model is fully dynamic on the supply side in the sense that all oil producers are assumed to understand the workings of OPEC and the oil market; in particular, the non-OPEC producers base their investment decisions on rational price expectations. On the demand side, we assume that market insight is less developed on average, and model it by means of a long-run demand curve on current prices and a simple lag structure. The long-run demand curve for crude oil is generated by a fairly detailed static long-run equilibrium model of the product markets. Preliminary experience with the model indicates that prices are likely to stay below 20 dollars in the foreseeable future, but that prices around 30 dollars may occur if OPEC's present long-run time perspective is abandoned in favor of a more short-run one. 26 refs., 4 figs., 7 tabs
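
    The key mechanism, control shifting among OPEC types according to a stochastic process, can be caricatured with a two-state Markov chain (a toy of my own; the prices and transition probabilities are invented, not the paper's calibration).

        import numpy as np

        rng = np.random.default_rng(3)
        target_price = {"long_run": 18.0, "short_run": 30.0}  # $/bbl, hypothetical
        p_stay = {"long_run": 0.95, "short_run": 0.80}        # persistence of each type

        state, prices = "long_run", []
        for year in range(50):
            prices.append(target_price[state] + rng.normal(0.0, 1.5))
            if rng.random() > p_stay[state]:  # control shifts to the other OPEC type
                state = "short_run" if state == "long_run" else "long_run"
        print("mean simulated price:", np.mean(prices))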

  12. TransCom model simulations of hourly atmospheric CO2: Experimental overview and diurnal cycle results for 2002

    NARCIS (Netherlands)

    Law, R. M.; Peters, W.; RöDenbeck, C.; Aulagnier, C.; Baker, I.; Bergmann, D. J.; Bousquet, P.; Brandt, J.; Bruhwiler, L.; Cameron-Smith, P. J.; Christensen, J. H.; Delage, F.; Denning, A. S.; Fan, S.; Geels, C.; Houweling, S.; Imasu, R.; Karstens, U.; Kawa, S. R.; Kleist, J.; Krol, M. C.; Lin, S.-J.; Lokupitiya, R.; Maki, T.; Maksyutov, S.; Niwa, Y.; Onishi, R.; Parazoo, N.; Patra, P. K.; Pieterse, G.; Rivier, L.; Satoh, M.; Serrar, S.; Taguchi, S.; Takigawa, M.; Vautard, R.; Vermeulen, A. T.; Zhu, Z.

    2008-01-01

    A forward atmospheric transport modeling experiment has been coordinated by the TransCom group to investigate synoptic and diurnal variations in CO2. Model simulations were run for biospheric, fossil, and air-sea exchange of CO2 and for SF6 and radon for 2000-2003. Twenty-five models or model

  13. A mathematical model for the simulation of thermal transients in the water loop of IPEN

    International Nuclear Information System (INIS)

    Pontedeiro, A.C.

    1980-01-01

    A mathematical model for the simulation of thermal transients in the water loop at the Instituto de Pesquisas Energeticas e Nucleares, Sao Paulo, Brasil, is developed. The model is based on energy equations applied to the components of the experimental water loop. The resulting system of non-linear first-order differential equations and non-linear algebraic equations is solved using the IBM System/360 Continuous System Modeling Program (CSMP). The computer running time is optimized, and a typical simulation of the water loop is executed. (Author)
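
    A modern equivalent of such a CSMP simulation is a few lines of SciPy; the sketch below integrates a generic single-node thermal transient (my own example with made-up loop parameters, not the IPEN model).

        import numpy as np
        from scipy.integrate import solve_ivp

        m_c = 5.0e4     # node heat capacity, J/K (hypothetical)
        UA = 200.0      # heat-loss conductance to the sink, W/K (hypothetical)
        T_sink = 300.0  # sink temperature, K

        def heater_power(t):
            return 1.0e4 if t < 600.0 else 2.0e4  # step change drives the transient

        def dTdt(t, T):
            return (heater_power(t) - UA * (T - T_sink)) / m_c

        sol = solve_ivp(dTdt, (0.0, 3600.0), [320.0], max_step=10.0)
        print("temperature at 1 h:", sol.y[0, -1], "K")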

  14. Safety evaluation of the ITP filter/stripper test runs and quiet time runs using simulant solution. Revision 3

    International Nuclear Information System (INIS)

    Gupta, M.K.

    1994-06-01

    The purpose is to provide the technical bases for the evaluation of the Unreviewed Safety Question for the In-Tank Precipitation (ITP) Filter/Stripper Test Runs (Ref. 7) and the Quiet Time Runs program (described in Section 3.6). The Filter/Stripper Test Runs and Quiet Time Runs program involves a 12,000 gallon feed tank containing an agitator, a 4,000 gallon flush tank, a variable speed pump, associated piping and controls, and equipment within both the Filter and the Stripper Building

  15. Modeling and simulation of normal and hemiparetic gait

    Science.gov (United States)

    Luengas, Lely A.; Camargo, Esperanza; Sanchez, Giovanni

    2015-09-01

    Gait is the collective term for the two types of bipedal locomotion, walking and running. This paper is focused on walking. The analysis of human gait is of interest to many different disciplines, including biomechanics, human-movement science, rehabilitation and medicine in general. Here we present a new model that is capable of reproducing the properties of walking, normal and pathological. The aim of this paper is to establish the biomechanical principles that underlie human walking by using the Lagrange method. Dissipative forces are included via a Rayleigh dissipation function, through which the effect of gait on the tissues is considered. Depending on the value of the factor present in the Rayleigh dissipation function, both normal and pathological gait can be simulated. We first apply the model to normal gait and then to permanent hemiparetic gait. Anthropometric data for an adult are used in the simulations; anthropometric data for children could also be used, provided the relevant anthropometric tables are consulted. Validation of these models includes simulations of passive dynamic gait on level ground. The dynamic walking approach provides a new perspective on gait analysis, focusing on the kinematics and kinetics of gait. There have been studies and simulations of normal human gait, but few have focused on abnormal, especially hemiparetic, gait. Quantitative comparisons of the model predictions with gait measurements show that the model can reproduce the significant characteristics of normal gait.
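
    For reference, the Lagrange equations with a Rayleigh dissipation function take the standard form (a generic statement of the method; the paper's specific gait coordinates and damping factors are not reproduced here):

        \frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial L}{\partial \dot{q}_i}\right)
          - \frac{\partial L}{\partial q_i}
          + \frac{\partial \mathcal{R}}{\partial \dot{q}_i} = Q_i,
        \qquad
        \mathcal{R} = \tfrac{1}{2}\sum_j c_j\,\dot{q}_j^{\,2},

    where $L$ is the Lagrangian, $Q_i$ are the generalized forces, and varying the dissipation factors $c_j$ is what switches the simulation between normal and pathological gait.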

  16. ROSA-III 100 % break integral test Run 914

    International Nuclear Information System (INIS)

    Yonomoto, Taisuke; Tasaka, Kanji; Koizumi, Yasuo; Anoda, Yoshinari; Kumamaru, Hiroshige; Nakamura, Hideo; Suzuki, Mitsuhiro; Murata, Hideo

    1987-05-01

    This report presents the experimental data of RUN 914 conducted at the ROSA-III test facility. The facility is a volumetrically scaled (1/424) simulator for a BWR/6 with an electrically heated core, a break simulator and scaled ECCS (emergency core cooling system). RUN 914 was a 100% split break test at the recirculation pump suction line with an assumption of HPCS diesel generator failure, conducted as one of the break area parameter tests. A peak cladding temperature (PCT) of 851 K was reached at 130 s after the break, during the reflooding phase. The whole core was completely quenched by the ECCS, and the effectiveness of the ECCS was confirmed. The primary test results of RUN 914 are compared in this report with those of RUN 926, which was a 200% double-ended break test. The initiation of core dryout in RUN 914 was almost the same as that in RUN 926. The duration of core dryout was, however, longer in RUN 914 because of later actuation of the ECCSs. The PCT in RUN 914 was 67 K higher than that in RUN 926. (author)

  17. A novel approach to evaluate and compare computational snow avalanche simulation

    Directory of Open Access Journals (Sweden)

    J.-T. Fischer

    2013-06-01

    Full Text Available An innovative approach for the analysis and interpretation of snow avalanche simulations in three-dimensional terrain is presented. Snow avalanche simulation software is used as a supporting tool in hazard mapping. When performing a high number of simulation runs, the user is confronted with a considerable amount of simulation results. The objective of this work is to establish an objective, model-independent framework to evaluate and compare the results of different simulation approaches with respect to indicators of practical relevance, answering the important questions: how far and how destructively does an avalanche move down slope? For this purpose the Automated Indicator based Model Evaluation and Comparison (AIMEC) method is introduced. It operates on a coordinate system that follows a given avalanche path. A multitude of simulation runs is performed with the snow avalanche simulation software SamosAT (Snow Avalanche MOdelling and Simulation – Advanced Technology). The variability of pressure-based run-out and avalanche destructiveness along the path is investigated over multiple simulation runs, varying release volume and model parameters. With this, the results of deterministic simulation software are processed and analysed by means of statistical methods. Uncertainties originating from varying input conditions, model parameters or different model implementations are assessed. The results show that AIMEC contributes to the interpretation of avalanche simulations, with broad applicability in model evaluation and comparison as well as in the examination of scenario variations.
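
    A path-projected run-out indicator of the kind AIMEC computes can be sketched as follows (my own minimal example; the 1 kPa threshold is a common convention and an assumption here, not necessarily AIMEC's default).

        import numpy as np

        s = np.linspace(0.0, 3000.0, 301)           # distance along the avalanche path, m
        peak_pressure = 50.0 * np.exp(-s / 800.0)   # synthetic simulation output, kPa

        threshold = 1.0  # kPa, assumed pressure limit defining run-out
        runout = s[np.nonzero(peak_pressure >= threshold)[0].max()]
        print("pressure-based run-out distance:", runout, "m")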

  19. Simulations and cosmological inference: A statistical model for power spectra means and covariances

    International Nuclear Information System (INIS)

    Schneider, Michael D.; Knox, Lloyd; Habib, Salman; Heitmann, Katrin; Higdon, David; Nakhleh, Charles

    2008-01-01

    We describe an approximate statistical model for the sample variance distribution of the nonlinear matter power spectrum that can be calibrated from limited numbers of simulations. Our model retains the common assumption of a multivariate normal distribution for the power spectrum band powers but takes full account of the (parameter-dependent) power spectrum covariance. The model is calibrated using an extension of the framework in Habib et al. (2007) to train Gaussian processes for the power spectrum mean and covariance given a set of simulation runs over a hypercube in parameter space. We demonstrate the performance of this machinery by estimating the parameters of a power-law model for the power spectrum. Within this framework, our calibrated sample variance distribution is robust to errors in the estimated covariance and shows rapid convergence of the posterior parameter constraints with the number of training simulations.
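
    The core calibration step, training a Gaussian process on a set of simulation runs over a parameter hypercube, looks schematically like this (my own minimal stand-in, not the authors' machinery, which additionally models the covariance).

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        rng = np.random.default_rng(8)
        theta = rng.uniform(0, 1, (40, 2))  # training runs over a 2-D hypercube
        # Stand-in for one power-spectrum band power as a function of parameters:
        bandpower = np.sin(3.0 * theta[:, 0]) + theta[:, 1] ** 2

        gp = GaussianProcessRegressor(normalize_y=True).fit(theta, bandpower)
        mean, std = gp.predict([[0.4, 0.6]], return_std=True)  # emulated mean and uncertainty
        print("emulated band power:", mean[0], "+/-", std[0])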

  20. Intraseasonal Variability of the Indian Monsoon as Simulated by a Global Model

    Science.gov (United States)

    Joshi, Sneh; Kar, S. C.

    2018-01-01

    This study uses the global forecast system (GFS) model at T126 horizontal resolution to carry out seasonal simulations with prescribed sea-surface temperatures. The main objective of the study is to evaluate the simulated Indian monsoon variability on intraseasonal timescales. The GFS model has been integrated for 29 monsoon seasons with 15-member ensembles forced with observed sea-surface temperatures (SSTs), and additional 16-member ensemble runs have been carried out using climatological SSTs. Northward propagation of intraseasonal rainfall anomalies over the Indian region in the model simulations has been examined. It is found that the model is unable to simulate the observed moisture pattern when the active zone of convection is over central India. However, the model simulates the observed pattern of specific humidity during the life cycle of northward propagation on day -10 and day +10 of maximum convection over central India. The space-time spectral analysis of the simulated equatorial waves shows that the ensemble members have varying amounts of power in each band of wavenumbers and frequencies. However, variations among ensemble members are larger in the antisymmetric component of westward-moving waves, and the maximum difference in power among ensemble members is seen in the 8-20 day mode.
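
    The space-time spectral analysis mentioned above rests on a two-dimensional Fourier transform of a (time, longitude) field; the sketch below (my own, run on noise rather than model rainfall) shows how eastward- and westward-moving power are separated by the sign of the zonal wavenumber.

        import numpy as np

        nt, nx = 360, 144  # days, longitudes (e.g., a 2.5-degree grid)
        rng = np.random.default_rng(4)
        anom = rng.standard_normal((nt, nx))  # stand-in for rainfall anomalies

        spec = np.fft.fftshift(np.fft.fft2(anom)) / (nt * nx)
        power = np.abs(spec) ** 2  # indexed by (frequency, zonal wavenumber)
        freq = np.fft.fftshift(np.fft.fftfreq(nt))       # cycles per day
        wnum = np.fft.fftshift(np.fft.fftfreq(nx) * nx)  # integer wavenumber
        west = power[freq > 0][:, wnum < 0].sum()  # westward-moving variance
        east = power[freq > 0][:, wnum > 0].sum()  # eastward-moving variance
        print("westward vs eastward power:", west, east)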

  1. CMS Computing Operations During Run1

    CERN Document Server

    Gutsche, Oliver

    2013-01-01

    During the first run, CMS collected and processed more than 10B data events and simulated more than 15B events. Up to 100k processor cores were used simultaneously and 100PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this presentation we will discuss the operational experience from the first run. We will present the workflows and data flows that were executed, we will discuss the tools and services developed, and the operations and shift models used to sustain the system. Many techniques were followed from the original computing planning, but some were reactions to difficulties and opportunities. In this presentation we will also address the lessons learned from an operational perspective, and how this is shaping our thoughts for 2015.

  2. CMS computing operations during run 1

    CERN Document Server

    Adelman, J; Artieda, J; Bagliesi, G; Ballestero, D; Bansal, S; Bauerdick, L; Behrenhof, W; Belforte, S; Bloom, K; Blumenfeld, B; Blyweert, S; Bonacorsi, D; Brew, C; Contreras, L; Cristofori, A; Cury, S; da Silva Gomes, D; Dolores Saiz Santos, M; Dost, J; Dykstra, D; Fajardo Hernandez, E; Fanzago, F; Fisk, I; Flix, J; Georges, A; Giffels, M; Gomez-Ceballos, G; Gowdy, S; Gutsche, O; Holzman, B; Janssen, X; Kaselis, R; Kcira, D; Kim, B; Klein, D; Klute, M; Kress, T; Kreuzer, P; Lahiff, A; Larson, K; Letts, J; Levin, A; Linacre, J; Linares, J; Liu, S; Luyckx, S; Maes, M; Magini, N; Malta, A; Marra Da Silva, J; Mccartin, J; McCrea, A; Mohapatra, A; Molina, J; Mortensen, T; Padhi, S; Paus, C; Piperov, S; Ralph; Sartirana, A; Sciaba, A; Sfiligoi, I; Spinoso, V; Tadel, M; Traldi, S; Wissing, C; Wuerthwein, F; Yang, M; Zielinski, M; Zvada, M

    2014-01-01

    During the first run, CMS collected and processed more than 10B data events and simulated more than 15B events. Up to 100k processor cores were used simultaneously and 100PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this document we discuss the operational experience from this first run. We present the workflows and data flows that were executed, and we discuss the tools and services developed, and the operations and shift models used to sustain the system. Many techniques were followed from the original computing planning, but some were reactions to difficulties and opportunities. We also address the lessons learned from an operational perspective, and how this is shaping our thoughts for 2015.

  3. A rapid estimation of tsunami run-up based on finite fault models

    Science.gov (United States)

    Campos, J.; Fuentes, M. A.; Hayes, G. P.; Barrientos, S. E.; Riquelme, S.

    2014-12-01

    Many efforts have been made to estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task, because of the time it takes to construct a tsunami model using real-time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori. However, such models are generally based on uniform slip distributions and thus oversimplify our knowledge of the earthquake source. Instead, we can use finite fault models of earthquakes to give a more accurate prediction of the tsunami run-up. Here we show how to accurately predict tsunami run-up from any seismic source model using an analytic solution found by Fuentes et al. (2013) that was calculated especially for zones with a very well defined strike, e.g., Chile, Japan, Alaska, etc. The main idea of this work is to produce a tool for emergency response, trading off accuracy for quickness. Our solutions for three large earthquakes are promising. Here we compute models of the run-up for the 2010 Mw 8.8 Maule earthquake, the 2011 Mw 9.0 Tohoku earthquake, and the recent 2014 Mw 8.2 Iquique earthquake. Our maximum run-up predictions are consistent with measurements made inland after each event, with a peak of 15 to 20 m for Maule, 40 m for Tohoku, and 2.1 m for the Iquique earthquake. Considering recent advances made in the analysis of real-time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first five minutes after the occurrence of any such event. Such calculations will thus provide more accurate run-up information than is otherwise available from existing uniform-slip seismic source databases.

  4. Evaluation of Three Models for Simulating Pesticide Runoff from Irrigated Agricultural Fields.

    Science.gov (United States)

    Zhang, Xuyang; Goh, Kean S

    2015-11-01

    Three models were evaluated for their accuracy in simulating pesticide runoff at the edge of agricultural fields: the Pesticide Root Zone Model (PRZM), the Root Zone Water Quality Model (RZWQM), and OpusCZ. Modeling results on runoff volume, sediment erosion, and pesticide loss were compared with measurements taken from field studies. The models were also compared on their theoretical foundations and ease of use. For runoff events generated by sprinkler irrigation and rainfall, all models performed equally well, with small errors in simulating water, sediment, and pesticide runoff. The mean absolute percentage errors (MAPEs) were between 3 and 161%. For flood irrigation, OpusCZ simulated runoff and pesticide mass with the highest accuracy, followed by RZWQM and PRZM, likely owing to its unique hydrological algorithm for runoff simulation during flood irrigation. Simulation results from cold model runs by OpusCZ and RZWQM using measured values for model inputs matched the observed values closely. The MAPE ranged from 28 to 384% and from 42 to 168% for OpusCZ and RZWQM, respectively. These satisfactory model outputs showed the models' ability to mimic reality. Theoretical evaluations indicated that OpusCZ and RZWQM use mechanistic approaches for hydrology simulation, output data on a subdaily time-step, and are able to simulate management practices and subsurface flow via tile drainage. In contrast, PRZM operates at a daily time-step and simulates surface runoff using the USDA Soil Conservation Service's curve number method. Among the three models, OpusCZ and RZWQM were suitable for simulating pesticide runoff in semiarid areas where agriculture is heavily dependent on irrigation. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
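
    For reference, the MAPE statistic quoted throughout is the standard definition (nothing model-specific), for observations $y_i$ and simulated values $\hat{y}_i$:

        \mathrm{MAPE} = \frac{100\%}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right|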

  5. Analysis of precipitation teleconnections in CMIP models as a measure of model fidelity in simulating precipitation

    Science.gov (United States)

    Langenbrunner, B.; Neelin, J.; Meyerson, J.

    2011-12-01

    The accurate representation of precipitation is a recurring issue in global climate models, especially in the tropics. Poor skill in modeling the variability and climate teleconnections associated with El Niño/Southern Oscillation (ENSO) also persisted in the latest Coupled Model Intercomparison Project (CMIP) campaigns. Observed ENSO precipitation teleconnections provide a standard by which we can judge a given model's ability to reproduce precipitation and dynamic feedback processes originating in the tropical Pacific. Using CMIP3 Atmospheric Model Intercomparison Project (AMIP) runs as a baseline, we compare precipitation teleconnections between models and observations, and we evaluate these results against available CMIP5 historical and AMIP runs. Using AMIP simulations restricts evaluation to the atmospheric response, as sea surface temperatures (SSTs) in AMIP are prescribed by observations. We use a rank correlation between ENSO SST indices and precipitation to define teleconnections, since this method is robust to outliers and appropriate for non-Gaussian data. Spatial correlations of the modeled and observed teleconnections are then evaluated. We look at these correlations in regions of strong precipitation teleconnections, including equatorial S. America, the "horseshoe" region in the western tropical Pacific, and southern N. America. For each region and season, we create a "normalized projection" of a given model's teleconnection pattern onto that of the observations, a metric that assesses the quality of regional pattern simulations while rewarding signals of correct sign over the region. Comparing this to an area-averaged (i.e., more generous) metric suggests models do better when restrictions on exact spatial dependence are loosened and conservation constraints apply. Model fidelity in regional measures remains far from perfect, suggesting intrinsic issues with the models' regional sensitivities in moist processes.
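
    The rank-correlation teleconnection measure is straightforward to reproduce in outline (my own sketch on synthetic fields, not the study's data): a Spearman correlation between an ENSO SST index and precipitation at every grid point.

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(5)
        nino34 = rng.standard_normal(50)            # seasonal ENSO index, synthetic
        precip = rng.standard_normal((50, 40, 60))  # (years, lat, lon), synthetic

        teleconnection = np.empty(precip.shape[1:])
        for j in range(precip.shape[1]):
            for k in range(precip.shape[2]):
                teleconnection[j, k] = spearmanr(nino34, precip[:, j, k]).correlation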

  6. Global ice volume variations through the last glacial cycle simulated by a 3-D ice-dynamical model

    NARCIS (Netherlands)

    Bintanja, R.; Wal, R.S.W. van de; Oerlemans, J.

    2002-01-01

    A coupled ice sheet—ice shelf—bedrock model was run at 20km resolution to simulate the evolution of global ice cover during the last glacial cycle. The mass balance model uses monthly mean temperature and precipitation as input and incorporates the albedo—mass balance feedback. The model is forced

  7. Reduced Gasoline Surrogate (Toluene/n-Heptane/iso-Octane) Chemical Kinetic Model for Compression Ignition Simulations

    KAUST Repository

    Sarathy, Mani; Atef, Nour; Alfazazi, Adamu; Badra, Jihad; Zhang, Yu; Tzanetakis, Tom; Pei, Yuanjiang

    2018-01-01

    Toluene primary reference fuel (TPRF, a mixture of toluene, iso-octane and n-heptane) is a suitable surrogate to represent a wide spectrum of real fuels with varying octane sensitivity. Investigating different surrogates in engine simulations is a prerequisite to identifying the best matching mixture. However, running 3D engine simulations using detailed models is currently impossible, and reduction of detailed models is essential. This work presents an AramcoMech reduced kinetic model developed at King Abdullah University of Science and Technology (KAUST) for simulating complex TPRF surrogate blends. A semi-decoupling approach was used together with species and reaction lumping to obtain a reduced kinetic model. The model was widely validated against experimental data, including shock tube ignition delay times and premixed laminar flame speeds. Finally, the model was utilized to simulate the combustion of a low-reactivity gasoline fuel under partially premixed combustion conditions.
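
    One of the validation targets named above, shock tube ignition delay, is routinely computed with Cantera; the sketch below is a generic constant-volume example (the bundled GRI-3.0 methane mechanism is a placeholder so the example runs; the TPRF mechanism itself is not distributed with Cantera).

        import numpy as np
        import cantera as ct

        gas = ct.Solution("gri30.yaml")  # placeholder mechanism, not the TPRF model
        gas.TPX = 1000.0, 20.0 * ct.one_atm, "CH4:1, O2:2, N2:7.52"
        reactor = ct.IdealGasReactor(gas)
        net = ct.ReactorNet([reactor])

        times, temps = [], []
        while net.time < 0.05:  # integrate 50 ms of reactor time
            net.step()
            times.append(net.time)
            temps.append(reactor.T)

        # Ignition delay taken as the time of steepest temperature rise
        times, temps = np.array(times), np.array(temps)
        tau = times[np.argmax(np.gradient(temps, times))]
        print("ignition delay:", tau, "s")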

  9. Real-time simulation of large-scale floods

    Science.gov (United States)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water situations, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared with those of MIKE21 show the strong performance of the proposed model.

  10. Interface between Core/TH Model and Simulator for OPR1000

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Do Hyun; Lee, Myeong Soo; Hong, Jin Hyuk; Lee, Seung Ho; Suh, Jung Kwan [KEPRI, Daejeon (Korea, Republic of)

    2009-05-15

    The OPR1000 simulator for ShinKori Unit 1, which will be operated at a thermal core power of 2815 MWt, is being developed while ShinKori Units 1 and 2 are being built. The OPR1000 simulator adopted the RELAP5 R/T code, an adaptation of the RELAP5 and NESTLE codes to run in real-time mode with graphical visualization, to model the Nuclear Steam Supply System (NSSS) Thermal-Hydraulics (TH) and Reactor Core. RELAP5 is an advanced, best-estimate reactor TH simulation code developed at the Idaho National Engineering and Environmental Laboratory (INEEL), and NESTLE is a true two-energy-group neutronics code that computes the neutron flux and power for each node at every time step. As the simulator environment, 3KEYMASTER™, a commercial environment tool of WSC, is used.

  12. ATLAS Run 1 Pythia8 tunes

    CERN Document Server

    The ATLAS collaboration

    2014-01-01

    We present tunes of the Pythia8 Monte Carlo event generator's parton shower and multiple parton interaction parameters to a range of data observables from ATLAS Run 1. Four new tunes have been constructed, corresponding to the four leading-order parton density functions, CTEQ6L1, MSTW2008LO, NNPDF23LO, and HERAPDF15LO, each simultaneously tuning ten generator parameters. A set of systematic variations is provided for the NNPDF tune, based on the eigentune method. These tunes improve the modeling of observables that can be described by leading-order + parton shower simulation, and are primarily intended for use in situations where next-to-leading-order and/or multileg parton-showered simulations are unavailable or impractical.

  13. The running-mass inflation model and WMAP

    OpenAIRE

    Covi, Laura; Lyth, David H.; Melchiorri, Alessandro; Odman, Carolina J.

    2004-01-01

    We consider the observational constraints on the running-mass inflationary model, and in particular on the scale-dependence of the spectral index, from the new Cosmic Microwave Background (CMB) anisotropy measurements performed by WMAP and from new clustering data from the SLOAN survey. We find that the data strongly constrain a significant positive scale-dependence of $n$, and we translate the analysis into bounds on the physical parameters of the inflaton potential. Looking deeper into sp...
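
    For orientation, the scale-dependence constrained in such analyses is conventionally written as (the standard parameterization, not the paper's running-mass-specific form):

        n(k) = n(k_0) + \frac{\mathrm{d}n}{\mathrm{d}\ln k}\,\ln\frac{k}{k_0},

    with the running-mass model predicting a particular form for the coefficient $\mathrm{d}n/\mathrm{d}\ln k$.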

  14. Run-to-Run Optimization Control Within Exact Inverse Framework for Scan Tracking.

    Science.gov (United States)

    Yeoh, Ivan L; Reinhall, Per G; Berg, Martin C; Chizeck, Howard J; Seibel, Eric J

    2017-09-01

    A run-to-run optimization controller uses a reduced set of measurement parameters, in comparison to more general feedback controllers, to converge to the best control point for a repetitive process. A new run-to-run optimization controller is presented for the scanning fiber device used for image acquisition and display. This controller utilizes very sparse measurements to estimate a system energy measure and updates the input parameterizations iteratively within a feedforward, exact-inversion framework. Analysis, simulation, and experimental investigations on the scanning fiber device demonstrate improved scan accuracy over previous methods and automatic controller adaptation to changing operating temperature. A specific application example and quantitative error analyses are provided for a scanning fiber endoscope that maintains high image quality continuously across a 20 °C temperature rise without interruption of the 56 Hz video.
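
    Stripped of the scan-tracking specifics, a run-to-run optimization update can be sketched as below (my own abstraction of the idea, not the authors' controller): after each repetition, a scalar energy-like measurement nudges the input parameterization toward the optimum.

        import numpy as np

        def measured_energy(theta):
            # Stand-in for the sparse energy measure of one repetition (hypothetical)
            return (theta[0] - 1.0) ** 2 + 2.0 * (theta[1] + 0.5) ** 2

        theta = np.zeros(2)      # feedforward input parameterization
        gain, delta = 0.2, 1e-3  # update gain and finite-difference step
        for run in range(50):    # one update per repetition of the process
            grad = np.array([(measured_energy(theta + delta * e)
                              - measured_energy(theta - delta * e)) / (2 * delta)
                             for e in np.eye(2)])
            theta -= gain * grad  # converges to the best control point
        print("converged parameterization:", theta)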

  15. Simulating Shopper Behavior using Fuzzy Logic in Shopping Center Simulation

    Directory of Open Access Journals (Sweden)

    Jason Christian

    2016-12-01

    Full Text Available To simulate real-world phenomena, a computer tool can be used to run a simulation and provide a detailed report. By using a computer-aided simulation tool, we can retrieve information relevant to the simulated subject in a relatively short time. This study is an extended and complete version of initial research by Christian and Hansun, and presents a prototype of a multi-agent shopping center simulation tool along with the fuzzy logic algorithm implemented in the system. Shopping centers and all their components are represented in a simulated 3D environment. The simulation tool was created using the Unity3D engine to build the 3D environment and to run the simulation. To model and simulate the behavior of agents inside the simulation, a fuzzy logic algorithm that uses the agents' basic knowledge as input was built to determine their behavior and to simulate human behavior as realistically as possible.
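
    The flavor of such a fuzzy-logic agent rule base can be conveyed in a few lines (an illustration of my own; the paper's actual membership functions and rules are not reproduced here).

        def tri(x, a, b, c):
            """Triangular membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def food_court_propensity(hunger, budget):   # inputs scaled to [0, 1]
            hungry = tri(hunger, 0.3, 1.0, 1.7)      # fuzzy set "hungry"
            has_budget = tri(budget, 0.2, 1.0, 1.8)  # fuzzy set "has budget"
            go = min(hungry, has_budget)             # rule 1: hungry AND has budget
            browse = 1.0 - hungry                    # rule 2: not hungry, keep browsing
            # Defuzzify as a weighted average of the rule outputs
            return (1.0 * go + 0.2 * browse) / (go + browse + 1e-9)

        print(food_court_propensity(hunger=0.8, budget=0.9))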

  16. Land Surface Model and Particle Swarm Optimization Algorithm Based on the Model-Optimization Method for Improving Soil Moisture Simulation in a Semi-Arid Region.

    Science.gov (United States)

    Yang, Qidong; Zuo, Hongchao; Li, Weidong

    2016-01-01

    Improving the capability of land-surface process models to simulate soil moisture assists in better understanding the atmosphere-land interaction. In semi-arid regions, due to limited near-surface observational data and large errors in large-scale parameters obtained by remote sensing, there are uncertainties in land surface parameters, which can cause large offsets between the simulated results of land-surface process models and the observational data for soil moisture. In this study, observational data from the Semi-Arid Climate Observatory and Laboratory (SACOL) station in the semi-arid loess plateau of China were divided into three datasets: summer, autumn, and summer-autumn. By combining the particle swarm optimization (PSO) algorithm and the land-surface process model SHAW (Simultaneous Heat and Water), the soil and vegetation parameters that are related to soil moisture but difficult to obtain by observation were optimized using the three datasets. On this basis, the SHAW model was run with the optimized parameters to simulate the characteristics of the land-surface process in the semi-arid loess plateau; simultaneously, the default SHAW model was run with the same atmospheric forcing as a comparison test. The simulation results revealed the following: parameters optimized by the particle swarm optimization algorithm improved the simulation of soil moisture and latent heat flux in all tests, clearly reducing the differences between simulated results and observational data, but the tests adopting optimized parameters could not simultaneously improve the simulation results for net radiation, sensible heat flux, and soil temperature. Optimized soil and vegetation parameters based on different datasets have the same order of magnitude but are not identical; the soil parameters vary only to a small degree, while the variation range of the vegetation parameters is large.
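
    The optimization loop itself is generic PSO; a minimal version (not the SHAW-coupled setup, where the loss would be the model-versus-observation misfit) looks like this.

        import numpy as np

        def loss(theta):  # stand-in for the SHAW misfit against observations
            return np.sum((theta - 0.3) ** 2, axis=-1)

        rng = np.random.default_rng(6)
        n, d = 30, 4  # particles, parameters
        x = rng.uniform(0.0, 1.0, (n, d))
        v = np.zeros((n, d))
        pbest, pbest_val = x.copy(), loss(x)
        gbest = pbest[np.argmin(pbest_val)]

        w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive and social pulls
        for _ in range(100):
            r1, r2 = rng.random((n, d)), rng.random((n, d))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, 0.0, 1.0)  # keep parameters inside bounds
            val = loss(x)
            better = val < pbest_val
            pbest[better], pbest_val[better] = x[better], val[better]
            gbest = pbest[np.argmin(pbest_val)]
        print("best parameters found:", gbest)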

  17. Simulation of Groundwater Flow, Denpasar-Tabanan Groundwater Basin, Bali Province

    Directory of Open Access Journals (Sweden)

    Heryadi Tirtomihardjo

    2014-06-01

    Full Text Available DOI: 10.17014/ijog.v6i3.123. Due to the complex structure of the aquifer systems and the hydrogeological units related to the space in which groundwater occurs, groundwater flows were calculated with a three-dimensional method (3D calculation). The geometrical discretization and iteration procedures were based on an integrated finite difference method. In this paper, all figures and graphs represent the results of the calibrated model; the model results were simulated using the actual input data, which were calibrated during the simulation runs. The groundwater flow simulation of the model area of the Denpasar-Tabanan Groundwater Basin (Denpasar-Tabanan GB) comprises a steady state run, transient runs using groundwater abstraction in the periods of 1989 (Qabs-1989) and 2009 (Qabs-2009), and a prognosis run as well. Simulation results show that, in general, the differences between calculated and observed groundwater heads at steady and transient states (Qabs-1989 and Qabs-2009) are relatively small. The groundwater head situation simulated by the prognosis run (scenario Qabs-2012) is therefore considered valid and can properly be used for controlling the plan of groundwater utilization in the Denpasar-Tabanan GB.
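
    The finite difference idea can be illustrated with the simplest possible relative (my own generic example, not the Denpasar-Tabanan model): steady-state heads in a homogeneous 2-D aquifer from a Laplace iteration.

        import numpy as np

        h = np.zeros((50, 50))                 # hydraulic heads, m
        h[:, 0], h[:, -1] = 10.0, 5.0          # fixed-head boundary conditions
        for _ in range(5000):                  # Jacobi iteration to steady state
            h[0, :], h[-1, :] = h[1, :], h[-2, :]  # no-flow top and bottom
            h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1]
                                    + h[1:-1, :-2] + h[1:-1, 2:])
        print("head at domain center:", h[25, 25], "m")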

  18. ROSA-III 50 % break integral test RUN 916

    International Nuclear Information System (INIS)

    Yonomoto, Taisuke; Tasaka, Kanji; Koizumi, Yasuo; Anoda, Yoshinari; Kumamaru, Hiroshige; Nakamura, Hideo; Suzuki, Mitsuhiro; Murata, Hideo

    1985-08-01

    This report presents the experimental data of RUN 916 conducted at the ROSA-III test facility. The facility is a volumetrically scaled (1/424) simulator for a BWR/6 with an electrically heated core, a break simulator and scaled ECCS (emergency core cooling system). RUN 916 was a 50% split break test at the recirculation pump suction line with an assumption of HPCS diesel generator failure, conducted as one of the break area parameter tests. A peak cladding temperature (PCT) of 917 K was reached at 190 s after the break, during the reflooding phase. The whole core was completely quenched by the ECCS, and the effectiveness of the ECCS was confirmed. The primary test results of RUN 916 are compared in this report with those of RUN 926, which was a 200% double-ended break test. The initiation of core dryout in RUN 916 was later than that in RUN 926 because of the smaller discharge flow rate. The duration of core dryout was, however, longer in RUN 916 because of later actuation of the ECCSs. The PCT in RUN 916 was 133 K higher than that in RUN 926. (author)

  19. Stretching Your Energetic Budget: How Tendon Compliance Affects the Metabolic Cost of Running.

    Directory of Open Access Journals (Sweden)

    Thomas K Uchida

    Full Text Available Muscles attach to bones via tendons that stretch and recoil, affecting muscle force generation and metabolic energy consumption. In this study, we investigated the effect of tendon compliance on the metabolic cost of running using a full-body musculoskeletal model with a detailed model of muscle energetics. We performed muscle-driven simulations of running at 2-5 m/s with tendon force-strain curves that produced between 1 and 10% strain when the muscles were developing maximum isometric force. We computed the average metabolic power consumed by each muscle when running at each speed and with each tendon compliance. Average whole-body metabolic power consumption increased as running speed increased, regardless of tendon compliance, and was lowest at each speed when tendon strain reached 2-3% as muscles were developing maximum isometric force. When running at 2 m/s, the soleus muscle consumed less metabolic power at high tendon compliance because the strain of the tendon allowed the muscle fibers to operate nearly isometrically during stance. In contrast, the medial and lateral gastrocnemii consumed less metabolic power at low tendon compliance because less compliant tendons allowed the muscle fibers to operate closer to their optimal lengths during stance. The software and simulations used in this study are freely available at simtk.org and enable examination of muscle energetics with unprecedented detail.

  20. Surrogate model approach for improving the performance of reactive transport simulations

    Science.gov (United States)

    Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris

    2016-04-01

    Reactive transport models serve a large number of important geoscientific applications involving underground resources in industry and scientific research. Simulation of reactive transport commonly consists of at least two coupled simulation models. The first is a hydrodynamics simulator responsible for simulating the flow of groundwater and the transport of solutes. Hydrodynamics simulators are well-established technology and can be very efficient; when hydrodynamics simulations are performed without coupled geochemistry, their spatial geometries can span millions of elements even when running on desktop workstations. The second is a geochemical simulation model coupled to the hydrodynamics simulator. Geochemical simulation models are much more computationally costly, which makes reactive transport simulations spanning millions of spatial elements very difficult to achieve. To address this problem we propose to replace the coupled geochemical simulation model with a surrogate model. A surrogate is a statistical model created to include only the necessary subset of simulator complexity for a particular scenario. To demonstrate the viability of such an approach we tested it on a popular, published reactive transport benchmark problem involving 1D calcite transport (Kolditz, 2012). To do this we tried a number of statistical models available through the caret and DiceEval packages for R as surrogate models. These were trained on a randomly sampled subset of the input-output data from the geochemical simulation model used in the original reactive transport simulation. For validation we use the surrogate model to predict the simulator output using the part of the sampled input data that was not used for training the statistical model. For this scenario we find that the multivariate adaptive regression splines

  1. Geologic simulation model for a hypothetical site in the Columbia Plateau

    International Nuclear Information System (INIS)

    Petrie, G.M.; Zellmer, J.T.; Lindberg, J.W.; Foley, M.G.

    1981-04-01

    This report describes the structure and operation of the Assessment of Effectiveness of Geologic Isolation Systems (AEGIS) Geologic Simulation Model, a computer simulation model of the geology and hydrology of an area of the Columbia Plateau, Washington. The model is used to study the long-term suitability of the Columbia Plateau Basalts for the storage of nuclear waste in a mined repository. It is also a starting point for analyses of such repositories in other geologic settings. The Geologic Simulation Model will aid in formulating design disruptive sequences (i.e. those to be used for more detailed hydrologic, transport, and dose analyses) from the spectrum of hypothetical geological and hydrological developments that could result in transport of radionuclides out of a repository. Quantitative and auditable execution of this task, however, is impossible without computer simulation. The computer simulation model aids the geoscientist by generating the wide spectrum of possible future evolutionary paths of the areal geology and hydrology, identifying those that may affect the repository integrity. This allows the geoscientist to focus on potentially disruptive processes, or series of events. Eleven separate submodels are used in the simulation portion of the model: Climate, Continental Glaciation, Deformation, Geomorphic Events, Hydrology, Magmatic Events, Meteorite Impact, Sea-Level Fluctuations, Shaft-Seal Failure, Sub-Basalt Basement Faulting, and Undetected Features. Because of the modular construction of the model, each submodel can easily be replaced with an updated or modified version as new information or developments in the state of the art become available. The model simulates the geologic and hydrologic systems of a hypothetical repository site and region for a million years following repository decommissioning. The Geologic Simulation Model operates in both single-run and Monte Carlo modes

  2. Research on virtualization-based cloud simulation running environment dynamic building technology

    Institute of Scientific and Technical Information of China (English)

    张雅彬; 李伯虎; 柴旭东; 杨晨

    2012-01-01

    In order to enable a cloud simulation platform (CSP) to support simulation users in obtaining individualized simulation services quickly, effectively and flexibly, cloud simulation running environment dynamic building technology is researched based on virtualization technology. A virtualization-based dynamic building model for the cloud simulation running environment is designed, and a multi-user-oriented three-layer mapping algorithm for dynamically building the cloud simulation running environment according to the demands of simulation model resources is studied. Finally, an application example demonstrates the feasibility and effectiveness of the virtualization-based cloud simulation running environment dynamic building technology.

  3. Good Models Gone Bad: Quantifying and Predicting Parameter-Induced Climate Model Simulation Failures

    Science.gov (United States)

    Lucas, D. D.; Klein, R.; Tannahill, J.; Brandon, S.; Covey, C. C.; Domyancic, D.; Ivanova, D. P.

    2012-12-01

    Simulations using IPCC-class climate models can fail or crash for a variety of reasons. Statistical analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation failures of the Parallel Ocean Program (POP2). About 8.5% of our POP2 runs failed for numerical reasons at certain combinations of parameter values. We apply support vector machine (SVM) classification from the fields of pattern recognition and machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. The SVM classifiers readily predict POP2 failures in an independent validation ensemble, and are subsequently used to determine the causes of the failures via a global sensitivity analysis. Four parameters related to ocean mixing and viscosity are identified as the major sources of POP2 failures. Our method can be used to improve the robustness of complex scientific models to parameter perturbations and to better steer UQ ensembles. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013 (UCRL LLNL-ABS-569112).
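
    The classification step described above maps directly onto standard tooling; the sketch below (synthetic parameters and an invented failure rule, not the POP2 data) shows failure-probability prediction with an RBF-kernel SVM.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(7)
        params = rng.uniform(0.0, 1.0, (2000, 18))  # 18 scaled parameters per run
        # Hypothetical failure rule: crashes when two mixing-like parameters
        # are jointly extreme (placeholder for the real numerical failure mode)
        failed = ((params[:, 0] > 0.9) & (params[:, 1] < 0.1)).astype(int)

        clf = SVC(kernel="rbf", probability=True).fit(params[:1500], failed[:1500])
        p_fail = clf.predict_proba(params[1500:])[:, 1]  # independent validation set
        print("mean predicted failure probability:", p_fail.mean())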

  4. Running of radiative neutrino masses: the scotogenic model — revisited

    Energy Technology Data Exchange (ETDEWEB)

    Merle, Alexander; Platscher, Moritz [Max-Planck-Institut für Physik (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München (Germany)

    2015-11-23

    A few years ago, it had been shown that effects stemming from renormalisation group running can be quite large in the scotogenic model, where neutrinos obtain their mass only via a 1-loop diagram (or, more generally, in many models in which the light neutrino mass is generated via quantum corrections at loop level). We present a new computation of the renormalisation group equations (RGEs) for the scotogenic model, thereby updating previous results. We discuss the matching in detail, in particular as regards the different mass spectra possible for the new particles involved. We furthermore develop approximate analytical solutions to the RGEs for an extensive list of illustrative cases, covering all general tendencies that can appear in the model. Comparing them with fully numerical solutions, we give a comprehensive discussion of the running in the scotogenic model. Our approach is mainly top-down, but we also discuss an attempt to get information on the values of the fundamental parameters when inputting the low-energy measured quantities in a bottom-up manner. This work serves as the basis for a full parameter scan of the model, thereby relating its low- and high-energy phenomenology, to fully exploit the available information.

  5. Biocellion: accelerating computer simulation of multicellular biological system models.

    Science.gov (United States)

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-11-01

    Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser-resolution approaches when simulating large biological systems. High-performance parallel computers have the potential to address this computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling in the function bodies of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in a soil aggregate as case studies. Biocellion runs on x86-compatible systems with the 64-bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  6. File Specification for the 7-km GEOS-5 Nature Run, Ganymed Release Non-Hydrostatic 7-km Global Mesoscale Simulation

    Science.gov (United States)

    da Silva, Arlindo M.; Putman, William; Nattala, J.

    2014-01-01

    Details about the variables listed in this file specification can be found in a separate document, the GEOS-5 File Specification Variable Definition Glossary. Documentation about the current access methods for products described in this document can be found on the GEOS-5 Nature Run portal: http://gmao.gsfc.nasa.gov/projects/G5NR. Information on the scientific quality of this simulation will appear in a forthcoming NASA Technical Report Series on Global Modeling and Data Assimilation to be available from http://gmao.gsfc.nasa.gov/pubs/tm/.

  7. Using ProModel as a simulation tool to assist plant layout design and planning: Case study plastic packaging factory

    Directory of Open Access Journals (Sweden)

    Pochamarn Tearwattanarattikal

    2008-01-01

    Full Text Available This study is about the application of a simulation model to assist decision making on expanding capacity and on plant layout design and planning. The plant layout design concept is performed first to create the physical layouts; then the simulation model is used to test the capability of the plant to meet various demand forecast scenarios. The study employed the ProModel package as a tool, using the model to compare performances in terms of % utilization, characteristics of WIP, and ability to meet due dates. The verification and validation stages were performed before running the scenarios. The model runs daily production, and the capacity-constraint resources are then identified by % utilization. The expanding-capacity policy can be extra shift-working hours or an increased number of machines. After expanding-capacity solutions are found, the physical layout is selected based on the criteria of space available for WIP and easy flow of material.

  8. Split-phase motor running as capacitor starts motor and as capacitor run motor

    Directory of Open Access Journals (Sweden)

    Yahaya Asizehi ENESI

    2016-07-01

    Full Text Available In this paper, the input parameters of a single-phase split-phase induction motor are taken to investigate and study the output performance characteristics of capacitor start and capacitor run induction motors. The values of these input parameters are used in the design characteristics of the capacitor run and capacitor start motor, with each motor connected to a rated or standard capacitor in series with the auxiliary (starting) winding for normal operating conditions. The magnitudes of capacitance that develop maximum torque in the capacitor start motor and the capacitor run motor are investigated and determined by simulation. Each of these capacitors is connected to the auxiliary winding of the split-phase motor, thereby transforming it into a capacitor start or capacitor run motor. The starting current and starting torque of the split-phase motor (SPM), capacitor run motor (CRM) and capacitor start motor (CSM) are compared for their suitability in operational performance and applications.

  9. Option Valuation with Long-run and Short-run Volatility Components

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Jacobs, Kris; Ornthanalai, Chayawat

    This paper presents a new model for the valuation of European options, in which the volatility of returns consists of two components. One of these components is a long-run component, and it can be modeled as fully persistent. The other component is short-run and has a zero mean. Our model can be viewed as an affine version of Engle and Lee (1999), allowing for easy valuation of European options. The model substantially outperforms a benchmark single-component volatility model that is well-established in the literature, and it fits options better than a model that combines conditional ... model long-maturity and short-maturity options.
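
    Schematically, the return variance in such a two-component model decomposes as (a generic statement of the structure, not the paper's exact dynamics):

        h_t = q_t + s_t,

    where $q_t$ is the highly persistent long-run variance component and $s_t$ is the zero-mean short-run component.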

  10. Six Weeks Habituation of Simulated Barefoot Running Induces Neuromuscular Adaptations and Changes in Foot Strike Patterns in Female Runners

    Science.gov (United States)

    Khowailed, Iman Akef; Petrofsky, Jerrold; Lohman, Everett; Daher, Noha

    2015-01-01

    Background: The aim of this study was to examine the effects of a 6-week training program of simulated barefoot running (SBR) on running kinetics in habitually shod (shoe-wearing) female recreational runners. Material/Methods: Twelve female runners aged 25.7±3.4 years gradually increased running distance in Vibram FiveFingers minimal shoes over a 6-week period. Kinetic analysis of treadmill running at 10 km/h was performed pre- and post-intervention in shod running, non-habituated SBR, and habituated SBR conditions. Spatiotemporal parameters, ground reaction force components, and electromyography (EMG) were measured in all conditions. Results: Post-intervention data indicated a significant decrease across time in habituated SBR for EMG activity of the tibialis anterior (TA) in the pre-activation and absorptive phases of running (P<0.05) in shod running, unhabituated SBR, and habituated SBR. Six weeks of SBR was associated with a significant decrease in loading rates and impact forces. Additionally, SBR significantly decreased stride length, step duration, and flight time, and stride frequency was significantly higher compared to shod running. Conclusions: The findings of this study indicate that changes in motor patterns in previously habitually shod runners are possible and can be accomplished within 6 weeks. Non-habituated SBR did not show the significant neuromuscular adaptation in the EMG activity of the TA and GAS that was manifested after 6 weeks of habituated SBR. PMID:26166443

  11. Simulated pre-industrial climate in Bergen Climate Model (version 2): model description and large-scale circulation features

    Directory of Open Access Journals (Sweden)

    O. H. Otterå

    2009-11-01

    Full Text Available The Bergen Climate Model (BCM) is a fully-coupled atmosphere-ocean-sea-ice model that provides state-of-the-art computer simulations of the Earth's past, present, and future climate. Here, a pre-industrial multi-century simulation with an updated version of BCM is described and compared to observational data. The model is run without any form of flux adjustments and is stable for several centuries. The simulated climate reproduces the general large-scale circulation in the atmosphere reasonably well, except for a positive bias in the high latitude sea level pressure distribution. Also, by introducing an updated turbulence scheme in the atmosphere model, a persistent cold bias has been eliminated. For the ocean part, the model drifts in sea surface temperatures and salinities are considerably reduced compared to earlier versions of BCM. Improved conservation properties in the ocean model have contributed to this. Furthermore, by choosing a reference pressure at 2000 m and including thermobaric effects in the ocean model, a more realistic meridional overturning circulation is simulated in the Atlantic Ocean. The simulated sea-ice extent in the Northern Hemisphere is in general agreement with observational data except for summer, where the extent is somewhat underestimated. In the Southern Hemisphere, large negative biases are found in the simulated sea-ice extent. This is partly related to problems with the mixed layer parametrization, causing the mixed layer in the Southern Ocean to be too deep, which in turn makes it hard to maintain a realistic sea-ice cover here. However, despite some problematic issues, the pre-industrial control simulation presented here should still be appropriate for climate change studies requiring multi-century simulations.

  12. Changes in running pattern due to fatigue and cognitive load in orienteering.

    Science.gov (United States)

    Millet, Guillaume Y; Divert, Caroline; Banizette, Marion; Morin, Jean-Benoit

    2010-01-01

    The aim of this study was to examine the influence of fatigue on running biomechanics in normal running, in normal running with a cognitive task, and in running while map reading. Nineteen international and less experienced orienteers performed a fatiguing running exercise of duration and intensity similar to a classic distance orienteering race on an instrumented treadmill while performing mental arithmetic, an orienteering simulation, and control running at regular intervals. Two-way repeated-measures analysis of variance did not reveal any significant difference between mental arithmetic and control running for any of the kinematic and kinetic parameters analysed eight times over the fatiguing protocol. However, these parameters were systematically different between the orienteering simulation and the other two conditions (mental arithmetic and control running). The adaptations in orienteering simulation running were significantly more pronounced in the elite group when step frequency, peak vertical ground reaction force, vertical stiffness, and maximal downward displacement of the centre of mass during contact were considered. The effects of fatigue on running biomechanics depended on whether the orienteers read their map or ran normally. It is concluded that adding a cognitive load does not modify running patterns. Therefore, all changes in running pattern observed during the orienteering simulation, particularly in elite orienteers, are the result of adaptations to enable efficient map reading and/or potentially prevent injuries. Finally, running patterns are not affected to the same extent by fatigue when a map reading task is added.

  13. Grounded running in quails: simulations indicate benefits of observed fixed aperture angle between legs before touch-down.

    Science.gov (United States)

    Andrada, Emanuel; Rode, Christian; Blickhan, Reinhard

    2013-10-21

    Many birds use grounded running (running without aerial phases) over a wide range of speeds. Contrary to walking and running, numerical investigations of this gait based on the BSLIP (bipedal spring-loaded inverted pendulum) template are rare. To obtain template-related parameters of quails (e.g. leg stiffness) we used X-ray cinematography combined with ground reaction force measurements of quail grounded running. Interestingly, with speed the quails did not adjust the swing leg's angle of attack with respect to the ground but adapted the angle between the legs (which we termed the aperture angle), fixing it about 30 ms before touchdown. In simulations with the BSLIP we compared this swing-leg alignment policy with the fixed angle of attack with respect to the ground typically used in the literature. We found symmetric periodic grounded running in a simply connected subset comprising one third of the investigated parameter space. The fixed aperture angle strategy revealed improved local stability and surprising tolerance with respect to large perturbations. Starting from the periodic solutions, after step-down step-up or step-up step-down perturbations of 10% of leg rest length, in the vast majority of cases the bipedal SLIP could accomplish at least 50 steps before falling. The fixed angle of attack strategy was not feasible. We propose that, in small animals in particular, grounded running may be a common gait that allows highly compliant systems to exploit energy storage without the necessity of quick changes in the locomotor program when facing perturbations. © 2013 Elsevier Ltd. All rights reserved.
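
    The two swing-leg policies compared in the paper can be stated compactly. Below is a minimal sketch for a planar BSLIP with hypothetical quail-scale numbers (not the parameters identified in the paper): with a fixed angle of attack the touchdown geometry ignores the body's state, while with a fixed aperture angle the swing leg is slaved to the stance leg, which is what yields the self-adjusting touchdown.

```python
import numpy as np

# Minimal sketch of the two swing-leg policies for a planar bipedal SLIP.
# Angles are measured from the horizontal; all numbers are assumptions.
L0 = 0.12                      # leg rest length (m), quail-scale assumption
ALPHA0 = np.radians(65)        # fixed angle of attack (policy A)
PHI = np.radians(30)           # fixed aperture angle between the legs (policy B)

def touchdown_fixed_aoa(stance_angle):
    """Policy A: swing-leg touchdown angle is constant w.r.t. the ground."""
    return ALPHA0

def touchdown_fixed_aperture(stance_angle):
    """Policy B (observed in quails): the angle between the legs is fixed,
    so the touchdown angle follows the current stance-leg angle."""
    return stance_angle - PHI

# As the stance leg rotates through a step, policy B adapts the swing leg's
# landing height L0*sin(angle), while policy A keeps it constant.
for stance in np.radians([110, 100, 90, 80]):
    a = touchdown_fixed_aoa(stance)
    b = touchdown_fixed_aperture(stance)
    print(f"stance {np.degrees(stance):5.1f} deg -> "
          f"A: {L0*np.sin(a):.3f} m, B: {L0*np.sin(b):.3f} m")
```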

  14. Statistical Emulation of Climate Model Projections Based on Precomputed GCM Runs

    KAUST Repository

    Castruccio, Stefano

    2014-03-01

    The authors describe a new approach for emulating the output of a fully coupled climate model under arbitrary forcing scenarios that is based on a small set of precomputed runs from the model. Temperature and precipitation are expressed as simple functions of the past trajectory of atmospheric CO2 concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures the nonlinear evolution of spatial patterns of climate anomalies inherent in transient climates. The approach does as well as pattern scaling in all circumstances and substantially better in many; it is not computationally demanding; and, once the statistical model is fit, it produces emulated climate output effectively instantaneously. It may therefore find wide application in climate impacts assessments and other policy analyses requiring rapid climate projections.
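
    A toy version of the idea, with synthetic training runs standing in for a real GCM: the emulator is a linear map from the past CO2 trajectory to temperature, fit by least squares on a few scenarios and then evaluated essentially instantaneously on a new scenario. Everything below (the response kernel, scenarios, noise level) is an assumption for illustration.

```python
import numpy as np

# Toy emulator: temperature as a simple function of the past CO2 trajectory,
# trained on a small set of precomputed runs. synthetic_run() is a stand-in
# for a full climate model, not real GCM output.
rng = np.random.default_rng(0)
years, lags = 100, 30

def synthetic_run(co2):
    kernel = np.exp(-np.arange(lags) / 10.0)
    kernel /= kernel.sum()
    return 3.0 * np.convolve(co2, kernel)[:years] + rng.normal(0, 0.05, years)

# a limited set of training scenarios with different CO2 growth rates
scenarios = [np.linspace(1.0, g, years) for g in (1.5, 2.0, 2.5, 3.0)]
X, y = [], []
for co2 in scenarios:
    T = synthetic_run(co2)
    for t in range(lags, years):
        X.append(co2[t - lags:t])      # predictor: the last `lags` years of CO2
        y.append(T[t])
coef, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)

# emulate an unseen forcing scenario without running the "model" again
new_co2 = np.linspace(1.0, 2.2, years)
X_new = np.array([new_co2[t - lags:t] for t in range(lags, years)])
T_emulated = X_new @ coef
print(T_emulated[-5:])
```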

  15. 40 CFR 86.134-96 - Running loss test.

    Science.gov (United States)

    2010-07-01

    40 CFR Protection of Environment, § 86.134-96 Running loss test (Heavy-Duty Vehicles; Test Procedures). (a) Overview. Gasoline- and methanol-fueled vehicles are to be tested for running loss emissions during simulated high-temperature urban...

  16. Enhanced Contact Graph Routing (ECGR) MACHETE Simulation Model

    Science.gov (United States)

    Segui, John S.; Jennings, Esther H.; Clare, Loren P.

    2013-01-01

    Contact Graph Routing (CGR) for Delay/Disruption Tolerant Networking (DTN) space-based networks makes use of the predictable nature of node contacts to make real-time routing decisions given unpredictable traffic patterns. The contact graph will have been disseminated to all nodes before the start of route computation. CGR was designed for space-based networking environments where future contact plans are known or are independently computable (e.g., using known orbital dynamics). For each data item (known as a bundle in DTN), a node independently performs route selection by examining possible paths to the destination. Route computation could conceivably run thousands of times a second, so computational load is important. This work refers to the simulation software model of Enhanced Contact Graph Routing (ECGR) for DTN Bundle Protocol in JPL's MACHETE simulation tool. The simulation model was used for performance analysis of CGR and led to several performance enhancements. The simulation model was used to demonstrate the improvements of ECGR over CGR as well as other routing methods in space network scenarios. ECGR moved to using earliest arrival time because it is a global monotonically increasing metric that guarantees the safety properties needed for the solution's correctness since route re-computation occurs at each node to accommodate unpredicted changes (e.g., traffic pattern, link quality). Furthermore, using earliest arrival time enabled the use of the standard Dijkstra algorithm for path selection. The Dijkstra algorithm for path selection has a well-known inexpensive computational cost. These enhancements have been integrated into the open source CGR implementation. The ECGR model is also useful for route metric experimentation and comparisons with other DTN routing protocols particularly when combined with MACHETE's space networking models and Delay Tolerant Link State Routing (DTLSR) model.
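
    The key point, that earliest arrival time increases monotonically along any route and so the standard Dijkstra algorithm applies, fits in a few lines. Below is a sketch under simplifying assumptions (zero transmission time beyond the one-way light time, unlimited contact volume, and an invented contact plan); it is not the ION/ECGR implementation itself.

```python
import heapq

# Earliest-arrival-time route search over a DTN contact plan, in the spirit
# of (E)CGR. Each contact: (sender, receiver, start, end, one_way_light_time).
contacts = [
    ("A", "B", 0, 100, 5), ("B", "C", 20, 60, 8),
    ("A", "C", 50, 200, 30), ("C", "D", 40, 300, 2),
]

def earliest_arrival(plan, source, dest, t0=0):
    # Dijkstra is valid because arrival time never decreases along a path.
    best = {source: t0}
    pq = [(t0, source)]
    while pq:
        t, node = heapq.heappop(pq)
        if node == dest:
            return t
        if t > best.get(node, float("inf")):
            continue
        for snd, rcv, start, end, owlt in plan:
            if snd != node:
                continue
            depart = max(t, start)          # wait for the contact to open
            if depart > end:                # contact already closed
                continue
            arrive = depart + owlt
            if arrive < best.get(rcv, float("inf")):
                best[rcv] = arrive
                heapq.heappush(pq, (arrive, rcv))
    return None

print(earliest_arrival(contacts, "A", "D"))  # -> 42 (A->B t=5, B->C t=28, C->D t=42)
```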

  17. Implementation of the frequency dependent line model in a real-time power system simulator

    Directory of Open Access Journals (Sweden)

    Reynaldo Iracheta-Cortez

    2017-09-01

    Full Text Available This paper describes the implementation of the frequency-dependent line model (FD-Line) in a real-time digital power system simulator. The main goal of this development is to describe a general procedure for incorporating new, realistic models of power system components into modern real-time simulators based on the Electromagnetic Transients Program (EMTP). The procedure first presents the steps to obtain the time-domain solution of the differential equations that model the electromagnetic behavior of multi-phase transmission lines with frequency-dependent parameters. The algorithmic solution of the FD-Line model is then implemented in the Simulink environment, through an S-function programmed in C, for running off-line simulations of electromagnetic transients. This implementation allows the FD-Line model to be freely combined with any element of the Power System Blockset library and used to build any network topology. The main advantage of a power network built in Simulink is that it can be executed in real time by means of the commercial eMEGAsim simulator. Finally, several simulation cases are presented to validate the accuracy and the real-time performance of the FD-Line model.
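
    What makes such models feasible in real time is that, once the line's frequency-dependent functions are fitted with rational (exponential) terms, each history convolution updates recursively in O(1) per time step. The sketch below shows that building block for a single exponential term with invented pole/residue values; the actual FD-Line implementation fits and superposes many such terms for the characteristic admittance and propagation function.

```python
import numpy as np

# Recursive convolution for one exponential history term h(t) = k*exp(-a*t).
# Pole/residue/time-step values are made up for illustration.
k, a, dt = 2.0, 400.0, 50e-6           # residue, pole (1/s), time step (s)
alpha = np.exp(-a * dt)
beta = (k / a) * (1.0 - alpha)          # exact if the input is held constant over dt

def step(state, x_now):
    """Advance the convolution state one step: s[n] = alpha*s[n-1] + beta*x[n]."""
    return alpha * state + beta * x_now

# drive the term with a 50 Hz test signal and accumulate the history response
s = 0.0
xs = np.sin(2 * np.pi * 50 * np.arange(2000) * dt)
history = []
for x in xs:
    s = step(s, x)                      # O(1) work per simulation step
    history.append(s)
```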

  18. On lumped models for thermodynamic properties of simulated annealing problems

    International Nuclear Information System (INIS)

    Andresen, B.; Pedersen, J.M.; Salamon, P.; Hoffmann, K.H.; Mosegaard, K.; Nulton, J.

    1987-01-01

    The paper describes a new method for estimating thermodynamic properties of simulated annealing problems using data obtained during a simulated annealing run. The method works by estimating energy-to-energy transition probabilities and is well adapted to simulations such as simulated annealing, in which the system is never in equilibrium. (orig.)
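
    The core bookkeeping, tallying energy-to-energy transitions during a single annealing run and row-normalizing the tallies into probability estimates, can be sketched briefly. The toy energy landscape, energy binning, proposal width, and cooling schedule below are arbitrary assumptions, not the paper's estimator in detail.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)

def energy(x):
    """Toy 1-D landscape; round() bins energies so they can index the tallies."""
    return round(x * x - np.cos(4 * x), 1)

counts = defaultdict(lambda: defaultdict(int))
x, T = 2.0, 5.0
for _ in range(20000):
    x_new = x + rng.normal(0, 0.3)
    E, E_new = energy(x), energy(x_new)
    if E_new <= E or rng.random() < np.exp(-(E_new - E) / T):
        counts[E][E_new] += 1          # accepted move: record E -> E_new
        x = x_new
    else:
        counts[E][E] += 1              # rejection keeps the system at E
    T *= 0.9997                        # annealing schedule (never equilibrated)

# row-normalize the tallies into estimated transition probabilities
probs = {E: {E2: n / sum(row.values()) for E2, n in row.items()}
         for E, row in counts.items()}
```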

  19. A numerical study of tsunami wave impact and run-up on coastal cliffs using a CIP-based model

    Science.gov (United States)

    Zhao, Xizeng; Chen, Yong; Huang, Zhenhua; Hu, Zijun; Gao, Yangyang

    2017-05-01

    There is a general lack of understanding of tsunami wave interaction with complex geographies, especially the process of inundation. Numerical simulations are performed to understand the effects of several factors on tsunami wave impact and run-up in the presence of gentle submarine slopes and coastal cliffs, using an in-house code, a constrained interpolation profile (CIP)-based model. The model employs a high-order finite difference method, the CIP method, as the flow solver; utilizes a VOF-type method, the tangent of hyperbola for interface capturing/slope weighting (THINC/SW) scheme, to capture the free surface; and treats the solid boundary by an immersed boundary method. A series of incident waves are arranged to interact with varying coastal geographies. Numerical results are compared with experimental data and good agreement is obtained. The influences of the gentle submarine slope, the coastal cliff and the incident wave height are discussed. It is found that the variation of the tsunami amplification factor with incident wave height depends on the gradient of the cliff slope, with a critical value of about 45°. The run-up on a toe-erosion cliff is smaller than that on a normal cliff. The run-up is also related to the length of the gentle submarine slope, with a critical value of about 2.292 m in the present model for most cases. The impact pressure on the cliff is extremely large and concentrated, and the backflow effect is non-negligible. The results are precise and useful for inverting tsunami sources and forecasting disasters.

  20. Tsunami Simulators in Physical Modelling Laboratories - From Concept to Proven Technique

    Science.gov (United States)

    Allsop, W.; Chandler, I.; Rossetto, T.; McGovern, D.; Petrone, C.; Robinson, D.

    2016-12-01

    Before 2004, there was little public awareness around Indian Ocean coasts of the potential size and effects of tsunami. Even in 2011, the scale and extent of devastation by the Japan East Coast Tsunami was unexpected. There were very few engineering tools to assess onshore impacts of tsunami, and so no agreement on robust methods to predict forces on coastal defences, buildings or related infrastructure. Modelling generally used substantial simplifications of either solitary waves (far too short durations) or dam break (unrealistic and/or uncontrolled wave forms). This presentation will describe research from the EPI-centre, HYDRALAB IV, URBANWAVES and CRUST projects over the last 10 years that has developed and refined pneumatic Tsunami Simulators for the hydraulic laboratory. These unique devices have been used to model generic elevated and N-wave tsunamis up to and over simple shorelines, and at example defences. They have reproduced full-duration tsunamis, including the Mercator trace from 2004, at 1:50 scale. Engineering scale models subjected to those tsunamis have measured wave run-up on simple slopes, forces on idealised sea defences and pressures / forces on buildings. This presentation will describe how these pneumatic Tsunami Simulators work, demonstrate how they have generated tsunami waves longer than the facility within which they operate, and will highlight research results from the three generations of Tsunami Simulator. Of direct relevance to engineers and modellers will be measurements of wave run-up levels and comparison with theoretical predictions. Recent measurements of forces on individual buildings have been generalized by separate experiments on buildings (up to 4 rows), which show that the greatest forces can act on the landward (not seaward) buildings. Continuing research in the 70 m long, 4 m wide Fast Flow Facility on tsunami defence structures has also measured forces on buildings in the lee of a failed defence wall.

  1. Testing Lorentz Invariance Emergence in the Ising Model using Monte Carlo simulations

    CERN Document Server

    Dias Astros, Maria Isabel

    2017-01-01

    In the context of studying quantum gravity with Lorentz invariance as an emergent phenomenon at low energy scales, a system composed of two interacting 3D Ising models (one with an anisotropy in one direction) was proposed. Two Monte Carlo simulations were run: one for the 2D Ising model and one for the target model. In both cases the observables (energy, magnetization, heat capacity and magnetic susceptibility) were computed for different lattice sizes, and a Binder cumulant was introduced in order to estimate the critical temperature of the systems. Moreover, the correlation function was calculated for the 2D Ising model.
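
    For reference, the standard workflow of the 2D Ising part of such a study, Metropolis sweeps, magnetization moments, and the Binder cumulant U4 = 1 - <m^4>/(3<m^2>^2) whose curves for different lattice sizes cross near the critical temperature, looks roughly like this. Lattice sizes and sweep counts are chosen only to keep the run short, not to match the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

def sweep(s, beta):
    """One Metropolis sweep of a 2D Ising lattice with periodic boundaries."""
    L = s.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
        dE = 2 * s[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] *= -1

def binder(L, T, n_therm=400, n_meas=400):
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(n_therm):
        sweep(s, 1.0 / T)                  # thermalization
    m2 = m4 = 0.0
    for _ in range(n_meas):
        sweep(s, 1.0 / T)
        m = s.mean()
        m2 += m * m
        m4 += m ** 4
    m2, m4 = m2 / n_meas, m4 / n_meas
    return 1.0 - m4 / (3.0 * m2 * m2)      # Binder cumulant U4

# curves for two sizes should cross near the exact Tc ~ 2.269
for L in (8, 16):
    print(L, [round(binder(L, T), 3) for T in (2.0, 2.27, 2.6)])
```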

  2. Simulation research about China Experimental Fast Reactor steam turboset based on Flowmaster platform

    International Nuclear Information System (INIS)

    Yan Hao; Tian Zhaofei

    2014-01-01

    In the third loop of the China Experimental Fast Reactor (CEFR), the steam turboset plays an important role in converting heat energy into electric energy. However, the turbosets have not been operated above 40%P_0 (where P_0 is full power) since they were installed, so a simulation study is necessary. Based on the real turbosets in CEFR, simulation models were created on the Flowmaster platform. Using these models, a steady-state result at full power was obtained, which is in accordance with the design parameters. A transient simulation covering operating conditions from full power down to 40%P_0 was also performed, and the result verifies part of the performance and running conditions of the turbosets. The simulation results show that, on the Flowmaster platform, the running behavior of the simulation models complies with the design requirements and offers reference values for actual operation. These simulation models can also serve as references for other simulation models of the third loop of CEFR. (authors)

  3. Whole-Motion Model of Perception during Forward- and Backward-Facing Centrifuge Runs

    Science.gov (United States)

    Holly, Jan E.; Vrublevskis, Arturs; Carlson, Lindsay E.

    2009-01-01

    Illusory perceptions of motion and orientation arise during human centrifuge runs without vision. Asymmetries have been found between acceleration and deceleration, and between forward-facing and backward-facing runs. Perceived roll tilt has been studied extensively during upright fixed-carriage centrifuge runs, and other components have been studied to a lesser extent. Certain, but not all, perceptual asymmetries in acceleration-vs-deceleration and forward-vs-backward motion can be explained by existing analyses. The immediate acceleration-deceleration roll-tilt asymmetry can be explained by the three-dimensional physics of the external stimulus; in addition, longer-term data has been modeled in a standard way using physiological time constants. However, the standard modeling approach is shown in the present research to predict forward-vs-backward-facing symmetry in perceived roll tilt, contradicting experimental data, and to predict perceived sideways motion, rather than forward or backward motion, around a curve. The present work develops a different whole-motion-based model taking into account the three-dimensional form of perceived motion and orientation. This model predicts perceived forward or backward motion around a curve, and predicts additional asymmetries such as the forward-backward difference in roll tilt. This model is based upon many of the same principles as the standard model, but includes an additional concept of familiarity of motions as a whole. PMID:19208962

  4. Modeling and simulation of offshore wind farm O&M processes

    Energy Technology Data Exchange (ETDEWEB)

    Joschko, Philip, E-mail: joschko@informatik.uni-hamburg.de [University of Hamburg, Dept. of Informatics, Vogt-Kölln-Straße 30, 22527 Hamburg (Germany); Widok, Andi H., E-mail: a.widok@htw-berlin.de [University of Hamburg, Dept. of Informatics, Vogt-Kölln-Straße 30, 22527 Hamburg (Germany); Appel, Susanne, E-mail: susanne.appel@hs-bremen.de [HSB Bremen, Institute for Environment and Biotechnology, Neustadtswall 30, 28199 Bremen (Germany); Greiner, Saskia, E-mail: saskia.greiner@hs-bremen.de [HSB Bremen, Institute for Environment and Biotechnology, Neustadtswall 30, 28199 Bremen (Germany); Albers, Henning, E-mail: henning.albers@hs-bremen.de [HSB Bremen, Institute for Environment and Biotechnology, Neustadtswall 30, 28199 Bremen (Germany); Page, Bernd, E-mail: page@informatik.uni-hamburg.de [University of Hamburg, Dept. of Informatics, Vogt-Kölln-Straße 30, 22527 Hamburg (Germany)

    2015-04-15

    This paper describes a holistic approach to operation and maintenance (O&M) processes in the domain of offshore wind farm power generation. The acquisition and process visualization is followed by a risk analysis of all relevant processes. Based on this, a tool was designed that can model the defined processes in BPMN 2.0 notation, as well as connect and simulate them. Furthermore, the notation was enriched with new elements representing other relevant factors that were, to date, only displayable with much higher effort. In this regard a variety of more complex situations were integrated, such as new process interactions depending on different weather influences, for which a stochastic weather generator was combined with the business process simulation, or other wind farm aspects important to the smooth running of the offshore wind farms. In addition, the choices of methodologies, such as the simulation framework or the business process notation, are presented and elaborated with respect to the impact they had on the development of the approach and the software solution. - Highlights: • Analysis of operation and maintenance processes of offshore wind farms • Process modeling with BPMN 2.0 • Domain-specific simulation tool.

  5. Modeling and simulation of offshore wind farm O&M processes

    International Nuclear Information System (INIS)

    Joschko, Philip; Widok, Andi H.; Appel, Susanne; Greiner, Saskia; Albers, Henning; Page, Bernd

    2015-01-01

    This paper describes a holistic approach to operation and maintenance (O&M) processes in the domain of offshore wind farm power generation. The acquisition and process visualization is followed by a risk analysis of all relevant processes. Based on this, a tool was designed that can model the defined processes in BPMN 2.0 notation, as well as connect and simulate them. Furthermore, the notation was enriched with new elements representing other relevant factors that were, to date, only displayable with much higher effort. In this regard a variety of more complex situations were integrated, such as new process interactions depending on different weather influences, for which a stochastic weather generator was combined with the business process simulation, or other wind farm aspects important to the smooth running of the offshore wind farms. In addition, the choices of methodologies, such as the simulation framework or the business process notation, are presented and elaborated with respect to the impact they had on the development of the approach and the software solution. - Highlights: • Analysis of operation and maintenance processes of offshore wind farms • Process modeling with BPMN 2.0 • Domain-specific simulation tool.

  6. Modelling cadmium contamination in paddy soils under long-term remediation measures: Model development and stochastic simulations.

    Science.gov (United States)

    Peng, Chi; Wang, Meie; Chen, Weiping

    2016-09-01

    A pollutant accumulation model (PAM) based on mass balance theory was developed to simulate long-term changes of heavy metal concentrations in soil. When combined with Monte Carlo simulation, the model can predict the probability distributions of heavy metals in a soil-water-plant system with fluctuating environmental parameters and inputs from multiple pathways. The model was used for evaluating different remediation measures to deal with Cd contamination of paddy soils in Youxian county (Hunan province), China, under five scenarios, namely the default scenario (A), not returning paddy straw to the soil (B), reducing the deposition of Cd (C), liming (D), and integrating several remediation measures (E). The model predicted that the Cd contents of soil can be lowered significantly by (B) and those of the plants by (D). However, in the long run, (D) will increase soil Cd. The concentrations of Cd in both soils and rice grains can be effectively reduced by (E), although it will take decades of effort. The history of Cd pollution and the major causes of Cd accumulation in soil were studied by means of sensitivity analysis and retrospective simulation. Copyright © 2016 Elsevier Ltd. All rights reserved.
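
    The mass-balance core of such a model, combined with Monte Carlo inputs, reduces to iterating an annual budget over an ensemble of realizations. The sketch below uses invented rates and distributions, not the calibrated PAM parameters for Youxian county.

```python
import numpy as np

# Toy annual Cd mass balance iterated over a Monte Carlo ensemble.
# All rates and distributions are invented placeholders.
rng = np.random.default_rng(3)
years, n_mc = 50, 2000
soil = np.full(n_mc, 0.30)                    # initial soil Cd (mg/kg), assumed

for _ in range(years):
    deposition = rng.lognormal(np.log(0.004), 0.3, n_mc)   # atmospheric input
    irrigation = rng.lognormal(np.log(0.002), 0.3, n_mc)   # irrigation water input
    straw_return = 0.005 * soil               # Cd recycled with returned straw
    losses = 0.020 * soil                     # leaching + offtake with harvest
    soil += deposition + irrigation + straw_return - losses

# probability distribution of soil Cd after the simulation horizon
print(np.percentile(soil, [5, 50, 95]))
```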

  7. Short-Run and Long-Run Elasticities of Diesel Demand in Korea

    Directory of Open Access Journals (Sweden)

    Seung-Hoon Yoo

    2012-11-01

    Full Text Available This paper investigates the demand function for diesel in Korea covering the period 1986–2011. The short-run and long-run elasticities of diesel demand with respect to price and income are empirically examined using a co-integration and error-correction model. The short-run and long-run price elasticities are estimated to be −0.357 and −0.547, respectively. The short-run and long-run income elasticities are computed to be 1.589 and 1.478, respectively. Thus, diesel demand is relatively inelastic to price changes and elastic to income changes in both the short run and the long run. Therefore, demand-side management through raising the price of diesel will be ineffective, and tightening regulation so that diesel is used more efficiently appears more promising in Korea. The demand for diesel is expected to increase continuously as the economy grows.
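
    The two-step Engle-Granger mechanics behind such estimates fits in a short script. The series below are simulated stand-ins (the study itself uses Korean diesel data), so the printed numbers only illustrate where long-run and short-run elasticities and the adjustment speed come from, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 104                                                  # ~26 years of quarters
income = np.cumsum(rng.normal(0.01, 0.01, n))            # log income, I(1)
price = np.cumsum(rng.normal(0.00, 0.02, n))             # log price, I(1)
demand = 1.5 * income - 0.55 * price + rng.normal(0, 0.02, n)  # long-run relation

def ols(y, X):
    """Least squares with a constant; returns coefficients and residuals."""
    X = np.column_stack([np.ones(len(y)), *X])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b, y - X @ b

# Step 1: long-run (co-integrating) regression -> long-run elasticities
b_lr, ect = ols(demand, [income, price])
# Step 2: short-run dynamics in differences with the lagged error-correction term
dy, dI, dP = np.diff(demand), np.diff(income), np.diff(price)
b_sr, _ = ols(dy, [dI, dP, ect[:-1]])
print("long-run price elasticity :", round(b_lr[2], 3))
print("short-run price elasticity:", round(b_sr[2], 3))
print("adjustment speed          :", round(b_sr[3], 3))
```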

  8. A non-linear and stochastic response surface method for Bayesian estimation of uncertainty in soil moisture simulation from a land surface model

    Directory of Open Access Journals (Sweden)

    F. Hossain

    2004-01-01

    Full Text Available This study presents a simple and efficient scheme for Bayesian estimation of uncertainty in soil moisture simulation by a Land Surface Model (LSM). The scheme is assessed within a Monte Carlo (MC) simulation framework based on the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. A primary limitation of the GLUE method is the prohibitive computational burden imposed by uniform random sampling of the model's parameter distributions. Sampling is improved in the proposed scheme by stochastic modeling of the parameters' response surface that recognizes the non-linear deterministic behavior between soil moisture and land surface parameters. Uncertainty in soil moisture simulation (model output) is approximated through a Hermite polynomial chaos expansion of normal random variables that represent the model's parameter (model input) uncertainty. The unknown coefficients of the polynomial are calculated using a limited number of model simulation runs. The calibrated polynomial is then used as a fast-running proxy to the slower-running LSM to predict the degree of representativeness of a randomly sampled model parameter set. The scheme's sampling efficiency is evaluated through comparison with fully random MC sampling (the norm for GLUE) and with the nearest-neighborhood sampling technique. The scheme reduced the computational burden of random MC sampling for GLUE by 10-70%, and was about 10% more efficient than the nearest-neighborhood sampling method in predicting a sampled parameter set's degree of representativeness. The GLUE based on the proposed sampling scheme did not alter the essential features of the uncertainty structure in soil moisture simulation. The scheme can potentially make GLUE uncertainty estimation for any LSM more efficient, as it does not impose any additional structural or distributional assumptions.
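
    A one-parameter version of the proxy idea, assuming a standard-normal input and using NumPy's probabilists' Hermite basis: expand the output in Hermite polynomials, fit the coefficients from a limited number of "expensive" runs, then evaluate the cheap polynomial instead of the model. Here slow_model is only a stand-in for the LSM, and the real scheme is multi-parameter.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(5)

def slow_model(xi):
    """Stand-in for the land surface model (pretend each call is expensive)."""
    return np.tanh(1.5 * xi) + 0.1 * xi ** 2

order, n_train = 6, 40
xi_train = rng.standard_normal(n_train)      # standard-normal parameter samples
y_train = slow_model(xi_train)               # the limited set of training runs

# least-squares fit of the polynomial chaos coefficients: column k of the
# design matrix evaluates the k-th probabilists' Hermite polynomial He_k
V = np.column_stack([hermeval(xi_train, np.eye(order + 1)[k])
                     for k in range(order + 1)])
coef, *_ = np.linalg.lstsq(V, y_train, rcond=None)

proxy = lambda xi: hermeval(xi, coef)        # fast-running surrogate
xi_test = rng.standard_normal(10000)         # GLUE-style sampling, now cheap
print("max |proxy - model| on test points:",
      np.max(np.abs(proxy(xi_test) - slow_model(xi_test))))
```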

  9. Simulation-optimization model for production planning in the blood supply chain.

    Science.gov (United States)

    Osorio, Andres F; Brailsford, Sally C; Smith, Honora K; Forero-Matiz, Sonia P; Camacho-Rodríguez, Bernardo A

    2017-12-01

    Production planning in the blood supply chain is a challenging task. Many complex factors such as uncertain supply and demand, blood group proportions, shelf life constraints and different collection and production methods have to be taken into account, and thus advanced methodologies are required for decision making. This paper presents an integrated simulation-optimization model to support both strategic and operational decisions in production planning. Discrete-event simulation is used to represent the flows through the supply chain, incorporating collection, production, storing and distribution. On the other hand, an integer linear optimization model running over a rolling planning horizon is used to support daily decisions, such as the required number of donors, collection methods and production planning. This approach is evaluated using real data from a blood center in Colombia. The results show that, using the proposed model, key indicators such as shortages, outdated units, donors required and cost are improved.

  10. Simulation of LSI-11/PDP-11 series minicomputers

    International Nuclear Information System (INIS)

    Myers, J.R.; Cottrell, R.L.A.; Bricaud, B.M.

    1979-04-01

    A functional simulation of the PDP-11 series minicomputers was implemented to run either interactively or as a batch job on an IBM 370 computer. The simulator operates in two modes, the supervisor mode and the run mode. In the supervisor mode, the simulator implements a command language, which allows users to examine and change the contents of memory or other addressable registers in the simulated machine. Optionally, an instruction trace may also be turned on or off. In the run mode, the simulation of the instruction set is tested by successfully running DEC's MAINDEC basic instruction test on the simulated machine. The interrupt structure is modeled. The simulation is open-ended in the sense that users may define new peripheral devices by including their own FORTRAN-callable subroutines for each simulated device. Currently the following devices are supported: floppy disks, a console terminal, a card reader, a card punch, a line printer, and a communications multiplexer (DH11). With these devices, DEC's RT-11 versions 2C and 3B have been successfully run on the simulator. At SLAC this simulator is proving useful in debugging software for one-of-a-kind hardware configurations, such as communications front-end processors, that are not readily accessible for stand-alone testing. 5 figures

  11. The effective Standard Model after LHC Run I

    International Nuclear Information System (INIS)

    Ellis, John; Sanz, Verónica; You, Tevong

    2015-01-01

    We treat the Standard Model as the low-energy limit of an effective field theory that incorporates higher-dimensional operators to capture the effects of decoupled new physics. We consider the constraints imposed on the coefficients of dimension-6 operators by electroweak precision tests (EWPTs), applying a framework for the effects of dimension-6 operators on electroweak precision tests that is more general than the standard S,T formalism, and use measurements of Higgs couplings and the kinematics of associated Higgs production at the Tevatron and LHC, as well as triple-gauge couplings at the LHC. We highlight the complementarity between EWPTs, Tevatron and LHC measurements in obtaining model-independent limits on the effective Standard Model after LHC Run 1. We illustrate the combined constraints with the example of the two-Higgs doublet model.

  12. The Effective Standard Model after LHC Run I

    CERN Document Server

    Ellis, John; You, Tevong

    2015-01-01

    We treat the Standard Model as the low-energy limit of an effective field theory that incorporates higher-dimensional operators to capture the effects of decoupled new physics. We consider the constraints imposed on the coefficients of dimension-6 operators by electroweak precision tests (EWPTs), applying a framework for the effects of dimension-6 operators on electroweak precision tests that is more general than the standard $S,T$ formalism, and use measurements of Higgs couplings and the kinematics of associated Higgs production at the Tevatron and LHC, as well as triple-gauge couplings at the LHC. We highlight the complementarity between EWPTs, Tevatron and LHC measurements in obtaining model-independent limits on the effective Standard Model after LHC Run~1. We illustrate the combined constraints with the example of the two-Higgs doublet model.

  13. New Constraints on the running-mass inflation model

    OpenAIRE

    Covi, Laura; Lyth, David H.; Melchiorri, Alessandro

    2002-01-01

    We evaluate new observational constraints on the two-parameter scale-dependent spectral index predicted by the running-mass inflation model by combining the latest Cosmic Microwave Background (CMB) anisotropy measurements with the recent 2dFGRS data on the matter power spectrum, with Lyman $\alpha$ forest data and finally with theoretical constraints on the reionization redshift. We find that present data still allow significant scale-dependence of $n$, which occurs in a physically reasonable regime of parameter space.

  14. Toward a Run-to-Run Adaptive Artificial Pancreas: In Silico Results.

    Science.gov (United States)

    Toffanin, Chiara; Visentin, Roberto; Messori, Mirko; Palma, Federico Di; Magni, Lalo; Cobelli, Claudio

    2018-03-01

    Contemporary and future outpatient long-term artificial pancreas (AP) studies need to cope with the well-known large intra- and interday glucose variability occurring in type 1 diabetic (T1D) subjects. Here, we propose an adaptive model predictive control (MPC) strategy to account for it and test it in silico. A run-to-run (R2R) approach adapts the subcutaneous basal insulin delivery during the night and the carbohydrate-to-insulin ratio (CR) during the day, based on performance indices calculated from subcutaneous continuous glucose sensor data. In particular, R2R aims, first, to reduce the percentage of time in hypoglycemia and, secondarily, to improve the percentage of time in euglycemia and average glucose. In silico simulations are performed using the University of Virginia/Padova T1D simulator enriched by incorporating three novel features: intra- and interday variability of insulin sensitivity, different distributions of CR at breakfast, lunch, and dinner, and dawn phenomenon. After about two months of using the R2R approach with a scenario characterized by a random 30% variation of the nominal insulin sensitivity, the time in range and the time in tight range are increased by 11.39% and 44.87%, respectively, and the time spent above 180 mg/dl is reduced by 48.74%. An adaptive MPC algorithm based on R2R shows great potential in silico to capture intra- and interday glucose variability by improving both overnight and postprandial glucose control without increasing hypoglycemia. Making an AP adaptive is key for long-term real-life outpatient studies. These good in silico results are very encouraging and worth testing in vivo.
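
    Stripped of the MPC machinery, the run-to-run layer amounts to a once-per-day update rule driven by CGM-derived indices. Below is a deliberately simplified sketch; the thresholds, gains, and hypoglycemia-first ordering are illustrative assumptions, not the published algorithm.

```python
# Hedged sketch of a run-to-run basal update driven by daily CGM indices.
def r2r_update(basal, pct_hypo, pct_hyper, mean_glucose):
    """Return tomorrow's overnight basal rate given yesterday's CGM indices."""
    if pct_hypo > 3.0:                 # safety first: meaningful hypoglycemia
        return basal * 0.90            # -> cut basal by 10%
    if pct_hyper > 40.0 or mean_glucose > 180.0:
        return basal * 1.05            # persistent hyperglycemia -> small increase
    return basal                       # otherwise leave the profile alone

basal = 1.0                            # U/h, assumed starting overnight basal
days = [(8.0, 20.0, 140.0),            # (% time hypo, % time hyper, mean mg/dl)
        (1.0, 55.0, 190.0),
        (0.5, 30.0, 150.0)]
for indices in days:
    basal = r2r_update(basal, *indices)
    print(round(basal, 3))
```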

  15. Hybrid simulation models for data-intensive systems

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00473067

    Data-intensive systems are used to access and store massive amounts of data by combining the storage resources of multiple data-centers, usually deployed all over the world, in one system. This enables users to utilize these massive storage capabilities in a simple and efficient way. However, with the growth of these systems it becomes a hard problem to estimate the effects of modifications to the system, such as data placement algorithms or hardware upgrades, and to validate these changes for potential side effects. This thesis addresses the modeling of operational data-intensive systems and presents a novel simulation model which estimates the performance of system operations. The running example used throughout this thesis is the data-intensive system Rucio, which is used as the data management system of the ATLAS experiment at CERN’s Large Hadron Collider. Existing system models in the literature are not applicable to data-intensive workflows, as they only consider computational workflows or make assumptions…

  16. Design process for applying the nonlocal thermal transport iSNB model to a Polar-Drive ICF simulation

    Science.gov (United States)

    Cao, Duc; Moses, Gregory; Delettrez, Jacques; Collins, Timothy

    2014-10-01

    A design process is presented for the nonlocal thermal transport iSNB (implicit Schurtz, Nicolai, and Busquet) model to provide reliable nonlocal thermal transport in polar-drive ICF simulations. Results from the iSNB model are known to be sensitive to changes in the SNB "mean free path" formula, and the latter's original form required modification to obtain realistic preheat levels. In the presented design process, SNB mean free paths are first modified until the model can match temperatures from Goncharov's thermal transport model in 1D temperature relaxation simulations. Afterwards the same mean free paths are tested in a 1D polar-drive surrogate simulation to match adiabats from Goncharov's model. After passing the two previous steps, the model can then be run in a full 2D polar-drive simulation. This research is supported by the University of Rochester Laboratory for Laser Energetics.

  17. San Onofre 2/3 simulator: The move from Unix to Windows

    International Nuclear Information System (INIS)

    Paquette, C.; Desouky, C.; Gagnon, V.

    2006-01-01

    CAE has been developing nuclear power plant (NPP) simulators for over 30 years for customers around the world. While numerous operating systems are used today for simulators, many of the existing simulators were developed to run on workstation-type computers using a variant of the Unix operating system. Today, thanks to the advances in the power and capabilities of Personal Computers (PCs), and because most simulators will eventually need to be upgraded, more and more of these RISC processor-based simulators will be converted to PC-based platforms running either the Windows or Linux operating systems. CAE's multi-platform simulation environment runs on the Unix, Linux and Windows operating systems, enabling simulators to be 'open' and highly interoperable systems using industry-standard software components and methods. The result is simulators that are easier to maintain and modify as reference plants evolve. In early January 2003, CAE set out to upgrade Southern California Edison's San Onofre Unit 2/3 Unix-based simulator with its latest integrated simulation environment. This environment includes CAE's instructor station Isis, the latest ROSE modeling and runtime tool, as well as the deployment of a new reactor kinetics model (COMET) and new nuclear steam supply system (ANTHEM2000). The chosen simulation platform is PC-based and runs the Windows XP operating system. The main features and achievements of the San Onofre 2/3 Simulator's modernization from RISC/Unix to Intel/Windows XP, running CAE's current simulation environment, are the subject of this paper. (author)

  18. Software development infrastructure for the HYBRID modeling and simulation project

    International Nuclear Information System (INIS)

    Epiney, Aaron S.; Kinoshita, Robert A.; Kim, Jong Suk; Rabiti, Cristian; Greenwood, M. Scott

    2016-01-01

    One of the goals of the HYBRID modeling and simulation project is to assess the economic viability of hybrid systems in a market that contains renewable energy sources like wind. The idea is that it is possible for the nuclear plant to sell non-electric energy cushions, which absorb (at least partially) the volatility introduced by the renewable energy sources. This system is currently modeled in the Modelica programming language. To assess the economics of the system, an optimization procedure tries to find the minimal cost of electricity production. The RAVEN code is used as a driver for the whole problem. It is assumed that at this stage, the HYBRID modeling and simulation framework can be classified as non-safety “research and development” software. The associated quality level is Quality Level 3 software. This imposes low requirements on quality control, testing and documentation. The quality level could change as the application development continues. Despite the low quality requirement level, a workflow for the HYBRID developers has been defined that includes a coding standard and some documentation and testing requirements. The repository performs automated unit testing of contributed models. The automated testing is achieved via an open-source Python script called BuildingsPy from Lawrence Berkeley National Lab. BuildingsPy runs Modelica simulation tests using Dymola in an automated manner, and generates and runs unit tests from Modelica scripts written by developers. In order to assure effective communication between the different national laboratories, a biweekly videoconference has been set up where developers can report their progress and issues. In addition, periodic face-to-face meetings are organized to discuss high-level strategy decisions with management. A second means of communication is the developer email list, to which everybody can send emails that will be received by the collective of the developers and managers.

  19. Software development infrastructure for the HYBRID modeling and simulation project

    Energy Technology Data Exchange (ETDEWEB)

    Epiney, Aaron S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kinoshita, Robert A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kim, Jong Suk [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Greenwood, M. Scott [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    One of the goals of the HYBRID modeling and simulation project is to assess the economic viability of hybrid systems in a market that contains renewable energy sources like wind. The idea is that it is possible for the nuclear plant to sell non-electric energy cushions, which absorb (at least partially) the volatility introduced by the renewable energy sources. This system is currently modeled in the Modelica programming language. To assess the economics of the system, an optimization procedure tries to find the minimal cost of electricity production. The RAVEN code is used as a driver for the whole problem. It is assumed that at this stage, the HYBRID modeling and simulation framework can be classified as non-safety “research and development” software. The associated quality level is Quality Level 3 software. This imposes low requirements on quality control, testing and documentation. The quality level could change as the application development continues. Despite the low quality requirement level, a workflow for the HYBRID developers has been defined that includes a coding standard and some documentation and testing requirements. The repository performs automated unit testing of contributed models. The automated testing is achieved via an open-source Python script called BuildingsPy from Lawrence Berkeley National Lab. BuildingsPy runs Modelica simulation tests using Dymola in an automated manner, and generates and runs unit tests from Modelica scripts written by developers. In order to assure effective communication between the different national laboratories, a biweekly videoconference has been set up where developers can report their progress and issues. In addition, periodic face-to-face meetings are organized to discuss high-level strategy decisions with management. A second means of communication is the developer email list, to which everybody can send emails that will be received by the collective of the developers and managers.

  20. CAISSON: Interconnect Network Simulator

    Science.gov (United States)

    Springer, Paul L.

    2006-01-01

    Cray response to HPCS initiative. Model future petaflop computer interconnect. Parallel discrete event simulation techniques for large scale network simulation. Built on WarpIV engine. Run on laptop and Altix 3000. Can be sized up to 1000 simulated nodes per host node. Good parallel scaling characteristics. Flexible: multiple injectors, arbitration strategies, queue iterators, network topologies.

  1. Model for radionuclide transport in running waters

    International Nuclear Information System (INIS)

    Jonsson, Karin; Elert, Mark

    2005-11-01

    Two sites in Sweden are currently under investigation by SKB for their suitability as places for a deep repository of radioactive waste, the Forsmark and Simpevarp/Laxemar areas. As part of the safety assessment, SKB has formulated a biosphere model with different sub-models for different parts of the ecosystem in order to be able to predict the dose to humans following a possible radionuclide discharge from a future deep repository. In this report, a new model concept describing radionuclide transport in streams is presented. The main difference from the previous model for running water used by SKB, where only dilution of the inflow of radionuclides was considered, is that the new model also includes parameterizations of the exchange processes present along the stream. This is done in order to investigate the effect of retention on the transport and to estimate the resulting concentrations in the different parts of the system. The concentrations determined with this new model could later be used for order-of-magnitude predictions of the dose to humans. The presented model concept is divided into two parts, a hydraulic model and a radionuclide transport model. The hydraulic model is used to determine the flow conditions in the stream channel and is based on the assumption of uniform flow and quasi-stationary conditions. The results from the hydraulic model are used in the radionuclide transport model, where the concentration is determined in the different parts of the stream ecosystem. The exchange processes considered are exchange with the sediments due to diffusion, advective transport and sedimentation/resuspension, and uptake of radionuclides in biota. Transport of radionuclides both dissolved and sorbed onto particulates is considered. Sorption kinetics in the stream water phase is implemented, as the residence time in the stream water is probably short in comparison to the time scale of the kinetic sorption. In the sediment

  2. Model for radionuclide transport in running waters

    Energy Technology Data Exchange (ETDEWEB)

    Jonsson, Karin; Elert, Mark [Kemakta Konsult AB, Stockholm (Sweden)

    2005-11-15

    Two sites in Sweden are currently under investigation by SKB for their suitability as places for a deep repository of radioactive waste, the Forsmark and Simpevarp/Laxemar areas. As part of the safety assessment, SKB has formulated a biosphere model with different sub-models for different parts of the ecosystem in order to be able to predict the dose to humans following a possible radionuclide discharge from a future deep repository. In this report, a new model concept describing radionuclide transport in streams is presented. The main difference from the previous model for running water used by SKB, where only dilution of the inflow of radionuclides was considered, is that the new model also includes parameterizations of the exchange processes present along the stream. This is done in order to investigate the effect of retention on the transport and to estimate the resulting concentrations in the different parts of the system. The concentrations determined with this new model could later be used for order-of-magnitude predictions of the dose to humans. The presented model concept is divided into two parts, a hydraulic model and a radionuclide transport model. The hydraulic model is used to determine the flow conditions in the stream channel and is based on the assumption of uniform flow and quasi-stationary conditions. The results from the hydraulic model are used in the radionuclide transport model, where the concentration is determined in the different parts of the stream ecosystem. The exchange processes considered are exchange with the sediments due to diffusion, advective transport and sedimentation/resuspension, and uptake of radionuclides in biota. Transport of radionuclides both dissolved and sorbed onto particulates is considered. Sorption kinetics in the stream water phase is implemented, as the residence time in the stream water is probably short in comparison to the time scale of the kinetic sorption. In the sediment

  3. Tsunami Simulators in Physical Modelling - Concept to Practical Solutions

    Science.gov (United States)

    Chandler, Ian; Allsop, William; Robinson, David; Rossetto, Tiziana; McGovern, David; Todd, David

    2017-04-01

    Whilst many researchers have conducted simple 'tsunami impact' studies, few engineering tools are available to assess the onshore impacts of tsunami, with no agreed methods available to predict loadings on coastal defences, buildings or related infrastructure. Most previous impact studies have relied upon unrealistic waveforms (solitary or dam-break waves and bores) rather than full-duration tsunami waves, or have used simplified models of nearshore and over-land flows. Over the last 10+ years, pneumatic Tsunami Simulators for the hydraulic laboratory have been developed into an exciting and versatile technology, allowing the forces of real-world tsunami to be reproduced and measured in a laboratory environment for the first time. These devices have been used to model generic elevated and N-wave tsunamis up to and over simple shorelines, and at example coastal defences and infrastructure. They have also reproduced full-duration tsunamis including Mercator 2004 and Tohoku 2011, both at 1:50 scale. Engineering scale models of these tsunamis have measured wave run-up on simple slopes, forces on idealised sea defences, pressures / forces on buildings, and scour at idealised buildings. This presentation will describe how these Tsunami Simulators work, demonstrate how they have generated tsunami waves longer than the facilities within which they operate, and will present research results from three generations of Tsunami Simulators. Highlights of direct importance to natural hazard modellers and coastal engineers include measurements of wave run-up levels, forces on single and multiple buildings, and comparison with previous theoretical predictions. Multiple buildings have two malign effects. The density of buildings relative to the flow area (blockage ratio) increases water depths and flow velocities in the 'streets'. The increased building densities themselves also increase the cost of flooding per unit area (both personal and monetary). The most recent study with the Tsunami

  4. Uncertainty Quantification given Discontinuous Climate Model Response and a Limited Number of Model Runs

    Science.gov (United States)

    Sargsyan, K.; Safta, C.; Debusschere, B.; Najm, H.

    2010-12-01

    Uncertainty quantification in complex climate models is challenged by the sparsity of available climate model predictions due to the high computational cost of model runs. Another feature that prevents classical uncertainty analysis from being readily applicable is bifurcative behavior in climate model response with respect to certain input parameters. A typical example is the Atlantic Meridional Overturning Circulation. The predicted maximum overturning stream function exhibits discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We outline a methodology for uncertainty quantification given discontinuous model response and a limited number of model runs. Our approach is two-fold. First we detect the discontinuity with Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve shape and location for arbitrarily distributed input parameter values. Then, we construct spectral representations of uncertainty, using Polynomial Chaos (PC) expansions on either side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification. The approach is enabled by a Rosenblatt transformation that maps each side of the discontinuity to regular domains where desirable orthogonality properties for the spectral bases hold. We obtain PC modes by either orthogonal projection or Bayesian inference, and argue for a hybrid approach that targets a balance between the accuracy provided by the orthogonal projection and the flexibility provided by the Bayesian inference - where the latter allows obtaining reasonable expansions without extra forward model runs. The model output, and its associated uncertainty at specific design points, are then computed by taking an ensemble average over PC expansions corresponding to possible realizations of the discontinuity curve. The methodology is tested on synthetic examples of
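
    In one dimension, the essence of the approach, fitting a separate smooth surrogate on each side of the detected discontinuity rather than one global expansion, can be sketched as follows. The bifurcating test function and the "detected" boundary location are assumptions, and plain polynomial fits stand in for the PC bases and Rosenblatt-transformed domains.

```python
import numpy as np

rng = np.random.default_rng(6)

def model(x):
    """Discontinuous stand-in response (e.g., an overturning-circulation proxy)."""
    return np.where(x < 0.4, 5.0 + x, 1.0 + 0.5 * x)

x_train = rng.uniform(0, 1, 30)                 # limited number of model runs
y_train = model(x_train)
boundary = 0.4                                  # here assumed known; in the paper
                                                # it comes from Bayesian detection
surrogate = {}
for side, mask in (("low", x_train < boundary), ("high", x_train >= boundary)):
    surrogate[side] = np.polynomial.Polynomial.fit(
        x_train[mask], y_train[mask], deg=3)    # smooth fit on one side only

def predict(x):
    """Evaluate the surrogate belonging to the side x falls on."""
    return surrogate["low" if x < boundary else "high"](x)

print(predict(0.39), predict(0.41))             # clean jump, no Gibbs overshoot
```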

  5. A PC-based discrete event simulation model of the civilian radioactive waste management system

    International Nuclear Information System (INIS)

    Airth, G.L.; Joy, D.S.; Nehls, J.W.

    1992-01-01

    This paper discusses a System Simulation Model which has been developed for the Department of Energy to simulate the movement of individual waste packages (spent fuel assemblies and fuel containers) through the Civilian Radioactive Waste Management System (CRWMS). A discrete event simulation language, GPSS/PC, which runs on an IBM/PC and operates under DOS 5.0, mathematically represents the movement and processing of radioactive waste packages through the CRWMS and the interaction of these packages with the equipment in the various facilities. The major features of the System Simulation Model are: the ability to reference characteristics of the different types of radioactive waste (age, burnup, etc.) in order to make operational and/or system design decisions, the ability to place stochastic variations on operational parameters such as processing time and equipment outages, and the ability to include a rigorous simulation of the transportation system. Output from the model includes the numbers, types, and characteristics of waste packages at selected points in the CRWMS and the extent to which various resources will be utilized in order to transport, process, and emplace the waste
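
    GPSS/PC is long obsolete, but the same discrete-event structure is easy to express in a modern tool. Below is a sketch in Python's SimPy, not the DOE model itself: waste packages arrive, queue for a shared processing facility, and move on to emplacement, with all rates and times invented for illustration.

```python
import random
import simpy

random.seed(7)

def package(env, facility, residence_times):
    """One waste package flowing through processing and emplacement."""
    arrived = env.now
    with facility.request() as req:                # queue for a processing cell
        yield req
        yield env.timeout(random.uniform(2, 6))    # stochastic processing time
    yield env.timeout(1.5)                         # transport to emplacement
    residence_times.append(env.now - arrived)

def arrivals(env, facility, residence_times):
    for _ in range(200):
        env.process(package(env, facility, residence_times))
        yield env.timeout(random.expovariate(1 / 3.0))  # mean 3 units between arrivals

env = simpy.Environment()
facility = simpy.Resource(env, capacity=2)         # two parallel processing cells
residence_times = []
env.process(arrivals(env, facility, residence_times))
env.run()
print("mean residence time:", sum(residence_times) / len(residence_times))
```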

  6. Interannual Tropical Rainfall Variability in General Circulation Model Simulations Associated with the Atmospheric Model Intercomparison Project.

    Science.gov (United States)

    Sperber, K. R.; Palmer, T. N.

    1996-11-01

    The interannual variability of rainfall over the Indian subcontinent, the African Sahel, and the Nordeste region of Brazil have been evaluated in 32 models for the period 1979-88 as part of the Atmospheric Model Intercomparison Project (AMIP). The interannual variations of Nordeste rainfall are the most readily captured, owing to the intimate link with Pacific and Atlantic sea surface temperatures. The precipitation variations over India and the Sahel are less well simulated. Additionally, an Indian monsoon wind shear index was calculated for each model. Evaluation of the interannual variability of a wind shear index over the summer monsoon region indicates that the models exhibit greater fidelity in capturing the large-scale dynamic fluctuations than the regional-scale rainfall variations. A rainfall/SST teleconnection quality control was used to objectively stratify model performance. Skill scores improved for those models that qualitatively simulated the observed rainfall/El Niño- Southern Oscillation SST correlation pattern. This subset of models also had a rainfall climatology that was in better agreement with observations, indicating a link between systematic model error and the ability to simulate interannual variations.A suite of six European Centre for Medium-Range Weather Forecasts (ECMWF) AMIP runs (differing only in their initial conditions) have also been examined. As observed, all-India rainfall was enhanced in 1988 relative to 1987 in each of these realizations. All-India rainfall variability during other years showed little or no predictability, possibly due to internal chaotic dynamics associated with intraseasonal monsoon fluctuations and/or unpredictable land surface process interactions. The interannual variations of Nordeste rainfall were best represented. The State University of New York at Albany/National Center for Atmospheric Research Genesis model was run in five initial condition realizations. In this model, the Nordeste rainfall

  7. A Run-Time Verification Framework for Smart Grid Applications Implemented on Simulation Frameworks

    Energy Technology Data Exchange (ETDEWEB)

    Ciraci, Selim; Sozer, Hasan; Tekinerdogan, Bedir

    2013-05-18

    Smart grid applications are implemented and tested with simulation frameworks, as the developers usually do not have access to large sensor networks to be used as a test bed. The developers are forced to map the implementation onto these frameworks, which results in a deviation between the architecture and the code. In turn, this deviation makes it hard to verify behavioral constraints that are described at the architectural level. We have developed the ConArch toolset to support the automated verification of architecture-level behavioral constraints. A key feature of ConArch is the programmable mapping from the architecture to the implementation. Here, developers implement queries to identify the points in the target program that correspond to architectural interactions. ConArch generates run-time observers that monitor the flow of execution between these points and verify whether this flow conforms to the behavioral constraints. We illustrate how the programmable mappings can be exploited for verifying behavioral constraints of a smart grid application that is implemented with two simulation frameworks.
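
    ConArch's mechanism, as described, has two ingredients: developer-written mappings that mark the implementation points realizing architectural interactions, and generated observers that check the observed execution flow against the constraints. A miniature Python sketch of the same idea (the decorator plays the role of a mapping and the transition table the role of a behavioral constraint; both are hypothetical illustrations, not ConArch's API):

    ```python
    # Architectural constraint: interactions must follow sense -> aggregate -> actuate.
    ALLOWED = {("sense", "aggregate"), ("aggregate", "actuate")}
    _last = [None]  # last observed architectural event

    def interaction(name):
        """Mapping: mark an implementation point that realizes an architectural event."""
        def wrap(fn):
            def inner(*args, **kwargs):
                # Observer: verify the observed flow against the constraint.
                if _last[0] is not None and (_last[0], name) not in ALLOWED:
                    raise AssertionError(f"constraint violated: {_last[0]} -> {name}")
                _last[0] = name
                return fn(*args, **kwargs)
            return inner
        return wrap

    @interaction("sense")
    def read_meter():
        return 42

    @interaction("aggregate")
    def aggregate(value):
        return value

    @interaction("actuate")
    def actuate(value):
        print("actuating with", value)

    actuate(aggregate(read_meter()))   # conforms: sense -> aggregate -> actuate
    ```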

  8. New constraints on the running-mass inflation model

    International Nuclear Information System (INIS)

    Covi, L.; Lyth, D.H.; Melchiorri, A.

    2002-10-01

    We evaluate new observational constraints on the two-parameter scale-dependent spectral index predicted by the running-mass inflation model by combining the latest cosmic microwave background (CMB) anisotropy measurements with the recent 2dFGRS data on the matter power spectrum, with Lyman α forest data and finally with theoretical constraints on the reionization redshift. We find that present data still allow significant scale-dependence of n, which occurs in a physically reasonable regime of parameter space. (orig.)
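
    For orientation, the generic two-parameter running parameterization behind such constraints is

    \[
    n(k) = n(k_*) + \frac{dn}{d\ln k}\,\ln\frac{k}{k_*},
    \qquad
    \mathcal{P}(k) = A_s \left(\frac{k}{k_*}\right)^{\,n(k_*)-1+\frac{1}{2}\frac{dn}{d\ln k}\ln(k/k_*)},
    \]

    where $k_*$ is a pivot scale; the running-mass model predicts a specific form for $n(k)$ in terms of its two model parameters rather than treating $n(k_*)$ and $dn/d\ln k$ as free constants.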

  9. The ATLAS Tau Trigger Performance during LHC Run 1 and Prospects for Run 2

    CERN Document Server

    Mitani, T; The ATLAS collaboration

    2016-01-01

    The ATLAS tau trigger is designed to select hadronic decays of tau leptons. The tau lepton plays an important role in Standard Model (SM) physics, such as in Higgs boson decays. It is also important in beyond-the-SM (BSM) scenarios, such as supersymmetry and exotic particles, as taus are often produced preferentially in these models. During the 2010-2012 LHC run (Run 1), the tau trigger operated successfully, which led to several rewarding results such as evidence for $H\rightarrow \tau\tau$. For the 2015 LHC run (Run 2), the LHC will be upgraded, and the number of overlapping interactions per bunch crossing (pile-up) is expected to increase by a factor of two. It will be challenging to control trigger rates while keeping interesting physics events. This paper summarizes the tau trigger performance in Run 1 and its prospects for Run 2.

  10. Models and simulations for the Danish cell project. Running PowerFactory with OPC and cell controller

    Energy Technology Data Exchange (ETDEWEB)

    Martensen, Nis; Troester, Eckehard [energynautics GmbH, Langen (Germany); Lund, Per [Energinet.dk, Fredericia (Denmark); Holland, Rod [Spirae Inc., Fort Collins, CO (United States)

    2009-07-01

    In emergency situations, the Cell Controller disconnects a distribution grid from the high-voltage network and controls the cell's island operation. The controller thus activates the existing local generation plants to improve the security of supply. The Cell Controller can operate the Cell as a Virtual Power Plant during normal grid-connected operation, thereby implementing an exemplary Smart Grid. Modeling and simulation work is presented. (orig.)

  11. A Compact Synchronous Cellular Model of Nonlinear Calcium Dynamics: Simulation and FPGA Synthesis Results.

    Science.gov (United States)

    Soleimani, Hamid; Drakakis, Emmanuel M

    2017-06-01

    Recent studies have demonstrated that calcium is a widespread intracellular ion that controls a wide range of temporal dynamics in the mammalian body. The simulation and validation of such studies using experimental data would benefit from a fast, large-scale simulation and modelling tool. This paper presents a compact and fully reconfigurable cellular calcium model capable of mimicking the Hopf bifurcation phenomenon and various nonlinear responses of biological calcium dynamics. The proposed cellular model is synthesized on a digital platform for a single unit and a network model. Hardware synthesis, physical implementation on FPGA, and theoretical analysis confirm that the proposed cellular model can mimic the biological calcium behaviors with considerably low hardware overhead. The approach has the potential to speed up large-scale simulations of slow intracellular dynamics by sharing more cellular units in real time. To this end, various networks constructed by pipelining 10 k to 40 k cellular calcium units are compared with an equivalent simulation run on a standard PC workstation. Results show that the cellular hardware model is, on average, 83 times faster than the CPU version.
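
    As a software stand-in for the kind of compact nonlinear unit the paper maps to hardware, the sketch below forward-Euler-integrates a classic two-variable oscillator (FitzHugh-Nagumo) that passes through a Hopf bifurcation as the drive I increases. It is not the authors' calcium model, just an illustration of the cell-unit idea that such hardware replicates thousands of times.

    ```python
    import numpy as np

    def simulate(I, a=0.7, b=0.8, eps=0.08, dt=0.01, steps=200_000):
        """Forward-Euler integration of a FitzHugh-Nagumo unit."""
        v, w = -1.0, 1.0
        vs = np.empty(steps)
        for k in range(steps):
            v += dt * (v - v**3 / 3.0 - w + I)
            w += dt * eps * (v + a - b * w)
            vs[k] = v
        return vs

    for I in (0.0, 0.5):                  # below / above the Hopf threshold
        tail = simulate(I)[-50_000:]      # discard the transient
        print(f"I={I}: steady oscillation amplitude ~ {tail.max() - tail.min():.2f}")
    ```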

  12. Evaluation of countermeasures for red light running by traffic simulator-based surrogate safety measures.

    Science.gov (United States)

    Lee, Changju; So, Jaehyun Jason; Ma, Jiaqi

    2018-01-02

    The conflicts among motorists entering a signalized intersection under the red light indication have become a national safety issue. Because of its sensitivity, efforts have been made to investigate the possible causes and the effectiveness of countermeasures using comparison sites and/or before-and-after studies. Nevertheless, these approaches are ineffective when comparison sites cannot be found, or crash data sets are not readily available or not reliable for statistical analysis. Considering the random nature of red light running (RLR) crashes, an approach that does not depend on such data availability is needed to evaluate the effectiveness of the countermeasures head to head. The aims of this research are to (1) review prior literature related to red light running and traffic safety models; (2) propose a practical methodology for evaluation of RLR countermeasures with a microscopic traffic simulation model and the surrogate safety assessment model (SSAM); (3) apply the proposed methodology to an actual signalized intersection in Virginia, with the most prevalent scenarios: increasing the yellow signal interval duration, installing an advance warning sign, and installing an RLR camera; and (4) analyze the relative effectiveness by RLR frequency and the number of conflicts (rear-end and crossing). All scenarios show a reduction in RLR frequency (-7.8, -45.5, and -52.4%, respectively), but only increasing the yellow signal interval duration results in a reduced total number of conflicts (-11.3%; a surrogate safety measure of possible RLR-related crashes). An RLR camera produces the greatest reduction (-60.9%) in crossing conflicts (a surrogate safety measure of possible angle crashes), whereas increasing the yellow signal interval duration results in only a 12.8% reduction of rear-end conflicts (a surrogate safety measure of possible rear-end crashes). Although increasing the yellow signal interval duration is advantageous because this reduces the total conflicts (a possibility of total

  13. A web-based, collaborative modeling, simulation, and parallel computing environment for electromechanical systems

    Directory of Open Access Journals (Sweden)

    Xiaoliang Yin

    2015-03-01

    A complex electromechanical system is usually composed of multiple components from different domains, including mechanical, electronic, hydraulic, control, and so on. Modeling and simulation of electromechanical systems on a unified platform is currently a research hotspot in systems engineering, and it is the development trend in the design of complex electromechanical systems. The unified modeling techniques and tools based on the Modelica language provide a satisfactory solution. To meet the requirements of collaborative modeling, simulation, and parallel computing for complex electromechanical systems based on Modelica, a general web-based modeling and simulation prototype environment, namely WebMWorks, is designed and implemented. Based on rich Internet application technologies, an interactive graphical user interface for modeling and post-processing in the web browser was implemented; with the collaborative design module, the environment supports top-down, concurrent modeling and team cooperation; additionally, a service-oriented architecture was applied to supply compiling and solving services that run on cloud-like servers, so the environment can manage and dispatch large-scale simulation tasks in parallel on multiple computing servers simultaneously. An engineering application, a pure electric vehicle, was tested on WebMWorks. The results of the simulation and parametric experiment demonstrate that the tested web-based environment can effectively shorten the design cycle of a complex electromechanical system.

  14. Fog Simulations Based on Multi-Model System: A Feasibility Study

    Science.gov (United States)

    Shi, Chune; Wang, Lei; Zhang, Hao; Zhang, Su; Deng, Xueliang; Li, Yaosun; Qiu, Mingyan

    2012-05-01

    Accurate forecasts of fog and visibility are very important to air and highway traffic and are still a big challenge. A 1D fog model (PAFOG) is coupled to MM5 by obtaining the initial and boundary conditions (IC/BC) and some other necessary input parameters from MM5. Thus, PAFOG can be run for any area of interest. On the other hand, MM5 itself can be used to simulate fog events over a large domain. This paper presents evaluations of the fog predictability of these two systems for December of 2006 and December of 2007, with nine regional fog events observed in a field experiment, as well as over a large domain in eastern China. Among the simulations of the nine fog events by the two systems, two cases were investigated in detail. Daily results of ground-level meteorology were validated against the routine observations at the CMA observational network. Daily fog occurrence for the two study periods was validated in Nanjing. The general performance of the two models for the nine fog cases is presented by comparison with routine and field observational data. The results of MM5 and PAFOG for two typical fog cases are verified in detail against field observations. The verifications demonstrated that all methods tended to overestimate fog occurrence, especially for near-fog cases. In terms of TS/ETS, the LWC-only threshold with MM5 showed the best performance, while PAFOG showed the worst. MM5 performed better for advection-radiation fog than for radiation fog, and PAFOG could be an alternative tool for forecasting radiation fogs. PAFOG did show advantages over MM5 on the fog dissipation time. The performance of PAFOG highly depended on the quality of the MM5 output. The sensitivity runs of PAFOG with different IC/BC showed the capability of using MM5 output to run the 1D model and the high sensitivity of PAFOG to cloud cover. Future work should intensify the study of how to improve the quality of input data (e.g. cloud cover, advection, large scale subsidence) for the 1D
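
    The TS/ETS scores used in this verification are simple functions of the 2x2 forecast/observation contingency table. For reference, a minimal computation (the counts below are hypothetical):

    ```python
    def ts_ets(a, b, c, d):
        """Threat Score and Equitable Threat Score from a 2x2 contingency table:
        a = hits, b = false alarms, c = misses, d = correct negatives."""
        n = a + b + c + d
        a_random = (a + b) * (a + c) / n          # hits expected by chance
        ts = a / (a + b + c)
        ets = (a - a_random) / (a + b + c - a_random)
        return ts, ets

    # Hypothetical counts for a month of daily fog forecasts:
    ts, ets = ts_ets(a=6, b=5, c=2, d=18)
    print(f"TS = {ts:.2f}, ETS = {ets:.2f}")
    ```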

  15. Atomistic computer simulations a practical guide

    CERN Document Server

    Brazdova, Veronika

    2013-01-01

    Many books explain the theory of atomistic computer simulations; this book teaches you how to run them. This introductory "how to" title enables readers to understand, plan, run, and analyze their own independent atomistic simulations, and decide which method to use and which questions to ask in their research project. It is written in a clear and precise language, focusing on a thorough understanding of the concepts behind the equations and how these are used in the simulations. As a result, readers will learn how to design the computational model and which parameters o

  16. Simulation of a Large Wildfire in a Coupled Fire-Atmosphere Model

    Directory of Open Access Journals (Sweden)

    Jean-Baptiste Filippi

    2018-06-01

    The Aullene fire devastated more than 3000 ha of Mediterranean maquis and pine forest in July 2009. The simulation of combustion processes as well as atmospheric dynamics represents a challenge for such scenarios because of the various scales involved, from the scale of the individual flames to the larger regional scale. A coupled approach between the Meso-NH (Meso-scale Non-Hydrostatic) atmospheric model running in LES (Large Eddy Simulation) mode and the ForeFire fire spread model is proposed for predicting fine- to large-scale effects of this extreme wildfire, showing that such simulation is possible in a reasonable time using current supercomputers. The coupling involves the surface wind to drive the fire, while heat from combustion and water vapor fluxes are injected into the atmosphere at each atmospheric time step. To be representative of the phenomenon, a sub-meter resolution was used for the simulation of the fire front, while atmospheric simulations were performed with nested grids from 2400-m to 50-m resolution. Simulations were run with or without feedback from the fire to the atmospheric model, or without coupling from the atmosphere to the fire. In the two-way mode, the burnt area was reproduced with a good degree of realism at the local scale, where an acceleration in the valley wind and over sloping terrain pushed the fire line to locations in accordance with fire passing point observations. At the regional scale, the simulated fire plume compares well with the satellite image. The study explores the strong fire-atmosphere interactions leading to intense convective updrafts extending above the boundary layer, significant downdrafts behind the fire line in the upper plume, and horizontal wind speeds feeding strong inflow into the base of the convective updrafts. The fire-induced dynamics is driven by strong near-surface sensible heat fluxes reaching maximum values of 240 kW m⁻². The dynamical production of turbulent kinetic energy
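
    The coupling cycle described above can be summarized schematically: each atmospheric step supplies the surface wind to the fire model, which is sub-cycled on its finer grid and returns heat and water-vapor fluxes for injection into the atmosphere. Both models below are trivial stubs (the real coupling is between Meso-NH and ForeFire), but the loop structure is the point:

    ```python
    def atmosphere_step(state, heat_flux, vapor_flux, dt):
        # Stub: fire heat flux strengthens the near-surface inflow/updraft.
        state["wind"] += dt * 1e-4 * heat_flux
        return state

    def fire_substep(front, wind, dt):
        # Stub rate-of-spread driven by the surface wind; returns fluxes.
        front["position"] += dt * (0.05 + 0.02 * wind)
        return front, 240.0, 0.1          # heat (kW/m2), water-vapor flux (arbitrary)

    atmo = {"wind": 3.0}
    fire = {"position": 0.0}
    DT_ATMO, N_SUB = 1.0, 10              # fire model runs on a finer time step
    for _ in range(100):
        heat = vapor = 0.0
        for _ in range(N_SUB):            # sub-cycle the fire within one atmo step
            fire, h, q = fire_substep(fire, atmo["wind"], DT_ATMO / N_SUB)
            heat += h / N_SUB
            vapor += q / N_SUB
        atmo = atmosphere_step(atmo, heat, vapor, DT_ATMO)  # two-way feedback
    print(fire, atmo)
    ```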

  17. Structural Uncertainty in Model-Simulated Trends of Global Gross Primary Production

    Directory of Open Access Journals (Sweden)

    Zaichun Zhu

    2013-03-01

    Projected changes in the frequency and severity of droughts as a result of increases in greenhouse gases have a significant impact on the role of vegetation in regulating the global carbon cycle. The drought effect on vegetation Gross Primary Production (GPP) is usually modeled as a function of Vapor Pressure Deficit (VPD) and/or soil moisture. Climate projections suggest a strong likelihood of an increasing trend in VPD, while regional changes in precipitation are less certain. This difference in projections between VPD and precipitation can cause considerable discrepancies in the predictions of vegetation behavior depending on how ecosystem models represent the drought effect. In this study, we scrutinized the model responses to drought using the 30-year record of the Global Inventory Modeling and Mapping Studies (GIMMS) 3g Normalized Difference Vegetation Index (NDVI) dataset. A diagnostic ecosystem model, the Terrestrial Observation and Prediction System (TOPS), was used to estimate global GPP from 1982 to 2009 under nine different experimental simulations. The control run of global GPP increased until 2000 but stayed constant after 2000. Among the simulations with a single climate constraint (temperature, VPD, rainfall, and solar radiation), only the VPD-driven simulation showed a decrease in the 2000s, while the other scenarios simulated an increase in GPP. The diverging responses in the 2000s can be attributed to the difference in the representation of the impact of water stress on vegetation in models, i.e., using VPD and/or precipitation. The spatial map of the trend in simulated GPP using GIMMS 3g data is more consistent with the GPP driven by soil moisture than with the GPP driven by VPD, confirming the need for a soil moisture constraint in modeling global GPP.
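
    The crux of the diverging trends is where the water-stress scalar comes from. In a light-use-efficiency sketch (the ramp forms and parameter values are illustrative, not those of TOPS), swapping the VPD scalar for a soil-moisture scalar changes the simulated response to the same forcing:

    ```python
    import numpy as np

    def ramp(x, x0, x1):
        """1 at no stress, 0 at full stress, linear in between."""
        return np.clip((x1 - x) / (x1 - x0), 0.0, 1.0)

    def gpp(apar, lue_max, vpd, soil_w, use_vpd=True, use_soil=True):
        """Light-use-efficiency GPP with optional VPD / soil-moisture down-regulation."""
        f_vpd = ramp(vpd, 650.0, 4000.0) if use_vpd else 1.0        # VPD in Pa
        f_sm = ramp(1.0 - soil_w, 0.0, 1.0) if use_soil else 1.0    # wetness in 0..1
        return apar * lue_max * f_vpd * f_sm

    # Same radiation; a drying year with high VPD: the two constraints diverge.
    print(gpp(10.0, 1.8, vpd=2500.0, soil_w=0.6, use_soil=False))   # VPD-only run
    print(gpp(10.0, 1.8, vpd=2500.0, soil_w=0.6, use_vpd=False))    # soil-only run
    ```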

  18. Network Simulation of Technical Architecture

    National Research Council Canada - National Science Library

    Cave, William

    1998-01-01

    ..., and development of the Army Battle Command System (ABCS). PSI delivered a hierarchical iconic modeling facility that can be used to structure and restructure both models and scenarios, interactively, while simulations are running...

  19. Non-linear belt transient analysis. A hybrid model for numerical belt conveyor simulation

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, A. [Scientific Solutions, Inc., Aurora, CO (United States)

    2008-07-01

    Frictional and rolling losses along a running conveyor are discussed due to their important influence on wave propagation during starting and stopping. Hybrid friction models allow belt rubber losses and material flexing to be included in the initial tension calculations prior to any dynamic analysis. Once running tensions are defined, a numerical integration method using non-linear stiffness gradients is used to generate transient forces during starting and stopping. A modified Euler integration technique is used to simulate the entire starting and stopping cycle in less than 0.1 seconds. The procedure enables a faster scrutiny of unforeseen conveyor design issues such as low belt tension zones and high forces at drives. (orig.)
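
    The integration scheme named above is the standard predictor-corrector (Heun) variant of Euler's method applied to an equation of motion with displacement-dependent stiffness. A one-element sketch (a single mass on a non-linear spring-damper, with an illustrative stiffness law, not the paper's belt model):

    ```python
    def accel(x, v, k0=1e6, k2=5e8, c=2e3, m=500.0, f_drive=0.0):
        """Acceleration of a mass on a non-linear spring-damper."""
        k = k0 + k2 * x * x                 # stiffness grows with strain (non-linear)
        return (f_drive - k * x - c * v) / m

    def modified_euler_step(x, v, dt, **p):
        a1 = accel(x, v, **p)               # predictor (plain Euler)
        xp, vp = x + dt * v, v + dt * a1
        a2 = accel(xp, vp, **p)             # corrector using the predicted state
        return x + dt * 0.5 * (v + vp), v + dt * 0.5 * (a1 + a2)

    x, v, dt = 0.0, 0.0, 1e-4
    for n in range(20_000):                 # 2 s of a starting transient
        f = min(1.0, n * dt / 1.0) * 5e4    # drive force ramped over 1 s
        x, v = modified_euler_step(x, v, dt, f_drive=f)
    print(f"displacement {x * 1e3:.2f} mm, velocity {v * 1e3:.2f} mm/s")
    ```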

  20. FINAL SIMULATION RESULTS FOR DEMONSTRATION CASE 1 AND 2

    Energy Technology Data Exchange (ETDEWEB)

    David Sloan; Woodrow Fiveland

    2003-10-15

    The goal of this DOE Vision-21 project work scope was to develop an integrated suite of software tools that could be used to simulate and visualize advanced plant concepts. Existing process simulation software did not meet the DOE's objective of "virtual simulation", which was needed to evaluate complex cycles. The overall intent of the DOE was to improve predictive tools for cycle analysis, and to improve the component models that are used in turn to simulate equipment in the cycle. Advanced component models are available; however, a generic coupling capability that would link the advanced component models to the cycle simulation software remained to be developed. In the current project, the coupling of the cycle analysis and cycle component simulation software was based on an existing suite of programs. The challenge was to develop a general-purpose software and communications link between the cycle analysis software Aspen Plus® (marketed by Aspen Technology, Inc.), and specialized component modeling packages, as exemplified by industrial proprietary codes (utilized by ALSTOM Power Inc.) and the FLUENT® computational fluid dynamics (CFD) code (provided by Fluent Inc). A software interface and controller, based on an open CAPE-OPEN standard, has been developed and extensively tested. Various test runs and demonstration cases have been utilized to confirm the viability and reliability of the software. ALSTOM Power was tasked with the responsibility to select and run two demonstration cases to test the software--(1) a conventional steam cycle (designated as Demonstration Case 1), and (2) a combined cycle test case (designated as Demonstration Case 2). Demonstration Case 1 is a 30 MWe coal-fired power plant for municipal electricity generation, while Demonstration Case 2 is a 270 MWe, natural gas-fired, combined cycle power plant. Sufficient data was available from the operation of both power plants to complete the cycle

  1. Quasi-decadal Oscillation in the CMIP5 and CMIP3 Climate Model Simulations: California Case

    Science.gov (United States)

    Wang, J.; Yin, H.; Reyes, E.; Chung, F. I.

    2014-12-01

    The ongoing three drought years in California are reminding us of two other historical long drought periods: 1987-1992 and 1928-1934. This kind of interannual variability corresponds to the dominant 7-15 yr quasi-decadal oscillation in precipitation and streamflow in California. When using global climate model projections to assess the climate change impact on water resources planning in California, it is natural to ask whether global climate models are able to reproduce observed interannual variability like the 7-15 yr quasi-decadal oscillation. Spectral analysis of tree-ring-reconstructed precipitation and the historical precipitation record confirms the existence of the 7-15 yr quasi-decadal oscillation in California. But when spectral analysis was applied to all the CMIP5 and CMIP3 global climate model historical simulations using a wavelet analysis approach, it was found that only two models in CMIP3, CGCM 2.3.2a of MRI and NCAR PCM1.0, and only two models in CMIP5, MIROC5 and CESM1-WACCM, have statistically significant 7-15 yr quasi-decadal oscillations in California. More interestingly, the existence of the 7-15 yr quasi-decadal oscillation in a global climate model simulation is also sensitive to initial conditions. A 12-13 yr quasi-decadal oscillation occurs in one ensemble run of CGCM 2.3.2a of MRI but does not exist in the other four ensemble runs.
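
    The band-significance test implied here can be sketched with a plain periodogram: sum the spectral power in the 7-15 yr band and compare it against red-noise (AR(1)) surrogates. The series below is synthetic, and the study itself used wavelet analysis, but the logic is the same:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    years = np.arange(1900, 2014)
    series = np.sin(2 * np.pi * years / 12.0) + rng.standard_normal(years.size)

    def band_power(x, lo_per=7.0, hi_per=15.0):
        """Fraction of periodogram power in the lo_per..hi_per (years) band."""
        x = x - x.mean()
        freqs = np.fft.rfftfreq(x.size, d=1.0)       # cycles per year
        power = np.abs(np.fft.rfft(x)) ** 2
        band = np.zeros(freqs.size, dtype=bool)
        pos = freqs > 0
        band[pos] = (1.0 / freqs[pos] >= lo_per) & (1.0 / freqs[pos] <= hi_per)
        return power[band].sum() / power[pos].sum()

    obs = band_power(series)
    # AR(1) red-noise surrogates for a crude significance check
    r = np.corrcoef(series[:-1], series[1:])[0, 1]
    surr = []
    for _ in range(500):
        e = rng.standard_normal(series.size)
        s = np.empty_like(e)
        s[0] = e[0]
        for t in range(1, e.size):
            s[t] = r * s[t - 1] + e[t]
        surr.append(band_power(s))
    print(f"band power {obs:.2f}, 95th pct of red noise {np.quantile(surr, 0.95):.2f}")
    ```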

  2. Results from pion calibration runs for the H1 liquid argon calorimeter and comparisons with simulations

    International Nuclear Information System (INIS)

    Andrieu, B.; Ban, J.; Barrelet, E.; Bergstein, H.; Bernardi, G.; Besancon, M.; Binder, E.; Blume, H.; Borras, K.; Boudry, V.; Brasse, F.; Braunschweig, W.; Brisson, V.; Campbell, A.J.; Carli, T.; Colombo, M.; Coutures, C.; Cozzika, G.; David, M.; Delcourt, B.; DelBuono, L.; Devel, M.; Dingus, P.; Drescher, A.; Duboc, J.; Duenger, O.; Ebbinghaus, R.; Egli, S.; Ellis, N.N.; Feltesse, J.; Feng, Y.; Ferrarotto, F.; Flauger, W.; Flieser, M.; Gamerdinger, K.; Gayler, J.; Godfrey, L.; Goerlich, L.; Goldberg, M.; Graessler, R.; Greenshaw, T.; Greif, H.; Haguenauer, M.; Hajduk, L.; Hamon, O.; Hartz, P.; Haustein, V.; Haydar, R.; Hildesheim, W.; Huot, N.; Jabiol, M.A.; Jacholkowska, A.; Jaffre, M.; Jung, H.; Just, F.; Kiesling, C.; Kirchhoff, T.; Kole, F.; Korbel, V.; Korn, M.; Krasny, W.; Kubenka, J.P.; Kuester, H.; Kurzhoefer, J.; Kuznik, B.; Lander, R.; Laporte, J.F.; Lenhardt, U.; Loch, P.; Lueers, D.; Marks, J.; Martyniak, J.; Merz, T.; Naroska, B.; Nau, A.; Nguyen, H.K.; Niebergall, F.; Oberlack, H.; Obrock, U.; Ould-Saada, F.; Pascaud, C.; Pyo, H.B.; Rauschnabel, K.; Ribarics, P.; Rietz, M.; Royon, C.; Rusinov, V.; Sahlmann, N.; Sanchez, E.; Schacht, P.; Schleper, P.; Schlippe, W. von; Schmidt, C.; Schmidt, D.; Shekelyan, V.; Shooshtari, H.; Sirois, Y.; Staroba, P.; Steenbock, M.; Steiner, H.; Stella, B.; Straumann, U.; Turnau, J.; Tutas, J.; Urban, L.; Vallee, C.; Vecko, M.; Verrecchia, P.; Villet, G.; Vogel, E.; Wagener, A.; Wegener, D.; Wegner, A.; Wellisch, H.P.; Yiou, T.P.; Zacek, J.; Zeitnitz, Ch.; Zomer, F.

    1993-01-01

    We present results on calibration runs performed with pions at CERN SPS for different modules of the H1 liquid argon calorimeter which consists of an electromagnetic section with lead absorbers and a hadronic section with steel absorbers. The data cover an energy range from 3.7 to 205 GeV. Detailed comparisons of the data and simulation with GHEISHA 8 in the framework of GEANT 3.14 are presented. The measured pion induced shower profiles are well described by the simulation. The total signal of pions on an energy scale determined from electron measurements is reproduced to better than 3% in various module configurations. After application of weighting functions, determined from Monte Carlo data and needed to achieve compensation, the reconstructed measured energies agree with simulation to about 3%. The energies of hadronic showers are reconstructed with a resolution of about 50%/√E + 2%. This result is achieved by inclusion of signals from an iron streamer tube tail catcher behind the liquid argon stacks. (orig.)

  3. The ACPI Climate Change Simulations

    International Nuclear Information System (INIS)

    Dai, A.; Washington, W.M.; Meehl, G.A.; Bettge, T.W.; Strand, W.G.

    2004-01-01

    The Parallel Climate Model (PCM) has been used in the Accelerated Climate Prediction Initiative (ACPI) Program to simulate the global climate response to projected CO2, sulfate, and other greenhouse gas forcing under a business-as-usual emissions scenario during the 21st century. In these runs, the oceans were initialized to 1995 conditions by a group from the Scripps Institution of Oceanography and other institutions. An ensemble of three model runs was then carried out to the year 2099 using the projected forcing. Atmospheric data from these runs were saved at 6-hourly intervals (hourly for certain critical fields) to support the ACPI objective of accurately modeling hydrological cycles over the western U.S. It is shown that the initialization to 1995 conditions partly removes the unforced oceanic temperature and salinity drifts that occurred in the standard 20th century integration. The ACPI runs show a global surface temperature increase of 3-8°C over northern high latitudes by the end of the 21st century, and 1-2°C over the oceans. This is generally within ±0.1°C of model runs without the 1995 ocean initialization. The exception is in the Antarctic circumpolar ocean, where surface air temperature is cooler in the ACPI run; however, the ensemble scatter is large in this region. Although the difference in climate at the end of the 21st century is minimal between the ACPI runs and traditionally spun-up runs, it might be larger for CGCMs with higher climate sensitivity or larger ocean drifts. Our results suggest that the effect of small errors in the oceans (such as those associated with climate drifts) on CGCM-simulated climate changes for the next 50-100 years may be negligible.

  4. Methodology for the LABIHS PWR simulator modernization

    Energy Technology Data Exchange (ETDEWEB)

    Jaime, Guilherme D.G.; Oliveira, Mauro V., E-mail: gdjaime@ien.gov.b, E-mail: mvitor@ien.gov.b [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2011-07-01

    The Human-System Interface Laboratory (LABIHS) simulator is composed of a set of advanced hardware and software components whose goal is to simulate the main characteristics of a Pressurized Water Reactor (PWR). This simulator serves a set of purposes, such as: control room modernization projects; design of operator aiding systems; providing technological expertise for the design of graphical user interfaces (GUIs); evaluation of control rooms and interfaces considering both ergonomics and human factors aspects; analysis of the interaction between operators and the various systems operated by them; and human reliability analysis in scenarios considering simulated accidents and normal operation. The simulator runs on a PA-RISC architecture server (HPC3700), developed around 2000, using the HP-UX operating system. All mathematical modeling components were written in the HP Fortran-77 programming language, with a shared memory used to exchange data from/to all simulator modules. Although this hardware/software framework was discontinued in 2008, with customer support ceasing in 2013, it is still used to run and operate the simulator. Because the simulator is based on an obsolete and proprietary appliance, the laboratory is subject to efficiency and availability issues, such as: downtime caused by hardware failures; inability to run experiments on modern and well-known architectures; and lack of choice of running multiple simulation instances simultaneously. Thus, there is a need for a proposal and implementation of solutions so that: the simulator can be ported to the Linux operating system, running on the x86 instruction set architecture (i.e. personal computers); multiple instances of the simulator can run simultaneously; and the operator terminals can run remotely. This paper deals with the design stage of the simulator modernization, in which a thorough inspection of the hardware and software currently in operation is performed. Our goal is to

  6. Methodology for the LABIHS PWR simulator modernization

    International Nuclear Information System (INIS)

    Jaime, Guilherme D.G.; Oliveira, Mauro V.

    2011-01-01

    The Human-System Interface Laboratory (LABIHS) simulator is composed of a set of advanced hardware and software components whose goal is to simulate the main characteristics of a Pressurized Water Reactor (PWR). This simulator serves a set of purposes, such as: control room modernization projects; design of operator aiding systems; providing technological expertise for the design of graphical user interfaces (GUIs); evaluation of control rooms and interfaces considering both ergonomics and human factors aspects; analysis of the interaction between operators and the various systems operated by them; and human reliability analysis in scenarios considering simulated accidents and normal operation. The simulator runs on a PA-RISC architecture server (HPC3700), developed around 2000, using the HP-UX operating system. All mathematical modeling components were written in the HP Fortran-77 programming language, with a shared memory used to exchange data from/to all simulator modules. Although this hardware/software framework was discontinued in 2008, with customer support ceasing in 2013, it is still used to run and operate the simulator. Because the simulator is based on an obsolete and proprietary appliance, the laboratory is subject to efficiency and availability issues, such as: downtime caused by hardware failures; inability to run experiments on modern and well-known architectures; and lack of choice of running multiple simulation instances simultaneously. Thus, there is a need for a proposal and implementation of solutions so that: the simulator can be ported to the Linux operating system, running on the x86 instruction set architecture (i.e. personal computers); multiple instances of the simulator can run simultaneously; and the operator terminals can run remotely. This paper deals with the design stage of the simulator modernization, in which a thorough inspection of the hardware and software currently in operation is performed. Our goal is to

  7. Monte Carlo simulation of a statistical mechanical model of multiple protein sequence alignment.

    Science.gov (United States)

    Kinjo, Akira R

    2017-01-01

    A grand canonical Monte Carlo (MC) algorithm is presented for studying the lattice gas model (LGM) of multiple protein sequence alignment, which coherently combines long-range interactions and variable-length insertions. MC simulations are used for both parameter optimization of the model and production runs to explore the sequence subspace around a given protein family. In this Note, I describe the details of the MC algorithm as well as some preliminary results of MC simulations with various temperatures and chemical potentials, and compare them with the mean-field approximation. The existence of a two-state transition in the sequence space is suggested for the SH3 domain family, and inappropriateness of the mean-field approximation for the LGM is demonstrated.
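
    The grand canonical move set pairs energy changes with a chemical potential term controlling sequence length. A miniature Metropolis sketch (the energy function is a random stand-in, not the lattice-gas alignment Hamiltonian, and proposal-ratio corrections for the insert/delete asymmetry are omitted for brevity):

    ```python
    import math
    import random

    random.seed(7)
    AA = "ACDEFGHIKLMNPQRSTVWY"
    T, MU, MAX_LEN = 1.0, -0.1, 60     # temperature, chemical potential, site cap
    state = [random.choice(AA) for _ in range(30)]

    def energy(seq):
        """Hypothetical stand-in: favor hydrophobic pairs at sequence separation 3."""
        hydro = set("AILMFWVY")
        return -sum(1.0 for i in range(len(seq) - 3)
                    if seq[i] in hydro and seq[i + 3] in hydro)

    for step in range(20_000):
        trial = list(state)
        if (random.random() < 0.5 or len(trial) >= MAX_LEN) and len(trial) > 1:
            del trial[random.randrange(len(trial))]    # delete a residue, dN = -1
            dn = -1
        else:
            trial.insert(random.randrange(len(trial) + 1), random.choice(AA))
            dn = +1                                    # insert a residue, dN = +1
        d_e = energy(trial) - energy(state)
        # Grand canonical Metropolis rule: weight exp(-(dE - mu*dN)/T)
        if random.random() < math.exp(min(0.0, -(d_e - MU * dn) / T)):
            state = trial

    print("final length:", len(state), " energy:", energy(state))
    ```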

  8. Discrete event simulation model for external yard choice of import container terminal in a port buffer area

    Science.gov (United States)

    Rusgiyarto, Ferry; Sjafruddin, Ade; Frazila, Russ Bona; Suprayogi

    2017-06-01

    Increasing container traffic and land acquisition problems for terminal expansion lead to the use of an external yard in a port buffer area. This condition affects terminal performance because the road connecting the terminal and the external yard is also used by non-container traffic. A location choice problem is considered to address this condition, but previous research has not yet taken into account stochastic container arrival rates and service times. A bi-level programming framework is used to find the optimum location configuration. In the lower level, there is a difficulty in constructing an equation that couples terminal operation and road traffic, because the two reach equilibrium on different time cycles. Containers move from the quay to the terminal gate on a daily time scale, whereas they move from the terminal gate to the external yard along the road on a time scale of minutes. If the equation is formulated as an hourly equilibrium, it cannot capture the container movement characteristics in the terminal; if it is formulated as a daily equilibrium, it cannot capture the traffic characteristics on the road. This problem can be addressed using a simulation model. A Discrete Event Simulation Model was used to simulate the import container flow processes in the container terminal and the external yard. The optimum location configuration in the upper level is a combinatorial problem, which was solved by a full enumeration approach (see the sketch below). The objective function of the external yard location model is to minimize user transport cost (or time) and to maximize operator benefit. A numerical experiment was run for a scenario with two container handling modes, three external yards, and a thirty-day simulation period. Container characteristics data from the Jakarta International Container Terminal (JICT) were used for the simulation. Based on five runs, of 5, 10, 15, 20, and 30 repetitions, operation one of three available external yards (external yard
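
    The upper-level full enumeration referenced above simply scores every open/closed combination of the three candidate yards, with each score coming from a lower-level simulation run. A sketch with placeholder numbers standing in for the simulation output:

    ```python
    from itertools import product

    # Hypothetical per-yard figures standing in for lower-level simulation output:
    USER_COST = {1: 120.0, 2: 95.0, 3: 140.0}      # user transport cost if yard open
    OPER_BENEFIT = {1: 80.0, 2: 60.0, 3: 110.0}    # operator benefit if yard open
    BASE_COST = 400.0                               # congestion cost with no yards

    def objective(config):
        """Score one open/closed configuration (lower cost minus benefit is better)."""
        open_yards = [y for y, is_open in zip((1, 2, 3), config) if is_open]
        if not open_yards:
            return BASE_COST
        return (sum(USER_COST[y] for y in open_yards) / len(open_yards)
                - sum(OPER_BENEFIT[y] for y in open_yards))

    # Full enumeration: 2^3 = 8 candidate configurations.
    best = min(product((0, 1), repeat=3), key=objective)
    print("best (yard1, yard2, yard3):", best, " score:", objective(best))
    ```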

  9. Wind Turbine Generator Modeling and Simulation Where Rotational Speed is the Controlled Variable

    DEFF Research Database (Denmark)

    Mihet-Popa, Lucian; Blaabjerg, Frede; Boldea, Ion

    2004-01-01

    To optimise the power produced in a wind turbine, the speed of the turbine should vary with the wind speed. A simple control method is proposed that will allow an induction machine to run a turbine at its maximum power coefficient. Various types of power control strategies have been suggested for application in variable speed wind turbines. The usual strategy is to control the power or the torque acting on the wind turbine shafts. This paper presents an alternative control strategy, where the rotational speed is the controlled variable. The paper describes a model, which is being developed to simulate the interaction between a wind turbine and the power system. The model is intended to simulate the behaviour of the wind turbine using induction generators both during normal operation. Sample simulation results for two induction generators (2/0.5 MW) validate the fundamental issues.
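
    At its core, running the turbine at its maximum power coefficient means tracking the optimal tip-speed ratio: for each wind speed v, command the rotor speed omega* = lambda_opt * v / R so that Cp stays at its peak. A sketch with generic constants (not the 2 MW / 0.5 MW machines of the paper):

    ```python
    import math

    RHO, R = 1.225, 40.0                 # air density (kg/m3), rotor radius (m)
    LAMBDA_OPT, CP_MAX = 8.0, 0.48       # peak of a generic Cp(lambda) curve

    def speed_reference(v_wind):
        """Rotor speed set-point (rad/s) holding the tip-speed ratio at optimum."""
        return LAMBDA_OPT * v_wind / R

    def power_at_optimum(v_wind):
        """Aerodynamic power captured while tracking lambda_opt (W)."""
        area = math.pi * R**2
        return 0.5 * RHO * area * CP_MAX * v_wind**3

    for v in (5.0, 8.0, 11.0):
        print(f"v={v:4.1f} m/s  omega*={speed_reference(v):.2f} rad/s  "
              f"P={power_at_optimum(v) / 1e6:.2f} MW")
    ```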

  10. Description of the grout system dynamic simulation

    International Nuclear Information System (INIS)

    Zimmerman, B.D.

    1993-07-01

    The grout system dynamic computer simulation was created to allow investigation of the ability of the grouting system to meet established milestones for various assumed system configurations and parameters. The simulation models the movement of tank waste through the system over time, from initial storage tanks, through feed tanks and the grout plant, and finally to a grout vault. The simulation properly accounts for: (1) the time required to perform various actions or processes, (2) delays involved in gaining regulatory approval, (3) random system component failures, (4) limitations on equipment capacities, (5) available parallel components, and (6) different possible strategies for vault filling. The user can set a variety of system parameters for each simulation run. Currently, the output of a run consists primarily of a plot of projected grouting campaigns completed versus time, for comparison with milestones. Other outputs involving any model component can also be quickly created or deleted as desired. In particular, sensitivity runs, in which the effect of varying a model parameter (flow rates, delay times, number of feed tanks available, etc.) on the ability of the system to meet milestones is examined, can be made easily. The grout system simulation was implemented using the ITHINK* simulation language for Macintosh** computers

  11. Development of a Computational Simulation Model for Conflict Management in Team Building

    Directory of Open Access Journals (Sweden)

    W. M. Wang

    2011-05-01

    Conflict management is one of the most important issues in leveraging organizational competitiveness. However, the theories and models that social scientists have traditionally built in this area are mostly expressed in words and diagrams, which are insufficient. Social science research based on computational modeling and simulation is beginning to augment traditional theory building. Simulation provides a method for people to try their actions out in a way that is cost effective, fast, appropriate, flexible, and ethical. In this paper, a computational simulation model for conflict management in team building is presented. The model is designed and used to explore the individual performances related to the combination of individuals who have a range of conflict handling styles, under various types of resources and policies. The model is developed based on an agent-based modeling method. Each of the agents has one of five conflict handling styles: accommodation, compromise, competition, contingency, and learning. There are three types of scenarios: normal, convex, and concave. There are two types of policies: no policy, and a reward and punishment policy. Results from running the model are also presented. The simulation has led us to derive two implications concerning conflict management. First, a concave type of resource promotes competition, while a convex type of resource promotes compromise and collaboration. Second, the performance ranking of different styles can be influenced by introducing different policies.

  12. Modeling and simulation of dust behaviors behind a moving vehicle

    Science.gov (United States)

    Wang, Jingfang

    Simulation of physically realistic complex dust behaviors is a difficult and attractive problem in computer graphics. A fast, interactive, and visually convincing model of dust behaviors behind moving vehicles is very useful in computer simulation, training, education, art, advertising, and entertainment. In my dissertation, an experimental interactive system has been implemented for the simulation of dust behaviors behind moving vehicles. The system includes physically-based models, particle systems, rendering engines, and a graphical user interface (GUI). I have employed several vehicle models, including tanks, cars, and jeeps, to test and simulate different scenarios and conditions. Calm weather, windy conditions, the vehicle turning left or right, and vehicle simulation controlled by users from the GUI are all included. I have also tested the factors which play against the physical behaviors and graphical appearances of the dust particles through the GUI or off-line scripts. The simulations are done on a Silicon Graphics Octane station. The animation of dust behaviors is achieved by physically-based modeling and simulation. The flow around a moving vehicle is modeled using computational fluid dynamics (CFD) techniques. I implement a primitive-variable, pressure-correction approach to solve the three-dimensional incompressible Navier-Stokes equations in a volume covering the moving vehicle. An alternating-direction implicit (ADI) method is used for the solution of the momentum equations, with a successive over-relaxation (SOR) method for the solution of the Poisson pressure equation. Boundary conditions are defined and simplified according to their dynamic properties. The dust particle dynamics is modeled using particle systems, statistics, and procedural modeling techniques. Graphics and real-time simulation techniques, such as dynamics synchronization, motion blur, blending, and clipping, have been employed in the rendering to achieve realistic-appearing dust.
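
    The pressure-correction step mentioned above requires solving a Poisson equation for pressure at every time step, and SOR is the classic iterative choice. A minimal 2-D version on a uniform grid with homogeneous Dirichlet boundaries (the dissertation solves the 3-D case around the vehicle):

    ```python
    import numpy as np

    def sor_poisson(rhs, h, omega=1.7, tol=1e-6, max_iter=10_000):
        """Solve laplacian(p) = rhs by successive over-relaxation."""
        p = np.zeros_like(rhs)
        for it in range(max_iter):
            max_diff = 0.0
            for i in range(1, rhs.shape[0] - 1):
                for j in range(1, rhs.shape[1] - 1):
                    gs = 0.25 * (p[i + 1, j] + p[i - 1, j] + p[i, j + 1]
                                 + p[i, j - 1] - h * h * rhs[i, j])  # Gauss-Seidel value
                    diff = gs - p[i, j]
                    p[i, j] += omega * diff                          # over-relaxed update
                    max_diff = max(max_diff, abs(diff))
            if max_diff < tol:
                return p, it
        return p, max_iter

    n = 33
    rhs = np.zeros((n, n))
    rhs[n // 2, n // 2] = 1.0                    # point source in the interior
    p, iters = sor_poisson(rhs, h=1.0 / (n - 1))
    print(f"converged in {iters} iterations, p_min = {p.min():.4f}")
    ```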

  13. Statistical Emulation of Climate Model Projections Based on Precomputed GCM Runs*

    KAUST Repository

    Castruccio, Stefano; McInerney, David J.; Stein, Michael L.; Liu Crouch, Feifei; Jacob, Robert L.; Moyer, Elisabeth J.

    2014-01-01

    functions of the past trajectory of atmospheric CO2 concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures

  14. Algebraic modeling and thermodynamic design of fan-supplied tube-fin evaporators running under frosting conditions

    International Nuclear Information System (INIS)

    Ribeiro, Rafael S.; Hermes, Christian J.L.

    2014-01-01

    In this study, the method of entropy generation minimization (i.e., design aimed at facilitating heat, mass, and fluid flows) is used to assess the evaporator design (aspect ratio and fin density), considering the thermodynamic losses due to heat and mass transfer and viscous flow processes. A fully algebraic model was put forward to simulate the thermal-hydraulic behavior of tube-fin evaporator coils running under frosting conditions. The model predictions were validated against experimental data, showing good agreement between calculated and measured counterparts. The optimization exercise pointed out that high-aspect-ratio heat exchanger designs lead to lower entropy generation in cases of fixed cooling capacity and air flow rate constrained by the characteristic curve of the fan. - Highlights: • An algebraic model for frost accumulation on tube-fin heat exchangers was advanced. • Model predictions for cooling capacity and air flow rate were compared with experimental data, with errors within a ±5% band. • The minimum entropy generation criterion was used to optimize the evaporator geometry. • Thermodynamic analysis led to slender designs for fixed cooling capacity and fan characteristics
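
    The shape of the trade-off can be seen in a deliberately crude one-variable toy: a transfer-irreversibility term that falls with aspect ratio against a viscous term that grows with it, with made-up coefficients (this is not the paper's algebraic model). Minimizing their sum produces an interior optimum, which is the mechanism behind the slender designs reported above:

    ```python
    import numpy as np

    a, b = 4.0, 0.05                       # hypothetical loss coefficients (W/K)
    ar = np.linspace(0.5, 20.0, 400)       # candidate aspect ratios
    s_gen = a / ar + b * ar                # transfer term + viscous term
    print(f"optimal AR ~ {ar[np.argmin(s_gen)]:.2f}, "
          f"analytic sqrt(a/b) = {np.sqrt(a / b):.2f}")
    ```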

  15. Mars Tumbleweed Simulation Using Singular Perturbation Theory

    Science.gov (United States)

    Raiszadeh, Behzad; Calhoun, Phillip

    2005-01-01

    The Mars Tumbleweed is a new surface rover concept that utilizes Martian winds as the primary source of mobility. Several designs have been proposed for the Mars Tumbleweed, all using aerodynamic drag to generate force for traveling about the surface. The Mars Tumbleweed, in its deployed configuration, must be large and lightweight to provide the ratio of drag force to rolling resistance necessary to initiate motion from the Martian surface. This paper discusses the dynamic simulation details of a candidate Tumbleweed design. The dynamic simulation model must properly evaluate and characterize the motion of the tumbleweed rover to support proper selection of system design parameters. Several factors, such as model flexibility, simulation run times, and model accuracy needed to be considered in modeling assumptions. The simulation was required to address the flexibility of the rover and its interaction with the ground, and properly evaluate its mobility. Proper assumptions needed to be made such that the simulated dynamic motion is accurate and realistic while not overly burdened by long simulation run times. This paper also shows results that provided reasonable correlation between the simulation and a drop/roll test of a tumbleweed prototype.

  16. Cloud-enabled large-scale land surface model simulations with the NASA Land Information System

    Science.gov (United States)

    Duffy, D.; Vaughan, G.; Clark, M. P.; Peters-Lidard, C. D.; Nijssen, B.; Nearing, G. S.; Rheingrover, S.; Kumar, S.; Geiger, J. V.

    2017-12-01

    Developed by the Hydrological Sciences Laboratory at NASA Goddard Space Flight Center (GSFC), the Land Information System (LIS) is a high-performance software framework for terrestrial hydrology modeling and data assimilation. LIS provides the ability to integrate satellite and ground-based observational products and advanced modeling algorithms to extract land surface states and fluxes. Through a partnership with the National Center for Atmospheric Research (NCAR) and the University of Washington, the LIS model is currently being extended to include the Structure for Unifying Multiple Modeling Alternatives (SUMMA). With the addition of SUMMA in LIS, meaningful simulations containing a large multi-model ensemble will be enabled and can provide advanced probabilistic continental-domain modeling capabilities at spatial scales relevant for water managers. The resulting LIS/SUMMA application framework is difficult for non-experts to install due to the large number of dependencies on specific versions of operating systems, libraries, and compilers. This has created a significant barrier to entry for domain scientists who are interested in using the software on their own systems or in the cloud. In addition, the requirement to support multiple run-time environments across the LIS community has created a significant burden on the NASA team. To overcome these challenges, LIS/SUMMA has been deployed using Linux containers, which allow an entire software package, along with all dependencies, to be installed within a working runtime environment, and Kubernetes, which orchestrates the deployment of a cluster of containers. Within a cloud environment, users can now easily create a cluster of virtual machines and run large-scale LIS/SUMMA simulations. Installations that took weeks or months can now be performed in minutes. This presentation will discuss the steps required to create a cloud-enabled large-scale simulation, present examples of its use, and

  17. Forecasting Lightning Threat using Cloud-resolving Model Simulations

    Science.gov (United States)

    McCaul, E. W., Jr.; Goodman, S. J.; LaCasse, K. M.; Cecil, D. J.

    2009-01-01

    quantitatively realistic fields of lightning threat. However, because models tend to have more difficulty in correctly predicting the instantaneous placement of storms, forecasts of the detailed location of the lightning threat based on single simulations can be in error. Although these model shortcomings presently limit the precision of lightning threat forecasts from individual runs of current generation models, the techniques proposed herein should continue to be applicable as newer and more accurate physically-based model versions, physical parameterizations, initialization techniques and ensembles of cloud-allowing forecasts become available.

  18. MATHEMATICAL MODEL FOR THE SIMULATION OF WATER QUALITY IN RIVERS USING THE VENSIM PLE® SOFTWARE

    Directory of Open Access Journals (Sweden)

    Julio Cesar de S. I. Gonçalves

    2013-06-01

    Mathematical modeling of water quality in rivers is an important tool for the planning and management of water resources. Nevertheless, the available models frequently show structural and functional limitations. With the objective of reducing these drawbacks, a new model has been developed to simulate water quality in rivers under unsteady conditions; this model runs on the Vensim PLE® software and can also be operated under steady-state conditions. The following eighteen water quality variables can be simulated: DO, BODc, organic nitrogen (No), ammonia nitrogen (Na), nitrite (Ni), nitrate (Nn), organic and inorganic phosphorus (Fo and Fi, respectively), inorganic solids (Si), phytoplankton (F), zooplankton (Z), bottom algae (A), detritus (D), total coliforms (TC), alkalinity (Al), total inorganic carbon (TIC), pH, and temperature (T). Methane, as well as the nitrogen and phosphorus compounds present in the aerobic and anaerobic layers of the sediment, can also be simulated. Several scenarios were simulated with the new model and compared against the QUAL2K program and, when possible, analytical solutions. The results obtained using the new model agreed closely with the results from the QUAL family and the analytical solutions.
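
    The DO/BOD pair in such models descends from the classic Streeter-Phelps formulation, which is worth keeping in mind as the minimal special case. A sketch with illustrative parameters (the Vensim model above solves a far richer eighteen-variable system):

    ```python
    import numpy as np

    KD, KA = 0.3, 0.6            # deoxygenation / reaeration rates (1/day)
    L0, D0 = 20.0, 2.0           # initial BOD (mg/L) and initial DO deficit (mg/L)

    def deficit(t):
        """Classic Streeter-Phelps DO deficit D(t) along travel time t (days)."""
        return ((KD * L0 / (KA - KD)) * (np.exp(-KD * t) - np.exp(-KA * t))
                + D0 * np.exp(-KA * t))

    t = np.linspace(0.0, 10.0, 201)
    d = deficit(t)
    print(f"critical deficit {d.max():.2f} mg/L at t = {t[np.argmax(d)]:.2f} days")
    ```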

  19. A dynamic simulation model of the Savannah River Site high level waste complex

    International Nuclear Information System (INIS)

    Gregory, M.V.; Aull, J.E.; Dimenna, R.A.

    1994-01-01

    A detailed, dynamic simulation of the entire high-level radioactive waste complex at the Savannah River Site has been developed using SPEEDUP(tm) software. The model represents mass transfer, evaporation, precipitation, sludge washing, effluent treatment, and vitrification unit operation processes through the solution of 7800 coupled differential and algebraic equations. Twenty-seven discrete chemical constituents are tracked through the unit operations. The simultaneous simulation of concurrent batch and continuous processes is achieved by several novel, customized SPEEDUP(tm) algorithms. Due to the model's computational burden, a high-end workstation is required: simulation of a year's operation of the complex requires approximately three CPU hours on an IBM RS/6000 Model 590 processor. The model will be used to develop optimal high-level waste (HLW) processing strategies over a thirty-year time horizon. It will be employed to better understand the dynamic inter-relationships between different HLW unit operations, and to suggest strategies that will maximize available working tank space during the early years of operation and minimize overall waste processing cost over the long-term history of the complex. Model validation runs are currently underway, with comparisons against actual plant operating data providing an excellent match.

  20. BOREAS RSS-8 BIOME-BGC Model Simulations at Tower Flux Sites in 1994

    Science.gov (United States)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Kimball, John

    2000-01-01

    BIOME-BGC is a general ecosystem process model designed to simulate biogeochemical and hydrologic processes across multiple scales (Running and Hunt, 1993). In this investigation, BIOME-BGC was used to estimate daily water and carbon budgets for the BOREAS tower flux sites for 1994. Carbon variables estimated by the model include gross primary production (i.e., net photosynthesis), maintenance and heterotrophic respiration, net primary production, and net ecosystem carbon exchange. Hydrologic variables estimated by the model include snowcover, evaporation, transpiration, evapotranspiration, soil moisture, and outflow. The information provided by the investigation includes input initialization and model output files for various sites in tabular ASCII format.

  1. Coupled model simulations of climate changes in the 20th century and beyond

    Science.gov (United States)

    Yu, Yongqiang; Zhi, Hai; Wang, Bin; Wan, Hui; Li, Chao; Liu, Hailong; Li, Wei; Zheng, Weipeng; Zhou, Tianjun

    2008-07-01

    Several scenario experiments for the IPCC 4th Assessment Report (AR4) are performed with version g1.0 of the Flexible coupled Ocean-Atmosphere-Land System Model (FGOALS) developed at the Institute of Atmospheric Physics, Chinese Academy of Sciences (IAP/CAS), including the “Climate of the 20th century experiment”, the “CO2 1% increase per year to doubling experiment”, and two separate IPCC greenhouse gas emission scenarios, A1B and B1. To distinguish between the different impacts of natural variations and human activities on the climate change, three-member ensemble runs are performed for each scenario experiment. The coupled model simulations show: (1) from 1900 to 2000, the global mean temperature increases by about 0.5°C, and the major increase occurs during the latter half of the 20th century, which is consistent with the observations and highlights the coupled model’s ability to reproduce the climate changes since the industrial revolution; (2) the global mean surface air temperature increases by about 1.6°C in the CO2 doubling experiment, and by 1.5°C and 2.4°C in the A1B and B1 scenarios, respectively. The global warming is indicated not only by the changes of the surface temperature and precipitation but also by the temperature increase in the deep ocean. The thermal expansion of the sea water would induce a rise in the global mean sea level. Both the control run and the 20th century climate change run are carried out again with version g1.1 of FGOALS, in which the cold biases in the high latitudes were removed. They are then compared with those from version g1.0 of FGOALS in order to distinguish the effect of the model biases on the simulation of global warming.

  2. Model simulation of the Manasquan water-supply system in Monmouth County, New Jersey

    Science.gov (United States)

    Chang, Ming; Tasker, Gary D.; Nieswand, Steven

    2001-01-01

    Model simulation of the Manasquan Water Supply System in Monmouth County, New Jersey, was completed using historical hydrologic data to evaluate the effects of operational and withdrawal alternatives on the Manasquan reservoir and pumping system. Changes in the system operations can be simulated with the model using precipitation forecasts. The Manasquan Reservoir system model operates by using daily streamflow values, which were reconstructed from historical U.S. Geological Survey streamflow-gaging station records. The model is able to run in two modes--the General Risk Analysis Model (GRAM) and the Position Analysis Model (POSA). The GRAM simulation procedure uses reconstructed historical streamflow records to provide probability estimates of certain events, such as reservoir storage levels declining below a specific level, given an assumed set of operating rules and withdrawal rates. POSA can be used to forecast the likelihood of specified outcomes, such as streamflows falling below statutory passing flows, associated with a specific working plan for the water-supply system over a period of months. The user can manipulate the model and generate graphs and tables of streamflows and storage, for example. This model can be used as a management tool to facilitate the development of drought warning and drought emergency rule curves and safe yield values for the water-supply system.

  3. mr. A C++ library for the matching and running of the Standard Model parameters

    International Nuclear Information System (INIS)

    Kniehl, Bernd A.; Veretin, Oleg L.; Pikelner, Andrey F.; Joint Institute for Nuclear Research, Dubna

    2016-01-01

    We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS-bar renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library.
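
    For a sense of what renormalization group running means in practice, the sketch below evolves the strong coupling at one loop only; mr itself works to three- and four-loop precision, and the numerical inputs here are generic textbook values, not output of the library:

    ```python
    import math

    def alpha_s_one_loop(mu, alpha_mz=0.1181, mz=91.1876, nf=5):
        """One-loop QCD running of the strong coupling from M_Z to scale mu (GeV).

        beta-function coefficient b0 = (33 - 2*nf) / (12*pi) for nf quark flavors."""
        b0 = (33 - 2 * nf) / (12 * math.pi)
        return alpha_mz / (1 + b0 * alpha_mz * math.log(mu**2 / mz**2))

    print(f"alpha_s(1 TeV) ~ {alpha_s_one_loop(1000.0):.4f}")
    ```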

  4. mr. A C++ library for the matching and running of the Standard Model parameters

    Energy Technology Data Exchange (ETDEWEB)

    Kniehl, Bernd A.; Veretin, Oleg L. [Hamburg Univ. (Germany). II. Inst. fuer Theoretische Physik]; Pikelner, Andrey F. [Hamburg Univ. (Germany). II. Inst. fuer Theoretische Physik; Joint Institute for Nuclear Research, Dubna (Russian Federation). Bogoliubov Lab. of Theoretical Physics]

    2016-01-15

    We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS-bar renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library.

  5. Large-scale tropospheric transport in the Chemistry-Climate Model Initiative (CCMI) simulations

    Science.gov (United States)

    Orbe, Clara; Yang, Huang; Waugh, Darryn W.; Zeng, Guang; Morgenstern, Olaf; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Plummer, David A.; Scinocca, John F.; Josse, Beatrice; Marecal, Virginie; Jöckel, Patrick; Oman, Luke D.; Strahan, Susan E.; Deushi, Makoto; Tanaka, Taichu Y.; Yoshida, Kohei; Akiyoshi, Hideharu; Yamashita, Yousuke; Stenke, Andreas; Revell, Laura; Sukhodolov, Timofei; Rozanov, Eugene; Pitari, Giovanni; Visioni, Daniele; Stone, Kane A.; Schofield, Robyn; Banerjee, Antara

    2018-05-01

    Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry-Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.

  6. Large-scale tropospheric transport in the Chemistry–Climate Model Initiative (CCMI) simulations

    Directory of Open Access Journals (Sweden)

    C. Orbe

    2018-05-01

    Full Text Available Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry–Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.

  7. Voluntary Wheel Running in Mice.

    Science.gov (United States)

    Goh, Jorming; Ladiges, Warren

    2015-12-02

    Voluntary wheel running in the mouse is used to assess physical performance and endurance and to model exercise training as a way to enhance health. Wheel running is a voluntary activity in contrast to other experimental exercise models in mice, which rely on aversive stimuli to force active movement. This protocol consists of allowing mice to run freely on the open surface of a slanted, plastic saucer-shaped wheel placed inside a standard mouse cage. Rotations are electronically transmitted to a USB hub so that frequency and rate of running can be captured via a software program for data storage and analysis for variable time periods. Mice are individually housed so that accurate recordings can be made for each animal. Factors such as mouse strain, gender, age, and individual motivation, which affect running activity, must be considered in the design of experiments using voluntary wheel running. Copyright © 2015 John Wiley & Sons, Inc.

  8. Using Active Learning for Speeding up Calibration in Simulation Models.

    Science.gov (United States)

    Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2016-07-01

    Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
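
    A minimal sketch of the active-learning loop described above, assuming a scikit-learn neural network as the learner and a toy stand-in for the expensive simulator; all names, thresholds, and pool sizes here are invented for illustration:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    def run_model(theta):
        """Stand-in for an expensive simulation: 'acceptable' if near a target."""
        return np.linalg.norm(theta - 0.7) < 0.2

    pool = rng.uniform(0, 1, size=(20000, 3))           # candidate parameter combos
    labeled_idx = list(rng.choice(len(pool), 200, replace=False))
    labels = [run_model(pool[i]) for i in labeled_idx]  # initial simulation runs

    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
    for _ in range(10):                                 # active-learning rounds
        clf.fit(pool[labeled_idx], labels)
        scores = clf.predict_proba(pool)[:, 1]          # P(combination acceptable)
        scores[labeled_idx] = -1                        # skip already-evaluated
        for i in np.argsort(scores)[-50:]:              # evaluate most promising
            labeled_idx.append(i)
            labels.append(run_model(pool[i]))

    print(f"acceptable combinations found: {sum(labels)} "
          f"using {len(labels)} of {len(pool)} runs")
    ```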

  9. Black hole constraints on the running-mass inflation model

    OpenAIRE

    Leach, Samuel M; Grivell, Ian J; Liddle, Andrew R

    2000-01-01

    The running-mass inflation model, which has strong motivation from particle physics, predicts density perturbations whose spectral index is strongly scale-dependent. For a large part of parameter space the spectrum rises sharply to short scales. In this paper we compute the production of primordial black holes, using both analytic and numerical calculation of the density perturbation spectra. Observational constraints from black hole production are shown to exclude a large region of otherwise...

  10. Designing Crop Simulation Web Service with Service Oriented Architecture Principle

    Science.gov (United States)

    Chinnachodteeranun, R.; Hung, N. D.; Honda, K.

    2015-12-01

    Crop simulation models are efficient tools for simulating crop growth processes and yield. Running crop models requires data from various sources as well as time-consuming data processing, such as data quality checking and data formatting, before those data can be input to the model. This has limited the use of crop modeling largely to crop modelers. We aim to make running crop models convenient for various users so that the utilization of crop models will expand, which will directly improve agricultural applications. As a first step, we developed a prototype that runs DSSAT on the Web, called Tomorrow's Rice (v. 1). It predicts rice yields based on a planting date, rice variety and soil characteristics using the DSSAT crop model. A user only needs to select a planting location on the Web GUI; the system then queries historical weather data from available sources and returns the expected yield. Currently, we are working on a weather data connection via the Sensor Observation Service (SOS) interface defined by the Open Geospatial Consortium (OGC). Weather data can be automatically connected to a weather generator for generating weather scenarios for running the crop model. In order to expand these services further, we are designing a web service framework consisting of layers of web services to support composition and execution of crop simulations. This framework allows a third-party application to call and cascade each service as it needs for data preparation and running the DSSAT model using a dynamic web service mechanism. The framework has a module to manage data format conversion, which means users do not need to spend their time curating the data inputs. Dynamic linking of data sources and services is implemented using the Service Component Architecture (SCA). This agriculture web service platform demonstrates interoperability of weather data using the SOS interface, convenient connections between weather data sources and a weather generator, and connecting
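
    A minimal sketch of the kind of web service wrapper described above, assuming Flask and a hypothetical /simulate endpoint; the weather query and the DSSAT run are replaced by stubs, since the actual service composition is not published in the record:

    ```python
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def run_crop_model(lat, lon, planting_date, variety):
        """Stub standing in for a weather retrieval (e.g. via SOS) plus a DSSAT run."""
        weather = {"rain_mm": 850, "tmax_c": 31}          # would come from an SOS query
        return {"yield_kg_ha": 4200, "weather": weather}  # would come from DSSAT

    @app.route("/simulate", methods=["GET"])
    def simulate():
        # e.g. /simulate?lat=14.1&lon=100.6&date=2015-06-01&variety=IR64
        result = run_crop_model(
            lat=float(request.args["lat"]),
            lon=float(request.args["lon"]),
            planting_date=request.args["date"],
            variety=request.args.get("variety", "IR64"),
        )
        return jsonify(result)

    if __name__ == "__main__":
        app.run(port=8080)
    ```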

  11. Wheel-running in a transgenic mouse model of Alzheimer's disease: protection or symptom?

    Science.gov (United States)

    Richter, Helene; Ambrée, Oliver; Lewejohann, Lars; Herring, Arne; Keyvani, Kathy; Paulus, Werner; Palme, Rupert; Touma, Chadi; Schäbitz, Wolf-Rüdiger; Sachser, Norbert

    2008-06-26

    Several studies on both humans and animals reveal benefits of physical exercise on brain function and health. A previous study on TgCRND8 mice, a transgenic model of Alzheimer's disease, reported beneficial effects of premorbid onset of long-term access to a running wheel on spatial learning and plaque deposition. Our study investigated the effects of access to a running wheel after the onset of Abeta pathology on behavioural, endocrinological, and neuropathological parameters. From day 80 of age, the time when Abeta deposition becomes apparent, TgCRND8 and wildtype mice were kept with or without running wheel. Home cage behaviour was analysed and cognitive abilities regarding object recognition memory and spatial learning in the Barnes maze were assessed. Our results show that, in comparison to Wt mice, Tg mice were characterised by impaired object recognition memory and spatial learning, increased glucocorticoid levels, hyperactivity in the home cage and high levels of stereotypic behaviour. Access to a running wheel had no effects on cognitive or neuropathological parameters, but reduced the amount of stereotypic behaviour in transgenics significantly. Furthermore, wheel-running was inversely correlated with stereotypic behaviour, suggesting that wheel-running may have stereotypic qualities. In addition, wheel-running positively correlated with plaque burden. Thus, in a phase when plaques are already present in the brain, it may be symptomatic of brain pathology, rather than protective. Whether or not access to a running wheel has beneficial effects on Alzheimer-like pathology and symptoms may therefore strongly depend on the exact time when the wheel is provided during development of the disease.

  12. Biochemical Network Stochastic Simulator (BioNetS): software for stochastic modeling of biochemical networks

    Directory of Open Access Journals (Sweden)

    Elston Timothy C

    2004-03-01

    Full Text Available Abstract Background Intrinsic fluctuations due to the stochastic nature of biochemical reactions can have large effects on the response of biochemical networks. This is particularly true for pathways that involve transcriptional regulation, where generally there are two copies of each gene and the number of messenger RNA (mRNA) molecules can be small. Therefore, there is a need for computational tools for developing and investigating stochastic models of biochemical networks. Results We have developed the software package Biochemical Network Stochastic Simulator (BioNetS) for efficiently and accurately simulating stochastic models of biochemical networks. BioNetS has a graphical user interface that allows models to be entered in a straightforward manner, and allows the user to specify the type of random variable (discrete or continuous) for each chemical species in the network. The discrete variables are simulated using an efficient implementation of the Gillespie algorithm. For the continuous random variables, BioNetS constructs and numerically solves the appropriate chemical Langevin equations. The software package has been developed to scale efficiently with network size, thereby allowing large systems to be studied. BioNetS runs as a BioSpice agent and can be downloaded from http://www.biospice.org. BioNetS also can be run as a stand-alone package. All the required files are accessible from http://x.amath.unc.edu/BioNetS. Conclusions We have developed BioNetS to be a reliable tool for studying the stochastic dynamics of large biochemical networks. Important features of BioNetS are its ability to handle hybrid models that consist of both continuous and discrete random variables and its ability to model cell growth and division. We have verified the accuracy and efficiency of the numerical methods by considering several test systems.
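
    The discrete part of such a hybrid simulator rests on the Gillespie algorithm; here is a minimal sketch for a two-state gene expression network (rate constants are invented for illustration, and BioNetS' own implementation is considerably more general):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def gillespie(k_on=0.5, k_off=0.3, k_tx=2.0, k_deg=0.1, t_end=200.0):
        """Gillespie SSA for a two-state gene: off <-> on, on -> on + mRNA, mRNA -> 0."""
        t, gene_on, mrna = 0.0, 0, 0
        times, counts = [0.0], [0]
        while t < t_end:
            rates = np.array([k_on * (1 - gene_on),   # gene switches on
                              k_off * gene_on,        # gene switches off
                              k_tx * gene_on,         # transcription
                              k_deg * mrna])          # mRNA degradation
            total = rates.sum()
            if total == 0:
                break
            t += rng.exponential(1 / total)           # time to next reaction
            r = rng.choice(4, p=rates / total)        # which reaction fires
            if r == 0: gene_on = 1
            elif r == 1: gene_on = 0
            elif r == 2: mrna += 1
            else: mrna -= 1
            times.append(t); counts.append(mrna)
        return times, counts

    times, counts = gillespie()
    ```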

  13. Application of new simulation algorithms for modeling rf diagnostics of electron clouds

    International Nuclear Information System (INIS)

    Veitzer, Seth A.; Smithe, David N.; Stoltz, Peter H.

    2012-01-01

    Traveling wave rf diagnostics of electron cloud build-up show promise as a non-destructive technique for measuring plasma density and the efficacy of mitigation techniques. However, it is very difficult to derive an absolute measure of plasma density from experimental measurements for a variety of technical reasons. Detailed numerical simulations are vital for understanding experimental data, and have successfully modeled cloud build-up. Such simulations are limited in their ability to reproduce experimental data by the large separation of scales inherent to the problem: one must resolve both rf frequencies in the GHz range and the plasma modulation frequency of tens of MHz, while running for very long simulation times, on the order of microseconds. The application of new numerical simulation techniques allows us to bridge the simulation scales in this problem and produce spectra that can be directly compared to experiments. The first method is to use a plasma dielectric model to measure plasma-induced phase shifts in the rf wave. The dielectric is modulated at a low frequency, simulating the effects of multiple bunch crossings. This allows simulations to be performed without kinetic particles representing the plasma, which both speeds up the simulations and reduces numerical noise from interpolation of particle charge and currents onto the computational grid. Secondly, we utilize a port boundary condition model to simultaneously absorb rf at the simulation boundaries and launch the rf into the simulation. This method restricts rf frequencies more accurately than driving the rf with an external (finite) current source and absorbing layers at the boundaries. We also explore the effects of non-uniform plasma densities on the simulated spectra.
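
    The dielectric picture above implies a simple back-of-the-envelope estimate of the plasma-induced phase shift: for ω_p ≪ ω, Δφ ≈ ω_p²L/(2cω). A sketch with invented cloud parameters follows; the actual simulations in the record resolve far more physics than this formula:

    ```python
    import math

    E = 1.602e-19; EPS0 = 8.854e-12; ME = 9.109e-31; C = 3.0e8

    def phase_shift(n_e, f_rf, length):
        """Approximate rf phase shift from a uniform cold plasma column.

        Uses delta_phi ~ omega_p^2 * L / (2 * c * omega), valid for
        omega_p << omega (unmagnetized plasma, TEM-like propagation)."""
        w = 2 * math.pi * f_rf
        wp2 = n_e * E**2 / (EPS0 * ME)      # plasma frequency squared
        return wp2 * length / (2 * C * w)

    # e.g. an electron cloud of 1e12 m^-3 over 10 m probed at 2 GHz (invented values)
    print(f"{phase_shift(1e12, 2e9, 10.0):.2e} rad")
    ```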

  14. Dynamics Analysis of Fluid-Structure Interaction for a Biologically-Inspired Biped Robot Running on Water

    Directory of Open Access Journals (Sweden)

    Linsen Xu

    2013-10-01

    Full Text Available A kinematics analysis of a biologically-inspired biped robot is carried out to determine the trajectory of the robot foot. To calculate the pressure distribution across a robot foot before it touches the surface of the water, the compression flow of air and the depression motion of the water surface are considered. The pressure model after touching the water surface is built according to the theory of rigid body planar motion. The multi-material ALE algorithm is applied to simulate the course of the foot slapping the water. The simulation results indicate that the model of the bionic robot can satisfy the water-running function. A real prototype of the robot was manufactured to test its ability to run on water. When the biped robot is running on water, the average force generated by the propulsion mechanism is about 1.3 N. The experimental results show that the propulsion system can satisfy the requirements of a biped robot running on water.

  15. A station blackout simulation for the Advanced Neutron Source Reactor using the integrated primary and secondary system model

    International Nuclear Information System (INIS)

    Schneider, E.A.

    1994-01-01

    The Advanced Neutron Source Reactor (ANSR) is a research reactor to be built at Oak Ridge National Laboratory. This paper deals with thermal-hydraulic analysis of ANSR's cooling systems during nominal and transient conditions, with the major effort focusing upon the construction and testing of computer models of the reactor's primary, secondary and reflector vessel cooling systems. The code RELAP5 was used to simulate transients, such as loss of coolant accidents and loss of off-site power, as well as to model the behavior of the reactor in steady state. Three stages are involved in constructing and using a RELAP5 model: (1) construction and encoding of the desired model, (2) testing and adjustment of the model until a satisfactory steady state is achieved, and (3) running actual transients using the steady-state results obtained earlier as initial conditions. By use of the ANSR design specifications, a model of the reactor's primary and secondary cooling systems has been constructed to run a transient simulating a loss of off-site power. This incident assumes a pump coastdown in both the primary and secondary loops. The results determine whether the reactor can survive the transition from forced convection to natural circulation

  16. Prosthetic model, but not stiffness or height, affects the metabolic cost of running for athletes with unilateral transtibial amputations.

    Science.gov (United States)

    Beck, Owen N; Taboga, Paolo; Grabowski, Alena M

    2017-07-01

    Running-specific prostheses enable athletes with lower limb amputations to run by emulating the spring-like function of biological legs. Current prosthetic stiffness and height recommendations aim to mitigate kinematic asymmetries for athletes with unilateral transtibial amputations. However, it is unclear how different prosthetic configurations influence the biomechanics and metabolic cost of running. Consequently, we investigated how prosthetic model, stiffness, and height affect the biomechanics and metabolic cost of running. Ten athletes with unilateral transtibial amputations each performed 15 running trials at 2.5 or 3.0 m/s while we measured ground reaction forces and metabolic rates. Athletes ran using three different prosthetic models with five different stiffness category and height combinations per model. Use of an Ottobock 1E90 Sprinter prosthesis reduced metabolic cost by 4.3 and 3.4% compared with use of Freedom Innovations Catapult [fixed effect (β) = -0.177; P < 0.001] and Össur Flex-Run (β = -0.139; P = 0.002) prostheses, respectively. Neither prosthetic stiffness (P ≥ 0.180) nor height (P = 0.062) affected the metabolic cost of running. The metabolic cost of running was related to lower peak (β = 0.649; P = 0.001) and stance average (β = 0.772; P = 0.018) vertical ground reaction forces, prolonged ground contact times (β = -4.349; P = 0.012), and decreased leg stiffness (β = 0.071; P < 0.001). Prosthetic stiffness and height did not affect the metabolic cost of running. Instead, an optimal prosthetic model, which improves overall biomechanics, minimizes the metabolic cost of running for athletes with unilateral transtibial amputations. NEW & NOTEWORTHY The metabolic cost of running for athletes with unilateral transtibial amputations depends on prosthetic model and is associated with lower peak and stance average vertical ground reaction forces, longer contact times, and reduced leg stiffness. Metabolic cost is unrelated to prosthetic stiffness, height, and stride kinematic symmetry. Unlike nonamputees who decrease leg stiffness with

  17. Comparison between a coupled 1D-2D model and a fully 2D model for supercritical flow simulation in crossroads

    KAUST Repository

    Ghostine, Rabih

    2014-12-01

    In open channel networks, flow is usually approximated by the one-dimensional (1D) Saint-Venant equations coupled with an empirical junction model. In this work, a comparison in terms of accuracy and computational cost between a coupled 1D-2D shallow water model and a fully two-dimensional (2D) model is presented. The paper explores the ability of a coupled model to simulate the flow processes during supercritical flows in crossroads. This combination leads to a significant reduction in computational time, as a 1D approach is used in branches and a 2D approach is employed only in selected areas where detailed flow information is essential. Overall, the numerical results suggest that the coupled model is able to accurately simulate the main flow processes. In particular, hydraulic jumps, recirculation zones, and discharge distribution are reasonably well reproduced and clearly identified. The proposed model also leads to a 30% reduction in run times. © 2014 International Association for Hydro-Environment Engineering and Research.

  18. From control to causation: Validating a 'complex systems model' of running-related injury development and prevention.

    Science.gov (United States)

    Hulme, A; Salmon, P M; Nielsen, R O; Read, G J M; Finch, C F

    2017-11-01

    There is a need for an ecological and complex systems approach to better understand the development and prevention of running-related injury (RRI). In a previous article, we proposed a prototype model of the Australian recreational distance running system based on the Systems Theoretic Accident Mapping and Processes (STAMP) method. That model included the influence of political, organisational, managerial, and sociocultural determinants alongside individual-level factors in relation to RRI development. The purpose of this study was to validate that prototype model by drawing on the expertise of both systems thinking and distance running experts. This study used a modified Delphi technique involving a series of online surveys (December 2016 to March 2017). The initial survey was divided into four sections containing a total of seven questions pertaining to different features of the prototype model. Consensus about the validity of the prototype model was reached when the number of experts who agreed or disagreed with a survey statement was ≥75% of the total number of respondents. A total of two Delphi rounds was needed to validate the prototype model. Of the 51 experts initially contacted, 50.9% (n = 26) completed the first round of the Delphi, and 92.3% (n = 24) of those in the first round participated in the second. Most of the 24 full participants considered themselves to be a running expert (66.7%), and approximately a third indicated their expertise as a systems thinker (33.3%). After the second round, 91.7% of the experts agreed that the prototype model was a valid description of the Australian distance running system. This is the first study to formally examine the development and prevention of RRI from an ecological and complex systems perspective. The validated model of the Australian distance running system facilitates theoretical advancement in terms of identifying practical system

  19. Efficient scatter model for simulation of ultrasound images from computed tomography data

    Science.gov (United States)

    D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.

    2015-12-01

    Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Due to the high value of specialized low-cost training for healthcare professionals, there is a growing interest in this technology and in the development of high-fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run either on notebooks or desktops using low-cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. The simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The generation of scattering maps was revised for improved computational efficiency. This allowed a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe quality and performance metrics to validate these results; a performance of up to 55 fps was achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state of the art, showing negligible differences in its distribution.
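
    A minimal sketch of the simplified scatter model described above (multiplicative noise followed by convolution with a PSF), assuming SciPy and a Gaussian PSF as a stand-in for the transducer-tailored PSFs of the real simulator:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)

    def add_speckle(echo_map, psf_sigma=(1.0, 3.0), noise_strength=0.8):
        """Simplified speckle: modulate the echogenicity map with multiplicative
        noise, then blur with a Gaussian stand-in for the system PSF
        (axial x lateral). Parameter values are invented for illustration."""
        noise = 1.0 + noise_strength * rng.standard_normal(echo_map.shape)
        scattered = echo_map * noise                  # multiplicative scatter model
        return np.abs(gaussian_filter(scattered, psf_sigma))

    tissue = rng.uniform(0.2, 1.0, (256, 256))        # echogenicity, e.g. mapped from CT
    frame = add_speckle(tissue)
    ```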

  20. The Trick Simulation Toolkit: A NASA/Open source Framework for Running Time Based Physics Models

    Science.gov (United States)

    Penn, John M.; Lin, Alexander S.

    2016-01-01

    This paper describes the design and use of the Trick Simulation Toolkit, a simulation development environment for creating high-fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. It describes Trick's design goals and how the development environment attempts to achieve those goals. It describes how Trick is used in some of the many training and engineering simulations at NASA. Finally, it describes the Trick NASA/Open source project on Github.

  1. The long-run forecasting of energy prices using the model of shifting trend

    International Nuclear Information System (INIS)

    Radchenko, Stanislav

    2005-01-01

    Developing models for accurate long-term energy price forecasting is an important problem because these forecasts should be useful in determining both supply and demand of energy. On the supply side, long-term forecasts determine investment decisions of energy-related companies. On the demand side, investments in physical capital and durable goods depend on price forecasts of a particular energy type. Forecasting long-run trend movements in energy prices is very important at the macroeconomic level for several developing countries because energy prices have large impacts on their real output, the balance of payments, fiscal policy, etc. Pindyck (1999) argues that the dynamics of real energy prices are mean-reverting to trend lines with slopes and levels that shift unpredictably over time. The hypothesis of shifting long-term trend lines was statistically tested by Benard et al. (2004), who find statistically significant instabilities for coal and natural gas prices. I continue the research of energy prices in the framework of continuously shifting levels and slopes of trend lines started by Pindyck (1999). The examined model offers both a parsimonious approach and a useful perspective on developments in energy markets. Using a model of depletable resource production, Pindyck (1999) argued that the forecast of energy prices in the model is based on the long-run total marginal cost. Because the model of a shifting trend is based on competitive behavior, one may examine deviations of oil producers from competitive behavior by studying the difference between actual prices and long-term forecasts. To construct the long-run forecasts (10-year-ahead and 15-year-ahead) of energy prices, I modify the univariate shifting-trends model of Pindyck (1999). I relax some assumptions on model parameters and the assumption of a white-noise error term, and propose a new Bayesian approach utilizing a Gibbs sampling algorithm to estimate the model with autocorrelation. To
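
    The shifting-trend idea can be illustrated in a few lines of Python: prices fluctuate around a trend line whose level and slope themselves follow random walks. The innovation variances and starting values below are invented for illustration, not estimates from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def shifting_trend_path(T=480, sigma_e=1.0, sigma_level=0.3, sigma_slope=0.002):
        """Price fluctuating around a trend line whose level and slope drift
        unpredictably (random walks), in the spirit of Pindyck's model."""
        level, slope = 20.0, 0.02
        prices = np.empty(T)
        for t in range(T):
            level += sigma_level * rng.normal()   # unpredictable level shifts
            slope += sigma_slope * rng.normal()   # unpredictable slope shifts
            prices[t] = level + slope * t + sigma_e * rng.normal()
        return prices

    p = shifting_trend_path()   # e.g. a synthetic 40-year monthly series
    ```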

  2. Assessing the Uncertainty of Tropical Cyclone Simulations in NCAR's Community Atmosphere Model

    Directory of Open Access Journals (Sweden)

    Kevin A Reed

    2011-08-01

    Full Text Available The paper explores the impact of the initial-data, parameter and structural model uncertainty on the simulation of a tropical cyclone-like vortex in the National Center for Atmospheric Research's (NCAR) Community Atmosphere Model (CAM). An analytic technique is used to initialize the model with an idealized weak vortex that develops into a tropical cyclone over ten simulation days. A total of 78 ensemble simulations are performed at horizontal grid spacings of 1.0°, 0.5° and 0.25° using two recently released versions of the model, CAM 4 and CAM 5. The ensemble members represent simulations with random small-amplitude perturbations of the initial conditions, small shifts in the longitudinal position of the initial vortex and runs with slightly altered model parameters. The main distinction between CAM 4 and CAM 5 lies within the physical parameterization suite, and the simulations with both CAM versions at the varying resolutions assess the structural model uncertainty. At all resolutions storms are produced with many tropical cyclone-like characteristics. The CAM 5 simulations exhibit more intense storms than CAM 4 by day 10 at the 0.5° and 0.25° grid spacings, while the CAM 4 storm at 1.0° is stronger. There are also distinct differences in the shapes and vertical profiles of the storms in the two variants of CAM. The ensemble members show no distinction between the initial-data and parameter uncertainty simulations. At day 10 they produce ensemble root-mean-square deviations from an unperturbed control simulation on the order of 1–5 m s-1 for the maximum low-level wind speed and 2–10 hPa for the minimum surface pressure. However, there are large differences between the two CAM versions at identical horizontal resolutions. This suggests that the structural uncertainty is more dominant than the initial-data and parameter uncertainties in this study. The uncertainty among the ensemble members is assessed and quantified.

  3. Human and avian running on uneven ground: a model-based comparison

    OpenAIRE

    Müller, R.; Birn-Jeffery, A. V.; Blum, Y.

    2016-01-01

    Birds and humans are successful bipedal runners, who have individually evolved bipedalism, but the extent of the similarities and differences of their bipedal locomotion is unknown. In turn, the anatomical differences of their locomotor systems complicate direct comparisons. However, a simplifying mechanical model, such as the conservative spring–mass model, can be used to describe both avian and human running and thus, provides a way to compare the locomotor strategies that birds and humans ...
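
    A minimal sketch of the spring-mass (SLIP) stance phase that such comparisons build on, using explicit Euler integration and generic human-like parameter values (not taken from the paper):

    ```python
    import numpy as np

    def slip_stance(v0=5.0, angle_deg=68.0, k=20000.0, m=80.0, l0=1.0, dt=1e-4):
        """Integrate the planar spring-mass (SLIP) stance phase with explicit Euler.

        Starts at touchdown with speed v0 (m/s) and leg angle from horizontal;
        returns the ground reaction force history. Parameter values are generic."""
        a = np.radians(angle_deg)
        x, y = -l0 * np.cos(a), l0 * np.sin(a)   # foot (spring base) at the origin
        vx, vy = v0, 0.0
        grf = []
        while True:
            l = np.hypot(x, y)
            if l >= l0 and vx * x + vy * y > 0:  # leg back at rest length: takeoff
                break
            f = k * (l0 - l)                     # linear spring force along the leg
            fx, fy = f * x / l, f * y / l
            grf.append((fx, fy))
            vx += (fx / m) * dt
            vy += (fy / m - 9.81) * dt
            x += vx * dt; y += vy * dt
        return np.array(grf)

    forces = slip_stance()
    ```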

  4. Experiment data of ROSA-III integral test RUN 710

    International Nuclear Information System (INIS)

    Koizumi, Yasuo; Tasaka, Kanji; Adachi, Hiromichi; Anoda, Yoshinari; Soda, Kunihisa

    1981-01-01

    The report presents data from RUN 710 at the ROSA-III test facility. RUN 710 simulates a 200% double-ended break at the inlet side of a recirculation pump of a BWR. All ECCSs are activated, and electric power is not supplied to the simulated fuel rods in one of the four core channels in RUN 710. The primary initial conditions are a steam dome pressure of 7.35 MPa, lower plenum subcooling of 10.8 K, core inlet flow rate of 31.3 kg/s and core heat generation of 2.42 MW. The peak cladding temperature is 609 K at Position 3, 352.5 mm above the midplane of the core. All heater rods are quenched after ECCS actuation and the effectiveness of the ECCS is confirmed. (author)

  5. Running the running

    OpenAIRE

    Cabass, Giovanni; Di Valentino, Eleonora; Melchiorri, Alessandro; Pajer, Enrico; Silk, Joseph

    2016-01-01

    We use the recent observations of Cosmic Microwave Background temperature and polarization anisotropies provided by the Planck satellite experiment to place constraints on the running $\alpha_\mathrm{s} = \mathrm{d}n_\mathrm{s}/\mathrm{d}\log k$ and the running of the running $\beta_\mathrm{s} = \mathrm{d}\alpha_\mathrm{s}/\mathrm{d}\log k$ of the spectral index $n_\mathrm{s}$ of primordial scalar fluctuations. We find $\alpha_\mathrm{s}=0.011\pm0.010$ and $\beta_\mathrm{s}=0.027\pm$…
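
    For context, in the standard convention the running and the running of the running enter the primordial scalar power spectrum as a log-expansion about a pivot scale $k_*$ (this is textbook background, not a result of the record):

    ```latex
    \ln P_\zeta(k) = \ln A_\mathrm{s}
      + \left(n_\mathrm{s} - 1\right) \ln\frac{k}{k_*}
      + \frac{\alpha_\mathrm{s}}{2} \ln^2\frac{k}{k_*}
      + \frac{\beta_\mathrm{s}}{6} \ln^3\frac{k}{k_*}
    ```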

  6. Multi-agent modeling for the simulation of a simple smart microgrid

    International Nuclear Information System (INIS)

    Kremers, Enrique; Gonzalez de Durana, Jose; Barambones, Oscar

    2013-01-01

    Highlights: • We created a systemic modular model for a microgrid with a load flow calculation. • The model is modular and besides the power devices includes also a communication layer. • An agent-based approach allows to include intelligent strategies on every node of the system. • First feasibility simulations were run to show the possible outcomes of generator and load management strategies. - Abstract: The smart grid is a highly complex system that is being formed from the traditional power grid, adding new and sophisticated communication and control devices. This will enable integrating new elements for distributed power generation and also achieving an increasingly automated operation so for actions of the utilities as for customers. In order to model such systems, a bottom-up method is followed, using only a few basic elements which are structured into two layers: a physical layer for the electrical power transmission and one logical layer for element communication. A simple case study is presented to analyze the possibilities of simulation. It shows a microgrid model with dynamic load management and an integrated approach that can process both electrical and communication flows

  7. Development of the Real-time Core and Thermal-Hydraulic Models for Kori-1 Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Jin Hyuk; Lee, Myeong Soo; Hwang, Do Hyun; Byon, Soo Jin [KEPRI, Daejeon (Korea, Republic of)]

    2010-10-15

    The operation of Kori Unit 1 (1723.5 MWt) has been extended for an additional 10 years, with upgrades of the Main Control Room (MCR). Therefore, the revision of procedures, performance tests and work related to the exchange of the Main Control Board (MCB) are currently being carried out. As a part of this effort, a full-scope simulator for Kori-1 is being developed for pre-operation training and operator emergency response capability. The purpose of this paper is to report on the performance of the developed neutronics and thermal-hydraulic (TH) models of the Kori Unit 1 simulator. The neutronics model is based on the NESTLE code and the TH model on the RELAP5/MOD3 thermal-hydraulics analysis code, which was funded as FY-93 LDRD Project 7201, running on a commercial simulator environment tool (the 3KeyMaster™ of WSC). Some figures are provided as examples of the verification of the developed neutronics and TH models. The outputs of the developed neutronics and TH models are in accord with the Nuclear Design Report (NDR) and Final Safety Analysis Report (FSAR) of the reference plant

  8. Dezenflasyon Sürecinde Türkiye’de Enflasyonun Uzun ve Kısa Dönem Dinamiklerinin Modellenmesi (Modelling The Long Run and The Short Run Dynamics of Inflation In The Disinflation Process In Turkey)

    Directory of Open Access Journals (Sweden)

    Macide ÇİÇEK

    2005-01-01

    Full Text Available In this study, an Expectations-Augmented Phillips Curve model is employed to investigate the link between inflation and unit labor costs, the output gap (a proxy for demand shocks), the real exchange rate (a proxy for supply shocks) and price expectations for Turkey, using monthly data from 2000:01 to 2004:12. The methodology uses unit root tests and the Johansen cointegration test to examine the existence of possible long-run relationships among the variables included in the model, and a single-equation error correction model for the inflation equation, estimated by OLS, to examine the short-run dynamics of inflation. It is found that, in the long run, mark-up behaviour of output prices over unit labor costs is the main cause of inflation, the real exchange rate has a rather large impact on reducing inflation, and demand shocks do not lead to an increase in prices. The short-run dynamics of the inflation equation indicate that supply shocks are the determinant of inflation in the short run. It is also found that the exchange rate is the variable that triggers inflation adjustment most rapidly in the short run.
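
    A minimal sketch of a single-equation error correction estimation on synthetic data (a two-step Engle-Granger-style shortcut rather than the Johansen procedure used in the study; all series and coefficients below are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T = 240
    ulc = np.cumsum(rng.normal(size=T))             # unit labor costs, an I(1) series
    p = 1.2 * ulc + rng.normal(scale=0.5, size=T)   # prices cointegrated with ULC

    # Step 1: long-run (cointegrating) regression p_t = a + b*ulc_t + e_t
    X = np.column_stack([np.ones(T), ulc])
    a, b = np.linalg.lstsq(X, p, rcond=None)[0]
    ect = p - (a + b * ulc)                          # error-correction term

    # Step 2: short-run dynamics with the lagged error-correction term
    dp, dulc = np.diff(p), np.diff(ulc)
    Z = np.column_stack([np.ones(T - 1), dulc, ect[:-1]])
    const, gamma, lam = np.linalg.lstsq(Z, dp, rcond=None)[0]
    print(f"long-run elasticity b = {b:.2f}, adjustment speed = {lam:.2f}")
    ```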

  9. Climate change due to greenhouse effects in China as simulated by a regional climate model

    Energy Technology Data Exchange (ETDEWEB)

    Gao, X.J.; Zhao, Z.C.; Ding, Y.H.; Huang, R.H.; Giorgi, F. [National Climate Centre, Beijing (China)]

    2001-07-01

    The impacts of greenhouse effects (2 x CO₂) on climate change over China, as simulated by a regional climate model for China (RegCM/China), have been investigated. The model was based on RegCM2 and was nested within a global coupled ocean-atmosphere model (the CSIRO R21L9 AOGCM). Results of the control run (1 x CO₂) indicated that simulations of surface air temperature and precipitation in China by RegCM are much better than those by the global coupled model because of the higher resolution. Results of the sensitivity experiment with RegCM under 2 x CO₂ showed that the surface air temperature over China might increase remarkably due to the greenhouse effect, especially in the winter season and in North China. Precipitation might also increase in most parts of China due to the CO₂ doubling.

  10. Insights in time dependent cross compartment sensitivities from ensemble simulations with the fully coupled subsurface-land surface-atmosphere model TerrSysMP

    Science.gov (United States)

    Schalge, Bernd; Rihani, Jehan; Haese, Barbara; Baroni, Gabriele; Erdal, Daniel; Haefliger, Vincent; Lange, Natascha; Neuweiler, Insa; Hendricks-Franssen, Harrie-Jan; Geppert, Gernot; Ament, Felix; Kollet, Stefan; Cirpka, Olaf; Saavedra, Pablo; Han, Xujun; Attinger, Sabine; Kunstmann, Harald; Vereecken, Harry; Simmer, Clemens

    2017-04-01

    Currently, an integrated approach to simulating the earth system is evolving where several compartment models are coupled to achieve the best possible physically consistent representation. We used the model TerrSysMP, which fully couples subsurface, land surface and atmosphere, in a synthetic study that mimicked the Neckar catchment in Southern Germany. A virtual reality run at a high resolution of 400 m for the land surface and subsurface and 1.1 km for the atmosphere was made. Ensemble runs at a lower resolution (800 m for the land surface and subsurface) were also made. The ensemble was generated by varying soil and vegetation parameters and lateral atmospheric forcing among the different ensemble members in a systematic way. It was found that the ensemble runs deviated largely from the virtual reality reference run for some variables and some time periods (the reference run was not covered by the ensemble), which could be related to the different model resolutions. This was for example the case for river discharge in the summer. We also analyzed the spread of model states as a function of time and found clear relations between the spread and the time of the year and weather conditions. For example, the ensemble spread of latent heat flux related to uncertain soil parameters was larger under dry soil conditions than under wet soil conditions. Another example is that the ensemble spread of atmospheric states was more influenced by uncertain soil and vegetation parameters under conditions of low air pressure gradients (in summer) than under conditions with larger air pressure gradients in winter. The analysis of the ensemble of fully coupled model simulations provided valuable insights into the dynamics of land-atmosphere feedbacks, which we will further highlight in the presentation.

  11. Finite element analysis of pedestrian lower limb fractures by direct force: the result of being run over or impact?

    Science.gov (United States)

    Li, Zhengdong; Zou, Donghua; Liu, Ningguo; Zhong, Liangwei; Shao, Yu; Wan, Lei; Huang, Ping; Chen, Yijiu

    2013-06-10

    The elucidation and prediction of the biomechanics of lower limb fractures could serve as a useful tool in forensic practice. Finite element (FE) analysis could potentially help in understanding the fracture mechanisms of lower limb fractures frequently caused by car-pedestrian accidents. Our aim was (1) to develop and validate an FE model of the human lower limb, (2) to assess the biomechanics of specific injuries under run-over and impact loading conditions, and (3) to reconstruct one real car-pedestrian collision case using the model created in this study. We developed a novel lower limb FE model and simulated three different loading scenarios. The geometry of the model was reconstructed using Mimics 13.0 based on computed tomography (CT) scans from an actual traffic accident. The material properties were based upon a synthesis of data found in published literature. The FE model validation and injury reconstruction were conducted using the LS-DYNA code. The FE model was validated by comparing the simulation results with three-point bending and overall lateral impact tests and published postmortem human surrogate (PMHS) results. Simulated loading scenarios of running over the thigh with a wheel, impact on the upper leg, and impact on the lower thigh were conducted with velocities of 10 m/s, 20 m/s, and 40 m/s, respectively. We compared the injuries resulting from one actual case with the simulated results in order to explore the possible fracture biomechanism. The peak fracture forces, maximum bending moments, and energy loss ratios exhibited no significant differences between the FE simulations and the literature data. Under simulated run-over conditions, the segmental fracture pattern was formed and the femur fracture patterns and mechanisms were consistent with the actual injury features of the case. Our study demonstrated that this simulation method could potentially be effective in identifying forensic cases and exploring of the injury

  12. Simulation modeling and arena

    CERN Document Server

    Rossetti, Manuel D

    2015-01-01

    Emphasizes a hands-on approach to learning statistical analysis and model building through the use of comprehensive examples, problems sets, and software applications With a unique blend of theory and applications, Simulation Modeling and Arena®, Second Edition integrates coverage of statistical analysis and model building to emphasize the importance of both topics in simulation. Featuring introductory coverage on how simulation works and why it matters, the Second Edition expands coverage on static simulation and the applications of spreadsheets to perform simulation. The new edition als

  13. Development and validation of the European Cluster Assimilation Techniques run libraries

    Science.gov (United States)

    Facskó, G.; Gordeev, E.; Palmroth, M.; Honkonen, I.; Janhunen, P.; Sergeev, V.; Kauristie, K.; Milan, S.

    2012-04-01

    The European Commission funded the European Cluster Assimilation Techniques (ECLAT) project as a collaboration of five leading European universities and research institutes. A main contribution of the Finnish Meteorological Institute (FMI) is to provide a wide range of global MHD runs with the Grand Unified Magnetosphere Ionosphere Coupling simulation (GUMICS). The runs are divided into two categories: synthetic runs investigating the extent of solar wind drivers that can influence magnetospheric dynamics, and dynamic runs using measured solar wind data as input. Here we consider the first set of runs, with synthetic solar wind input. The solar wind density, velocity and interplanetary magnetic field had different magnitudes and orientations; furthermore, two F10.7 flux values were selected for solar radiation minimum and maximum. The solar wind parameter values were constant such that a stable solution was achieved. All configurations were run several times with three different tilt angles (-15°, 0°, +15°) in the GSE X-Z plane. The results of the 192 simulations, the so-called "synthetic run library", were visualized and uploaded to the FMI homepage after validation. Here we present details of these runs.

  14. ASCHFLOW - A dynamic landslide run-out model for medium scale hazard analysis

    Czech Academy of Sciences Publication Activity Database

    Quan Luna, B.; Blahůt, Jan; van Asch, T.W.J.; van Westen, C.J.; Kappes, M.

    2016-01-01

    Vol. 3, 12 December (2016), article No. 29. E-ISSN 2197-8670 Institutional support: RVO:67985891 Keywords: landslides * run-out models * medium scale hazard analysis * quantitative risk assessment Subject RIV: DE - Earth Magnetism, Geodesy, Geography

  15. System Identification Based Proxy Model of a Reservoir under Water Injection

    Directory of Open Access Journals (Sweden)

    Berihun M. Negash

    2017-01-01

    Full Text Available Simulation of numerical reservoir models with thousands or millions of grid blocks may consume a significant amount of time and effort, even when high-performance processors are used. In cases where simulation runs are required for sensitivity analysis, dynamic control, and optimization, the process needs to be repeated several times by continuously changing parameters, which makes it even more time-consuming. Currently, proxy models based on response surfaces are used to lessen the time required for running simulations during sensitivity analysis and optimization. Proxy models are lighter mathematical models that run faster and perform in place of heavier models that require large computations. Nevertheless, to acquire data for modeling and validation and to develop the proxy model itself, hundreds of simulation runs are required. In this paper, a system identification based proxy model that requires only a single simulation run and a properly designed excitation signal is proposed and evaluated using a benchmark case study. The results show that, with proper design of the excitation signal and proper selection of the model structure, system identification based proxy models are practical and efficient alternatives for mimicking the performance of numerical reservoir models. The resulting proxy models have potential applications for dynamic well control and optimization.
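
    A minimal sketch of the idea: excite a dynamic system (here a toy stand-in for the reservoir simulator) with a designed input signal from a single run, then fit a low-order ARX proxy by least squares. The model orders and coefficients are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    u = rng.uniform(-1, 1, n)             # excitation signal, e.g. injection-rate steps
    y = np.zeros(n)                        # "reservoir" response, e.g. BHP deviation
    for t in range(2, n):                  # toy stand-in for the heavy simulator
        y[t] = 1.5*y[t-1] - 0.6*y[t-2] + 0.5*u[t-1] + 0.1*rng.normal()

    # Fit a second-order ARX proxy: y[t] ~ a1*y[t-1] + a2*y[t-2] + b1*u[t-1]
    Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
    theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
    print("identified [a1, a2, b1]:", np.round(theta, 3))
    ```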

  16. Development of Neutronics Model for ShinKori Unit 1 Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Hong, JinHyuk; Lee, MyeongSoo; Lee, SeungHo; Suh, JungKwan; Hwang, DoHyun [KEPRI, Daejeon (Korea, Republic of)]

    2008-05-15

    ShinKori Units 1 and 2 are being built at the Kori site and will be operated at a thermal core power of 2815 MWt. The purpose of this paper is to report on the performance of the developed neutronics model of ShinKori Units 1 and 2. This report also covers a convenience tool (XS2R5) for processing the large quantity of information received from the DIT/ROCS model and generating cross-sections. The neutronics model is based on the NESTLE code incorporated into the RELAP5/MOD3 thermal-hydraulics analysis code, which was funded as FY-93 LDRD Project 7201, running on a commercial simulator environment tool (the 3KeyMaster™ of WSC). Some figures are provided as examples of the verification of the developed neutronics model. The output of the developed neutronics model is in accord with the Preliminary Safety Analysis Report (PSAR) of the reference plant.

  17. High-resolution global climate modelling: the UPSCALE project, a large-simulation campaign

    Directory of Open Access Journals (Sweden)

    M. S. Mizielinski

    2014-08-01

    Full Text Available The UPSCALE (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) project constructed and ran an ensemble of HadGEM3 (Hadley Centre Global Environment Model 3) atmosphere-only global climate simulations over the period 1985–2011, at resolutions of N512 (25 km), N216 (60 km) and N96 (130 km), as used in current global weather forecasting, seasonal prediction and climate modelling respectively. Alongside these present-climate simulations, a parallel ensemble looking at extremes of future climate was run, using a time-slice methodology to consider conditions at the end of this century. These simulations were primarily performed using a 144 million core hour, single-year grant of computing time from PRACE (the Partnership for Advanced Computing in Europe) in 2012, with additional resources supplied by the Natural Environment Research Council (NERC) and the Met Office. Almost 400 terabytes of simulation data were generated on the HERMIT supercomputer at the High Performance Computing Center Stuttgart (HLRS), and transferred to the JASMIN super-data cluster provided by the Science and Technology Facilities Council Centre for Data Archival (STFC CEDA) for analysis and storage. In this paper we describe the implementation of the project, present the technical challenges in terms of optimisation, data output, transfer and storage that such a project involves and include details of the model configuration and the composition of the UPSCALE data set. This data set is available for scientific analysis to allow assessment of the value of model resolution in both present and potential future climate conditions.

  18. Simulation of interactions at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Duckeck, Guenter [Munich Univ. (Germany). Physics Faculty]

    2016-11-01

    LHC Run-2 is planned to continue until the end of 2018 and should increase the data volume by at least a factor of 5 compared to Run-1. A corresponding increase in the simulated data volume is required in order to analyze and interpret the recorded data. This will allow us to determine the properties of the Higgs boson with much better precision and either find new particles as predicted by 'New Physics' theories or further tighten the constraints on these models. Using SuperMUC to simulate events will be a crucial component in reaching these goals. Active development of the simulation software is ongoing in order to make the workflow more flexible and better parallelizable for smaller work units. Adapting the software for Intel/MIC architectures is an important goal, though presumably more in the long term after LHC Run-2 (Run-3 is planned to start in 2021). We would hope that "SuperMUC Next Generation" provides Intel/MIC architecture extensions.

  19. A PC-based discrete event simulation model of the Civilian Radioactive Waste Management System

    International Nuclear Information System (INIS)

    Airth, G.L.; Joy, D.S.; Nehls, J.W.

    1991-01-01

    A System Simulation Model has been developed for the Department of Energy to simulate the movement of individual waste packages (spent fuel assemblies and fuel containers) through the Civilian Radioactive Waste Management System (CRWMS). A discrete event simulation language, GPSS/PC, which runs on an IBM/PC and operates under DOS 5.0, mathematically represents the movement and processing of radioactive waste packages through the CRWMS and the interaction of these packages with the equipment in the various facilities. This model can be used to quantify the impacts of different operating schedules, operational rules, system configurations, and equipment reliability and availability considerations on the performance of processes comprising the CRWMS and how these factors combine to determine overall system performance for the purpose of making system design decisions. The major features of the System Simulation Model are: the ability to reference characteristics of the different types of radioactive waste (age, burnup, etc.) in order to make operational and/or system design decisions, the ability to place stochastic variations on operational parameters such as processing time and equipment outages, and the ability to include a rigorous simulation of the transportation system. Output from the model includes the numbers, types, and characteristics of waste packages at selected points in the CRWMS and the extent to which various resources will be utilized in order to transport, process, and emplace the waste
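
    GPSS/PC itself is a proprietary language, but the event-scheduling pattern it implements can be sketched in a few lines of Python with a priority queue; the stages and times below are invented placeholders, not parameters of the CRWMS model:

    ```python
    import heapq

    def simulate(num_packages=5, process_time=2.0, transport_time=5.0):
        """Toy discrete event simulation: packages move process -> transport -> emplace."""
        events = []  # heap of (time, seq, package, stage); seq breaks ties
        for i in range(num_packages):
            heapq.heappush(events, (i * 1.0, i, i, "process"))
        seq = num_packages
        while events:
            t, _, pkg, stage = heapq.heappop(events)
            if stage == "process":
                heapq.heappush(events, (t + process_time, seq, pkg, "transport"))
            elif stage == "transport":
                heapq.heappush(events, (t + transport_time, seq, pkg, "emplace"))
            else:
                print(f"t={t:5.1f}: package {pkg} emplaced")
            seq += 1

    simulate()
    ```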

  20. Tabulated square-shaped source model for linear accelerator electron beam simulation.

    Science.gov (United States)

    Khaledi, Navid; Aghamiri, Mahmood Reza; Aslian, Hossein; Ameri, Ahmad

    2017-01-01

    The aim of this study was to present a source model that makes linear accelerator (LINAC) electron beam geometry simulation less complex; using this source model, the Monte Carlo (MC) computation becomes much faster for electron beams. A tabulated square-shaped source with transversal and axial distribution biasing and a semi-Gaussian spectrum was investigated. A low-energy photon spectrum was added to the semi-Gaussian beam to correct for bremsstrahlung X-ray contamination. After running the MC code multiple times and optimizing all spectra for four electron energies in three different medical LINACs (Elekta, Siemens, and Varian), the characteristics of a beam passing through a 10 cm × 10 cm applicator were obtained. The percentage depth doses and dose profiles at two different depths were measured and simulated. The maximum difference between simulated and measured percentage depth doses and dose profiles was 1.8% and 4%, respectively. The low-energy electron and photon spectra, the Gaussian spectrum peak energy with its associated full width at half maximum, and the transversal distribution weightings were obtained for each electron beam. The proposed method yielded a computation time up to 702 times faster than a complete head simulation. Our study demonstrates excellent agreement between the results of our proposed model and measured data; furthermore, an optimum calculation speed was achieved because there was no need to define geometry and materials in the LINAC head.
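
    A minimal sketch of sampling from such a source model: a semi-Gaussian energy spectrum combined with a tabulated transversal bias over a square field. All numbers are illustrative, not the fitted spectra from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def sample_source(n, e_mean=6.0, e_fwhm=0.5, half_side=1.0, weights=None):
        """Sample (x, y, E) for a square source with a semi-Gaussian energy spectrum.

        e_mean/e_fwhm in MeV; half_side in cm; `weights` is an optional tabulated
        transversal bias (one weight per equal-width strip in x). Values invented."""
        sigma = e_fwhm / 2.3548                      # FWHM -> standard deviation
        energies = rng.normal(e_mean, sigma, n)
        energies = np.clip(energies, 0.1, None)      # crude low-energy truncation
        if weights is None:
            x = rng.uniform(-half_side, half_side, n)
        else:
            w = np.asarray(weights, float); w /= w.sum()
            strip = rng.choice(len(w), size=n, p=w)  # tabulated transversal biasing
            width = 2 * half_side / len(w)
            x = -half_side + (strip + rng.uniform(0, 1, n)) * width
        y = rng.uniform(-half_side, half_side, n)
        return x, y, energies

    x, y, E = sample_source(10000, weights=[1, 2, 4, 2, 1])
    ```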

  1. Design of complete software GPS signal simulator with low complexity and precise multipath channel model

    Directory of Open Access Journals (Sweden)

    G. Arul Elango

    2016-09-01

    Full Text Available The need for GPS data simulators has become important due to the tremendous growth in the design of versatile GPS receivers. Commercial hardware- and software-based GPS simulators are expensive and time consuming. In this work, a simple, low-cost GPS L1 signal simulator is designed for testing and evaluating the performance of a software GPS receiver in a laboratory environment. A typical real-time paradigm, similar to an actual satellite-derived GPS signal, is created in a computer-generated scenario. In this paper, a GPS software simulator is proposed that may offer considerable analysis and testing flexibility to researchers and developers, as it is totally software based, running primarily on a laptop/personal computer without the requirement of any hardware. The proposed GPS simulator allows for re-configurability and test repeatability and is developed on the VC++ platform to minimize the simulation time. It also incorporates a Rayleigh multipath channel fading model under non-line-of-sight (NLOS) conditions. In this work, to efficiently design the simulator, several Rayleigh fading models, viz. Inverse Discrete Fourier Transform (IDFT), Filtered White Gaussian Noise (FWGN) and modified Sum of Sinusoids (SOS) simulators, are tested and compared in terms of the accuracy of their first- and second-order statistical metrics and execution time, and the latter is found to be the most appropriate Rayleigh multipath model for incorporation into the GPS simulator. The fading model, written in the 'MATLAB' engine, has been linked with the software GPS simulator module, enabling testing of the GPS receiver's functionality in different fading environments.
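
    Of the three Rayleigh fading generators the record compares, the sum-of-sinusoids family is the simplest to sketch. The Python fragment below generates a complex Rayleigh fading gain as a Jakes-style sum of Doppler-shifted tones with random phases; the Doppler frequency, number of sinusoids, and sample grid are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Jakes-style sum-of-sinusoids Rayleigh fading generator. The Doppler
# frequency, number of sinusoids, and sampling grid are illustrative.
rng = np.random.default_rng(0)
fd, N = 100.0, 32                         # max Doppler shift (Hz), tone count
t = np.arange(0.0, 0.1, 1e-4)             # 0.1 s sampled at 10 kHz

alpha = 2 * np.pi * rng.random(N)         # angles of arrival
phi_i = 2 * np.pi * rng.random(N)         # random phases, in-phase branch
phi_q = 2 * np.pi * rng.random(N)         # random phases, quadrature branch

# Each branch is a sum of Doppler-shifted cosines
doppler = 2 * np.pi * fd * np.cos(alpha)[:, None] * t
xi = np.cos(doppler + phi_i[:, None]).sum(axis=0)
xq = np.cos(doppler + phi_q[:, None]).sum(axis=0)
h = (xi + 1j * xq) / np.sqrt(N)           # complex gain, ~unit mean power

print(f"mean envelope power: {np.mean(np.abs(h) ** 2):.3f}")   # close to 1
```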

  2. Convective aggregation in realistic convective-scale simulations

    Science.gov (United States)

    Holloway, Christopher E.

    2017-06-01

    To investigate the real-world relevance of idealized-model convective self-aggregation, five 15 day cases of real organized convection in the tropics are simulated. These include multiple simulations of each case to test sensitivities of the convective organization and mean states to interactive radiation, interactive surface fluxes, and evaporation of rain. These simulations are compared to self-aggregation seen in the same model configured to run in idealized radiative-convective equilibrium. Analysis of the budget of the spatial variance of column-integrated frozen moist static energy shows that control runs have significant positive contributions to organization from radiation and negative contributions from surface fluxes and transport, similar to idealized runs once they become aggregated. Despite identical lateral boundary conditions for all experiments in each case, systematic differences in mean column water vapor (CWV), CWV distribution shape, and CWV autocorrelation length scale are found between the different sensitivity runs, particularly for those without interactive radiation, showing that there are at least some similarities in sensitivities to these feedbacks in both idealized and realistic simulations (although the organization of precipitation shows less sensitivity to interactive radiation). The magnitudes and signs of these systematic differences are consistent with a rough equilibrium between (1) equalization due to advection from the lateral boundaries and (2) disaggregation due to the absence of interactive radiation, implying disaggregation rates comparable to those in idealized runs with aggregated initial conditions and noninteractive radiation. This points to a plausible similarity in the way that radiation feedbacks maintain aggregated convection in both idealized simulations and the real world. Plain Language Summary: Understanding the processes that lead to the organization of tropical rainstorms is an important challenge for weather

  3. Modeling and simulation of the power demand and supply of a hydrothermal power generating system

    International Nuclear Information System (INIS)

    Pronini, R.A.

    1996-01-01

    Security of supply of electric energy is measured by the capacity to cover the energy and power demand of a supply grid. This coverage is important because the winter peak load period in Switzerland will become problematic in the near future. The objective of this research project is to analyze the ability of a power generating system to satisfy the power requirements of the corresponding supply network. The behaviour of the energy system in critical cases (loss of the largest generator, lack of available power from an external supplier, or reduced capacity for energy storage) is tested for the present situation and for a rise in the annual load. The load of the supply network is simulated using a model developed for this project, based on the analysis of half-hourly load changes and on statistical maximum values. The power generating system consists of nuclear generating units, hydro units with large reservoirs, run-of-river installations and imported energy. Standby resources such as gas turbines, the spot market and coal-fired power stations are also available. Stochastic and deterministic energy and power models have been developed for the various power stations of the hydrothermal power system. For the nuclear power stations, a model has been developed on the basis of the output level, production losses, and the timing and length of outages. The possible inflows to the run-of-river installations and to the hydro units with large reservoirs are simulated using stochastic methods based on the historical values of the last 35 years. The commitment of the hydro units depends on the peak load requirements. The load and capacity over periods of several days and weeks have been simulated with stochastic models based on the Monte Carlo method and compared continuously at half-hour intervals. In this manner each month can be simulated. (author) figs., tabs., 46 refs
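
    The core of such a study is the repeated half-hourly comparison of stochastic load against stochastically available capacity. The Python sketch below shows that Monte Carlo loop in miniature; the unit capacities, outage probabilities, and load model are invented and far simpler than the hydrothermal system modelled in the report.

```python
import random

# Monte Carlo sketch of half-hourly supply/demand comparison for a toy
# generating system; capacities, outage probabilities, and the load
# model are invented.
random.seed(42)

UNITS = [(1000, 0.05), (600, 0.10), (350, 0.08), (350, 0.08)]  # (MW, outage prob)

def available_capacity():
    # each unit is independently unavailable with its outage probability
    return sum(cap for cap, q in UNITS if random.random() > q)

def loss_of_load_fraction(halfhours=48 * 30):        # roughly one month
    shortfalls = 0
    for _ in range(halfhours):
        demand = 1400.0 + 500.0 * random.random()    # stochastic load (MW)
        if available_capacity() < demand:
            shortfalls += 1
    return shortfalls / halfhours

print(f"estimated loss-of-load fraction: {loss_of_load_fraction():.4f}")
```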

  4. The effect of footwear on running performance and running economy in distance runners.

    Science.gov (United States)

    Fuller, Joel T; Bellenger, Clint R; Thewlis, Dominic; Tsiros, Margarita D; Buckley, Jonathan D

    2015-03-01

    The effect of footwear on running economy has been investigated in numerous studies. However, no systematic review and meta-analysis has synthesised the available literature, and the effect of footwear on running performance is not known. The aim of this systematic review and meta-analysis was to investigate the effect of footwear on running performance and running economy in distance runners, by reviewing controlled trials that compare different footwear conditions or compare footwear with barefoot running. The Web of Science, Scopus, MEDLINE, CENTRAL (Cochrane Central Register of Controlled Trials), EMBASE, AMED (Allied and Complementary Medicine), CINAHL and SPORTDiscus databases were searched from inception up until April 2014. Included articles reported on controlled trials that examined the effects of footwear or footwear characteristics (including shoe mass, cushioning, motion control, longitudinal bending stiffness, midsole viscoelasticity, drop height and comfort) on running performance or running economy and were published in a peer-reviewed journal. Of the 1,044 records retrieved, 19 studies were included in the systematic review and 14 studies were included in the meta-analysis. No studies were identified that reported effects on running performance. Individual studies reported significant, but trivial, beneficial effects on running economy for comfortable and stiff-soled shoes, a beneficial effect on running economy for cushioned shoes (SMD = 0.37), a beneficial effect on running economy for training in minimalist shoes (SMD = 0.79), and beneficial effects on running economy for light shoes and barefoot running compared with heavy shoes. Certain models of footwear and footwear characteristics can improve running economy. Future research in footwear performance should include measures of running performance.

  5. Running exercise protects the capillaries in white matter in a rat model of depression.

    Science.gov (United States)

    Chen, Lin-Mu; Zhang, Ai-Pin; Wang, Fei-Fei; Tan, Chuan-Xue; Gao, Yuan; Huang, Chun-Xia; Zhang, Yi; Jiang, Lin; Zhou, Chun-Ni; Chao, Feng-Lei; Zhang, Lei; Tang, Yong

    2016-12-01

    Running has been shown to improve depressive symptoms when used as an adjunct to medication. However, the mechanisms underlying the antidepressant effects of running are not fully understood. Capillary changes in white matter have been observed both in clinical patients and in rat models of depression. Considering the important role of white matter in depression, running may induce capillary structural changes in white matter. Chronic unpredictable stress (CUS) rats were provided with a 4-week running exercise (from the fifth week to the eighth week) for 20 minutes each day, 5 consecutive days each week. Anhedonia was measured by a behavior test. Furthermore, capillary changes were investigated in the control group, the CUS/Standard group, and the CUS/Running group using stereological methods. The 4-week running significantly increased sucrose consumption in the CUS/Running group and had significant effects on the total volume, total length, and total surface area of the capillaries in the white matter of depressed rats. These results demonstrate that exercise-induced protection of the capillaries in white matter might be one of the structural bases for the exercise-induced treatment of depression. It might provide important parameters for further study of the vascular mechanisms of depression and a new research direction for the development of clinical antidepressant means. J. Comp. Neurol. 524:3577-3586, 2016. © 2016 Wiley Periodicals, Inc.

  6. Assessing the relationship between computational speed and precision: a case study comparing an interpreted versus compiled programming language using a stochastic simulation model in diabetes care.

    Science.gov (United States)

    McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P

    2010-01-01

    Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and a compiled programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving the precision of model output, can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency when using compiled languages are best addressed via thorough documentation and model validation.
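
    The variance-reduction technique named in the record, antithetic variates, pairs each random draw z with its mirror -z so that the pair average has lower variance for a monotone response. The Python sketch below demonstrates the idea on a placeholder response function, not the UKPDS 68 outcome equations; the reduction in standard error is visible by comparing the two estimators at equal numbers of function evaluations.

```python
import numpy as np

# Antithetic variates: estimate E[f(Z)] for standard normal Z by pairing
# each draw z with -z. f is a placeholder smooth response, not the
# UKPDS 68 outcome equations.
rng = np.random.default_rng(0)
f = lambda z: np.exp(0.2 * z)

n = 10_000
z = rng.standard_normal(n)

plain = f(rng.standard_normal(2 * n))   # 2n independent evaluations
anti = 0.5 * (f(z) + f(-z))             # n antithetic pairs, also 2n evaluations

print(f"plain:      mean {plain.mean():.5f}, std err {plain.std(ddof=1) / np.sqrt(2 * n):.5f}")
print(f"antithetic: mean {anti.mean():.5f}, std err {anti.std(ddof=1) / np.sqrt(n):.5f}")
```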

  7. Convective Systems Over the Japan Sea: Cloud-Resolving Model Simulations

    Science.gov (United States)

    Tao, Wei-Kuo; Yoshizaki, Masanori; Shie, Chung-Lin; Kato, Teryuki

    2002-01-01

    Wintertime observations of MCSs (Mesoscale Convective Systems) over the Sea of Japan - 2001 (WMO-01) were collected from January 12 to February 1, 2001. One of the major objectives is to better understand and forecast snow systems and accompanying disturbances, and the associated key physical processes involved in the formation and development of these disturbances. Multiple observation platforms (e.g., upper-air soundings, Doppler radar, wind profilers, radiometers, etc.) during WMO-01 provided a first attempt at investigating the detailed characteristics of convective storms and air pattern changes associated with winter storms over the Sea of Japan region. WMO-01 also provided estimates of the apparent heat source (Q1) and apparent moisture sink (Q2). The vertical integrals of Q1 and Q2 are equal to the surface precipitation rates. The horizontal and vertical advective components of Q1 and Q2 can be used as large-scale forcing for Cloud Resolving Models (CRMs). The Goddard Cumulus Ensemble (GCE) model is a CRM (typically run with a 1-km grid size). The GCE model has sophisticated microphysics and allows explicit interactions between clouds, radiation, and surface processes. It will be used to understand and quantify precipitation processes associated with wintertime convective systems over the Sea of Japan (using data collected during WMO-01). This is the first cloud-resolving model used to simulate precipitation processes in this particular region. The GCE model-simulated WMO-01 results will also be compared to other GCE model-simulated weather systems that developed during other field campaigns (i.e., South China Sea, west Pacific warm pool region, eastern Atlantic region and central USA).

  8. Systems-level computational modeling demonstrates fuel selection switching in high capacity running and low capacity running rats

    Science.gov (United States)

    Qi, Nathan R.

    2018-01-01

    High capacity and low capacity running rats (HCR and LCR, respectively) have been bred to represent two extremes of running endurance and have recently demonstrated disparities in fuel usage during transient aerobic exercise. HCR rats can maintain fatty acid (FA) utilization throughout the course of transient aerobic exercise whereas LCR rats rely predominantly on glucose utilization. We hypothesized that the difference between HCR and LCR fuel utilization could be explained by a difference in mitochondrial density. To test this hypothesis and to investigate mechanisms of fuel selection, we used a constraint-based kinetic analysis of whole-body metabolism to analyze transient exercise data from these rats. Our model analysis used a thermodynamically constrained kinetic framework that accounts for glycolysis, the TCA cycle, and mitochondrial FA transport and oxidation. The model can effectively match the observed relative rates of oxidation of glucose versus FA, as a function of ATP demand. In searching for the minimal differences required to explain metabolic function in HCR versus LCR rats, it was determined that the whole-body metabolic phenotype of LCR, compared to HCR, could be explained by a ~50% reduction in total mitochondrial activity with an additional 5-fold reduction in mitochondrial FA transport activity. Finally, we postulate that, over sustained periods of exercise, LCR rats can partly overcome the initial deficit in FA catabolic activity by upregulating FA transport and/or oxidation processes. PMID:29474500

  9. A PICKSC Science Gateway for enabling the common plasma physicist to run kinetic software

    Science.gov (United States)

    Hu, Q.; Winjum, B. J.; Zonca, A.; Youn, C.; Tsung, F. S.; Mori, W. B.

    2017-10-01

    Computer simulations offer tremendous opportunities for studying plasmas, ranging from simulations for students that illuminate fundamental educational concepts to research-level simulations that advance scientific knowledge. Nevertheless, there is a significant hurdle to using simulation tools. Users must navigate codes and software libraries, determine how to wrangle output into meaningful plots, and oftentimes confront a significant cyberinfrastructure with powerful computational resources. Science gateways offer a Web-based environment to run simulations without needing to learn or manage the underlying software and computing cyberinfrastructure. We discuss our progress on creating a Science Gateway for the Particle-in-Cell and Kinetic Simulation Software Center that enables users to easily run and analyze kinetic simulations with our software. We envision that this technology could benefit a wide range of plasma physicists, both in the use of our simulation tools as well as in its adaptation for running other plasma simulation software. Supported by NSF under Grant ACI-1339893 and by the UCLA Institute for Digital Research and Education.

  10. Development of a simulation platform for dynamic simulation and control studies of AP1000 nuclear steam supply system

    International Nuclear Information System (INIS)

    Wan, Jiashuang; Song, Hongbing; Yan, Shoujun; Sun, Jian; Zhao, Fuyu

    2015-01-01

    Highlights: • A fast-running simulation platform named NCAP was developed on a personal computer using MATLAB/Simulink. • Three types of typical operations, namely 10% step load change, 5%/min ramp load change and load follow, were simulated. • NCAP predictions were compared with those obtained by CENTS for the load regulation transients. - Abstract: This paper presents the development, application and performance assessment of the fast-running NCAP (NSSS Control & Analysis Platform) in the MATLAB/Simulink environment. First, a nodal core model, a lumped-parameter dynamic steam generator model with moving boundary, a non-equilibrium two-region, three-volume pressurizer model, and the relevant pipe and plenum models were formulated based on the fundamental conservation of mass, energy and momentum. Then, these first-order nonlinear models and the NSSS control systems were implemented in Simulink using predefined library blocks. Based on the developed NCAP, three types of typical operational transients, namely the 10% step load change, the 5%/min ramp load change and the daily load follow, were simulated to study the dynamic behavior and control characteristics of the AP1000 NSSS. It has been demonstrated that the dynamic responses of the selected key parameters agree well with the general physical rules. In addition, the comparison of load regulation simulation results obtained by NCAP and CENTS shows good agreement in terms of the changing trends. With the adoption of modular programming techniques, NCAP facilitates easy modification and runs quickly, which allows the control system designer to test and compare various ideas efficiently.

  11. How to run 100 meters?

    OpenAIRE

    Aftalion, Amandine

    2016-01-01

    To appear in SIAP. The aim of this paper is to bring a mathematical justification to the optimal way of organizing one's effort when running. It is well known from physiologists that all running exercises of duration less than 3 min are run with a strong initial acceleration and a decelerating end; on the contrary, long races are run with a final sprint. This can be explained using a mathematical model describing the evolution of the velocity, the anaerobic energy, and the propulsive force: ...
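
    Models of this type descend from Keller's optimal-running formulation, in which velocity and anaerobic energy evolve under a bounded propulsive force. The Python sketch below integrates such a system for a 100 m effort with invented constants; it is a caricature of the paper's optimal control problem, intended only to show the state variables involved.

```python
# Keller-type running sketch: dv/dt = f - v/tau, de/dt = sigma - f*v,
# with propulsive force f <= F and anaerobic reserve e >= 0. All
# constants are invented, not fitted physiological values.
F, tau, sigma, e0 = 12.0, 0.9, 40.0, 2400.0
dt, v, e, x, t = 0.01, 0.0, e0, 0.0, 0.0

while x < 100.0:                          # integrate until 100 m covered
    f = F if e > 0.0 else min(F, sigma / max(v, 1e-6))  # throttle when depleted
    v += dt * (f - v / tau)
    e = max(0.0, e + dt * (sigma - f * v))
    x += dt * v
    t += dt

print(f"100 m covered in {t:.2f} s, final speed {v:.2f} m/s")
```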

  12. Medium-term erosion simulation of an abandoned mine site using the SIBERIA landscape evolution model

    International Nuclear Information System (INIS)

    Hancock, G.R.; Willgoose, G.R.

    2000-01-01

    This study forms part of a collaborative project designed to validate the long-term erosion predictions of the SIBERIA landform evolution model on rehabilitated mine sites. The SIBERIA catchment evolution model can simulate the evolution of landforms resulting from runoff and erosion over many years. SIBERIA needs to be calibrated before evaluating whether it correctly models the observed evolution of rehabilitated mine landforms. A field study to collect data to calibrate SIBERIA was conducted at the abandoned Scinto 6 uranium mine located in the Kakadu Region, Northern Territory, Australia. The data were used to fit parameter values to a sediment loss model and a rainfall-runoff model. The derived runoff and erosion model parameter values were used in SIBERIA to simulate 50 years of erosion by concentrated flow on the batters of the abandoned site. The SIBERIA runs correctly simulated the geomorphic development of the gullies on the man-made batters of the waste rock dump. The observed gully position, depth, volume, and morphology on the waste rock dump were quantitatively compared with the SIBERIA simulations. The close similarities between the observed and simulated gully features indicate that SIBERIA can accurately predict the rate of gully development on a man-made post-mining landscape over periods of up to 50 years. SIBERIA is an appropriate model for assessment of erosional stability of rehabilitated mine sites over time spans of around 50 years. Copyright (2000) CSIRO Australia

  13. A Generic Friction Model for Radial Slider Bearing Simulation Considering Elastic and Plastic Deformation

    Directory of Open Access Journals (Sweden)

    Günter Offner

    2015-06-01

    Full Text Available The investigation of component dynamics is one of the main tasks of internal combustion engine (ICE) simulation. This prediction is important for understanding the complex loading conditions that occur in a running ICE. Because of the need for fuel saving, mechanical friction, in particular in radial slider bearings, is an important investigation target. A generic friction modeling approach for radial slider bearings, applicable to lubricated contact regimes, is presented in this paper. Besides viscous friction, the approach considers in particular boundary friction. The friction model is parameterized using surface material and surface roughness measurement data. Furthermore, fluid properties depending on the applied oil additives are considered. The application of the model is demonstrated for a typical engineering task, a connecting-rod big-end study, to outline the effects of contact surface texture. AlSn-based and polymer-coated bearing shells are analyzed and compared with respect to friction reduction effects, running-in behavior and thermal load capabilities.

  14. Conditional Stochastic Models in Reduced Space: Towards Efficient Simulation of Tropical Cyclone Precipitation Patterns

    Science.gov (United States)

    Dodov, B.

    2017-12-01

    Stochastic simulation of realistic and statistically robust patterns of Tropical Cyclone (TC) induced precipitation is a challenging task. It is even more challenging in a catastrophe modeling context, where tens of thousands of typhoon seasons need to be simulated in order to provide a complete view of flood risk. Ultimately, one could run a coupled global climate model and a regional Numerical Weather Prediction (NWP) model, but this approach is not feasible in the catastrophe modeling context and, most importantly, may not provide TC track patterns consistent with observations. Rather, we propose to leverage NWP output for the observed TC precipitation patterns (in terms of downscaled reanalysis, 1979-2015) collected in a Lagrangian frame along the historical TC tracks and reduced to the leading spatial principal components of the data. The reduced data from all TCs are then grouped according to timing, storm evolution stage (developing, mature, dissipating, ETC transitioning) and central pressure, and used to build a dictionary of stationary (within a group) and non-stationary (for transitions between groups) covariance models. Provided that the stochastic storm tracks with all the parameters describing the TC evolution are already simulated, a sequence of conditional samples from the covariance models, chosen according to the TC characteristics at a given moment in time, is concatenated, producing a continuous non-stationary precipitation pattern in a Lagrangian framework. The simulated precipitation for each event is finally distributed along the stochastic TC track and blended with non-TC background precipitation using a data assimilation technique. The proposed framework provides means of efficient simulation (10000 seasons simulated in a couple of days) and robust typhoon precipitation patterns consistent with the observed regional climate and visually indistinguishable from high-resolution NWP output. The framework is used to simulate a catalog of 10000 typhoon
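
    The reduced-space idea in the record - represent each precipitation field by its leading principal component scores and then sample new score combinations - can be sketched briefly. The Python fragment below fits PCs to synthetic stand-in 'fields' and draws new patterns from Gaussian scores; the real framework adds grouping by storm stage and non-stationary covariance models between groups, which are not shown.

```python
import numpy as np

# Reduced-space simulation sketch: fit principal components to a set of
# "observed" fields, then synthesize new fields from Gaussian PC scores.
# The 200 x 50 synthetic data set stands in for Lagrangian-frame rain fields.
rng = np.random.default_rng(3)
obs = rng.standard_normal((200, 50)) @ rng.standard_normal((50, 50))

mean = obs.mean(axis=0)
U, s, Vt = np.linalg.svd(obs - mean, full_matrices=False)
k = 10                                    # keep the leading 10 PCs
scale = s[:k] / np.sqrt(len(obs) - 1)     # std deviation of each PC score

# Draw 5 new patterns as mean + random score combinations of the PCs
sims = mean + (rng.standard_normal((5, k)) * scale) @ Vt[:k]
print("simulated patterns:", sims.shape)
```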

  15. A Software Toolkit to Study Systematic Uncertainties of the Physics Models of the Geant4 Simulation Package

    Science.gov (United States)

    Genser, Krzysztof; Hatcher, Robert; Kelsey, Michael; Perdue, Gabriel; Wenzel, Hans; Wright, Dennis H.; Yarba, Julia

    2017-10-01

    The Geant4 simulation toolkit is used to model interactions between particles and matter. Geant4 employs a set of validated physics models that span a wide range of interaction energies. These models rely on measured cross-sections and phenomenological models with physically motivated parameters that are tuned to cover many application domains. To study the uncertainties associated with the Geant4 physics models, we have designed and implemented a comprehensive, modular, user-friendly software toolkit that allows the variation of one or more parameters of one or more Geant4 physics models involved in simulation studies. It also enables analysis of multiple variants of the resulting physics observables of interest in order to estimate the uncertainties associated with the simulation model choices. Based on modern event-processing infrastructure software, the toolkit offers a variety of attractive features, e.g. a flexible run-time configurable workflow, comprehensive bookkeeping, and an easy-to-expand collection of analytical components. The design, implementation technology, and key functionalities of the toolkit are presented in this paper and illustrated with selected results.

  16. A Regional Model for Malaria Vector Developmental Habitats Evaluated Using Explicit, Pond-Resolving Surface Hydrology Simulations.

    Directory of Open Access Journals (Sweden)

    Ernest Ohene Asare

    Full Text Available Dynamical malaria models can relate precipitation to the availability of vector breeding sites using simple models of surface hydrology. Here, a revised scheme is developed for the VECTRI malaria model, which is evaluated alongside the default scheme using a two year simulation by HYDREMATS, a 10 metre resolution, village-scale model that explicitly simulates individual ponds. Despite the simplicity of the two VECTRI surface hydrology parametrization schemes, they can reproduce the sub-seasonal evolution of fractional water coverage. Calibration of the model parameters is required to simulate the mean pond fraction correctly. The default VECTRI model tended to overestimate water fraction in periods subject to light rainfall events and underestimate it during periods of intense rainfall. This systematic error was improved in the revised scheme by including the a parametrization for surface run-off, such that light rainfall below the initial abstraction threshold does not contribute to ponds. After calibration of the pond model, the VECTRI model was able to simulate vector densities that compared well to the detailed agent based model contained in HYDREMATS without further parameter adjustment. Substituting local rain-gauge data with satellite-retrieved precipitation gave a reasonable approximation, raising the prospects for regional malaria simulations even in data sparse regions. However, further improvements could be made if a method can be derived to calibrate the key hydrology parameters of the pond model in each grid cell location, possibly also incorporating slope and soil texture.

  17. A Regional Model for Malaria Vector Developmental Habitats Evaluated Using Explicit, Pond-Resolving Surface Hydrology Simulations.

    Science.gov (United States)

    Asare, Ernest Ohene; Tompkins, Adrian Mark; Bomblies, Arne

    2016-01-01

    Dynamical malaria models can relate precipitation to the availability of vector breeding sites using simple models of surface hydrology. Here, a revised scheme is developed for the VECTRI malaria model, which is evaluated alongside the default scheme using a two-year simulation by HYDREMATS, a 10 metre resolution, village-scale model that explicitly simulates individual ponds. Despite the simplicity of the two VECTRI surface hydrology parametrization schemes, they can reproduce the sub-seasonal evolution of fractional water coverage. Calibration of the model parameters is required to simulate the mean pond fraction correctly. The default VECTRI model tended to overestimate water fraction in periods subject to light rainfall events and underestimate it during periods of intense rainfall. This systematic error was improved in the revised scheme by including a parametrization for surface run-off, such that light rainfall below the initial abstraction threshold does not contribute to ponds. After calibration of the pond model, the VECTRI model was able to simulate vector densities that compared well to the detailed agent-based model contained in HYDREMATS without further parameter adjustment. Substituting local rain-gauge data with satellite-retrieved precipitation gave a reasonable approximation, raising the prospects for regional malaria simulations even in data-sparse regions. However, further improvements could be made if a method can be derived to calibrate the key hydrology parameters of the pond model in each grid cell location, possibly also incorporating slope and soil texture.

  18. Convective aggregation in realistic convective-scale simulations

    OpenAIRE

    Holloway, Christopher E.

    2017-01-01

    To investigate the real-world relevance of idealized-model convective self-aggregation, five 15-day cases of real organized convection in the tropics are simulated. These include multiple simulations of each case to test sensitivities of the convective organization and mean states to interactive radiation, interactive surface fluxes, and evaporation of rain. These simulations are compared to self-aggregation seen in the same model configured to run in idealized radiative-convective equilibrium...

  19. A Simulation Study of the Radiation-Induced Bystander Effect: Modeling with Stochastically Defined Signal Reemission

    Directory of Open Access Journals (Sweden)

    Kohei Sasaki

    2012-01-01

    Full Text Available The radiation-induced bystander effect (RIBE) has been experimentally observed for different types of radiation, cell types, and cell culture conditions. However, the behavior of signal transmission between unirradiated and irradiated cells is not well known. In this study, we have developed a new model for RIBE based on the diffusion of soluble factors in cell cultures using a Monte Carlo technique. The model involves the signal emission probability from bystander cells following Poisson statistics. Simulations with this model show that the spatial configuration of the bystander cells agrees well with that of corresponding experiments, where the optimal emission probability is estimated through a large number of simulation runs. It was suggested that the most likely probability falls within 0.63–0.92 for mean numbers of emission signals ranging from 1.0 to 2.5.
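
    A bare-bones version of the stochastic signal-emission step can be written in a few lines. In the Python sketch below, each irradiated cell emits a Poisson-distributed number of signals, each of which fires with a fixed probability and marks a randomly chosen cell as a bystander; the cell count, mean signal number, and emission probability are invented, and the diffusion geometry of the actual model is not represented.

```python
import math
import random

# Poisson signal-emission sketch: irradiated cells emit Poisson-distributed
# numbers of signals; each signal fires with a fixed probability and marks
# a random cell as a bystander. All counts and rates are invented.
random.seed(7)

def poisson(lam):
    # Knuth's multiplication method; adequate for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

n_cells = 1000
irradiated = random.sample(range(n_cells), 50)
mean_signals = 1.8          # mean emitted signals per irradiated cell
emission_prob = 0.8         # probability that a given signal fires

bystanders = set()
for cell in irradiated:
    for _ in range(poisson(mean_signals)):
        if random.random() < emission_prob:
            bystanders.add(random.randrange(n_cells))

print(f"{len(bystanders)} bystander-affected cells out of {n_cells}")
```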

  20. Water desalination price from recent performances: Modelling, simulation and analysis

    International Nuclear Information System (INIS)

    Metaiche, M.; Kettab, A.

    2005-01-01

    The subject of the present article is the technical simulation of seawater desalination by a one-stage reverse osmosis system. The objectives are an up-to-date estimate of the cost price based on recent membrane and permeator performances, the use of new means of simulation and modelling of desalination parameters, and identification of the main parameters influencing the cost price. The Seawater Desalting centre of Djannet (Boumerdes, Algeria) is taken as the simulation example. Present performances allow water desalting at a price of 0.5 $/m³, an attractive and promising price, with very acceptable product water quality, in the order of 269 ppm. It is important to run reverse-osmosis desalting systems under high pressure, which further decreases the desalting cost and produces good-quality water. A poor choice of operating conditions produces high prices and unacceptable quality; however, the price can be decreased by relaxing the requirements on product quality. The seawater temperature affects both the cost price and the quality. The installation of large desalting centres contributes to the decrease in prices. A very long and tedious calculation is required, which is impossible to conduct without programming and informatics tools. The use of the simulation model has been very effective in the design of desalination centres that can operate at much improved prices. (author)

  1. Just-in-time Data Analytics and Visualization of Climate Simulations using the Bellerophon Framework

    Science.gov (United States)

    Anantharaj, V. G.; Venzke, J.; Lingerfelt, E.; Messer, B.

    2015-12-01

    Climate model simulations are used to understand the evolution and variability of Earth's climate. Unfortunately, high-resolution multi-decadal climate simulations can take days to weeks to complete. Typically, the simulation results are not analyzed until the model runs have ended. During the course of the simulation, the output may be processed periodically to ensure that the model is performing as expected. However, most of the data analytics and visualization are not performed until the simulation is finished. The lengthy time period needed for the completion of the simulation constrains the productivity of climate scientists. Our implementation of near real-time data visualization analytics capabilities allows scientists to monitor the progress of their simulations while the model is running. Our analytics software executes concurrently in a co-scheduling mode, monitoring data production. When new data are generated by the simulation, a co-scheduled data analytics job is submitted to render visualization artifacts of the latest results. These visualization outputs are automatically transferred to Bellerophon's data server located at ORNL's Compute and Data Environment for Science (CADES), where they are processed and archived into Bellerophon's database. During the course of the experiment, climate scientists can then use Bellerophon's graphical user interface to view animated plots and their associated metadata. The quick turnaround from the start of the simulation until the data are analyzed permits research decisions and projections to be made days or sometimes even weeks sooner than otherwise possible! The supercomputer resources used to run the simulation are unaffected by co-scheduling the data visualization jobs, so the model runs continuously while the data are visualized. Our just-in-time data visualization software looks to increase climate scientists' productivity as climate modeling moves into the exascale era of computing.

  2. Durango: Scalable Synthetic Workload Generation for Extreme-Scale Application Performance Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Carothers, Christopher D. [Rensselaer Polytechnic Institute (RPI); Meredith, Jeremy S. [ORNL; Blanco, Marc [Rensselaer Polytechnic Institute (RPI); Vetter, Jeffrey S. [ORNL; Mubarak, Misbah [Argonne National Laboratory; LaPre, Justin [Rensselaer Polytechnic Institute (RPI); Moore, Shirley V. [ORNL

    2017-05-01

    Performance modeling of extreme-scale applications on accurate representations of potential architectures is critical for designing next generation supercomputing systems because it is impractical to construct prototype systems at scale with new network hardware in order to explore designs and policies. However, these simulations often rely on static application traces that can be difficult to work with because of their size and lack of flexibility to extend or scale up without rerunning the original application. To address this problem, we have created a new technique for generating scalable, flexible workloads from real applications, and we have implemented a prototype, called Durango, that combines a proven analytical performance modeling language, Aspen, with the massively parallel HPC network modeling capabilities of the CODES framework. Our models are compact, parameterized and representative of real applications with computation events. They are not resource intensive to create and are portable across simulator environments. We demonstrate the utility of Durango by simulating the LULESH application in the CODES simulation environment on several topologies and show that Durango is practical to use for simulation without loss of fidelity, as quantified by simulation metrics. During our validation of Durango's generated communication model of LULESH, we found that the original LULESH miniapp code had a latent bug where the MPI_Waitall operation was used incorrectly. This finding underscores the potential need for a tool such as Durango, beyond its benefits for flexible workload generation and modeling. Additionally, we demonstrate the efficacy of Durango's direct integration approach, which links Aspen into CODES as part of the running network simulation model. Here, Aspen generates the application-level computation timing events, which in turn drive the start of a network communication phase. Results show that Durango's performance scales well when

  3. Design base transient analysis using the real-time nuclear reactor simulator model

    International Nuclear Information System (INIS)

    Tien, K.K.; Yakura, S.J.; Morin, J.P.; Gregory, M.V.

    1987-01-01

    A real-time simulation model has been developed to describe the dynamic response of all major systems in a nuclear process reactor. The model consists of a detailed representation of all hydraulic components in the external coolant circulating loops consisting of piping, valves, pumps and heat exchangers. The reactor core is described by a three-dimensional neutron kinetics model with detailed representation of assembly coolant and moderator thermal hydraulics. The models have been developed to support a real-time training simulator, therefore, they reproduce system parameters characteristic of steady state normal operation with high precision. The system responses for postulated severe transients such as large pipe breaks, loss of pumping power, piping leaks, malfunctions in control rod insertion, and emergency injection of neutron absorber are calculated to be in good agreement with reference safety analyses. Restrictions were imposed by the requirement that the resulting code be able to run in real-time with sufficient spare time to allow interfacing with secondary systems and simulator hardware. Due to hardware set-up and real plant instrumentation, simplifications due to symmetry were not allowed. The resulting code represents a coarse-node engineering model in which the level of detail has been tailored to the available computing power of a present generation super-minicomputer. Results for several significant transients, as calculated by the real-time model, are compared both to actual plant data and to results generated by fine-mesh analysis codes

  4. Design base transient analysis using the real-time nuclear reactor simulator model

    International Nuclear Information System (INIS)

    Tien, K.K.; Yakura, S.J.; Morin, J.P.; Gregory, M.V.

    1987-01-01

    A real-time simulation model has been developed to describe the dynamic response of all major systems in a nuclear process reactor. The model consists of a detailed representation of all hydraulic components in the external coolant circulating loops consisting of piping, valves, pumps and heat exchangers. The reactor core is described by a three-dimensional neutron kinetics model with detailed representation of assembly coolant and moderator thermal hydraulics. The models have been developed to support a real-time training simulator, therefore, they reproduce system parameters characteristic of steady state normal operation with high precision. The system responses for postulated severe transients such as large pipe breaks, loss of pumping power, piping leaks, malfunctions in control rod insertion, and emergency injection of neutron absorber are calculated to be in good agreement with reference safety analyses. Restrictions were imposed by the requirement that the resulting code be able to run in real-time with sufficient spare time to allow interfacing with secondary systems and simulator hardware. Due to hardware set-up and real plant instrumentation, simplifications due to symmetry were not allowed. The resulting code represents a coarse-node engineering model in which the level of detail has been tailored to the available computing power of a present generation super-minicomputer. Results for several significant transients, as calculated by the real-time model, are compared both to actual plant data and to results generated by fine-mesh analysis codes

  5. Numerical simulation of a sour gas flare

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, A. [Alberta Research Council, Devon, AB (Canada)

    2008-07-01

    Due to the limited amount of information in the literature on sour gas flares and the cost of conducting wind tunnel and field experiments on sour flares, this presentation described a modelling project that predicted the effect of operating conditions on flare performance and emissions. The objectives of the project were to adapt an existing numerical model suitable for flare simulation, incorporate sulfur chemistry, and run simulations for a range of conditions typical of sour flares in Alberta. The study drew on modelling expertise at the University of Utah and employed large eddy simulation (LES) methods to model open flames. The existing model included prediction of the turbulent flow field; hydrocarbon reaction chemistry; soot formation; and radiation heat transfer. The presentation addressed the unique features of the model and discussed whether LES could predict the flow field. Other topics included the results from a University of Utah comparison; challenges of the LES model; an example of a run-time issue; prediction of the impact of operating conditions; and the results of simulations. Finally, several next steps were identified and preliminary results were provided. Future work will focus on reducing computation time and increasing information reporting. figs.

  6. State-and-transition simulation models: a framework for forecasting landscape change

    Science.gov (United States)

    Daniel, Colin; Frid, Leonardo; Sleeter, Benjamin M.; Fortin, Marie-Josée

    2016-01-01

    Summary: A wide range of spatially explicit simulation models have been developed to forecast landscape dynamics, including models for projecting changes in both vegetation and land use. While these models have generally been developed as separate applications, each with a separate purpose and audience, they share many common features. We present a general framework, called a state-and-transition simulation model (STSM), which captures a number of these common features, accompanied by a software product, called ST-Sim, to build and run such models. The STSM method divides a landscape into a set of discrete spatial units and simulates the discrete state of each cell forward as a discrete-time, inhomogeneous stochastic process. The method differs from a spatially interacting Markov chain in several important ways, including the ability to add discrete counters such as age and time-since-transition as state variables, to specify one-step transition rates as either probabilities or target areas, and to represent multiple types of transitions between pairs of states. We demonstrate the STSM method using a model of land-use/land-cover (LULC) change for the state of Hawai'i, USA. Processes represented in this example include expansion/contraction of agricultural lands, urbanization, wildfire, shrub encroachment into grassland and harvest of tree plantations; the model also projects shifts in moisture zones due to climate change. Key model output includes projections of the future spatial and temporal distribution of LULC classes and moisture zones across the landscape over the next 50 years. State-and-transition simulation models can be applied to a wide range of landscapes, including questions of both land-use change and vegetation dynamics. Because the method is inherently stochastic, it is well suited for characterizing uncertainty in model projections. When combined with the ST-Sim software, STSMs offer a simple yet powerful means for developing a wide range of models of
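
    A toy state-and-transition simulation makes the method concrete: each cell carries a discrete state and steps forward under annual transition probabilities. The Python sketch below uses three invented LULC states and made-up probabilities (not the Hawai'i model's classes or rates), and omits ST-Sim features such as age counters, target areas, and spatial interactions.

```python
import random
from collections import Counter

# Toy state-and-transition simulation: three invented LULC states with
# made-up annual transition probabilities (no counters, targets, or
# spatial interactions).
random.seed(0)
TRANSITIONS = {                       # state -> [(next_state, annual prob)]
    "grass": [("shrub", 0.03), ("urban", 0.01)],   # encroachment, urbanization
    "shrub": [("grass", 0.02)],                    # e.g., wildfire reset
    "urban": [],                                   # absorbing state here
}

cells = ["grass"] * 800 + ["shrub"] * 150 + ["urban"] * 50
for year in range(50):
    for i, state in enumerate(cells):
        for nxt, p in TRANSITIONS[state]:
            if random.random() < p:
                cells[i] = nxt
                break

print(Counter(cells))                 # projected state distribution after 50 years
```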

  7. Cognitive models embedded in system simulation models

    International Nuclear Information System (INIS)

    Siegel, A.I.; Wolf, J.J.

    1982-01-01

    If we are to discuss and consider cognitive models, we must first come to grips with two questions: (1) What is cognition? (2) What is a model? Presumably, the answers to these questions can provide a basis for defining a cognitive model. Accordingly, this paper first places these two questions into perspective. Then, cognitive models are set within the context of computer simulation models, and a number of computer simulations of cognitive processes are described. Finally, pervasive issues are discussed vis-a-vis cognitive modeling in the computer simulation context.

  8. Modelling and Simulation of National Electronic Product Code Network Demonstrator Project

    Science.gov (United States)

    Mo, John P. T.

    The National Electronic Product Code (EPC) Network Demonstrator Project (NDP) was the first large-scale consumer goods track-and-trace investigation in the world using the full EPC protocol system for applying RFID technology in supply chains. The NDP demonstrated methods of sharing information securely using the EPC Network, providing authentication to interacting parties, and enhancing the ability to track and trace the movement of goods within the entire supply chain involving transactions among multiple enterprises. Due to project constraints, the actual run of the NDP lasted only 3 months and could not be consolidated with quantitative results. This paper discusses the modelling and simulation of activities in the NDP in a discrete event simulation environment and provides an estimate of the potential benefits that could be derived from the NDP if it were continued for one whole year.

  9. Modeling extreme (Carrington-type) space weather events using three-dimensional MHD code simulations

    Science.gov (United States)

    Ngwira, C. M.; Pulkkinen, A. A.; Kuznetsova, M. M.; Glocer, A.

    2013-12-01

    There is growing concern over possible severe societal consequences related to adverse space weather impacts on man-made technological infrastructure and systems. In the last two decades, significant progress has been made towards the modeling of space weather events. Three-dimensional (3-D) global magnetohydrodynamics (MHD) models have been at the forefront of this transition, and have played a critical role in advancing our understanding of space weather. However, the modeling of extreme space weather events is still a major challenge even for existing global MHD models. In this study, we introduce a specially adapted University of Michigan 3-D global MHD model for simulating extreme space weather events that have a ground footprint comparable to (or larger than) that of the Carrington superstorm. Results are presented for an initial simulation run with "very extreme" constructed/idealized solar wind boundary conditions driving the magnetosphere. In particular, we describe the reaction of the magnetosphere-ionosphere system and the associated ground induced geoelectric field to such extreme driving conditions. We also discuss the results and what they might mean for the accuracy of the simulations. The model is further tested using input data for an observed space weather event to verify the MHD model's consistency and to draw guidance for future work. This extreme space weather MHD model is designed specifically for practical application to the modeling of extreme geomagnetically induced electric fields, which can drive large currents in earth conductors such as power transmission grids.

  10. Comparison of mean properties of simulated convection in a cloud-resolving model with those produced by cumulus parameterization

    Energy Technology Data Exchange (ETDEWEB)

    Dudhia, J.; Parsons, D.B. [National Center for Atmospheric Research, Boulder, CO (United States)

    1996-04-01

    An Intensive Observation Period (IOP) of the Atmospheric Radiation Measurement (ARM) Program took place at the Southern Great Plains (SGP) Cloud and Radiation Testbed (CART) site from June 16-26, 1993. The National Center for Atmospheric Research (NCAR)/Penn State Mesoscale Model (MM5) has been used to simulate this period on a 60-km domain with 20- and 6.67-km nests centered on Lamont, Oklahoma. Simulations are run with data assimilation by the nudging technique to incorporate upper-air and surface data from a variety of platforms. The model maintains dynamical consistency between the fields, while the data correct for model biases that may occur during long-term simulations and provide boundary conditions. For the work reported here, 3-hourly analyses from the Mesoscale Atmospheric Prediction System (MAPS) of the National Oceanic and Atmospheric Administration (NOAA) were used to drive the 60-km domain, while the inner domains were unforced. A continuous 10-day period was simulated.

  11. A java based simulator with user interface to simulate ventilated patients

    Directory of Open Access Journals (Sweden)

    Stehle P.

    2015-09-01

    Full Text Available Mechanical ventilation is a life-saving intervention which, despite its use on a routine basis, poses the risk of inflicting further damage to the lung tissue if ventilator settings are chosen inappropriately. Medical decision support systems may help to prevent such injuries while providing the optimal settings to reach a defined clinical goal. In order to develop and verify decision support algorithms, a test bench simulating a patient's behaviour is needed. We propose a Java-based system that allows simulation of the respiratory mechanics, gas exchange and cardiovascular dynamics of a mechanically ventilated patient. The implemented models are allowed to interact and are interchangeable, enabling the simulation of various clinical scenarios. Model simulations run in real time and show physiologically plausible results.

  12. The influence of atmospheric grid resolution in a climate model-forced ice sheet simulation

    Science.gov (United States)

    Lofverstrom, Marcus; Liakka, Johan

    2018-04-01

    Coupled climate-ice sheet simulations have been growing in popularity in recent years. Experiments of this type are however challenging as ice sheets evolve over multi-millennial timescales, which is beyond the practical integration limit of most Earth system models. A common method to increase model throughput is to trade resolution for computational efficiency (compromise accuracy for speed). Here we analyze how the resolution of an atmospheric general circulation model (AGCM) influences the simulation quality in a stand-alone ice sheet model. Four identical AGCM simulations of the Last Glacial Maximum (LGM) were run at different horizontal resolutions: T85 (1.4°), T42 (2.8°), T31 (3.8°), and T21 (5.6°). These simulations were subsequently used as forcing of an ice sheet model. While the T85 climate forcing reproduces the LGM ice sheets to a high accuracy, the intermediate resolution cases (T42 and T31) fail to build the Eurasian ice sheet. The T21 case fails in both Eurasia and North America. Sensitivity experiments using different surface mass balance parameterizations improve the simulations of the Eurasian ice sheet in the T42 case, but the compromise is a substantial ice buildup in Siberia. The T31 and T21 cases do not improve in the same way in Eurasia, though the latter simulates the continent-wide Laurentide ice sheet in North America. The difficulty to reproduce the LGM ice sheets in the T21 case is in broad agreement with previous studies using low-resolution atmospheric models, and is caused by a substantial deterioration of the model climate between the T31 and T21 resolutions. It is speculated that this deficiency may demonstrate a fundamental problem with using low-resolution atmospheric models in these types of experiments.

  13. The Separatrix Algorithm for synthesis and analysis of stochastic simulations with applications in disease modeling.

    Directory of Open Access Journals (Sweden)

    Daniel J Klein

    Full Text Available Decision makers in epidemiology and other disciplines are faced with the daunting challenge of designing interventions that will be successful with high probability and robust against a multitude of uncertainties. To facilitate the decision making process in the context of a goal-oriented objective (e.g., eradicate polio by a target year), stochastic models can be used to map the probability of achieving the goal as a function of parameters. Each run of a stochastic model can be viewed as a Bernoulli trial in which "success" is returned if and only if the goal is achieved in simulation. However, each run can take a significant amount of time to complete, and many replicates are required to characterize each point in parameter space, so specialized algorithms are required to locate desirable interventions. To address this need, we present the Separatrix Algorithm, which strategically locates parameter combinations that are expected to achieve the goal with a user-specified probability of success (e.g., 95%). Technically, the algorithm iteratively combines density-corrected binary kernel regression with a novel information-gathering experiment design to produce results that are asymptotically correct and work well in practice. The Separatrix Algorithm is demonstrated on several test problems, and on a detailed individual-based simulation of malaria.
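
    The Bernoulli-trial framing in the record is easy to illustrate: each model run at a parameter setting returns success or failure, and replicates estimate the probability of success as a function of the parameter. The Python sketch below does this by brute force on a hypothetical one-parameter model with a 95% target; the actual Separatrix Algorithm replaces the brute-force grid with density-corrected kernel regression and adaptive experiment design.

```python
import numpy as np

# Brute-force version of the Bernoulli-trial framing: estimate
# P(success | x) by replicate runs of a hypothetical stochastic model,
# then read off where the 95% target is met.
rng = np.random.default_rng(1)

def run_model(x):
    # invented model: success becomes likelier as x passes 0.5
    return rng.random() < 1.0 / (1.0 + np.exp(-10.0 * (x - 0.5)))

xs = np.linspace(0.0, 1.0, 21)
p_hat = np.array([np.mean([run_model(x) for _ in range(200)]) for x in xs])

above = xs[p_hat >= 0.95]
print("smallest x meeting the 95% goal:", above[0] if above.size else "none found")
```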

  14. TESTING CMAQ CHEMISTRY SENSITIVITIES IN BASE CASE AND EMISSION CONTROL RUNS AT SEARCH AND SOS 99 SURFACE SITES IN THE SOUTHEASTERN UNITED STATES

    Science.gov (United States)

    CMAQ was run to simulate urban conditions in the southeastern U.S. in July 1999 at 32, 8, and 2 km grid spacings. Runs were made with two older mechanisms, Carbon Bond IV (CB4) and the Regional Acid Deposition Model, version 2 (RADM2), and with the more recent California Statewid...

  15. A numerical simulation of a contrail

    Energy Technology Data Exchange (ETDEWEB)

    Levkov, L.; Boin, M.; Meinert, D. [GKSS-Forschungszentrum Geesthacht GmbH, Geesthacht (Germany)

    1997-12-31

    The formation of a contrail from an aircraft flying near the tropopause is simulated using a three-dimensional mesoscale atmospheric model including a very complex scheme of parameterized cloud microphysical processes. The model predicted ice concentrations are in very good agreement with data measured during the International Cirrus Experiment (ICE), 1989. Sensitivity simulations were run to determine humidity forcing on the life time of contrails. (author) 4 refs.

  17. A conceptual framework to model long-run qualitative change in the energy system

    OpenAIRE

    Ebersberger, Bernd

    2004-01-01

    A conceptual framework to model long-run qualitative change in the energy system / A. Pyka, B. Ebersberger, H. Hanusch. - In: Evolution and economic complexity / ed. J. Stanley Metcalfe ... - Cheltenham [et al.]: Elgar, 2004. - pp. 191-213

  18. Utilization of Human-Like Pelvic Rotation for Running Robot

    Directory of Open Access Journals (Sweden)

    Takuya Otani

    2015-07-01

    The spring loaded inverted pendulum (SLIP) is used to model human running. It is based on a characteristic feature of human running, in which the linear-spring-like motion of the standing leg is produced by the joint stiffness of the knee and ankle. Although this model is widely used in robotics, it does not include human-like pelvic motion. In this study, we show that the pelvis actually contributes to the increase in jumping force and the absorption of landing impact. On the basis of this finding, we propose a new model, SLIP2 (spring loaded inverted pendulum with pelvis), to improve running in humanoid robots. The model is composed of a body mass, a pelvis, and leg springs, and it can control its springs while running by means of pelvic movement in the frontal plane. To achieve running motions, we developed a running control system that includes a pelvic oscillation controller to attain control over jumping power and a landing placement controller to adjust the running speed. We also developed a new running robot by using the SLIP2 model and performed hopping and running experiments to evaluate the model. The developed robot could accomplish hopping motions by pelvic movement alone. The results also established that the difference between the pelvic rotational phase and the oscillation phase of the vertical mass displacement affects the jumping force. In addition, the robot demonstrated the ability to run with a foot placement controller depending on the reference running speed.
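
    The stance phase of the baseline SLIP model is a point mass on a massless leg spring, integrated in polar coordinates. A minimal sketch with illustrative parameter values (this omits the pelvis extension that is the paper's contribution):

        import numpy as np

        # SLIP stance phase: leg length r, leg angle th measured from the ground.
        # Equations follow from the spring-mass Lagrangian; values are illustrative.
        m, k, r0, g = 70.0, 15000.0, 1.0, 9.81   # mass, stiffness, rest length, gravity
        r, th = r0, np.radians(70.0)             # touchdown state
        dr, dth = -1.0, 2.0                      # radial / angular velocity
        dt, t = 1e-4, 0.0
        while True:
            ddr = r * dth ** 2 - g * np.sin(th) + (k / m) * (r0 - r)
            ddth = -(g * np.cos(th) + 2.0 * dr * dth) / r
            dr += ddr * dt
            dth += ddth * dt
            r += dr * dt
            th += dth * dt
            t += dt
            if r >= r0 and dr > 0.0:             # leg back at rest length: liftoff
                break
        print(f"stance duration {t:.3f} s, takeoff angle {np.degrees(th):.1f} deg")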

  19. Model for Simulating a Spiral Software-Development Process

    Science.gov (United States)

    Mizell, Carolyn; Curley, Charles; Nayak, Umanath

    2010-01-01

    A discrete-event simulation model, and a computer program that implements the model, have been developed as means of analyzing a spiral software-development process. This model can be tailored to specific development environments for use by software project managers in making quantitative cases for deciding among different software-development processes, courses of action, and cost estimates. A spiral process can be contrasted with a waterfall process, which is a traditional process that consists of a sequence of activities that include analysis of requirements, design, coding, testing, and support. A spiral process is an iterative process that can be regarded as a repeating modified waterfall process. Each iteration includes assessment of risk, analysis of requirements, design, coding, testing, delivery, and evaluation. A key difference between a spiral and a waterfall process is that a spiral process can accommodate changes in requirements at each iteration, whereas in a waterfall process, requirements are considered to be fixed from the beginning and, therefore, a waterfall process is not flexible enough for some projects, especially those in which requirements are not known at the beginning or may change during development. For a given project, a spiral process may cost more and take more time than does a waterfall process, but may better satisfy a customer's expectations and needs. Models for simulating various waterfall processes have been developed previously, but until now, there have been no models for simulating spiral processes. The present spiral-process-simulating model and the software that implements it were developed by extending a discrete-event simulation process model of the IEEE 12207 Software Development Process, which was built using commercially available software known as the Process Analysis Tradeoff Tool (PATT). Typical inputs to PATT models include industry-average values of product size (expressed as number of lines of code

  20. A Study on Bipedal and Mobile Robot Behavior Through Modeling and Simulation

    Directory of Open Access Journals (Sweden)

    Nirmala Nirmala

    2015-05-01

    The purpose of this work is to study and analyze mobile robot behavior. To this end, a framework is adopted and developed for mobile and bipedal robots. The robots are designed, built, and run, proceeding from the development of the mechanical structure through electronics and control integration to the control software application. The behavior of these robots is difficult to observe and analyze qualitatively. To evaluate the design and behavior quality, modeling and simulation of the robot structure and its task capability are performed. The stepwise procedure for studying robot behavior is explained. Behavioral case studies were carried out on bipedal robots, a transporter robot and an Autonomous Guided Vehicle (AGV) developed at our institution. The experiments were conducted on these robots by adjusting their dynamic properties and/or the surrounding environment. Validation is performed by comparing the simulation result with the real robot execution. The simulation gives a more idealistic behavior execution rather than a realistic one. Adjustments were performed to fine-tune the simulation's parameters to provide a more realistic performance.

  1. Are There Long-Run Effects of the Minimum Wage?

    Science.gov (United States)

    Sorkin, Isaac

    2015-04-01

    An empirical consensus suggests that there are small employment effects of minimum wage increases. This paper argues that these are short-run elasticities. Long-run elasticities, which may differ from short-run elasticities, are policy relevant. This paper develops a dynamic industry equilibrium model of labor demand. The model makes two points. First, long-run regressions have been misinterpreted because even if the short- and long-run employment elasticities differ, standard methods would not detect a difference using US variation. Second, the model offers a reconciliation of the small estimated short-run employment effects with the commonly found pass-through of minimum wage increases to product prices.

  2. The 2010 Pakistan floods: high-resolution simulations with the WRF model

    Science.gov (United States)

    Viterbo, Francesca; Parodi, Antonio; Molini, Luca; Provenzale, Antonello; von Hardenberg, Jost; Palazzi, Elisa

    2013-04-01

    Estimating current and future water resources in high mountain regions with complex orography is a difficult but crucial task. In particular, the French-Italian project PAPRIKA is focused on two specific regions in the Hindu-Kush - Himalaya - Karakorum (HKKH) region: the Shigar basin in Pakistan, at the foot of K2, and the Khumbu valley in Nepal, at the foot of Mount Everest. In this framework, we use the WRF model to simulate precipitation and meteorological conditions at high resolution in areas with extreme orographic slopes, comparing the model output with station and satellite data. Once the model is validated, we shall run a set of three future time slices at very high spatial resolution, for the periods 2046-2050, 2071-2075 and 2096-2100, nested in different climate change scenarios (EXtreme PREcipitation and Hydrological climate Scenario Simulations, EXPRESS-Hydro project). As a prelude to this study, here we discuss the simulation of specific high-intensity rainfall events in this area. In this paper we focus on the 2010 Pakistan floods, which began in late July 2010, when heavy monsoon rains fell over the Khyber Pakhtunkhwa, Sindh, Punjab and Balochistan regions of Pakistan, affecting the Indus River basin. Approximately one-fifth of Pakistan's total land area was underwater, with a death toll of about 2000 people. This event has been simulated with the WRF model (version 3.3) in cloud-permitting mode (d01 14 km and d02 3.5 km); different convective closures and microphysics parameterizations have been used. A deeper understanding of the processes responsible for this event has been gained through comparison with rainfall depth observations, radiosounding data and geostationary/polar satellite images.

  3. Modeling and simulation of complex systems a framework for efficient agent-based modeling and simulation

    CERN Document Server

    Siegfried, Robert

    2014-01-01

    Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing the structure and dynamics of agent-based models in detail. Based on this evaluation, the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore, he presents parallel and distributed simulation approaches for the execution of agent-based models - from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize underlying hard...

  4. Western diet increases wheel running in mice selectively bred for high voluntary wheel running.

    Science.gov (United States)

    Meek, T H; Eisenmann, J C; Garland, T

    2010-06-01

    Mice from a long-term selective breeding experiment for high voluntary wheel running offer a unique model to examine the contributions of genetic and environmental factors in determining the aspects of behavior and metabolism relevant to body-weight regulation and obesity. Starting with generation 16 and continuing through to generation 52, mice from the four replicate high runner (HR) lines have run 2.5-3-fold more revolutions per day as compared with four non-selected control (C) lines, but the nature of this apparent selection limit is not understood. We hypothesized that it might involve the availability of dietary lipids. Wheel running, food consumption (Teklad Rodent Diet (W) 8604, 14% kJ from fat; or Harlan Teklad TD.88137 Western Diet (WD), 42% kJ from fat) and body mass were measured over 1-2-week intervals in 100 males for 2 months starting 3 days after weaning. WD was obesogenic for both HR and C, significantly increasing both body mass and retroperitoneal fat pad mass, the latter even when controlling statistically for wheel-running distance and caloric intake. The HR mice had significantly less fat than C mice, explainable statistically by their greater running distance. On adjusting for body mass, HR mice showed higher caloric intake than C mice, also explainable by their higher running. Accounting for body mass and running, WD initially caused increased caloric intake in both HR and C, but this effect was reversed during the last four weeks of the study. Western diet had little or no effect on wheel running in C mice, but increased revolutions per day by as much as 75% in HR mice, mainly through increased time spent running. The remarkable stimulation of wheel running by WD in HR mice may involve fuel usage during prolonged endurance exercise and/or direct behavioral effects on motivation. Their unique behavioral responses to WD may render HR mice an important model for understanding the control of voluntary activity levels.

  5. Short-run and long-run dynamics of farm land allocation

    DEFF Research Database (Denmark)

    Arnberg, Søren; Hansen, Lars Gårn

    2012-01-01

    This study develops and estimates a dynamic multi-output model of farmers’ land allocation decisions that allows for the gradual adjustment of allocations that can result from crop rotation practices and quasi-fixed capital constraints. Estimation is based on micro-panel data from Danish farmers...... that include acreage, output, and variable input utilization at the crop level. Results indicate that there are substantial differences between the short-run and long-run land allocation behaviour of Danish farmers and that there are substantial differences in the time lags associated with different crops...

  6. Reduction methods and uncertainty analysis: application to a Chemistry-Transport Model for modeling and simulation of impacts

    International Nuclear Information System (INIS)

    Boutahar, Jaouad

    2004-01-01

    In an integrated impact assessment, one has to test several scenarios of the model inputs and/or to identify the effects of model input uncertainties on the model outputs. In both cases, a large number of simulations of the model is necessary. That, of course, is not feasible with a comprehensive Chemistry-Transport Model, due to the need for huge CPU times. Two approaches may be used in order to circumvent these difficulties. The first approach consists of reducing the computational cost of the original model by building a reduced model. Two reduction techniques are used: the first method, POD, is related to the statistical behaviour of the system and is based on a proper orthogonal decomposition of the solutions; the second method is an efficient representation of the input/output behaviour through look-up tables, describing the model output as an expansion of finite hierarchical correlated functions in terms of the input variables. The second approach is based on reducing the number of model runs required by standard Monte Carlo methods. It characterizes the probabilistic response of the uncertain model output as an expansion of orthogonal polynomials according to the model input uncertainties. Then classical Monte Carlo simulation can easily be used to compute the probability density of the uncertain output. Another key point in an integrated impact assessment is to develop strategies for the reduction of emissions by computing Source/Receptor matrices for several years of simulations. We propose here an efficient method to calculate these matrices by using the adjoint model and, in particular, by defining the 'representative chemical day'. All of these methods are applied to POLAIR3D, a Chemistry-Transport model developed in this thesis. (author)
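
    The POD step can be illustrated compactly: gather model snapshots as columns, subtract the mean, and truncate the SVD. A minimal sketch with hypothetical snapshot data (not the thesis implementation):

        import numpy as np

        # Proper orthogonal decomposition of a snapshot matrix: keep the few
        # leading modes that capture most of the variance. Data are illustrative.
        rng = np.random.default_rng(1)
        snapshots = rng.standard_normal((1000, 50))   # 1000 state variables x 50 snapshots
        centered = snapshots - snapshots.mean(axis=1, keepdims=True)
        U, s, _ = np.linalg.svd(centered, full_matrices=False)

        energy = np.cumsum(s ** 2) / np.sum(s ** 2)
        r = int(np.searchsorted(energy, 0.99)) + 1    # modes capturing 99% of the energy
        basis = U[:, :r]                              # reduced basis
        reduced_state = basis.T @ snapshots[:, 0]     # project a full state onto it
        print(f"kept {r} of {len(s)} modes")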

  7. Co-Simulation of Building Energy and Control Systems with the Building Controls Virtual Test Bed

    Energy Technology Data Exchange (ETDEWEB)

    Wetter, Michael

    2010-08-22

    This article describes the implementation of the Building Controls Virtual Test Bed (BCVTB). The BCVTB is a software environment that allows different simulation programs to be connected so that they exchange data during time integration, and that allows conducting hardware-in-the-loop simulation. The software architecture is a modular design based on Ptolemy II, a software environment for the design and analysis of heterogeneous systems. Ptolemy II provides a graphical model building environment, synchronizes the exchanged data and visualizes the system evolution during run-time. The BCVTB provides additions to Ptolemy II that allow the run-time coupling of different simulation programs for data exchange, including EnergyPlus, MATLAB, Simulink and the Modelica modelling and simulation environment Dymola. The additions also allow executing system commands, such as a script that executes a Radiance simulation. In this article, the software architecture is presented and the mathematical model used to implement the co-simulation is discussed. The simulation program interface that the BCVTB provides is explained. The article concludes by presenting applications in which different state-of-the-art simulation programs are linked for run-time data exchange. This link allows the use of the simulation program that is best suited to the particular problem to model building heat transfer, HVAC system dynamics and control algorithms, and to compute a solution to the coupled problem using co-simulation.
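
    The data-exchange pattern that such middleware coordinates can be sketched as two models advancing in lock-step and trading interface variables at each synchronization point. A minimal illustration with a toy one-zone building model and a proportional controller; this shows only the coupling pattern, not the BCVTB API, and all names and values are hypothetical:

        # Loose coupling: simulator 1 advances one step, simulator 2 reads its
        # output and responds, and so on at every synchronization point.
        def building_step(dt, zone_temp, heating_power, ua=100.0, cap=1.0e5):
            # toy one-zone thermal model: heater input vs. losses to 10 degC outdoors
            return zone_temp + dt * (heating_power - ua * (zone_temp - 10.0)) / cap

        def controller_step(zone_temp, setpoint=21.0, gain=2000.0, max_power=3000.0):
            # proportional heating, clipped to the heater capacity
            return min(max_power, max(0.0, gain * (setpoint - zone_temp)))

        t, dt, temp, power = 0.0, 60.0, 15.0, 0.0
        while t < 3600.0:
            temp = building_step(dt, temp, power)   # "building" side advances
            power = controller_step(temp)           # "controller" side responds
            t += dt
        print(f"zone temperature after 1 h: {temp:.2f} degC")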

  8. Validation of Supersonic Film Cooling Modeling for Liquid Rocket Engine Applications

    Science.gov (United States)

    Morris, Christopher I.; Ruf, Joseph H.

    2010-01-01

    Topics include: upper stage engine key requirements and design drivers; Calspan "stage 1" results, He slot injection into hypersonic flow (air); test articles for shock generator diagram, slot injector details, and instrumentation positions; test conditions; modeling approach; 2-d grid used for film cooling simulations of test article; heat flux profiles from 2-d flat plate simulations (run #4); heat flux profiles from 2-d backward facing step simulations (run #43); isometric sketch of single coolant nozzle, and x-z grid of half-nozzle domain; comparison of 2-d and 3-d simulations of coolant nozzles (run #45); flowfield properties along coolant nozzle centerline (run #45); comparison of 3-d CFD nozzle flow calculations with experimental data; nozzle exit plane reduced to linear profile for use in 2-d film-cooling simulations (run #45); synthetic Schlieren image of coolant injection region (run #45); axial velocity profiles from 2-d film-cooling simulation (run #45); coolant mass fraction profiles from 2-d film-cooling simulation (run #45); heat flux profiles from 2-d film cooling simulations (run #45); heat flux profiles from 2-d film cooling simulations (runs #47, #45, and #47); 3-d grid used for film cooling simulations of test article; heat flux contours from 3-d film-cooling simulation (run #45); and heat flux profiles from 3-d and 2-d film cooling simulations (runs #44, #46, and #47).

  9. Predicting debris-flow initiation and run-out with a depth-averaged two-phase model and adaptive numerical methods

    Science.gov (United States)

    George, D. L.; Iverson, R. M.

    2012-12-01

    ... much-higher-resolution grids evolve with the flow. The reduction in computational cost due to AMR makes very large-scale problems tractable on personal computers. Model accuracy can be tested by comparing numerical predictions with empirical data. These comparisons utilize controlled experiments conducted at the USGS debris-flow flume, which provide detailed data about flow mobilization and dynamics. Additionally, we have simulated historical large-scale debris flows, such as the ≈50 million m^3 debris flow that originated on Mt. Meager, British Columbia, in 2010. This flow took a very complex route through highly variable topography and provides a valuable benchmark for testing. Maps of the debris-flow deposit and data from seismic stations provide evidence regarding flow initiation, transit times and deposition. Our simulations reproduce many of the complex patterns of the event, such as run-out geometry and extent, and the large-scale nature of the flow and the complex topographical features demonstrate the utility of AMR in flow simulations.

  10. Detailed dynamic solid oxide fuel cell modeling for electrochemical impedance spectra simulation

    Energy Technology Data Exchange (ETDEWEB)

    Hofmann, Ph. [Laboratory of Steam Boilers and Thermal Plants, School of Mechanical Engineering, Thermal Engineering Section, National Technical University of Athens, Heroon Polytechniou 9, 15780 Athens (Greece); Panopoulos, K.D. [Institute for Solid Fuels Technology and Applications, Centre for Research and Technology Hellas, 4th km. Ptolemais-Mpodosakeio Hospital, Region of Kouri, P.O. Box 95, GR 502, 50200 Ptolemais (Greece)

    2010-08-15

    This paper presents a detailed, flexible mathematical model for planar solid oxide fuel cells (SOFCs), which allows the simulation of steady-state performance characteristics, i.e. voltage-current density (V-j) curves, and of dynamic operating behavior, with a special capability of simulating electrochemical impedance spectroscopy (EIS). The model is based on physico-chemical governing equations coupled with a detailed multi-component gas diffusion mechanism (the Dusty-Gas Model (DGM)) and a multi-step heterogeneous reaction mechanism implicitly accounting for the water-gas-shift (WGS), methane reforming and Boudouard reactions. Spatial discretization can be applied from 1D (button-cell approximation) up to quasi-3D (full-size anode-supported cell in cross-flow configuration) geometries and is resolved with the finite difference method (FDM). The model is built and implemented on the commercially available modeling and simulation platform gPROMS. Different fuels based on hydrogen, methane and syngas with inert diluents are run. The model is applied to demonstrate a detailed analysis of the SOFC's inherent losses and their attribution to the EIS. This is achieved by means of a step-by-step analysis of the transient processes involved, such as gas conversion in the main gas chambers/channels, gas diffusion through the porous electrodes together with the heterogeneous reactions on the nickel catalyst, and the double-layer current within the electrochemical reaction zone. The model is an important tool for analyzing SOFC performance fundamentals as well as for the design and optimization of material and operational parameters. (author)
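
    How an impedance spectrum arises can be sketched with the simplest equivalent circuit: an ohmic resistance in series with a charge-transfer resistance and double-layer capacitance in parallel. The paper's physico-chemical model resolves far more processes, but its EIS output is commonly interpreted in these terms; the values below are illustrative:

        import numpy as np

        # Impedance of R_ohm in series with a parallel (R_ct, C_dl) pair, swept
        # over frequency. Illustrative parameters, not fitted SOFC values.
        r_ohm, r_ct, c_dl = 0.10, 0.25, 0.05     # ohm, ohm, farad
        f = np.logspace(-2, 5, 200)              # frequency sweep, Hz
        omega = 2.0 * np.pi * f
        z = r_ohm + r_ct / (1.0 + 1j * omega * r_ct * c_dl)

        # Nyquist representation: real part vs. negative imaginary part
        for fi, zi in zip(f[::50], z[::50]):
            print(f"{fi:10.2e} Hz  Z' = {zi.real:.3f}  -Z'' = {-zi.imag:.3f}")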

  11. The design of the Run Clever randomized trial: running volume, intensity and running-related injuries.

    Science.gov (United States)

    Ramskov, Daniel; Nielsen, Rasmus Oestergaard; Sørensen, Henrik; Parner, Erik; Lind, Martin; Rasmussen, Sten

    2016-04-23

    Injury incidence and prevalence in running populations have been investigated and documented in several studies. However, knowledge about injury etiology and prevention is needed. Training errors in running are modifiable risk factors, and people engaged in recreational running need evidence-based running schedules to minimize the risk of injury. The existing literature on running volume, running intensity and the development of injuries shows conflicting results. This may be related to previously applied study designs, the methods used to quantify the performed running and the statistical analysis of the collected data. The aim of the Run Clever trial is to investigate whether a focus on running intensity compared with a focus on running volume in a running schedule influences the overall injury risk differently. The Run Clever trial is a randomized trial with a 24-week follow-up. Healthy recreational runners between 18 and 65 years with an average of 1-3 running sessions per week over the past 6 months are included. Participants are randomized into two intervention groups: Schedule-I and Schedule-V. Schedule-I emphasizes a progression in running intensity by increasing the weekly volume of running at a hard pace, while Schedule-V emphasizes a progression in running volume by increasing the weekly overall volume. Data on the running performed are collected by GPS. Participants who sustain running-related injuries are diagnosed by a diagnostic team of physiotherapists using standardized diagnostic criteria. The members of the diagnostic team are blinded. The study design, procedures and informed consent were approved by the Ethics Committee Northern Denmark Region (N-20140069). The Run Clever trial will provide insight into possible differences in injury risk between running schedules emphasizing either running intensity or running volume. The risk of sustaining volume- and intensity-related injuries will be compared in the two intervention groups using a competing...

  12. A comparison among observations and earthquake simulator results for the allcal2 California fault model

    Science.gov (United States)

    Tullis, Terry. E.; Richards-Dinger, Keith B.; Barall, Michael; Dieterich, James H.; Field, Edward H.; Heien, Eric M.; Kellogg, Louise; Pollitz, Fred F.; Rundle, John B.; Sachs, Michael K.; Turcotte, Donald L.; Ward, Steven N.; Yikilmaz, M. Burak

    2012-01-01

    In order to understand earthquake hazards, we would ideally have a statistical description of earthquakes for tens of thousands of years. Unfortunately, the ∼100-year instrumental, several-hundred-year historical, and few-thousand-year paleoseismological records are woefully inadequate to provide a statistically significant record. Physics-based earthquake simulators can generate arbitrarily long histories of earthquakes; thus they can provide a statistically meaningful history of simulated earthquakes. The question is, how realistic are these simulated histories? The purpose of this paper is to begin to answer that question. We compare the results between different simulators and with information that is known from the limited instrumental, historic, and paleoseismological data. As expected, the results from all the simulators show that the observational record is too short to properly represent the system behavior; therefore, although tests of the simulators against the limited observations are necessary, they are not a sufficient test of the simulators' realism. The simulators appear to pass this necessary test. In addition, the physics-based simulators show similar behavior even though there are large differences in the methodology. This suggests that they represent realistic behavior. Different assumptions concerning the constitutive properties of the faults do result in enhanced capabilities of some simulators. However, it appears that the similar behavior of the different simulators may result from the fault-system geometry, slip rates, and assumed strength drops, along with the shared physics of stress transfer. This paper describes the results of running four earthquake simulators that are described elsewhere in this issue of Seismological Research Letters. The simulators ALLCAL (Ward, 2012), VIRTCAL (Sachs et al., 2012), RSQSim (Richards-Dinger and Dieterich, 2012), and ViscoSim (Pollitz, 2012) were run on our most recent all-California fault...

  13. Forecasting Lightning Threat using Cloud-Resolving Model Simulations

    Science.gov (United States)

    McCaul, Eugene W., Jr.; Goodman, Steven J.; LaCasse, Katherine M.; Cecil, Daniel J.

    2008-01-01

    ... simulations can be in error. Although these model shortcomings presently limit the precision of lightning threat forecasts from individual runs of current-generation models, the techniques proposed herein should continue to be applicable as newer and more accurate physically based model versions, physical parameterizations, initialization techniques and ensembles of forecasts become available.

  14. Reliable low precision simulations in land surface models

    Science.gov (United States)

    Dawson, Andrew; Düben, Peter D.; MacLeod, David A.; Palmer, Tim N.

    2017-12-01

    Weather and climate models must continue to increase in both resolution and complexity in order that forecasts become more accurate and reliable. Moving to lower numerical precision may be an essential tool for coping with the demand for ever increasing model complexity in addition to increasing computing resources. However, there have been some concerns in the weather and climate modelling community over the suitability of lower precision for climate models, particularly for representing processes that change very slowly over long time-scales. These processes are difficult to represent using low precision due to time increments being systematically rounded to zero. Idealised simulations are used to demonstrate that a model of deep soil heat diffusion that fails when run in single precision can be modified to work correctly using low precision, by splitting up the model into a small higher precision part and a low precision part. This strategy retains the computational benefits of reduced precision whilst preserving accuracy. This same technique is also applied to a full complexity land surface model, resulting in rounding errors that are significantly smaller than initial condition and parameter uncertainties. Although lower precision will present some problems for the weather and climate modelling community, many of the problems can likely be overcome using a straightforward and physically motivated application of reduced precision.
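
    A minimal sketch of the splitting strategy, using NumPy's float16 as the low-precision type: accumulating a slow process entirely in half precision loses the small increments to rounding, while carrying only a small accumulator at higher precision recovers them. The variables and values are illustrative, not the land surface model's code:

        import numpy as np

        # Pure half precision: the increment is far below the float16 spacing
        # near 300 (about 0.25), so every update rounds back to the old state.
        increment = np.float16(1e-3)
        state = np.float16(300.0)                  # e.g. a deep-soil temperature
        for _ in range(1000):
            state = np.float16(state + increment)  # increment rounds away
        print(state)                               # still ~300.0: drift is lost

        # Split state: bulk value stays in low precision, only the small
        # anomaly accumulates in high precision.
        base = np.float16(300.0)
        anomaly = 0.0                              # cheap high-precision part
        for _ in range(1000):
            anomaly += float(increment)
        print(float(base) + anomaly)               # ~301.0 as expected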

  15. Physics detector simulation facility system software description

    International Nuclear Information System (INIS)

    Allen, J.; Chang, C.; Estep, P.; Huang, J.; Liu, J.; Marquez, M.; Mestad, S.; Pan, J.; Traversat, B.

    1991-12-01

    Large and costly detectors will be constructed during the next few years to study the interactions produced by the SSC. Efficient, cost-effective designs for these detectors will require careful thought and planning. Because it is not possible to fully test a proposed design in a scaled-down version, the adequacy of a proposed design will be determined by a detailed computer model of the detectors. Physics and detector simulations will be performed on the computer model using the high-powered computing system at the Physics Detector Simulation Facility (PDSF). The SSCL has particular computing requirements for high-energy physics (HEP) Monte Carlo calculations for the simulation of SSCL physics and detectors. The numerical calculations to be performed in each simulation are lengthy and detailed; they could require many months per run on a VAX 11/780 computer and may produce several gigabytes of data per run. Consequently, a distributed computing environment of several networked high-speed computing engines is envisioned to meet these needs. These networked computers will form the basis of a centralized facility for SSCL physics and detector simulation work. Our computer planning groups have determined that the most efficient, cost-effective way to provide these high-performance computing resources at this time is with RISC-based UNIX workstations. The modeling and simulation application software that will run on the computing system is usually written by physicists in the FORTRAN language and may need thousands of hours of supercomputing time. The system software is the "glue" which integrates the distributed workstations and allows them to be managed as a single entity. This report will address the computing strategy for the SSC.

  16. OXlearn: a new MATLAB-based simulation tool for connectionist models.

    Science.gov (United States)

    Ruh, Nicolas; Westermann, Gert

    2009-11-01

    OXlearn is a free, platform-independent MATLAB toolbox in which standard connectionist neural network models can be set up, run, and analyzed by means of a user-friendly graphical interface. Due to its seamless integration with the MATLAB programming environment, the inner workings of the simulation tool can be easily inspected and/or extended using native MATLAB commands or components. This combination of usability, transparency, and extendability makes OXlearn an efficient tool for the implementation of basic research projects or the prototyping of more complex research endeavors, as well as for teaching. Both the MATLAB toolbox and a compiled version that does not require access to MATLAB can be downloaded from http://psych.brookes.ac.uk/oxlearn/.

  17. Evaluation of Seismic Rupture Models for the 2011 Tohoku-Oki Earthquake Using Tsunami Simulation

    Directory of Open Access Journals (Sweden)

    Ming-Da Chiou

    2013-01-01

    Developing a realistic, three-dimensional rupture model of a large offshore earthquake is difficult to accomplish directly from band-limited ground-motion observations. A potential indirect method is using a tsunami simulation to verify the rupture model in reverse, because the initial conditions of the associated tsunamis are set by the coseismic seafloor displacement, which correlates with the rupture pattern along the main faulting. In this study, five well-developed rupture models for the 2011 Tohoku-Oki earthquake were adopted to evaluate differences in the simulated tsunamis and various rupture asperities. The leading wave of the simulated tsunamis triggered by the seafloor displacement in the Yamazaki et al. (2011) model resulted in the smallest root-mean-squared difference (~0.082 m on average) from the records of the eight DART (Deep-ocean Assessment and Reporting of Tsunamis) stations. This indicates that the main seismic rupture during the 2011 Tohoku earthquake occurred as a large shallow slip in a narrow zone adjacent to the Japan Trench. This study also quantified the influences of ocean stratification and tides, which are normally overlooked in tsunami simulations. The discrepancy between the simulations with and without stratification was less than 5% of the first peak wave height at the eight DART stations. The simulations run with and without the presence of tides resulted in a ~1% discrepancy in the height of the leading wave. Because simulations accounting for tides and stratification are time-consuming and their influences are negligible, particularly for the first tsunami wave, the two factors can be ignored in tsunami predictions for practical purposes.

  18. Simulation in Complex Modelling

    DEFF Research Database (Denmark)

    Nicholas, Paul; Ramsgaard Thomsen, Mette; Tamke, Martin

    2017-01-01

    This paper will discuss the role of simulation in extended architectural design modelling. As a framing paper, the aim is to present and discuss the role of integrated design simulation and feedback between design and simulation in a series of projects under the Complex Modelling framework. Complex...... performance, engage with high degrees of interdependency and allow the emergence of design agency and feedback between the multiple scales of architectural construction. This paper presents examples for integrated design simulation from a series of projects including Lace Wall, A Bridge Too Far and Inflated...... Restraint developed for the research exhibition Complex Modelling, Meldahls Smedie Gallery, Copenhagen in 2016. Where the direct project aims and outcomes have been reported elsewhere, the aim for this paper is to discuss overarching strategies for working with design integrated simulation....

  19. VASA: Interactive Computational Steering of Large Asynchronous Simulation Pipelines for Societal Infrastructure.

    Science.gov (United States)

    Ko, Sungahn; Zhao, Jieqiong; Xia, Jing; Afzal, Shehzad; Wang, Xiaoyu; Abram, Greg; Elmqvist, Niklas; Kne, Len; Van Riper, David; Gaither, Kelly; Kennedy, Shaun; Tolone, William; Ribarsky, William; Ebert, David S

    2014-12-01

    We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain.

  20. System-level modeling and simulation of the cell culture microfluidic biochip ProCell

    DEFF Research Database (Denmark)

    Minhass, Wajid Hassan; Pop, Paul; Madsen, Jan

    2010-01-01

    Microfluidic biochips offer a promising alternative to a conventional biochemical laboratory. There are two technologies for microfluidic biochips: droplet-based and flow-based. In this paper we are interested in flow-based microfluidic biochips, where the liquid flows continuously through pre-defined micro-channels using valves and pumps. We present an approach to the system-level modeling and simulation of a cell culture microfluidic biochip called ProCell, Programmable Cell Culture Chip. ProCell contains a cell culture chamber, which is envisioned to run 256 simultaneous experiments (viewed...

  1. Evaluation of the 7-km GEOS-5 Nature Run

    Science.gov (United States)

    Gelaro, Ronald; Putman, William M.; Pawson, Steven; Draper, Clara; Molod, Andrea; Norris, Peter M.; Ott, Lesley; Prive, Nikki; Reale, Oreste; Achuthavarier, Deepthi

    2015-01-01

    This report documents an evaluation by the Global Modeling and Assimilation Office (GMAO) of a two-year 7-km-resolution non-hydrostatic global mesoscale simulation produced with the Goddard Earth Observing System (GEOS-5) atmospheric general circulation model. The simulation was produced as a Nature Run for conducting observing system simulation experiments (OSSEs). Generation of the GEOS-5 Nature Run (G5NR) was motivated in part by the desire of the OSSE community for an improved high-resolution sequel to an existing Nature Run produced by the European Centre for Medium-Range Weather Forecasts (ECMWF), which has served the community for several years. The intended use of the G5NR in this context is for generating simulated observations to test proposed observing system designs regarding new instruments and their deployments. Because NASA's interest in OSSEs extends beyond traditional weather forecasting applications, the G5NR includes, in addition to standard meteorological components, a suite of aerosol types and several trace gas concentrations, with emissions downscaled to 10 km using ancillary information such as power plant location, population density and night-light information. The evaluation exercise described here involved more than twenty-five GMAO scientists investigating various aspects of the G5NR performance, including time mean temperature and wind fields, energy spectra, precipitation and the hydrological cycle, the representation of waves, tropical cyclones and midlatitude storms, land and ocean surface characteristics, the representation and forcing effects of clouds and radiation, dynamics of the stratosphere and mesosphere, and the representation of aerosols and trace gases. Comparisons are made with observational data sets when possible, as well as with reanalyses and other long model simulations. The evaluation is broad in scope, as it is meant to assess the overall realism of basic aspects of the G5NR deemed relevant to the conduct of OSSEs

  2. An Evaluation of the Use of Simulated Annealing to Optimize Thinning Rates for Single Even-Aged Stands

    Directory of Open Access Journals (Sweden)

    Kai Moriguchi

    2015-01-01

    We evaluated the potential of simulated annealing as a reliable method for optimizing thinning rates for single even-aged stands. Four types of yield models were used as benchmark models to examine the algorithm's versatility. The thinning rate, which was constrained to 0–50% every 5 years at stand ages of 10–45 years, was optimized to maximize the net present value for one fixed rotation term (50 years). The best parameters for the simulated annealing were chosen from 113 patterns, using the mean of the net present value from 39 runs to ensure the best performance. We compared the solutions with those from coarse full enumeration to evaluate the method's reliability, and with 39 runs of random search to evaluate its efficiency. In contrast to random search, the best run of simulated annealing for each of the four yield models resulted in a better solution than coarse full enumeration. However, variations in the objective function for two yield models obtained with simulated annealing were significantly larger than those of random search. In conclusion, simulated annealing with optimized parameters is more efficient for optimizing thinning rates than random search. However, it is necessary to execute multiple runs to obtain reliable solutions.
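
    The optimization loop itself is standard simulated annealing over an eight-element vector of 5-yearly thinning rates bounded at 0-50%. A minimal sketch with a hypothetical placeholder for the yield/NPV objective (not one of the four benchmark models):

        import math, random

        def npv(rates, discount=1.05):
            # toy stand model: discounted thinning revenues plus final harvest
            stand, value = 1.0, 0.0
            for i, r in enumerate(rates):
                value += 0.8 * r * stand / discount ** (5 * i)
                stand = stand * (1.0 - r) * 1.3      # regrowth after thinning
            return value + stand / discount ** 50

        random.seed(1)
        x = [0.25] * 8                               # initial thinning schedule
        best, fx = list(x), npv(x)
        temp = 1.0
        while temp > 1e-3:
            cand = [min(0.5, max(0.0, r + random.gauss(0.0, 0.05))) for r in x]
            d = npv(cand) - npv(x)
            if d > 0 or random.random() < math.exp(d / temp):  # Metropolis step
                x = cand
                if npv(x) > fx:
                    best, fx = list(x), npv(x)
            temp *= 0.999                            # geometric cooling
        print([round(r, 2) for r in best], round(fx, 3))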

  3. CMS Software and Computing Ready for Run 2

    CERN Document Server

    Bloom, Kenneth

    2015-01-01

    In Run 1 of the Large Hadron Collider, software and computing was a strategic strength of the Compact Muon Solenoid experiment. The timely processing of data and simulation samples and the excellent performance of the reconstruction algorithms played an important role in the preparation of the full suite of searches used for the observation of the Higgs boson in 2012. In Run 2, the LHC will run at higher intensities and CMS will record data at a higher trigger rate. These new running conditions will provide new challenges for the software and computing systems. Over the two years of Long Shutdown 1, CMS has built upon the successes of Run 1 to improve the software and computing to meet these challenges. In this presentation we will describe the new features in software and computing that will once again put CMS in a position of physics leadership.

  4. Clinical value of virtual three-dimensional instrument and cerebral aneurysm models in the interventional preoperative simulation

    International Nuclear Information System (INIS)

    Wei Xin; Xie Xiaodong; Wang Chaohua

    2007-01-01

    Objective: To establish virtual three-dimensional instrument and cerebral aneurysm models by using three-dimensional modeling software, and to explore the usefulness of the models in interventional preoperative simulation. Methods: Virtual individual models including cerebral arteries and aneurysms were established by using the three-dimensional modeling software 3D Studio MAX R3, based on standard virtual cerebral aneurysm models and individual DSA images. Virtual catheters, guide wires, stents and coils were also established. The interventional preoperative simulation study was run on a personal computer and included 3 clinical cases. Results: The simulated working angle and the shaping angle of the catheter and guide wire tips in the 3 cases were identical to the operative results. The simulated number and size of coils required in 1 case of anterior communicating artery aneurysm and 1 case of posterior communicating artery aneurysm were identical to the operative results. In 1 case of giant internal carotid artery aneurysm, the simulation called for two more three-dimensional coils of 3 mm x 3 cm than were used in the operation, and the position of the second coil in the aneurysmal neck was adjusted according to the results of real-time simulation. The retrospective simulation of the operative procedures indicated that the simulation method could become routine for regular and small aneurysms, but more simulation experience needs to be accumulated for giant aneurysms. Conclusions: The virtual three-dimensional instrument and cerebral aneurysm models established with general-purpose software provide a new study method for neuro-interventional preoperative simulation and play an important guiding role in performing neuro-interventional operations. (authors)

  5. A Lithium-Ion Battery Simulator Based on a Diffusion and Switching Overpotential Hybrid Model for Dynamic Discharging Behavior and Runtime Predictions

    Directory of Open Access Journals (Sweden)

    Lan-Rong Dung

    2016-01-01

    A new battery simulator based on a hybrid model is proposed in this paper for dynamic discharging behavior and runtime predictions in existing electronic simulation environments, e.g., PSIM, so that it can help power circuit designers to develop and optimize their battery-powered electronic systems. The hybrid battery model combines a diffusion model and a switching overpotential model, which automatically switches between overpotential-resistance mode and overpotential-voltage mode to accurately describe the difference between the battery electro-motive force (EMF) and the terminal voltage. Therefore, this simulator can simply run in electronic simulation software with little computational effort and estimate battery performance while further accounting for nonlinear capacity effects. A linear extrapolation technique is adopted for extracting model parameters from constant-current discharging tests, so the EMF hysteresis problem is avoided. For model validation, experiments and simulations in the MATLAB and PSIM environments were conducted with six different profiles, including constant loads, an interrupted load, increasing and decreasing loads and a varying load. The results confirm the usefulness and accuracy of the proposed simulator. The behavior and runtime prediction errors can be as low as 3.1% and 1.2%, respectively.
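
    The runtime-prediction idea can be sketched by integrating state of charge under a load and computing terminal voltage as EMF minus an overpotential drop. The EMF curve, resistance, and cutoff below are illustrative placeholders, not the paper's extracted parameters:

        def emf(soc):
            # hypothetical open-circuit voltage curve vs. state of charge
            return 3.0 + 1.2 * soc - 0.4 * (1.0 - soc) ** 4

        def run_time(load_a, capacity_ah=2.0, r_over=0.08, cutoff_v=3.0, dt_s=1.0):
            soc, t = 1.0, 0.0
            while soc > 0.0:
                v = emf(soc) - load_a * r_over       # overpotential as an IR drop
                if v <= cutoff_v:
                    return t / 3600.0                # hours until cutoff voltage
                soc -= load_a * dt_s / (capacity_ah * 3600.0)
                t += dt_s
            return t / 3600.0

        for load in (0.5, 1.0, 2.0):
            print(f"{load:.1f} A -> {run_time(load):.2f} h")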

  6. A Novel CPU/GPU Simulation Environment for Large-Scale Biologically-Realistic Neural Modeling

    Directory of Open Access Journals (Sweden)

    Roger V Hoang

    2013-10-01

    Full Text Available Computational Neuroscience is an emerging field that provides unique opportunities to studycomplex brain structures through realistic neural simulations. However, as biological details are added tomodels, the execution time for the simulation becomes longer. Graphics Processing Units (GPUs are now being utilized to accelerate simulations due to their ability to perform computations in parallel. As such, they haveshown significant improvement in execution time compared to Central Processing Units (CPUs. Most neural simulators utilize either multiple CPUs or a single GPU for better performance, but still show limitations in execution time when biological details are not sacrificed. Therefore, we present a novel CPU/GPU simulation environment for large-scale biological networks,the NeoCortical Simulator version 6 (NCS6. NCS6 is a free, open-source, parallelizable, and scalable simula-tor, designed to run on clusters of multiple machines, potentially with high performance computing devicesin each of them. It has built-in leaky-integrate-and-fire (LIF and Izhikevich (IZH neuron models, but usersalso have the capability to design their own plug-in interface for different neuron types as desired. NCS6is currently able to simulate one million cells and 100 million synapses in quasi real time by distributing dataacross these heterogeneous clusters of CPUs and GPUs.
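
    The built-in LIF neuron model is simple enough to sketch in a few lines. Parameters below are illustrative, and NCS6's actual implementation distributes such updates across CPUs and GPUs:

        # Leaky integrate-and-fire neuron: the membrane potential decays toward
        # rest, integrates input current, and emits a spike on crossing threshold.
        dt, tau_m = 0.1, 10.0                            # ms
        v_rest, v_thresh, v_reset = -70.0, -54.0, -80.0  # mV
        i_syn = 20.0                                     # constant input (a.u.)

        v, spikes = v_rest, []
        for step in range(10000):                        # 1 s of simulated time
            dv = (-(v - v_rest) + i_syn) / tau_m
            v += dv * dt
            if v >= v_thresh:                            # threshold crossing: spike
                spikes.append(step * dt)
                v = v_reset
        print(f"{len(spikes)} spikes, first at {spikes[0]:.1f} ms")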

  7. A Hands-on Approach to Evolutionary Simulation

    DEFF Research Database (Denmark)

    Valente, Marco; Andersen, Esben Sloth

    2002-01-01

    in an industry (or an economy). To abbreviate we call such models NelWin models. The new system for the programming and simulation of such models is called the Laboratory for simulation development - abbreviated as Lsd. The paper is meant to allow readers to use the Lsd version of a basic NelWin model: observe...... the model content, run the simulation, interpret the results, modify the parameterisation, etc. Since the paper deals with the implementation of a fairly complex set of models in a fairly complex programming and simulation system, it does not contain full documentation of NelWin and Lsd. Instead we hope...... to give the reader a first introduction to NelWin and Lsd and inspire a further exploration of them....

  8. Parallelization and automatic data distribution for nuclear reactor simulations

    Energy Technology Data Exchange (ETDEWEB)

    Liebrock, L.M. [Liebrock-Hicks Research, Calumet, MI (United States)

    1997-07-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.

  10. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    Science.gov (United States)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
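
    The scaling step itself is compact: given the zero-absorption curve R0(t) and the fraction of time f_i a photon spends in layer i, absorption mu_a,i is imposed through a weighted Beer-Lambert factor, roughly R(t) = R0(t) * exp(-sum_i mu_a,i f_i v t), with v the speed of light in tissue. A minimal sketch with placeholder inputs rather than real MC output:

        import numpy as np

        # Weighted Beer-Lambert scaling of a zero-absorption reflectance curve.
        # All inputs are illustrative placeholders.
        c_tissue = 0.0214                       # speed of light in tissue, cm/ps (n ~ 1.4)
        t = np.linspace(10.0, 1000.0, 100)      # time, ps
        r0 = t ** -1.5 * np.exp(-t / 400.0)     # hypothetical zero-absorption curve
        mu_a = np.array([0.05, 0.20])           # per-layer absorption, 1/cm
        f_layer = np.array([0.7, 0.3])          # time fraction spent in each layer

        path = c_tissue * t                     # total path length vs. time
        r_abs = r0 * np.exp(-np.sum(mu_a * f_layer) * path)
        print(r_abs[:3])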

  11. Damage Propagation Modeling for Aircraft Engine Run-to-Failure Simulation

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper describes how damage propagation can be modeled within the modules of aircraft gas turbine engines. To that end, response surfaces of all sensors are...

  12. Cryogenic process simulation

    International Nuclear Information System (INIS)

    Panek, J.; Johnson, S.

    1994-01-01

    Combining accurate fluid property databases with a commercial equation-solving software package running on a desktop computer allows simulation of cryogenic processes without extensive computer programming. Computer simulation can be a powerful tool for process development or optimization. Most engineering simulations to date have required extensive programming skills in languages such as Fortran, Pascal, etc. Authors of simulation code have also usually been responsible for choosing and writing the particular solution algorithm. This paper describes a method of simulating cryogenic processes with a commercial software package on a desktop personal computer that does not require these traditional programming tasks. Applications include modeling of cryogenic refrigerators, heat exchangers, vapor-cooled power leads, vapor pressure thermometers, and various other engineering problems

  13. Comparing Productivity Simulated with Inventory Data Using Different Modelling Technologies

    Science.gov (United States)

    Klopf, M.; Pietsch, S. A.; Hasenauer, H.

    2009-04-01

    The Lime Stone National Park in Austria was established in 1997 to protect sensitive limestone soils from degradation due to heavy forest management. Since 1997 the management activities have been successively reduced, standing volume and coarse woody debris (CWD) have increased, and degraded soils have begun to recover. One option for studying the rehabilitation process towards a natural virgin forest state is the use of modelling technology. In this study we test two different modelling approaches for their applicability to the Lime Stone National Park. We compare the standing tree volume simulated by (i) the individual tree growth model MOSES, and (ii) the species- and management-sensitive adaptation of the biogeochemical-mechanistic model Biome-BGC. The results from the two models are compared with field observations from repeated permanent forest inventory plots of the Lime Stone National Park in Austria. The simulated CWD predictions of the BGC model were compared with dead wood measurements (standing and lying dead wood) recorded at the permanent inventory plots. The inventory was established between 1994 and 1996 and remeasured from 2004 to 2005. For this analysis 40 plots of this inventory were selected which comprise the required dead wood components and are dominated by a single tree species. First we used the distance-dependent individual tree growth model MOSES to derive the standing timber and the amount of mortality per hectare. MOSES is initialized with the inventory data at plot establishment and each sampling plot is treated as a forest stand. Biome-BGC is a process-based biogeochemical model with extensions for Austrian tree species, a self-initialization and a forest management tool. The initialization for the actual simulations with the BGC model was done as follows: we first used spin-up runs to derive a balanced forest vegetation, similar to an undisturbed forest. Next we considered the management history of the past centuries (heavy clear cuts...

  14. Addressing Thermal Model Run Time Concerns of the Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA)

    Science.gov (United States)

    Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff

    2016-01-01

    The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter Hubble sized telescope donated from elsewhere in the federal government for near-infrared sky surveys and Exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes to reduce the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time-steps than desired, which greatly increases the overall run time. The analysis of this fluid model required mitigation to drive the run time down by solving portions of the model at different time scales. Lastly, investigations were made into the impact of the removal of small radiation couplings on run time and accuracy. Use of these techniques allowed the models to produce meaningful results within reasonable run times to meet project schedule deadlines.
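
    The slew-screening technique can be sketched as a threshold test on the angle between successive pointing directions, triggering a flux recalculation only when the change is large enough to matter. A minimal illustration; the vectors, threshold, and orientation sequence are hypothetical, not WFIRST values:

        import numpy as np

        def unit(v):
            return v / np.linalg.norm(v)

        def needs_recalc(last_dir, new_dir, threshold_deg=5.0):
            # recompute environmental fluxes only for a sufficiently large slew
            cos_ang = np.clip(np.dot(unit(last_dir), unit(new_dir)), -1.0, 1.0)
            return np.degrees(np.arccos(cos_ang)) > threshold_deg

        last = np.array([1.0, 0.0, 0.0])
        recalcs = 0
        for az in np.radians(np.arange(0.0, 30.0, 1.0)):   # a sequence of slews
            new = np.array([np.cos(az), np.sin(az), 0.0])
            if needs_recalc(last, new):
                recalcs += 1                               # rerun flux calculation
                last = new
        print(f"{recalcs} flux recalculations for 30 orientations")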

  15. Simulation of ultra-long term behavior in HLW near-field by centrifugal model test. Part 1. Development of centrifugal equipment and centrifuge model test method

    International Nuclear Information System (INIS)

    Nishimoto, Soshi; Okada, Tetsuji; Sawada, Masataka

    2011-01-01

    The objective of this paper is to develop centrifugal equipment that can be run continuously for long durations, together with a model test method, in order to evaluate the long-term coupled thermo-hydro-mechanical behavior of a high-level waste geological disposal repository and its surroundings (the 'near-field'). The centrifugal equipment developed at CRIEPI in the present study, 'CENTURY5000-THM', is able to run continuously for up to six months. A long-term near-field behavior can therefore be simulated in a short time; for instance, the behavior over 5000 equivalent years can be simulated in six months by spinning a 1/100-scale model at 100 g. We carried out a test using a nylon specimen in a centrifugal force field of 30 g and confirmed the operation, control and measurement functions of CENTURY5000-THM over 11 days. The equipment was able to control the stress in the pressure vessel and to measure strain, temperature and pressure. Furthermore, scanning a small near-field model, including the metal overpack, bentonite buffer and rock, with a medical X-ray CT scanner showed that the internal structure of the model could be evaluated once metal artifacts were reduced. These results make it possible to evaluate the long-term behavior of a disposal repository by the centrifugal model test method. (author)
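
    The six-months-to-5000-years equivalence quoted above follows from the standard centrifuge scaling law for diffusion-controlled processes (consolidation, heat and moisture transport): when lengths are scaled by 1/N and the model is spun at N g, time scales with the square of N. Written out for the values in the abstract:

```latex
t_{\mathrm{prototype}} = N^{2}\, t_{\mathrm{model}},
\qquad
t_{\mathrm{prototype}} = 100^{2} \times 0.5~\mathrm{yr} = 5000~\mathrm{yr}.
```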

  16. ATLAS Distributed Computing in LHC Run2

    CERN Document Server

    Campana, Simone; The ATLAS collaboration

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run2. The increased data rate, the computing demands of Monte-Carlo simulation, and new approaches to ATLAS analysis dictated a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward the flexible computing model. Flexible utilization of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model; the data access mechanisms have been enhanced with remote access; and the network topology and performance are deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been defin...

  17. Simulating the effects of ground-water withdrawals on streamflow in a precipitation-runoff model

    Science.gov (United States)

    Zarriello, Philip J.; Barlow, P.M.; Duda, P.B.

    2004-01-01

    Precipitation-runoff models are used to assess the effects of water use and management alternatives on streamflow. Often, ground-water withdrawals are a major water-use component that affects streamflow, but the ability of surface-water models to simulate ground-water withdrawals is limited. As part of a Hydrologic Simulation Program-FORTRAN (HSPF) precipitation-runoff model developed to analyze the effect of ground-water and surface-water withdrawals on streamflow in the Ipswich River in northeastern Massachusetts, an analytical technique (STRMDEPL) was developed for calculating the effects of pumped wells on streamflow. STRMDEPL is a FORTRAN program based on two analytical solutions that solve equations for ground-water flow to a well completed in a semi-infinite, homogeneous, and isotropic aquifer in direct hydraulic connection to a fully penetrating stream. One analytical method calculates unimpeded flow at the stream-aquifer boundary and the other method calculates the resistance to flow caused by semipervious streambed and streambank material. The principle of superposition is used with these analytical equations to calculate time-varying streamflow depletions due to daily pumping. The HSPF model can readily incorporate streamflow depletions caused by a well or surface-water withdrawal, or by multiple wells or surface-water withdrawals, or both, as a combined time-varying outflow demand from affected channel reaches. These demands are stored as a time series in the Watershed Data Management (WDM) file. These time-series data are read into the model as an external source used to specify flow from the first outflow gate in the reach where the withdrawals are located. Although the STRMDEPL program can be run independently of the HSPF model, an extension was developed to run this program within GenScn, a scenario generator and graphical user interface developed for use with the HSPF model. This extension requires that actual pumping rates for each well be stored
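
    As a rough sketch of how the first analytical solution (the case with no streambed resistance) combines with superposition for a daily pumping record, consider the following. The Glover-type step response and the bookkeeping are standard, but the variable names are illustrative, and STRMDEPL's actual FORTRAN implementation, including the semipervious-streambed case, may differ in detail.

```python
import math

def glover_fraction(t, a, S, T):
    """Fraction of a steady pumping rate appearing as streamflow depletion
    at time t, for a fully penetrating stream with no streambed resistance
    in a semi-infinite, homogeneous, isotropic aquifer (Glover-type
    solution). a: well-to-stream distance; S: storativity;
    T: transmissivity; t: days since pumping began (consistent units)."""
    if t <= 0.0:
        return 0.0
    return math.erfc(math.sqrt(a * a * S / (4.0 * T * t)))

def daily_depletion(pumping, a, S, T):
    """Superposition in time: each day's change in pumping rate starts a
    new step response, evaluated at the end of every later day."""
    n = len(pumping)
    depletion = [0.0] * n
    prev = 0.0
    for k, q in enumerate(pumping):
        dq = q - prev                  # change in pumping rate on day k
        prev = q
        if dq != 0.0:
            for t in range(k, n):
                depletion[t] += dq * glover_fraction(t - k + 1.0, a, S, T)
    return depletion
```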

  18. Parameterization and Uncertainty Analysis of SWAT model in Hydrological Simulation of Chaohe River Basin

    Science.gov (United States)

    Jie, M.; Zhang, J.; Guo, B. B.

    2017-12-01

    As a typical distributed hydrological model, the SWAT model poses challenges in calibrating its parameters and analyzing their uncertainty. This paper takes the Chaohe River Basin, China, as the study area. After the SWAT model is established and the DEM data of the Chaohe River Basin are loaded, the watershed is automatically divided into several sub-basins. Land use, soil and slope are analyzed on the basis of the sub-basins and the hydrological response units (HRUs) of the study area are calculated; after running the SWAT model, simulated runoff values for the watershed are obtained. On this basis, weather data and the known daily runoff of three hydrological stations, combined with the SWAT-CUP automatic program and manual adjustment, are used for a multi-site calibration of the model parameters. Furthermore, the GLUE algorithm is used to analyze the parameter uncertainty of the SWAT model. The sensitivity analysis, calibration and uncertainty study of SWAT indicate that the parameterization of the hydrological characteristics of the Chaohe River is successful and feasible, and the model can be used to simulate the Chaohe River Basin.
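
    The GLUE procedure itself is compact. In the sketch below, `run_model` and `sample_params` are hypothetical stand-ins for a SWAT run and its parameter priors (the paper's actual SWAT-CUP workflow is more involved): 'behavioral' parameter sets are those whose Nash-Sutcliffe efficiency exceeds a threshold, and their likelihoods are normalized into weights.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency of a simulated series against observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def glue(run_model, sample_params, obs, n_samples=10000, threshold=0.5, seed=0):
    """Generalized Likelihood Uncertainty Estimation, sketched: sample
    parameter sets from their priors, keep 'behavioral' sets whose
    likelihood measure (here NSE) exceeds the threshold, and normalize
    the likelihoods into weights for predictive uncertainty bounds."""
    rng = np.random.default_rng(seed)
    behavioral = []
    for _ in range(n_samples):
        theta = sample_params(rng)              # one draw from the priors
        nse = nash_sutcliffe(obs, run_model(theta))
        if nse > threshold:
            behavioral.append((nse, theta))
    total = sum(nse for nse, _ in behavioral)
    return [(nse / total, theta) for nse, theta in behavioral] if total else []
```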

  19. Transfer, loss and physical processing of water in hit-and-run collisions of planetary embryos

    Science.gov (United States)

    Burger, C.; Maindl, T. I.; Schäfer, C. M.

    2018-01-01

    Collisions between large, similar-sized bodies are believed to shape the final characteristics and composition of terrestrial planets. Their inventories of volatiles such as water are either delivered or at least significantly modified by such events. Besides the transition from accretion to erosion with increasing impact velocity, similar-sized collisions can also result in hit-and-run outcomes for sufficiently oblique impact angles and large enough projectile-to-target mass ratios. We study volatile transfer and loss focusing on hit-and-run encounters by means of smoothed particle hydrodynamics simulations, including all main parameters: impact velocity, impact angle, mass ratio and also the total colliding mass. We find a broad range of overall water losses, up to 75% in the most energetic hit-and-run events, and confirm the much more severe consequences for the smaller body also for stripping of volatile layers. Transfer of water between projectile and target inventories is found to be mostly rather inefficient, and final water contents are dominated by pre-collision inventories reduced by impact losses, for similar pre-collision water mass fractions. Comparison with our numerical results shows that current collision outcome models are not accurate enough to reliably predict these composition changes in hit-and-run events. To also account for non-mechanical losses, we estimate the amount of collisionally vaporized water over a broad range of masses and find that these contributions are particularly important in collisions of ~Mars-sized bodies, with sufficiently high impact energies, but still relatively low gravity. Our results clearly indicate that the cumulative effect of several (hit-and-run) collisions can efficiently strip protoplanets of their volatile layers, especially the smaller body, as it might be common, e.g., for Earth-mass planets in systems with Super-Earths. An accurate model for stripping of volatiles that can be included in future planet

  20. Electrolytic reduction runs of 0.6 kg scale-simulated oxide fuel in a Li{sub 2}O-LiCl molten salt using metal anode shrouds

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Eun-Young, E-mail: eychoi@kaeri.re.kr [Korea Atomic Energy Research Institute, Daedoek-daero 989-111, Yuseong-gu, Daejeon 34057 (Korea, Republic of); Lee, Jeong; Heo, Dong Hyun; Lee, Sang Kwon [Korea Atomic Energy Research Institute, Daedoek-daero 989-111, Yuseong-gu, Daejeon 34057 (Korea, Republic of); Jeon, Min Ku [Korea Atomic Energy Research Institute, Daedoek-daero 989-111, Yuseong-gu, Daejeon 34057 (Korea, Republic of); Department of Quantum Energy Chemical Engineering, University of Science and Technology, Gajeong-ro 217, Yuseong-gu, Daejeon 34113 (Korea, Republic of); Hong, Sun Seok [Korea Atomic Energy Research Institute, Daedoek-daero 989-111, Yuseong-gu, Daejeon 34057 (Korea, Republic of); Kim, Sung-Wook [Korea Atomic Energy Research Institute, Daedoek-daero 989-111, Yuseong-gu, Daejeon 34057 (Korea, Republic of); Department of Quantum Energy Chemical Engineering, University of Science and Technology, Gajeong-ro 217, Yuseong-gu, Daejeon 34113 (Korea, Republic of); Kang, Hyun Woo; Jeon, Sang-Chae; Hur, Jin-Mok [Korea Atomic Energy Research Institute, Daedoek-daero 989-111, Yuseong-gu, Daejeon 34057 (Korea, Republic of)

    2017-06-15

    Ten electrolytic reduction or oxide reduction (OR) runs of a 0.6 kg scale-simulated oxide fuel in a Li{sub 2}O-LiCl molten salt at 650 °C were conducted using metal anode shrouds. During this procedure, an anode shroud surrounds a platinum anode and discharges hot oxygen gas from the salt to the outside of the OR apparatus, thereby preventing corrosion of the apparatus. In this study, a number of anode shrouds made of various metals were tested. Each metallic anode shroud consisted of a lower porous shroud for the salt phase and an upper nonporous shroud for the gas phase. A five-ply stainless steel (STS) wire mesh was the material commonly used for the lower porous shroud in the OR runs. The metals tested for the upper nonporous shroud in the different OR runs were STS, nickel, and platinum- or silver-lined nickel. The lower porous shroud showed no significant damage during two consecutive OR runs, but exhibited signs of damage after three or more runs due to thermal stress. The upper nonporous shrouds made of either platinum- or silver-lined nickel showed excellent corrosion resistance to hot oxygen gas, while STS or nickel without any platinum or silver lining exhibited poor corrosion resistance. - Highlights: •Electrolytic reduction runs of a 0.6 kg scale-simulated oxide fuel in a Li{sub 2}O-LiCl molten salt at 650 °C were conducted using metal anode shrouds. •Each metallic anode shroud consisted of a lower porous shroud for the salt phase and an upper nonporous shroud for the gas phase. •The upper nonporous shrouds made of noble metal-lined nickel showed excellent corrosion resistance to hot oxygen gas.

  1. Combined reservoir simulation and seismic technology, a new approach for modeling CHOPS

    Energy Technology Data Exchange (ETDEWEB)

    Aghabarati, H.; Lines, L.; Settari, A. [Calgary Univ., AB (Canada); Dumitrescu, C. [Sensor Geophysical Ltd., Calgary, AB (Canada)

    2008-10-15

    One of the primary recovery schemes for developing heavy oil reservoirs in Canada is cold heavy oil production with sand (CHOPS). With the introduction of progressive cavity pumps, CHOPS can be applied in unconsolidated or weakly consolidated formations. In order to better understand reservoir properties and the recovery mechanism, this paper discussed the combined use of reservoir simulation and seismic technology, applied to a heavy oil reservoir situated in Saskatchewan, Canada. Using a seismic survey acquired in 1989, the study used geostatistical methods to estimate the initial reservoir porosity. Sand production was then modeled using an erosional velocity approach and the model was run based on oil production. The paper also compared the true porosity derived from simulation against the porosity estimated from a second seismic survey acquired in 2001. Last, the extent and shape of the enhanced-permeability region were modelled in order to estimate the porosity distribution. It was concluded that the performance of the CHOPS wells depended greatly on the rate of creation of the high-permeability zone around the wells. 9 refs., 2 tabs., 18 figs., 1 appendix.

  2. A mechanistic diagnosis of the simulation of soil CO2 efflux of the ACME Land Model

    Science.gov (United States)

    Liang, J.; Ricciuto, D. M.; Wang, G.; Gu, L.; Hanson, P. J.; Mayes, M. A.

    2017-12-01

    Accurate simulation of the CO2 efflux from soils (i.e., soil respiration) to the atmosphere is critical for projecting global biogeochemical cycles and the magnitude of climate change in Earth system models (ESMs). Currently, the soil respiration simulated by ESMs still has large uncertainty. In this study, a mechanistic diagnosis of soil respiration in the Accelerated Climate Model for Energy (ACME) Land Model (ALM) was conducted using long-term observations at the Missouri Ozark AmeriFlux (MOFLUX) forest site in the central U.S. The results showed that the ALM default run significantly underestimated annual soil respiration and gross primary production (GPP), while incorrectly estimating soil water potential. Improving the simulation of soil water potential with site-specific data significantly improved the modeled annual soil respiration, primarily because annual GPP was simultaneously improved. Therefore, simulations of soil water potential must be carefully calibrated in ESMs. Despite the improved annual soil respiration, the ALM continued to underestimate soil respiration during peak growing seasons, and to overestimate it during non-peak growing seasons. Simulations with increased GPP during peak growing seasons increased soil respiration, while neither improved plant phenology nor increased temperature sensitivity affected the simulation of soil respiration during non-peak growing seasons. One potential reason for the overestimation during non-peak growing seasons may be that the current model structure is substrate-limited, while microbial dormancy under stress may cause the system to become decomposer-limited. Further studies with more microbial data are required to provide an adequate representation of soil respiration and to understand the underlying reasons for inaccurate model simulations.

  3. SPEEDES - A multiple-synchronization environment for parallel discrete-event simulation

    Science.gov (United States)

    Steinman, Jeff S.

    1992-01-01

    Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES) is a unified parallel simulation environment. It supports multiple-synchronization protocols without requiring users to recompile their code. When a SPEEDES simulation runs on one node, all the extra parallel overhead is removed automatically at run time. When the same executable runs in parallel, the user preselects the synchronization algorithm from a list of options. SPEEDES currently runs on UNIX networks and on the California Institute of Technology/Jet Propulsion Laboratory Mark III Hypercube. SPEEDES also supports interactive simulations. Featured in the SPEEDES environment is a new parallel synchronization approach called Breathing Time Buckets. This algorithm uses some of the conservative techniques found in Time Bucket synchronization, along with the optimism that characterizes the Time Warp approach. A mathematical model derived from first principles predicts the performance of Breathing Time Buckets. Along with the Breathing Time Buckets algorithm, this paper discusses the rules for processing events in SPEEDES, describes the implementation of various other synchronization protocols supported by SPEEDES, describes some new ones for the future, discusses interactive simulations, and then gives some performance results.
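
    In outline, one Breathing Time Buckets cycle can be sketched as below: a single-process toy with illustrative names, assuming (as in discrete-event simulation generally) that every generated message carries a timestamp at or after its generating event. In SPEEDES itself the event horizon is a global minimum across nodes and event processing overlaps with communication.

```python
import heapq

def breathing_time_buckets(pending, handle):
    """Sketch of Breathing Time Buckets: within each cycle, events are
    processed optimistically but the messages they generate are held
    back; the cycle's 'event horizon' is the earliest held message time
    and shrinks ('breathes') as messages appear. Only events below the
    current horizon are processed, so none needs rollback; at the end of
    the cycle the held messages are committed to the event queue.

    pending: heap of (time, seq, event) tuples;
    handle(event) -> list of (time, seq, event) messages it generates."""
    while pending:
        horizon = float("inf")
        held = []
        while pending and pending[0][0] < horizon:
            _t, _seq, ev = heapq.heappop(pending)
            for msg in handle(ev):
                held.append(msg)
                horizon = min(horizon, msg[0])   # horizon shrinks
        for msg in held:                         # commit phase
            heapq.heappush(pending, msg)
```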

  4. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that, because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction needs to be made between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them). Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inference) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviating the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  5. A review of mathematical modeling and simulation of controlled-release fertilizers.

    Science.gov (United States)

    Irfan, Sayed Ameenuddin; Razali, Radzuan; KuShaari, KuZilati; Mansor, Nurlidia; Azeem, Babar; Ford Versypt, Ashlee N

    2018-02-10

    Nutrients released into soils from uncoated fertilizer granules are lost continuously due to volatilization, leaching, denitrification, and surface run-off. These issues have caused economic loss due to low nutrient absorption efficiency and environmental pollution due to hazardous emissions and water eutrophication. Controlled-release fertilizers (CRFs) can change the release kinetics of the fertilizer nutrients through an abatement strategy to offset these issues by providing the fertilizer content in synchrony with the metabolic needs of the plants. Parametric analysis of release characteristics of CRFs is of paramount importance for the design and development of new CRFs. However, the experimental approaches are not only time consuming, but they are also cumbersome and expensive. Scientists have introduced mathematical modeling techniques to predict the release of nutrients from the CRFs to elucidate fundamental understanding of the dynamics of the release processes and to design new CRFs in a shorter time and with relatively lower cost. This paper reviews and critically analyzes the latest developments in the mathematical modeling and simulation techniques that have been reported for the characteristics and mechanisms of nutrient release from CRFs. The scope of this review includes the modeling and simulations techniques used for coated, controlled-release fertilizers. Copyright © 2017 Elsevier B.V. All rights reserved.
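
    As one example of the kind of mechanistic release model the review surveys (a minimal sketch under common quasi-steady assumptions, not a model taken from the paper), Fickian diffusion of dissolved nutrient across a thin granule coating gives a nearly constant release rate while undissolved solid keeps the internal solution saturated, followed by an exponential tail once the core is depleted.

```python
import numpy as np

def release_curve(D, l, r, c_sat, m0, t_end, dt):
    """Cumulative nutrient release from one coated granule, assuming
    quasi-steady diffusion across a thin coating:
        dM/dt = -(4*pi*r**2) * D * c_in / l,
    where the internal concentration c_in stays at saturation while
    undissolved solid remains, then falls as the core is depleted.
    D: coating diffusivity; l: coating thickness; r: granule radius;
    c_sat: saturation concentration; m0: initial nutrient mass;
    all in consistent units. Illustrative sketch only."""
    k = 4.0 * np.pi * r ** 2 * D / l          # membrane conductance
    vol = (4.0 / 3.0) * np.pi * r ** 3        # internal solution volume
    times = np.arange(0.0, t_end, dt)
    released = np.empty_like(times)
    remaining = m0
    for i in range(times.size):
        c_in = min(c_sat, remaining / vol)    # pinned at c_sat, then decays
        remaining = max(0.0, remaining - k * c_in * dt)
        released[i] = m0 - remaining          # cumulative mass released
    return times, released
```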

  6. Weak simulated extratropical responses to complete tropical deforestation

    Science.gov (United States)

    Findell, K.L.; Knutson, T.R.; Milly, P.C.D.

    2006-01-01

    The Geophysical Fluid Dynamics Laboratory atmosphere-land model version 2 (AM2/LM2) coupled to a 50-m-thick slab ocean model has been used to investigate remote responses to tropical deforestation. Magnitudes and significance of differences between a control run and a deforested run are assessed through comparisons of 50-yr time series, accounting for autocorrelation and field significance. Complete conversion of the broadleaf evergreen forests of South America, central Africa, and the islands of Oceania to grasslands leads to highly significant local responses. In addition, a broad but mild warming is seen throughout the tropical troposphere. In the extratropics, differences between the deforested run and the control run are similar in magnitude and area to the differences between nonoverlapping segments of the control run. These simulations suggest that extratropical responses to complete tropical deforestation are unlikely to be distinguishable from natural climate variability.

  7. Control of propulsion and body lift during the first two stances of sprint running: a simulation study.

    Science.gov (United States)

    Debaere, Sofie; Delecluse, Christophe; Aerenhouts, Dirk; Hagman, Friso; Jonkers, Ilse

    2015-01-01

    The aim of this study was to relate the contribution of lower limb joint moments and individual muscle forces to the body centre of mass (COM) vertical and horizontal acceleration during the initial two steps of sprint running. Start performance of seven well-trained sprinters was recorded using an optoelectronic motion analysis system and two force plates. Participant-specific torque-driven and muscle-driven simulations were conducted in OpenSim to quantify, respectively, the contributions of the individual joints and muscles to body propulsion and lift. The ankle is the major contributor to both actions during the first two stances, with an even larger contribution in the second compared to the first stance. Biarticular gastrocnemius is the main muscle contributor to propulsion in the second stance. The contribution of the hip and knee depends highly on the position of the athlete: During the first stance, where the athlete runs in a forward bending position, the knee contributes primarily to body lift and the hip contributes to propulsion and body lift. In conclusion, a small increase in ankle power generation seems to affect the body COM acceleration, whereas increases in hip and knee power generation tend to affect acceleration less.

  8. Advances in social simulation 2015

    CERN Document Server

    Verbrugge, Rineke; Flache, Andreas; Roo, Gert; Hoogduin, Lex; Hemelrijk, Charlotte

    2017-01-01

    This book highlights recent developments in the field, presented at the Social Simulation 2015 conference in Groningen, The Netherlands. It covers advances both in applications and methods of social simulation. Societal issues addressed range across complexities in economic systems, opinion dynamics and civil violence, changing mobility patterns, different land-use, transition in the energy system, food production and consumption, ecosystem management and historical processes. Methodological developments cover how to use empirical data in validating models in general, formalization of behavioral theory in agent behavior, construction of artificial populations for experimentation, replication of models, and agent-based models that can be run in a web browser. Social simulation is a rapidly evolving field. Social scientists are increasingly interested in social simulation as a tool to tackle the complex non-linear dynamics of society. Furthermore, the software and hardware tools available for social simulation ...

  9. Simulations of NLC formation using a microphysical model driven by three-dimensional dynamics

    Science.gov (United States)

    Kirsch, Annekatrin; Becker, Erich; Rapp, Markus; Megner, Linda; Wilms, Henrike

    2014-05-01

    Noctilucent clouds (NLCs) are an optical phenomenon occurring in the polar summer mesopause region. These clouds have been known since the late 19th century. The current physical understanding of NLCs is based on numerous observational and theoretical studies, in recent years especially observations from satellites and by lidars from the ground. Theoretical studies based on numerical models that simulate NLCs with the underlying microphysical processes are uncommon. To date, no three-dimensional numerical simulations of NLCs exist that take all relevant dynamical scales into account, i.e., from the planetary scale down to gravity waves and turbulence. Rather, modeling is usually restricted to certain flow regimes. In this study we make a more rigorous attempt and simulate NLC formation in the environment of the general circulation of the mesopause region by explicitly including gravity-wave motions. For this purpose we couple the Community Aerosol and Radiation Model for Atmospheres (CARMA) to gravity-wave-resolving dynamical fields simulated beforehand with the Kuehlungsborn Mechanistic Circulation Model (KMCM). In our case, the KMCM is run with a horizontal resolution of T120, which corresponds to a minimum horizontal wavelength of 350 km. This restriction causes the resolved gravity waves to be somewhat biased to larger scales. The simulated general circulation is dynamically controlled by these waves in a self-consistent fashion and provides realistic temperatures and wind fields for July conditions. Assuming a water vapor mixing ratio profile in agreement with current observations results in reasonable supersaturations of up to 100. In a first step, CARMA is applied to a horizontal section covering the Northern Hemisphere. The vertical resolution is 120 levels ranging from 72 to 101 km. In this paper we present initial results of this coupled dynamical-microphysical model, focusing on the interaction of waves and turbulent diffusion with NLC microphysics.

  10. Spontaneous appetence for wheel-running: a model of dependency on physical activity in rat.

    Science.gov (United States)

    Ferreira, Anthony; Lamarque, Stéphanie; Boyer, Patrice; Perez-Diaz, Fernando; Jouvent, Roland; Cohen-Salmon, Charles

    2006-12-01

    Based on human observations of a syndrome of physical activity dependence and its consequences, we examined whether running activity in a free-activity paradigm, in which rats have free access to an activity wheel, may provide a valuable animal model of physical activity dependence and, more generally, of behavioral dependence. The pertinence of reactivity to novelty, a well-known predictor of pharmacological dependence, was also tested. Given the close linkage observed in humans between physical activity and drug use and abuse, the influence of free activity in activity wheels on reactivity to amphetamine injection and reactivity to novelty was also assessed. It appeared that (1) free access to a wheel may be used as a valuable model of physical activity addiction; (2) two populations differing in amount of activity also differed in dependence on wheel-running; (3) reactivity to novelty did not appear to be a predictive factor for physical activity dependence; (4) activity modified novelty reactivity; and (5) subjects who exhibited a high appetence for wheel-running presented a strong reactivity to amphetamine. These results propose a model of dependency on physical activity without any pharmacological intervention, and demonstrate the existence of individual differences in the development of this addiction. In addition, these data highlight the development of a likely vulnerability to pharmacological addiction after intense and sustained physical activity, as also described in humans. This model could therefore prove pertinent for studying behavioral dependencies and the underlying neurobiological mechanisms. These results may influence the way psychiatrists view behavioral dependencies and phenomena such as doping in sport or addiction to sport itself.

  11. Design of a multi-agent hydroeconomic model to simulate a complex human-water system: Early insights from the Jordan Water Project

    Science.gov (United States)

    Yoon, J.; Klassert, C. J. A.; Lachaut, T.; Selby, P. D.; Knox, S.; Gorelick, S.; Rajsekhar, D.; Tilmant, A.; Avisse, N.; Harou, J. J.; Gawel, E.; Klauer, B.; Mustafa, D.; Talozi, S.; Sigel, K.

    2015-12-01

    Our work focuses on development of a multi-agent, hydroeconomic model for purposes of water policy evaluation in Jordan. The model adopts a modular approach, integrating biophysical modules that simulate natural and engineered phenomena with human modules that represent behavior at multiple levels of decision making. The hydrologic modules are developed using spatially-distributed groundwater and surface water models, which are translated into compact simulators for efficient integration into the multi-agent model. For the groundwater model, we adopt a response matrix method approach in which a 3-dimensional MODFLOW model of a complex regional groundwater system is converted into a linear simulator of groundwater response by pre-processing drawdown results from several hundred numerical simulation runs. Surface water models for each major surface water basin in the country are developed in SWAT and similarly translated into simple rainfall-runoff functions for integration with the multi-agent model. The approach balances physically-based, spatially-explicit representation of hydrologic systems with the efficiency required for integration into a complex multi-agent model that is computationally amenable to robust scenario analysis. For the multi-agent model, we explicitly represent human agency at multiple levels of decision making, with agents representing riparian, management, supplier, and water user groups. The agents' decision making models incorporate both rule-based heuristics as well as economic optimization. The model is programmed in Python using Pynsim, a generalizable, open-source object-oriented code framework for modeling network-based water resource systems. The Jordan model is one of the first applications of Pynsim to a real-world water management case study. Preliminary results from a tanker market scenario run through year 2050 are presented in which several salient features of the water system are investigated: competition between urban and
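
    The response matrix idea at the heart of the groundwater module can be stated compactly. In the sketch below, `run_modflow` is a hypothetical stand-in for the full numerical model (the Jordan model pre-processes several hundred MODFLOW runs and carries a time dimension as well): one expensive unit-rate run per well yields a matrix that maps any pumping vector to drawdowns by a cheap linear operation.

```python
import numpy as np

def build_response_matrix(run_modflow, wells, obs_points, q_unit=1.0):
    """R[i, j] = drawdown at observation point i per unit pumping at
    well j, obtained from one full numerical-model run per well
    (expensive, done offline, once)."""
    R = np.zeros((len(obs_points), len(wells)))
    for j, well in enumerate(wells):
        drawdown = run_modflow(pumping={well: q_unit})
        R[:, j] = [drawdown[p] / q_unit for p in obs_points]
    return R

def drawdowns(R, q):
    """Inside the multi-agent model, drawdown for any pumping vector q
    is then the cheap superposition s = R q, valid while the aquifer
    response remains approximately linear."""
    return R @ np.asarray(q, float)
```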

  12. PhreeqcRM: A reaction module for transport simulators based on the geochemical model PHREEQC

    Science.gov (United States)

    Parkhurst, David L.; Wissmeier, Laurin

    2015-01-01

    PhreeqcRM is a geochemical reaction module designed specifically to perform equilibrium and kinetic reaction calculations for reactive transport simulators that use an operator-splitting approach. The basic function of the reaction module is to take component concentrations from the model cells of the transport simulator, run geochemical reactions, and return updated component concentrations to the transport simulator. If multicomponent diffusion is modeled (e.g., Nernst–Planck equation), then aqueous species concentrations can be used instead of component concentrations. The reaction capabilities are a complete implementation of the reaction capabilities of PHREEQC. In each cell, the reaction module maintains the composition of all of the reactants, which may include minerals, exchangers, surface complexers, gas phases, solid solutions, and user-defined kinetic reactants. PhreeqcRM assigns initial and boundary conditions for model cells based on standard PHREEQC input definitions (files or strings) of chemical compositions of solutions and reactants. Additional PhreeqcRM capabilities include methods to eliminate reaction calculations for inactive parts of a model domain, transfer concentrations and other model properties, and retrieve selected results. The module demonstrates good scalability for parallel processing by using multiprocessing with MPI (message passing interface) on distributed memory systems, and limited scalability using multithreading with OpenMP on shared memory systems. PhreeqcRM is written in C++, but interfaces allow methods to be called from C or Fortran. By using the PhreeqcRM reaction module, an existing multicomponent transport simulator can be extended to simulate a wide range of geochemical reactions. Results of the implementation of PhreeqcRM as the reaction engine for transport simulators PHAST and FEFLOW are shown by using an analytical solution and the reactive transport benchmark of MoMaS.
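
    The operator-splitting handshake the module is designed around reduces to a short loop. This is only a schematic with generic callables; PhreeqcRM's real interface exchanges concentration arrays through its C++/C/Fortran methods.

```python
def simulate(conc, transport, react, n_steps, dt):
    """Operator-splitting time loop: the transport code moves component
    concentrations on the grid, then the reaction module (PhreeqcRM's
    role) reacts each cell's chemistry and hands the updated
    concentrations back for the next step. `transport` and `react` are
    hypothetical stand-ins for the two coupled codes."""
    for _ in range(n_steps):
        conc = transport(conc, dt)   # advection/dispersion step
        conc = react(conc, dt)       # equilibrium/kinetic chemistry step
    return conc
```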

  13. The ATLAS Trigger Simulation with Legacy Software

    CERN Document Server

    Bernius, Catrin; The ATLAS collaboration

    2017-01-01

    Physics analyses at the LHC which search for rare physics processes or measure Standard Model parameters with high precision require accurate simulations of the detector response and the event selection processes. The accurate simulation of the trigger response is crucial for determining overall selection efficiencies and signal sensitivities. For the generation and reconstruction of simulated event data, the most recent software releases are generally used to ensure the best agreement between simulated data and real data. For the simulation of the trigger selection process, however, ideally the same software release with which the real data were taken should be used. This potentially requires running software dating many years back, the so-called legacy software. A strategy for running legacy software in a modern environment therefore becomes essential when data simulated for past years start to represent a sizeable fraction of the total. The requirements and possibilities for such a simulatio...

  14. Modelling of Muscle Force Distributions During Barefoot and Shod Running

    Directory of Open Access Journals (Sweden)

    Sinclair Jonathan

    2015-09-01

    Full Text Available Research interest in barefoot running has expanded considerably in recent years, based around the notion that running without shoes is associated with a reduced incidence of chronic injuries. The aim of the current investigation was to examine the differences in the forces produced by different skeletal muscles during barefoot and shod running. Fifteen male participants ran at 4.0 m·s-1 (±5%). Kinematics were measured using an eight-camera motion analysis system alongside ground reaction force parameters. Differences in sagittal plane kinematics and muscle forces between footwear conditions were examined using repeated-measures or Friedman's ANOVA. The kinematic analysis showed that the shod condition was associated with significantly more hip flexion, whilst barefoot running was linked with significantly more flexion at the knee and plantarflexion at the ankle. The examination of muscle kinetics indicated that peak forces from the rectus femoris, vastus medialis, vastus lateralis and tibialis anterior were significantly larger in the shod condition, whereas gastrocnemius forces were significantly larger during barefoot running. These observations provide further insight into the mechanical alterations that runners make when running without shoes. Such findings may also deliver important information to runners regarding their susceptibility to chronic injuries in different footwear conditions.

  15. The Run 2 ATLAS Analysis Event Data Model

    CERN Document Server

    SNYDER, S; The ATLAS collaboration; NOWAK, M; EIFERT, T; BUCKLEY, A; ELSING, M; GILLBERG, D; MOYSE, E; KOENEKE, K; KRASZNAHORKAY, A

    2014-01-01

    During the LHC's first Long Shutdown (LS1) ATLAS set out to establish a new analysis model, based on the experience gained during Run 1. A key component of this is a new Event Data Model (EDM), called the xAOD. This format, which is now in production, provides the following features: A separation of the EDM into interface classes that the user code directly interacts with, and data storage classes that hold the payload data. The user sees an Array of Structs (AoS) interface, while the data is stored in a Struct of Arrays (SoA) format in memory, thus making it possible to efficiently auto-vectorise reconstruction code. A simple way of augmenting and reducing the information saved for different data objects. This makes it possible to easily decorate objects with new properties during data analysis, and to remove properties that the analysis does not need. A persistent file format that can be explored directly with ROOT, either with or without loading any additional libraries. This allows fast interactive naviga...
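
    The interface-versus-storage separation and the 'decoration' feature can be mimicked in a few lines. The toy below illustrates the concept only; the real xAOD EDM is C++, and its persistence goes through ROOT.

```python
import numpy as np

class _Proxy:
    """Array-of-Structs style view of a single object in the container."""
    def __init__(self, store, i):
        self._store, self._i = store, i
    def __getattr__(self, name):
        return self._store[name][self._i]

class Container:
    """Payload held as Struct-of-Arrays (one contiguous array per
    property, friendly to vectorized code); users see object proxies."""
    def __init__(self, **arrays):
        self._store = {k: np.asarray(v) for k, v in arrays.items()}
    def decorate(self, name, values):
        """'Decoration': augment every object with a new property."""
        self._store[name] = np.asarray(values)
    def column(self, name):
        """Direct SoA access for bulk, vectorized operations."""
        return self._store[name]
    def __len__(self):
        return len(next(iter(self._store.values())))
    def __getitem__(self, i):
        if not 0 <= i < len(self):
            raise IndexError(i)
        return _Proxy(self._store, i)

electrons = Container(pt=[41.2, 27.9], eta=[0.3, -1.1])
electrons.decorate("is_tight", [True, False])
for el in electrons:                    # AoS-style per-object access
    print(el.pt, el.is_tight)
electrons.column("pt")[:] *= 1.01       # SoA-style bulk operation
```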

  16. LOFT Engineering Simulator

    International Nuclear Information System (INIS)

    Venhuizen, J.R.

    1982-02-01

    The LOFT Engineering Simulator was developed to supply plant equivalent data for evaluating graphic aids and advanced control concepts for nuclear plant operators. The Simulator, a combination of hardware and software, combines some of the features of best estimate (safety analysis) computer codes with reactor operator training simulators. The LOFT Engineering Simulator represents an attempt to develop a simulation with sufficient physical detail (solution of the conservation equations) for moderate accident simulation, but which will still run in real time and provide an interface for the operator to interact with the model. As a result of this combination, a real time simulation of the LOFT plant has been developed which yields realistic transient results. These data can be used for evaluating reactor control room aids such as Safety Parameter Displays and Janus Predictive Displays

  17. A Novel Observation-Guided Approach for Evaluating Mesoscale Convective Systems Simulated by the DOE ACME Model

    Science.gov (United States)

    Feng, Z.; Ma, P. L.; Hardin, J. C.; Houze, R.

    2017-12-01

    Mesoscale convective systems (MCSs) are the largest type of convective storms that develop when convection aggregates and induces mesoscale circulation features. Over North America, MCSs contribute over 60% of the total warm-season precipitation and over half of the extreme daily precipitation in the central U.S. Our recent study (Feng et al. 2016) found that the observed increases in springtime total and extreme rainfall in this region are dominated by increased frequency and intensity of long-lived MCSs*. To date, global climate models typically do not run at a resolution high enough to explicitly simulate individual convective elements and may not have adequate process representations for MCSs, resulting in a large deficiency in projecting changes of the frequency of extreme precipitation events in future climate. In this study, we developed a novel observation-guided approach specifically designed to evaluate simulated MCSs in the Department of Energy's climate model, Accelerated Climate Modeling for Energy (ACME). The ACME model has advanced treatments for convection and subgrid variability and for this study is run at 25 km and 100 km grid spacings. We constructed a robust MCS database consisting of over 500 MCSs from 3 warm-season observations by applying a feature-tracking algorithm to 4-km resolution merged geostationary satellite and 3-D NEXRAD radar network data over the Continental US. This high-resolution MCS database is then down-sampled to the 25 and 100 km ACME grids to re-characterize key MCS properties. The feature-tracking algorithm is adapted with the adjusted characteristics to identify MCSs from ACME model simulations. We demonstrate that this new analysis framework is useful for evaluating ACME's warm-season precipitation statistics associated with MCSs, and provides insights into the model process representations related to extreme precipitation events for future improvement. *Feng, Z., L. R. Leung, S. Hagos, R. A. Houze, C. D. Burleyson

  18. Study of Monte Carlo Simulation Method for Methane Phase Diagram Prediction using Two Different Potential Models

    KAUST Repository

    Kadoura, Ahmad

    2011-06-06

    Lennard-Jones (L-J) and Buckingham exponential-6 (exp-6) potential models were used to produce isotherms for methane at temperatures below and above the critical temperature. A molecular simulation approach, particularly Monte Carlo simulation, was employed to create these isotherms, working with both canonical and Gibbs ensembles. Experiments in the canonical ensemble with each model were conducted to estimate pressures at a range of temperatures above the methane critical temperature. Results were collected and compared to experimental data existing in the literature; both models showed close agreement with the experimental data. In parallel, experiments below the critical temperature were run in the Gibbs ensemble using the L-J model only. Upon comparing the results with experimental ones, a good fit was obtained with small deviations. The work was further developed by adding statistical studies in order to achieve a better understanding and interpretation of the quantities estimated by the simulation. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be; hence further applications to more complicated systems are considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determination of elemental sulfur solubility conditions helps avoid the problems caused by its dissolution during gas production and transportation. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate elemental sulfur phase behavior in sour natural gas mixtures.
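
    For reference, these are the two pair potentials compared in this work, in one common parameterization (well depth ε for both; σ is the L-J zero crossing, r_m the location of the exp-6 minimum, and α a stiffness parameter; conventions vary between authors):

```latex
U_{\mathrm{LJ}}(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12}
                                      -\left(\frac{\sigma}{r}\right)^{6}\right],
\qquad
U_{\mathrm{exp\text{-}6}}(r) = \frac{\varepsilon}{1-6/\alpha}
      \left[\frac{6}{\alpha}\,e^{\alpha\,(1-r/r_m)}
            -\left(\frac{r_m}{r}\right)^{6}\right].
```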

  19. Simulating gas-aerosol-cirrus interactions: Process-oriented microphysical model and applications

    Directory of Open Access Journals (Sweden)

    B. Kärcher

    2003-01-01

    Full Text Available This work describes a process-oriented, microphysical-chemical model to simulate the formation and evolution of aerosols and ice crystals under the conditions prevailing in the upper troposphere and lower stratosphere. The model can be run as a box model or along atmospheric trajectories, and considers mixing, gas phase chemistry of aerosol precursors, binary homogeneous aerosol nucleation, homogeneous and heterogeneous ice nucleation, coagulation, condensation and dissolution, gas retention during particle freezing, gas trapping in growing ice crystals, and reverse processes. Chemical equations are solved iteratively using a second order implicit integration method. Gas-particle interactions and coagulation are treated over various size structures, with fully mass conserving and non-iterative numerical solution schemes. Particle types include quinternary aqueous solutions composed of H2SO4, HNO3, HCl, and HBr with and without insoluble components, insoluble aerosol particles, and spherical or columnar ice crystals deriving from each aerosol type separately. Three case studies are discussed in detail to demonstrate the potential of the model to simulate real atmospheric processes and to highlight current research topics concerning aerosol and cirrus formation near the tropopause. Emphasis is placed on how the formation of cirrus clouds and the scavenging of nitric acid in cirrus depends on small-scale temperature fluctuations and the presence of efficient ice nuclei in the tropopause region, corroborating and partly extending the findings of previous studies.

  20. Threshold effects on renormalization group running of neutrino parameters in the low-scale seesaw model

    International Nuclear Information System (INIS)

    Bergstroem, Johannes; Ohlsson, Tommy; Zhang He

    2011-01-01

    We show that, in the low-scale type-I seesaw model, renormalization group running of neutrino parameters may lead to significant modifications of the leptonic mixing angles in view of so-called seesaw threshold effects. Especially, we derive analytical formulas for radiative corrections to neutrino parameters in crossing the different seesaw thresholds, and show that there may exist enhancement factors efficiently boosting the renormalization group running of the leptonic mixing angles. We find that, as a result of the seesaw threshold corrections to the leptonic mixing angles, various flavor symmetric mixing patterns (e.g., bi-maximal and tri-bimaximal mixing patterns) can be easily accommodated at relatively low energy scales, which is well within the reach of running and forthcoming experiments (e.g., the LHC).

  1. Managed Readiness Simulator (MARS) V2: Implementation of the Managed Readiness Model

    Science.gov (United States)

    2010-12-01

    The SRDB architecture is described in detail in [6]. [Figure 5: MARS V2 simulation architecture: Arena process logic on top of an algorithm layer (VBA and SQL; VBA runtime code issuing SQL such as "UPDATE Table SET Rank = 5") and a database layer.] The MARS managed readiness...database layer below it. Using VBA blocks, the algorithm layer can execute complex data operations on the database layer using SQL and can return

  2. A bobsleigh simulator software

    Energy Technology Data Exchange (ETDEWEB)

    Rempfler, Georg S., E-mail: georg.rempfler@alumni.ethz.ch [ETH Zurich, CLA G23.3, IMES—Center of Mechanics (Switzerland); Glocker, Christoph, E-mail: glocker@imes.mavt.ethz.ch [ETH Zurich, CLA J23.1, IMES—Center of Mechanics (Switzerland)

    2016-03-15

    This paper presents a model of the artificial ice track in Whistler, Canada, based on its construction data, and a model of a two-man bobsleigh consisting of nine rigid bodies, having 13 degrees of freedom and incorporating 17 hard frictional contacts. These models are implemented within a simulator that is capable of performing accurate real-time simulations of piloted runs on commonly available PC hardware. The simulation is verified against the results of the official two-man race that took place during the Olympic Winter Games in 2010. The simulator has been used by several professional Swiss pilots during their preparation for the 2014 Olympic Winter Games in Sochi, Russia. The simulator is exploited to analyse and judge the range of possible driving lines with regard to speed and runtime improvements. It could also serve to advise track designers on safety issues and sleigh constructors on the expected dynamics on a track.

  4. Advances in Intelligent Modelling and Simulation Simulation Tools and Applications

    CERN Document Server

    Oplatková, Zuzana; Carvalho, Marco; Kisiel-Dorohinicki, Marek

    2012-01-01

    The human capacity to abstract complex systems and phenomena into simplified models has played a critical role in the rapid evolution of our modern industrial processes and scientific research. As a science and an art, Modelling and Simulation have been one of the core enablers of this remarkable human trace, and have become a topic of great importance for researchers and practitioners. This book was created to compile some of the most recent concepts, advances, challenges and ideas associated with Intelligent Modelling and Simulation frameworks, tools and applications. The first chapter discusses the important aspects of a human interaction and the correct interpretation of results during simulations. The second chapter gets to the heart of the analysis of entrepreneurship by means of agent-based modelling and simulations. The following three chapters bring together the central theme of simulation frameworks, first describing an agent-based simulation framework, then a simulator for electrical machines, and...

  5. Capturing flood-to-drought transitions in regional climate model simulations

    Science.gov (United States)

    Anders, Ivonne; Haslinger, Klaus; Hofstätter, Michael; Salzmann, Manuela; Resch, Gernot

    2017-04-01

    In previous studies, atmospheric cyclones have been investigated in terms of related precipitation extremes in Central Europe. Mediterranean (Vb-like) cyclones are of special relevance, as they are frequently related to high atmospheric moisture fluxes leading to floods and landslides in the Alpine region. Another focus in this area is on droughts, which affect soil moisture as well as surface and sub-surface runoff. Such events develop differently depending on the pre-existing saturation of water in the soil. In a first step we investigated two time periods which encompass a flood event and a subsequent drought on very different time scales: one long-lasting transition (2002/2003) and a rather short one between May and August 2013. In a second step we extended the investigation to the long period 1950-2016. We focused on high spatial and temporal scales and assessed the currently achievable accuracy in simulating the Vb events on the one hand and the subsequent drought events on the other. The state-of-the-art regional climate model CCLM is applied in hindcast mode, simulating the single events described above but also the period from 1948 to 2016, in order to check that the results from the short runs remain valid for the long period. Besides the conventional forcing of the regional climate model at its lateral boundaries, a spectral nudging technique is applied. In the simulations covering the European domain, different model parameters have been varied systematically. The resulting precipitation amounts have been compared to the E-OBS gridded European precipitation data set and a recent spatially highly resolved precipitation data set for Austria (GPARD-6). For the drought events, the Standardized Precipitation Evapotranspiration Index (SPEI), soil moisture and runoff have been investigated. Varying the spectral nudging setup helps us to understand the 3D processes during these events, but also to identify model deficiencies. To improve the simulation of such events in the past

  6. Notes on modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Redondo, Antonio [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-10

    These notes present a high-level overview of how modeling and simulation are carried out by practitioners. The discussion is of a general nature; no specific techniques are examined but the activities associated with all modeling and simulation approaches are briefly addressed. There is also a discussion of validation and verification and, at the end, a section on why modeling and simulation are useful.

  7. Cosimo: a cognitive simulation model of human decision making and behaviour in complex work environments

    International Nuclear Information System (INIS)

    Cacciabue, P.C.; Decortis, F.; Nordvik, J.P.; Drozdowicz, B.; Masson, M.

    1992-01-01

    In this paper the Cognitive Simulation Model (COSIMO), currently implemented at the Ispra JRC, is described, with particular emphasis on its theoretical foundations, its computational implementation and a number of simulated cases of man-machine system interactions. COSIMO runs on a Lisp machine and interacts with the simulation of the physical system implemented on a Sun computer. In our case the physical system is a typical nuclear power plant subsystem, the Auxiliary Feed-Water System (AFWS). One basic application is to explore human behaviour in simulated accident situations in order to identify suitable safety recommendations. More specifically, COSIMO can be used to: - analyse how operators are likely to act in a particular context, - identify difficult problem-solving situations, given problem-solving resources and constraints (operator knowledge, man-machine interfaces, procedures), - identify situations that can lead to human errors and evaluate their consequences, - identify and test conditions for error recovery, - investigate the effects of changes in the man-machine system. Since the modelling of the AFWS, its control system and its procedures has already been described in detail (Cacciabue et al., 1990a), the objective of this paper is to present the state of the art of the COSIMO simulation

  8. Thermal unit availability modeling in a regional simulation model

    International Nuclear Information System (INIS)

    Yamayee, Z.A.; Port, J.; Robinett, W.

    1983-01-01

    The System Analysis Model (SAM) developed under the umbrella of PNUCC's System Analysis Committee is capable of simulating the operation of a given load/resource scenario. This model employs a Monte-Carlo simulation to incorporate uncertainties. Among the uncertainties modeled is thermal unit availability, both for energy simulations (seasonal) and capacity simulations (hourly). This paper presents the availability modeling in the capacity and energy models. The use of regional and national data in deriving the two availability models, the interaction between the two, and the modifications made to the capacity model in order to reflect regional practices are presented. A sample problem is presented to illustrate the modification process. Results for modeling a nuclear unit using NERC-GADS data are presented.
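
    For the hourly (capacity) side, the core of such an availability treatment can be sketched as a two-state Markov chain. The parameter names below are illustrative, not SAM's, and the real model also handles deratings and seasonal effects.

```python
import random

def hourly_availability(n_hours, mtbf_h, mttr_h, seed=0):
    """One Monte Carlo draw of a thermal unit's hourly up/down state,
    as a two-state Markov chain: per-hour failure and repair
    probabilities are chosen so the long-run unavailability matches a
    target forced-outage rate FOR = MTTR / (MTBF + MTTR)."""
    rng = random.Random(seed)
    p_fail, p_repair = 1.0 / mtbf_h, 1.0 / mttr_h
    up, series = True, []
    for _ in range(n_hours):
        series.append(up)
        up = (rng.random() >= p_fail) if up else (rng.random() < p_repair)
    return series
```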

  9. Bridging the scales in atmospheric composition simulations using a nudging technique

    Science.gov (United States)

    D'Isidoro, Massimo; Maurizi, Alberto; Russo, Felicita; Tampieri, Francesco

    2010-05-01

    Studying the interaction between climate and anthropogenic activities, specifically those concentrated in megacities/hot spots, requires describing processes across a very wide range of scales, from the local scale, where anthropogenic emissions are concentrated, to the global scale, where the impact of these sources is of interest. Describing all processes at all scales within the same numerical implementation is not feasible because of limited computer resources. Therefore, different phenomena are studied by means of different numerical models that cover different ranges of scales. The exchange of information from small to large scales is highly non-trivial, though of high interest. In fact, uncertainties in large-scale simulations are expected to receive a large contribution from the most polluted areas, where the highly inhomogeneous distribution of sources, combined with the intrinsic non-linearity of the processes involved, can generate non-negligible departures between coarse- and fine-scale simulations. In this work a new method is proposed and investigated in a case study (August 2009) using the BOLCHEM model. Monthly simulations at coarse (0.5°, European domain, run A) and fine (0.1°, Central Mediterranean domain, run B) horizontal resolution are performed, using the coarse resolution as the boundary condition for the fine one. Then another coarse-resolution run (run C) is performed, in which the high-resolution fields remapped onto the coarse grid are used to nudge the concentrations over the Po Valley area. The nudging is applied to all gas and aerosol species of BOLCHEM. Averaged concentrations and variances of O3 and PM over the Po Valley and other selected areas are computed. It is observed that although the variance of run B is markedly larger than that of run A, the variance of run C is smaller, because the remapping procedure removes a large portion of the variance from the run B fields. Mean concentrations show some differences depending on species: in general mean
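
    The concentration nudging used for run C amounts to Newtonian relaxation toward the remapped fine-scale fields inside the target area; schematically (illustrative names, not the BOLCHEM code):

```python
import numpy as np

def nudge_step(c_coarse, c_fine_on_coarse, mask, dt, tau):
    """One nudging update: inside the masked area (here, the Po Valley),
    relax each coarse-grid concentration toward the fine-run field
    remapped onto the coarse grid, with relaxation timescale tau."""
    increment = (c_fine_on_coarse - c_coarse) * (dt / tau)
    return np.where(mask, c_coarse + increment, c_coarse)
```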

  10. Performance evaluation and financial market runs

    NARCIS (Netherlands)

    Wagner, W.B.

    2013-01-01

    This paper develops a model in which performance evaluation causes runs by fund managers and results in asset fire sales. Performance evaluation nonetheless is efficient as it disciplines managers. Optimal performance evaluation combines absolute and relative components in order to make runs less

  11. Running economy and energy cost of running with backpacks.

    Science.gov (United States)

    Scheer, Volker; Cramer, Leoni; Heitkamp, Hans-Christian

    2018-05-02

    Running is a popular recreational activity and additional weight is often carried in backpacks on longer runs. Our aim was to examine running economy and other physiological parameters while running with a 1 kg and a 3 kg backpack at different submaximal running velocities. 10 male recreational runners (age 25 ± 4.2 years, VO2peak 60.5 ± 3.1 ml·kg-1·min-1) performed runs on a motorized treadmill of 5 minutes duration at three different submaximal speeds of 70, 80 and 90% of the speed at the anaerobic lactate threshold (sLT), without additional weight and carrying a 1 kg and a 3 kg backpack. Oxygen consumption, heart rate, lactate and RPE were measured and analysed. Oxygen consumption, energy cost of running and heart rate increased significantly while running with a backpack weighing 3 kg compared to running without additional weight at 80% sLT (p=0.026, p=0.009 and p=0.003) and at 90% sLT (p<0.001, p=0.001 and p=0.001). Running with a 1 kg backpack showed a significant increase in heart rate at 80% sLT (p=0.008) and a significant increase in oxygen consumption and heart rate at 90% sLT (p=0.045 and p=0.007) compared to running without additional weight. At 70% sLT, running economy and cardiovascular effort increased with weighted-backpack running compared to running without additional weight; however, these increases did not reach statistical significance. Running economy deteriorates and cardiovascular effort increases while running with additional backpack weight, especially at higher submaximal running speeds. Backpack weight should therefore be kept to a minimum.

  12. Wave Run-up on the Zeebrugge Rubble Mound Breakwater

    DEFF Research Database (Denmark)

    De Rouck, Julien; Van de Walle, Björn; Troch, Peter

    2007-01-01

    A clear difference between full-scale wave run-up measurements and small-scale model test results had been noticed during a MAST II project. This finding initiated a thorough study of wave run-up through the European MAST III OPTICREST project. Full-scale measurements have been carried out...... on the Zeebrugge rubble mound breakwater. This breakwater has been modeled in three laboratories: two 2D models at a scale of 1:30 and one 3D model at a scale of 1:40 have been built at Flanders Hydraulics (Belgium), at Universidad Politécnica de Valencia (Spain), and at Aalborg University (Denmark). Wave run......-up has been measured by a digital run-up gauge. This gauge has proven to measure wave run-up more accurately than the traditional wire gauge. Wave spectra measured in Zeebrugge have been reproduced in the laboratories. Results of small-scale model tests and full-scale measurements have been...

  13. Long-Run Neutrality and Superneutrality in an ARIMA Framework.

    OpenAIRE

    Fisher, Mark E; Seater, John J

    1993-01-01

    The authors formalize long-run neutrality and long-run superneutrality in the context of a bivariate ARIMA model; show how the restrictions implied by long-run neutrality and long-run superneutrality depend on the orders of integration of the variables; apply their analysis to previous work, showing how that work is related to long-run neutrality and long-run superneutrality; and provide some new evidence on long-run neutrality and long-run superneutrality. Copyright 1993 by American Economic...

  14. Design and simulation of a control system for a run-off-river power plant; Entwurf und Simulation einer Staustufenregelung

    Energy Technology Data Exchange (ETDEWEB)

    Ott, C.

    2000-07-01

    In run-off-river plants with low discharge and under head control, changes of inflow lead to amplified changes of outflow. In this thesis a frequency-domain-based design procedure is introduced, which makes it possible to add an inflow-dependent feedforward signal to the head controller of conventional combined head- and flow-controllers. This efficiently minimizes the discharge amplification. The non-linearity of the channel reach is taken into consideration by adapting the settings of the controller to the actual discharge. The development of a time-domain-based program system, taking into account all non-linearities of a run-off-river plant, is described. Using different test functions, the capability of the improved combined head- and flow-control can be demonstrated. In both the time and the frequency domain it is shown that the quality of control is not influenced to a significant extent by the inevitable inaccuracies in the description of the channel reach and in the measurement of the actual inflow and outflow. (orig.) [German original, translated: The thesis offers a solution to the problem that, in the low-discharge range of head-controlled barrages, inflow changes are passed on to downstream users amplified by the barrage. As a solution, a frequency-domain-based design procedure is presented with which the customary head-flow (OW-Q) control can be extended by an inflow-dependent feedforward to the level controller. Together with the feedforward of the inflow to the discharge controller, the discharge amplification is thereby clearly reduced. The non-linearity of the controlled reach (the impoundment) is taken into account by adapting the controller parameters to the barrage discharge. Furthermore, the development of a program system for the non-linear time-domain simulation of a chain of barrages is described. With it, the capability of the improved OW-Q control can be demonstrated for various load cases. In the time and frequency domains it is]
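
    The combined head- and flow-control with inflow feedforward described above can be sketched in a few lines. The following Python fragment is a hypothetical illustration only: the discrete PI form, the gains and the function name are assumptions, not the design procedure of the thesis.

        def ow_q_step(level, level_set, inflow, state, dt,
                      kp=0.8, ki=0.02, kf=1.0):
            """One discrete step of a combined head/flow (OW-Q) controller
            with an inflow-dependent feedforward term (illustrative only).

            level, level_set : measured and target headwater level [m]
            inflow           : measured plant inflow [m^3/s]
            state            : dict carrying the PI integrator
            Returns the commanded outflow [m^3/s].
            """
            error = level_set - level          # positive -> level too low
            state["i"] = state.get("i", 0.0) + ki * error * dt
            pi_term = kp * error + state["i"]
            # Feedforward: pass the inflow straight through to the outflow
            # command, so the reach does not amplify inflow changes.
            return kf * inflow - pi_term

        state = {}
        q_out = ow_q_step(level=3.95, level_set=4.0, inflow=120.0,
                          state=state, dt=60.0)
        print(q_out)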

  15. Simulating flow around scaled model of a hypersonic vehicle in wind tunnel

    Science.gov (United States)

    Markova, T. V.; Aksenov, A. A.; Zhluktov, S. V.; Savitsky, D. V.; Gavrilov, A. D.; Son, E. E.; Prokhorov, A. N.

    2016-11-01

    A prospective hypersonic HEXAFLY aircraft is considered in the given paper. In order to obtain the aerodynamic characteristics of a new construction design of the aircraft, experiments with a scaled model have been carried out in a wind tunnel under different conditions. The runs have been performed at different angles of attack, with and without hydrogen combustion in the scaled propulsion engine. However, the measured physical quantities do not provide all the information about the flowfield. Numerical simulation can complement the experimental data as well as reduce the number of wind tunnel experiments. Besides that, reliable CFD software can be used to calculate the aerodynamic characteristics of any possible design of the full-scale aircraft under different operating conditions. The reliability of the numerical predictions must be confirmed in a verification study of the software. The present work is aimed at numerical investigation of the flowfield around and inside the scaled model of the HEXAFLY-CIAM module under wind tunnel conditions. A cold run (without combustion) was selected for this study. The calculations are performed in the FlowVision CFD software. The flow characteristics are compared against the available experimental data. The verification study carried out confirms the capability of the FlowVision CFD software to calculate the flows discussed.

  16. Electrolytic reduction runs of 0.6 kg scale-simulated oxide fuel in a Li2O-LiCl molten salt using metal anode shrouds

    Science.gov (United States)

    Choi, Eun-Young; Lee, Jeong; Heo, Dong Hyun; Lee, Sang Kwon; Jeon, Min Ku; Hong, Sun Seok; Kim, Sung-Wook; Kang, Hyun Woo; Jeon, Sang-Chae; Hur, Jin-Mok

    2017-06-01

    Ten electrolytic reduction or oxide reduction (OR) runs of 0.6 kg scale-simulated oxide fuel in a Li2O-LiCl molten salt at 650 °C were conducted using metal anode shrouds. During this procedure, an anode shroud surrounds a platinum anode and discharges hot oxygen gas from the salt to the outside of the OR apparatus, thereby preventing corrosion of the apparatus. In this study, a number of anode shrouds made of various metals were tested. Each metallic anode shroud consisted of a lower porous shroud for the salt phase and an upper nonporous shroud for the gas phase. A five-ply stainless steel (STS) wire mesh was the material commonly used for the lower porous shroud in the OR runs. The metals tested for the upper nonporous shroud in the different OR runs were STS, nickel, and platinum- or silver-lined nickel. The lower porous shroud showed no significant damage during two consecutive OR runs, but exhibited signs of damage after three or more runs due to thermal stress. The upper nonporous shrouds made of either platinum- or silver-lined nickel showed excellent corrosion resistance to hot oxygen gas, while STS or nickel without any platinum or silver lining exhibited poor corrosion resistance.

  17. General introduction to simulation models

    DEFF Research Database (Denmark)

    Hisham Beshara Halasa, Tariq; Boklund, Anette

    2012-01-01

    trials. However, if simulation models are to be used, good-quality input data must be available. To model FMD, several disease spread models are available. For this project, we chose three simulation models: Davis Animal Disease Spread (DADS), which has been upgraded to DTU-DADS, InterSpread Plus (ISP......Monte Carlo simulation can be defined as a representation of real-life systems to gain insight into their functions and to investigate the effects of alternative conditions or actions on the modeled system. Models are a simplification of a system. Most often, it is best to use experiments and field...... trials to investigate the effect of alternative conditions or actions on a specific system. Nonetheless, field trials are expensive and sometimes not possible to conduct, as in the case of foot-and-mouth disease (FMD). Instead, simulation models can be a good and cheap substitute for experiments and field...

  18. Finite element modelling of Plantar Fascia response during running on different surface types

    Science.gov (United States)

    Razak, A. H. A.; Basaruddin, K. S.; Salleh, A. F.; Rusli, W. M. R.; Hashim, M. S. M.; Daud, R.

    2017-10-01

    The plantar fascia is a ligament in the human foot located beneath the skin that functions to stabilize the longitudinal arch of the foot during standing and normal gait. Performing direct experiments on the plantar fascia is very difficult, since the structure is located underneath the soft tissue. The aim of this study is to develop a finite element (FE) model of the foot with plantar fascia and investigate the effect of surface hardness on the biomechanical response of the plantar fascia during running. The plantar fascia model was developed using Solidworks 2015 according to the bone structure of a foot model obtained from the Turbosquid database. Boundary conditions were set based on data obtained from experiments on ground reaction force response during running on surfaces of different hardness. The finite element analysis was performed using Ansys 14. The results showed that the peaks of the stress and strain distributions occurred at the insertion of the plantar fascia to the bone, especially in the calcaneal area. The plantar fascia became stiffer with increasing Young's modulus and was able to resist more load. Strain in the plantar fascia decreased as Young's modulus increased under the same loading.

  19. An intelligent dynamic simulation environment: An object-oriented approach

    International Nuclear Information System (INIS)

    Robinson, J.T.; Kisner, R.A.

    1988-01-01

    This paper presents a prototype simulation environment for nuclear power plants which illustrates the application of object-oriented programming to process simulation. Systems are modeled using this technique as a collection of objects which communicate via message passing. The environment allows users to build simulation models by selecting iconic representations of plant components from a menu and connecting them with the aid of a mouse. Models can be modified graphically at any time, even as the simulation is running, and the results observed immediately via real-time graphics. This prototype illustrates the use of object-oriented programming to create a highly interactive and automated simulation environment. 9 refs., 4 figs
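
    The object/message architecture described in this record can be illustrated with a minimal event-driven sketch. The Python below is a toy under stated assumptions, not the prototype's code: the component names, the scheduler and the message format are invented for illustration.

        import heapq

        class Component:
            """Plant component modelled as an object; components interact
            only by exchanging timestamped messages via the scheduler."""
            def __init__(self, name, sim):
                self.name, self.sim = name, sim
            def send(self, target, delay, payload):
                self.sim.post(target, self.sim.now + delay, payload)
            def receive(self, payload):
                print(f"{self.sim.now:6.1f}s  {self.name} <- {payload}")

        class Simulator:
            def __init__(self):
                self.queue, self.now, self._n = [], 0.0, 0
            def post(self, target, time, payload):
                self._n += 1                    # tie-breaker for equal times
                heapq.heappush(self.queue, (time, self._n, target, payload))
            def run(self):
                while self.queue:
                    self.now, _, target, payload = heapq.heappop(self.queue)
                    target.receive(payload)

        sim = Simulator()
        pump = Component("pump", sim)
        valve = Component("valve", sim)
        pump.send(valve, 2.0, "open")
        sim.run()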

  20. NASA One-Dimensional Combustor Simulation--User Manual for S1D_ML

    Science.gov (United States)

    Stueber, Thomas J.; Paxson, Daniel E.

    2014-01-01

    The work presented in this paper is intended to promote research leading to a closed-loop control system to actively suppress thermo-acoustic instabilities. To serve as a model for such a closed-loop control system, a one-dimensional combustor simulation has been written using MATLAB software tools. This MATLAB-based process is similar to a precursor one-dimensional combustor simulation that was formatted as FORTRAN 77 source code. The previous simulation process requires modifying the FORTRAN 77 source code, compiling, and linking when creating a new combustor simulation executable file. The MATLAB-based simulation does not require making changes to the source code, recompiling, or linking. Furthermore, the MATLAB-based simulation can be run from script files within the MATLAB environment or with a compiled copy of the executable file running in the Command Prompt window, without requiring a licensed copy of MATLAB. This report presents a general simulation overview. Details regarding how to set up and initiate a simulation are also presented. Finally, the post-processing section describes the two types of files created while running the simulation and includes simulation results for a default simulation included with the source code.

  1. Short-run and Current Analysis Model in Statistics

    Directory of Open Access Journals (Sweden)

    Constantin Anghelache

    2006-01-01

    Full Text Available Using the short-run statistical indicators is a compulsory requirement implied in the current analysis. Therefore, a system of EUROSTAT short-run indicators has been set up in this respect, being recommended for utilization by the member countries. On the basis of these indicators, regular, usually monthly, analyses are carried out in respect of: the determination of production dynamics; the evaluation of the short-run investment volume; the development of the turnover; the wage evolution; the employment; the price indexes and the consumer price index (inflation); the volume of exports and imports, the extent to which the imports are covered by the exports, and the trade balance position. The EUROSTAT system of indicators of conjuncture is conceived as an open system, so that it can at any moment be extended or restricted, allowing indicators to be amended or even removed, depending on the domestic users' requirements as well as on the specific requirements of harmonization and integration. For the short-run analysis, there is also the World Bank system of indicators of conjuncture, which relies on the data sources offered by the World Bank, the World Institute for Resources or other international organizations' statistics. The system comprises indicators of social and economic development and focuses on indicators for the following three fields: human resources, environment and economic performances. At the end of the paper, there is a case study on the situation of Romania, for which we used all these indicators.

  2. Short-run and Current Analysis Model in Statistics

    Directory of Open Access Journals (Sweden)

    Constantin Mitrut

    2006-03-01

    Full Text Available Using the short-run statistical indicators is a compulsory requirement implied in the current analysis. Therefore, a system of EUROSTAT short-run indicators has been set up in this respect, being recommended for utilization by the member countries. On the basis of these indicators, regular, usually monthly, analyses are carried out in respect of: the determination of production dynamics; the evaluation of the short-run investment volume; the development of the turnover; the wage evolution; the employment; the price indexes and the consumer price index (inflation); the volume of exports and imports, the extent to which the imports are covered by the exports, and the trade balance position. The EUROSTAT system of indicators of conjuncture is conceived as an open system, so that it can at any moment be extended or restricted, allowing indicators to be amended or even removed, depending on the domestic users' requirements as well as on the specific requirements of harmonization and integration. For the short-run analysis, there is also the World Bank system of indicators of conjuncture, which relies on the data sources offered by the World Bank, the World Institute for Resources or other international organizations' statistics. The system comprises indicators of social and economic development and focuses on indicators for the following three fields: human resources, environment and economic performances. At the end of the paper, there is a case study on the situation of Romania, for which we used all these indicators.

  3. ECONOMIC MODELING STOCKS CONTROL SYSTEM: SIMULATION MODEL

    OpenAIRE

    Климак, М.С.; Войтко, С.В.

    2016-01-01

    Theoretical and applied aspects of the development of simulation models to predict the optimal development of production systems that create tangible products and services are considered. It is argued that the process of inventory control needs economic and mathematical modeling in view of the complexity of theoretical studies. A simulation model of stock control is presented that allows management decisions to be made within production logistics

  4. Simulation Model of Mobile Detection Systems

    International Nuclear Information System (INIS)

    Edmunds, T.; Faissol, D.; Yao, Y.

    2009-01-01

    In this paper, we consider a mobile source that we attempt to detect with man-portable, vehicle-mounted or boat-mounted radiation detectors. The source is assumed to transit an area populated with these mobile detectors, and the objective is to detect the source before it reaches a perimeter. We describe a simulation model developed to estimate the probability that one of the mobile detectors will come into close proximity of the moving source and detect it. We illustrate with a maritime simulation example. Our simulation takes place in a 10 km by 5 km rectangular bay patrolled by boats equipped with 2-inch x 4-inch x 16-inch NaI detectors. Boats to be inspected enter the bay and randomly proceed to one of seven harbors on the shore. A source-bearing boat enters the mouth of the bay and proceeds to a pier on the opposite side. We wish to determine the probability that the source is detected and its range from the target when detected. Patrol boats select the nearest inbound boat for inspection and initiate an intercept course. Once within the operational range of the detection system, a detection algorithm is started. If the patrol boat confirms the source is not present, it selects the next nearest boat for inspection. Each run of the simulation ends either when a patrol successfully detects a source or when the source reaches its target. Several statistical detection algorithms have been implemented in the simulation model. First, a simple k-sigma algorithm, which alarms when the counts in a time window exceed the mean background plus k times the standard deviation of the background, is available to the user. The time window used is optimized with respect to the signal-to-background ratio for that range and relative speed. Second, a sequential probability ratio test [Wald 1947] is available, configured in this simulation with a target false-positive probability of 0.001 and a false-negative probability of 0.1. This test is utilized when the mobile detector maintains
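
    The k-sigma rule described in this record is simple enough to sketch directly. The fragment below is a minimal illustration under assumed numbers (a 4 s window and a 100 counts/s background); it is not the report's code.

        import numpy as np

        def k_sigma_alarm(counts, bg_mean, bg_std, k=5.0):
            """Simple k-sigma detector: alarm when the summed counts in a
            time window exceed the mean background plus k standard
            deviations of the background.

            counts  : array of detector counts within the chosen window
            bg_mean : expected background counts for the same window
            bg_std  : standard deviation of the background counts
            """
            return counts.sum() > bg_mean + k * bg_std

        # Example: 1-second samples over a 4 s window; assumed background
        # of 100 counts/s gives mean 400 and std 20 for the window sum.
        window = np.array([118, 131, 125, 140])
        print(k_sigma_alarm(window, bg_mean=400.0, bg_std=20.0))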

  5. Simulation Model of Mobile Detection Systems

    Energy Technology Data Exchange (ETDEWEB)

    Edmunds, T; Faissol, D; Yao, Y

    2009-01-27

    In this paper, we consider a mobile source that we attempt to detect with man-portable, vehicle-mounted or boat-mounted radiation detectors. The source is assumed to transit an area populated with these mobile detectors, and the objective is to detect the source before it reaches a perimeter. We describe a simulation model developed to estimate the probability that one of the mobile detectors will come into close proximity of the moving source and detect it. We illustrate with a maritime simulation example. Our simulation takes place in a 10 km by 5 km rectangular bay patrolled by boats equipped with 2-inch x 4-inch x 16-inch NaI detectors. Boats to be inspected enter the bay and randomly proceed to one of seven harbors on the shore. A source-bearing boat enters the mouth of the bay and proceeds to a pier on the opposite side. We wish to determine the probability that the source is detected and its range from the target when detected. Patrol boats select the nearest inbound boat for inspection and initiate an intercept course. Once within the operational range of the detection system, a detection algorithm is started. If the patrol boat confirms the source is not present, it selects the next nearest boat for inspection. Each run of the simulation ends either when a patrol successfully detects a source or when the source reaches its target. Several statistical detection algorithms have been implemented in the simulation model. First, a simple k-sigma algorithm, which alarms when the counts in a time window exceed the mean background plus k times the standard deviation of the background, is available to the user. The time window used is optimized with respect to the signal-to-background ratio for that range and relative speed. Second, a sequential probability ratio test [Wald 1947] is available, configured in this simulation with a target false-positive probability of 0.001 and a false-negative probability of 0.1. This test is utilized when the mobile detector maintains

  6. Integration of control and building performance simulation software by run-time coupling

    NARCIS (Netherlands)

    Yahiaoui, A.; Hensen, J.L.M.; Soethout, L.L.

    2003-01-01

    This paper presents the background, approach and initial results of a project, which aims to achieve better integrated building and systems control modeling in building performance simulation by runtime coupling of distributed computer programs. This paper focuses on one of the essential steps

  7. JAPC Compact Simulator evolution to latest integration

    International Nuclear Information System (INIS)

    Nabeta, T.; Nakayama, Y.

    1999-01-01

    This paper describes the evolution of the JAPC compact simulator from the first installation in 1988 until the recent integration with the SIMULATE-3 engineering-code core model and extended simulation of mid-loop operation and severe accidents. The JAPC Compact Simulator has an advanced, super-compact rotating panel design. Three plants, Tokai 2 (GE BWR 5), Tsuruga 1 (GE BWR 2) and Tsuruga 2 (MHI PWR 4-Loop), are simulated. The simulator has been used for training of operator and engineering personnel, and has continuously been upgraded to follow normal plant modifications as well as developments in modeling and computer technology. The integration of the SIMULATE-3 core model is, to our knowledge, the first integration of a real design code into a training simulator. SIMULATE-3 has been successfully integrated into the simulator and runs in real time, without compromising the accuracy of SIMULATE-3. The code has been modified to also handle mid-loop operation and severe accidents. (author)

  8. An integrated model to simulate sown area changes for major crops at a global scale

    Institute of Scientific and Technical Information of China (English)

    SHIBASAKI; Ryosuke

    2008-01-01

    Dynamics of land use systems have attracted much attention from scientists around the world due to their ecological and socio-economic implications. An integrated model to dynamically simulate future changes in sown areas of four major crops (rice, maize, wheat and soybean) on a global scale is presented. To do so, a crop choice model was developed on the basis of the Multinomial Logit model to represent land users' decisions on crop choices among a set of available alternatives, using a crop utility function. A GIS-based Environmental Policy Integrated Climate (EPIC) model was adopted to simulate the crop yields under a given geophysical environment and farming management conditions, while the International Food Policy and Agricultural Simulation (IFPSIM) model was utilized to estimate crop prices in the international market. The crop choice model was linked with the GIS-based EPIC model and the IFPSIM model through data exchange. This integrated model was then validated against the FAO statistical data in 2001-2003 and the Moderate Resolution Imaging Spectroradiometer (MODIS) global land cover product in 2001. Both validation approaches indicated the reliability of the model for addressing the dynamics in agricultural land use and its capability for long-term scenario analysis. Finally, the model application was designed to run over a time period of 30 a, taking the year 2000 as baseline. The model outcomes can help understand and explain the causes, locations and consequences of land use changes, and provide support for land use planning and policy making.

  9. An integrated model to simulate sown area changes for major crops at a global scale

    Institute of Scientific and Technical Information of China (English)

    WU WenBin; YANG Peng; MENG ChaoYing; SHIBASAKI Ryosuke; ZHOU QingBo; TANG HuaJun; SHI Yun

    2008-01-01

    Dynamics of land use systems have attracted much attention from scientists around the world due to their ecological and socio-economic implications. An integrated model to dynamically simulate future changes in sown areas of four major crops (rice, maize, wheat and soybean) on a global scale is presented. To do so, a crop choice model was developed on the basis of the Multinomial Logit model to represent land users' decisions on crop choices among a set of available alternatives, using a crop utility function. A GIS-based Environmental Policy Integrated Climate (EPIC) model was adopted to simulate the crop yields under a given geophysical environment and farming management conditions, while the International Food Policy and Agricultural Simulation (IFPSIM) model was utilized to estimate crop prices in the international market. The crop choice model was linked with the GIS-based EPIC model and the IFPSIM model through data exchange. This integrated model was then validated against the FAO statistical data in 2001-2003 and the Moderate Resolution Imaging Spectroradiometer (MODIS) global land cover product in 2001. Both validation approaches indicated the reliability of the model for addressing the dynamics in agricultural land use and its capability for long-term scenario analysis. Finally, the model application was designed to run over a time period of 30 a, taking the year 2000 as baseline. The model outcomes can help understand and explain the causes, locations and consequences of land use changes, and provide support for land use planning and policy making.
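
    The multinomial-logit crop choice at the heart of the two records above reduces to a softmax over crop utilities. The Python below is a minimal sketch; the utility values are invented, whereas in the papers they would be built from EPIC yields and IFPSIM prices.

        import numpy as np

        def crop_choice_probabilities(utilities):
            """Multinomial logit: probability of each crop being chosen,
            given the land user's utility for every alternative."""
            u = np.asarray(utilities, dtype=float)
            e = np.exp(u - u.max())          # subtract max for numerical stability
            return e / e.sum()

        # Hypothetical utilities for rice, maize, wheat and soybean:
        print(crop_choice_probabilities([1.2, 0.8, 0.5, 0.3]))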

  10. Global ice sheet/RSL simulations using the higher-order Ice Sheet System Model.

    Science.gov (United States)

    Larour, E. Y.; Ivins, E. R.; Adhikari, S.; Schlegel, N.; Seroussi, H. L.; Morlighem, M.

    2017-12-01

    Relative sea-level rise is driven by processes that are intimately linked to the evolution of glacial areas and ice sheets in particular. So far, most Earth System models capable of projecting the evolution of RSL on decadal to centennial time scales have relied on offline interactions between RSL and ice sheets. In particular, grounding line and calving front dynamics have not been modeled in a way that is tightly coupled with Elasto-Static Adjustment (ESA) and/or Glacial-Isostatic Adjustment (GIA). Here, we present a new simulation of the entire Earth System in which both the Greenland and Antarctic ice sheets are tightly coupled to an RSL model that includes both ESA and GIA at resolutions and time scales compatible with processes such as grounding line dynamics for Antarctic ice shelves and calving front dynamics for Greenland marine-terminating glaciers. The simulations rely on the Ice Sheet System Model (ISSM) and show the impact of higher-order ice flow dynamics and coupling feedbacks between ice flow and RSL. We quantify the exact impact of ESA and GIA inclusion on grounding line evolution for large ice shelves such as the Ronne and Ross ice shelves, as well as the Agasea Embayment ice streams, and demonstrate how offline vs online RSL simulations diverge in the long run, and the consequences for predictions of sea-level rise. This work was performed at the California Institute of Technology's Jet Propulsion Laboratory under a contract with the National Aeronautics and Space Administration's Cryosphere Science Program.

  11. Tuning a RANS k-e model for jet-in-crossflow simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Lefantzi, Sophia; Ray, Jaideep; Arunajatesan, Srinivasan; DeChant, Lawrence Justin

    2013-09-01

    We develop a novel calibration approach to address the problem of predictive k-e RANS simulations of jet-in-crossflow. Our approach is based on the hypothesis that predictive k-e parameters can be obtained by estimating them from a strongly vortical flow, specifically, flow over a square cylinder. In this study, we estimate three k-e parameters, Cμ, Ce2 and Ce1, by fitting 2D RANS simulations to experimental data. We use polynomial surrogates of 2D RANS for this purpose. We conduct an ensemble of 2D RANS runs using samples of (Cμ, Ce2, Ce1) and regress Reynolds stresses on the samples using a simple polynomial. We then use this surrogate of the 2D RANS model to infer a joint distribution for the k-e parameters by solving a Bayesian inverse problem, conditioned on the experimental data. The calibrated (Cμ, Ce2, Ce1) distribution is used to seed an ensemble of 3D jet-in-crossflow simulations. We compare the ensemble's predictions of the flowfield, at two planes, to PIV measurements and estimate the predictive skill of the calibrated 3D RANS model. We also compare it against 3D RANS predictions using the nominal (uncalibrated) values of (Cμ, Ce2, Ce1), and find that calibration delivers a significant improvement to the predictive skill of the 3D RANS model. We repeat the calibration using surrogate models based on kriging and find that the calibration based on these more accurate models is not much better than that obtained with simple polynomial surrogates. We discuss the reasons for this rather surprising outcome.
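
    The surrogate-plus-Bayes workflow in this abstract (ensemble runs, polynomial regression, posterior sampling) can be compressed into a toy example. Everything numerical below is an assumption: the stand-in rans_2d function, the parameter ranges, the noise level and the Metropolis settings are illustrative, not the study's setup.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical ensemble: samples of (Cmu, Ce2, Ce1) and a scalar
        # observable (standing in for a Reynolds stress) from model runs.
        theta = rng.uniform([0.06, 1.7, 1.2], [0.12, 2.1, 1.6], size=(50, 3))
        def rans_2d(t):                       # stand-in for the 2D RANS model
            return 2.0*t[:, 0] + 0.5*t[:, 1] - 0.3*t[:, 2]
        y = rans_2d(theta) + rng.normal(0, 0.01, 50)

        # Quadratic polynomial surrogate fitted by least squares.
        def features(t):
            return np.column_stack([np.ones(len(t)), t, t**2])
        coef, *_ = np.linalg.lstsq(features(theta), y, rcond=None)
        surrogate = lambda t: features(np.atleast_2d(t)) @ coef

        # Metropolis sampling of the posterior given one "experiment".
        y_obs, sigma = 0.90, 0.02
        def log_post(t):
            return -0.5 * ((surrogate(t)[0] - y_obs) / sigma) ** 2
        cur, chain = np.array([0.09, 1.9, 1.44]), []
        for _ in range(5000):
            prop = cur + rng.normal(0, 0.005, 3)
            if np.log(rng.uniform()) < log_post(prop) - log_post(cur):
                cur = prop
            chain.append(cur.copy())
        print(np.mean(chain[1000:], axis=0))   # posterior-mean parameters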

  12. Whole-building Hygrothermal Simulation Model

    DEFF Research Database (Denmark)

    Rode, Carsten; Grau, Karl

    2003-01-01

    An existing integrated simulation tool for dynamic thermal simulation of buildings was extended with a transient model for moisture release and uptake in building materials. Validation of the new model was begun with comparison against measurements in an outdoor test cell furnished with single... materials. Almost quasi-steady, cyclic experiments were used to compare the indoor humidity variation and the numerical results of the integrated simulation tool with the new moisture model. Except for the case with chipboard as furnishing, the predictions of indoor humidity with the detailed model were...

  13. Testing ATLAS Z+MET excess with LHC run 2

    International Nuclear Information System (INIS)

    Lu, Xiaochuan; Terada, Takahiro

    2016-05-01

    The ATLAS collaboration reported a 3σ excess in the search for events containing an on-Z dilepton pair, jets, and large missing momentum (MET) in the 8 TeV LHC run. Motivated by this excess, many models of new physics have been proposed. Recently, the ATLAS and CMS collaborations reported new results for similar Z+MET channels in the 13 TeV run. In this paper, we comprehensively discuss the consistency between the proposed models and the LHC results of Run 1 and Run 2. We find that in models with heavy gluino production, there is generically some tension between the 8 TeV and 13 TeV results. On the other hand, models with light squark production provide a relatively better fit to both results.

  14. Progress in modeling and simulation.

    Science.gov (United States)

    Kindler, E

    1998-01-01

    For the modeling of systems, computers are used more and more, while the other "media" (including the human intellect) carrying the models are abandoned. For the modeling of knowledge, i.e. of more or less general concepts (possibly used to model systems composed of instances of such concepts), object-oriented programming is nowadays widely used. For the modeling of processes existing and developing in time, computer simulation is used, the results of which are often presented by means of animation (graphical pictures moving and changing in time). Unfortunately, object-oriented programming tools are commonly not designed to be of great use for simulation, while programming tools for simulation do not enable their users to apply the advantages of object-oriented programming. Nevertheless, there are exceptions that enable general concepts represented at a computer to be used for constructing simulation models and for their easy modification. They are described in the present paper, together with proper definitions of modeling, simulation and object-oriented programming (including cases that do not satisfy the definitions but risk introducing misunderstanding), an outline of their applications and of their further development. Given that computing systems are being introduced as control components into a large spectrum of (technological, social and biological) systems, attention is directed to models of systems containing modeling components.

  15. Simulation modelling of fynbos ecosystems: Systems analysis and conceptual models

    CSIR Research Space (South Africa)

    Kruger, FJ

    1985-03-01

    Full Text Available -animal interactions. An additional two models, which expand aspects of the FYNBOS model, are described: a model for simulating canopy processes; and a Fire Recovery Simulator. The canopy process model will simulate ecophysiological processes in more detail than FYNBOS...

  16. Testing the skill of numerical hydraulic modeling to simulate spatiotemporal flooding patterns in the Logone floodplain, Cameroon

    Science.gov (United States)

    Fernández, Alfonso; Najafi, Mohammad Reza; Durand, Michael; Mark, Bryan G.; Moritz, Mark; Jung, Hahn Chul; Neal, Jeffrey; Shastry, Apoorva; Laborde, Sarah; Phang, Sui Chian; Hamilton, Ian M.; Xiao, Ningchuan

    2016-08-01

    Recent innovations in hydraulic modeling have enabled global simulation of rivers, including simulation of their coupled wetlands and floodplains. Accurate simulations of floodplains using these approaches may imply tremendous advances in global hydrologic studies and in biogeochemical cycling. One such innovation is to explicitly treat sub-grid channels within two-dimensional models, given only remotely sensed data in areas with limited data availability. However, predicting inundated area in floodplains using a sub-grid model has not been rigorously validated. In this study, we applied the LISFLOOD-FP hydraulic model using a sub-grid channel parameterization to simulate inundation dynamics on the Logone River floodplain, in northern Cameroon, from 2001 to 2007. Our goal was to determine whether floodplain dynamics could be simulated with sufficient accuracy to understand human and natural contributions to current and future inundation patterns. Model inputs in this data-sparse region include in situ river discharge, satellite-derived rainfall, and Shuttle Radar Topography Mission (SRTM) floodplain elevations. We found that the model accurately simulated total floodplain inundation, with a Pearson correlation coefficient greater than 0.9 and an RMSE less than 700 km2, compared to a peak inundation greater than 6000 km2. Predicted discharge downstream of the floodplain matched measurements (Nash-Sutcliffe efficiency of 0.81), and indicated that net flow from the channel to the floodplain was modeled accurately. However, the spatial pattern of inundation was not well simulated, apparently due to uncertainties in SRTM elevations. We evaluated model results at 250, 500 and 1000-m spatial resolutions, and found that the results are insensitive to spatial resolution. We also compared the model output against results from a run of LISFLOOD-FP in which the sub-grid channel parameterization was disabled, finding that the sub-grid parameterization simulated more realistic
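
    The three skill scores quoted in this record (Pearson correlation, RMSE and Nash-Sutcliffe efficiency) are standard and easy to reproduce. The Python below is a generic sketch with made-up numbers, not the study's data.

        import numpy as np

        def skill(sim, obs):
            """Pearson correlation, RMSE and Nash-Sutcliffe efficiency,
            as used to score simulated inundation or discharge against
            observations."""
            sim, obs = np.asarray(sim, float), np.asarray(obs, float)
            r = np.corrcoef(sim, obs)[0, 1]
            rmse = np.sqrt(np.mean((sim - obs) ** 2))
            nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
            return r, rmse, nse

        # Hypothetical seasonal inundated areas in km^2:
        print(skill([5200, 6100, 3400, 900], [5000, 6300, 3600, 1100]))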

  17. ESIM_DSN Web-Enabled Distributed Simulation Network

    Science.gov (United States)

    Bedrossian, Nazareth; Novotny, John

    2002-01-01

    In this paper, the eSim(sup DSN) approach to achieving distributed simulation capability using the Internet is presented. With this approach a complete simulation can be assembled from component subsystems that run on different computers. The subsystems interact with each other via the Internet. The distributed simulation uses a hub-and-spoke type network topology. It provides the ability to dynamically link simulation subsystem models to different computers, as well as the ability to assign a particular model to each computer. A proof-of-concept demonstrator is also presented. The eSim(sup DSN) demonstrator can be accessed at http://www.jsc.draper.com/esim which hosts various examples of Web-enabled simulations.

  18. Discrete simulation system based on artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Futo, I; Szeredi, J

    1982-01-01

    A discrete event simulation system based on the AI language Prolog is presented. The system, called t-Prolog, extends the traditional possibilities of simulation languages toward automatic problem solving by using backtracking in time and automatic model modification depending on logical deductions. As t-Prolog is an interactive tool, the user can interrupt the simulation run to modify the model or to force it to return to a previous state in order to try possible alternatives. It admits the construction of goal-oriented or goal-seeking models with variable structure. Models are defined in a restricted version of the first-order predicate calculus using Horn clauses. 21 references.

  19. The Robust Running Ape: Unraveling the Deep Underpinnings of Coordinated Human Running Proficiency

    Directory of Open Access Journals (Sweden)

    John Kiely

    2017-06-01

    Full Text Available In comparison to other mammals, humans are not especially strong, swift or supple. Nevertheless, despite these apparent physical limitations, we are among Nature's most superbly well-adapted endurance runners. Paradoxically, however, notwithstanding this evolutionarily bestowed proficiency, running-related injuries, and overuse syndromes in particular, are widely pervasive. The term 'coordination' is similarly ubiquitous within contemporary coaching, conditioning, and rehabilitation cultures. Various theoretical models of coordination exist within the academic literature. However, the specific neural and biological underpinnings of 'running coordination,' and the nature of their integration, remain poorly elaborated. Conventionally, running is considered a mundane, readily mastered coordination skill. This illusion of coordinative simplicity, however, is founded upon a platform of immense neural and biological complexity. This extensive complexity presents extreme organizational difficulties yet, simultaneously, provides a multiplicity of viable pathways through which the computational and mechanical burden of running can be proficiently dispersed amongst expanded networks of conditioned neural and peripheral tissue collaborators. Learning to adequately harness this available complexity, however, is a painstakingly slowly emerging, practice-driven process, greatly facilitated by innate evolutionary organizing principles serving to constrain otherwise overwhelming complexity to manageable proportions. As we accumulate running experience, persistent plastic remodeling customizes networked neural connectivity and biological tissue properties to best fit our unique neural and architectural idiosyncrasies and personal histories: thus neural and peripheral tissue plasticity embeds coordination habits. When, however, coordinative processes are compromised—under the integrated influence of fatigue and/or accumulative cycles of injury, overuse

  20. Barefoot running: biomechanics and implications for running injuries.

    Science.gov (United States)

    Altman, Allison R; Davis, Irene S

    2012-01-01

    Despite the technological developments in modern running footwear, up to 79% of runners today get injured in a given year. As we evolved barefoot, examining this mode of running is insightful. Barefoot running encourages a forefoot strike pattern that is associated with a reduction in impact loading and stride length. Studies have shown a reduction in injuries to shod forefoot strikers as compared with rearfoot strikers. In addition to a forefoot strike pattern, barefoot running also affords the runner increased sensory feedback from the foot-ground contact, as well as increased energy storage in the arch. Minimal footwear is being used to mimic barefoot running, but it is not clear whether it truly does. The purpose of this article is to review current and past research on shod and barefoot/minimal footwear running and their implications for running injuries. Clearly more research is needed, and areas for future study are suggested.

  1. Simulation modeling and analysis with Arena

    CERN Document Server

    Altiok, Tayfur

    2007-01-01

    Simulation Modeling and Analysis with Arena is a highly readable textbook which treats the essentials of the Monte Carlo discrete-event simulation methodology, and does so in the context of the popular Arena simulation environment. It treats simulation modeling as an in-vitro laboratory that facilitates the understanding of complex systems and experimentation with what-if scenarios in order to estimate their performance metrics. The book contains chapters on the simulation modeling methodology and the underpinnings of discrete-event systems, as well as the relevant underlying probability, statistics, stochastic processes, input analysis, model validation and output analysis. All simulation-related concepts are illustrated in numerous Arena examples, encompassing production lines, manufacturing and inventory systems, transportation systems, and computer information systems in networked settings. It introduces the concept of discrete-event Monte Carlo simulation, the most commonly used methodology for modeli...

  2. Simulation for Grid Connected Wind Turbines with Fluctuating

    Science.gov (United States)

    Ye, Ying; Fu, Yang; Wei, Shurong

    This paper establishes the whole dynamic model of a wind turbine generator system, which contains the wind speed model and the DFIG wind turbine model. A simulation example based on the mathematical models is built using MATLAB in this paper. Research was done on the performance characteristics of doubly-fed induction generators (DFIG) connected to the power grid under a three-phase ground fault and under disturbances by gust and mixed wind. The capacity of the wind farm is 9 MW, consisting of doubly-fed wind generators (DFIG). Simulation results demonstrate that a three-phase ground fault occurring on the grid side has little effect on the stability of the doubly-fed wind generators. However, as the power source, fluctuations of the wind speed have a large impact on the stability of the doubly-fed wind generators. The results also show that if the two disturbances occur at the same time, the situation will be very serious.

  3. Does a crouched leg posture enhance running stability and robustness?

    Science.gov (United States)

    Blum, Yvonne; Birn-Jeffery, Aleksandra; Daley, Monica A; Seyfarth, Andre

    2011-07-21

    Humans and birds both walk and run bipedally on compliant legs. However, differences in leg architecture may result in species-specific leg control strategies as indicated by the observed gait patterns. In this work, control strategies for stable running are derived based on a conceptual model and compared with experimental data on running humans and pheasants (Phasianus colchicus). From a model perspective, running with compliant legs can be represented by the planar spring mass model and stabilized by applying swing leg control. Here, linear adaptations of the three leg parameters, leg angle, leg length and leg stiffness during late swing phase are assumed. Experimentally observed kinematic control parameters (leg rotation and leg length change) of human and avian running are compared, and interpreted within the context of this model, with specific focus on stability and robustness characteristics. The results suggest differences in stability characteristics and applied control strategies of human and avian running, which may relate to differences in leg posture (straight leg posture in humans, and crouched leg posture in birds). It has been suggested that crouched leg postures may improve stability. However, as the system of control strategies is overdetermined, our model findings suggest that a crouched leg posture does not necessarily enhance running stability. The model also predicts different leg stiffness adaptation rates for human and avian running, and suggests that a crouched avian leg posture, which is capable of both leg shortening and lengthening, allows for stable running without adjusting leg stiffness. In contrast, in straight-legged human running, the preparation of the ground contact seems to be more critical, requiring leg stiffness adjustment to remain stable. Finally, analysis of a simple robustness measure, the normalized maximum drop, suggests that the crouched leg posture may provide greater robustness to changes in terrain height
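
    The planar spring-mass (SLIP) model referred to in this record is compact enough to integrate directly. The Python below is a bare-bones stance-phase sketch with illustrative parameter values (the mass, stiffness and touchdown state are all assumptions), and it omits the swing-leg control policies the paper analyses.

        import numpy as np

        # Planar spring-mass (SLIP) stance phase, integrated with explicit
        # Euler.  All parameter values are illustrative only.
        m, g, k, L0 = 80.0, 9.81, 20e3, 1.0   # mass, gravity, leg stiffness, rest length
        foot = np.array([0.0, 0.0])           # foot fixed at origin during stance
        pos = np.array([-0.15, 0.97])         # centre-of-mass position at touchdown [m]
        vel = np.array([3.0, -0.5])           # centre-of-mass velocity at touchdown [m/s]
        dt = 1e-4

        for _ in range(int(1.0 / dt)):        # at most 1 s of stance
            leg = pos - foot
            L = np.linalg.norm(leg)
            if L >= L0 and vel[1] > 0.0:      # leg at rest length, moving up: take-off
                break
            force = k * (L0 - L) * leg / L    # spring pushes the mass away from the foot
            acc = force / m + np.array([0.0, -g])
            vel = vel + acc * dt
            pos = pos + vel * dt

        print("take-off position:", pos, "velocity:", vel)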

  4. A fast-running fuel management program for a CANDU reactor

    International Nuclear Information System (INIS)

    Choi, Hangbok

    2000-01-01

    A fast-running fuel management program for a CANDU reactor has been developed. The basic principle of the program is to select refueling channels such that the reference reactor conditions are maintained, by applying several constraints and criteria to the selection. The constraints used in this program are the channel and bundle powers and the fuel burnup. The final selection of the refueling channel is based on the priority of the candidate channels, which keeps the reactor power distribution close to that of the time-average model. A refueling simulation was performed for a natural uranium CANDU reactor and the results were satisfactory
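
    A priority-based channel selection of the kind sketched in this abstract might look as follows. The Python fragment is purely hypothetical: the constraint names, thresholds and the priority ordering are invented stand-ins for the program's actual criteria.

        def select_refuelling_channel(channels, p_ch_max, b_min):
            """Pick a refuelling channel by priority, subject to channel
            power and burnup constraints (all names/weights hypothetical).

            channels : list of dicts with keys 'id', 'power', 'burnup',
                       'time_avg_power' (target from the time-average model)
            """
            candidates = [c for c in channels
                          if c["power"] <= p_ch_max and c["burnup"] >= b_min]
            # Priority: channels whose power is furthest below the
            # time-average target are refuelled first.
            candidates.sort(key=lambda c: c["power"] - c["time_avg_power"])
            return candidates[0]["id"] if candidates else None

        channels = [
            {"id": "J07", "power": 6.2, "burnup": 7.9, "time_avg_power": 6.5},
            {"id": "K11", "power": 6.9, "burnup": 8.4, "time_avg_power": 6.6},
        ]
        print(select_refuelling_channel(channels, p_ch_max=7.0, b_min=7.5))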

  5. A Storm Surge and Inundation Model of the Back River Watershed at NASA Langley Research Center

    Science.gov (United States)

    Loftis, Jon Derek; Wang, Harry V.; DeYoung, Russell J.

    2013-01-01

    This report on a Virginia Institute of Marine Science project demonstrates that the sub-grid modeling technology (now part of the Chesapeake Bay Inundation Prediction System, CIPS) can incorporate high-resolution lidar measurements provided by NASA Langley Research Center into the sub-grid model framework to resolve detailed topographic features, for use as a hydrological transport model for run-off simulations within NASA Langley and Langley Air Force Base. Rainfall over land accumulating in the ditches/channels resolved via the model sub-grid was simulated to test the run-off induced by heavy precipitation. Possessing capabilities for both storm surge and run-off simulations, the CIPS model was then applied to simulate real storm events, starting with Hurricane Isabel in 2003. It is shown that the model can generate highly accurate on-land inundation maps, as demonstrated by excellent comparison of the Langley tidal gauge time series data (CAPABLE.larc.nasa.gov) and spatial patterns of real storm wrack line measurements with the model results simulated during Hurricane Isabel (2003), Hurricane Irene (2011), and a 2009 Nor'easter. With confidence built upon the model's performance, sea level rise scenarios from the ICCP (International Climate Change Partnership) were also included in the model scenario runs to simulate future inundation cases.

  6. A description of the FAMOUS (version XDBUA) climate model and control run

    Directory of Open Access Journals (Sweden)

    A. Osprey

    2008-12-01

    Full Text Available FAMOUS is an ocean-atmosphere general circulation model of low resolution, capable of simulating approximately 120 years of model climate per wallclock day using current high performance computing facilities. It uses most of the same code as HadCM3, a widely used climate model of higher resolution and computational cost, and has been tuned to reproduce the same climate reasonably well. FAMOUS is useful for climate simulations where the computational cost makes the application of HadCM3 unfeasible, either because of the length of simulation or the size of the ensemble desired. We document a number of scientific and technical improvements to the original version of FAMOUS. These improvements include changes to the parameterisations of ozone and sea-ice which alleviate a significant cold bias from high northern latitudes and the upper troposphere, and the elimination of volume-averaged drifts in ocean tracers. A simple model of the marine carbon cycle has also been included. A particular goal of FAMOUS is to conduct millennial-scale paleoclimate simulations of Quaternary ice ages; to this end, a number of useful changes to the model infrastructure have been made.

  7. Effects of distribution density and cell dimension of 3D vegetation model on canopy NDVI simulation base on DART

    Science.gov (United States)

    Tao, Zhu; Shi, Runhe; Zeng, Yuyan; Gao, Wei

    2017-09-01

    The 3D model is an important part of simulated remote sensing for earth observation. Given the small-scale spatial extent handled by the DART software, both the detail of the model itself and the number of models in the distribution have an important impact on the scene canopy Normalized Difference Vegetation Index (NDVI). Taking Phragmites australis in the Yangtze Estuary as an example, this paper studied the effect of the P. australis model on the canopy NDVI, building on previous studies of model precision, mainly with respect to the cell dimension of the DART software and the density distribution of the P. australis model in the scene, as well as the choice of model density under the cost of computer running time in actual simulations. The DART cell dimensions and the density of the scene model were set using the optimal-precision model from existing research results. The simulated NDVI for different model densities under different cell dimensions was subjected to error analysis. By studying the relationship between relative error, absolute error and time cost, we derived a method for selecting the density of the P. australis model in the simulation of small-scale scenes. Experiments showed that, owing to the difference between the 3D model and the real scenario, the number of P. australis plants in the simulated scene need not be the same as in the real environment. The best simulation results, with good visual effects, could be obtained by keeping the density at about 40 plants per square meter.

  8. Development of the Transport Class Model (TCM) Aircraft Simulation From a Sub-Scale Generic Transport Model (GTM) Simulation

    Science.gov (United States)

    Hueschen, Richard M.

    2011-01-01

    A six degree-of-freedom, flat-earth dynamics, non-linear, and non-proprietary aircraft simulation was developed that is representative of a generic mid-sized twin-jet transport aircraft. The simulation was developed from a non-proprietary, publicly available, subscale twin-jet transport aircraft simulation using scaling relationships and a modified aerodynamic database. The simulation has an extended aerodynamics database with aero data outside the normal transport-operating envelope (large angle-of-attack and sideslip values). The simulation has representative transport aircraft surface actuator models with variable rate-limits and generally fixed position limits. The simulation contains a generic 40,000 lb sea level thrust engine model. The engine model is a first order dynamic model with a variable time constant that changes according to simulation conditions. The simulation provides a means for interfacing a flight control system to use the simulation sensor variables and to command the surface actuators and throttle position of the engine model.

  9. The development of fast simulation program for marine reactor parameters

    International Nuclear Information System (INIS)

    Chen Zhiyun; Hao Jianli; Chen Wenzhen

    2012-01-01

    Highlights: ► Simplified physical and mathematical models are proposed for a marine reactor system. ► A program is developed with Simulink modules and Matlab files. ► The program developed has the merits of easy input preparation, easy output processing and fast running. ► The program can be used for fast simulation of marine reactor parameters in the operating field. - Abstract: A fast simulation program for marine reactor parameters is developed based on the Simulink simulation software, according to the characteristics of marine reactors, which must respond quickly and accurately under demanding maneuvering requirements. The simplified core physics and thermal models, pressurizer model, steam generator model, control rod model, reactivity model and the corresponding Simulink modules are established. The whole program is developed by coupling all the Simulink modules. Two typical transient processes of a marine reactor, a fast load increase at low power level and a load rejection at high power level, are adopted to verify the program. The results are compared with those of Relap5/Mod3.2 with good consistency, and the program runs very fast. It is shown that the program is correct and suitable for fast and accurate simulation of marine reactor parameters in the operating field, which is significant for safe marine reactor operation.

  10. GRODY - GAMMA RAY OBSERVATORY DYNAMICS SIMULATOR IN ADA

    Science.gov (United States)

    Stark, M.

    1994-01-01

    Analysts use a dynamics simulator to test the attitude control system algorithms used by a satellite. The simulator must simulate the hardware, dynamics, and environment of the particular spacecraft and provide user services which enable the analyst to conduct experiments. Researchers at Goddard's Flight Dynamics Division developed GRODY alongside GROSS (GSC-13147), a FORTRAN simulator which performs the same functions, in a case study to assess the feasibility and effectiveness of the Ada programming language for flight dynamics software development. They used popular object-oriented design techniques to link the simulator's design with its function. GRODY is designed for analysts familiar with spacecraft attitude analysis. The program supports maneuver planning as well as analytical testing and evaluation of the attitude determination and control system used on board the Gamma Ray Observatory (GRO) satellite. GRODY simulates the GRO on-board computer and Control Processor Electronics. The analyst/user sets up and controls the simulation. GRODY allows the analyst to check and update parameter values and ground commands, obtain simulation status displays, interrupt the simulation, analyze previous runs, and obtain printed output of simulation runs. The video terminal screen display allows visibility of command sequences, full-screen display and modification of parameters using input fields, and verification of all input data. Data input available for modification includes alignment and performance parameters for all attitude hardware, simulation control parameters which determine simulation scheduling and simulator output, initial conditions, and on-board computer commands. GRODY generates eight types of output: simulation results data set, analysis report, parameter report, simulation report, status display, plots, diagnostic output (which helps the user trace any problems that have occurred during a simulation), and a permanent log of all runs and errors. The

  11. Development of water movement model as a module of moisture content simulation in static pile composting.

    Science.gov (United States)

    Seng, Bunrith; Kaneko, Hidehiro; Hirayama, Kimiaki; Katayama-Hirayama, Keiko

    2012-01-01

    This paper presents a mathematical model of vertical water movement and a performance evaluation of the model in static pile composting operated with neither air supply nor turning. The vertical moisture content (MC) model was developed with consideration of evaporation (internal and external evaporation), diffusion (liquid and vapour diffusion) and percolation, whereas additional water from substrate decomposition and irrigation was not taken into account. The evaporation term in the model was established on the basis of reference evaporation of the materials at known temperature, MC and relative humidity of the air. Diffusion of water vapour was estimated as a function of relative humidity and temperature, whereas diffusion of liquid water was obtained empirically from experiment by adopting Fick's law. Percolation was estimated by following Darcy's law. The model was applied to a column of composting wood chips with an initial MC of 60%. The simulation program was run for four weeks with a calculation time step of 1 s. The simulated results were in reasonably good agreement with the experimental results. Only the top layer (less than 20 cm) had a considerable MC reduction; the deeper layers were comparable to the initial MC, and the bottom layer was higher than the initial MC. This model is a useful tool to estimate the MC profile throughout the composting period, and could be incorporated into biodegradation kinetic simulation of composting.
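
    To make the water-balance bookkeeping concrete, here is a minimal explicit finite-difference sketch of the liquid-phase part (diffusion plus percolation) for a 1-D column. Evaporation and vapour diffusion are omitted, and D and K are placeholder constants rather than the paper's empirically derived functions:

```python
import numpy as np

# Explicit 1-D update for moisture content theta(z, t): liquid diffusion
# (Fick) plus downward percolation (Darcy-like drainage). Evaporation and
# vapour diffusion are omitted; D and K are illustrative placeholders.
nz, dz, dt = 50, 0.02, 1.0            # 1 m column, 2 cm cells, 1 s time step
D, K = 1e-8, 1e-7                     # diffusivity (m2/s), conductivity (m/s)
theta = np.full(nz, 0.60)             # initial MC of 60%, as in the test column

def step(theta):
    flux = -D * np.diff(theta) / dz + K * theta[:-1]   # downward flux at interfaces
    dtheta = np.zeros_like(theta)
    dtheta[:-1] -= flux * dt / dz     # water leaving the cell above
    dtheta[1:] += flux * dt / dz      # water entering the cell below
    return theta + dtheta

for _ in range(3600):                 # one simulated hour (the paper runs 4 weeks)
    theta = step(theta)
print(theta[0], theta[-1])            # top cell dries, bottom cell gains moisture
```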

  12. Radius stabilization and brane running in the Randall-Sundrum type 1 model

    International Nuclear Information System (INIS)

    Brevik, Iver; Ghoroku, Kazuo; Yahiro, Masanobu

    2004-01-01

    We study the effective potential of a scalar field based on the 5D gauged supergravity for the Randall-Sundrum type one brane model in terms of the brane running method. The scalar couples to the brane such that the Bogomolnyi-Prasad-Sommerfield conditions are satisfied for the bulk configuration. The resulting effective potential implies that the interbrane distance is undetermined in this case, and we need a small Bogomolnyi-Prasad-Sommerfield breaking term on the brane to stabilize the interbrane distance at a finite length. We also discuss the relationship to the Goldberger-Wise model.

  13. Simulation of long-term erosion on an abandoned mine site using the SIBERIA landscape evolution model

    International Nuclear Information System (INIS)

    Hancock, G.; Willgoose, G.; Evans, K.

    1999-01-01

    The SIBERIA catchment evolution model can simulate the evolution of landforms over many years as a result of runoff and erosion. This study discusses testing of the reliability of the model's erosion predictions in a field study. Using erosion parameters calibrated from field studies of rainfall and runoff on the waste rock dump batters, the SIBERIA landscape evolution model was calibrated and then used to simulate erosion over 50 years on the abandoned Scinto 6 mine site, a former uranium mine located in the Kakadu Region, Northern Territory, Australia. The SIBERIA runs simulated the geomorphic development of the gullies on the man-made batters of the waste rock dump. The waste rock of the mine had been dumped in the characteristic pattern of a flat top and steep-sided batters typical of many former and current dumps, and there had been significant degradation from both sheet and gully erosion. Traditional erosion models cannot represent this type of degradation because they cannot change the landform, whereas SIBERIA can. The gully position, depth, volume and morphology on the waste rock dump were compared with those of the SIBERIA simulations. The geomorphic development of the waste rock dump indicated that SIBERIA can simulate features that arise from the long-term effect of erosion, and their rate of development, on a man-made post-mining landscape over periods of up to 50 years. The detailed results of this study are discussed, with specific attention to the type of data required and the implications of the uncertain erosion physics for the reliability of the predictions.
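
    For orientation, the fluvial transport law at the core of SIBERIA-type landscape evolution models is commonly quoted in the following generic form (a sketch of the model family only; the exact parameterization and the calibrated values come from the cited field studies):

```latex
% Elevation continuity plus a discharge-slope sediment transport law
% (generic form; beta_1, m_1, n_1 are calibrated erodibility parameters,
% U is tectonic uplift -- effectively zero on a waste rock dump):
\frac{\partial z}{\partial t} \;=\; U \;-\; \nabla \cdot \mathbf{q}_s ,
\qquad
q_s \;=\; \beta_1\, q^{m_1}\, S^{n_1} ,
```

    where z is elevation, q the discharge per unit width and S the local slope.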

  14. Moving on to the modeling and simulation using computational fluid dynamics

    International Nuclear Information System (INIS)

    Norasalwa Zakaria; Rohyiza Baan; Muhd Noor Muhd Yunus

    2006-01-01

    The heat is on, but not at the co-combustor plant. Using Computational Fluid Dynamics (CFD), modeling and simulation of an incinerator can be done from the comfort of a cozy room. CFD has become an important design tool in nearly every industrial field because it provides understanding of flow patterns: it yields values for fluid velocity, fluid temperature, pressure and species concentrations throughout a flow domain. MINT has recently acquired a complete CFD software suite, consisting of GAMBIT, which is used to build the geometry and mesh, and FLUENT as the processor, or solver. This paper discusses several trial runs that were carried out on parts of the co-combustor plant, namely the underfire section and the mixing chamber section.

  15. The Second Student-Run Homeless Shelter

    Science.gov (United States)

    Seider, Scott C.

    2012-01-01

    From 1983-2011, the Harvard Square Homeless Shelter (HSHS) in Cambridge, Massachusetts, was the only student-run homeless shelter in the United States. However, college students at Villanova, Temple, Drexel, the University of Pennsylvania, and Swarthmore drew upon the HSHS model to open their own student-run homeless shelter in Philadelphia,…

  16. Constraints on running vacuum model with H(z) and fσ₈

    Energy Technology Data Exchange (ETDEWEB)

    Geng, Chao-Qiang [Chongqing University of Posts and Telecommunications, Chongqing, 400065 (China); Lee, Chung-Chi [DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Yin, Lu, E-mail: geng@phys.nthu.edu.tw, E-mail: lee.chungchi16@gmail.com, E-mail: yinlumail@foxmail.com [Department of Physics, National Tsing Hua University, Hsinchu, 300 Taiwan (China)

    2017-08-01

    We examine the running vacuum model with Λ(H) = 3νH² + Λ₀, where ν is the model parameter and Λ₀ is the cosmological constant. From the data of the cosmic microwave background radiation, weak lensing and baryon acoustic oscillations, along with measurements of the time-dependent Hubble parameter H(z) and the weighted linear growth f(z)σ₈(z), we find that ν = (1.37 +0.72/−0.95) × 10⁻⁴, with the best-fit χ² value slightly smaller than that in the ΛCDM model.
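
    As background for the fit, a standard result for this class of models (not spelled out in the abstract) is that the vacuum-matter energy exchange makes matter dilute slightly more slowly than in ΛCDM; symbols follow the abstract:

```latex
% Background evolution implied by \Lambda(H) = 3\nu H^2 + \Lambda_0
% when the vacuum exchanges energy with matter (standard result for
% this class of models; it reduces to \Lambda CDM as \nu \to 0):
\rho_m(a) \;=\; \rho_{m,0}\, a^{-3(1-\nu)} ,
\qquad
H^2(a) \;=\; \frac{8\pi G\,\rho_m(a) + \Lambda_0}{3\,(1-\nu)} .
```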

  17. DNA - A Thermal Energy System Simulator

    DEFF Research Database (Denmark)

    2008-01-01

    DNA is a general energy system simulator for both steady-state and dynamic simulation. The program includes a component model library, thermodynamic state models for fluids and solid fuels, and standard numerical solvers for differential and algebraic equation systems, and it is free and portable (open source, open use, standard FORTRAN77). DNA is text-based, usable from whichever editor you like best, and has been integrated with the emacs editor, which is usually available on Unix-like systems. For Windows, first install emacs and then run the DNA installer.

  18. Weather regimes in past climate atmospheric general circulation model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Kageyama, M.; Ramstein, G. [CEA Saclay, Gif-sur-Yvette (France). Lab. des Sci. du Climat et de l' Environnement; D' Andrea, F.; Vautard, R. [Laboratoire de Meteorologie Dynamique, Ecole Normale Superieure, Paris (France); Valdes, P.J. [Department of Meteorology, University of Reading (United Kingdom)

    1999-10-01

    We investigate the climates of the present day, the inception of the last glaciation (115,000 y ago) and the last glacial maximum (21,000 y ago) in the extratropical North Atlantic and Europe, as simulated by the Laboratoire de Meteorologie Dynamique atmospheric general circulation model. We use these simulations to investigate the low-frequency variability of the model in different climates. The aim is to evaluate whether changes in the intraseasonal variability, which we characterize using weather regimes, can help describe the impact of different boundary conditions on climate and give a better understanding of climate change processes. Weather regimes are defined as the most recurrent patterns in the 500 hPa geopotential height, using a clustering algorithm. The regimes found in the climate simulations of the present day and the inception of the last glaciation are similar in number and structure; it is the regimes' populations that differ between these climates, with an increase of the model's blocked regime and a decrease of the zonal regime at the inception of the last glaciation. This description reinforces the conclusions from a study of the differences between the climatological averages of the different runs and confirms the northeastward shift of the tail of the Atlantic storm-track, which would favour more precipitation over the site of growth of the Fennoscandian ice-sheet. On the other hand, the last glacial maximum results over this sector are not found to be classifiable, showing that the change in boundary conditions can be responsible for severe changes in the weather regimes and low-frequency dynamics. The LGM Atlantic low-frequency variability appears to be dominated by a large-scale retrogressing wave with a period of 40 to 50 days. (orig.)
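
    The regime-identification step (most recurrent 500 hPa patterns found by a clustering algorithm) can be sketched with plain k-means. This toy version clusters synthetic flattened anomaly maps; real studies typically cluster in a truncated EOF space, and all array sizes here are arbitrary:

```python
import numpy as np

# Toy regime classification: k-means (Lloyd's algorithm) on daily
# 500 hPa anomaly maps, flattened to vectors. Data here are synthetic.
rng = np.random.default_rng(0)
days, ny, nx, k = 900, 20, 40, 4
z500 = rng.normal(size=(days, ny * nx))       # stand-in for daily anomaly maps

centroids = z500[rng.choice(days, k, replace=False)]
for _ in range(50):                           # plain Lloyd iterations
    dist = ((z500[:, None, :] - centroids[None]) ** 2).sum(axis=-1)
    labels = dist.argmin(axis=1)
    centroids = np.stack([z500[labels == j].mean(axis=0) if np.any(labels == j)
                          else centroids[j] for j in range(k)])

pop = np.bincount(labels, minlength=k) / days
print("regime populations:", pop)             # compare these across climates
```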

  19. Effects of plyometric training on achilles tendon properties and shuttle running during a simulated cricket batting innings.

    Science.gov (United States)

    Houghton, Laurence A; Dawson, Brian T; Rubenson, Jonas

    2013-04-01

    The aim of this study was to determine whether intermittent shuttle running times (during a prolonged, simulated cricket batting innings) and Achilles tendon properties were affected by 8 weeks of plyometric training (PLYO, n = 7) or normal preseason (control [CON], n = 8). Turn (5-0-5-m agility) and 5-m sprint times were assessed using timing gates. Achilles tendon properties were determined using dynamometry, ultrasonography, and musculoskeletal geometry. Countermovement and squat jump heights were also assessed before and after training. Mean 5-0-5-m turn time did not significantly change in PLYO or CON (pre vs. post: 2.25 ± 0.08 vs. 2.22 ± 0.07 and 2.26 ± 0.06 vs. 2.25 ± 0.08 seconds, respectively). Mean 5-m sprint time did not significantly change in PLYO or CON (pre vs. post: 0.85 ± 0.02 vs. 0.84 ± 0.02 and 0.85 ± 0.03 vs. 0.85 ± 0.02 seconds, respectively). However, inferences from the smallest worthwhile change suggested that PLYO had a 51-72% chance of positive effects but only 6-15% chance of detrimental effects on shuttle running times. Jump heights increased only in PLYO (9.1-11.0%, p < 0.05). Achilles tendon mechanical properties (force, stiffness, elastic energy, strain, modulus) did not change in PLYO or CON. However, Achilles tendon cross-sectional area increased in PLYO (pre vs. post: 70 ± 7 vs. 79 ± 8 mm², p < 0.050). In conclusion, plyometric training had possible benefits on intermittent shuttle running times and improved jump performance. Also, plyometric training increased tendon cross-sectional area, but further investigation is required to determine whether this translates to decreased injury risk.

  20. Chaotic inflation with curvaton induced running

    DEFF Research Database (Denmark)

    Sloth, Martin Snoager

    2014-01-01

    While dust contamination now appears as a likely explanation of the apparent tension between the recent BICEP2 data and the Planck data, we will here explore the consequences of a large running in the spectral index, as suggested by the BICEP2 collaboration, as an alternative explanation of the apparent tension, one which would however be in conflict with the predictions of the simplest model of chaotic inflation. The large-field chaotic model is sensitive to UV physics, and the nontrivial running of the spectral index suggested by the BICEP2 collaboration could therefore, if true, be telling us something about the underlying UV physics. Here we explore the possibility that the running could instead be due to some other, less UV-sensitive, degree of freedom. As an example, we ask if it is possible that the curvature perturbation spectrum has a contribution from a curvaton which makes up for the large running in the spectrum. We find that this effect could mask…

  1. Teaching Bank Runs with Classroom Experiments

    Science.gov (United States)

    Balkenborg, Dieter; Kaplan, Todd; Miller, Timothy

    2011-01-01

    Once relegated to cinema or history lectures, bank runs have become a modern phenomenon that captures the interest of students. In this article, the authors explain a simple classroom experiment based on the Diamond-Dybvig model (1983) to demonstrate how a bank run--a seemingly irrational event--can occur rationally. They then present possible…
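
    The Diamond-Dybvig logic that the experiment demonstrates fits in a few lines: with a demand-deposit payout above the liquidation value, a patient depositor's best response flips from waiting to running once enough other depositors are expected to withdraw. The sketch below uses illustrative payoff parameters (r1, R), not values from the article:

```python
# Toy Diamond-Dybvig payoffs: a patient depositor decides to wait or run,
# given the fraction f of others expected to withdraw early.
# r1 = demand-deposit payout, R = mature asset return (illustrative values).
r1, R = 1.1, 1.5

def payoff_wait(f):
    left = max(0.0, 1.0 - f * r1)              # assets left after early payouts
    return left * R / (1.0 - f) if f < 1 else 0.0

def payoff_run(f):
    return min(r1, 1.0 / f) if f > 0 else r1   # sequential service, pro-rated if broke

for f in (0.0, 0.3, 0.6, 0.9):
    choice = "wait" if payoff_wait(f) >= payoff_run(f) else "run"
    print(f"expected early-withdrawal fraction {f:.1f}: best response = {choice}")
# Low expected f -> waiting is optimal (good equilibrium); high expected f ->
# running is optimal (bank-run equilibrium). Both are self-fulfilling.
```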

  2. INTELLIGENT DESIGN: ON THE EMULATION OF COSMOLOGICAL SIMULATIONS

    International Nuclear Information System (INIS)

    Schneider, Michael D.; Holm, Oskar; Knox, Lloyd

    2011-01-01

    Simulation design is the choice of locations in parameter space at which simulations are to be run and is the first step in building an emulator capable of quickly providing estimates of simulation results for arbitrary locations in the parameter space. We introduce an alteration to the 'OALHS' design used by Heitmann et al. that reduces the number of simulation runs required to achieve a fixed accuracy in our case study by a factor of two. We also compare interpolation procedures for emulators and find that interpolation via Gaussian process models and via the much-easier-to-implement polynomial interpolation have comparable accuracy. A very simple emulator-building procedure, consisting of a design sampled from the parameter prior distribution combined with interpolation via polynomials, also performs well. Although our primary motivation is efficient emulators of nonlinear cosmological N-body simulations, in an appendix we describe an emulator for the cosmic microwave background temperature power spectrum publicly available as a computer code.
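
    A minimal sketch of the design-then-interpolate pipeline, assuming a toy stand-in for the expensive simulation (the OALHS variant, prior-based sampling choices and real N-body outputs are beyond an abstract-level example):

```python
import numpy as np

# Space-filling design -> run "simulator" at design points -> fit a cheap
# polynomial emulator. The simulator here is a toy analytic function.
rng = np.random.default_rng(1)

def latin_hypercube(n, d):
    """n stratified samples in [0,1]^d: one point per stratum per dimension."""
    u = np.stack([rng.permutation(n) for _ in range(d)], axis=1)
    return (u + rng.random((n, d))) / n

def simulator(x):                        # stand-in for an expensive N-body run
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

X = latin_hypercube(40, 2)
y = simulator(X)

def features(X):                         # quadratic polynomial basis
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
Xtest = rng.random((5, 2))
print(np.abs(features(Xtest) @ coef - simulator(Xtest)))   # emulation error
```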

  3. MHSS: a material handling system simulator

    Energy Technology Data Exchange (ETDEWEB)

    Pomernacki, L.; Hollstien, R.B.

    1976-04-07

    A Material Handling System Simulator (MHSS) program is described that provides specialized functional blocks for modeling and simulation of nuclear material handling systems. Models of nuclear fuel fabrication plants may be built using functional blocks that simulate material receiving, storage, transport, inventory, processing, and shipping operations as well as the control and reporting tasks of operators or on-line computers. Blocks are also provided that allow the user to observe and gather statistical information on the dynamic behavior of simulated plants over single or replicated runs. Although it is currently being developed for the nuclear materials handling application, MHSS can be adapted to other industries in which material accountability is important. In this paper, emphasis is on the simulation methodology of the MHSS program with application to the nuclear material safeguards problem. (auth)
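
    A bare-bones sketch of the event-driven machinery behind such functional-block simulators, with made-up block names and delays (real MHSS blocks also track inventory, replications and material accountability):

```python
import heapq

# Minimal event-driven skeleton: material lots flow receive -> process ->
# ship, each block adding a fixed delay. All names/delays are illustrative.
events = []                                  # (time, seq, lot, stage) heap
DELAY = {"receive": 2.0, "process": 8.0, "ship": 1.0}
NEXT = {"receive": "process", "process": "ship", "ship": None}
seq = 0

for lot in range(3):                         # three lots arrive at t = 0, 5, 10
    heapq.heappush(events, (5.0 * lot, seq, lot, "receive")); seq += 1

while events:
    t, _, lot, stage = heapq.heappop(events)
    done = t + DELAY[stage]
    print(f"t={t:5.1f}  lot {lot}: start {stage}, finish {done:.1f}")
    if NEXT[stage]:                          # schedule the next functional block
        heapq.heappush(events, (done, seq, lot, NEXT[stage])); seq += 1
```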

  4. Driving Simulator Development and Performance Study

    OpenAIRE

    Juto, Erik

    2010-01-01

    The driving simulator is a vital tool for much of the research performed at the Swedish National Road and Transport Institute (VTI). Currently VTI possesses three driving simulators: two high-fidelity simulators developed and constructed by VTI, and a medium-fidelity simulator from the German company Dr.-Ing. Reiner Foerst GmbH. The two high-fidelity simulators run the same simulation software, developed at VTI; the medium-fidelity simulator runs a proprietary simulation software. At VTI there is...

  5. MONITOR: A computer model for estimating the costs of an integral monitored retrievable storage facility

    International Nuclear Information System (INIS)

    Reimus, P.W.; Sevigny, N.L.; Schutz, M.E.; Heller, R.A.

    1986-12-01

    The MONITOR model is a FORTRAN 77 based computer code that provides parametric life-cycle cost estimates for a monitored retrievable storage (MRS) facility. MONITOR is very flexible in that it can estimate the costs of an MRS facility operating under almost any conceivable nuclear waste logistics scenario. The model can also accommodate input data of varying degrees of complexity and detail (ranging from very simple to more complex), which makes it ideal for use in the MRS program, where new designs and new cost data are frequently offered for consideration. MONITOR can be run as an independent program, or it can be interfaced with the Waste System Transportation and Economic Simulation (WASTES) model, a program that simulates the movement of waste through a complete nuclear waste disposal system. The WASTES model drives the MONITOR model by providing it with the annual quantities of waste that are received, stored, and shipped at the MRS facility. Three runs of MONITOR are documented in this report. Two of the runs are for Version 1 of the MONITOR code, a simulation which uses the costs developed by the Ralph M. Parsons Company in the 2A (backup) version of the MRS cost estimate. In one of these runs MONITOR was run as an independent model, and in the other run MONITOR was run using an input file generated by the WASTES model. The two runs correspond to identical cases, and the fact that they gave identical results verified that the code performs the same calculations in both modes of operation. The third run was made for Version 2 of the MONITOR code, a simulation which uses the costs developed by the Ralph M. Parsons Company in the 2B (integral) version of the MRS cost estimate. This run was made with MONITOR run as an independent model. The results of several cases have been verified by hand calculations.

  6. A degradation approach to accelerate simulations to steady-state in a 3-D tracer transport model of the global ocean

    Energy Technology Data Exchange (ETDEWEB)

    Aumont, O.; Orr, J.C.; Marti, O. [CEA Saclay, Gif-sur-Yvette (France). Lab. de Modelisation du Climat et de l`Environnement; Jamous, D.; Monfray, P. [Centre des Faibles Radioactivites, Laboratoire mixte CNRS-CEA, L`Orme des Merisiers, Bt. 709/LMCE, CE Saclay, F-91191 Gif sur Yvette Cedex (France); Madec, G. [Laboratoire d`Oceanographie Dynamique et de Climatologie, (CNRS/ORSTOM/UPMC) Universite Paris VI, 4 place Jussieu, Paris (France)

    1998-02-01

    We have developed a new method to accelerate tracer simulations to steady-state in a 3D global ocean model, run off-line. Using this technique, our simulations for natural ¹⁴C ran 17 times faster when compared to those made with the standard nonaccelerated approach. For maximum acceleration we wish to initialize the model with tracer fields that are as close as possible to the final equilibrium solution. Our initial tracer fields were derived by judiciously constructing a much faster, lower-resolution (degraded), off-line model from advective and turbulent fields predicted from the parent on-line model, an ocean general circulation model (OGCM). No on-line version of the degraded model exists; it is based entirely on results from the parent OGCM. Degradation was made horizontally over sets of four adjacent grid-cell squares for each vertical layer of the parent model. However, final resolution did not suffer because as a second step, after allowing the degraded model to reach equilibrium, we used its tracer output to reinitialize the parent model (at the original resolution). After reinitialization, the parent model must then be integrated only a few hundred years before reaching equilibrium. To validate our degradation-integration technique (DEGINT), we compared ¹⁴C results from runs with and without this approach. Differences are less than 10 permille throughout 98.5% of the ocean volume. Predicted natural ¹⁴C appears reasonable over most of the ocean. In the Atlantic, modeled Δ¹⁴C indicates that, as observed, the North Atlantic Deep Water (NADW) fills the deep North Atlantic, and Antarctic Intermediate Water (AAIW) infiltrates northward. (orig.) With 12 figs., 1 tab., 42 refs.
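
    The horizontal degradation step (averaging sets of four adjacent cells per vertical layer) is essentially block-averaging. A minimal sketch, assuming a plain unweighted average where the paper would weight by cell geometry and preserve transports:

```python
import numpy as np

# Degrade each vertical layer by averaging 2x2 blocks of horizontal cells.
def degrade(field):
    """field: (nz, ny, nx) with ny, nx even -> (nz, ny//2, nx//2)."""
    nz, ny, nx = field.shape
    return field.reshape(nz, ny // 2, 2, nx // 2, 2).mean(axis=(2, 4))

tracer = np.random.default_rng(2).random((10, 64, 128))
coarse = degrade(tracer)          # spin this coarse model up to equilibrium,
print(coarse.shape)               # then interpolate back to re-initialize the parent
```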

  7. A simple running model with rolling contact and its role as a template for dynamic locomotion on a hexapod robot

    International Nuclear Information System (INIS)

    Huang, Ke-Jung; Huang, Chun-Kai; Lin, Pei-Chun

    2014-01-01

    We report on the development of a robot’s dynamic locomotion based on a template which fits the robot’s natural dynamics. The developed template is a low degree-of-freedom planar model for running with rolling contact, which we call rolling spring loaded inverted pendulum (R-SLIP). Originating from a reduced-order model of the RHex-style robot with compliant circular legs, the R-SLIP model also acts as the template for general dynamic running. The model has a torsional spring and a large circular arc as the distributed foot, so during locomotion it rolls on the ground with varied equivalent linear stiffness. This differs from the well-known spring loaded inverted pendulum (SLIP) model with fixed stiffness and ground contact points. Through dimensionless steps-to-fall and return map analysis, within a wide range of parameter spaces, the R-SLIP model is revealed to have self-stable gaits and a larger stability region than that of the SLIP model. The R-SLIP model is then embedded as the reduced-order ‘template’ in a more complex ‘anchor’, the RHex-style robot, via various mapping definitions between the template and the anchor. Experimental validation confirms that by merely deploying the stable running gaits of the R-SLIP model on the empirical robot with simple open-loop control strategy, the robot can easily initiate its dynamic running behaviors with a flight phase and can move with similar body state profiles to those of the model, in all five testing speeds. The robot, embedded with the SLIP model but performing walking locomotion, further confirms the importance of finding an adequate template of the robot for dynamic locomotion. (paper)
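
    For comparison with the text, here is the stance phase of the classic fixed-stiffness SLIP template that R-SLIP generalizes (the rolling-contact, torsional-spring variant would move the contact point along the arc as the leg rolls). All parameters are illustrative:

```python
import numpy as np

# Stance-phase dynamics of the fixed-stiffness SLIP template: a point mass
# on a massless linear leg spring, foot pinned at the origin.
m, k, L0, g = 80.0, 2.0e4, 1.0, 9.81     # mass (kg), stiffness (N/m), leg length (m)

def stance_step(state, dt=1e-4):
    x, y, vx, vy = state                 # mass position/velocity; foot at origin
    L = np.hypot(x, y)
    f = k * (L0 - L)                     # radial spring force (compression > 0)
    ax, ay = f * x / (L * m), f * y / (L * m) - g
    return np.array([x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt])

state = np.array([-0.2, np.sqrt(L0**2 - 0.2**2), 3.0, 0.0])   # touchdown condition
state = stance_step(state)               # one step to enter compression
while np.hypot(state[0], state[1]) < L0:                      # until the leg unloads
    state = stance_step(state)
print("takeoff velocity:", state[2:])    # feeds the ballistic flight phase
```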

  8. Assessing the Impact of Equipment Aging on System Performance Using Simulation Modeling Methods

    International Nuclear Information System (INIS)

    Gupta, N. K.

    2005-01-01

    The radiological Inductively Coupled Plasma Mass Spectrometer (ICP-MS) is used to analyze the radioactive samples collected from different radioactive material processing operations at the Savannah River Site (SRS). The expeditious processing of these samples is important for safe and reliable operations at SRS. As the radiological (RAD) ICP-MS machine ages, experience shows that replacement parts and repairs are difficult to obtain on time for reliable operations after 5 years of service. A discrete event model using the commercial software EXTEND was prepared to assess the impact on sample turnaround times as the ICP-MS gets older. The model was prepared using the sample statistics from the previous 4 years. Machine utilization rates were calculated for a new machine and for 5-, 10-, and 12-year-old machines. Computer simulations were run for these ages and the sample delay times calculated. The model was validated against the sample statistics collected from the previous 4 quarters, and 90% confidence intervals were calculated for the 10th, 25th, 50th, and 90th quantiles of the samples. The simulation results show that if 50% of the samples are needed on time for efficient site operations, a 10-year-old machine could take nearly 50 days longer to process these samples than a 5-year-old machine. This simulation effort quantifies the impact on sample turnaround time as the ICP-MS gets older.
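
    The structure of such an aging study can be sketched as a single-server queue whose failure probability and repair delay grow with machine age. Everything below (arrival and service rates, failure model, repair delays) is illustrative, not the SRS statistics:

```python
import numpy as np

# Toy single-server queue for an aging instrument: samples queue up while
# the machine occasionally fails; older machines fail more and wait longer
# for parts. All rates are made up for illustration.
rng = np.random.default_rng(3)

def mean_turnaround(age_years, n=20000):
    p_fail = 0.001 * age_years                 # failure chance per sample
    repair = 8.0 * age_years                   # hours to obtain parts and repair
    t_free, total = 0.0, 0.0
    arrivals = np.cumsum(rng.exponential(4.0, n))   # one sample every ~4 h
    for t in arrivals:
        start = max(t, t_free)                 # wait for the machine to free up
        service = rng.exponential(1.5)
        if rng.random() < p_fail:
            service += repair                  # breakdown during this sample
        t_free = start + service
        total += t_free - t                    # turnaround = finish - arrival
    return total / n

for age in (5, 10, 12):
    print(f"{age:>2}-year-old machine: mean turnaround {mean_turnaround(age):6.1f} h")
```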

  9. Large-eddy simulation of maritime deep tropical convection

    Directory of Open Access Journals (Sweden)

    Peter A Bogenschutz

    2009-12-01

    This study represents an attempt to apply Large-Eddy Simulation (LES) resolution to simulate deep tropical convection in near equilibrium for 24 hours over an area of about 205 x 205 km2, which is comparable to that of a typical horizontal grid cell in a global climate model. The simulation is driven by large-scale thermodynamic tendencies derived from mean conditions during the GATE Phase III field experiment. The LES uses 2048 x 2048 x 256 grid points with horizontal grid spacing of 100 m and vertical grid spacing ranging from 50 m in the boundary layer to 100 m in the free troposphere. The simulation reaches a near equilibrium deep convection regime in 12 hours. The simulated vertical cloud distribution exhibits a trimodal vertical distribution of deep, middle and shallow clouds similar to that often observed in the Tropics. A sensitivity experiment in which cold pools are suppressed by switching off the evaporation of precipitation results in much lower amounts of shallow and congestus clouds. Unlike the benchmark LES where the new deep clouds tend to appear along the edges of spreading cold pools, the deep clouds in the no-cold-pool experiment tend to reappear at the sites of the previous deep clouds and tend to be surrounded by extensive areas of sporadic shallow clouds. The vertical velocity statistics of updraft and downdraft cores below 6 km height are compared to aircraft observations made during GATE. The comparison shows generally good agreement, and strongly suggests that the LES simulation can be used as a benchmark to represent the dynamics of tropical deep convection on scales ranging from large turbulent eddies to mesoscale convective systems. The effect of horizontal grid resolution is examined by running the same case with progressively larger grid sizes of 200, 400, 800, and 1600 m. These runs show a reasonable agreement with the benchmark LES in statistics such as convective available potential energy, convective inhibition…

  10. Urban Run-off Volumes Dependency on Rainfall Measurement Method

    DEFF Research Database (Denmark)

    Pedersen, L.; Jensen, N. E.; Rasmussen, Michael R.

    2005-01-01

    Urban run-off is characterized by a fast response, since the large surface run-off in the catchments responds immediately to variations in the rainfall. Modeling such catchments is most often done with input from very few rain gauges, but rainfall varies greatly over small areas. … Rainfall at several spatial resolutions and single-gauge rainfall were fed to a MOUSE run-off model, and the flow and total volume over the event are evaluated.

  11. RANA, a real-time multi-agent system simulator

    DEFF Research Database (Denmark)

    Jørgensen, Søren Vissing; Demazeau, Yves; Hallam, John

    2016-01-01

    … for individualisation and abstraction while retaining efficiency. Events are managed by the C++ simulator core. The full run state can be recorded for post-processed visualisation or analysis. The new tool is demonstrated in three different cases: a mining robot simulation, which is purely action based; an agent-based setup that verifies the high precision exhibited by RANA's simulation core; and a state-based firefly-like agent simulation that models real-time responses to fellow agents' signals, in which event propagation and reception affect the result of the simulation.

  12. THE MARK I BUSINESS SYSTEM SIMULATION MODEL

    Science.gov (United States)

    The program undertook the development of a large-scale business simulation model as a vehicle for doing research in management controls. The major results of the program were the development of the Mark I business simulation model and the Simulation Package (SIMPAC). SIMPAC is a method and set of programs facilitating the construction of large simulation models. The object of this document is to describe the Mark I Corporation model, state why parts of the business were modeled as they were, and indicate the research applications of the model. (Author)

  13. How realistic are air quality hindcasts driven by forcings from climate model simulations?

    Science.gov (United States)

    Lacressonnière, G.; Peuch, V.-H.; Arteta, J.; Josse, B.; Joly, M.; Marécal, V.; Saint Martin, D.; Déqué, M.; Watson, L.

    2012-12-01

    Predicting how European air quality could evolve over the next decades in the context of changing climate requires the use of climate models to produce results that can be averaged in a climatologically and statistically sound manner. This is a very different approach from the one that is generally used for air quality hindcasts for the present period, where analysed meteorological fields are used to represent each specific date and hour. Differences arise both from the fact that a climate model run results in a pure model output, with no influence from observations (which are useful to correct for a range of errors), and from the fact that in a "climate" set-up, simulations on a given day, month or even season cannot be related to any specific period of time (but can just be interpreted in a climatological sense). Hence, although an air quality model can be thoroughly validated in a "realistic" set-up using analysed meteorological fields, the question remains of how far its outputs can be interpreted in a "climate" set-up. For this purpose, we focus on Europe and on the current decade using three 5-yr simulations performed with the multiscale chemistry-transport model MOCAGE, with meteorological forcings taken either from operational meteorological analyses or from climate simulations. We investigate how statistical skill indicators compare in the different simulations, discriminating also the effects of meteorology on atmospheric fields (winds, temperature, humidity, pressure, etc.) and on the dependent emissions and deposition processes (volatile organic compound emissions, deposition velocities, etc.). Our results show in particular how differing boundary layer heights and deposition velocities affect horizontal and vertical distributions of species. When the model is driven by operational analyses, the simulation accurately reproduces the observed values of O3, NOx, SO2 and, with some bias that can be explained by the set-up, PM10. We study how the simulations driven by climate…

  14. Wavelet transform-vector quantization compression of supercomputer ocean model simulation output

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J N; Brislawn, C M

    1992-11-12

    We describe a new procedure for efficient compression of digital information for storage and transmission purposes. The algorithm involves a discrete wavelet transform subband decomposition of the data set, followed by vector quantization of the wavelet transform coefficients using application-specific vector quantizers. The new vector quantizer design procedure optimizes the assignment of both memory resources and vector dimensions to the transform subbands by minimizing an exponential rate-distortion functional subject to constraints on both overall bit-rate and encoder complexity. The wavelet-vector quantization method, which originates in digital image compression, is applicable to the compression of other multidimensional data sets possessing some degree of smoothness. In this paper we discuss the use of this technique for compressing the output of supercomputer simulations of global climate models. The data presented here come from Semtner-Chervin global ocean models run at the National Center for Atmospheric Research and at the Los Alamos Advanced Computing Laboratory.
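
    A toy version of the front half of the pipeline, assuming a one-level Haar split and uniform scalar quantization as a drastically simplified stand-in for the trained subband vector quantizers:

```python
import numpy as np

# One-level 2-D Haar subband split plus uniform quantization of the detail
# bands -- a simplified stand-in for wavelet transform + per-subband VQ.
def split(x, axis):
    a = np.take(x, np.arange(0, x.shape[axis], 2), axis=axis)
    b = np.take(x, np.arange(1, x.shape[axis], 2), axis=axis)
    return (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)

def haar2d(img):
    lo, hi = split(img, 0)                    # rows first
    ll, lh = split(lo, 1)                     # then columns
    hl, hh = split(hi, 1)
    return ll, (lh, hl, hh)

rng = np.random.default_rng(4)
field = np.cumsum(np.cumsum(rng.random((64, 64)), 0), 1) / 64   # smooth-ish field
ll, details = haar2d(field)

q = 0.1                                       # uniform quantizer step size
zeros = sum(int((np.round(d / q) == 0).sum()) for d in details)
print(f"{zeros} of {3 * ll.size} detail coefficients quantize to zero")
```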

  15. Investigation of attenuation correction in SPECT using textural features, Monte Carlo simulations, and computational anthropomorphic models.

    Science.gov (United States)

    Spirou, Spiridon V; Papadimitroulas, Panagiotis; Liakou, Paraskevi; Georgoulias, Panagiotis; Loudos, George

    2015-09-01

    To present and evaluate a new methodology to investigate the effect of attenuation correction (AC) in single-photon emission computed tomography (SPECT) using textural features analysis, Monte Carlo techniques, and a computational anthropomorphic model. The GATE Monte Carlo toolkit was used to simulate SPECT experiments using the XCAT computational anthropomorphic model, filled with a realistic biodistribution of (99m)Tc-N-DBODC. The simulated gamma camera was the Siemens ECAM Dual-Head, equipped with a parallel hole lead collimator, with an image resolution of 3.54 × 3.54 mm(2). Thirty-six equispaced camera positions, spanning a full 360° arc, were simulated. Projections were calculated after applying a ± 20% energy window or after eliminating all scattered photons. The activity of the radioisotope was reconstructed using the MLEM algorithm. Photon attenuation was accounted for by calculating the radiological pathlength in a perpendicular line from the center of each voxel to the gamma camera. Twenty-two textural features were calculated on each slice, with and without AC, using 16 and 64 gray levels. A mask was used to identify only those pixels that belonged to each organ. Twelve of the 22 features showed almost no dependence on AC, irrespective of the organ involved. In both the heart and the liver, the mean and SD were the features most affected by AC. In the liver, six features were affected by AC only on some slices. Depending on the slice, skewness decreased by 22-34% with AC, kurtosis by 35-50%, long-run emphasis mean by 71-91%, and long-run emphasis range by 62-95%. In contrast, gray-level non-uniformity mean increased by 78-218% compared with the value without AC and run percentage mean by 51-159%. These results were not affected by the number of gray levels (16 vs. 64) or the data used for reconstruction: with the energy window or without scattered photons. The mean and SD were the main features affected by AC. In the heart, no other feature was
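
    Among the run-length features the study reports as AC-sensitive, long-run emphasis is easy to state precisely. A minimal sketch computed on horizontal runs of a quantized slice (the gray-level count and test images are arbitrary):

```python
import numpy as np

# Long-run emphasis from horizontal gray-level runs of a quantized image.
def run_lengths(row):
    """Lengths of consecutive equal-value runs in a 1-D array."""
    edges = np.flatnonzero(np.diff(row)) + 1
    return np.diff(np.concatenate(([0], edges, [len(row)])))

def long_run_emphasis(img, levels=16):
    q = np.clip((img * levels).astype(int), 0, levels - 1)   # img scaled to [0,1)
    lens = np.concatenate([run_lengths(r) for r in q])
    return float((lens.astype(float) ** 2).sum() / lens.size)

noise = np.random.default_rng(5).random((64, 64))
print(long_run_emphasis(noise))                  # near 1 for noisy slices
print(long_run_emphasis(np.full((64, 64), 0.3))) # one 64-pixel run per row -> 64**2
```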

  16. CDF run II run control and online monitor

    International Nuclear Information System (INIS)

    Arisawa, T.; Ikado, K.; Badgett, W.; Chlebana, F.; Maeshima, K.; McCrory, E.; Meyer, A.; Patrick, J.; Wenzel, H.; Stadie, H.; Wagner, W.; Veramendi, G.

    2001-01-01

    The authors discuss the CDF Run II Run Control and online event monitoring system. Run Control is the top level application that controls the data acquisition activities across 150 front end VME crates and related service processes. Run Control is a real-time multi-threaded application implemented in Java with flexible state machines, using JDBC database connections to configure clients, and including a user-friendly and powerful graphical user interface. The CDF online event monitoring system consists of several parts: the event monitoring programs, the display to browse their results, the server program which communicates with the display via socket connections, the error receiver which displays error messages and communicates with Run Control, and the state manager which monitors the state of the monitor programs.

  17. The Effect of Training in Minimalist Running Shoes on Running Economy.

    Science.gov (United States)

    Ridge, Sarah T; Standifird, Tyler; Rivera, Jessica; Johnson, A Wayne; Mitchell, Ulrike; Hunter, Iain

    2015-09-01

    The purpose of this study was to examine the effect of minimalist running shoes on oxygen uptake during running before and after a 10-week transition from traditional to minimalist running shoes. Twenty-five recreational runners (no previous experience in minimalist running shoes) participated in submaximal VO2 testing at a self-selected pace while wearing traditional and minimalist running shoes. Ten of the 25 runners gradually transitioned to minimalist running shoes over 10 weeks (experimental group), while the other 15 maintained their typical training regimen (control group). All participants repeated submaximal VO2 testing at the end of 10 weeks. Testing included a 3 minute warm-up, 3 minutes of running in the first pair of shoes, and 3 minutes of running in the second pair of shoes. Shoe order was randomized. Average oxygen uptake was calculated during the last minute of running in each condition. The average change from pre- to post-training for the control group during testing in traditional and minimalist shoes was an improvement of 3.1 ± 15.2% and 2.8 ± 16.2%, respectively. The average change from pre- to post-training for the experimental group during testing in traditional and minimalist shoes was an improvement of 8.4 ± 7.2% and 10.4 ± 6.9%, respectively. Data were analyzed using a 2-way repeated measures ANOVA. There were no significant interaction effects, but the overall improvement in running economy across time (6.15%) was significant (p = 0.015). Running in minimalist running shoes improves running economy in experienced, traditionally shod runners, but not significantly more than when running in traditional running shoes. Improvement in running economy in both groups, regardless of shoe type, may have been due to compliance with training over the 10-week study period and/or familiarity with testing procedures. Key points: Running in minimalist footwear did not result in a change in running economy compared to running in traditional footwear.

  18. Short-Run Contexts and Imperfect Testing for Continuous Sampling Plans

    Directory of Open Access Journals (Sweden)

    Mirella Rodriguez

    2018-04-01

    Continuous sampling plans are used to ensure a high level of quality for items produced in long-run contexts. The basic idea of these plans is to alternate between 100% inspection and a reduced rate of inspection frequency. Any inspected item that is found to be defective is replaced with a non-defective item. Because not all items are inspected, some defective items will escape to the customer. Analytical formulas have been developed that measure both the customer perceived quality and also the level of inspection effort. The analysis of continuous sampling plans does not apply to short-run contexts, where only a finite-size batch of items is to be produced. In this paper, a simulation algorithm is designed and implemented to analyze the customer perceived quality and the level of inspection effort for short-run contexts. A parameter representing the effectiveness of the test used during inspection is introduced to the analysis, and an analytical approximation is discussed. An application of the simulation algorithm that helped answer questions for the U.S. Navy is discussed.
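
    The alternation the abstract describes is the classic CSP-1 rule: inspect 100% until i consecutive conforming items are seen, then inspect only a fraction f, returning to 100% inspection whenever a defective is detected. A short-run simulation sketch with an imperfect test; all parameter values are illustrative, not the paper's:

```python
import numpy as np

# CSP-1 in a finite batch with an imperfect test: detected defectives are
# replaced; undetected or uninspected defectives escape to the customer.
rng = np.random.default_rng(6)

def csp1_run(batch=500, p=0.02, i=20, f=0.1, detect=0.9):
    inspected = escaped = streak = 0
    full = True                                # start in 100% inspection
    for _ in range(batch):
        defective = rng.random() < p
        if full or rng.random() < f:           # item gets inspected
            inspected += 1
            if defective and rng.random() < detect:
                full, streak = True, 0         # found: replace item, back to 100%
                continue
        if defective:
            escaped += 1                       # missed by the test or never inspected
        if full:
            streak += 1
            if streak >= i:                    # clearance number met:
                full, streak = False, 0        # switch to sampling inspection
    return inspected / batch, escaped / batch

effort, aoq = np.mean([csp1_run() for _ in range(2000)], axis=0)
print(f"avg fraction inspected {effort:.3f}, avg outgoing defect rate {aoq:.4f}")
```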

  19. Stochastic models: theory and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2008-03-01

    Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
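
    As a concrete instance of "generating samples of the appropriate stochastic model", here is a spectral-representation sampler for a stationary Gaussian process with exponential covariance; the covariance choice and all constants are assumptions for illustration, not the report's algorithms:

```python
import numpy as np

# Shinozuka-style sum of cosines with random phases, targeting the
# exponential covariance c(tau) = exp(-|tau| / ell), so c(0) = 1.
rng = np.random.default_rng(7)

def sample_gp(t, ell=0.5, n_terms=512, w_max=200.0):
    w = np.linspace(0.0, w_max, n_terms, endpoint=False) + w_max / (2 * n_terms)
    dw = w_max / n_terms
    s = (ell / np.pi) / (1.0 + (ell * w) ** 2)      # two-sided spectral density
    phases = rng.uniform(0.0, 2 * np.pi, n_terms)
    amps = np.sqrt(2.0 * s * dw)                    # energy per frequency bin
    return np.sqrt(2.0) * (amps * np.cos(np.outer(t, w) + phases)).sum(axis=1)

t = np.linspace(0, 5, 1000)
paths = np.stack([sample_gp(t) for _ in range(200)])
print("sample variance ~", paths.var(axis=0).mean())   # should be near c(0) = 1
```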

  20. Limits to high-speed simulations of spiking neural networks using general-purpose computers.

    Science.gov (United States)

    Zenke, Friedemann; Gerstner, Wulfram

    2014-01-01

    To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.
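
    The cost structure described above is easy to see in a minimal time-step-driven LIF network: every step touches all neurons and propagates that step's spikes, so wall-clock time scales with simulated model time regardless of activity. All parameters below are generic placeholders, not settings from the benchmarked simulators:

```python
import numpy as np

# Minimal time-step-driven leaky integrate-and-fire network.
rng = np.random.default_rng(8)
N, dt, tau = 1000, 1e-4, 0.02            # neurons, step (s), membrane time const (s)
v_th, v_reset = 1.0, 0.0
W = rng.normal(0.0, 0.03, (N, N)) * (rng.random((N, N)) < 0.1)  # ~10% connectivity

v = rng.random(N)                        # random initial membrane potentials
n_spikes = 0
for _ in range(int(0.1 / dt)):           # 100 ms of model time = 1000 steps
    drive = 1.2 + 0.5 * rng.standard_normal(N)   # noisy suprathreshold input
    v += (dt / tau) * (drive - v)        # leaky integration toward the drive
    spiking = v >= v_th
    n_spikes += int(spiking.sum())
    v[spiking] = v_reset
    v += W @ spiking.astype(float)       # deliver this step's spikes to targets
print("mean firing rate (Hz):", n_spikes / (N * 0.1))
```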