WorldWideScience

Sample records for model simulations run

  1. The Trick Simulation Toolkit: A NASA/Open source Framework for Running Time Based Physics Models

    Science.gov (United States)

    Penn, John M.

    2016-01-01

    The Trick Simulation Toolkit is a simulation development environment used to create high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. Its purpose is to generate a simulation executable from a collection of user-supplied models and a simulation definition file. For each Trick-based simulation, Trick automatically provides job scheduling, numerical integration, the ability to write and restore human readable checkpoints, data recording, interactive variable manipulation, a run-time interpreter, and many other commonly needed capabilities. This allows simulation developers to concentrate on their domain expertise and the algorithms and equations of their models. Also included in Trick are tools for plotting recorded data and various other supporting utilities and libraries. Trick is written in C/C++ and Java and supports both Linux and MacOSX computer operating systems. This paper describes Trick's design and use at NASA Johnson Space Center.

  2. Running Parallel Discrete Event Simulators on Sierra

    Energy Technology Data Exchange (ETDEWEB)

    Barnes, P. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Jefferson, D. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-12-03

    In this proposal we consider porting the ROSS/Charm++ simulator and the discrete event models that run under its control so that they run on the Sierra architecture and make efficient use of the Volta GPUs.

  3. Debris flow run-out simulation and analysis using a dynamic model

    Science.gov (United States)

    Melo, Raquel; van Asch, Theo; Zêzere, José L.

    2018-02-01

    Only two months after a huge forest fire occurred in the upper part of a valley located in central Portugal, several debris flows were triggered by intense rainfall. The event caused infrastructural and economic damage, although no lives were lost. The present research aims to simulate the run-out of two debris flows that occurred during the event and to calculate, via back-analysis, the rheological parameters and the excess rain involved. A dynamic model was used which integrates surface runoff, concentrated erosion along the channels, and propagation and deposition of flow material. Afterwards, the model was validated using 32 debris flows triggered during the same event that were not considered for calibration. The rheological and entrainment parameters obtained for the most accurate simulation were then used to run three scenarios of debris flow run-out at the basin scale. The results were overlaid on the existing buildings exposed in the study area, and the worst-case scenario showed a potential inundation that may affect 345 buildings. In addition, six streams where debris flows occurred in the past and caused material damage and loss of lives were identified.

  4. Development of a simulation model for compression ignition engine running with ignition improved blend

    Directory of Open Access Journals (Sweden)

    Sudeshkumar Ponnusamy Moranahalli

    2011-01-01

    The present work describes the thermodynamic and heat transfer models used in a computer program that simulates a direct injection compression ignition engine fuelled with a diesel and ignition improver blend, predicting its combustion and emission characteristics with a classical two-zone approach. One zone consists of pure air (the non-burning zone); the other consists of fuel and combustion products (the burning zone). The first law of thermodynamics and state equations are applied in each of the two zones to yield cylinder temperature and pressure histories. Using the two-zone combustion model, the combustion parameters and the chemical equilibrium composition were determined. To validate the model, an experimental investigation was conducted on a single cylinder direct injection diesel engine fuelled with 12% by volume of 2-ethoxy ethanol blended with diesel fuel. Addition of the ignition improver to diesel fuel decreases exhaust smoke and increases thermal efficiency over the range of power outputs. Good agreement was observed between simulated and experimental results, and the proposed model requires little computational time for a complete run.
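    The two-zone energy balance referred to above is conventionally written per crank angle (a generic textbook form, added here for orientation; it is not quoted from the paper). For each zone i,

    \[ \frac{dU_i}{d\theta} = \frac{dQ_i}{d\theta} - p\,\frac{dV_i}{d\theta} + \sum_j h_j\,\frac{dm_{ij}}{d\theta} \]

    where U_i is the zone internal energy, Q_i the heat transfer to the zone, p the common cylinder pressure, V_i the zone volume, and the last term is the enthalpy flux of mass exchanged between zones; together with an ideal gas state equation per zone, this yields the temperature and pressure histories.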

  5. Simulating run-up on steep slopes with operational Boussinesq models; capabilities, spurious effects and instabilities

    Directory of Open Access Journals (Sweden)

    F. Løvholt

    2013-06-01

    Tsunamis induced by rock slides plunging into fjords constitute a severe threat to local coastal communities. The rock slide impact may give rise to highly non-linear waves in the near field, and because the wave lengths are relatively short, frequency dispersion comes into play. Fjord systems are rugged with steep slopes, and modeling non-linear dispersive waves in this environment with simultaneous run-up is demanding. We have run an operational Boussinesq-type TVD (total variation diminishing) model using different run-up formulations. Two different tests are considered: inundation on steep slopes and propagation in a trapezoidal channel. In addition, a set of Lagrangian models serves as reference models. Demanding test cases with solitary waves with amplitudes ranging from 0.1 to 0.5 were applied, with slopes ranging from 10 to 50°. Different run-up formulations yielded clearly different accuracy and stability, and only some provided accuracy similar to that of the reference models. The test cases revealed that the model was prone to instabilities for large non-linearity and fine resolution. Some of the instabilities were linked with false breaking during the first positive inundation, which was not observed for the reference models. None of the models were able to handle the bore forming during drawdown, however. The instabilities are linked to short-crested undulations on the grid scale and appear at fine resolution during inundation. As a consequence, convergence was not always obtained. There is reason to believe that the instability may be a general problem for Boussinesq models in fjords.

  6. The Trick Simulation Toolkit: A NASA/Open source Framework for Running Time Based Physics Models

    Science.gov (United States)

    Penn, John M.; Lin, Alexander S.

    2016-01-01

    This paper describes the design and use of the Trick Simulation Toolkit, a simulation development environment for creating high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. It describes Trick's design goals and how the development environment attempts to achieve those goals. It describes how Trick is used in some of the many training and engineering simulations at NASA. Finally, it describes the Trick NASA/Open source project on Github.

  7. Damage Propagation Modeling for Aircraft Engine Run-to-Failure Simulation

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper describes how damage propagation can be modeled within the modules of aircraft gas turbine engines. To that end, response surfaces of all sensors are...

  8. CMS Full Simulation for Run-2

    CERN Document Server

    Hildreth, M; Lange, D J; Kortelainen, M J

    2015-01-01

    During the LHC shutdown between Run 1 and Run 2, intensive development was carried out to improve the performance of the CMS simulation. For physics improvements, the simulation was migrated from Geant4 9.4p03 to Geant4 10.0p02. CPU performance was improved by introducing the Russian roulette method inside the CMS calorimeters, optimizing CMS simulation sub-libraries, and using a static build of the simulation executable. As a result of these efforts, the CMS simulation has been sped up by about a factor of two. In this work we describe updates to the different software components of the CMS simulation. The development of a multi-threaded (MT) simulation approach for CMS is also discussed.
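    The Russian roulette method mentioned above is a standard variance-reduction technique for tracking low-weight particles. The sketch below is a generic illustration of the idea, not CMS code; the threshold and survival probability are arbitrary example values.

```python
import random

def russian_roulette(weight, threshold=0.01, survival_prob=0.1):
    """Kill most low-weight particles, but let survivors carry
    proportionally more weight so the expected value is unchanged."""
    if weight >= threshold:
        return weight                    # particle continues unchanged
    if random.random() < survival_prob:
        return weight / survival_prob    # rare survivor, boosted weight
    return 0.0                           # particle killed, tracking stops
```

    Applied inside calorimeters, this prunes the large population of low-energy secondaries that otherwise dominates CPU time.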

  9. Running agent-based simulations. Enterprise model / computer simulation.

    OpenAIRE

    Meyer, David; Karatzoglou, Alexandros; Buchta, Christian; Leisch, Friedrich; Hornik, Kurt

    2001-01-01

    When running agent-based simulations using ready-made components, one usually faces heterogeneity problems both in the agents' implementation and in the underlying platform. To circumvent these kinds of hindrances, we introduce a wrapper technique for mapping the functionality of agents living in an interpreter-based environment to a standardized CORBA interface, thus facilitating the task for any control mechanism (such as a simulation manager), which will just need to handle one set of commands...

  10. How Many Times Should One Run a Computational Simulation?

    DEFF Research Database (Denmark)

    Seri, Raffaello; Secchi, Davide

    2017-01-01

    This chapter is an attempt to answer the question “how many runs of a computational simulation should one do,” and it gives an answer by means of statistical analysis. After defining the nature of the problem and which types of simulation are mostly affected by it, the article introduces statistical power analysis as a way to determine the appropriate number of runs. Two examples are then produced using results from an agent-based model. The reader is then guided through the application of this statistical technique and exposed to its limits and potentials.
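    In practice, power analysis of this kind fixes a minimum detectable effect size, a significance level and a desired power, and solves for the sample size, here the number of runs. A minimal sketch using statsmodels (illustrative values, not the chapter's own code):

```python
# How many runs per condition to detect a small effect (Cohen's d = 0.2)
# at alpha = 0.05 with 95% power, for a two-sample comparison of means.
from statsmodels.stats.power import TTestIndPower

n_runs = TTestIndPower().solve_power(effect_size=0.2, alpha=0.05, power=0.95)
print(f"runs needed per condition: {n_runs:.0f}")
```

    Smaller expected effects or stricter error rates drive the required number of runs up rapidly.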

  11. Humans running in place on water at simulated reduced gravity.

    Directory of Open Access Journals (Sweden)

    Alberto E Minetti

    BACKGROUND: On Earth only a few legged species, such as water strider insects, some aquatic birds and lizards, can run on water. For most other species, including humans, this is precluded by body size and proportions, lack of appropriate appendages, and limited muscle power. However, if gravity is reduced to less than Earth's gravity, running on water should require less muscle power. Here we use a hydrodynamic model to predict the gravity levels at which humans should be able to run on water. We test these predictions in the laboratory using a reduced gravity simulator. METHODOLOGY/PRINCIPAL FINDINGS: We adapted a model equation, previously used by Glasheen and McMahon to explain the dynamics of the basilisk lizard, to predict the body mass, stride frequency and gravity necessary for a person to run on water. Progressive body-weight unloading of a person running in place on a wading pool confirmed the theoretical predictions that a person could run on water, at lunar (or lower) gravity levels, using relatively small rigid fins. Three-dimensional motion capture of reflective markers on major joint centers showed that humans, similarly to the basilisk lizard and the western grebe, keep the head-trunk segment at a nearly constant height, despite the high stride frequency and the intensive locomotor effort. Trunk stabilization at a nearly constant height differentiates running on water from other, more usual human gaits. CONCLUSIONS/SIGNIFICANCE: The results showed that a hydrodynamic model of lizards running on water can also be applied to humans, despite the enormous difference in body size and morphology.
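    The gravity dependence can be seen from the per-stride support condition underlying such hydrodynamic models (a schematic gloss, not the paper's exact equations): the vertical impulse generated by the foot slap and the drag-based stroke, delivered at stride frequency f, must balance body weight,

    \[ f\,\big(J_{\mathrm{slap}} + J_{\mathrm{stroke}}\big) \;\ge\; m g, \qquad J_{\mathrm{stroke}} \approx \int_0^{t_s} \tfrac{1}{2}\,\rho\, C_D\, A\, u(t)^2 \, dt \]

    where A is the effective foot (or fin) area and u the downward stroke speed. Lowering g shrinks the right-hand side while the achievable impulse stays roughly fixed, which is why running on water becomes feasible at lunar gravity.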

  12. Analysis of the Automobile Market : Modeling the Long-Run Determinants of the Demand for Automobiles : Volume 2. Simulation Analysis Using the Wharton EFA Automobile Demand Model

    Science.gov (United States)

    1979-12-01

    An econometric model is developed which provides long-run policy analysis and forecasting of annual trends, for U.S. auto stock, new sales, and their composition by auto size-class. The concept of "desired" (equilibrium) stock is introduced. "Desired...

  13. Polarization simulations in the RHIC run 15 lattice

    Energy Technology Data Exchange (ETDEWEB)

    Meot, F. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Huang, H. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Luo, Y. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Ranjbar, V. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Robert-Demolaize, G. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; White, S. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.

    2015-05-03

    RHIC polarized proton Run 15 uses a new acceleration ramp optics compared to RHIC Run 13 and earlier runs, in connection with electron-lens beam-beam compensation developments. The new optics induces different strengths in the depolarizing snake resonance sequence, from injection to top energy. As a consequence, polarization transport along the new ramp has been investigated, based on spin tracking simulations. Sample results are reported and discussed.

  14. A Novel Technique for Running the NASA Legacy Code LAPIN Synchronously With Simulations Developed Using Simulink

    Science.gov (United States)

    Vrnak, Daniel R.; Stueber, Thomas J.; Le, Dzu K.

    2012-01-01

    This report presents a method for running a dynamic legacy inlet simulation in concert with another dynamic simulation that uses a graphical interface. The legacy code, NASA's LArge Perturbation INlet (LAPIN) model, was written in FORTRAN 77 (The Portland Group, Lake Oswego, OR) to run in a command shell, similar to other applications that used the Microsoft Disk Operating System (MS-DOS) (Microsoft Corporation, Redmond, WA). Simulink (MathWorks, Natick, MA) is a dynamic simulation environment that runs on modern graphical operating systems. The product of this work has both simulations, LAPIN and Simulink, running synchronously on the same computer with periodic data exchanges. Implementing the method described in this paper avoided extensive changes to the legacy code and preserved its basic operating procedure. This paper presents a novel method that promotes inter-task data communication between the synchronously running processes.
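    The core of such a scheme is a blocking exchange executed by both processes at each communication period. The following sketch shows one way to do this over a local TCP socket; it illustrates the pattern only, not the report's implementation, and the fixed-length double-precision payload is an assumption.

```python
import socket
import struct

def exchange(sock: socket.socket, outputs: list) -> list:
    """Send this side's output values, then block until the peer's
    values arrive, keeping the two simulations in lock-step."""
    payload = struct.pack(f"{len(outputs)}d", *outputs)
    sock.sendall(payload)
    buf = b""
    while len(buf) < len(payload):        # recv may return partial data
        chunk = sock.recv(len(payload) - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return list(struct.unpack(f"{len(outputs)}d", buf))
```

    Because both sides block until the exchange completes, neither simulation can run ahead of the other between synchronization points.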

  15. Numerical Modelling of Wave Run-Up

    DEFF Research Database (Denmark)

    Ramirez, Jorge Robert Rodriguez; Frigaard, Peter; Andersen, Thomas Lykke

    2011-01-01

    Wave loads are important in problems related to offshore structures, such as wave run-up and slamming. The computation of such wave problems is carried out by CFD models. This paper presents one model, NS3, which solves the 3D Navier-Stokes equations and uses the Volume of Fluid (VOF) method to treat the free...

  16. Long-Run Properties of Large-Scale Macroeconometric Models

    OpenAIRE

    Kenneth F. Wallis; John D. Whitley

    1987-01-01

    We consider alternative approaches to the evaluation of the long-run properties of dynamic nonlinear macroeconometric models, namely dynamic simulation over an extended database, or the construction and direct solution of the steady-state version of the model. An application to a small model of the UK economy is presented. The model is found to be unstable, but a stable form can be produced by simple alterations to the structure.

  17. Running-mass inflation model and WMAP

    International Nuclear Information System (INIS)

    Covi, Laura; Lyth, David H.; Melchiorri, Alessandro; Odman, Carolina J.

    2004-01-01

    We consider the observational constraints on the running-mass inflationary model, and, in particular, on the scale dependence of the spectral index, from the new cosmic microwave background (CMB) anisotropy measurements performed by WMAP and from new clustering data from the SLOAN survey. We find that the data strongly constrain a significant positive scale dependence of n, and we translate the analysis into bounds on the physical parameters of the inflaton potential. Looking deeper into specific types of interaction (gauge and Yukawa), we find that the parameter space is significantly constrained by the new data, but that the running-mass model remains viable.

  18. Running vacuum cosmological models: linear scalar perturbations

    Energy Technology Data Exchange (ETDEWEB)

    Perico, E.L.D. [Instituto de Física, Universidade de São Paulo, Rua do Matão 1371, CEP 05508-090, São Paulo, SP (Brazil); Tamayo, D.A., E-mail: elduartep@usp.br, E-mail: tamayo@if.usp.br [Departamento de Astronomia, Universidade de São Paulo, Rua do Matão 1226, CEP 05508-900, São Paulo, SP (Brazil)

    2017-08-01

    In cosmology, phenomenologically motivated expressions for running vacuum are commonly parameterized as linear functions, typically denoted Λ(H²) or Λ(R). Such models assume an equation of state for the vacuum given by p_Λ = -ρ_Λ, relating its background pressure p_Λ to its mean energy density ρ_Λ ≡ Λ/8πG. This equation of state suggests that the vacuum dynamics is due to an interaction with the matter content of the universe. Most approaches studying the observational impact of these models only consider the interaction between the vacuum and the transient dominant matter component of the universe. We extend such models by assuming that the running vacuum is the sum of independent contributions, namely ρ_Λ = Σ_i ρ_Λi. Each vacuum component Λ_i is associated and interacts with one of the matter components i at both the background and perturbation levels. We derive the evolution equations for the linear scalar vacuum and matter perturbations in these two scenarios, and identify the running vacuum imprints on the cosmic microwave background anisotropies as well as on the matter power spectrum. In the Λ(H²) scenario the vacuum is coupled with every matter component, whereas the Λ(R) description only leads to a coupling between the vacuum and non-relativistic matter, producing different effects on the matter power spectrum.
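    A plausible schematic of the background-level coupling described here (a paraphrase for orientation, not an equation quoted from the paper) is that each matter component i gains exactly the energy its partner vacuum component loses:

    \[ \dot{\rho}_i + 3H\,(1+w_i)\,\rho_i \;=\; -\,\dot{\rho}_{\Lambda i} \]

    with w_i the equation-of-state parameter of component i; summing over i recovers total energy conservation.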

  19. Model for radionuclide transport in running waters

    Energy Technology Data Exchange (ETDEWEB)

    Jonsson, Karin; Elert, Mark [Kemakta Konsult AB, Stockholm (Sweden)

    2005-11-15

    Two sites in Sweden are currently under investigation by SKB for their suitability as locations for a deep repository of radioactive waste: the Forsmark and Simpevarp/Laxemar areas. As a part of the safety assessment, SKB has formulated a biosphere model with different sub-models for different parts of the ecosystem, in order to be able to predict the dose to humans following a possible radionuclide discharge from a future deep repository. In this report, a new model concept describing radionuclide transport in streams is presented. The main difference from the previous model for running water used by SKB, where only dilution of the inflow of radionuclides was considered, is that the new model also parameterizes the exchange processes along the stream. This makes it possible to investigate the effect of retention on the transport and to estimate the resulting concentrations in the different parts of the system. The concentrations determined with this new model could later be used for order-of-magnitude predictions of the dose to humans. The presented model concept is divided into two parts: a hydraulic model and a radionuclide transport model. The hydraulic model is used to determine the flow conditions in the stream channel and is based on the assumption of uniform flow and quasi-stationary conditions. The results from the hydraulic model are used in the radionuclide transport model, where the concentration is determined in the different parts of the stream ecosystem. The exchange processes considered are exchange with the sediments due to diffusion, advective transport and sedimentation/resuspension, and uptake of radionuclides in biota. Transport of both dissolved radionuclides and radionuclides sorbed onto particulates is considered. Sorption kinetics in the stream water phase is implemented, as the residence time in the stream water is probably short in comparison to the time scale of the kinetic sorption. In the sediment...
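    The uniform-flow assumption in such a hydraulic sub-model is typically expressed through Manning's equation (the standard relation, added here for orientation; the report's exact formulation may differ):

    \[ v \;=\; \frac{1}{n}\, R^{2/3}\, S^{1/2} \]

    where v is the mean flow velocity, n the Manning roughness coefficient, R the hydraulic radius and S the channel slope; together with the discharge Q = vA this fixes depth and velocity for each stream reach.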

  20. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code

    Directory of Open Access Journals (Sweden)

    Susanne Kunkel

    2017-06-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: it allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to the code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it were part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.

  1. COMPARISON OF METHODS FOR SIMULATING TSUNAMI RUN-UP THROUGH COASTAL FORESTS

    Directory of Open Access Journals (Sweden)

    Benazir

    2017-09-01

    The research is aimed at reviewing two numerical methods for modeling the effect of coastal forest on tsunami run-up and at proposing an alternative approach. The two existing methods, the Constant Roughness Model (CRM) and the Equivalent Roughness Model (ERM), simulate the effect of the forest by using an artificial Manning roughness coefficient. An alternative approach that simulates each of the trees as a vertical square column is introduced. Simulations were carried out with variations of forest density and layout pattern of the trees. The numerical model was validated using an existing data series of tsunami run-up without forest protection. The study indicated that the alternative method is in good agreement with the ERM method for low forest density. At higher density, and when the trees were planted in a zigzag pattern, the ERM produced significantly higher run-up. For a zigzag pattern at 50% forest density, which represents a watertight wall, both the ERM and CRM methods produced relatively high run-up, which should not happen theoretically. The alternative method, on the other hand, reflected the entire tsunami. In reality, a housing complex can be considered and simulated as a forest with various sizes and layouts of obstacles, where the alternative approach is applicable. The alternative method is more accurate than the existing methods for simulating a coastal forest for tsunami mitigation but consumes considerably more computational time.

  2. 1-D blood flow modelling in a running human body.

    Science.gov (United States)

    Szabó, Viktor; Halász, Gábor

    2017-07-01

    In this paper an attempt was made to simulate blood flow in a mobile human arterial network, specifically, in a running human subject. In order to simulate the effect of motion, a previously published immobile 1-D model was modified by including an inertial force term in the momentum equation. To calculate the inertial force, gait analysis was performed at different levels of speed. Our results show that motion has a significant effect on the amplitudes of the blood pressure and flow rate, but the average values are not affected significantly.
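    In generic 1-D blood flow models, the momentum equation for flow rate Q(x,t) in a vessel of cross-section A takes the form below; the last term is the kind of frame-acceleration (inertial) forcing described above. This is an illustrative form with a standard friction term, not necessarily the authors' exact equation:

    \[ \frac{\partial Q}{\partial t} + \frac{\partial}{\partial x}\!\left(\frac{Q^2}{A}\right) + \frac{A}{\rho}\,\frac{\partial p}{\partial x} \;=\; -\,K_R\,\frac{Q}{A} \;+\; A\, a_{\mathrm{ext}}(t) \]

    where K_R is a viscous resistance coefficient and a_ext(t) is the axial acceleration of the vessel segment obtained from gait analysis.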

  3. Machine-induced Background Simulation Studies for LHC Run 1, Run 2 and HL-LHC

    CERN Document Server

    Kwee-Hinzmann, Regina; Bruce, Roderik; Cerutti, Francesco; Esposito, Luigi Salvatore; Gibson, Stephen; Lechner, Anton; Garcia Morales, Hector; Yin Vallgren, Christina

    2017-01-01

    The study of machine-induced background in the experiments is vital for several reasons. Too much background can be an issue for operation, and the difficult part is to judge when exactly “too much” is reached. It is a complex topic, as experiments are directly or indirectly affected by conditions all around the LHC ring, e.g. collimation settings and vacuum quality. A detailed study of background can also help in understanding the machine better, identifying potential issues, and can be complemented by dedicated machine tests. Finally, such a study helps the experiments characterize machine background so that it is not counted as part of a new physics signal. This report summarises simulation results for three background sources: local beam-gas, beam-halo from the betatron collimation in IR7 and, for the first time, beam-halo from momentum collimation in IR3. Two of the most dominant sources, local beam-gas and betatron halo, have been systematically studied for LHC Run 1 and Run 2 cases, and ...

  4. Thermodynamical aspects of running vacuum models

    Energy Technology Data Exchange (ETDEWEB)

    Lima, J.A.S. [Universidade de Sao Paulo, Departamento de Astronomia, Sao Paulo (Brazil); Basilakos, Spyros [Academy of Athens, Research Center for Astronomy and Applied Mathematics, Athens (Greece); Sola, Joan [Univ. de Barcelona, High Energy Physics Group, Dept. d' Estructura i Constituents de la Materia, Institut de Ciencies del Cosmos (ICC), Barcelona, Catalonia (Spain)

    2016-04-15

    The thermal history of a large class of running vacuum models in which the effective cosmological term is described by a truncated power series of the Hubble rate, whose dominant term is Λ(H) ∝ H^(n+2), is discussed in detail. Specifically, by assuming that the ultrarelativistic particles produced by the vacuum decay emerge into space-time in such a way that their energy density scales as ρ_r ∝ T⁴, the temperature evolution law and the increasing entropy function are analytically calculated. For the whole class of vacuum models explored here we find that the primeval value of the comoving radiation entropy density (associated with effectively massless particles) starts from zero and evolves extremely fast until reaching a maximum near the end of the vacuum decay phase, where it saturates. The late-time conservation of the radiation entropy during the adiabatic FRW phase also guarantees that the whole class of running vacuum models predicts the same correct value of the present-day entropy, S₀ ~ 10⁸⁷-10⁸⁸ (in natural units), independently of the initial conditions. In addition, by assuming the Gibbons-Hawking temperature as an initial condition, we find that the ratio between the late-time and primordial vacuum energy densities is in agreement with naive estimates from quantum field theory, namely ρ_Λ0/ρ_ΛI ~ 10⁻¹²³. Such results are independent of the power n and suggest that the observed Universe may evolve smoothly between two extreme, unstable, non-singular de Sitter phases. (orig.)

  5. Coupling methods for parallel running RELAPSim codes in nuclear power plant simulation

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yankai; Lin, Meng, E-mail: linmeng@sjtu.edu.cn; Yang, Yanhua

    2016-02-15

    When the plant is modeled in detail for high precision, it is hard to achieve real-time calculation with a single RELAP5 process in a large-scale simulation. To improve the speed while preserving the precision of the simulation, coupling methods for parallel running RELAPSim codes were proposed in this study. An explicit coupling method via coupling boundaries was realized on top of a data-exchange and procedure-control environment. The synchronization frequency was chosen as a compromise between the precision of the simulation and the real-time requirement. The coupling methods were assessed using both single-phase and two-phase flow models, and good agreement was obtained between the splitting-coupling models and the integrated model. The mitigation of a steam generator tube rupture (SGTR) was simulated as an integral application of the coupling models. A large-scope NPP simulator was developed adopting six splitting-coupling models of RELAPSim and other simulation codes. The coupling models improved the speed of simulation significantly and made real-time calculation possible. In this paper, the coupling of the models in the engineering simulator is taken as an example to expound the coupling methods, i.e., coupling between parallel running RELAPSim codes, and coupling between RELAPSim code and other types of simulation codes. However, the coupling methods are also applicable to other simulators, for example one employing ATHLET instead of RELAP5, or another logic code instead of Simulink. It is believed the coupling method is generally applicable to NPP simulators regardless of the specific codes chosen in this paper.
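    The explicit coupling pattern can be sketched in a few lines: each solver advances independently, and boundary values are swapped at a fixed synchronization interval. The toy solver below is a stand-in used only to make the sketch self-contained; it is not RELAP5 or the paper's code.

```python
class ToySolver:
    """Stand-in for a thermal-hydraulic code: one state per domain,
    relaxing toward the boundary value it last received."""
    def __init__(self, state):
        self.state, self.bc = state, 0.0
    def advance(self, dt):
        self.state += dt * (self.bc - self.state)
    def boundary(self):
        return self.state

def run_coupled(a, b, dt=0.1, n_steps=100, sync_every=10):
    for step in range(n_steps):
        a.advance(dt)                    # both domains step in parallel
        b.advance(dt)
        if step % sync_every == 0:       # explicit, lagged boundary exchange
            a_out, b_out = a.boundary(), b.boundary()
            a.bc, b.bc = b_out, a_out
    return a.state, b.state

print(run_coupled(ToySolver(1.0), ToySolver(0.0)))
```

    A larger `sync_every` reduces communication overhead at the cost of a larger lag error, which is the trade-off the paper's synchronization-frequency study addresses.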

  6. How to help CERN to run more simulations

    CERN Multimedia

    The LHC@home team

    2016-01-01

    With LHC@home you can actively contribute to the computing capacity of the Laboratory!   You may think that CERN's large Data Centre and the Worldwide LHC Computing Grid have enough computing capacity for all the Laboratory’s users. However, given the massive amount of data coming from LHC experiments and other sources, additional computing resources are always needed, notably for simulations of physics events, or accelerator and detector upgrades. This is an area where you can help, by installing BOINC and running simulations from LHC@home on your office PC or laptop. These background simulations will not disturb your work, as BOINC can be configured to automatically stop computing when your PC is in use. As mentioned in earlier editions of the Bulletin (see here and here), contributions from LHC@home volunteers have played a major role in LHC beam simulation studies. The computing capacity they made available corresponds to about half the capacity of the CERN...

  7. Simulating three dimensional wave run-up over breakwaters covered by antifer units

    Science.gov (United States)

    Najafi-Jilani, A.; Niri, M. Zakiri; Naderi, Nader

    2014-06-01

    The paper presents a numerical analysis of wave run-up over rubble-mound breakwaters covered by antifer units, using a technique integrating Computer-Aided Design (CAD) and Computational Fluid Dynamics (CFD) software. Direct application of the Navier-Stokes equations within the armour blocks is used to provide a more reliable approach to simulating wave run-up over breakwaters. A well-tested Reynolds-averaged Navier-Stokes (RANS) Volume of Fluid (VOF) code (Flow-3D) was adopted for the CFD computations. The computed results were compared with experimental data to check the validity of the model. Numerical results showed that the direct three-dimensional (3D) simulation method can deliver accurate results for wave run-up over rubble mound breakwaters. The results also showed that the placement pattern of the antifer units has a great impact on wave run-up: changing the placement pattern from regular to double pyramid reduced the wave run-up by approximately 30%. Analysis was carried out to investigate the influences of surface roughness, energy dissipation in the pores of the armour layer, and the reduction of wave run-up due to inflow into the armour and stone layers.

  8. Simulated tsunami run-up amplification factors around Penang Island for preliminary risk assessment

    Science.gov (United States)

    Lim, Yong Hui; Kh'ng, Xin Yi; Teh, Su Yean; Koh, Hock Lye; Tan, Wai Kiat

    2017-08-01

    The Andaman mega-tsunami that struck Malaysia on 26 December 2004 affected 200 kilometers of northwest Peninsular Malaysia coastline from Perlis to Selangor. It is anticipated by the tsunami scientific community that the next mega-tsunami could occur at any time. This rare catastrophic event has prompted the Malaysian government to take appropriate risk-reduction measures, including timely and orderly evacuation. To effectively evacuate citizens to safe ground or the nearest designated emergency shelter, a well-prepared evacuation route is essential, with the estimated tsunami run-up heights and inundation distances on land clearly indicated on the evacuation map. The run-up heights and inundation distances are simulated by an in-house model, 2-D TUNA-RP, based upon credible scientific tsunami source scenarios derived from tectonic activity around the region. To provide a useful tool for estimating the run-up heights along the entire coast of Penang Island, we compute tsunami amplification factors based upon 2-D TUNA-RP model simulations in this paper. The inundation map and run-up amplification factors in six domains along the entire coastline of Penang Island are provided. The comparison between measured tsunami wave heights for the 2004 Andaman tsunami and TUNA-RP model simulated values demonstrates good agreement.

  9. Experience gained in running the EPRI MMS code with an in-house simulation language

    International Nuclear Information System (INIS)

    Weber, D.S.

    1987-01-01

    The EPRI Modular Modeling System (MMS) code represents a collection of component models and a steam/water properties package. This code has undergone extensive verification and validation testing. Currently, the code requires a commercially available simulation language to run. The Philadelphia Electric Company (PECO) has been modeling power plant systems for over sixteen years. As a result, a large number of models have been developed, along with extensive experience in using an in-house simulation language. The objective of this study was to explore the possibility of developing an MMS pre-processor which would allow the use of the MMS package with other simulation languages, such as the PECO in-house simulation language.

  10. AEGIS geologic simulation model

    International Nuclear Information System (INIS)

    Foley, M.G.

    1982-01-01

    The Geologic Simulation Model (GSM) is used by the AEGIS (Assessment of Effectiveness of Geologic Isolation Systems) program at the Pacific Northwest Laboratory to simulate the dynamic geology and hydrology of a geologic nuclear waste repository site over a million-year period following repository closure. The GSM helps to organize geologic/hydrologic data; to focus attention on active natural processes by requiring their simulation; and, through interactive simulation and calibration, to reduce subjective evaluations of the geologic system. During each computer run, the GSM produces a million-year geologic history that is possible for the region and the repository site. In addition, the GSM records in permanent history files everything that occurred during that time span. Statistical analyses of data in the history files of several hundred simulations are used to classify typical evolutionary paths, to establish the probabilities associated with deviations from the typical paths, and to determine which types of perturbations of the geologic/hydrologic system, if any, are most likely to occur. These simulations will be evaluated by geologists familiar with the repository region to determine the validity of the results. Perturbed systems that are determined to be the most realistic, within whatever probability limits are established, will be used for the analyses that involve radionuclide transport and dose models. The GSM is designed to be continuously refined and updated. Simulation models are site specific and, although the submodels may have limited general applicability, the input data requirements necessitate detailed characterization of each site before application.

  11. Simulating Ideal Assistive Devices to Reduce the Metabolic Cost of Running.

    Directory of Open Access Journals (Sweden)

    Thomas K Uchida

    Tools have been used for millions of years to augment the capabilities of the human body, allowing us to accomplish tasks that would otherwise be difficult or impossible. Powered exoskeletons and other assistive devices are sophisticated modern tools that have restored bipedal locomotion in individuals with paraplegia and have endowed unimpaired individuals with superhuman strength. Despite these successes, designing assistive devices that reduce energy consumption during running remains a substantial challenge, in part because these devices disrupt the dynamics of a complex, finely tuned biological system. Furthermore, designers have hitherto relied primarily on experiments, which cannot report muscle-level energy consumption and are fraught with practical challenges. In this study, we use OpenSim to generate muscle-driven simulations of 10 human subjects running at 2 and 5 m/s. We then add ideal, massless assistive devices to our simulations and examine the predicted changes in muscle recruitment patterns and metabolic power consumption. Our simulations suggest that an assistive device should not necessarily apply the net joint moment generated by muscles during unassisted running, and an assistive device can reduce the activity of muscles that do not cross the assisted joint. Our results corroborate and suggest biomechanical explanations for similar effects observed by experimentalists, and can be used to form hypotheses for future experimental studies. The models, simulations, and software used in this study are freely available at simtk.org and can provide insight into assistive device design that complements experimental approaches.

  12. Long-run properties of some Danish macroeconometric models

    DEFF Research Database (Denmark)

    Harck, Søren H.

    This paper provides an analytical treatment of various long-run aspects of the MONA model as well as the SMEC model of the Danish economy. More specifically, the analysis lays bare the long-run and steady-state nexus between unemployment and, respectively, inflation and the wage share implied by ...

  13. Modelling surface run-off and trends analysis over India

    Indian Academy of Sciences (India)

    ... responsible for run-off generation plays a major role in run-off modelling at regional scales. Remote sensing, GIS and advances in computer technology enable evaluation of land surface properties at spatial and temporal scales, providing very useful input data for hydrological models. Using remote sensing data is not only ...

  14. Dynamics of the in-run in ski jumping: a simulation study.

    Science.gov (United States)

    Ettema, Gertjan J C; Bråten, Steinar; Bobbert, Maarten F

    2005-08-01

    A ski jumper tries to maintain an aerodynamic position in the in-run during changing environmental forces. The purpose of this study was to analyze the mechanical demands on a ski jumper taking the in-run in a static position. We simulated the in-run in ski jumping with a 4-segment forward dynamic model (foot, leg, thigh, and upper body). The curved path of the in-run was used as a kinematic constraint, and drag, lift, and snow friction were incorporated. Drag and snow friction created a forward rotating moment that had to be counteracted by a plantar flexion moment and caused the line of action of the normal force to pass anteriorly to the center of mass continuously. The normal force increased from 0.88 G on the first straight to 1.65 G in the curve. The required knee joint moment increased more because of an altered center of pressure. During the transition from the straight to the curve there was a rapid forward shift of the center of pressure under the foot, reflecting a short but high angular acceleration. Because unrealistically high rates of change of moment are required, an athlete cannot do this without changing body configuration, which reduces the required rate of moment changes.
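    For orientation, the quoted normal-force values follow from the standard circular-motion relation on a curved track (added here as a gloss, not taken from the paper):

    \[ \frac{N}{mg} \;=\; \cos\theta \;+\; \frac{v^2}{g\,R} \]

    where θ is the local track inclination, v the skier's speed and R the local radius of curvature. On the straight sections the centripetal term vanishes, giving N ≈ mg cos θ below 1 G, while in the transition curve it raises N well above 1 G.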

  15. Fast Running Urban Dispersion Model for Radiological Dispersal Device (RDD) Releases: Model Description and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Gowardhan, Akshay [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Neuscamman, Stephanie [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Donetti, John [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Walker, Hoyt [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Belles, Rich [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Eme, Bill [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Homann, Steven [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Simpson, Matthew [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Nasstrom, John [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC)

    2017-05-24

    Aeolus is an efficient three-dimensional computational fluid dynamics code based on the finite volume method, developed for predicting transport and dispersion of contaminants in a complex urban area. It solves the time-dependent incompressible Navier-Stokes equations on a regular Cartesian staggered grid using a fractional step method. It also solves a scalar transport equation for temperature, using the Boussinesq approximation for buoyancy. The model includes a Lagrangian dispersion model for predicting the transport and dispersion of atmospheric contaminants. The model can be run in an efficient Reynolds-Averaged Navier-Stokes (RANS) mode with a run time of several minutes, or in a more detailed Large Eddy Simulation (LES) mode with a run time of hours for a typical simulation. This report describes the model components, including details on the physics models used in the code, as well as several model validation efforts. Aeolus wind and dispersion predictions are compared to field data from the Joint Urban Field Trials 2003 conducted in Oklahoma City (Allwine et al 2004), including both continuous and instantaneous releases. Newly implemented Aeolus capabilities, including a decay chain model and an explosive Radiological Dispersal Device (RDD) source term, are described. Aeolus predictions using the buoyant explosive RDD source are validated against two experimental data sets: the Green Field explosive cloud rise experiments conducted in Israel (Sharon et al 2012) and the Full-Scale RDD Field Trials conducted in Canada (Green et al 2016).
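    A fractional step (projection) scheme of the kind referred to above classically advances the velocity field in two stages, with a pressure Poisson solve enforcing incompressibility (the generic Chorin form is shown; Aeolus's exact discretization may differ):

    \[ \mathbf{u}^{*} = \mathbf{u}^{n} + \Delta t \left[ -(\mathbf{u}^{n}\!\cdot\!\nabla)\mathbf{u}^{n} + \nu \nabla^{2}\mathbf{u}^{n} \right], \qquad \nabla^{2} p^{\,n+1} = \frac{\rho}{\Delta t}\,\nabla\!\cdot\!\mathbf{u}^{*}, \qquad \mathbf{u}^{n+1} = \mathbf{u}^{*} - \frac{\Delta t}{\rho}\,\nabla p^{\,n+1} \]

    The projection step guarantees a divergence-free velocity at the new time level, which is what makes the method efficient on the staggered Cartesian grids mentioned in the abstract.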

  16. Development of a fast running accident analysis computer program for use in a simulator

    International Nuclear Information System (INIS)

    Cacciabue, P.C.

    1985-01-01

    This paper describes how a nuclear reactor safety computer program can be modified and improved with the aim of producing a very fast running tool to be used as a physical model in a plant simulator, without penalizing the accuracy of results. It also discusses some ideas on how the physical theoretical model can be combined with a driving statistical tool to build up the entire software package to be implemented in the simulator for risk and reliability analysis. The approach to the problem, although applied to a specific computer program, can be considered quite general if an already existing and well-tested code is being used for the purpose. The computer program considered is ALMOD, originally developed for the analysis of the thermohydraulic and neutronic behaviour of the reactor core, primary circuit and steam generator during operational and special transients. (author)

  17. Computing Models of CDF and D0 in Run II

    International Nuclear Information System (INIS)

    Lammel, S.

    1997-05-01

    The next collider run of the Fermilab Tevatron, Run II, is scheduled for autumn of 1999. Both experiments, the Collider Detector at Fermilab (CDF) and the D0 experiment, are being modified to cope with the higher luminosity and shorter bunch spacing of the Tevatron. New detector components, higher event complexity, and an increased data volume require changes from the data acquisition systems up to the analysis systems. In this paper we present a summary of the computing models of the two experiments for Run II.

  18. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    Science.gov (United States)

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach of using the Graphics Processing Unit (GPU) to accelerate the rescaling of single Monte Carlo runs, allowing diffuse reflectance values to be calculated rapidly for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory is currently a limiting factor for GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach can still lead to processing that is ~3400 times faster than other GPU-based approaches.
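    The single-run rescaling idea rests on a standard trick: simulate photon paths once with zero (or baseline) absorption, store each detected photon's total path length, and then apply Beer-Lambert weighting analytically for any absorption coefficient. A minimal CPU-side sketch of that rescaling step (an illustration of the technique, not the paper's MATLAB/CUDA code):

```python
import numpy as np

def rescale_reflectance(path_lengths_cm, mua_per_cm):
    """Reweight photons detected in an absorption-free baseline run:
    each photon's weight becomes exp(-mua * total_path_length).
    Divide the sum by the number of launched photons (not shown here)
    to obtain absolute diffuse reflectance."""
    weights = np.exp(-mua_per_cm * np.asarray(path_lengths_cm))
    return weights.sum()

# Stand-in data: detected-photon path lengths from one baseline run.
paths = np.random.exponential(scale=1.0, size=100_000)
print(rescale_reflectance(paths, mua_per_cm=0.1))
```

    Because only the exponential reweighting has to be repeated for each optical-property set, it maps naturally onto a GPU and avoids re-running the full photon transport.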

  1. Running of the Scalar Spectral Index from Inflationary Models

    CERN Document Server

    Chung, Daniel J.H.; Shiu, Gary; Trodden, Mark

    2003-01-01

    The scalar spectral index n is an important parameter describing the nature of primordial density perturbations. Recent data, including that from the WMAP satellite, show some evidence that the index runs (changes as a function of the scale k at which it is measured) from n>1 (blue) on long scales to n<1 (red) on short scales. We investigate the extent to which inflationary models can accommodate such significant running of n. We present several methods for constructing large classes of potentials which yield a running spectral index. We show that, within the slow-roll approximation, the fact that n-1 changes sign from blue to red forces the slope of the potential to reach a minimum at a similar field location. We also briefly survey the running of the index in a wider class of inflationary models, including a subset of those with non-minimal kinetic terms.
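    For reference, the spectral index and its running are conventionally defined from the curvature power spectrum (standard definitions, not specific to this paper):

    \[ n(k) - 1 \;\equiv\; \frac{d\ln \mathcal{P}_{\mathcal{R}}(k)}{d\ln k}, \qquad n(k) \;\simeq\; n(k_0) + \frac{dn}{d\ln k}\,\ln\frac{k}{k_0} \]

    so a blue-to-red transition means n(k) crosses 1 as k increases, requiring dn/d ln k < 0 around the pivot scale k₀.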

  2. Distributed Simulation Run Automation Capability for the Navy PRA Testbed

    National Research Council Canada - National Science Library

    Trbovich, Sarah; Reading, Richard; Photinos, William; Malone, Shala

    2007-01-01

    The Navy Probability of Raid Annihilation (PRA) Testbed implements HLA federated simulations of ship combat system elements against independent, reactive threat raids in a common environment to formulate an overall combat system assessment...

  3. Numerical Modelling of Wave Run-Up: Regular Waves

    DEFF Research Database (Denmark)

    Ramirez, Jorge; Frigaard, Peter; Andersen, Thomas Lykke

    2011-01-01

    Wave loads are important in problems related to offshore structures, such as wave run-up and slamming. The computation of such wave problems is carried out by CFD models. This paper presents one model, NS3, which solves the 3D Navier-Stokes equations and uses the Volume of Fluid (VOF) method to treat the free...

  4. Simulations of flow and prediction of sediment movement in Wymans Run, Cochranton Borough, Crawford County, Pennsylvania

    Science.gov (United States)

    Hittle, Elizabeth

    2011-01-01

    In small watersheds, runoff entering local waterways from large storms can cause rapid and profound changes in the streambed that can contribute to flooding. Wymans Run, a small stream in Cochranton Borough, Crawford County, experienced a large rain event in June 2008 that caused sediment to be deposited at a bridge. A hydrodynamic model, Flow and Sediment Transport and Morphological Evolution of Channels (FaSTMECH), which is incorporated into the U.S. Geological Survey Multi-Dimensional Surface-Water Modeling System (MD_SWMS), was constructed to predict boundary shear stress and velocity in Wymans Run using data from the June 2008 event. Shear stress and velocity values can indicate areas of a stream where sediment transported downstream can be deposited on the streambed. Because of the short duration of the June 2008 rain event, streamflow was not directly measured but was estimated using the U.S. Army Corps of Engineers one-dimensional Hydrologic Engineering Center's River Analysis System (HEC-RAS). Scenarios examining possible engineering solutions to decrease the amount of sediment at the bridge, including bridge expansion, channel expansion, and dredging upstream from the bridge, were simulated using the FaSTMECH model. Each scenario was evaluated for potential effects on water-surface elevation, boundary shear stress, and velocity.

  5. Short Trail Running Race: Beyond the Classic Model for Endurance Running Performance.

    Science.gov (United States)

    Ehrström, Sabine; Tartaruga, Marcus P; Easthope, Christopher S; Brisswalter, Jeanick; Morin, Jean-Benoit; Vercruyssen, Fabrice

    2018-03-01

    This study aimed to examine the extent to which the classical physiological variables of endurance running performance (maximal oxygen uptake (V̇O2max), %V̇O2max at ventilatory threshold (VT), and running economy (RE)), but also muscle strength factors, contribute to short trail running (TR) performance. A homogeneous group of nine highly trained trail runners performed an official TR race (27 km) and laboratory-based sessions to determine V̇O2max, %V̇O2max at VT, level RE (RE0%) and RE on a +10% slope, maximal voluntary concentric and eccentric knee extension torques, local endurance assessed by a fatigue index (FI), and a time to exhaustion at 87.5% of the velocity associated with V̇O2max. A simple regression method and commonality analysis identifying unique and common coefficients of each independent variable were used to determine the best predictors of the TR race time (dependent variable). Pearson correlations showed that FI and V̇O2max had the highest correlations (r = 0.91 and r = -0.76, respectively) with TR performance. The other selected variables were not significantly correlated with TR performance. The analysis of unique and common coefficients of relative V̇O2max, %V̇O2max at VT, and RE0% provides a poor prediction of TR performance (R = 0.48). However, adding FI and RE on a +10% slope (instead of RE0%) markedly improved the predictive power of the model (R = 0.98). FI and V̇O2max showed the highest unique (49.8% and 20.4% of total effect, respectively) and common (26.9% of total effect) contributions to the regression equation. The classic endurance running model does not allow for meaningful prediction of short TR performance. Incorporating factors more specific to TR, such as local endurance and gradient-specific RE testing procedures, should be considered to better characterize short TR performance.

  6. EMC Simulation and Modeling

    Science.gov (United States)

    Takahashi, Takehiro; Schibuya, Noboru

    EMC simulation is now widely used in the design stage of electronic equipment to reduce electromagnetic noise. As the electromagnetic behavior calculated by the EMC simulator depends on the EMC model of the equipment supplied as input, the modeling technique is important for obtaining effective results. In this paper, a simple outline of the EMC simulator and the EMC model is given. Some modeling techniques for EMC simulation are also described, with an example EMC model of a shielded box with an aperture.

  7. PEP Run Report for Simulant Shakedown/Functional Testing

    Energy Technology Data Exchange (ETDEWEB)

    Josephson, Gary B.; Geeting, John GH; Bredt, Ofelia P.; Burns, Carolyn A.; Golovich, Elizabeth C.; Guzman-Leong, Consuelo E.; Kurath, Dean E.; Sevigny, Gary J.

    2009-12-29

    Pacific Northwest National Laboratory (PNNL) has been tasked by Bechtel National Inc. (BNI) on the River Protection Project-Waste Treatment Plant (RPP-WTP) project to perform research and development activities to resolve technical issues identified for the Pretreatment Facility (PTF). The Pretreatment Engineering Platform (PEP) was designed, constructed, and operated as part of a plan to respond to issue M12, "Undemonstrated Leaching Processes." The PEP is a 1/4.5-scale test platform designed to simulate the WTP pretreatment caustic leaching, oxidative leaching, ultrafiltration solids concentration, and slurry washing processes. The PEP replicates the WTP leaching processes using prototypic equipment and control strategies. The PEP also includes non-prototypic ancillary equipment to support the core processing. Two operating scenarios are currently being evaluated for the ultrafiltration process (UFP) and leaching operations. The first scenario has caustic leaching performed in the UFP-2 ultrafiltration feed vessels (i.e., vessel UFP-VSL-T02A in the PEP; and vessels UFP-VSL-00002A and B in the WTP PTF). The second scenario has caustic leaching conducted in the UFP-1 ultrafiltration feed preparation vessels (i.e., vessels UFP-VSL-T01A and B in the PEP; vessels UFP-VSL-00001A and B in the WTP PTF). In both scenarios, 19-M sodium hydroxide solution (NaOH, caustic) is added to the waste slurry in the vessels to leach solid aluminum compounds (e.g., gibbsite, boehmite). Caustic addition is followed by a heating step that uses direct injection of steam to accelerate the leach process. Following the caustic leach, the vessel contents are cooled using vessel cooling jackets and/or external heat exchangers. The main difference between the two scenarios is that for leaching in UFP-1, the 19-M NaOH is added to un-concentrated waste slurry (3-8 wt% solids), while for leaching in UFP-2, the slurry is concentrated to nominally 20 wt% solids using cross-flow ultrafiltration

  8. Comparison of minimalist footwear strategies for simulating barefoot running: a randomized crossover study.

    Directory of Open Access Journals (Sweden)

    Karsten Hollander

    Possible benefits of barefoot running have been widely discussed in recent years. Uncertainty exists about which footwear strategy adequately simulates barefoot running kinematics. The objective of this study was to investigate the effects of athletic footwear with different minimalist strategies on running kinematics. Thirty-five distance runners (22 males, 13 females, 27.9 ± 6.2 years, 179.2 ± 8.4 cm, 73.4 ± 12.1 kg, 24.9 ± 10.9 km x week(-1)) performed a treadmill protocol at three running velocities (2.22, 2.78 and 3.33 m x s(-1)) using four footwear conditions: barefoot, uncushioned minimalist shoes, cushioned minimalist shoes, and standard running shoes. 3D kinematic analysis was performed to determine ankle and knee angles at initial foot-ground contact, rate of rear-foot strikes, stride frequency and step length. Ankle angle at foot strike, step length and stride frequency were significantly influenced by footwear conditions (p<0.001) at all running velocities. Post-hoc pairwise comparisons showed significant differences (p<0.001) between running barefoot and all shod situations as well as between the uncushioned minimalist shoe and both cushioned shoe conditions. The rate of rear-foot strikes was lowest during barefoot running (58.6% at 3.33 m x s(-1)), followed by running with uncushioned minimalist shoes (62.9%), cushioned minimalist (88.6%) and standard shoes (94.3%). Aside from showing the influence of shod conditions on running kinematics, this study helps to elucidate differences between footwear marketed as minimalist shoes and their ability to mimic barefoot running adequately. These findings have implications for the use of footwear applied in future research debating the topic of barefoot or minimalist shoe running.

  9. Arbitrary Symmetric Running Gait Generation for an Underactuated Biped Model.

    Directory of Open Access Journals (Sweden)

    Behnam Dadashzadeh

    This paper investigates generating symmetric trajectories for an underactuated biped during the stance phase of running. We use a point mass biped (PMB) model for gait analysis that consists of a prismatic force actuator on a massless leg. The significance of this model is its ability to generate more general and versatile running gaits than the spring-loaded inverted pendulum (SLIP) model, making it more suitable as a template for real robots. The algorithm plans the necessary leg actuator force to cause the robot center of mass to undergo arbitrary trajectories in stance with any arbitrary attack angle and velocity angle. The necessary actuator forces follow from the inverse kinematics and dynamics. Then these calculated forces become the control input to the dynamic model. We compare various center-of-mass trajectories, including a circular arc and polynomials of the degrees 2, 4 and 6. The cost of transport and maximum leg force are calculated for various attack angles and velocity angles. The results show that choosing the velocity angle as small as possible is beneficial, but the angle of attack has an optimum value. We also find a new result: there exist biped running gaits with double-hump ground reaction force profiles which result in less maximum leg force than single-hump profiles.

  10. Arbitrary Symmetric Running Gait Generation for an Underactuated Biped Model.

    Science.gov (United States)

    Dadashzadeh, Behnam; Esmaeili, Mohammad; Macnab, Chris

    2017-01-01

    This paper investigates generating symmetric trajectories for an underactuated biped during the stance phase of running. We use a point mass biped (PMB) model for gait analysis that consists of a prismatic force actuator on a massless leg. The significance of this model is its ability to generate more general and versatile running gaits than the spring-loaded inverted pendulum (SLIP) model, making it more suitable as a template for real robots. The algorithm plans the necessary leg actuator force to cause the robot center of mass to undergo arbitrary trajectories in stance with any arbitrary attack angle and velocity angle. The necessary actuator forces follow from the inverse kinematics and dynamics. Then these calculated forces become the control input to the dynamic model. We compare various center-of-mass trajectories, including a circular arc and polynomials of the degrees 2, 4 and 6. The cost of transport and maximum leg force are calculated for various attack angles and velocity angles. The results show that choosing the velocity angle as small as possible is beneficial, but the angle of attack has an optimum value. We also find a new result: there exist biped running gaits with double-hump ground reaction force profiles which result in less maximum leg force than single-hump profiles.
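
    For a planar point-mass model, the planning step described above reduces to inverse dynamics along the prescribed stance trajectory: the ground reaction force follows from Newton's second law, and for a massless leg it must act along the leg. A rough numerical sketch under these simplified assumptions (all numbers illustrative, not the authors' implementation):

```python
import numpy as np

m, g = 60.0, 9.81            # body mass [kg], gravity [m/s^2]
dt = 1e-3
t = np.arange(0.0, 0.3, dt)  # stance duration [s], illustrative

# Prescribed CoM trajectory during stance (foot at origin, illustrative arc)
x = -0.15 + 1.0 * t                 # horizontal position [m]
y = 0.95 + 2.0 * (t - 0.15)**2      # vertical position [m], compresses mid-stance

# Accelerations by finite differences
ax = np.gradient(np.gradient(x, dt), dt)
ay = np.gradient(np.gradient(y, dt), dt)

# Required ground reaction force from Newton's second law
Fx = m * ax
Fy = m * (ay + g)

# For a massless leg the force must act along the leg (foot -> CoM);
# the axial actuator force is the projection onto the leg direction.
leg_len = np.hypot(x, y)
F_leg = (Fx * x + Fy * y) / leg_len

print(f"peak axial leg force: {F_leg.max():.0f} N "
      f"({F_leg.max() / (m * g):.1f} body weights)")
```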

  11. New Constraints on the running-mass inflation model

    OpenAIRE

    Covi, Laura; Lyth, David H.; Melchiorri, Alessandro

    2002-01-01

    We evaluate new observational constraints on the two-parameter scale-dependent spectral index predicted by the running-mass inflation model by combining the latest Cosmic Microwave Background (CMB) anisotropy measurements with the recent 2dFGRS data on the matter power spectrum, with Lyman α forest data and finally with theoretical constraints on the reionization redshift. We find that present data still allow significant scale-dependence of n, which occurs in a physically reasonabl...

  12. Black hole constraints on the running-mass inflation model

    OpenAIRE

    Leach, Samuel M; Grivell, Ian J; Liddle, Andrew R

    2000-01-01

    The running-mass inflation model, which has strong motivation from particle physics, predicts density perturbations whose spectral index is strongly scale-dependent. For a large part of parameter space the spectrum rises sharply to short scales. In this paper we compute the production of primordial black holes, using both analytic and numerical calculation of the density perturbation spectra. Observational constraints from black hole production are shown to exclude a large region of otherwise...

  13. The running-mass inflation model and WMAP

    OpenAIRE

    Covi, Laura; Lyth, David H.; Melchiorri, Alessandro; Odman, Carolina J.

    2004-01-01

    We consider the observational constraints on the running-mass inflationary model, and in particular on the scale-dependence of the spectral index, from the new Cosmic Microwave Background (CMB) anisotropy measurements performed by WMAP and from new clustering data from the SLOAN survey. We find that the data strongly constrain a significant positive scale-dependence of n, and we translate the analysis into bounds on the physical parameters of the inflaton potential. Looking deeper into sp...

  14. Simulation modeling and arena

    CERN Document Server

    Rossetti, Manuel D

    2015-01-01

    Emphasizes a hands-on approach to learning statistical analysis and model building through the use of comprehensive examples, problem sets, and software applications. With a unique blend of theory and applications, Simulation Modeling and Arena®, Second Edition integrates coverage of statistical analysis and model building to emphasize the importance of both topics in simulation. Featuring introductory coverage on how simulation works and why it matters, the Second Edition expands coverage on static simulation and the applications of spreadsheets to perform simulation.

  15. 3D Finite Element Simulation of Micro End-Milling by Considering the Effect of Tool Run-Out

    DEFF Research Database (Denmark)

    Davoudinejad, Ali; Tosello, Guido; Parenti, Paolo

    2017-01-01

    Understanding the micro milling phenomena involved in the process is critical and difficult through physical experiments. This study presents a 3D finite element modeling (3D FEM) approach for the micro end-milling process on Al6082-T6. The proposed model employs a Lagrangian explicit finite element formulation to perform coupled thermo-mechanical transient analyses. FE simulations were performed at different cutting conditions to obtain realistic numerical predictions of chip formation, temperature distribution, and cutting forces by considering the effect of tool run-out in the model. The radial run-out is a significant issue in micro milling processes and influences the cutting stability due to chip load and force variations. The Johnson–Cook (JC) material constitutive model was applied and its constants were determined by an inverse method based on the experimental cutting forces.
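
    For reference, the Johnson–Cook flow stress referred to above has the standard multiplicative form; the constants A, B, C, n and m are the ones identified inversely from measured cutting forces (ε is the plastic strain, ε̇ the strain rate, and T* the homologous temperature):

```latex
\sigma = \left(A + B\varepsilon^{n}\right)
         \left(1 + C\ln\frac{\dot{\varepsilon}}{\dot{\varepsilon}_{0}}\right)
         \left(1 - T^{*\,m}\right),
\qquad
T^{*} = \frac{T - T_{\mathrm{room}}}{T_{\mathrm{melt}} - T_{\mathrm{room}}}
```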

  16. Diagnostic Value of Run Chart Analysis: Using Likelihood Ratios to Compare Run Chart Rules on Simulated Data Series

    OpenAIRE

    Anhøj, Jacob

    2015-01-01

    Run charts are widely used in healthcare improvement, but there is little consensus on how to interpret them. The primary aim of this study was to evaluate and compare the diagnostic properties of different sets of run chart rules. A run chart is a line graph of a quality measure over time. The main purpose of the run chart is to detect process improvement or process degradation, which will turn up as non-random patterns in the distribution of data points around the median. Non-random variati...

  17. Safety evaluation of the ITP filter/stripper test runs and quiet time runs using simulant solution. Revision 3

    International Nuclear Information System (INIS)

    Gupta, M.K.

    1994-06-01

    The purpose is to provide the technical bases for the evaluation of Unreviewed Safety Question for the In-Tank Precipitation (ITP) Filter/Stripper Test Runs (Ref. 7) and Quiet Time Runs Program (described in Section 3.6). The Filter/Stripper Test Runs and Quiet Time Runs program involves a 12,000 gallon feed tank containing an agitator, a 4,000 gallon flush tank, a variable speed pump, associated piping and controls, and equipment within both the Filter and the Stripper Building

  18. The Effective Standard Model after LHC Run I

    CERN Document Server

    Ellis, John; You, Tevong

    2015-01-01

    We treat the Standard Model as the low-energy limit of an effective field theory that incorporates higher-dimensional operators to capture the effects of decoupled new physics. We consider the constraints imposed on the coefficients of dimension-6 operators by electroweak precision tests (EWPTs), applying a framework for the effects of dimension-6 operators on electroweak precision tests that is more general than the standard S, T formalism, and use measurements of Higgs couplings and the kinematics of associated Higgs production at the Tevatron and LHC, as well as triple-gauge couplings at the LHC. We highlight the complementarity between EWPTs, Tevatron and LHC measurements in obtaining model-independent limits on the effective Standard Model after LHC Run 1. We illustrate the combined constraints with the example of the two-Higgs doublet model.

  19. The effective Standard Model after LHC Run I

    International Nuclear Information System (INIS)

    Ellis, John; Sanz, Verónica; You, Tevong

    2015-01-01

    We treat the Standard Model as the low-energy limit of an effective field theory that incorporates higher-dimensional operators to capture the effects of decoupled new physics. We consider the constraints imposed on the coefficients of dimension-6 operators by electroweak precision tests (EWPTs), applying a framework for the effects of dimension-6 operators on electroweak precision tests that is more general than the standard S,T formalism, and use measurements of Higgs couplings and the kinematics of associated Higgs production at the Tevatron and LHC, as well as triple-gauge couplings at the LHC. We highlight the complementarity between EWPTs, Tevatron and LHC measurements in obtaining model-independent limits on the effective Standard Model after LHC Run 1. We illustrate the combined constraints with the example of the two-Higgs doublet model.
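
    Schematically, the effective field theory being constrained in both records extends the Standard Model Lagrangian by dimension-6 operators suppressed by the new-physics scale Λ (a standard textbook form, not specific to this paper's operator basis):

```latex
\mathcal{L}_{\mathrm{eff}} = \mathcal{L}_{\mathrm{SM}}
  + \sum_{i} \frac{c_{i}}{\Lambda^{2}}\,\mathcal{O}_{i}^{(6)} + \cdots
```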

  20. Simulation in Complex Modelling

    DEFF Research Database (Denmark)

    Nicholas, Paul; Ramsgaard Thomsen, Mette; Tamke, Martin

    2017-01-01

    This paper will discuss the role of simulation in extended architectural design modelling. As a framing paper, the aim is to present and discuss the role of integrated design simulation and feedback between design and simulation in a series of projects under the Complex Modelling framework. Complex...... Restraint developed for the research exhibition Complex Modelling, Meldahls Smedie Gallery, Copenhagen in 2016. Where the direct project aims and outcomes have been reported elsewhere, the aim for this paper is to discuss overarching strategies for working with design integrated simulation....

  1. The Effects of a Duathlon Simulation on Ventilatory Threshold and Running Economy

    Directory of Open Access Journals (Sweden)

    Nathaniel T. Berry, Laurie Wideman, Edgar W. Shields, Claudio L. Battaglini

    2016-06-01

    Multisport events continue to grow in popularity among recreational, amateur, and professional athletes around the world. This study aimed to determine the compounding effects of the initial run and cycling legs of an International Triathlon Union (ITU) Duathlon simulation on maximal oxygen uptake (VO2max), ventilatory threshold (VT) and running economy (RE) within a thermoneutral, laboratory-controlled setting. Seven highly trained multisport athletes completed three trials; Trial-1 consisted of a speed-only VO2max treadmill protocol (SOVO2max) to determine VO2max, VT, and RE during a single-bout run; Trial-2 consisted of a 10 km run at 98% of VT followed by an incremental VO2max test on the cycle ergometer; Trial-3 consisted of a 10 km run and 30 km cycling bout at 98% of VT followed by a speed-only treadmill test to determine the compounding effects of the initial legs of a duathlon on VO2max, VT, and RE. A repeated-measures ANOVA was performed to determine differences between variables across trials. No difference in VO2max, VT (%VO2max), maximal HR, or maximal RPE was observed across trials. Oxygen consumption at VT was significantly lower during Trial-3 compared to Trial-1 (p = 0.01). This decrease was coupled with a significant reduction in running speed at VT (p = 0.015). A significant interaction between trial and running speed indicates that RE was significantly altered during Trial-3 compared to Trial-1 (p < 0.001). The first two legs of a laboratory-based duathlon simulation negatively impact VT and RE. Our findings may provide a useful method to evaluate multisport athletes since a single-bout incremental treadmill test fails to reveal important alterations in physiological thresholds.

  2. New constraints on the running-mass inflation model

    International Nuclear Information System (INIS)

    Covi, L.; Lyth, D.H.; Melchiorri, A.

    2002-10-01

    We evaluate new observational constraints on the two-parameter scale-dependent spectral index predicted by the running-mass inflation model by combining the latest cosmic microwave background (CMB) anisotropy measurements with the recent 2dFGRS data on the matter power spectrum, with Lyman α forest data and finally with theoretical constraints on the reionization redshift. We find that present data still allow significant scale-dependence of n, which occurs in a physically reasonable regime of parameter space. (orig.)
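
    A common way to express such scale-dependence is the running expansion of the spectral index about a pivot scale k0; the running-mass model predicts a specific two-parameter form of n(k), but the generic expansion below conveys what is being constrained (a standard parametrization, not the model's exact prediction):

```latex
n(k) \simeq n(k_{0})
  + \left.\frac{\mathrm{d}n}{\mathrm{d}\ln k}\right|_{k_{0}} \ln\frac{k}{k_{0}} + \cdots
```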

  3. Running scenarios using the Waste Tank Safety and Operations Hanford Site model

    International Nuclear Information System (INIS)

    Stahlman, E.J.

    1995-11-01

    Management of the Waste Tank Safety and Operations (WTS&O) at Hanford is a large and complex task encompassing 177 tanks and having a budget of over $500 million per year. To assist managers in this task, a model based on system dynamics was developed by the Massachusetts Institute of Technology. The model simulates the WTS&O at the Hanford Tank Farms by modeling the planning, control, and flow of work conducted by Managers, Engineers, and Crafts. The model is described in Policy Analysis of Hanford Tank Farm Operations with System Dynamics Approach (Kwak 1995b) and Management Simulator for Hanford Tank Farm Operations (Kwak 1995a). This document provides guidance for users of the model in developing, running, and analyzing the results of management scenarios. The reader is assumed to have an understanding of the model and its operation. Important parameters and variables in the model are described, and two scenarios are formulated as examples.

  4. Statistics for long irregular wave run-up on a plane beach from direct numerical simulations

    Science.gov (United States)

    Didenkulova, Ira; Senichev, Dmitry; Dutykh, Denys

    2017-04-01

    Very often for global and transoceanic events, due to the initial wave transformation, refraction, diffraction and multiple reflections from coastal topography and underwater bathymetry, the tsunami approaches the beach as a very long wave train, which can be considered as an irregular wave field. The prediction of possible flooding and of the properties of the water flow on the coast should in this case be done statistically, taking into account the formation of extreme (rogue) tsunami waves on a beach. When it comes to tsunami run-up on a beach, the most widely used mathematical model is the nonlinear shallow water model. For a beach of constant slope, the nonlinear shallow water equations have a rigorous analytical solution, which substantially simplifies the mathematical formulation. In (Didenkulova et al. 2011) we used this solution to study statistical characteristics of the vertical displacement of the moving shoreline and its horizontal velocity. The influence of the wave nonlinearity was approached by considering modifications of the probability distribution of the moving shoreline and its horizontal velocity for waves of different amplitudes. It was shown that wave nonlinearity did not affect the probability distribution of the velocity of the moving shoreline, while the vertical displacement of the moving shoreline was affected substantially, demonstrating the longer duration of coastal floods with an increase in the wave nonlinearity. However, this analysis did not take into account the actual transformation of the irregular wave field offshore to oscillations of the moving shoreline on a sloping beach. In this study we would like to cover this gap by means of extensive numerical simulations. The modeling is performed in the framework of nonlinear shallow water equations, which are solved using a modern shock-capturing finite volume method. Although the shallow water model does not reproduce wave breaking and bore formation in a general sense (including the water surface
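
    For a plane beach, the nonlinear shallow water model referred to above consists of the depth-averaged mass and momentum balances (written here in one horizontal dimension, with η the free surface elevation, d(x) the still-water depth and u the depth-averaged velocity):

```latex
\frac{\partial \eta}{\partial t} + \frac{\partial}{\partial x}\left[(\eta + d)\,u\right] = 0,
\qquad
\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + g\frac{\partial \eta}{\partial x} = 0
```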

  5. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    Science.gov (United States)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a high number of times (> 1000), which may become impracticable when the landslide model has a high computation time cost (> several hours); 2. Landslide model outputs are not scalar, but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model by a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low values of landslide displacements and one of high values.
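
    The "basis set expansion - meta-model - Sobol' indices" chain can be sketched compactly in Python. The snippet below uses synthetic stand-ins for the design matrix X of landslide parameters and the matrix Y of simulated displacement time series; the surrogate choice, sample sizes and libraries are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingRegressor
from SALib.sample import saltelli
from SALib.analyze import sobol

# Synthetic stand-ins for a handful of long-running model evaluations
rng = np.random.default_rng(1)
n_runs, n_params, n_times = 60, 3, 200
X = rng.uniform(0.0, 1.0, size=(n_runs, n_params))
t = np.linspace(0.0, 1.0, n_times)
Y = (X[:, [0]] * t + X[:, [1]] * np.sin(2 * np.pi * t)
     + 0.01 * rng.normal(size=(n_runs, n_times)))

# 1) Basis set expansion: compress each time series to a few dominant modes
pca = PCA(n_components=2).fit(Y)
scores = pca.transform(Y)  # (n_runs, 2) modal coefficients

problem = {"num_vars": n_params,
           "names": [f"p{i}" for i in range(n_params)],
           "bounds": [[0.0, 1.0]] * n_params}

for k in range(scores.shape[1]):
    # 2) Meta-model: a cheap surrogate for the k-th modal coefficient
    surrogate = GradientBoostingRegressor().fit(X, scores[:, k])
    # 3) Sobol' indices computed on dense samples of the surrogate
    Xs = saltelli.sample(problem, 1024)
    Si = sobol.analyze(problem, surrogate.predict(Xs))
    print(f"mode {k}: first-order Sobol indices = {np.round(Si['S1'], 2)}")
```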

  6. Diagnostic value of run chart analysis: using likelihood ratios to compare run chart rules on simulated data series.

    Directory of Open Access Journals (Sweden)

    Jacob Anhøj

    Run charts are widely used in healthcare improvement, but there is little consensus on how to interpret them. The primary aim of this study was to evaluate and compare the diagnostic properties of different sets of run chart rules. A run chart is a line graph of a quality measure over time. The main purpose of the run chart is to detect process improvement or process degradation, which will turn up as non-random patterns in the distribution of data points around the median. Non-random variation may be identified by simple statistical tests including the presence of unusually long runs of data points on one side of the median or if the graph crosses the median unusually few times. However, there is no general agreement on what defines "unusually long" or "unusually few". Other tests of questionable value are frequently used as well. Three sets of run chart rules (Anhoej, Perla, and Carey rules) have been published in peer reviewed healthcare journals, but these sets differ significantly in their sensitivity and specificity to non-random variation. In this study I investigate the diagnostic values expressed by likelihood ratios of three sets of run chart rules for detection of shifts in process performance using random data series. The study concludes that the Anhoej rules have good diagnostic properties and are superior to the Perla and the Carey rules.

  7. Diagnostic value of run chart analysis: using likelihood ratios to compare run chart rules on simulated data series.

    Science.gov (United States)

    Anhøj, Jacob

    2015-01-01

    Run charts are widely used in healthcare improvement, but there is little consensus on how to interpret them. The primary aim of this study was to evaluate and compare the diagnostic properties of different sets of run chart rules. A run chart is a line graph of a quality measure over time. The main purpose of the run chart is to detect process improvement or process degradation, which will turn up as non-random patterns in the distribution of data points around the median. Non-random variation may be identified by simple statistical tests including the presence of unusually long runs of data points on one side of the median or if the graph crosses the median unusually few times. However, there is no general agreement on what defines "unusually long" or "unusually few". Other tests of questionable value are frequently used as well. Three sets of run chart rules (Anhoej, Perla, and Carey rules) have been published in peer reviewed healthcare journals, but these sets differ significantly in their sensitivity and specificity to non-random variation. In this study I investigate the diagnostic values expressed by likelihood ratios of three sets of run chart rules for detection of shifts in process performance using random data series. The study concludes that the Anhoej rules have good diagnostic properties and are superior to the Perla and the Carey rules.
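
    The two informative tests described above (an unusually long run on one side of the median, and unusually few median crossings) are straightforward to implement. A Python sketch of run chart tests of the Anhoej type; the critical values used below follow the author's related work but should be treated as assumptions:

```python
import numpy as np
from scipy.stats import binom

def run_chart_signals(y):
    """Test a data series for non-random variation around its median.

    Returns (shift_signal, crossings_signal); the exact cut-offs used
    here are assumptions in the spirit of the Anhoej rules.
    """
    y = np.asarray(y, dtype=float)
    signs = np.sign(y - np.median(y))
    signs = signs[signs != 0]          # points on the median are not counted
    n = len(signs)

    # Longest run of consecutive points on the same side of the median
    runs = np.split(signs, np.where(np.diff(signs) != 0)[0] + 1)
    longest_run = max(len(r) for r in runs)
    shift_signal = longest_run > round(np.log2(n)) + 3

    # Number of times the series crosses the median
    n_cross = int(np.sum(np.diff(signs) != 0))
    crossings_signal = n_cross < binom.ppf(0.05, n - 1, 0.5)

    return shift_signal, crossings_signal

# A steadily drifting series: few crossings, so the crossings rule signals
print(run_chart_signals([3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]))
```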

  8. Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond

    CERN Document Server

    Bonacorsi, D; Giordano, D; Girone, M; Neri, M; Magini, N; Kuznetsov, V; Wildish, T

    2015-01-01

    During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collisions data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the WorldWide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulation of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This...

  9. Simulation of accelerated strip cooling on the hot rolling mill run-out roller table

    Directory of Open Access Journals (Sweden)

    E.Makarov

    2016-07-01

    A mathematical model is presented of the thermal state of the metal on the run-out roller table of a continuous wide hot strip mill. The mathematical model takes into account heat generation due to the polymorphic γ → α transformation of supercooled austenite and the influence of the chemical composition of the steel on the physical properties of the metal. The model allows calculation of accelerated cooling modes for strips on the run-out roller table of a continuous wide hot strip mill. The coiling temperature calculation error does not exceed 20°C for 98.5% of strips of low-carbon and low-alloy steels.
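
    The governing balance of such a model can be sketched as through-thickness heat conduction with a latent-heat source from the γ → α transformation (f is the transformed phase fraction; properties depend on composition and temperature; this is an illustration, not the authors' exact formulation):

```latex
\rho c_{p}\,\frac{\partial T}{\partial t}
  = \frac{\partial}{\partial z}\left(\lambda\,\frac{\partial T}{\partial z}\right)
  + \rho\,\Delta H_{\gamma\rightarrow\alpha}\,\frac{\partial f}{\partial t}
```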

  10. Modelling the long-run supply of coal

    International Nuclear Information System (INIS)

    Steenblik, R.P.

    1992-01-01

    There are many issues facing policy-makers in the fields of energy and the environment that require knowledge of coal supply and cost. Such questions arise in relation to decisions concerning, for example, the discontinuation of subsidies, or the effects of new environmental laws. The very complexity of these questions makes them suitable for analysis by models. Indeed, models have been used for analysing the behaviour of coal markets and the effects of public policies on them for many years. For estimating short-term responses econometric models are the most suitable. For estimating the supply of coal over the longer term, however - i.e., coal that would come from mines as yet not developed - depletion has to be taken into account. Underlying the normal supply curve relating cost to the rate of production is a curve that increases with cumulative production - what mineral economists refer to as the potential supply curve. To derive such a curve requires at some point in the analysis using process-oriented modelling techniques. Because coal supply curves can convey so succinctly information about the resource's long-run supply potential and costs, they have been influential in several major public debates on energy policy. And, within the coal industry itself, they have proved to be powerful tools for undertaking market research and long-range planning. The purpose of this paper is to describe in brief the various approaches that have been used to model long-run coal supply, to highlight their strengths, and to identify areas in which further progress is needed. (author)

  11. Massively parallel Monte Carlo. Experiences running nuclear simulations on a large condor cluster

    International Nuclear Information System (INIS)

    Tickner, James; O'Dwyer, Joel; Roach, Greg; Uher, Josef; Hitchen, Greg

    2010-01-01

    The trivially-parallel nature of Monte Carlo (MC) simulations makes them ideally suited for running on a distributed, heterogeneous computing environment. We report on the setup and operation of a large, cycle-harvesting Condor computer cluster, used to run MC simulations of nuclear instruments ('jobs') on approximately 4,500 desktop PCs. Successful operation must balance the competing goals of maximizing the availability of machines for running jobs whilst minimizing the impact on users' PC performance. This requires classification of jobs according to anticipated run-time and priority and careful optimization of the parameters used to control job allocation to host machines. To maximize use of a large Condor cluster, we have created a powerful suite of tools to handle job submission and analysis, as the manual creation, submission and evaluation of large numbers (hundreds to thousands) of jobs would be too arduous. We describe some of the key aspects of this suite, which has been interfaced to the well-known MCNP and EGSnrc nuclear codes and our in-house PHOTON optical MC code. We report on our practical experiences of operating our Condor cluster and present examples of several large-scale instrument design problems that have been solved using this tool. (author)
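
    Submission at this scale is typically driven by generated submit description files rather than hand-written ones. A toy Python generator for an HTCondor parameter sweep is sketched below; the file names, executable and the use of job priority to reflect run-time classification are illustrative, not the authors' tool suite:

```python
from pathlib import Path

def write_submit_file(executable, n_jobs, est_runtime_hours, path="sweep.sub"):
    """Generate an HTCondor submit description for an n_jobs-wide MC sweep.

    Using priority to reflect anticipated run-time mirrors the idea of
    matching short and long jobs to hosts; the attribute choices are
    illustrative.
    """
    text = f"""\
universe   = vanilla
executable = {executable}
arguments  = --seed $(Process)
output     = out/job_$(Process).out
error      = out/job_$(Process).err
log        = sweep.log
priority   = {0 if est_runtime_hours < 1 else -10}
queue {n_jobs}
"""
    Path(path).write_text(text)

write_submit_file("run_mcnp.sh", n_jobs=500, est_runtime_hours=2)
```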

  12. Integrating spatio-temporal environmental models for planning ski runs

    NARCIS (Netherlands)

    Pfeffer, Karin

    2003-01-01

    The establishment of ski runs and ski lifts, the action of skiing and maintenance of ski runs may cause considerable environmental impact. Clearly, for improvements to be made in the planning of ski runs in alpine terrain a good understanding of the environmental system and the response of

  13. Evaluation of countermeasures for red light running by traffic simulator-based surrogate safety measures.

    Science.gov (United States)

    Lee, Changju; So, Jaehyun Jason; Ma, Jiaqi

    2018-01-02

    Conflicts among motorists entering a signalized intersection on a red light indication have become a national safety issue. Because of its sensitivity, efforts have been made to investigate the possible causes and the effectiveness of countermeasures using comparison sites and/or before-and-after studies. Nevertheless, these approaches are ineffective when comparison sites cannot be found, or crash data sets are not readily available or not reliable for statistical analysis. Considering the random nature of red light running (RLR) crashes, an approach that does not depend on such data availability is necessary to evaluate the effectiveness of each countermeasure head to head. The aims of this research are to (1) review previous literature related to red light running and traffic safety models; (2) propose a practical methodology for evaluation of RLR countermeasures with a microscopic traffic simulation model and the surrogate safety assessment model (SSAM); (3) apply the proposed methodology to an actual signalized intersection in Virginia, with the most prevalent scenarios: increasing the yellow signal interval duration, installing an advance warning sign, and installing an RLR camera; and (4) analyze the relative effectiveness by RLR frequency and the number of conflicts (rear-end and crossing). All scenarios show a reduction in RLR frequency (-7.8, -45.5, and -52.4%, respectively), but only increasing the yellow signal interval duration results in a reduced total number of conflicts (-11.3%; a surrogate safety measure of possible RLR-related crashes). An RLR camera yields the greatest reduction (-60.9%) in crossing conflicts (a surrogate safety measure of possible angle crashes), whereas increasing the yellow signal interval duration results in only a 12.8% reduction of rear-end conflicts (a surrogate safety measure of possible rear-end crashes). Although increasing the yellow signal interval duration is advantageous because this reduces the total conflicts (a possibility of total

  14. Simulation in Complex Modelling

    DEFF Research Database (Denmark)

    Nicholas, Paul; Ramsgaard Thomsen, Mette; Tamke, Martin

    2017-01-01

    This paper will discuss the role of simulation in extended architectural design modelling. As a framing paper, the aim is to present and discuss the role of integrated design simulation and feedback between design and simulation in a series of projects under the Complex Modelling framework. Complex...... performance, engage with high degrees of interdependency and allow the emergence of design agency and feedback between the multiple scales of architectural construction. This paper presents examples for integrated design simulation from a series of projects including Lace Wall, A Bridge Too Far and Inflated...... Restraint developed for the research exhibition Complex Modelling, Meldahls Smedie Gallery, Copenhagen in 2016. Where the direct project aims and outcomes have been reported elsewhere, the aim for this paper is to discuss overarching strategies for working with design integrated simulation....

  15. Scientific Modeling and simulations

    CERN Document Server

    Diaz de la Rubia, Tomás

    2009-01-01

    Showcases the conceptual advantages of modeling which, coupled with the unprecedented computing power available through simulations, allow scientists to tackle the formidable problems of our society, such as the search for hydrocarbons, understanding the structure of a virus, or the intersection between simulations and real data in extreme environments.

  16. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of the scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that, because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction needs to be made between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimentation with them). Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  17. Automated Simulation Model Generation

    NARCIS (Netherlands)

    Huang, Y.

    2013-01-01

    One of today's challenges in the field of modeling and simulation is to model increasingly larger and more complex systems. Complex models take long to develop and incur high costs. With the advances in data collection technologies and more popular use of computer-aided systems, more data has become

  18. Tsunami generation, propagation, and run-up with a high-order Boussinesq model

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Madsen, Per A.

    2009-01-01

    In this work we extend a high-order Boussinesq-type (finite difference) model, capable of simulating waves out to wavenumber times depth kh ... tsunamis. The extension is straightforward, requiring only... show that the long-time (fully nonlinear) evolution of waves resulting from an upthrusted bottom can eventually result in true solitary waves, consistent with theoretical predictions. It is stressed, however, that the nonlinearity used far exceeds that typical of geophysical tsunamis in the open ocean.... The Boussinesq-type model is then used to simulate numerous tsunami-type events generated from submerged landslides, in both one and two horizontal dimensions. The results again compare well against previous experiments and/or numerical simulations. The new extension complements recently developed run...

  19. Improved running economy in elite runners after 20 days of simulated moderate-altitude exposure.

    Science.gov (United States)

    Saunders, P U; Telford, R D; Pyne, D B; Cunningham, R B; Gore, C J; Hahn, A G; Hawley, J A

    2004-03-01

    To investigate the effect of altitude exposure on running economy (RE), 22 elite distance runners [maximal O2 consumption (VO2) 72.8 +/- 4.4 ml x kg(-1) x min(-1); training volume 128 +/- 27 km/wk], who were homogeneous for maximal VO2 and training, were assigned to one of three groups: live high (simulated altitude of 2,000-3,100 m)-train low (LHTL; natural altitude of 600 m), live moderate-train moderate (LMTM; natural altitude of 1,500-2,000 m), or live low-train low (LLTL; natural altitude of 600 m) for a period of 20 days. RE was assessed during three submaximal treadmill runs at 14, 16, and 18 km/h before and at the completion of each intervention. VO2, minute ventilation (VE), respiratory exchange ratio, heart rate, and blood lactate concentration were determined during the final 60 s of each run, whereas hemoglobin mass (Hbmass) was measured on a separate occasion. All testing was performed under normoxic conditions at approximately 600 m. VO2 (l/min) averaged across the three submaximal running speeds was 3.3% lower (P = 0.005) after LHTL compared with either LMTM or LLTL. VE, respiratory exchange ratio, heart rate, and Hbmass were not significantly different after the three interventions. There was no evidence of an increase in lactate concentration after the LHTL intervention, suggesting that the lower aerobic cost of running was not attributable to an increased anaerobic energy contribution. Furthermore, the improved RE could not be explained by a decrease in VE or by preferential use of carbohydrate as a metabolic substrate, nor was it related to any change in Hbmass. We conclude that 20 days of LHTL at simulated altitude improved the RE of elite distance runners.

  20. Hydrologic and water-quality characterization and modeling of the Chenoweth Run basin, Jefferson County, Kentucky

    Science.gov (United States)

    Martin, Gary R.; Zarriello, Phillip J.; Shipp, Allison A.

    2001-01-01

    Rainfall, streamflow, and water-quality data collected in the Chenoweth Run Basin during February 1996–January 1998, in combination with the available historical sampling data, were used to characterize hydrologic conditions and to develop and calibrate a Hydrological Simulation Program–Fortran (HSPF) model for continuous simulation of rainfall, streamflow, suspended-sediment, and total-orthophosphate (TPO4) transport relations. Study results provide an improved understanding of basin hydrology and a hydrologic-modeling framework with analytical tools for use in comprehensive water-resource planning and management. Chenoweth Run Basin, encompassing 16.5 mi2 in suburban eastern Jefferson County, Kentucky, contains expanding urban development, particularly in the upper third of the basin. Historical water-quality problems have interfered with designated aquatic-life and recreation uses in the stream main channel (approximately 9 mi in length) and have been attributed to organic enrichment, nutrients, metals, and pathogens in urban runoff and wastewater inflows. Hydrologic conditions in Jefferson County are highly varied. In the Chenoweth Run Basin, as in much of the eastern third of the county, relief is moderately sloping to steep. Also, internal drainage in pervious areas is impeded by the shallow, fine-textured subsoils that contain abundant silts and clays. Thus, much of the precipitation here tends to move rapidly as overland flow and (or) shallow subsurface flow (interflow) to the stream channels. Data were collected at two streamflow-gaging stations, one rain gage, and four water-quality sampling sites in the basin. Precipitation, streamflow, and, consequently, constituent loads were above normal during the data-collection period of this study. Nonpoint sources contributed the largest portion of the sediment loads. However, the three wastewater-treatment plants (WWTPs) were the source of the majority of estimated total phosphorus (TP) and TPO4 transport

  1. Tropical Cyclones in the 7-km NASA Global Nature Run for Use in Observing System Simulation Experiments

    Science.gov (United States)

    Reale, Oreste; Achuthavarier, Deepthi; Fuentes, Marangelly; Putman, William M.; Partyka, Gary

    2017-01-01

    The National Aeronautics and Space Administration (NASA) Nature Run (NR), released for use in Observing System Simulation Experiments (OSSEs), is a 2-year-long global non-hydrostatic free-running simulation at a horizontal resolution of 7 km, forced by observed sea-surface temperatures (SSTs) and sea ice, and inclusive of interactive aerosols and trace gases. This article evaluates the NR with respect to tropical cyclone (TC) activity. It is emphasized that to serve as a NR, a long-term simulation must be able to produce realistic TCs, which arise out of realistic large-scale forcings. The presence in the NR of the realistic, relevant dynamical features over the African Monsoon region and the tropical Atlantic is confirmed, along with realistic African Easterly Wave activity. The NR Atlantic TC seasons, produced with 2005 and 2006 SSTs, show interannual variability consistent with observations, with much stronger activity in 2005. An investigation of TC activity over all the other basins (eastern and western North Pacific, North and South Indian Ocean, and Australian region), together with relevant elements of the atmospheric circulation, such as the Somali Jet and westerly bursts, reveals that the model captures the fundamental aspects of TC seasons in every basin, producing realistic numbers of TCs with realistic tracks, life spans and structures. This confirms that the NASA NR is a very suitable tool for OSSEs targeting TCs and represents an improvement with respect to previous long simulations that have served the global atmospheric OSSE community.

  2. Building and Running the Yucca Mountain Total System Performance Model in a Quality Environment

    International Nuclear Information System (INIS)

    D.A. Kalinich; K.P. Lee; J.A. McNeish

    2005-01-01

    A Total System Performance Assessment (TSPA) model has been developed to support the Safety Analysis Report (SAR) for the Yucca Mountain High-Level Waste Repository. The TSPA model forecasts repository performance over a 20,000-year simulation period. It has a high degree of complexity, reflecting the complexity of its underlying process and abstraction models. This is reflected in the size of the model (a 27,000-element GoldSim file), its use of dynamic-linked libraries (14 DLLs), the number and size of its input files (659 files totaling 4.7 GB), and the number of model input parameters (2,541 input database entries). TSPA model development and subsequent simulations with the final version of the model were performed to a set of Quality Assurance (QA) procedures. Due to the complexity of the model, comments on previous TSPAs, and the number of analysts involved (22 analysts in seven cities across four time zones), additional controls for the entire life cycle of the TSPA model, including management, physical, model-change, and input controls, were developed and documented. These controls did not replace the QA procedures; rather, they provided guidance for implementing the requirements of the QA procedures with the specific intent of ensuring that the model development process and the simulations performed with the final version of the model had sufficient checking, traceability, and transparency. Management controls were developed to ensure that only management-approved changes were implemented into the TSPA model and that only management-approved model runs were performed. Physical controls were developed to track the use of prototype software and preliminary input files, and to ensure that only qualified software and inputs were used in the final version of the TSPA model. In addition, a system was developed to name, file, and track development versions of the TSPA model as well as simulations performed with the final version of the model.

  3. Validation of simulation models

    DEFF Research Database (Denmark)

    Rehman, Muniza; Pedersen, Stig Andur

    2012-01-01

    In philosophy of science, the interest for computational models and simulations has increased heavily during the past decades. Different positions regarding the validity of models have emerged but the views have not succeeded in capturing the diversity of validation methods. The wide variety...

  4. 2013 CEF RUN - PHASE 1 DATA ANALYSIS AND MODEL VALIDATION

    Energy Technology Data Exchange (ETDEWEB)

    Choi, A.

    2014-05-08

    Phase 1 of the 2013 Cold cap Evaluation Furnace (CEF) test was completed on June 3, 2013 after a 5-day round-the-clock feeding and pouring operation. The main goal of the test was to characterize the CEF off-gas produced from a nitric-formic acid flowsheet feed and confirm whether the CEF platform is capable of producing scalable off-gas data necessary for the revision of the DWPF melter off-gas flammability model; the revised model will be used to define new safety controls on the key operating parameters for the nitric-glycolic acid flowsheet feeds including total organic carbon (TOC). Whether the CEF off-gas data were scalable for the purpose of predicting the potential flammability of the DWPF melter exhaust was determined by comparing the predicted H2 and CO concentrations using the current DWPF melter off-gas flammability model to those measured during Phase 1; data were deemed scalable if the calculated fractional conversions of TOC-to-H2 and TOC-to-CO at varying melter vapor space temperatures were found to trend and further bound the respective measured data with some margin of safety. Being scalable thus means that for a given feed chemistry the instantaneous flow rates of H2 and CO in the DWPF melter exhaust can be estimated with some degree of conservatism by multiplying those of the respective gases from a pilot-scale melter by the feed rate ratio. This report documents the results of the Phase 1 data analysis and the necessary calculations performed to determine the scalability of the CEF off-gas data. A total of six steady state runs were made during Phase 1 under non-bubbled conditions by varying the CEF vapor space temperature from near 700 to below 300°C, as measured in a thermowell (Ttw). At each steady state temperature, the off-gas composition was monitored continuously for two hours using MS, GC, and FTIR in order to track mainly H2, CO, CO2, NOx, and organic gases such as CH4. The standard

  5. Nonhydrostatic and surfbeat model predictions of extreme wave run-up in fringing reef environments

    Science.gov (United States)

    Lashley, Christopher H.; Roelvink, Dano; van Dongeren, Ap R.; Buckley, Mark L.; Lowe, Ryan J.

    2018-01-01

    The accurate prediction of extreme wave run-up is important for effective coastal engineering design and coastal hazard management. While run-up processes on open sandy coasts have been reasonably well studied, very few studies have focused on understanding and predicting wave run-up at coral reef-fronted coastlines. This paper applies the short-wave resolving Nonhydrostatic (XB-NH) and short-wave averaged Surfbeat (XB-SB) modes of the XBeach numerical model to validate run-up using data from two 1D (alongshore uniform) fringing-reef profiles without roughness elements, with two objectives: i) to provide insight into the physical processes governing run-up in such environments; and ii) to evaluate the performance of both modes in accurately predicting run-up over a wide range of conditions. XBeach was calibrated by optimizing the maximum wave steepness parameter (maxbrsteep) in XB-NH and the dissipation coefficient (alpha) in XB-SB using the first dataset, and then applied to the second dataset for validation. XB-NH and XB-SB predictions of extreme wave run-up (Rmax and R2%) and its components, infragravity- and sea-swell band swash (SIG and SSS) and shoreline setup, were compared to observations. XB-NH more accurately simulated wave transformation but under-predicted shoreline setup due to its exclusion of parameterized wave-roller dynamics. XB-SB under-predicted sea-swell band swash but overestimated shoreline setup due to an over-prediction of wave heights on the reef flat. Run-up (swash) spectra were dominated by infragravity motions, allowing the short-wave (but not wave group) averaged model (XB-SB) to perform comparably well to its more complete, short-wave resolving (XB-NH) counterpart. Despite their respective limitations, both modes were able to accurately predict Rmax and R2%.
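
    The run-up statistics being validated can be computed directly from a shoreline water-level time series: collect the discrete run-up maxima, then take the level exceeded by 2% of them. A small Python sketch (the percentile estimator for R2% and the peak-separation setting are assumptions):

```python
import numpy as np
from scipy.signal import find_peaks

def runup_stats(eta_shoreline, min_separation=10):
    """Rmax and R2% from a time series of vertical shoreline elevation."""
    # Individual run-up events = local maxima of the shoreline motion
    peaks, _ = find_peaks(eta_shoreline, distance=min_separation)
    maxima = eta_shoreline[peaks]
    r_max = maxima.max()
    r_2pct = np.percentile(maxima, 98)  # level exceeded by 2% of events
    return r_max, r_2pct

# Illustrative synthetic swash signal dominated by infragravity motion
t = np.linspace(0.0, 1800.0, 18000)
eta = 0.6 * np.sin(2 * np.pi * t / 60.0) + 0.2 * np.sin(2 * np.pi * t / 9.0)
print(runup_stats(eta))
```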

  6. Dynamical system approach to running Λ cosmological models

    International Nuclear Information System (INIS)

    Stachowski, Aleksander; Szydlowski, Marek

    2016-01-01

    We study the dynamics of cosmological models with a time dependent cosmological term. We consider five classes of models; two with the non-covariant parametrization of the cosmological term Λ: Λ(H)CDM cosmologies, Λ(a)CDM cosmologies, and three with the covariant parametrization of Λ: Λ(R)CDM cosmologies, where R(t) is the Ricci scalar, Λ(φ)-cosmologies with diffusion, and Λ(X)-cosmologies, where X = (1/2) g^{αβ} ∇_α φ ∇_β φ is the kinetic part of the density of the scalar field. We also consider the case of an emergent Λ(a) relation obtained from the behaviour of trajectories in a neighbourhood of an invariant submanifold. In the study of the dynamics we used dynamical system methods for investigating how an evolutionary scenario can depend on the choice of special initial conditions. We show that the methods of dynamical systems allow one to investigate all admissible solutions of a running Λ cosmology for all initial conditions. We interpret Alcaniz and Lima's approach as a scaling cosmology. We formulate the idea of an emergent cosmological term derived directly from an approximation of the exact dynamics. We show that some non-covariant parametrizations of the cosmological term like Λ(a), Λ(H) give rise to non-physical behaviour of trajectories in the phase space. This behaviour disappears if the term Λ(a) is emergent from the covariant parametrization. (orig.)

  7. The Run 2 ATLAS Analysis Event Data Model

    CERN Document Server

    SNYDER, S; The ATLAS collaboration; NOWAK, M; EIFERT, T; BUCKLEY, A; ELSING, M; GILLBERG, D; MOYSE, E; KOENEKE, K; KRASZNAHORKAY, A

    2014-01-01

    During the LHC's first Long Shutdown (LS1) ATLAS set out to establish a new analysis model, based on the experience gained during Run 1. A key component of this is a new Event Data Model (EDM), called the xAOD. This format, which is now in production, provides the following features: A separation of the EDM into interface classes that the user code directly interacts with, and data storage classes that hold the payload data. The user sees an Array of Structs (AoS) interface, while the data is stored in a Struct of Arrays (SoA) format in memory, thus making it possible to efficiently auto-vectorise reconstruction code. A simple way of augmenting and reducing the information saved for different data objects. This makes it possible to easily decorate objects with new properties during data analysis, and to remove properties that the analysis does not need. A persistent file format that can be explored directly with ROOT, either with or without loading any additional libraries. This allows fast interactive naviga...
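
    The AoS-interface-over-SoA-storage idea can be illustrated in a few lines: attributes live in contiguous per-variable arrays, while element access goes through a lightweight proxy. A Python sketch in the spirit of that design (not the ATLAS implementation, which is C++):

```python
import numpy as np

class ParticleContainer:
    """Struct-of-Arrays storage behind an Array-of-Structs style interface,
    in the spirit of the xAOD design (illustrative, not the ATLAS code)."""

    def __init__(self, n):
        # Payload: one contiguous array per attribute (SoA), cache-friendly
        self._data = {"pt": np.zeros(n), "eta": np.zeros(n), "phi": np.zeros(n)}

    def decorate(self, name):
        # Augment every element with a new attribute at analysis time
        self._data[name] = np.zeros(len(self._data["pt"]))

    def __len__(self):
        return len(self._data["pt"])

    def __getitem__(self, i):
        # Interface: a lightweight per-element proxy (AoS view)
        container = self

        class Proxy:
            def __getattr__(self, name):
                return container._data[name][i]

            def __setattr__(self, name, value):
                container._data[name][i] = value

        return Proxy()

jets = ParticleContainer(100)
jets.decorate("btag_score")   # user-defined decoration
jets[0].pt = 52.3             # AoS-style access...
print(jets._data["pt"][:3])   # ...backed by SoA storage
```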

  8. A rolling constraint reproduces ground reaction forces and moments in dynamic simulations of walking, running, and crouch gait.

    Science.gov (United States)

    Hamner, Samuel R; Seth, Ajay; Steele, Katherine M; Delp, Scott L

    2013-06-21

    Recent advances in computational technology have dramatically increased the use of muscle-driven simulation to study accelerations produced by muscles during gait. Accelerations computed from muscle-driven simulations are sensitive to the model used to represent contact between the foot and ground. A foot-ground contact model must be able to calculate ground reaction forces and moments that are consistent with experimentally measured ground reaction forces and moments. We show here that a rolling constraint can model foot-ground contact and reproduce measured ground reaction forces and moments in an induced acceleration analysis of muscle-driven simulations of walking, running, and crouch gait. We also illustrate that a point constraint and a weld constraint used to model foot-ground contact in previous studies produce inaccurate reaction moments and lead to contradictory interpretations of muscle function. To enable others to use and test these different constraint types (i.e., rolling, point, and weld constraints) we have included them as part of an induced acceleration analysis in OpenSim, a freely-available biomechanics simulation package.

  9. Effects of Yaw Error on Wind Turbine Running Characteristics Based on the Equivalent Wind Speed Model

    Directory of Open Access Journals (Sweden)

    Shuting Wan

    2015-06-01

    Full Text Available Natural wind is stochastic, being characterized by its speed and direction which change randomly and frequently. Because of the certain lag in control systems and the yaw body itself, wind turbines cannot be accurately aligned toward the wind direction when the wind speed and wind direction change frequently. Thus, wind turbines often suffer from a series of engineering issues during operation, including frequent yaw, vibration overruns and downtime. This paper aims to study the effects of yaw error on wind turbine running characteristics at different wind speeds and control stages by establishing a wind turbine model, yaw error model and the equivalent wind speed model that includes the wind shear and tower shadow effects. Formulas for the relevant effect coefficients Tc, Sc and Pc were derived. The simulation results indicate that the effects of the aerodynamic torque, rotor speed and power output due to yaw error at different running stages are different and that the effect rules for each coefficient are not identical when the yaw error varies. These results may provide theoretical support for optimizing the yaw control strategies for each stage to increase the running stability of wind turbines and the utilization rate of wind energy.

  10. The 2014 Lake Askja rockslide tsunami - optimization of landslide parameters comparing numerical simulations with observed run-up

    Science.gov (United States)

    Sif Gylfadóttir, Sigríður; Kim, Jihwan; Kristinn Helgason, Jón; Brynjólfsson, Sveinn; Höskuldsson, Ármann; Jóhannesson, Tómas; Bonnevie Harbitz, Carl; Løvholt, Finn

    2016-04-01

    The Askja central volcano is located in the Northern Volcanic Zone of Iceland. Within the main caldera, an inner caldera was formed in an eruption in 1875; over the next 40 years it gradually subsided and filled up with water, forming Lake Askja. A large rockslide was released from the southeast margin of the inner caldera into Lake Askja on 21 July 2014. The release zone was located from 150 m to 350 m above the water level and measured 800 m across. The volume of the rockslide is estimated to have been 15-30 million m3, of which 10.5 million m3 was deposited in the lake, raising the water level by almost a meter. The rockslide caused a large tsunami that traveled across the lake and inundated the shores around the entire lake within 1-2 minutes. The vertical run-up varied typically between 10-40 m, but in some locations close to the impact area it ranged up to 70 m. Lake Askja is a popular destination visited by tens of thousands of tourists every year, but as luck would have it, the event occurred near midnight when no one was in the area. Field surveys conducted in the months following the event resulted in an extensive dataset, containing e.g. the maximum inundation, a high-resolution digital elevation model of the entire inner caldera, and a high-resolution bathymetry of the lake displaying the landslide deposits. Using these data, a numerical model of the Lake Askja landslide and tsunami was developed using GeoClaw, a software package for numerical analysis of geophysical flow problems. Both the shallow water version and an extension of GeoClaw that includes dispersion were employed to simulate the wave generation, propagation, and run-up due to the rockslide plunging into the lake. The rockslide was modeled as a block that was allowed to stretch during run-out after entering the lake. An optimization approach was adopted to constrain the landslide parameters through inverse modeling by comparing the calculated inundation with the observed run-up.

  11. Run charts revisited: a simulation study of run chart rules for detection of non-random variation in health care processes.

    Directory of Open Access Journals (Sweden)

    Jacob Anhøj

    Full Text Available A run chart is a line graph of a measure plotted over time with the median as a horizontal line. The main purpose of the run chart is to identify process improvement or degradation, which may be detected by statistical tests for non-random patterns in the data sequence. We studied the sensitivity to shifts and linear drifts in simulated processes using the shift, crossings and trend rules for detecting non-random variation in run charts. The shift and crossings rules are effective in detecting shifts and drifts in process centre over time while keeping the false signal rate constant around 5% and independent of the number of data points in the chart. The trend rule is virtually useless for detection of linear drift over time, the purpose it was intended for.

  12. Run charts revisited: a simulation study of run chart rules for detection of non-random variation in health care processes.

    Science.gov (United States)

    Anhøj, Jacob; Olesen, Anne Vingaard

    2014-01-01

    A run chart is a line graph of a measure plotted over time with the median as a horizontal line. The main purpose of the run chart is to identify process improvement or degradation, which may be detected by statistical tests for non-random patterns in the data sequence. We studied the sensitivity to shifts and linear drifts in simulated processes using the shift, crossings and trend rules for detecting non-random variation in run charts. The shift and crossings rules are effective in detecting shifts and drifts in process centre over time while keeping the false signal rate constant around 5% and independent of the number of data points in the chart. The trend rule is virtually useless for detection of linear drift over time, the purpose it was intended for.
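
    The two effective rules can be implemented in a few lines. The following Python sketch applies the published log2-based limits (shift signal: a longest run of points on one side of the median of at least round(log2(n)) + 3; crossings signal: fewer median crossings than the 5th percentile of a binomial(n - 1, 0.5) distribution, with n counting points not on the median); the function name and example data are illustrative.

        import math
        from scipy.stats import binom

        def run_chart_signals(data):
            # n counts points not on the median ('useful observations')
            med = sorted(data)[len(data) // 2]
            useful = [x for x in data if x != med]
            n = len(useful)
            sides = [x > med for x in useful]

            longest = current = 1
            crossings = 0
            for prev, cur in zip(sides, sides[1:]):
                if cur == prev:
                    current += 1
                    longest = max(longest, current)
                else:
                    crossings += 1
                    current = 1

            shift = longest >= round(math.log2(n)) + 3
            few_crossings = crossings < binom.ppf(0.05, n - 1, 0.5)
            return {"shift": shift, "crossings": few_crossings}

        print(run_chart_signals([3, 4, 4, 5, 6, 7, 7, 8, 9, 9, 10, 11]))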

  13. Planning of overhead contact lines and simulation of the pantograph running; Oberleitungsplanung und Simulation des Stromabnehmerlaufes

    Energy Technology Data Exchange (ETDEWEB)

    Hofbauer, Gerhard [ALPINE-ENERGIE Oesterreich GmbH, Linz (Austria); Hofbauer, Werner

    2009-07-01

    Using the software FLTG, all planning steps for overhead contact lines can be carried out based on the parameters of the contact line type and the line data. Contact line supports and individual spans are presented graphically. The geometric interaction of pantograph and contact line can be simulated, taking into account the pantograph type, its sway and the wind action. Thus, the suitability of a line for the interoperability of the trans-European rail system can be demonstrated. (orig.)

  14. Modelling of Muscle Force Distributions During Barefoot and Shod Running

    Directory of Open Access Journals (Sweden)

    Sinclair Jonathan

    2015-09-01

    Full Text Available Research interest in barefoot running has expanded considerably in recent years, based around the notion that running without shoes is associated with a reduced incidence of chronic injuries. The aim of the current investigation was to examine the differences in the forces produced by different skeletal muscles during barefoot and shod running. Fifteen male participants ran at 4.0 m·s-1 (± 5%). Kinematics were measured using an eight camera motion analysis system alongside ground reaction force parameters. Differences in sagittal plane kinematics and muscle forces between footwear conditions were examined using repeated-measures or Friedman’s ANOVA. The kinematic analysis showed that the shod condition was associated with significantly more hip flexion, whilst barefoot running was linked with significantly more flexion at the knee and plantarflexion at the ankle. The examination of muscle kinetics indicated that peak forces from Rectus femoris, Vastus medialis, Vastus lateralis and Tibialis anterior were significantly larger in the shod condition, whereas Gastrocnemius forces were significantly larger during barefoot running. These observations provide further insight into the mechanical alterations that runners make when running without shoes. Such findings may also deliver important information to runners regarding their susceptibility to chronic injuries in different footwear conditions.

  15. PSH Transient Simulation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Muljadi, Eduard [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-12-21

    PSH Transient Simulation Modeling presentation from the WPTO FY14 - FY16 Peer Review. Transient effects are an important consideration when designing a PSH system, yet numerical techniques for hydraulic transient analysis still need improvements for adjustable-speed (AS) reversible pump-turbine applications.

  16. Wake modeling and simulation

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Madsen Aagaard, Helge; Larsen, Torben J.

    We present a consistent, physically based theory for the wake meandering phenomenon, which we consider of crucial importance for the overall description of wind turbine loadings in wind farms. In its present version the model is confined to single wake situations. The model philosophy does, howev...... methodology has been implemented in the aeroelastic code HAWC2, and example simulations of wake situations, from the small Tjæreborg wind farm, have been performed showing satisfactory agreement between predictions and measurements...

  17. An automated and reproducible workflow for running and analyzing neural simulations using Lancet and IPython Notebook

    Directory of Open Access Journals (Sweden)

    Jean-Luc Richard Stevens

    2013-12-01

    Full Text Available Lancet is a new, simulator-independent Python utility for succinctly specifying, launching, and collating results from large batches of interrelated computationally demanding program runs. This paper demonstrates how to combine Lancet with IPython Notebook to provide a flexible, lightweight, and agile workflow for fully reproducible scientific research. This informal and pragmatic approach uses IPython Notebook to capture the steps in a scientific computation as it is gradually automated and made ready for publication, without mandating the use of any separate application that can constrain scientific exploration and innovation. The resulting notebook concisely records each step involved in even very complex computational processes that led to a particular figure or numerical result, allowing the complete chain of events to be replicated automatically. Lancet was originally designed to help solve problems in computational neuroscience, such as analyzing the sensitivity of a complex simulation to various parameters, or collecting the results from multiple runs with different random starting points. However, because it is never possible to know in advance what tools might be required in future tasks, Lancet has been designed to be completely general, supporting any type of program as long as it can be launched as a process and can return output in the form of files. For instance, Lancet is also heavily used by one of the authors in a separate research group for launching batches of microprocessor simulations. This general design will allow Lancet to continue supporting a given research project even as the underlying approaches and tools change.

  18. Hit-And-Run enables efficient weight generation for simulation-based multiple criteria decision analysis

    NARCIS (Netherlands)

    Tervonen, Tommi; van Valkenhoef, Gert; Basturk, Nalan; Postmus, Douwe

    2013-01-01

    Models for Multiple Criteria Decision Analysis (MCDA) often separate per-criterion attractiveness evaluation from weighted aggregation of these evaluations across the different criteria. In simulation-based MCDA methods, such as Stochastic Multicriteria Acceptability Analysis, uncertainty in the

  19. Wedge Experiment Modeling and Simulation for Reactive Flow Model Calibration

    Science.gov (United States)

    Maestas, Joseph T.; Dorgan, Robert J.; Sutherland, Gerrit T.

    2017-06-01

    Wedge experiments are a typical method for generating pop-plot data (run-to-detonation distance versus input shock pressure), which are used to assess an explosive material's initiation behavior. Such data can be utilized to calibrate reactive flow models by running hydrocode simulations and successively tweaking model parameters until a match with experiment is achieved. Simulations are typically performed in 1D and use a flyer impact to achieve the prescribed shock loading pressure. In this effort, a wedge experiment performed at the Army Research Lab (ARL) was modeled using CTH (SNL hydrocode) in 1D, 2D, and 3D space in order to determine whether there was any justification for using simplified models. A simulation was also performed using the BCAT code (CTH companion tool), which assumes a plate impact shock loading. Results from the simulations were compared to experimental data and show that the shock imparted into an explosive specimen is accurately captured with 2D and 3D simulations, but changes significantly in 1D space and with the BCAT tool. The difference in shock profile is shown to affect numerical predictions only for large run distances. This is attributed to incorrectly capturing the energy fluence for detonation waves versus flat shock loading. Portions of this work were funded through the Joint Insensitive Munitions Technology Program.
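
    Pop-plot data of this kind are conventionally summarized by a straight-line fit in log-log space. A minimal Python sketch with hypothetical pressure/run-distance values (not the ARL data):

        import numpy as np

        # Hypothetical pop-plot data: input shock pressure (GPa) vs run distance (mm)
        pressure = np.array([3.0, 4.0, 5.0, 6.5, 8.0])
        run_dist = np.array([12.0, 7.5, 5.1, 3.2, 2.2])

        # Pop plots are straight lines in log-log space: log10(x) = a + b*log10(P)
        b, a = np.polyfit(np.log10(pressure), np.log10(run_dist), 1)
        print(f"log10(x_run) = {a:.2f} + {b:.2f} * log10(P)")
        print("predicted run distance at 5.5 GPa:",
              round(10 ** (a + b * np.log10(5.5)), 2), "mm")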

  20. Approaches in highly parameterized inversion - GENIE, a general model-independent TCP/IP run manager

    Science.gov (United States)

    Muffels, Christopher T.; Schreuder, Willem A.; Doherty, John E.; Karanovic, Marinko; Tonkin, Matthew J.; Hunt, Randall J.; Welter, David E.

    2012-01-01

    GENIE is a model-independent suite of programs that can be used to distribute, manage, and execute multiple model runs via the TCP/IP infrastructure. The suite consists of a file distribution interface, a run manager, a run executor, and a routine that can be compiled as part of a program and used to exchange model runs with the run manager. Because communication is via a standard protocol (TCP/IP), any computer connected to the Internet can serve in any of the capacities offered by this suite. Model independence is consistent with the existing template and instruction file protocols of the widely used PEST parameter estimation program. This report describes (1) the problem addressed; (2) the approach used by GENIE to queue, distribute, and retrieve model runs; and (3) user instructions, classes, and functions developed. It also includes (4) an example to illustrate the linking of GENIE with Parallel PEST using the interface routine.
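
    As a flavor of how a TCP/IP run manager distributes work, here is a deliberately tiny Python sketch; it is not GENIE's actual protocol, classes, or API, just a generic queue-over-sockets illustration.

        import queue
        import socket
        import threading
        import time

        jobs = queue.Queue()
        for run_id, k in enumerate([0.1, 0.2, 0.3]):
            jobs.put((run_id, k))           # queued model runs (id, parameter)

        def manager(port=5005):
            srv = socket.create_server(("127.0.0.1", port))
            while not jobs.empty():
                conn, _ = srv.accept()      # hand one run to each connecting worker
                with conn:
                    run_id, k = jobs.get()
                    conn.sendall(f"{run_id} {k}\n".encode())
            srv.close()

        def worker(port=5005):
            with socket.create_connection(("127.0.0.1", port)) as conn:
                run_id, k = conn.makefile().readline().split()
                print(f"executing model run {run_id} with parameter k={k}")

        threading.Thread(target=manager).start()
        time.sleep(0.2)                     # give the manager time to start listening
        for _ in range(3):
            worker()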

  1. Active site modeling in copper azurin molecular dynamics simulations

    NARCIS (Netherlands)

    Rizzuti, B; Swart, M; Sportelli, L; Guzzi, R

    Active site modeling in molecular dynamics simulations is investigated for the reduced state of copper azurin. Five simulation runs (5 ns each) were performed at room temperature to study the consequences of a mixed electrostatic/constrained modeling for the coordination between the metal and the

  2. Results from pion calibration runs for the H1 liquid argon calorimeter and comparisons with simulations

    International Nuclear Information System (INIS)

    Andrieu, B.; Ban, J.; Barrelet, E.; Bergstein, H.; Bernardi, G.; Besancon, M.; Binder, E.; Blume, H.; Borras, K.; Boudry, V.; Brasse, F.; Braunschweig, W.; Brisson, V.; Campbell, A.J.; Carli, T.; Colombo, M.; Coutures, C.; Cozzika, G.; David, M.; Delcourt, B.; DelBuono, L.; Devel, M.; Dingus, P.; Drescher, A.; Duboc, J.; Duenger, O.; Ebbinghaus, R.; Egli, S.; Ellis, N.N.; Feltesse, J.; Feng, Y.; Ferrarotto, F.; Flauger, W.; Flieser, M.; Gamerdinger, K.; Gayler, J.; Godfrey, L.; Goerlich, L.; Goldberg, M.; Graessler, R.; Greenshaw, T.; Greif, H.; Haguenauer, M.; Hajduk, L.; Hamon, O.; Hartz, P.; Haustein, V.; Haydar, R.; Hildesheim, W.; Huot, N.; Jabiol, M.A.; Jacholkowska, A.; Jaffre, M.; Jung, H.; Just, F.; Kiesling, C.; Kirchhoff, T.; Kole, F.; Korbel, V.; Korn, M.; Krasny, W.; Kubenka, J.P.; Kuester, H.; Kurzhoefer, J.; Kuznik, B.; Lander, R.; Laporte, J.F.; Lenhardt, U.; Loch, P.; Lueers, D.; Marks, J.; Martyniak, J.; Merz, T.; Naroska, B.; Nau, A.; Nguyen, H.K.; Niebergall, F.; Oberlack, H.; Obrock, U.; Ould-Saada, F.; Pascaud, C.; Pyo, H.B.; Rauschnabel, K.; Ribarics, P.; Rietz, M.; Royon, C.; Rusinov, V.; Sahlmann, N.; Sanchez, E.; Schacht, P.; Schleper, P.; Schlippe, W. von; Schmidt, C.; Schmidt, D.; Shekelyan, V.; Shooshtari, H.; Sirois, Y.; Staroba, P.; Steenbock, M.; Steiner, H.; Stella, B.; Straumann, U.; Turnau, J.; Tutas, J.; Urban, L.; Vallee, C.; Vecko, M.; Verrecchia, P.; Villet, G.; Vogel, E.; Wagener, A.; Wegener, D.; Wegner, A.; Wellisch, H.P.; Yiou, T.P.; Zacek, J.; Zeitnitz, Ch.; Zomer, F.

    1993-01-01

    We present results on calibration runs performed with pions at CERN SPS for different modules of the H1 liquid argon calorimeter which consists of an electromagnetic section with lead absorbers and a hadronic section with steel absorbers. The data cover an energy range from 3.7 to 205 GeV. Detailed comparisons of the data and simulation with GHEISHA 8 in the framework of GEANT 3.14 are presented. The measured pion induced shower profiles are well described by the simulation. The total signal of pions on an energy scale determined from electron measurements is reproduced to better than 3% in various module configurations. After application of weighting functions, determined from Monte Carlo data and needed to achieve compensation, the reconstructed measured energies agree with simulation to about 3%. The energies of hadronic showers are reconstructed with a resolution of about 50%/√E + 2%. This result is achieved by inclusion of signals from an iron streamer tube tail catcher behind the liquid argon stacks. (orig.)

  3. DWPF FLOWSHEET STUDIES WITH SIMULANTS TO DETERMINE MCU SOLVENT BUILD-UP IN CONTINUOUS RUNS

    Energy Technology Data Exchange (ETDEWEB)

    Lambert, D.; Williams, F.; Crump, S.; Eibling, R.; White, T.; Best, D.

    2006-05-25

    quantify the organic distribution in the CPC vessels. The earlier rounds of testing used a Sludge Batch 4 (SB4) simulant since it was anticipated that both of these facilities would begin salt processing during SB4 processing. The same sludge simulant recipe was used in this round of MCU testing to minimize the number of changes between the three phases of testing so that a better comparison could be made. The MCU stream simulant was fabricated to perform the testing. The MCU stream represented the 'Maximum Volume' case from the material balances provided by Campbell. ARP addition was not performed during this set of runs since the ARP evaluation had been completed in earlier runs. The MCU stream was added at boiling during the normal reflux phase of the SRAT cycle. SRAT cycle completion corresponded to the end of MCU stream addition. A total of ten 4-liter SRAT runs were performed to meet the objectives of the testing. The first series of five tests evaluated the organic partitioning and mass balance for the addition of 50 mg/kg solvent. The second series of five tests evaluated the organic partitioning and mass balance for the addition of 125 mg/kg solvent. A solvent concentration of 50 mg/kg is close to the nominal concentration anticipated in the effluent from the Salt Waste Processing Facility (SWPF). The organic solvent used in the testing was fabricated by the Chemical Science & Technology section. BOBCalixC6 was not added to this solvent due to the high cost and limited availability. All runs targeted 150% acid stoichiometry and 1% Hg in the sludge slurry dried solids.

  4. Desktop Modeling and Simulation: Parsimonious, yet Effective Discrete-Event Simulation Analysis

    Science.gov (United States)

    Bradley, James R.

    2012-01-01

    This paper evaluates how quickly students can be trained to construct useful discrete-event simulation models using Excel. The typical supply chain used by many large national retailers is described, and an Excel-based simulation model is constructed of it. The set of programming and simulation skills required for development of that model are then determined; we conclude that six hours of training are required to teach the skills to MBA students. The simulation presented here contains all fundamental functionality of a simulation model, and so our result holds for any discrete-event simulation model. We argue therefore that industry workers with the same technical skill set as students having completed one year in an MBA program can be quickly trained to construct simulation models. This result gives credence to the efficacy of Desktop Modeling and Simulation, whereby simulation analyses can be quickly developed, run, and analyzed with widely available software, namely Excel.

  5. Simulation - modeling - experiment

    International Nuclear Information System (INIS)

    2004-01-01

    After two workshops held in 2001 on the same topics, and in order to make a status of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop and dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advance status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); methods of disturbances and sensitivity analysis of nuclear data in reactor physics, application to VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F

  6. Short-run and Current Analysis Model in Statistics

    Directory of Open Access Journals (Sweden)

    Constantin Anghelache

    2006-01-01

    Full Text Available Using short-run statistical indicators is a compulsory requirement implied in current analysis. Therefore, a system of EUROSTAT short-run indicators has been set up in this respect, being recommended for use by the member countries. On the basis of these indicators, regular, usually monthly, analyses are carried out in respect of: the determination of production dynamics; the evaluation of the short-run investment volume; the development of the turnover; the wage evolution; employment; the price indexes and the consumer price index (inflation); the volume of exports and imports, the extent to which imports are covered by exports, and the trade balance. The EUROSTAT system of indicators of conjuncture is conceived as an open system, so that it can be extended or restricted at any moment, allowing indicators to be amended or even removed, depending on the domestic users' requirements as well as on the specific requirements of harmonization and integration. For short-run analysis, there is also the World Bank system of indicators of conjuncture, which relies on the data sources offered by the World Bank, the World Institute for Resources or other international organizations' statistics. The system comprises indicators of social and economic development and focuses on indicators for the following three fields: human resources, environment and economic performances. At the end of the paper, there is a case study on the situation of Romania, for which we used all these indicators.

  7. Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond

    International Nuclear Information System (INIS)

    Bonacorsi, D; Neri, M; Boccali, T; Giordano, D; Girone, M; Magini, N; Kuznetsov, V; Wildish, T

    2015-01-01

    During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collisions data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the WorldWide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulation of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This is of great importance, given that the scale of the computing problem will increase far faster than the resources available to the experiments, for Run-2 and beyond. Studying data-access patterns involves the validation of the quality of the monitoring data collected on the “popularity” of each dataset, the analysis of the frequency and pattern of accesses to different datasets by analysis end-users, the exploration of different views of the popularity data (by physics activity, by region, by data type), the study of the evolution of Run-1 data exploitation over time, the evaluation of the impact of different data placement and distribution choices on the available network and storage resources and their impact on the computing operations. This work presents some insights from studies on the popularity data from the CMS experiment. We present the properties of a range of physics analysis activities as seen by the data popularity, and make recommendations for

  8. 10 km running performance predicted by a multiple linear regression model with allometrically adjusted variables.

    Science.gov (United States)

    Abad, Cesar C C; Barros, Ronaldo V; Bertuzzi, Romulo; Gagliardi, João F L; Lima-Silva, Adriano E; Lambert, Mike I; Pires, Flavio O

    2016-06-01

    The aim of this study was to verify the power of VO2max, peak treadmill running velocity (PTV), and running economy (RE), unadjusted or allometrically adjusted, in predicting 10 km running performance. Eighteen male endurance runners performed: 1) an incremental test to exhaustion to determine VO2max and PTV; 2) a constant submaximal run at 12 km·h-1 on an outdoor track for RE determination; and 3) a 10 km running race. Unadjusted (VO2max, PTV and RE) and adjusted variables (VO2max^0.72, PTV^0.72 and RE^0.60) were investigated through independent multiple regression models to predict 10 km running race time. There were no significant correlations between 10 km running time and either the adjusted or unadjusted VO2max. Significant correlations (p < 0.05) were found for the remaining variables, with r > 0.84 and power > 0.88. The allometrically adjusted predictive model was composed of PTV^0.72 and RE^0.60 and explained 83% of the variance in 10 km running time with a standard error of the estimate (SEE) of 1.5 min. The unadjusted model, composed of a single PTV, accounted for 72% of the variance in 10 km running time (SEE of 1.9 min). Both regression models provided powerful estimates of 10 km running time; however, the unadjusted PTV may provide an uncomplicated estimation.
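
    The model structure described, an ordinary least-squares fit on allometrically scaled predictors, can be reproduced in a few lines. A Python sketch with made-up runner data (the exponents 0.72 and 0.60 are taken from the abstract; all numbers are illustrative):

        import numpy as np

        # Hypothetical runner data (illustrative): PTV (km/h), RE and
        # 10 km race time (min)
        ptv = np.array([18.5, 19.2, 20.1, 21.0, 17.8, 19.8])
        re = np.array([205.0, 198.0, 190.0, 185.0, 210.0, 195.0])
        t10k = np.array([41.2, 39.8, 37.5, 35.9, 43.0, 38.6])

        # Allometric adjustment: raise predictors to the reported exponents
        X = np.column_stack([np.ones_like(ptv), ptv ** 0.72, re ** 0.60])
        coef, *_ = np.linalg.lstsq(X, t10k, rcond=None)
        pred = X @ coef
        r2 = 1 - np.sum((t10k - pred) ** 2) / np.sum((t10k - t10k.mean()) ** 2)
        print("coefficients:", coef, " R^2 =", round(r2, 3))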

  9. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    leaving students. It is a probabilistic model. In the next part of this article, two more models - 'input/output model' used for production systems or economic studies and a. 'discrete event simulation model' are introduced. Aircraft Performance Model.

  10. File Specification for the 7-km GEOS-5 Nature Run, Ganymed Release Non-Hydrostatic 7-km Global Mesoscale Simulation

    Science.gov (United States)

    da Silva, Arlindo M.; Putman, William; Nattala, J.

    2014-01-01

    details about variables listed in this file specification can be found in a separate document, the GEOS-5 File Specification Variable Definition Glossary. Documentation about the current access methods for products described in this document can be found on the GEOS-5 Nature Run portal: http://gmao.gsfc.nasa.gov/projects/G5NR. Information on the scientific quality of this simulation will appear in a forthcoming NASA Technical Report Series on Global Modeling and Data Assimilation to be available from http://gmao.gsfc.nasa.gov/pubs/tm/.

  11. Systems-level computational modeling demonstrates fuel selection switching in high capacity running and low capacity running rats

    Science.gov (United States)

    Qi, Nathan R.

    2018-01-01

    High capacity and low capacity running rats, HCR and LCR respectively, have been bred to represent two extremes of running endurance and have recently demonstrated disparities in fuel usage during transient aerobic exercise. HCR rats can maintain fatty acid (FA) utilization throughout the course of transient aerobic exercise whereas LCR rats rely predominantly on glucose utilization. We hypothesized that the difference between HCR and LCR fuel utilization could be explained by a difference in mitochondrial density. To test this hypothesis and to investigate mechanisms of fuel selection, we used a constraint-based kinetic analysis of whole-body metabolism to analyze transient exercise data from these rats. Our model analysis used a thermodynamically constrained kinetic framework that accounts for glycolysis, the TCA cycle, and mitochondrial FA transport and oxidation. The model can effectively match the observed relative rates of oxidation of glucose versus FA, as a function of ATP demand. In searching for the minimal differences required to explain metabolic function in HCR versus LCR rats, it was determined that the whole-body metabolic phenotype of LCR, compared to the HCR, could be explained by a ~50% reduction in total mitochondrial activity with an additional 5-fold reduction in mitochondrial FA transport activity. Finally, we postulate that, over sustained periods of exercise, LCR can partly overcome the initial deficit in FA catabolic activity by upregulating FA transport and/or oxidation processes. PMID:29474500

  12. Multiple-step model-experiment matching allows precise definition of dynamical leg parameters in human running.

    Science.gov (United States)

    Ludwig, C; Grimmer, S; Seyfarth, A; Maus, H-M

    2012-09-21

    The spring-loaded inverted pendulum (SLIP) model is a well established model for describing bouncy gaits like human running. The notion of spring-like leg behavior has led many researchers to compute the corresponding parameters, predominantly stiffness, in various experimental setups and in various ways. However, different methods yield different results, making comparison between studies difficult. Further, a model simulation with experimentally obtained leg parameters typically results in comparatively large differences between model and experimental center of mass trajectories. Here, we pursue the opposite approach, which is to calculate model parameters that allow reproduction of an experimental sequence of steps. In addition, to capture energy fluctuations, an extension of the SLIP (ESLIP) is required and presented. The excellent match of the models with the experiment validates the description of human running by the SLIP with the obtained parameters, which we hence call dynamical leg parameters. Copyright © 2012 Elsevier Ltd. All rights reserved.
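
    For readers unfamiliar with the SLIP, its stance phase reduces to a point mass on a massless linear leg spring with the foot fixed at the origin. A minimal Python sketch of that stance dynamics (parameter values illustrative; take-off detection omitted for brevity):

        import numpy as np
        from scipy.integrate import solve_ivp

        # Stance phase of the spring-loaded inverted pendulum: point mass m on a
        # massless linear leg spring (stiffness k, rest length L0), foot at origin.
        m, k, L0, g = 80.0, 20e3, 1.0, 9.81      # illustrative parameter values

        def slip(t, s):
            x, y, vx, vy = s
            L = np.hypot(x, y)                   # current leg length
            F = k * (L0 - L)                     # spring force along the leg axis
            return [vx, vy, F * x / (L * m), F * y / (L * m) - g]

        # Touch-down at a 68 degree leg angle with 5 m/s forward speed
        s0 = [-L0 * np.cos(np.radians(68)), L0 * np.sin(np.radians(68)), 5.0, 0.0]
        sol = solve_ivp(slip, (0.0, 0.3), s0, max_step=1e-3)
        print("peak leg compression:",
              L0 - np.hypot(sol.y[0], sol.y[1]).min(), "m")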

  13. Impaired voluntary wheel running behavior in the unilateral 6-hydroxydopamine rat model of Parkinson's disease.

    Science.gov (United States)

    Pan, Qi; Zhang, Wangming; Wang, Jinyan; Luo, Fei; Chang, Jingyu; Xu, Ruxiang

    2015-02-01

    The aim of this study was to investigate voluntary wheel running behavior in the unilateral 6-hydroxydopamine (6-OHDA) rat model. Male Sprague-Dawley rats were assigned to 2 groups: 6-OHDA group (n=17) and control group (n=8). The unilateral 6-OHDA rat model was induced by injection of 6-OHDA into the unilateral medial forebrain bundle using a stereotaxic instrument. Voluntary wheel running activity was assessed per day in successfully lesioned rats (n=10) and control rats. Each behavioral test lasted an hour. The following parameters were investigated during behavioral tests: the number of running bouts, the distance moved in the wheel, average peak speed in running bouts and average duration from the running start to the peak speed. The number of running bouts and the distance moved in the wheel were significantly decreased in successfully lesioned rats compared with control rats. In addition, average peak speed in running bouts was decreased, and average duration from the running start to the peak speed was increased in lesioned animals, which might indicate motor deficits in these rats. These behavioral changes were still observed 42 days after lesion. Voluntary wheel running behavior is impaired in the unilateral 6-OHDA rat model and may represent a useful tool to quantify motor deficits in this model.

  14. Design and simulation of a control system for a run-off-river power plant; Entwurf und Simulation einer Staustufenregelung

    Energy Technology Data Exchange (ETDEWEB)

    Ott, C.

    2000-07-01

    In run-off-river plants with low discharge and under head control, changes of inflow lead to amplified changes of outflow. In this thesis, a frequency-domain-based design procedure is introduced which allows an inflow-dependent signal to be added to the head controller of conventional combined head- and flow-controllers. This efficiently minimizes the discharge amplification. The nonlinearity of the channel reach is taken into consideration by adapting the settings of the controller to the actual discharge. The development of a time-domain-based program system, taking into account all nonlinearities of a run-off-river plant, is described. Using different test functions, the capability of the improved combined head- and flow-control can be demonstrated. In both the time and the frequency domain it is shown that the quality of control is not influenced to a significant extent by the inevitable inaccuracies in the description of the channel reach and in the measurement of the actual inflow and outflow. (orig.) [Translated from German] The thesis offers a solution to the problem that, at low discharge, head-controlled barrages pass inflow changes on to downstream users in amplified form. As a solution, a frequency-domain-based design procedure is presented with which the customary combined head- and flow-control (OW-Q control) can be extended by an inflow-dependent feedforward signal to the level controller. Together with feedforward of the inflow to the outflow controller, the discharge amplification is thereby considerably reduced. The nonlinearity of the controlled 'reservoir' system is accounted for by adapting the controller parameters to the barrage discharge. Furthermore, the development of a program system for the nonlinear time-domain simulation of a chain of barrages is described. With it, the capability of the improved OW-Q control can be demonstrated for various load cases.

  15. Modeling the Frequency of Cyclists’ Red-Light Running Behavior Using Bayesian PG Model and PLN Model

    Directory of Open Access Journals (Sweden)

    Yao Wu

    2016-01-01

    Full Text Available Red-light running behaviors of bicycles at signalized intersections lead to a large number of traffic conflicts and high collision potentials. The primary objective of this study is to model the cyclists' red-light running frequency within the framework of Bayesian statistics. Data were collected at twenty-five approaches at seventeen signalized intersections. The Poisson-gamma (PG) and Poisson-lognormal (PLN) models were developed and compared. The models were validated using Bayesian p values based on posterior predictive checking indicators. It was found that the two models fit the observed cyclists' red-light running frequency well. Furthermore, the PLN model outperformed the PG model. The model estimation results showed that the amount of cyclists' red-light running is significantly influenced by bicycle flow, conflicting traffic flow, pedestrian signal type, vehicle speed, and e-bike rate. The validation result demonstrated the reliability of the PLN model. The research results can help transportation professionals to predict the expected amount of cyclists' red-light running and develop effective guidelines or policies to reduce the red-light running frequency of bicycles at signalized intersections.
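
    The appeal of the Poisson-gamma form is that it relaxes the Poisson equal-mean-variance constraint. A quick method-of-moments check of overdispersion on hypothetical count data (a sketch only; the paper itself fits hierarchical Bayesian PG and PLN models):

        import numpy as np

        # Hypothetical red-light-running counts per approach (illustrative only).
        counts = np.array([3, 5, 2, 8, 4, 6, 9, 1, 7, 5, 4, 10])

        # Poisson-gamma (negative binomial): Var = mu + alpha * mu^2, so
        # alpha > 0 signals overdispersion (alpha -> 0 recovers the Poisson).
        mu = counts.mean()
        var = counts.var(ddof=1)
        alpha = max((var - mu) / mu**2, 0.0)
        print(f"mean = {mu:.2f}, variance = {var:.2f}, dispersion alpha = {alpha:.3f}")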

  16. Modelling of flexi-coil springs with rubber-metal pads in a locomotive running gear

    Directory of Open Access Journals (Sweden)

    Michálek T.

    2015-06-01

    Full Text Available Nowadays, flexi-coil springs are commonly used in the secondary suspension stage of railway vehicles. The lateral stiffness of these springs is determined by their design parameters (number of coils, height, mean coil diameter, wire diameter, etc.), and it is often desirable to modify this stiffness in such a way that the suspension shows different lateral stiffness in different directions (i.e., longitudinally vs. laterally in the vehicle-related coordinate system). Therefore, these springs are often supplemented with some kind of rubber-metal pads. This paper deals with the modelling of flexi-coil springs supplemented with tilting rubber-metal pads as applied in the running gear of an electric locomotive, as well as with the consequences of this secondary suspension solution for the vehicle's running performance. The analysis is performed by means of multi-body simulations, and the description of the lateral stiffness characteristics of the springs is based on results of experimental measurements of these characteristics performed in the heavy laboratories of the Jan Perner Transport Faculty of the University of Pardubice.

  17. Two-Higgs-doublet model of type II confronted with the LHC run I and run II data

    Science.gov (United States)

    Wang, Lei; Zhang, Feng; Han, Xiao-Fang

    2017-06-01

    We examine the parameter space of the two-Higgs-doublet model of type II after imposing the relevant theoretical and experimental constraints from the precision electroweak data, B-meson decays, and the LHC run I and run II data. We find that the searches for Higgs bosons via the τ+τ−, WW, ZZ, γγ, hh, hZ, HZ, and AZ channels can give strong constraints on the CP-odd Higgs A and heavy CP-even Higgs H, and the parameter space excluded by each channel is respectively carved out in detail assuming that either mA or mH is fixed to 600 or 700 GeV in the scans. The surviving samples are discussed in two different regions. (i) In the standard model-like coupling region of the 125 GeV Higgs, mA is allowed to be as low as 350 GeV, and a strong upper limit is imposed on tan β. mH is allowed to be as low as 200 GeV for appropriate values of tan β, sin(β − α), and mA, but is required to be larger than 300 GeV for mA = 700 GeV. (ii) In the wrong-sign Yukawa coupling region of the 125 GeV Higgs, the bb̄ → A/H → τ+τ− channel can impose upper limits on tan β and sin(β − α), and the A → hZ channel can give lower limits on tan β and sin(β − α). mA and mH are allowed to be as low as 60 and 200 GeV, respectively, but 320 GeV

  18. Modelling Energy Loss Mechanisms and a Determination of the Electron Energy Scale for the CDF Run II W Mass Measurement

    Energy Technology Data Exchange (ETDEWEB)

    Riddick, Thomas [Univ. College London, Bloomsbury (United Kingdom)

    2012-06-15

    The calibration of the calorimeter energy scale is vital to measuring the mass of the W boson at CDF Run II. For the second measurement of the W boson mass at CDF Run II, two independent simulations were developed. This thesis presents a detailed description of the modification and validation of Bremsstrahlung and pair production modelling in one of these simulations, UCL Fast Simulation, comparing to both geant4 and real data where appropriate. The total systematic uncertainty on the measurement of the W boson mass in the W → eνe channel from residual inaccuracies in Bremsstrahlung modelling is estimated as 6.2 ± 3.2 MeV/c2, and the total systematic uncertainty from residual inaccuracies in pair production modelling is estimated as 2.8 ± 2.7 MeV/c2. Two independent methods are used to calibrate the calorimeter energy scale in UCL Fast Simulation; the results of these two methods are compared to produce a measurement of the Z boson mass as a cross-check on the accuracy of the simulation.

  19. Modelling and Simulation: An Overview

    OpenAIRE

    McAleer, Michael; Chan, Felix; Oxley, Les

    2013-01-01

    This discussion paper resulted in a publication in 'Selected Papers of the MSSANZ 19th Biennial Conference on Modelling and Simulation Mathematics and Computers in Simulation', 2013, pp. viii. The papers in this special issue of Mathematics and Computers in Simulation cover the following topics: improving judgmental adjustment of model-based forecasts, whether forecast updates are progressive, on a constrained mixture vector autoregressive model, whether all estimators are born equal: the emp...

  20. Notes on modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Redondo, Antonio [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-10

    These notes present a high-level overview of how modeling and simulation are carried out by practitioners. The discussion is of a general nature; no specific techniques are examined but the activities associated with all modeling and simulation approaches are briefly addressed. There is also a discussion of validation and verification and, at the end, a section on why modeling and simulation are useful.

  1. Model based control for run-of-river system. Part 1: Model implementation and tuning

    Directory of Open Access Journals (Sweden)

    Liubomyr Vytvytskyi

    2015-10-01

    Full Text Available Optimal operation and control of a run-of-river hydro power plant depend on good knowledge of the elements of the plant in the form of models. River reaches are often considered shallow channels with free surfaces. A typical model for such reaches is the Saint Venant model, a 1D distributed model based on the mass and momentum balances. This combination of free surface and momentum balance makes the problem numerically challenging to solve. The finite volume method with staggered grid was compared with the Kurganov-Petrova central upwind scheme and used to illustrate the dynamics of the river upstream from the Grønvollfoss run-of-river power plant in Telemark, Norway, operated by Skagerak Energi AS. In an experiment on the Grønvollfoss run-of-river power plant, a step was injected in the upstream inlet flow at Årlifoss, and the resulting change in level in front of the dam at the Grønvollfoss plant was logged. The results from the theoretical Saint Venant model were then compared to the experimental results. Because of uncertainties in the geometry of the river reach (river bed slope, etc.), the slope and length of the varying-slope parts were tuned manually to improve the fit. Then, the friction factor, river width and height drop of the river were tuned by minimizing a least squares criterion. The result is an improved model (numerically solved, tuned to experiments) that can be further used for control synthesis and analysis.
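
    To see why a staggered grid is a natural discretization here, consider the linearized Saint Venant (shallow-water) equations with water levels at cell centres and discharges at cell faces. A toy Python sketch (no friction, bed slope, or nonlinearity, all of which the paper's model includes):

        import numpy as np

        # Linearized 1D shallow-water step on a staggered grid: levels h at cell
        # centres, discharges q (per unit width) at cell faces.
        g, H = 9.81, 2.0                  # gravity, mean depth (m)
        dx, dt, n = 100.0, 2.0, 200       # grid spacing (m), time step (s), cells
        h = np.zeros(n); h[:10] = 0.5     # initial upstream level perturbation (m)
        q = np.zeros(n + 1)               # wall boundaries: q[0] = q[-1] = 0

        for _ in range(500):
            q[1:-1] -= dt * g * H * (h[1:] - h[:-1]) / dx   # momentum balance
            h -= dt * (q[1:] - q[:-1]) / dx                 # mass balance

        print("leading edge near cell", np.max(np.nonzero(h > 0.01)))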

  2. A parallel computational model for GATE simulations.

    Science.gov (United States)

    Rannou, F R; Vega-Acevedo, N; El Bitar, Z

    2013-12-01

    GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently for Positron Emission Tomography (PET) experiments, because it requires a centralized coincidence processing and large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing but maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes that can run the same executable in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model in terms of number of tallies is equivalent (but not equal) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  3. Modeling, simulation and optimization of bipedal walking

    CERN Document Server

    Berns, Karsten

    2013-01-01

    The model-based investigation of motions of anthropomorphic systems is an important interdisciplinary research topic involving specialists from many fields such as Robotics, Biomechanics, Physiology, Orthopedics, Psychology, Neurosciences, Sports, Computer Graphics and Applied Mathematics. This book presents a study of basic locomotion forms such as walking and running, which are of particular interest due to the high demands on dynamic coordination, actuator efficiency and balance control. Mathematical models and numerical simulation and optimization techniques are explained, in combination with experimental data, which can help to better understand the basic underlying mechanisms of these motions and to improve them. Example topics treated in this book are: modeling techniques for anthropomorphic bipedal walking systems; optimized walking motions for different objective functions; identification of objective functions from measurements; simulation and optimization approaches for humanoid robots; biologically inspired con...

  4. Flood Peak Estimation Using Rainfall Run off Models | Matondo ...

    African Journals Online (AJOL)

    The design of hydraulic structures such as road culverts, road bridges and dam spillways requires the determination of the design flood peak. Two approaches are available for determining the design flood peak: flood frequency analysis and rainfall-runoff models. Flood frequency analysis requires a ...

  5. A long run intertemporal model of the oil market with uncertainty and strategic interaction

    International Nuclear Information System (INIS)

    Lensberg, T.; Rasmussen, H.

    1991-06-01

    This paper describes a model of the long run price uncertainty in the oil market. The main feature of the model is that the uncertainty about OPEC's price strategy is assumed to be generated not by irrational behavior on the part of OPEC, but by uncertainty about OPEC's size and time preference. The control of OPEC's pricing decision is assumed to shift among a set of OPEC-types over time according to a stochastic process, with each type implementing the price strategy which best fits the interests of its supporters. The model is fully dynamic on the supply side in the sense that all oil producers are assumed to understand the working of OPEC and the oil market; in particular, the non-OPEC producers base their investment decisions on rational price expectations. On the demand side, we assume that market insight is on average less developed, and model it by means of a long run demand curve on current prices and a simple lag structure. The long run demand curve for crude oil is generated by a fairly detailed static long-run equilibrium model of the product markets. Preliminary experience with the model indicates that prices are likely to stay below 20 dollars in the foreseeable future, but that prices around 30 dollars may occur if the present long run time perspective of OPEC is abandoned in favor of a more short run one. 26 refs., 4 figs., 7 tabs

  6. Implementation of the ATLAS Run 2 event data model

    CERN Document Server

    Buckley, Andrew; Elsing, Markus; Gillberg, Dag Ingemar; Koeneke, Karsten; Krasznahorkay, Attila; Moyse, Edward; Nowak, Marcin; Snyder, Scott; van Gemmeren, Peter

    2015-01-01

    During the 2013-2014 shutdown of the Large Hadron Collider, ATLAS switched to a new event data model for analysis, called the xAOD. A key feature of this model is the separation of the object data from the objects themselves (the `auxiliary store'). Rather than being stored as member variables of the analysis classes, all object data are stored separately, as vectors of simple values. Thus, the data are stored in a `structure of arrays' format, while the user can still access it as an `array of structures'. This organization allows for on-demand partial reading of objects, the selective removal of object properties, and the addition of arbitrary user-defined properties in a uniform manner. It also improves performance by increasing the locality of memory references in typical analysis code. The resulting data structures can be written to ROOT files with data properties represented as simple ROOT tree branches. This talk will focus on the design and implementation of the auxiliary store and its interaction with RO...

  7. Implementation of the ATLAS Run 2 event data model

    Science.gov (United States)

    Buckley, A.; Eifert, T.; Elsing, M.; Gillberg, D.; Koeneke, K.; Krasznahorkay, A.; Moyse, E.; Nowak, M.; Snyder, S.; van Gemmeren, P.

    2015-12-01

    During the 2013-2014 shutdown of the Large Hadron Collider, ATLAS switched to a new event data model for analysis, called the xAOD. A key feature of this model is the separation of the object data from the objects themselves (the ‘auxiliary store’). Rather than being stored as member variables of the analysis classes, all object data are stored separately, as vectors of simple values. Thus, the data are stored in a ‘structure of arrays’ format, while the user still can access it as an ‘array of structures’. This organization allows for on-demand partial reading of objects, the selective removal of object properties, and the addition of arbitrary user-defined properties in a uniform manner. It also improves performance by increasing the locality of memory references in typical analysis code. The resulting data structures can be written to ROOT files with data properties represented as simple ROOT tree branches. This paper focuses on the design and implementation of the auxiliary store and its interaction with ROOT.
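
    The 'structure of arrays' versus 'array of structures' idea is easy to demonstrate outside ROOT. A Python sketch of the concept (names are illustrative, not the ATLAS xAOD API):

        import numpy as np

        # 'Structure of arrays': each property lives in its own array (branch);
        # a thin view class presents the familiar 'array of structures' access.
        aux_store = {
            "pt":  np.array([48.2, 33.1, 25.7]),
            "eta": np.array([0.51, -1.20, 2.02]),
        }

        class ParticleView:
            def __init__(self, store, i):
                self._store, self._i = store, i
            def __getattr__(self, name):          # reads one property on demand
                return self._store[name][self._i]

        particles = [ParticleView(aux_store, i) for i in range(3)]
        print(particles[1].pt, particles[1].eta)  # array-of-structures style access

        # A user-defined property is just one more array in the store:
        aux_store["isSignal"] = aux_store["pt"] > 30.0
        print(particles[2].isSignal)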

  8. Simulation Model of a Transient

    DEFF Research Database (Denmark)

    Jauch, Clemens; Sørensen, Poul; Bak-Jensen, Birgitte

    2005-01-01

    This paper describes the simulation model of a controller that enables an active-stall wind turbine to ride through transient faults. The simulated wind turbine is connected to a simple model of a power system. Certain fault scenarios are specified and the turbine shall be able to sustain operati...

  9. Cognitive models embedded in system simulation models

    International Nuclear Information System (INIS)

    Siegel, A.I.; Wolf, J.J.

    1982-01-01

    If we are to discuss and consider cognitive models, we must first come to grips with two questions: (1) What is cognition; (2) What is a model. Presumably, the answers to these questions can provide a basis for defining a cognitive model. Accordingly, this paper first places these two questions into perspective. Then, cognitive models are set within the context of computer simulation models and a number of computer simulations of cognitive processes are described. Finally, pervasive issues are discussed vis-a-vis cognitive modeling in the computer simulation context

  10. mr. A C++ library for the matching and running of the Standard Model parameters

    International Nuclear Information System (INIS)

    Kniehl, Bernd A.; Veretin, Oleg L.; Pikelner, Andrey F.; Joint Institute for Nuclear Research, Dubna

    2016-01-01

    We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS-bar renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library.
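
    As a one-loop, single-parameter analogue of such running, the following Python sketch evolves the strong coupling from the Z pole to 1 TeV with the standard one-loop beta function (mr itself works at three loops across all Standard Model parameters):

        import numpy as np
        from scipy.integrate import solve_ivp

        # One-loop QCD running: d(alpha_s)/d(ln mu^2) = -b0 * alpha_s^2,
        # with b0 = (33 - 2*nf) / (12*pi) for nf active quark flavours.
        nf = 5
        b0 = (33 - 2 * nf) / (12 * np.pi)

        def rge(log_mu2, alpha):
            return -b0 * alpha**2

        mz, mu = 91.1876, 1000.0                 # GeV: Z pole and target scale
        sol = solve_ivp(rge, (np.log(mz**2), np.log(mu**2)), [0.118], rtol=1e-9)
        print(f"alpha_s(1 TeV) ~ {sol.y[0, -1]:.4f}")   # about 0.088 at one loop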

  11. Simulation - modeling - experiment; Simulation - modelisation - experience

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    After two workshops held in 2001 on the same topics, and in order to make a status of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop and dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advance status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); methods of disturbances and sensitivity analysis of nuclear data in reactor physics, application to VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F

  12. Influence of running velocity on vertical, leg and joint stiffness : modelling and recommendations for future research.

    Science.gov (United States)

    Brughelli, Matt; Cronin, John

    2008-01-01

    Human running can be modelled as either a spring-mass model or multiple springs in series. A force is required to stretch or compress the spring, and thus stiffness, the variable of interest in this paper, can be calculated from the ratio of this force to the change in spring length. Given the link between force and length change, muscle stiffness and mechanical stiffness have been areas of interest to researchers, clinicians, and strength and conditioning practitioners for many years. This review focuses on mechanical stiffness, and in particular vertical, leg and joint stiffness, since these are the only stiffness types that have been directly calculated during human running. It has been established that as running velocity increases from slow-to-moderate values, leg stiffness remains constant while both vertical stiffness and joint stiffness increase. However, no studies have calculated vertical, leg or joint stiffness over a range from slow-to-moderate to maximal velocities in an athletic population. Therefore, the effects of faster running velocities on stiffness are relatively unexplored. Furthermore, no experimental research has examined the effects of training on vertical, leg or joint stiffness and the subsequent effects on running performance. Various methods of training (Olympic style weightlifting, heavy resistance training, plyometrics, eccentric strength training) have been shown to be effective at improving running performance. However, the effects of these training methods on vertical, leg and joint stiffness are unknown. As a result, the true importance of stiffness to running performance remains unexplored, and the best practice for changing stiffness to optimize running performance is speculative at best. It is our hope that a better understanding of stiffness, and the influence of running speed on stiffness, will lead to greater interest and an increase in experimental research in this area.
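
    Vertical and leg stiffness are commonly estimated from simple gait timing via the sine-wave method of Morin et al. (2005), which models the vertical ground reaction force during stance as a half sine. A Python sketch with illustrative values:

        import numpy as np

        # Sine-wave method (after Morin et al., 2005): estimate vertical and leg
        # stiffness from contact time tc, flight time tf, mass m, speed v and
        # leg length L0. All values below are illustrative.
        m, v, tc, tf, L0, g = 70.0, 4.5, 0.24, 0.11, 0.95, 9.81

        F_max = m * g * (np.pi / 2) * (tf / tc + 1)            # peak vertical GRF
        dz = F_max * tc**2 / (m * np.pi**2) - g * tc**2 / 8    # CoM vertical drop
        dL = L0 - np.sqrt(L0**2 - (v * tc / 2) ** 2) + dz      # leg compression

        print(f"k_vert = {F_max / dz / 1000:.1f} kN/m, "
              f"k_leg = {F_max / dL / 1000:.1f} kN/m")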

  13. Towards a numerical run-out model for quick-clay slides

    Science.gov (United States)

    Issler, Dieter; L'Heureux, Jean-Sébastien; Cepeda, José M.; Luna, Byron Quan; Gebreslassie, Tesfahunegn A.

    2015-04-01

    quasi-three-dimensional codes with a choice of bed-friction laws. The findings of the simulations point strongly towards the need for a different modeling approach that incorporates the essential physical features of quick-clay slides. The major requirement is a realistic description of remolding. A two-layer model is needed to describe the non-sensitive topsoil that is often passively advected by the slide. In many cases, the topography is rather complex, so that 3D or quasi-3D (depth-averaged) models are required for realistic modeling of flow heights and velocities. Finally, since many Norwegian quick-clay slides run out into a fjord (and may generate a tsunami), it is also desirable to explicitly account for buoyancy and hydrodynamic drag.

  14. RMCgui: a new interface for the workflow associated with running Reverse Monte Carlo simulations

    International Nuclear Information System (INIS)

    Dove, Martin T; Rigg, Gary

    2013-01-01

    The Reverse Monte Carlo method enables construction and refinement of large atomic models of materials that are tuned to give best agreement with experimental data such as neutron and x-ray total scattering data, capturing both the average structure and fluctuations. The practical drawback with the current implementations of this approach is the relatively complex workflow required, from setting up the configuration and simulation details through to checking the final outputs and analysing the resultant configurations. In order to make this workflow more accessible to users, we have developed an end-to-end workflow wrapped within a graphical user interface—RMCgui—designed to make the Reverse Monte Carlo more widely accessible. (paper)
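
    The core of any Reverse Monte Carlo refinement is a Metropolis loop on chi-squared against the experimental data rather than on an energy. A toy 1D Python sketch (a uniform pair-distance histogram stands in for the 'data'; this is not the RMCgui workflow itself):

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy 1D Reverse Monte Carlo: move one atom at a time and accept the move
        # with Metropolis probability based on chi^2 against a target pair-distance
        # histogram (flat here, as for uniformly random positions on a ring).
        n, box, sigma = 64, 20.0, 1.0
        pos = rng.uniform(0, box, n)
        bins = np.linspace(0, box / 2, 41)
        target = np.full(len(bins) - 1, n * (n - 1) / 2 / (len(bins) - 1))

        def chi2(p):
            d = np.abs(p[:, None] - p[None, :])[np.triu_indices(n, 1)]
            d = np.minimum(d, box - d)                    # periodic minimum image
            return np.sum((np.histogram(d, bins=bins)[0] - target) ** 2) / sigma**2

        c = chi2(pos)
        for _ in range(2000):
            trial = pos.copy()
            i = rng.integers(n)
            trial[i] = (trial[i] + rng.normal(0, 0.3)) % box
            c_new = chi2(trial)
            if c_new < c or rng.random() < np.exp((c - c_new) / 2):
                pos, c = trial, c_new
        print("final chi^2:", round(c, 1))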

  15. Characterisation of the responsive properties of two running-specific prosthetic models.

    Science.gov (United States)

    Grobler, Lara; Ferreira, Suzanne; Vanwanseele, Benedicte; Terblanche, Elmarie E

    2017-04-01

    The need for information regarding running-specific prosthetic properties has previously been voiced. Such information is necessary to assist in athletes' prostheses selection. This study aimed to describe the characteristics of two commercially available running-specific prostheses. The running-specific prostheses were tested (in an experimental setup) without the external interference of athlete performance variations. Four stiffness categories of each running-specific prosthetic model (Xtend™ and Xtreme™) were tested at seven alignment setups and three drop masses (28, 38 and 48 kg). Results for peak ground reaction force (GRFpeak), contact time (tc), flight time (tf), reactive strength index (RSI) and maximal compression (ΔL) were determined during controlled dropping of running-specific prostheses onto a force platform with different masses attached to the experimental setup. No statistically significant differences were found between the different setups of the running-specific prostheses. Statistically significant differences were found between the two models for all outcome variables (GRFpeak, Xtend > Xtreme; tc, Xtreme > Xtend; tf, Xtreme > Xtend; RSI, Xtend > Xtreme; ΔL, Xtreme > Xtend; p < 0.05), information that can assist prosthetic choice. Physiologically and metabolically, a short sprint event (i.e. 100 m) places different demands on the athlete than a long sprint event (i.e. 400 m), and the RSP should match these performance demands.

  16. Introducing MOZLEAP: an integrated long-run scenario model of the emerging energy sector of Mozambique

    NARCIS (Netherlands)

    Mahumane, G; Mulder, P.

    2016-01-01

    Mozambique has recently begun actively developing its large reserves of coal, natural gas and hydropower. Against this background, we present the first integrated long-run scenario model of the Mozambican energy sector. Our model, which we name MOZLEAP, is calibrated on the basis of recently

  17. Higher-order effects in asset-pricing models with long-run risks

    NARCIS (Netherlands)

    Pohl, W.; Schmedders, K.; Wilms, Ole

    This paper shows that the latest generation of asset pricing models with long-run risk exhibits economically significant nonlinearities, and thus the ubiquitous Campbell--Shiller log-linearization can generate large numerical errors. These errors in turn translate to considerable errors in the model

  18. TREAT Modeling and Simulation Strategy

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, Mark David [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report summarizes a four-phase process used to describe the strategy in developing modeling and simulation software for the Transient Reactor Test Facility. The four phases of this research and development task are identified as (1) full core transient calculations with feedback, (2) experiment modeling, (3) full core plus experiment simulation and (4) quality assurance. The document describes the four phases, the relationship between these research phases, and anticipated needs within each phase.

  19. Overcoming Microsoft Excel's Weaknesses for Crop Model Building and Simulations

    Science.gov (United States)

    Sung, Christopher Teh Boon

    2011-01-01

    Using spreadsheets such as Microsoft Excel for building crop models and running simulations can be beneficial. Excel is easy to use, powerful, and versatile, and it requires the least proficiency in computer programming compared to other programming platforms. Excel, however, has several weaknesses: it does not directly support loops for iterative…

  20. FASTBUS simulation models in VHDL

    International Nuclear Information System (INIS)

    Appelquist, G.

    1992-11-01

    Four hardware simulation models implementing the FASTBUS protocol are described. The models are written in the VHDL hardware description language to obtain portability, i.e. without relations to any specific simulator. They include two complete FASTBUS devices, a full-duplex segment interconnect and ancillary logic for the segment. In addition, master and slave models using a high level interface to describe FASTBUS operations, are presented. With these models different configurations of FASTBUS systems can be evaluated and the FASTBUS transactions of new devices can be verified. (au)

  1. Usefulness of running wheel for detection of congestive heart failure in dilated cardiomyopathy mouse model.

    Directory of Open Access Journals (Sweden)

    Masami Sugihara

    BACKGROUND: Inherited dilated cardiomyopathy (DCM) is a progressive disease that often results in death from congestive heart failure (CHF) or sudden cardiac death (SCD). Mouse models with human DCM mutations are useful to investigate the developmental mechanisms of CHF and SCD, but knowledge of the severity of CHF in live mice is necessary. We aimed to diagnose CHF in live DCM model mice by measuring voluntary exercise using a running wheel and to determine causes of death in these mice. METHODOLOGY/PRINCIPAL FINDINGS: A knock-in mouse with a mutation in cardiac troponin T (ΔK210) (DCM mouse), which results in frequent death with a t(1/2) of 70 to 90 days, was used as a DCM model. Until 2 months of age, average wheel-running activity was similar between wild-type and DCM mice (approximately 7 km/day). At approximately 3 months, some DCM mice demonstrated low running activity (LO: < 5 km/day). In the LO group, the lung weight/body weight ratio was much higher than that in the other groups, and the lungs were infiltrated with hemosiderin-loaded alveolar macrophages. Furthermore, echocardiography showed more severe ventricular dilation and a lower ejection fraction, whereas electrocardiography (ECG) revealed QRS widening. There were two patterns in the time courses of running activity before death in DCM mice: deaths with maintained activity and deaths with decreased activity. CONCLUSIONS/SIGNIFICANCE: Our results indicate that DCM mice with low running activity developed severe CHF and that running wheels are useful for detection of CHF in mouse models. We found that approximately half of ΔK210 DCM mice die suddenly before onset of CHF, whereas others develop CHF, deteriorate within 10 to 20 days, and die.

  2. An investigation into the effects of a simulated effusion in healthy subjects on knee kinematics during jogging and running.

    Science.gov (United States)

    Coughlan, Garrett F; McLoughlin, Rod; McCarthy Persson, Ulrik; Caulfield, Brian M

    2008-10-01

    Knee joint effusion can lead to changes in the activation of surrounding musculature and result in delayed return to baseline daily and sporting activity following injury. However, the effects of an isolated knee joint effusion on control of movement during cyclical activities such as gait are poorly understood. Knee angular displacement and velocity were measured during treadmill jogging (8 km h(-1)) and running (12 km h(-1)) in 12 healthy subjects before and after a simulated knee joint effusion. Two separate pre-effusion recordings were taken to account for test-retest variability in gait measurement techniques. Subjects demonstrated a small yet significant decrease in peak knee flexion following heel strike at 8 km h(-1) as a result of the effusion (P < 0.05). No other significant differences were observed during jogging and running. Our results suggest that it may be prudent to consider measurement variability in future studies of this nature.

  3. Just Running Around: Some Reminiscences of Early Simulation/Gaming in the United Kingdom

    Science.gov (United States)

    van Ments, Morry

    2011-01-01

    The article begins with an abbreviated CV of the author and then recounts the formation of Society for the Advancement of Games and Simulation in Education and Training (SAGSET) and the early days of simulation and gaming in the United Kingdom. Four strands of elements of development are described together with the key events of the 1970s and…

  4. Component and system simulation models for High Flux Isotope Reactor

    International Nuclear Information System (INIS)

    Sozer, A.

    1989-08-01

    Component models for the High Flux Isotope Reactor (HFIR) have been developed. The models are HFIR core, heat exchangers, pressurizer pumps, circulation pumps, letdown valves, primary head tank, generic transport delay (pipes), system pressure, loop pressure-flow balance, and decay heat. The models were written in FORTRAN and can be run on different computers, including IBM PCs, as they do not use any specific simulation languages such as ACSL or CSMP. 14 refs., 13 figs

  5. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Most systems involve parameters and variables, which are random variables due to uncertainties. Probabilistic methods are powerful in modelling such systems. In this second part, we describe probabilistic models and Monte Carlo simulation along with 'classical' matrix methods and differential equations as most real ...

  6. Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2009-01-01

    This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial

  7. Modelling and Simulation: An Overview

    NARCIS (Netherlands)

    M.J. McAleer (Michael); F. Chan (Felix); L. Oxley (Les)

    2013-01-01

    textabstractThe papers in this special issue of Mathematics and Computers in Simulation cover the following topics: improving judgmental adjustment of model-based forecasts, whether forecast updates are progressive, on a constrained mixture vector autoregressive model, whether all estimators are

  8. ASCHFLOW - A dynamic landslide run-out model for medium scale hazard analysis

    Czech Academy of Sciences Publication Activity Database

    Quan Luna, B.; Blahůt, Jan; van Asch, T.W.J.; van Westen, C.J.; Kappes, M.

    2016-01-01

    Vol. 3, 12 December (2016), Article No. 29. E-ISSN 2197-8670 Institutional support: RVO:67985891 Keywords: landslides * run-out models * medium scale hazard analysis * quantitative risk assessment Subject RIV: DE - Earth Magnetism, Geodesy, Geography

  9. Randomly curved runs interrupted by tumbling: A model for bacterial motion

    Science.gov (United States)

    Condat, C. A.; Jäckle, J.; Menchón, S. A.

    2005-08-01

    Small bacteria are strongly buffeted by Brownian forces that make completely straight runs impossible. A model for bacterial motion is formulated in which the effects of fluctuational forces and torques on the run phase are taken into account by using coupled Langevin equations. An integrated description of the motion, including runs and tumbles, is then obtained by the use of convolution and Laplace transforms. The properties of the velocity-velocity correlation function, of the mean displacement, and of the two relevant diffusion coefficients are examined in terms of the bacterial sizes and of the magnitude of the propelling forces. For bacteria smaller than E. coli, the integrated diffusion coefficient crosses over from a jump-dominated to a rotational-diffusion-dominated form.
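
    The essence of the model — runs whose direction diffuses under fluctuating torques, interrupted by random tumbles — is easy to mimic numerically. The sketch below is a simplified two-dimensional analogue of the paper's coupled Langevin description, not the authors' analytical treatment; the speed, rotational diffusivity, and tumble rate are illustrative assumptions.

        import numpy as np

        def run_and_tumble(t_max, dt=0.01, v=20.0, D_rot=0.2, tumble_rate=1.0, rng=np.random):
            """2D run-and-tumble walk: during runs the heading diffuses
            (randomly curved runs); tumbles redraw the heading uniformly."""
            n = int(t_max / dt)
            pos = np.zeros((n, 2))
            theta = rng.uniform(0, 2 * np.pi)
            for k in range(1, n):
                if rng.rand() < tumble_rate * dt:               # tumble event
                    theta = rng.uniform(0, 2 * np.pi)
                theta += np.sqrt(2 * D_rot * dt) * rng.randn()  # rotational Brownian noise
                pos[k] = pos[k - 1] + v * dt * np.array([np.cos(theta), np.sin(theta)])
            return pos

    The long-time diffusion coefficient can then be estimated from the mean-squared displacement of many such trajectories, mirroring the quantities the paper obtains analytically.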

  10. Biomechanical modeling and sensitivity analysis of bipedal running ability. II. Extinct taxa.

    Science.gov (United States)

    Hutchinson, John R

    2004-10-01

    Using an inverse dynamics biomechanical analysis that was previously validated for extant bipeds, I calculated the minimum amount of actively contracting hindlimb extensor muscle that would have been needed for rapid bipedal running in several extinct dinosaur taxa. I analyzed models of nine theropod dinosaurs (including birds) covering over five orders of magnitude in size. My results uphold previous findings that large theropods such as Tyrannosaurus could not run very quickly, whereas smaller theropods (including some extinct birds) were adept runners. Furthermore, my results strengthen the contention that many nonavian theropods, especially larger individuals, used fairly upright limb orientations, which would have reduced required muscular force, and hence muscle mass. Additional sensitivity analysis of muscle fascicle lengths, moment arms, and limb orientation supports these conclusions and points out directions for future research on the musculoskeletal limits on running ability. Although ankle extensor muscle support is shown to have been important for all taxa, the ability of hip extensor muscles to support the body appears to be a crucial limit for running capacity in larger taxa. I discuss what speeds were possible for different theropod dinosaurs, and how running ability evolved in an inverse relationship to body size in archosaurs.

  11. Modeling driver stop/run behavior at the onset of a yellow indication considering driver run tendency and roadway surface conditions.

    Science.gov (United States)

    Elhenawy, Mohammed; Jahangiri, Arash; Rakha, Hesham A; El-Shawarby, Ihab

    2015-10-01

    The ability to model driver stop/run behavior at signalized intersections considering the roadway surface condition is critical in the design of advanced driver assistance systems. Such systems can reduce intersection crashes and fatalities by predicting driver stop/run behavior. The research presented in this paper uses data collected from two controlled field experiments on the Smart Road at the Virginia Tech Transportation Institute (VTTI) to model driver stop/run behavior at the onset of a yellow indication for different roadway surface conditions. The paper offers two contributions. First, it introduces a new predictor related to driver aggressiveness and demonstrates that this measure enhances the modeling of driver stop/run behavior. Second, it applies well-known artificial intelligence techniques, including adaptive boosting (AdaBoost), random forest, and support vector machine (SVM) algorithms, as well as traditional logistic regression, to the data in order to develop a model that can be used by traffic signal controllers to predict driver stop/run decisions in a connected vehicle environment. The research demonstrates that by adding the proposed driver aggressiveness predictor to the model, there is a statistically significant increase in the model accuracy. Moreover, the false alarm rate is reduced, although this reduction is not statistically significant. The study demonstrates that, for the subject data, the SVM machine learning algorithm performs best, although its advantage holds for classification accuracy only.
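
    As a rough illustration of the modeling task (not the VTTI data or code), the sketch below fits a logistic regression and an SVM to synthetic stop/run decisions generated from time-to-intersection, approach speed, and a hypothetical aggressiveness score; all feature names and coefficients are assumptions.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 1000
        tti = rng.uniform(1.0, 6.0, n)      # time to intersection at yellow onset (s)
        speed = rng.uniform(10, 25, n)      # approach speed (m/s)
        aggr = rng.normal(0, 1, n)          # hypothetical driver run-tendency score
        logit = 2.5 - 1.2 * tti + 0.08 * speed + 0.9 * aggr
        run = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        X = np.column_stack([tti, speed, aggr])
        X_tr, X_te, y_tr, y_te = train_test_split(X, run, random_state=0)
        for clf in (LogisticRegression(max_iter=1000), SVC(kernel="rbf")):
            print(type(clf).__name__, clf.fit(X_tr, y_tr).score(X_te, y_te))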

  12. Vehicle dynamics modeling and simulation

    CERN Document Server

    Schramm, Dieter; Bardini, Roberto

    2014-01-01

    The authors examine in detail the fundamentals and mathematical descriptions of the dynamics of automobiles. In this context different levels of complexity will be presented, starting with basic single-track models up to complex three-dimensional multi-body models. A particular focus is on the process of establishing mathematical models on the basis of real cars and the validation of simulation results. The methods presented are explained in detail by means of selected application scenarios.

  13. Modeling and Simulation: An Overview

    OpenAIRE

    Michael McAleer; Felix Chan; Les Oxley

    2013-01-01

    The papers in this special issue of Mathematics and Computers in Simulation cover the following topics: improving judgmental adjustment of model-based forecasts; whether forecast updates are progressive; a constrained mixture vector autoregressive model; whether all estimators are born equal; the empirical properties of some estimators of long memory; characterising trader manipulation in a limit-order driven market; measuring bias in a term-structure model of commodity prices through the c...

  14. Sludge batch 9 simulant runs using the nitric-glycolic acid flowsheet

    Energy Technology Data Exchange (ETDEWEB)

    Lambert, D. P. [Savannah River Site (SRS), Aiken, SC (United States); Williams, M. S. [Savannah River Site (SRS), Aiken, SC (United States); Brandenburg, C. H. [Savannah River Site (SRS), Aiken, SC (United States); Luther, M. C. [Savannah River Site (SRS), Aiken, SC (United States); Newell, J. D. [Savannah River Site (SRS), Aiken, SC (United States); Woodham, W. H. [Savannah River Site (SRS), Aiken, SC (United States)

    2016-11-01

    Testing was completed to develop a Sludge Batch 9 (SB9) nitric-glycolic acid chemical process flowsheet for the Defense Waste Processing Facility's (DWPF) Chemical Process Cell (CPC). CPC simulations were completed using SB9 sludge simulant, Strip Effluent Feed Tank (SEFT) simulant, and Precipitate Reactor Feed Tank (PRFT) simulant. Ten sludge-only Sludge Receipt and Adjustment Tank (SRAT) cycles, four SRAT/Slurry Mix Evaporator (SME) cycles, and one SRAT/SME cycle with actual SB9 sludge were completed. As has been demonstrated in over 100 simulations, the replacement of formic acid with glycolic acid virtually eliminates the CPC's largest flammability hazards, hydrogen and ammonia. Recommended processing conditions are summarized in Section 3.5.1. Testing demonstrated that the interim chemistry and Reduction/Oxidation (REDOX) equations are sufficient to predict the composition of DWPF SRAT product and SME product. Additional reports will finalize the chemistry and REDOX equations. Additional testing developed an antifoam strategy to minimize the hexamethyldisiloxane (HMDSO) peak at boiling, while controlling foam based on testing with simulant and actual waste. Implementation of the nitric-glycolic acid flowsheet in DWPF is recommended. This flowsheet not only eliminates the hydrogen and ammonia hazards but will lead to shorter processing times, higher elemental mercury recovery, and more concentrated SRAT and SME products. The steady pH profile is expected to provide flexibility in processing the high volume of strip effluent expected once the Salt Waste Processing Facility starts up.

  15. THE HORIZON RUN N-BODY SIMULATION: BARYON ACOUSTIC OSCILLATIONS AND TOPOLOGY OF LARGE-SCALE STRUCTURE OF THE UNIVERSE

    International Nuclear Information System (INIS)

    Kim, Juhan; Park, Changbom; Gott, J. Richard; Dubinski, John

    2009-01-01

    In support of the new Sloan III survey, which will measure the baryon oscillation scale using the luminous red galaxies (LRGs), we have run the largest N-body simulation to date using 4120³ = 69.9 billion particles, covering a volume of (6.592 h⁻¹ Gpc)³. This is over 2000 times the volume of the Millennium Run, and corner-to-corner stretches all the way to the horizon of the visible universe. LRGs are selected by finding the most massive gravitationally bound, cold dark matter subhalos, not subject to tidal disruption, a technique that correctly reproduces the three-dimensional topology of the LRGs in the Sloan Survey. We have measured the covariance function, power spectrum, and the three-dimensional topology of the LRG distribution in our simulation and made 32 mock surveys along the past light cone to simulate the Sloan III survey. Our large N-body simulation is used to accurately measure the nonlinear systematic effects such as gravitational evolution, redshift space distortion, past light cone space gradient, and galaxy biasing, and to calibrate the baryon oscillation scale and the genus topology. For example, we predict from our mock surveys that the baryon acoustic oscillation peak scale can be measured with a cosmic-variance-dominated uncertainty of about 5% when the SDSS-III sample is divided into three equal-volume shells, or about 2.6% when a single thicker shell is used. We are making the simulation and mock surveys publicly available.

  16. Stochastic models: theory and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2008-03-01

    Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
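
    A classical instance of sampling a stochastic process, in the spirit of the report's second objective, is the Karhunen-Loève expansion of Brownian motion on [0, 1]; the sketch below assumes this textbook example rather than any algorithm specific to the report.

        import numpy as np

        def brownian_kl_sample(n_terms=200, n_grid=500, rng=np.random):
            """Sample Brownian motion on [0,1] via its Karhunen-Loeve expansion:
            W(t) = sqrt(2) * sum_k Z_k sin((k-1/2) pi t) / ((k-1/2) pi)."""
            t = np.linspace(0, 1, n_grid)
            k = np.arange(1, n_terms + 1) - 0.5
            Z = rng.randn(n_terms)    # independent standard normal coefficients
            return t, np.sqrt(2) * (np.sin(np.outer(t, k * np.pi)) / (k * np.pi)) @ Z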

  17. Statistical Emulation of Climate Model Projections Based on Precomputed GCM Runs

    KAUST Repository

    Castruccio, Stefano

    2014-03-01

    The authors describe a new approach for emulating the output of a fully coupled climate model under arbitrary forcing scenarios that is based on a small set of precomputed runs from the model. Temperature and precipitation are expressed as simple functions of the past trajectory of atmospheric CO2 concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures the nonlinear evolution of spatial patterns of climate anomalies inherent in transient climates. The approach does as well as pattern scaling in all circumstances and substantially better in many; it is not computationally demanding; and, once the statistical model is fit, it produces emulated climate output effectively instantaneously. It may therefore find wide application in climate impacts assessments and other policy analyses requiring rapid climate projections.
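
    A toy version of the emulation idea — expressing temperature as a simple function of the past CO2 trajectory and fitting it to training runs — might look like the sketch below; the exponential memory kernel, its time scale tau, and the log-CO2 regression form are assumptions for illustration, not the authors' statistical model.

        import numpy as np
        from numpy.linalg import lstsq

        def fit_emulator(co2, temp, tau=30.0):
            """Fit T(t) ~ a + b*log(CO2_eff(t)), where CO2_eff is an exponentially
            weighted average of past CO2 - a crude stand-in for trajectory dependence."""
            n = len(co2)
            w = np.exp(-np.arange(n)[::-1] / tau)   # weights decaying into the past
            co2_eff = np.array([np.average(co2[:i + 1], weights=w[-(i + 1):])
                                for i in range(n)])
            A = np.column_stack([np.ones(n), np.log(co2_eff)])
            coef, *_ = lstsq(A, temp, rcond=None)
            return coef, co2_eff

        # Synthetic 'training run': exponential CO2 growth plus noisy warming
        co2 = 280 * 1.005 ** np.arange(200)
        temp = 0.5 + 2.0 * np.log(co2 / 280) + 0.1 * np.random.randn(200)
        print(fit_emulator(co2, temp)[0])   # recovered (a, b)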

  18. Running the running

    OpenAIRE

    Cabass, Giovanni; Di Valentino, Eleonora; Melchiorri, Alessandro; Pajer, Enrico; Silk, Joseph

    2016-01-01

    We use the recent observations of Cosmic Microwave Background temperature and polarization anisotropies provided by the Planck satellite experiment to place constraints on the running $\alpha_\mathrm{s} = \mathrm{d}n_\mathrm{s}/\mathrm{d}\log k$ and the running of the running $\beta_\mathrm{s} = \mathrm{d}\alpha_\mathrm{s}/\mathrm{d}\log k$ of the spectral index $n_\mathrm{s}$ of primordial scalar fluctuations. We find $\alpha_\mathrm{s} = 0.011 \pm 0.010$ and $\beta_\mathrm{s} = 0.027$...
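
    Under the standard convention (which this abstract appears to use), the constrained spectrum is P(k) = A_s (k/k_*)^(n_s - 1 + (α_s/2) ln(k/k_*) + (β_s/6) ln²(k/k_*)). A small sketch evaluating it, with the central values quoted above and an assumed pivot k_* = 0.05 Mpc⁻¹:

        import numpy as np

        def primordial_spectrum(k, A_s=2.1e-9, n_s=0.965,
                                alpha_s=0.011, beta_s=0.027, k_star=0.05):
            """Primordial scalar spectrum with running and running of the running:
            P(k) = A_s (k/k*)^(n_s - 1 + alpha_s/2 ln(k/k*) + beta_s/6 ln^2(k/k*))."""
            x = np.log(k / k_star)
            return A_s * np.exp((n_s - 1 + alpha_s / 2 * x + beta_s / 6 * x**2) * x)

        print(primordial_spectrum(np.array([0.005, 0.05, 0.5])))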

  19. Model for Simulation Atmospheric Turbulence

    DEFF Research Database (Denmark)

    Lundtang Petersen, Erik

    1976-01-01

    A method that produces realistic simulations of atmospheric turbulence is developed and analyzed. The procedure makes use of a generalized spectral analysis, often called a proper orthogonal decomposition or the Karhunen-Loève expansion. A set of criteria, emphasizing a realistic appearance … eigenfunctions and estimates of the distributions of the corresponding expansion coefficients. The simulation method utilizes the eigenfunction expansion procedure to produce preliminary time histories of the three velocity components simultaneously. As a final step, a spectral shaping procedure is then applied. … The method is unique in modeling the three velocity components simultaneously, and it is found that important cross-statistical features are reasonably well-behaved. It is concluded that the model provides a practical, operational simulator of atmospheric turbulence.
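
    The final "spectral shaping" step can be illustrated by the common FFT trick of imposing a target spectrum on white noise. The sketch below assumes a bare inertial-range power law; it is a generic illustration, not the authors' simulator, which additionally models the three velocity components jointly via the eigenfunction expansion.

        import numpy as np

        def spectral_shaping(n=2**14, dt=0.05, slope=-5.0/3.0, rng=np.random):
            """Generate a turbulence-like velocity record by shaping random phases
            with a target one-sided spectrum S(f) ~ f**slope (inertial-range law)."""
            f = np.fft.rfftfreq(n, dt)
            amp = np.zeros_like(f)
            amp[1:] = f[1:] ** (slope / 2.0)            # amplitude = sqrt(S)
            phases = np.exp(2j * np.pi * rng.rand(len(f)))
            u = np.fft.irfft(amp * phases, n)
            return u / u.std()                          # normalize to unit variance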

  20. Integrating Visualizations into Modeling NEST Simulations.

    Science.gov (United States)

    Nowke, Christian; Zielasko, Daniel; Weyers, Benjamin; Peyser, Alexander; Hentschel, Bernd; Kuhlen, Torsten W

    2015-01-01

    Modeling large-scale spiking neural networks showing realistic biological behavior in their dynamics is a complex and tedious task. Since these networks consist of millions of interconnected neurons, their simulation produces an immense amount of data. In recent years it has become possible to simulate even larger networks. However, solutions to assist researchers in understanding the simulation's complex emergent behavior by means of visualization are still lacking. While developing tools to partially fill this gap, we encountered the challenge to integrate these tools easily into the neuroscientists' daily workflow. To understand what makes this so challenging, we looked into the workflows of our collaborators and analyzed how they use the visualizations to solve their daily problems. We identified two major issues: first, the analysis process can rapidly change focus which requires to switch the visualization tool that assists in the current problem domain. Second, because of the heterogeneous data that results from simulations, researchers want to relate data to investigate these effectively. Since a monolithic application model, processing and visualizing all data modalities and reflecting all combinations of possible workflows in a holistic way, is most likely impossible to develop and to maintain, a software architecture that offers specialized visualization tools that run simultaneously and can be linked together to reflect the current workflow, is a more feasible approach. To this end, we have developed a software architecture that allows neuroscientists to integrate visualization tools more closely into the modeling tasks. In addition, it forms the basis for semantic linking of different visualizations to reflect the current workflow. In this paper, we present this architecture and substantiate the usefulness of our approach by common use cases we encountered in our collaborative work.

  1. Integrating Visualizations into Modeling NEST Simulations

    Directory of Open Access Journals (Sweden)

    Christian Nowke

    2015-12-01

    Modeling large-scale spiking neural networks showing realistic biological behavior in their dynamics is a complex and tedious task. Since these networks consist of millions of interconnected neurons, their simulation produces an immense amount of data. In recent years it has become possible to simulate even larger networks. However, solutions to assist researchers in understanding the simulation's complex emergent behavior by means of visualization are still lacking. While developing tools to partially fill this gap, we encountered the challenge to integrate these tools easily into the neuroscientists' daily workflow. To understand what makes this so challenging, we looked into the workflows of our collaborators and analyzed how they use the visualizations to solve their daily problems. We identified two major issues: first, the analysis process can rapidly change focus which requires to switch the visualization tool that assists in the current problem domain. Second, because of the heterogeneous data that results from simulations, researchers want to relate data to investigate these effectively. Since a monolithic application model, processing and visualizing all data modalities and reflecting all combinations of possible workflows in a holistic way, is most likely impossible to develop and to maintain, a software architecture that offers specialized visualization tools that run simultaneously and can be linked together to reflect the current workflow, is a more feasible approach. To this end, we have developed a software architecture that allows neuroscientists to integrate visualization tools more closely into the modeling tasks. In addition, it forms the basis for semantic linking of different visualizations to reflect the current workflow. In this paper, we present this architecture and substantiate the usefulness of our approach by common use cases we encountered in our collaborative work.

  2. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    This paper describes the modelling, simulating and optimizing, including experimental verification, carried out as part of a Ph.D. project written and supervised, respectively, by the authors. The work covers dynamic performance of both water-tube boilers and fire-tube boilers. A detailed dynamic … model of the boiler has been developed and simulations carried out by means of the Matlab integration routines. The model is prepared as a dynamic model consisting of both ordinary differential equations and algebraic equations, together formulated as a Differential-Algebraic-Equation system. Being able … to operate a boiler plant dynamically means that the boiler designs must be able to absorb any fluctuations in water level and temperature gradients resulting from the pressure change in the boiler. On the one hand a large water-/steam space may be required, i.e. to build the boiler as big as possible. Due …

  3. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    This paper describes the modelling, simulating and optimizing, including experimental verification, carried out as part of a Ph.D. project written and supervised, respectively, by the authors. The work covers dynamic performance of both water-tube boilers and fire tube boilers. A detailed dynamic … to the internal pressure the consequence of the increased volume (i.e. water-/steam space) is an increased wall thickness in the pressure part of the boiler. The stresses introduced in the boiler pressure part as a result of the temperature gradients are proportional to the square of the wall thickness … model of the boiler has been developed and simulations carried out by means of the Matlab integration routines. The model is prepared as a dynamic model consisting of both ordinary differential equations and algebraic equations, together formulated as a Differential-Algebraic-Equation system. Being able …

  4. NASA SPoRT Initialization Datasets for Local Model Runs in the Environmental Modeling System

    Science.gov (United States)

    Case, Jonathan L.; LaFontaine, Frank J.; Molthan, Andrew L.; Carcione, Brian; Wood, Lance; Maloney, Joseph; Estupinan, Jeral; Medlin, Jeffrey M.; Blottman, Peter; Rozumalski, Robert A.

    2011-01-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center has developed several products for its National Weather Service (NWS) partners that can be used to initialize local model runs within the Weather Research and Forecasting (WRF) Environmental Modeling System (EMS). These real-time datasets consist of surface-based information updated at least once per day, produced in a composite or gridded product that is easily incorporated into the WRF EMS. The primary goal for making these NASA datasets available to the WRF EMS community is to provide timely and high-quality information at a spatial resolution comparable to that used in the local model configurations (i.e., convection-allowing scales). The current suite of SPoRT products supported in the WRF EMS includes a Sea Surface Temperature (SST) composite, a Great Lakes sea-ice extent, a Greenness Vegetation Fraction (GVF) composite, and Land Information System (LIS) gridded output. The SPoRT SST composite is a blend of primarily the Moderate Resolution Imaging Spectroradiometer (MODIS) infrared and Advanced Microwave Scanning Radiometer for Earth Observing System data for non-precipitation coverage over the oceans at 2-km resolution. The composite includes a special lake surface temperature analysis over the Great Lakes using contributions from the Remote Sensing Systems temperature data. The Great Lakes Environmental Research Laboratory Ice Percentage product is used to create a sea-ice mask in the SPoRT SST composite. The sea-ice mask is produced daily (in-season) at 1.8-km resolution and identifies ice percentage from 0 to 100% in 10% increments, with values above 90% flagged as ice.

  5. Modeling control in manufacturing simulation

    NARCIS (Netherlands)

    Zee, Durk-Jouke van der; Chick, S.; Sánchez, P.J.; Ferrin, D.; Morrice, D.J.

    2003-01-01

    A significant shortcoming of traditional simulation languages is the lack of attention paid to the modeling of control structures, i.e., the humans or systems responsible for manufacturing planning and control, their activities and the mutual tuning of their activities. Mostly they are hard coded

  6. Modeling and simulation of water flow on containment walls with inhomogeneous contact angle distribution

    Energy Technology Data Exchange (ETDEWEB)

    Amend, Katharina; Klein, Markus [Univ. der Bundeswehr Muenchen, Neubiberg (Germany). Inst. for Numerical Methods in Aerospace Engineering

    2017-07-15

    The paper presents a three-dimensional numerical simulation of water running down inclined surfaces using OpenFOAM. This research project aims at developing a CFD model to describe the run-down behavior of liquids and the resulting wash-down of fission products on surfaces in the reactor containment. An empirical contact angle model with wetted history is introduced, as well as a filtered randomized initial contact angle field. Simulation results are in good agreement with the experiments.
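
    A filtered randomized contact-angle field of the kind described can be mocked up by smoothing white noise to a chosen correlation length; the sketch below is an illustrative assumption about the construction, not the authors' OpenFOAM implementation, and all parameter values are placeholders.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def contact_angle_field(shape=(256, 256), mean_deg=70.0, spread_deg=15.0,
                                corr_px=8.0, rng=np.random):
            """Filtered randomized initial contact-angle field: white noise smoothed
            to a correlation length and rescaled, mimicking inhomogeneous wettability."""
            field = gaussian_filter(rng.randn(*shape), corr_px)  # impose correlation
            field *= spread_deg / field.std()                    # set the spread
            return np.clip(mean_deg + field, 5.0, 175.0)         # keep angles physical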

  7. A rational expectations model for simulation and policy evaluation of the Spanish economy

    OpenAIRE

    Boscá, J. E.; Díaz, A.; Doménech, R.; Ferri, J.; Pérez, E.

    2010-01-01

    This paper presents the model used for simulation purposes within the Spanish Ministry of Economic Affairs and Finance. REMS (a Rational Expectations Model for the Spanish economy) is a small open economy dynamic general equilibrium model in the vein of the New-Neoclassical-Keynesian synthesis models, with a strongly micro-founded system of equations. In the long run REMS behaves in accordance with the neoclassical growth model. In the short run, it incorporates nominal, real and financial fr...

  8. A Modeling & Simulation Implementation Framework for Large-Scale Simulation

    Directory of Open Access Journals (Sweden)

    Song Xiao

    2012-10-01

    Classical High Level Architecture (HLA) systems are facing development problems for lack of support for fine-grained component integration and interoperation in large-scale complex simulation applications. To address this issue, an extensible, reusable and composable simulation framework is proposed. To promote reusability from coarse-grained federates to fine-grained components, this paper proposes a modelling & simulation framework which consists of a component-based architecture, modelling methods, and simulation services to support and simplify the process of constructing complex simulation applications. Moreover, a standard process and simulation tools are developed to ensure the rapid and effective development of simulation applications.

  9. Biosensors for EVA: Muscle Oxygen and pH During Walking, Running and Simulated Reduced Gravity

    Science.gov (United States)

    Lee, S. M. C.; Ellerby, G.; Scott, P.; Stroud, L.; Norcross, J.; Pesholov, B.; Zou, F.; Gernhardt, M.; Soller, B.

    2009-01-01

    During lunar excursions in the EVA suit, real-time measurement of metabolic rate is required to manage consumables and guide activities to ensure safe return to the base. Metabolic rate, or oxygen consumption (VO2), is normally measured from pulmonary parameters but cannot be determined with standard techniques in the oxygen-rich environment of a spacesuit. Our group developed novel near infrared spectroscopic (NIRS) methods to calculate muscle oxygen saturation (SmO2), hematocrit, and pH, and we recently demonstrated that we can use our NIRS sensor to measure VO2 on the leg during cycling. Our NSBRI-funded project is looking to extend this methodology to examine activities which more appropriately represent EVA activities, such as walking and running and to better understand factors that determine the metabolic cost of exercise in both normal and lunar gravity. Our 4 year project specifically addresses risk: ExMC 4.18: Lack of adequate biomedical monitoring capability for Constellation EVA Suits and EPSP risk: Risk of compromised EVA performance and crew health due to inadequate EVA suit systems.

  10. The long-run forecasting of energy prices using the model of shifting trend

    International Nuclear Information System (INIS)

    Radchenko, Stanislav

    2005-01-01

    Developing models for accurate long-term energy price forecasting is an important problem because these forecasts should be useful in determining both supply and demand of energy. On the supply side, long-term forecasts determine investment decisions of energy-related companies. On the demand side, investments in physical capital and durable goods depend on price forecasts of a particular energy type. Forecasting long-run trend movements in energy prices is very important on the macroeconomic level for several developing countries because energy prices have large impacts on their real output, the balance of payments, fiscal policy, etc. Pindyck (1999) argues that the dynamics of real energy prices is mean-reverting to trend lines with slopes and levels that are shifting unpredictably over time. The hypothesis of shifting long-term trend lines was statistically tested by Benard et al. (2004). The authors find statistically significant instabilities for coal and natural gas prices. I continue the research of energy prices in the framework of continuously shifting levels and slopes of trend lines started by Pindyck (1999). The examined model offers both a parsimonious approach and a perspective on the developments in energy markets. Using the model of depletable resource production, Pindyck (1999) argued that the forecast of energy prices in the model is based on the long-run total marginal cost. Because the model of a shifting trend is based on competitive behavior, one may examine deviations of oil producers from competitive behavior by studying the difference between actual prices and long-term forecasts. To construct the long-run forecasts (10-year-ahead and 15-year-ahead) of energy prices, I modify the univariate shifting trends model of Pindyck (1999). I relax some assumptions on model parameters and the assumption of a white noise error term, and propose a new Bayesian approach utilizing a Gibbs sampling algorithm to estimate the model with autocorrelation.
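
    A stylized version of the shifting-trend idea — a trend line whose level and slope drift as random walks, plus stationary noise around it — can be simulated in a few lines; the parameterization below is an illustrative assumption, not Pindyck's or the author's estimated model.

        import numpy as np

        def shifting_trend_path(n=120, phi=0.8, s_eps=0.05,
                                s_level=0.02, s_slope=0.002, rng=np.random):
            """Simulate log price = (level_t + slope_t * t) + AR(1) noise, where
            level and slope drift as random walks (the 'shifting trend')."""
            level, slope, noise = 3.0, 0.0, 0.0
            p = np.empty(n)
            for t in range(n):
                level += s_level * rng.randn()
                slope += s_slope * rng.randn()
                noise = phi * noise + s_eps * rng.randn()
                p[t] = level + slope * t + noise
            return p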

  11. Validation process of simulation model

    International Nuclear Information System (INIS)

    San Isidro, M. J.

    1998-01-01

    A methodology for the empirical validation of detailed simulation models is presented. This kind of validation is always related to an experimental case. Empirical validation has a residual sense, because the conclusions are based on comparisons between simulated outputs and experimental measurements. The methodology guides us in detecting the failures of the simulation model, and it can also be used as a guide in the design of later experiments. Three steps can be clearly differentiated: sensitivity analysis, which can be made with a DSA (differential sensitivity analysis) and with an MCSA (Monte-Carlo sensitivity analysis); finding the optimal domains of the input parameters, for which a procedure based on Monte-Carlo methods and cluster techniques has been developed; and residual analysis, made in the time domain and in the frequency domain using correlation analysis and spectral analysis. As an application of this methodology, the validation carried out on a thermal simulation model of buildings in Spain is presented, studying the behavior of building components in a Test Cell of LECE at CIEMAT. (Author) 17 refs.
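
    The Monte-Carlo sensitivity analysis step admits a very compact generic sketch: sample the inputs, run the model, and rank inputs by correlation with the output. The toy model and uniform sampling below are assumptions; the methodology as applied to building simulation is considerably richer.

        import numpy as np

        def mc_sensitivity(model, bounds, n=1000, rng=np.random):
            """Monte-Carlo sensitivity analysis: sample inputs uniformly within
            bounds, evaluate the model, rank inputs by |correlation| with output."""
            lo, hi = np.array(bounds).T
            X = lo + (hi - lo) * rng.rand(n, len(bounds))
            y = np.array([model(x) for x in X])
            corr = [np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])]
            return np.abs(corr)

        # Toy model dominated by its second parameter
        print(mc_sensitivity(lambda x: 0.2 * x[0] + 2.0 * x[1] ** 2, [(0, 1), (0, 1)]))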

  12. Modeling and Simulation for Safeguards

    Energy Technology Data Exchange (ETDEWEB)

    Swinhoe, Martyn T. [Los Alamos National Laboratory

    2012-07-26

    The purpose of this talk is to give an overview of the role of modeling and simulation in safeguards R&D and to introduce (some of) the tools used. Some definitions are: (1) Modeling - the representation, often mathematical, of a process, concept, or operation of a system, often implemented by a computer program; (2) Simulation - the representation of the behavior or characteristics of one system through the use of another system, especially a computer program designed for the purpose; and (3) Safeguards - the timely detection of diversion of significant quantities of nuclear material. The roles of modeling and simulation are: (1) calculating amounts of material (plant modeling); (2) calculating signatures of nuclear material, etc. (source terms); and (3) detector performance (radiation transport and detection). Plant modeling software (e.g. FACSIM) gives the flows and the amount of material stored at all parts of the process. In safeguards this allows us to calculate the expected uncertainty of the mass and evaluate the expected MUF (material unaccounted for). We can determine the measurement accuracy required to achieve a certain performance.

  13. Model continuity in discrete event simulation: A framework for model-driven development of simulation models

    NARCIS (Netherlands)

    Cetinkaya, D; Verbraeck, A.; Seck, MD

    2015-01-01

    Most of the well-known modeling and simulation (M&S) methodologies state the importance of conceptual modeling in simulation studies, and they suggest the use of conceptual models during the simulation model development process. However, only a limited number of methodologies refer to how to

  14. Constraints on running vacuum model with H(z) and fσ₈

    Energy Technology Data Exchange (ETDEWEB)

    Geng, Chao-Qiang [Chongqing University of Posts and Telecommunications, Chongqing, 400065 (China); Lee, Chung-Chi [DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Yin, Lu, E-mail: geng@phys.nthu.edu.tw, E-mail: lee.chungchi16@gmail.com, E-mail: yinlumail@foxmail.com [Department of Physics, National Tsing Hua University, Hsinchu, 300 Taiwan (China)

    2017-08-01

    We examine the running vacuum model with Λ(H) = 3νH² + Λ₀, where ν is the model parameter and Λ₀ is the cosmological constant. From the data of the cosmic microwave background radiation, weak lensing and baryon acoustic oscillations, along with the time-dependent Hubble parameter H(z) and weighted linear growth f(z)σ₈(z) measurements, we find that ν = (1.37 +0.72/−0.95) × 10⁻⁴, with the best-fitted χ² value slightly smaller than that in the ΛCDM model.
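
    In this class of models the matter density dilutes as (1+z)^(3(1−ν)), which gives a commonly quoted closed form for the expansion rate; the sketch below assumes that form, flatness, and the abstract's central value of ν, and is an illustration rather than the authors' likelihood code.

        import numpy as np

        def E2_running_vacuum(z, Om=0.3, nu=1.37e-4):
            """Normalized expansion rate H^2/H0^2 for Lambda(H) = 3 nu H^2 + Lambda0;
            matter dilutes as (1+z)^(3(1-nu)), reducing to LambdaCDM when nu = 0."""
            return 1.0 + Om / (1.0 - nu) * ((1.0 + z) ** (3.0 * (1.0 - nu)) - 1.0)

        z = np.linspace(0, 2, 5)
        print(np.sqrt(E2_running_vacuum(z)))   # deviates from LambdaCDM at the 1e-4 level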

  15. Assessment of Molecular Modeling & Simulation

    Energy Technology Data Exchange (ETDEWEB)

    None

    2002-01-03

    This report reviews the development and applications of molecular and materials modeling in Europe and Japan in comparison to those in the United States. Topics covered include computational quantum chemistry, molecular simulations by molecular dynamics and Monte Carlo methods, mesoscale modeling of material domains, molecular-structure/macroscale property correlations like QSARs and QSPRs, and related information technologies like informatics and special-purpose molecular-modeling computers. The panel's findings include the following: The United States leads this field in many scientific areas. However, Canada has particular strengths in DFT methods and homogeneous catalysis; Europe in heterogeneous catalysis, mesoscale, and materials modeling; and Japan in materials modeling and special-purpose computing. Major government-industry initiatives are underway in Europe and Japan, notably in multi-scale materials modeling and in development of chemistry-capable ab-initio molecular dynamics codes.

  16. Sea Ice Trends in Climate Models Only Accurate in Runs with Biased Global Warming

    Science.gov (United States)

    Rosenblum, Erica; Eisenman, Ian

    2017-08-01

    Observations indicate that the Arctic sea ice cover is rapidly retreating while the Antarctic sea ice cover is steadily expanding. State-of-the-art climate models, by contrast, typically simulate a moderate decrease in both the Arctic and Antarctic sea ice covers. However, in each hemisphere there is a small subset of model simulations that have sea ice trends similar to the observations. Based on this, a number of recent studies have suggested that the models are consistent with the observations in each hemisphere when simulated internal climate variability is taken into account. Here we examine sea ice changes during 1979-2013 in simulations from the most recent Coupled Model Intercomparison Project (CMIP5) as well as the Community Earth System Model Large Ensemble (CESM-LE), drawing on previous work that found a close relationship in climate models between global-mean surface temperature and sea ice extent. We find that all of the simulations with 1979-2013 Arctic sea ice retreat as fast as observed have considerably more global warming than observations during this time period. Using two separate methods to estimate the sea ice retreat that would occur under the observed level of global warming in each simulation in both ensembles, we find that simulated Arctic sea ice retreat as fast as observed would occur less than 1% of the time. This implies that the models are not consistent with the observations. In the Antarctic, we find that simulated sea ice expansion as fast as observed typically corresponds with too little global warming, although these results are more equivocal. We show that because of this, the simulations do not capture the observed asymmetry between Arctic and Antarctic sea ice trends. This suggests that the models may be getting the right sea ice trends for the wrong reasons in both polar regions.

  17. Effect of audio in-vehicle red light-running warning message on driving behavior based on a driving simulator experiment.

    Science.gov (United States)

    Yan, Xuedong; Liu, Yang; Xu, Yongcun

    2015-01-01

    Drivers' incorrect decisions to cross signalized intersections at the onset of the yellow change may lead to red light running (RLR), and RLR crashes result in substantial numbers of severe injuries and property damage. In recent years, some Intelligent Transport System (ITS) concepts have focused on reducing RLR by alerting drivers that they are about to violate the signal. The objective of this study is to conduct an experimental investigation of the effectiveness of a red light violation warning system using a voice message. In this study, the prototype concept of the RLR audio warning system was modeled and tested in a high-fidelity driving simulator. According to the concept, when a vehicle is approaching an intersection at the onset of yellow and the time to the intersection is longer than the yellow interval, the in-vehicle warning system can activate the following audio message: "The red light is impending. Please decelerate!" The intent of the warning design is to encourage drivers who cannot clear an intersection during the yellow change interval to stop at the intersection. The experimental results showed that the warning message could decrease red light running violations by 84.3 percent. Based on the logistic regression analyses, drivers without a warning were about 86 times more likely to make go decisions at the onset of yellow and about 15 times more likely to run red lights than those with a warning. Additionally, it was found that the audio warning message could significantly reduce RLR severity because the RLR drivers' red-entry times without a warning were longer than those with a warning. This driving simulator study showed a promising effect of the audio in-vehicle warning message on reducing RLR violations and crashes. It is worthwhile to further develop the proposed technology in field applications.
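
    The warning trigger described — fire the voice message at yellow onset when the time to the intersection exceeds the yellow interval — reduces to a one-line test; the sketch below is a schematic reading of that logic with an assumed 4-s yellow, not the experiment's implementation.

        def should_warn(distance_m, speed_mps, yellow_s=4.0):
            """Trigger the audio warning at yellow onset if the vehicle cannot reach
            the stop line before the red, i.e. time-to-intersection > yellow interval."""
            tti = distance_m / max(speed_mps, 0.1)
            return tti > yellow_s

        print(should_warn(70.0, 15.0))   # 4.7 s to the line vs. 4.0 s of yellow -> warn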

  18. Simulation Study of an LWFA-based Electron Injector for AWAKE Run 2

    CERN Document Server

    Williamson, B.; Doebert, S.; Karsch, S.; Muggli, P.

    The AWAKE experiment aims to demonstrate preservation of injected electron beam quality during acceleration in proton-driven plasma waves. The short bunch duration required to correctly load the wakefield is challenging to meet with the current electron injector system, given the space available to the beamline. An LWFA readily provides short-duration electron beams with sufficient charge from a compact design, and provides a scalable option for future electron acceleration experiments at AWAKE. Simulations of a shock-front injected LWFA demonstrate a 43 TW laser system would be sufficient to produce the required charge over a range of energies beyond 100 MeV. LWFA beams typically have high peak current and large divergence on exiting their native plasmas, and optimisation of bunch parameters before injection into the proton-driven wakefields is required. Compact beam transport solutions are discussed.

  19. Simulating spin models on GPU

    Science.gov (United States)

    Weigel, Martin

    2011-09-01

    Over the last couple of years it has been realized that the vast computational power of graphics processing units (GPUs) could be harvested for purposes other than the video game industry. This power, which at least nominally exceeds that of current CPUs by large factors, results from the relative simplicity of the GPU architectures as compared to CPUs, combined with a large number of parallel processing units on a single chip. To benefit from this setup for general computing purposes, the problems at hand need to be prepared in a way to profit from the inherent parallelism and hierarchical structure of memory accesses. In this contribution I discuss the performance potential for simulating spin models, such as the Ising model, on GPU as compared to conventional simulations on CPU.
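
    A CPU reference implementation of Metropolis single-spin-flip dynamics for the 2D Ising model is shown below for orientation; an actual GPU port would instead update the two checkerboard sublattices in parallel, since nearest-neighbor spins must not change simultaneously. The lattice size and coupling are illustrative.

        import numpy as np

        def metropolis_sweep(s, beta, rng=np.random):
            """One Metropolis sweep of the 2D Ising model (CPU reference). GPU codes
            update the two checkerboard sublattices in parallel instead."""
            L = s.shape[0]
            for _ in range(L * L):
                i, j = rng.randint(L, size=2)
                nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
                dE = 2.0 * s[i, j] * nb              # energy cost of flipping spin (i, j)
                if dE <= 0 or rng.rand() < np.exp(-beta * dE):
                    s[i, j] = -s[i, j]
            return s

        spins = np.random.choice([-1, 1], size=(32, 32))
        metropolis_sweep(spins, beta=0.44)           # near the critical coupling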

  1. SIMULATION RESULTS OF RUNNING THE AGS MMPS, BY STORING ENERGY IN CAPACITOR BANKS.

    Energy Technology Data Exchange (ETDEWEB)

    MARNERIS, I.

    2006-09-01

    The Brookhaven AGS is a strong-focusing accelerator which is used to accelerate protons and various heavy ion species to an equivalent maximum proton energy of 29 GeV. The AGS Main Magnet Power Supply (MMPS) is a thyristor control supply rated at 5500 Amps, +/-9000 Volts. The peak magnet power is 49.5 MW. The power supply is fed from a motor/generator manufactured by Siemens. The motor is rated at 9 MW, input voltage 3-phase 13.8 kV 60 Hz. The generator is rated at 50 MVA; its output voltage is 3-phase 7500 Volts. Thus the peak power requirements come from the stored energy in the rotor of the motor/generator. The rotor changes speed by about +/-2.5% of its nominal speed of 1200 revolutions per minute. The reason the power supply is powered by the generator is that the local power company (LIPA) cannot sustain power swings of +/-50 MW in 0.5 sec if the power supply were interfaced directly with the AC lines. The motor/generator is about 45 years old and Siemens is not manufacturing similar machines in the future. As a result we are looking at different ways of storing energy and being able to utilize it for our application. This paper will present simulations of a power supply where energy is stored in capacitor banks. The simulation program used is called PSIM Version 6.1. The control system of the power supply will also be presented. The average power from LIPA into the power supply will be kept constant while the magnet power pulses between +/-50 MW. The reactive power will also be kept constant, below 1.5 MVAR. Waveforms will be presented.
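
    The basic sizing argument for such a capacitor bank follows from the energy swing it must absorb each cycle, E = ½C(V_max² − V_min²); the sketch below uses the paper's peak power but assumes the pulse shape and voltage window, so the resulting capacitance is purely illustrative.

        def capacitor_bank_size(p_peak_w=49.5e6, pulse_s=0.5, v_max=9000.0, v_min=5000.0):
            """Capacitance needed so the bank, swinging between v_max and v_min,
            supplies the magnet energy of one pulse: E = 0.5 C (v_max^2 - v_min^2)."""
            energy_j = 0.5 * p_peak_w * pulse_s      # crude triangular-pulse estimate
            return 2.0 * energy_j / (v_max**2 - v_min**2)

        print(capacitor_bank_size())                 # capacitance in farads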

  2. mr: A C++ library for the matching and running of the Standard Model parameters

    Science.gov (United States)

    Kniehl, Bernd A.; Pikelner, Andrey F.; Veretin, Oleg L.

    2016-09-01

    We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS-bar renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library. Catalogue identifier: AFAI_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AFAI_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License, version 3. No. of lines in distributed program, including test data, etc.: 517613. No. of bytes in distributed program, including test data, etc.: 2358729. Distribution format: tar.gz. Programming language: C++. Computer: IBM PC. Operating system: Linux, Mac OS X. RAM: 1 GB. Classification: 11.1. External routines: TSIL [1], OdeInt [2], boost [3]. Nature of problem: The running parameters of the Standard Model renormalized in the MS-bar scheme at some high renormalization scale, chosen by the user, are evaluated in perturbation theory as precisely as possible in two steps. First, the initial conditions at the electroweak energy scale are evaluated from the Fermi constant GF and the pole masses of the W, Z, and Higgs bosons and the bottom and top quarks, including the full two-loop threshold corrections. Second, the evolution to the high energy scale is performed by numerically solving the renormalization group evolution equations through three loops. Pure QCD corrections to the matching and running are included through four loops. Solution method: Numerical integration of analytic expressions. Additional comments: Available for download from URL
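
    For orientation, a one-loop analogue of what mr does at much higher precision: integrate the renormalization group equations dg_i/d ln μ = b_i g_i³/(16π²) with the standard one-loop Standard Model coefficients (GUT-normalized hypercharge). The starting values below are rough, and the sketch is not part of the mr library.

        import numpy as np
        from scipy.integrate import solve_ivp

        B = np.array([41.0 / 10.0, -19.0 / 6.0, -7.0])  # one-loop SM coefficients

        def rge(t, g):
            """One-loop running dg_i/dlnmu = b_i g_i^3 / (16 pi^2)."""
            return B * g**3 / (16 * np.pi**2)

        g_mz = np.array([0.46, 0.65, 1.22])             # rough couplings at the Z mass
        sol = solve_ivp(rge, [np.log(91.19), np.log(1e16)], g_mz)
        print(sol.y[:, -1])                             # couplings near the GUT scale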

  3. Tutorial: Parallel Computing of Simulation Models for Risk Analysis.

    Science.gov (United States)

    Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D

    2016-10-01

    Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix.
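
    The article's examples use MATLAB and R; a Python analogue of the same embarrassingly parallel pattern — identical independent replications farmed out to worker processes — is sketched below with a toy Monte Carlo estimate of π.

        import multiprocessing as mp
        import random

        def one_replication(seed):
            """One independent simulation replication (toy: Monte Carlo pi)."""
            rng = random.Random(seed)
            hits = sum(rng.random()**2 + rng.random()**2 <= 1.0 for _ in range(100_000))
            return 4.0 * hits / 100_000

        if __name__ == "__main__":
            with mp.Pool() as pool:                 # one worker per available core
                estimates = pool.map(one_replication, range(32))
            print(sum(estimates) / len(estimates))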

  4. Statistical 3D damage accumulation model for ion implant simulators

    International Nuclear Information System (INIS)

    Hernandez-Mangas, J.M.; Lazaro, J.; Enriquez, L.; Bailon, L.; Barbolla, J.; Jaraiz, M.

    2003-01-01

    A statistical 3D damage accumulation model, based on the modified Kinchin-Pease formula, for ion implant simulation has been included in our physically based ion implantation code. It has only one fitting parameter for electronic stopping and uses 3D electron density distributions for different types of targets including compound semiconductors. Also, a statistical noise reduction mechanism based on the dose division is used. The model has been adapted to be run under parallel execution in order to speed up the calculation in 3D structures. Sequential ion implantation has been modelled including previous damage profiles. It can also simulate the implantation of molecular and cluster projectiles. Comparisons of simulated doping profiles with experimental SIMS profiles are presented. Also comparisons between simulated amorphization and experimental RBS profiles are shown. An analysis of sequential versus parallel processing is provided
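    For orientation, the standard modified Kinchin-Pease (NRT) displacement estimate that this family of damage models builds on can be written in a few lines; the paper's statistical 3D model layers dose division, 3D electron densities, and a fitted electronic-stopping parameter on top of expressions of this kind. The 15 eV displacement threshold below is a commonly quoted silicon-like value, assumed here purely for illustration.

```python
# Modified Kinchin-Pease (NRT) displacement estimate -- the standard textbook
# form, shown for orientation; the paper's statistical 3D variant is richer.
def nrt_displacements(damage_energy_eV, e_d_eV=15.0):
    """Number of Frenkel pairs for a given damage energy T_dam.

    e_d_eV: displacement threshold energy (material dependent; 15 eV is a
    common silicon-like value, assumed here for illustration).
    """
    if damage_energy_eV < e_d_eV:
        return 0.0                      # below threshold: no stable displacement
    if damage_energy_eV < 2.0 * e_d_eV / 0.8:
        return 1.0                      # single-displacement regime
    return 0.8 * damage_energy_eV / (2.0 * e_d_eV)  # cascade regime

print(nrt_displacements(1000.0))  # ~26.7 displacements for a 1 keV recoil
```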

  5. Statistical 3D damage accumulation model for ion implant simulators

    CERN Document Server

    Hernandez-Mangas, J M; Enriquez, L E; Bailon, L; Barbolla, J; Jaraiz, M

    2003-01-01

    A statistical 3D damage accumulation model, based on the modified Kinchin-Pease formula, for ion implant simulation has been included in our physically based ion implantation code. It has only one fitting parameter for electronic stopping and uses 3D electron density distributions for different types of targets including compound semiconductors. Also, a statistical noise reduction mechanism based on the dose division is used. The model has been adapted to be run under parallel execution in order to speed up the calculation in 3D structures. Sequential ion implantation has been modelled including previous damage profiles. It can also simulate the implantation of molecular and cluster projectiles. Comparisons of simulated doping profiles with experimental SIMS profiles are presented. Also comparisons between simulated amorphization and experimental RBS profiles are shown. An analysis of sequential versus parallel processing is provided.

  6. Modelling of thermalhydraulics and reactor physics in simulators

    International Nuclear Information System (INIS)

    Miettinen, J.

    1994-01-01

    The evolution of thermalhydraulic analysis methods for analysis and simulator purposes has brought the thermohydraulic models in the two application areas closer together. In large analysis codes like RELAP5, TRAC, CATHARE and ATHLET, accuracy in calculating complicated phenomena has been emphasized, but in spite of large development efforts many generic problems remain unsolved. For simulator purposes, fast-running codes have been developed, with only limited assessment efforts; these codes, however, have more simulator-friendly features than the large codes, such as portability and a modular code structure. In this respect the simulator experiences with the SMABRE code are discussed. Both large analysis codes and special simulator codes have their advantages in simulator applications. The evolution of reactor physics calculation methods in simulator applications started from simple point-kinetics models. For analysis purposes, accurate 1-D and 3-D codes have been developed that are capable of handling fast and complicated transients. For simulator purposes, the capability to simulate instruments has been emphasized, while dynamic simulation capability has been less significant. The approaches to 3-dimensionality in simulators still require considerable development before the accuracy of the analysis codes is reached. (orig.) (8 refs., 2 figs., 2 tabs.)

  7. A Network Contention Model for the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2015-01-01

    The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim that eliminates the shortcomings of the existing network modeling capabilities. The approach takes a different path, implementing network contention and bandwidth-capacity modeling with a less synchronous but sufficiently accurate model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overheads.
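    As a rough intuition for what bandwidth-capacity and contention modeling means, the toy model below serializes messages over one shared link, so a message queues whenever the link is busy. This is only a sketch of the concept; xSim's actual model is PDES-based and deliberately less synchronous than this.

```python
# A toy shared-link contention model -- purely illustrative; xSim's actual
# network model is more sophisticated and less synchronous than this.
def simulate_link(messages, bandwidth_bytes_per_s, base_latency_s):
    """Serialize messages over one shared link; returns delivery times.

    messages: list of (send_time_s, size_bytes) tuples.
    """
    link_free_at = 0.0
    deliveries = []
    for send_time, size in sorted(messages):
        start = max(send_time, link_free_at)   # wait if the link is busy
        transfer = size / bandwidth_bytes_per_s
        link_free_at = start + transfer        # contention: next message queues
        deliveries.append(start + transfer + base_latency_s)
    return deliveries

# Two simultaneous 1 MB messages on a 1 GB/s link: the second is delayed.
print(simulate_link([(0.0, 1e6), (0.0, 1e6)], 1e9, 1e-6))
```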

  8. Creating Simulated Microgravity Patient Models

    Science.gov (United States)

    Hurst, Victor; Doerr, Harold K.; Bacal, Kira

    2004-01-01

    The Medical Operational Support Team (MOST) has been tasked by the Space and Life Sciences Directorate (SLSD) at the NASA Johnson Space Center (JSC) to integrate medical simulation into 1) medical training for ground and flight crews and into 2) evaluations of medical procedures and equipment for the International Space Station (ISS). To do this, the MOST requires patient models that represent the physiological changes observed during spaceflight. Despite the presence of physiological data collected during spaceflight, there is no defined set of parameters that illustrate or mimic a 'space normal' patient. Methods: The MOST culled space-relevant medical literature and data from clinical studies performed in microgravity environments. The areas of focus for data collection were in the fields of cardiovascular, respiratory and renal physiology. Results: The MOST developed evidence-based patient models that mimic the physiology believed to be induced by human exposure to a microgravity environment. These models have been integrated into space-relevant scenarios using a human patient simulator and ISS medical resources. Discussion: Despite the lack of a set of physiological parameters representing 'space normal,' the MOST developed space-relevant patient models that mimic microgravity-induced changes in terrestrial physiology. These models are used in clinical scenarios that will medically train flight surgeons, biomedical flight controllers (biomedical engineers; BME) and, eventually, astronaut-crew medical officers (CMO).

  9. General introduction to simulation models

    DEFF Research Database (Denmark)

    Hisham Beshara Halasa, Tariq; Boklund, Anette

    2012-01-01

    Monte Carlo simulation can be defined as a representation of real-life systems to gain insight into their functions and to investigate the effects of alternative conditions or actions on the modeled system. Models are a simplification of a system. Most often, it is best to use experiments and fie...... as support decision making. However, several other factors affect decision making, such as ethics, politics and economics. Furthermore, the insight gained when models are built points out areas where knowledge is lacking....... of FMD spread that can provide useful and trustworthy advice, there are four important issues which the model should represent: 1) the herd structure of the country in question, 2) the dynamics of animal movements and contacts between herds, 3) the biology of the disease, and 4) the regulations...

  10. MACC regional multi-model ensemble simulations of birch pollen dispersion in Europe

    NARCIS (Netherlands)

    Sofiev, M.; Berger, U.; Prank, M.; Vira, J.; Arteta, J.; Belmonte, J.; Bergmann, K.C.; Chéroux, F.; Elbern, H.; Friese, E.; Galan, C.; Gehrig, R.; Khvorostyanov, D.; Kranenburg, R.; Kumar, U.; Marécal, V.; Meleux, F.; Menut, L.; Pessi, A.M.; Robertson, L.; Ritenberga, O.; Rodinkova, V.; Saarto, A.; Segers, A.; Severova, E.; Sauliene, I.; Siljamo, P.; Steensen, B.M.; Teinemaa, E.; Thibaudon, M.; Peuch, V.H.

    2015-01-01

    This paper presents the first ensemble modelling experiment in relation to birch pollen in Europe. The seven-model European ensemble of MACC-ENS, tested in trial simulations over the flowering season of 2010, was run through the flowering season of 2013. The simulations have been compared with

  11. Singlet extensions of the standard model at LHC Run 2: benchmarks and comparison with the NMSSM

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Raul [Centro de Física Teórica e Computacional, Faculdade de Ciências,Universidade de Lisboa, Campo Grande, Edifício C8 1749-016 Lisboa (Portugal); Departamento de Física da Universidade de Aveiro,Campus de Santiago, 3810-183 Aveiro (Portugal); Mühlleitner, Margarete [Institute for Theoretical Physics, Karlsruhe Institute of Technology,76128 Karlsruhe (Germany); Sampaio, Marco O.P. [Departamento de Física da Universidade de Aveiro,Campus de Santiago, 3810-183 Aveiro (Portugal); CIDMA - Center for Research Development in Mathematics and Applications,Campus de Santiago, 3810-183 Aveiro (Portugal); Santos, Rui [Centro de Física Teórica e Computacional, Faculdade de Ciências,Universidade de Lisboa, Campo Grande, Edifício C8 1749-016 Lisboa (Portugal); ISEL - Instituto Superior de Engenharia de Lisboa,Instituto Politécnico de Lisboa, 1959-007 Lisboa (Portugal)

    2016-06-07

    The Complex singlet extension of the Standard Model (CxSM) is the simplest extension that provides scenarios for Higgs pair production with different masses. The model has two interesting phases: the dark matter phase, with a Standard Model-like Higgs boson, a new scalar and a dark matter candidate; and the broken phase, with all three neutral scalars mixing. In the latter phase Higgs decays into a pair of two different Higgs bosons are possible. In this study we analyse Higgs-to-Higgs decays in the framework of singlet extensions of the Standard Model (SM), with focus on the CxSM. After demonstrating that scenarios with large rates for such chain decays are possible we perform a comparison between the NMSSM and the CxSM. We find that, based on Higgs-to-Higgs decays, the only possibility to distinguish the two models at the LHC run 2 is through final states with two different scalars. This conclusion builds a strong case for searches for final states with two different scalars at the LHC run 2. Finally, we propose a set of benchmark points for the real and complex singlet extensions to be tested at the LHC run 2. They have been chosen such that the discovery prospects of the involved scalars are maximised and they fulfil the dark matter constraints. Furthermore, for some of the points the theory is stable up to high energy scales. For the computation of the decay widths and branching ratios we developed the Fortran code sHDECAY, which is based on the implementation of the real and complex singlet extensions of the SM in HDECAY.

  12. Advances in Intelligent Modelling and Simulation Simulation Tools and Applications

    CERN Document Server

    Oplatková, Zuzana; Carvalho, Marco; Kisiel-Dorohinicki, Marek

    2012-01-01

    The human capacity to abstract complex systems and phenomena into simplified models has played a critical role in the rapid evolution of our modern industrial processes and scientific research. As a science and an art, Modelling and Simulation have been one of the core enablers of this remarkable human trace, and have become a topic of great importance for researchers and practitioners. This book was created to compile some of the most recent concepts, advances, challenges and ideas associated with Intelligent Modelling and Simulation frameworks, tools and applications. The first chapter discusses the important aspects of a human interaction and the correct interpretation of results during simulations. The second chapter gets to the heart of the analysis of entrepreneurship by means of agent-based modelling and simulations. The following three chapters bring together the central theme of simulation frameworks, first describing an agent-based simulation framework, then a simulator for electrical machines, and...

  13. Verifying and Validating Simulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state of knowledge in the discipline considered. In particular, comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack of knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.
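    The statistical sampling methods mentioned for propagating variability and randomness reduce, in the simplest case, to Monte Carlo propagation: draw inputs from their distributions, push each draw through the model, and summarize the output spread. The toy model and input distributions below are assumptions chosen purely to illustrate the mechanics.

```python
# Sampling-based propagation of input variability through a model -- a minimal
# sketch of the statistical approach the presentation mentions.
import numpy as np

def model(youngs_modulus, load):
    """Toy deterministic model standing in for a real simulation code."""
    return load / youngs_modulus

rng = np.random.default_rng(42)
n = 100_000
E = rng.normal(200e9, 10e9, n)      # aleatoric variability in material stiffness
P = rng.uniform(9e3, 11e3, n)       # variability in applied load
response = model(E, P)

print(f"mean = {response.mean():.3e}, 95% interval = "
      f"[{np.percentile(response, 2.5):.3e}, {np.percentile(response, 97.5):.3e}]")
```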

  14. The effect of treadmill running on passive avoidance learning in animal model of Alzheimer disease.

    Science.gov (United States)

    Hosseini, Nasrin; Alaei, Hojjatallah; Reisi, Parham; Radahmadi, Maryam

    2013-02-01

    Alzheimer's disease is a progressive neurodegenerative disorder of the elderly, characterized by dementia and severe neuronal loss in some regions of the brain, such as the nucleus basalis magnocellularis, which plays an important role in brain functions such as learning and memory. Loss of the cholinergic neurons of the nucleus basalis magnocellularis induced by ibotenic acid is commonly regarded as a suitable model of Alzheimer's disease. Previous studies reported that exercise training may slow down the onset and progression of memory deficits in neurodegenerative disorders. This research investigates the effects of treadmill running on the acquisition and retention of passive avoidance behavior impaired by ibotenic acid lesion of the nucleus basalis magnocellularis. Male Wistar rats were randomly selected and divided into five groups: control, sham, Alzheimer, exercise before Alzheimer, and exercise. Treadmill running lasted 21 days, and the Alzheimer model was induced by bilateral injection of 5 μg/μl ibotenic acid into the nucleus basalis magnocellularis. Our results showed that ibotenic acid lesions significantly impaired passive avoidance acquisition, and that exercise significantly (P < 0.001) improved passive avoidance learning in NBM-lesion rats. Treadmill running has a potential role in the prevention of learning and memory impairments in NBM-lesion rats.

  15. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    DEFF Research Database (Denmark)

    Sørensen, Kim; Condra, Thomas Joseph; Houbak, Niels

    2004-01-01

    In the present work a framework for optimizing the design of boilers for dynamic operation has been developed. A cost function to be minimized during the optimization has been formulated, and for the present design variables related to the Boiler Volume and the Boiler Load Gradient (i.e. firing rate...... on the boiler) have been defined. Furthermore, a number of constraints related to minimum and maximum boiler load gradient, minimum boiler size, shrinking and swelling, and steam space load have been defined. For defining the constraints related to the required boiler volume, a dynamic model for simulating the boiler...... performance has been developed. Outputs from the simulations are the shrinking and swelling of the water level in the drum during, for example, a start-up of the boiler; these figures, combined with the requirements with respect to allowable water level fluctuations in the drum, define the requirements with respect to drum...

  16. Suppressing correlations in massively parallel simulations of lattice models

    Science.gov (United States)

    Kelling, Jeffrey; Ódor, Géza; Gemming, Sibylle

    2017-11-01

    For lattice Monte Carlo simulations parallelization is crucial to make studies of large systems and long simulation times feasible, while sequential simulations remain the gold standard for correlation-free dynamics. Here, various domain decomposition schemes are compared, concluding with one which delivers virtually correlation-free simulations on GPUs. Extensive simulations of the octahedron model for 2 + 1 dimensional Kardar-Parisi-Zhang surface growth, which is very sensitive to correlation in the site-selection dynamics, were performed to show self-consistency of the parallel runs and agreement with the sequential algorithm. We present a GPU implementation providing a speedup of about 30× over a parallel CPU implementation on a single socket and at least 180× with respect to the sequential reference.
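    The core idea of domain decomposition for lattice Monte Carlo can be sketched briefly: tile the lattice into blocks and concurrently update only non-adjacent (same-colored) blocks, so that no two simultaneous updates touch the same sites. The checkerboard sketch below illustrates that idea in Python with an invented toy update kernel; the paper's GPU schemes are more refined precisely because naive decompositions like this one can still leave correlation artifacts.

```python
# Checkerboard domain decomposition for lattice Monte Carlo -- a minimal
# sketch of the idea; the paper compares more refined schemes that suppress
# the residual correlations such naive tilings can introduce.
import numpy as np

rng = np.random.default_rng(0)
L, block = 64, 8                      # lattice size and block edge length
lattice = rng.integers(0, 2, size=(L, L))

def update_block(lat, i0, j0, b, rng):
    """Placeholder kernel: a real simulation would do local MC moves here."""
    i = i0 + rng.integers(0, b); j = j0 + rng.integers(0, b)
    lat[i, j] ^= 1                    # toy spin flip confined to the block

for color in (0, 1):                  # blocks of one color never touch
    origins = [(i, j) for i in range(0, L, block) for j in range(0, L, block)
               if ((i // block + j // block) % 2) == color]
    for i0, j0 in origins:            # these block updates could run in parallel
        update_block(lattice, i0, j0, block, rng)
```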

  17. Overall Preference of Running Shoes Can Be Predicted by Suitable Perception Factors Using a Multiple Regression Model.

    Science.gov (United States)

    Tay, Cheryl Sihui; Sterzing, Thorsten; Lim, Chen Yen; Ding, Rui; Kong, Pui Wah

    2017-05-01

    This study examined (a) the strength of four individual footwear perception factors to influence the overall preference of running shoes and (b) whether these perception factors satisfied the nonmulticollinear assumption in a regression model. Running footwear must fulfill multiple functional criteria to satisfy its potential users. Footwear perception factors, such as fit and cushioning, are commonly used to guide shoe design and development, but it is unclear whether running-footwear users are able to differentiate one factor from another. One hundred casual runners assessed four running shoes on a 15-cm visual analogue scale for four footwear perception factors (fit, cushioning, arch support, and stability) as well as for overall preference during a treadmill running protocol. Diagnostic tests showed an absence of multicollinearity between factors, where values for tolerance ranged from .36 to .72, corresponding to variance inflation factors of 2.8 to 1.4. The multiple regression model of these four footwear perception variables accounted for 77.7% to 81.6% of variance in overall preference, with each factor explaining a unique part of the total variance. Casual runners were able to rate each footwear perception factor separately, thus assigning each factor a true potential to improve overall preference for the users. The results also support the use of a multiple regression model of footwear perception factors to predict overall running shoe preference. Regression modeling is a useful tool for running-shoe manufacturers to more precisely evaluate how individual factors contribute to the subjective assessment of running footwear.
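    The study's statistical machinery (variance inflation factors to check the nonmulticollinearity assumption, followed by a multiple regression of overall preference on the four perception factors) can be reproduced on synthetic data. In the hedged sketch below, the rating data and coefficients are fabricated stand-ins; only the procedure, VIF diagnostics followed by ordinary least squares, mirrors the paper.

```python
# VIF diagnostics plus preference regression -- synthetic data standing in
# for the study's visual-analogue ratings; coefficients are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
n = 400  # hypothetical: 100 runners rating 4 shoes
X = pd.DataFrame({
    "fit": rng.uniform(0, 15, n), "cushioning": rng.uniform(0, 15, n),
    "arch_support": rng.uniform(0, 15, n), "stability": rng.uniform(0, 15, n),
})
y = (0.4 * X["fit"] + 0.3 * X["cushioning"] + 0.1 * X["arch_support"]
     + 0.2 * X["stability"] + rng.normal(0, 1.5, n))

exog = sm.add_constant(X)
for i, col in enumerate(X.columns, start=1):  # VIF well below ~5: no collinearity
    print(col, variance_inflation_factor(exog.values, i))
print(sm.OLS(y, exog).fit().rsquared)         # variance explained by the factors
```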

  18. A High-Speed Train Operation Plan Inspection Simulation Model

    Directory of Open Access Journals (Sweden)

    Yang Rui

    2018-01-01

    We developed a train operation simulation tool to inspect a train operation plan. Applying an improved Petri net, the train was regarded as a token, and the line and station were regarded as places, in accordance with high-speed train operation characteristics and network function. Location changes and running-information transfer of the high-speed train were realized by customizing a variety of transitions. The model was built on the concept of component combination, considering random disturbances in the process of train running. The simulation framework can be generated quickly and the system operation completed according to the different test requirements and the required network data. We tested the simulation tool on the real-world Wuhan to Guangzhou high-speed line. The results showed that the proposed model works, that the simulation results basically coincide with objective reality, and that the tool can not only test the feasibility of a high-speed train operation plan but also serve as a support model for developing a simulation platform with more capabilities.

  19. Simulated annealing model of acupuncture

    Science.gov (United States)

    Shang, Charles; Szu, Harold

    2015-05-01

    The growth control singularity model suggests that acupuncture points (acupoints) originate from organizers in embryogenesis. Organizers are singular points in growth control. Acupuncture can cause perturbation of a system with effects similar to simulated annealing. In a clinical trial, the goal of a treatment is to relieve a certain disorder, which corresponds to reaching a certain local optimum in simulated annealing. The self-organizing effect of the system is limited and related to the person's general health and age. Perturbation at acupoints can lead to a stronger local excitation (analogous to a higher annealing temperature) compared to perturbation at non-singular points (placebo control points). Such difference diminishes as the number of perturbed points increases, due to the wider distribution of the limited self-organizing activity. This model explains the following facts from systematic reviews of acupuncture trials: 1. Properly chosen single-acupoint treatment for certain disorders can lead to highly repeatable efficacy above placebo. 2. When multiple acupoints are used, the results can be highly repeatable if the patients are relatively healthy and young, but are usually mixed if the patients are old, frail and have multiple disorders at the same time, as the number of local optima or comorbidities increases. 3. As the number of acupoints used increases, the efficacy difference between sham and real acupuncture often diminishes. The model predicts that the efficacy of acupuncture is negatively correlated with disease chronicity, severity and patient age. This is the first biological-physical model of acupuncture which can predict and guide clinical acupuncture research.
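    Since the model leans on the simulated annealing metaphor (perturbation strength as temperature, disorders as local optima), a minimal annealing loop helps fix the vocabulary. The sketch below optimizes an arbitrary multimodal function; it is a generic illustration of the algorithm, not a simulation of the biological claims.

```python
# Plain simulated annealing on a multimodal function -- a generic sketch of
# the optimization metaphor the model draws on.
import math, random

def energy(x):
    return x**2 + 10 * math.sin(3 * x)    # many local optima

x, temp = random.uniform(-5, 5), 5.0
best = x
for step in range(20_000):
    candidate = x + random.gauss(0, 0.5)  # local perturbation
    dE = energy(candidate) - energy(x)
    # Accept downhill moves always; uphill moves with Boltzmann probability.
    if dE < 0 or random.random() < math.exp(-dE / temp):
        x = candidate
        best = min(best, x, key=energy)
    temp *= 0.9995                        # slow cooling schedule
print(f"x* ~ {best:.3f}, energy ~ {energy(best):.3f}")
```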

  20. Debris flow analysis with a one dimensional dynamic run-out model that incorporates entrained material

    Science.gov (United States)

    Luna, Byron Quan; Remaître, Alexandre; van Asch, Theo; Malet, Jean-Philippe; van Westen, Cees

    2010-05-01

    Estimating the magnitude and intensity of rapid landslides like debris flows is fundamental to quantitatively evaluating the hazard at a specific location. Intensity varies along the travelled course of the flow and can be described by physical features such as deposited volume, velocity, flow height, impact forces and pressures. Dynamic run-out models are able to characterize the distribution of the material and its intensity and to define the zone where the elements at risk will experience an impact. These models can provide valuable inputs for vulnerability and risk calculations. However, most dynamic run-out models assume a constant volume during the motion of the flow, ignoring the important role of material entrained along its path. Consequently, they neglect the fact that the increase in volume enhances the mobility of the flow and can significantly influence the size of the potential impact area. An appropriate erosion mechanism needs to be established in the analysis of debris flows to improve the results of dynamic modeling and, consequently, the quantitative evaluation of risk. The objective is to present and test a simple 1D debris flow model with a material entrainment concept based on limit equilibrium considerations and the generation of excess pore water pressure through undrained loading of the in situ bed material. The debris flow propagation model is based on a one-dimensional finite difference solution of a depth-averaged form of the Navier-Stokes equations of fluid motion. The flow is treated as a laminar one-phase material whose behavior is controlled by a visco-plastic Coulomb-Bingham rheology. The model parameters are evaluated and the model performance is tested on a debris flow event that occurred in 2003 in the Faucon torrent (Southern French Alps).

  1. Long-Run Effects in Large Heterogeneous Panel Data Models with Cross-Sectionally Correlated Errors

    OpenAIRE

    Chudik, Alexander; Mohaddes, Kamiar; Pesaran, M Hashem; Raissi, Mehdi

    2016-01-01

    This paper develops a cross-sectionally augmented distributed lag (CS-DL) approach to the estimation of long-run effects in large dynamic heterogeneous panel data models with cross-sectionally dependent errors. The asymptotic distribution of the CS-DL estimator is derived under coefficient heterogeneity in the case where the time dimension (T) and the cross-section dimension (N) are both large. The CS-DL approach is compared with more standard panel data estimators that are based on autoregre...

  2. Long-run effects in large heterogenous panel data models with cross-sectionally correlated errors

    OpenAIRE

    Chudik, Alexander; Mohaddes, Kamiar; Pesaran, M. Hashem; Raissi, Mehdi

    2015-01-01

    This paper develops a cross-sectionally augmented distributed lag (CS-DL) approach to the estimation of long-run effects in large dynamic heterogeneous panel data models with cross-sectionally dependent errors. The asymptotic distribution of the CS-DL estimator is derived under coefficient heterogeneity in the case where the time dimension (T) and the cross-section dimension (N) are both large. The CS-DL approach is compared with more standard panel data estimators that are based on autoregre...

  3. Finite element modelling of Plantar Fascia response during running on different surface types

    Science.gov (United States)

    Razak, A. H. A.; Basaruddin, K. S.; Salleh, A. F.; Rusli, W. M. R.; Hashim, M. S. M.; Daud, R.

    2017-10-01

    The plantar fascia is a ligament in the human foot, located beneath the skin, that functions to stabilize the longitudinal arch during standing and normal gait. Performing direct experiments on the plantar fascia is difficult since the structure lies underneath soft tissue. The aim of this study is to develop a finite element (FE) model of the foot with plantar fascia and investigate the effect of surface hardness on the biomechanical response of the plantar fascia during running. The plantar fascia model was developed using Solidworks 2015 according to the bone structure of a foot model obtained from the Turbosquid database. Boundary conditions were set based on data obtained from experiments on ground reaction force response during running on surfaces of different hardness. The finite element analysis was performed using Ansys 14. The results showed that the peaks of the stress and strain distributions occurred at the insertion of the plantar fascia into the bone, especially in the calcaneal area. The plantar fascia became stiffer as its Young's modulus increased and was able to resist more load; for the same loading, the strain in the plantar fascia decreased as Young's modulus increased.

  4. Hydrological Modeling in the Bull Run Watershed in Support of a Piloting Utility Modeling Applications (PUMA) Project

    Science.gov (United States)

    Nijssen, B.; Chiao, T. H.; Lettenmaier, D. P.; Vano, J. A.

    2016-12-01

    Hydrologic models with varying complexities and structures are commonly used to evaluate the impact of climate change on future hydrology. While the uncertainties in future climate projections are well documented, uncertainties in streamflow projections associated with hydrologic model structure and parameter estimation have received less attention. In this study, we implemented and calibrated three hydrologic models (the Distributed Hydrology Soil Vegetation Model (DHSVM), the Precipitation-Runoff Modeling System (PRMS), and the Variable Infiltration Capacity model (VIC)) for the Bull Run watershed in northern Oregon using consistent data sources and best practice calibration protocols. The project was part of a Piloting Utility Modeling Applications (PUMA) project with the Portland Water Bureau (PWB) under the umbrella of the Water Utility Climate Alliance (WUCA). Ultimately PWB would use the model evaluation to select a model to perform in-house climate change analysis for Bull Run Watershed. This presentation focuses on the experimental design of the comparison project, project findings and the collaboration between the team at the University of Washington and at PWB. After calibration, the three models showed similar capability to reproduce seasonal and inter-annual variations in streamflow, but differed in their ability to capture extreme events. Furthermore, the annual and seasonal hydrologic sensitivities to changes in climate forcings differed among models, potentially attributable to different model representations of snow and vegetation processes.

  5. Impulse pumping modelling and simulation

    International Nuclear Information System (INIS)

    Pierre, B; Gudmundsson, J S

    2010-01-01

    Impulse pumping is a new pumping method based on propagation of pressure waves. Of particular interest is the application of impulse pumping to artificial lift situations, where fluid is transported from wellbore to wellhead using pressure waves generated at wellhead. The motor driven element of an impulse pumping apparatus is therefore located at wellhead and can be separated from the flowline. Thus operation and maintenance of an impulse pump are facilitated. The paper describes the different elements of an impulse pumping apparatus, reviews the physical principles and details the modelling of the novel pumping method. Results from numerical simulations of propagation of pressure waves in water-filled pipelines are then presented for illustrating impulse pumping physical principles, and validating the described modelling with experimental data.

  6. Bridging experiments, models and simulations

    DEFF Research Database (Denmark)

    Carusi, Annamaria; Burrage, Kevin; Rodríguez, Blanca

    2012-01-01

    understanding of living organisms and also how they can reduce, replace, and refine animal experiments. A fundamental requirement to fulfill these expectations and achieve the full potential of computational physiology is a clear understanding of what models represent and how they can be validated. The present...... of biovariability; 2) testing and developing robust techniques and tools as a prerequisite to conducting physiological investigations; 3) defining and adopting standards to facilitate the interoperability of experiments, models, and simulations; 4) and understanding physiological validation as an iterative process...... that contributes to defining the specific aspects of cardiac electrophysiology the MSE system targets, rather than being only an external test, and that this is driven by advances in experimental and computational methods and the combination of both....

  7. Modelling field scale spatial variation in water run-off, soil moisture, N2O emissions and herbage biomass of a grazed pasture using the SPACSYS model.

    Science.gov (United States)

    Liu, Yi; Li, Yuefen; Harris, Paul; Cardenas, Laura M; Dunn, Robert M; Sint, Hadewij; Murray, Phil J; Lee, Michael R F; Wu, Lianhai

    2018-04-01

    In this study, we evaluated the ability of the SPACSYS model to simulate water run-off, soil moisture, N2O fluxes and grass growth using data generated from a field of the North Wyke Farm Platform. The field-scale model is adapted via a linked and grid-based approach (grid-to-grid) to account not only for temporal dynamics but also for the within-field spatial variation in these key ecosystem indicators. Spatial variability in nutrient and water presence at the field scale is a key source of uncertainty when quantifying nutrient cycling and water movement in an agricultural system. Results demonstrated that the new spatially distributed version of SPACSYS provided a worthy improvement in accuracy over the standard (single-point) version for biomass productivity. No difference in model prediction performance was observed for water run-off, reflecting the closed-system nature of this variable. Similarly, no difference in model prediction performance was found for N2O fluxes, but here the N2O predictions were noticeably poor in both cases. Further developmental work, informed by this study's findings, is proposed to improve model predictions for N2O. Soil moisture results with the spatially distributed version appeared promising, but this promise could not be objectively verified.

  8. EnergyPlus Run Time Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation of simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges in speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and through adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time, based on the code profiling results, are also discussed.

  9. Estimating radiation and temperature data for crop simulation model

    International Nuclear Information System (INIS)

    Ferrer, A.B.; Centeno, H.G.S.; Sheehy, J.E.

    1996-01-01

    Weather (radiation and temperature) and crop characteristics determine the potential production of an irrigated rice crop. Daily weather data are important inputs to ORYZA 1, an eco-physiological crop model. However, in most cases missing values occur, and sometimes daily weather data are not readily available. More than 20 years of historic daily weather data had been collected from six stations in the Philippines -- Albay, Butuan, Munoz, Batac, Aborlan, and Los Banos. Daily weather data were estimated by deriving long-term monthly means and (1) using the same value throughout each month, (2) linearly interpolating between months, and (3) using the SIMMETEO weather generator. A validated ORYZA 1 was run using actual daily weather data. The model was then run again using the weather data obtained from each estimation procedure, and the predicted yields from the different simulation runs were compared. The yields predicted using the different weather data sets for each site differed by as much as 20 percent. Among the three estimation procedures used, the interpolated monthly mean values of weather data gave results comparable with those of model runs using actual weather data.
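    The second estimation procedure, linearly interpolating long-term monthly means to daily values, is easy to sketch. In the example below, the monthly radiation means and their mid-month day-of-year anchors are hypothetical values; the wrap-around interpolation across the year boundary is the substantive part.

```python
# Interpolating long-term monthly means to daily values -- a sketch of the
# second estimation procedure; the monthly means here are invented.
import numpy as np

# Hypothetical long-term monthly mean radiation (MJ m-2 day-1), Jan..Dec,
# anchored at each month's approximate mid-year day.
monthly_means = [14.2, 15.8, 17.9, 18.5, 17.1, 15.0,
                 14.1, 14.6, 15.2, 15.7, 14.8, 13.9]
mid_doy = [15, 45, 74, 105, 135, 166, 196, 227, 258, 288, 319, 349]

days = np.arange(1, 366)
daily = np.interp(days, mid_doy, monthly_means,
                  period=365)   # periodic: wraps across the year boundary
print(daily[:5])                # daily estimates usable as crop-model inputs
```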

  10. A description of the FAMOUS (version XDBUA) climate model and control run

    Directory of Open Access Journals (Sweden)

    A. Osprey

    2008-12-01

    FAMOUS is an ocean-atmosphere general circulation model of low resolution, capable of simulating approximately 120 years of model climate per wallclock day using current high performance computing facilities. It uses most of the same code as HadCM3, a widely used climate model of higher resolution and computational cost, and has been tuned to reproduce the same climate reasonably well. FAMOUS is useful for climate simulations where the computational cost makes the application of HadCM3 unfeasible, either because of the length of simulation or the size of the ensemble desired. We document a number of scientific and technical improvements to the original version of FAMOUS. These improvements include changes to the parameterisations of ozone and sea-ice which alleviate a significant cold bias from high northern latitudes and the upper troposphere, and the elimination of volume-averaged drifts in ocean tracers. A simple model of the marine carbon cycle has also been included. A particular goal of FAMOUS is to conduct millennial-scale paleoclimate simulations of Quaternary ice ages; to this end, a number of useful changes to the model infrastructure have been made.

  11. Algebraic modeling and thermodynamic design of fan-supplied tube-fin evaporators running under frosting conditions

    International Nuclear Information System (INIS)

    Ribeiro, Rafael S.; Hermes, Christian J.L.

    2014-01-01

    In this study, the method of entropy generation minimization (i.e., design aimed at facilitating heat, mass and fluid flows) is used to assess the evaporator design (aspect ratio and fin density), considering the thermodynamic losses due to heat and mass transfer and viscous flow processes. A fully algebraic model was put forward to simulate the thermal-hydraulic behavior of tube-fin evaporator coils running under frosting conditions. The model predictions were validated against experimental data, showing good agreement between calculated and measured counterparts. The optimization exercise pointed out that high-aspect-ratio heat exchanger designs lead to lower entropy generation in cases of fixed cooling capacity and air flow rate constrained by the characteristic curve of the fan. - Highlights: • An algebraic model for frost accumulation on tube-fin heat exchangers was advanced. • Model predictions for cooling capacity and air flow rate were compared with experimental data, with errors within a ±5% band. • A minimum entropy generation criterion was used to optimize the evaporator geometry. • The thermodynamic analysis led to slender designs for fixed cooling capacity and fan characteristics.

  12. Minkowski space pion model inspired by lattice QCD running quark mass

    Directory of Open Access Journals (Sweden)

    Clayton S. Mello

    2017-03-01

    The pion structure in Minkowski space is described in terms of an analytic model of the Bethe–Salpeter amplitude combined with Euclidean Lattice QCD results. The model is physically motivated to take into account the running quark mass, which is fitted to Lattice QCD data. The pion pseudoscalar vertex is associated to the quark mass function, as dictated by dynamical chiral symmetry breaking requirements in the limit of vanishing current quark mass. The quark propagator is analyzed in terms of a spectral representation, and it shows a violation of the positivity constraints. The integral representation of the pion Bethe–Salpeter amplitude is also built. The pion space-like electromagnetic form factor is calculated with a quark electromagnetic current, which satisfies the Ward–Takahashi identity to ensure current conservation. The results for the form factor and weak decay constant are found to be consistent with the experimental data.

  13. MASADA: A MODELING AND SIMULATION AUTOMATED DATA ANALYSIS FRAMEWORK FOR CONTINUOUS DATA-INTENSIVE VALIDATION OF SIMULATION MODELS

    CERN Document Server

    Foguelman, Daniel Jacob; The ATLAS collaboration

    2016-01-01

    Complex networked computer systems are usually subjected to upgrades and enhancements on a continuous basis. Modeling and simulation of such systems helps with guiding their engineering processes, in particular when testing candidate design alternatives directly on the real system is not an option. Models are built and simulation exercises are run guided by specific research and/or design questions. A vast amount of operational conditions for the real system need to be assumed in order to focus on the relevant questions at hand. A typical boundary condition for computer systems is the exogenously imposed workload. Meanwhile, in typical projects huge amounts of monitoring information are logged and stored with the purpose of studying the system’s performance in search for improvements. Also research questions change as systems’ operational conditions vary throughout its lifetime. This context poses many challenges to determine the validity of simulation models. As the behavioral empirical base of the sys...

  14. MASADA: A Modeling and Simulation Automated Data Analysis framework for continuous data-intensive validation of simulation models

    CERN Document Server

    Foguelman, Daniel Jacob; The ATLAS collaboration

    2016-01-01

    Complex networked computer systems are usually subjected to upgrades and enhancements on a continuous basis. Modeling and simulation of such systems helps with guiding their engineering processes, in particular when testing candidate design alternatives directly on the real system is not an option. Models are built and simulation exercises are run guided by specific research and/or design questions. A vast amount of operational conditions for the real system need to be assumed in order to focus on the relevant questions at hand. A typical boundary condition for computer systems is the exogenously imposed workload. Meanwhile, in typical projects huge amounts of monitoring information are logged and stored with the purpose of studying the system’s performance in search for improvements. Also research questions change as systems’ operational conditions vary throughout its lifetime. This context poses many challenges to determine the validity of simulation models. As the behavioral empirical base of the sys...

  15. Radionuclide transport in running waters, sensitivity analysis of bed-load, channel geometry and model discretisation

    International Nuclear Information System (INIS)

    Jonsson, Karin; Elert, Mark

    2006-08-01

    In this report, further investigations of the model concept for radionuclide transport in streams, developed in the SKB report TR-05-03, are presented. Three issues in particular were the focus of the model investigations. The first was to investigate the influence of the assumed channel geometry on the simulation results; the second was to reconsider the applicability of the equation for bed-load transport in the stream model; and the last was to investigate how the model discretisation influences the simulation results. The simulations showed relatively small differences in results when applying different cross-sections in the model. Including the exact shape of the cross-section in the model is therefore not crucial; however, if cross-sectional data exist, the overall shape of the cross-section should be used in the model formulation. This could, e.g., be accomplished by using measured values of the stream width and depth in the middle of the stream and assuming a triangular shape. The bed-load transport was in this study determined for different sediment characteristics, which can be used as an order-of-magnitude estimate if no exact determinations of the bed-load are available. The difference in the calculated bed-load transport for the different materials was, however, found to be limited. The investigation of model discretisation showed that a fine model discretisation to account for numerical effects is probably not important for the performed simulations. However, it can be necessary for accounting for different conditions along a stream; for example, the application of mean slopes instead of individual values in the different stream reaches can result in very different predicted concentrations.

  16. cellGPU: Massively parallel simulations of dynamic vertex models

    Science.gov (United States)

    Sussman, Daniel M.

    2017-10-01

    Vertex models represent confluent tissue by polygonal or polyhedral tilings of space, with the individual cells interacting via force laws that depend on both the geometry of the cells and the topology of the tessellation. This dependence on the connectivity of the cellular network introduces several complications to performing molecular-dynamics-like simulations of vertex models, and in particular makes parallelizing the simulations difficult. cellGPU addresses this difficulty and lays the foundation for massively parallelized, GPU-based simulations of these models. This article discusses its implementation for a pair of two-dimensional models, and compares the typical performance that can be expected between running cellGPU entirely on the CPU versus its performance when running on a range of commercial and server-grade graphics cards. By implementing the calculation of topological changes and forces on cells in a highly parallelizable fashion, cellGPU enables researchers to simulate time- and length-scales previously inaccessible via existing single-threaded CPU implementations. Program Files doi:http://dx.doi.org/10.17632/6j2cj29t3r.1 Licensing provisions: MIT Programming language: CUDA/C++ Nature of problem: Simulations of off-lattice "vertex models" of cells, in which the interaction forces depend on both the geometry and the topology of the cellular aggregate. Solution method: Highly parallelized GPU-accelerated dynamical simulations in which the force calculations and the topological features can be handled on either the CPU or GPU. Additional comments: The code is hosted at https://gitlab.com/dmsussman/cellGPU, with documentation additionally maintained at http://dmsussman.gitlab.io/cellGPUdocumentation

  17. Hippocampal serotonin-1A receptor function in a mouse model of anxiety induced by long-term voluntary wheel running.

    Science.gov (United States)

    Fuss, Johannes; Vogt, Miriam A; Weber, Klaus-Josef; Burke, Teresa F; Gass, Peter; Hensler, Julie G

    2013-10-01

    We have recently demonstrated that, in C57/Bl6 mice, long-term voluntary wheel running is anxiogenic, and focal hippocampal irradiation prevents the increase in anxiety-like behaviors and neurobiological changes in the hippocampus induced by wheel running. Evidence supports a role of hippocampal 5-HT1A receptors in anxiety. Therefore, we investigated hippocampal binding and function of 5-HT1A receptors in this mouse model of anxiety. Four weeks of voluntary wheel running resulted in hippocampal subregion-specific changes in 5-HT1A receptor binding sites and function, as measured by autoradiography of [(3)H]8-hydroxy-2-(di-n-propylamino)tetralin binding and agonist-stimulated binding of [(35)S]GTPγS to G proteins, respectively. In the dorsal CA1 region, 5-HT1A receptor binding and function were not altered by wheel running or irradiation. In the dorsal dentate gyrus and CA2/3 region, 5-HT1A receptor function was decreased not only by running but also by irradiation. In the ventral pyramidal layer, wheel running resulted in a decrease of 5-HT1A receptor function, which was prevented by irradiation. Neither irradiation nor wheel running affected 5-HT1A receptors in the medial prefrontal cortex or in the dorsal or median raphe nuclei. Our data indicate that downregulation of 5-HT1A receptor function in the ventral pyramidal layer may play a role in anxiety-like behavior induced by wheel running. Copyright © 2013 Wiley Periodicals, Inc.

  18. Distributed simulation a model driven engineering approach

    CERN Document Server

    Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent

    2016-01-01

    Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.

  19. Crowd Human Behavior for Modeling and Simulation

    Science.gov (United States)

    2009-08-06

    Crowd Human Behavior for Modeling and Simulation. Elizabeth Mezzacappa, Ph.D. & Gordon Cooke, MEME, Target Behavioral Response Laboratory, ARDEC. Conference presentation covering 2008 to 2009. The presentation addresses "understanding human behavior" and "model validation and verification" and focuses on modeling and simulation of crowds from a social scientist's perspective.

  20. Simulation Model of Mobile Detection Systems

    International Nuclear Information System (INIS)

    Edmunds, T.; Faissol, D.; Yao, Y.

    2009-01-01

    In this paper, we consider a mobile source that we attempt to detect with man-portable, vehicle-mounted or boat-mounted radiation detectors. The source is assumed to transit an area populated with these mobile detectors, and the objective is to detect the source before it reaches a perimeter. We describe a simulation model developed to estimate the probability that one of the mobile detectors will come into close proximity of the moving source and detect it. We illustrate with a maritime simulation example. Our simulation takes place in a 10 km by 5 km rectangular bay patrolled by boats equipped with 2-inch x 4-inch x 16-inch NaI detectors. Boats to be inspected enter the bay and randomly proceed to one of seven harbors on the shore. A source-bearing boat enters the mouth of the bay and proceeds to a pier on the opposite side. We wish to determine the probability that the source is detected and its range from the target when detected. Patrol boats select the nearest inbound boat for inspection and initiate an intercept course. Once within the operational range of the detection system, a detection algorithm is started. If the patrol boat confirms the source is not present, it selects the next nearest boat for inspection. Each run of the simulation ends either when a patrol successfully detects a source or when the source reaches its target. Several statistical detection algorithms have been implemented in the simulation model. First, a simple k-sigma algorithm, which alarms when the counts in a time window exceed the mean background plus k times the standard deviation of the background, is available to the user. The time window used is optimized with respect to the signal-to-background ratio for that range and relative speed. Second, a sequential probability ratio test [Wald 1947] is available, configured in this simulation with a target false positive probability of 0.001 and a false negative probability of 0.1. This test is utilized when the mobile detector maintains
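    Both detector logics are classical and compact enough to sketch. Below, the k-sigma rule alarms when a window's counts exceed the background mean by k standard deviations (Poisson, so the standard deviation is roughly the square root of the mean), and Wald's SPRT accumulates a Poisson log-likelihood ratio between the source-plus-background and background-only hypotheses until it crosses the thresholds implied by the stated error probabilities. The count rates in the demo call are invented.

```python
# The two detector logics described above -- minimal sketches, not the
# paper's implementation. Error probabilities follow the stated targets.
import math

def k_sigma_alarm(window_counts, mean_bg, k=3.0):
    return window_counts > mean_bg + k * math.sqrt(mean_bg)  # Poisson: sd ~ sqrt(mean)

def sprt(counts, bg_rate, src_rate, alpha=0.001, beta=0.1):
    """Wald's sequential probability ratio test on a stream of interval counts."""
    upper, lower = math.log((1 - beta) / alpha), math.log(beta / (1 - alpha))
    llr = 0.0
    for n in counts:
        # Poisson log-likelihood ratio for one interval:
        # n*log(lambda1/lambda0) - (lambda1 - lambda0), with lambda1 = bg + src.
        llr += n * math.log((bg_rate + src_rate) / bg_rate) - src_rate
        if llr >= upper:
            return "source detected"
        if llr <= lower:
            return "background only"
    return "undecided"

print(sprt([15, 18, 14, 20, 17], bg_rate=10.0, src_rate=4.0))
```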

  1. Simulation Model for DMEK Donor Preparation.

    Science.gov (United States)

    Mittal, Vikas; Mittal, Ruchi; Singh, Swati; Narang, Purvasha; Sridhar, Priti

    2018-04-09

    To demonstrate a simulation model for donor preparation in Descemet membrane endothelial keratoplasty (DMEK). The inner transparent membrane of the onion (Allium cepa) was used as a simulation model for human Descemet membrane (DM). Surgical video (see Video, Supplemental Digital Content 1, http://links.lww.com/ICO/A663) demonstrating all the steps was recorded. This model closely simulates human DM and helps DMEK surgeons learn the nuances of DM donor preparation steps with ease. The technique is repeatable, and the model is cost-effective. The described simulation model can assist surgeons and eye bank technicians to learn steps in donor preparation in DMEK.

  2. Analysis of the traditional vehicle’s running cost and the electric vehicle’s running cost under car-following model

    Science.gov (United States)

    Tang, Tie-Qiao; Xu, Ke-Wei; Yang, Shi-Chun; Shang, Hua-Yan

    2016-03-01

    In this paper, we use car-following theory to study the running costs of traditional vehicles and electric vehicles. The numerical results illustrate that the traditional vehicle's running cost is larger than that of the electric vehicle and that the system's total running cost drops as the proportion of electric vehicles increases, which shows that the electric vehicle is better than the traditional vehicle from the perspective of running cost.

  3. Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2014-01-01

    This book includes extended and revised versions of a set of selected papers from the 2012 International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2012) which was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and held in Rome, Italy. SIMULTECH 2012 was technically co-sponsored by the Society for Modeling & Simulation International (SCS), GDR I3, Lionphant Simulation, Simulation Team and IFIP and held in cooperation with AIS Special Interest Group of Modeling and Simulation (AIS SIGMAS) and the Movimento Italiano Modellazione e Simulazione (MIMOS).

  4. An introduction to enterprise modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ostic, J.K.; Cannon, C.E. [Los Alamos National Lab., NM (United States). Technology Modeling and Analysis Group

    1996-09-01

    As part of an ongoing effort to continuously improve productivity, quality, and efficiency of both industry and Department of Energy enterprises, Los Alamos National Laboratory is investigating various manufacturing and business enterprise simulation methods. A number of enterprise simulation software models are being developed to enable engineering analysis of enterprise activities. In this document the authors define the scope of enterprise modeling and simulation efforts, and review recent work in enterprise simulation at Los Alamos National Laboratory as well as at other industrial, academic, and research institutions. References of enterprise modeling and simulation methods and a glossary of enterprise-related terms are provided.

  5. Structured building model reduction toward parallel simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University

    2013-08-26

    Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any interprocessor communication.

  6. An Iterative Algorithm to Determine the Dynamic User Equilibrium in a Traffic Simulation Model

    Science.gov (United States)

    Gawron, C.

    An iterative algorithm to determine the dynamic user equilibrium with respect to link costs defined by a traffic simulation model is presented. Each driver's route choice is modeled by a discrete probability distribution which is used to select a route in the simulation. After each simulation run, the probability distribution is adapted to minimize the travel costs. Although the algorithm does not depend on the simulation model, a queuing model is used for performance reasons. The stability of the algorithm is analyzed for a simple example network. As an application example, a dynamic version of Braess's paradox is studied.
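    The iteration the abstract describes (simulate, observe route costs, adapt each driver's probability distribution toward cheaper routes, repeat) can be sketched generically. In the Python toy below, a BPR-style congestion function stands in for the queuing simulation, and the damped logit-style update is one plausible adaptation rule, not the paper's exact scheme; all network numbers are invented.

```python
# Iterative route-choice relaxation toward an approximate user equilibrium --
# a generic sketch; a BPR-style cost stands in for the queuing simulation.
import numpy as np

def update_probs(probs, costs, step=0.1):
    """Shift probability mass toward low-cost routes (damped, logit-style)."""
    attractiveness = np.exp(-costs / costs.mean())
    target = attractiveness / attractiveness.sum()
    return (1 - step) * probs + step * target      # damping aids stability

def simulated_costs(probs, free_flow, capacity, demand=1000):
    flows = demand * probs                          # expected route flows
    return free_flow * (1 + 0.15 * (flows / capacity) ** 4)  # congestion effect

probs = np.ones(3) / 3                              # three alternative routes
free_flow = np.array([10.0, 12.0, 15.0])            # invented travel times
capacity = np.array([400.0, 600.0, 800.0])          # invented capacities
for it in range(200):
    probs = update_probs(probs, simulated_costs(probs, free_flow, capacity))
print(probs.round(3))                               # approximate equilibrium split
```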

  7. A physiological production model for cacao : results of model simulations

    NARCIS (Netherlands)

    Zuidema, P.A.; Leffelaar, P.A.

    2002-01-01

    CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.

  8. Prosthetic model, but not stiffness or height, affects the metabolic cost of running for athletes with unilateral transtibial amputations.

    Science.gov (United States)

    Beck, Owen N; Taboga, Paolo; Grabowski, Alena M

    2017-07-01

    Running-specific prostheses enable athletes with lower limb amputations to run by emulating the spring-like function of biological legs. Current prosthetic stiffness and height recommendations aim to mitigate kinematic asymmetries for athletes with unilateral transtibial amputations. However, it is unclear how different prosthetic configurations influence the biomechanics and metabolic cost of running. Consequently, we investigated how prosthetic model, stiffness, and height affect the biomechanics and metabolic cost of running. Ten athletes with unilateral transtibial amputations each performed 15 running trials at 2.5 or 3.0 m/s while we measured ground reaction forces and metabolic rates. Athletes ran using three different prosthetic models with five different stiffness category and height combinations per model. Use of an Ottobock 1E90 Sprinter prosthesis reduced metabolic cost by 4.3 and 3.4% compared with use of Freedom Innovations Catapult [fixed effect (β) = -0.177; P < 0.001] and Össur Flex-Run (β = -0.139; P = 0.002) prostheses, respectively. Neither prosthetic stiffness ( P ≥ 0.180) nor height ( P = 0.062) affected the metabolic cost of running. The metabolic cost of running was related to lower peak (β = 0.649; P = 0.001) and stance average (β = 0.772; P = 0.018) vertical ground reaction forces, prolonged ground contact times (β = -4.349; P = 0.012), and decreased leg stiffness (β = 0.071; P < 0.001) averaged from both legs. Metabolic cost was reduced with more symmetric peak vertical ground reaction forces (β = 0.007; P = 0.003) but was unrelated to stride kinematic symmetry ( P ≥ 0.636). Therefore, prosthetic recommendations based on symmetric stride kinematics do not necessarily minimize the metabolic cost of running. Instead, an optimal prosthetic model, which improves overall biomechanics, minimizes the metabolic cost of running for athletes with unilateral transtibial amputations. NEW & NOTEWORTHY The metabolic cost of running for

  9. Simulation modeling and analysis with Arena

    CERN Document Server

    Altiok, Tayfur

    2007-01-01

    Simulation Modeling and Analysis with Arena is a highly readable textbook which treats the essentials of the Monte Carlo discrete-event simulation methodology, and does so in the context of the popular Arena simulation environment. It treats simulation modeling as an in-vitro laboratory that facilitates the understanding of complex systems and experimentation with what-if scenarios in order to estimate their performance metrics. The book contains chapters on the simulation modeling methodology and the underpinnings of discrete-event systems, as well as the relevant underlying probability, statistics, stochastic processes, input analysis, model validation and output analysis. All simulation-related concepts are illustrated in numerous Arena examples, encompassing production lines, manufacturing and inventory systems, transportation systems, and computer information systems in networked settings. It introduces the concept of discrete event Monte Carlo simulation, the most commonly used methodology for modeli...

  10. Potts-model grain growth simulations: Parallel algorithms and applications

    Energy Technology Data Exchange (ETDEWEB)

    Wright, S.A.; Plimpton, S.J.; Swiler, T.P. [and others

    1997-08-01

    Microstructural morphology and grain boundary properties often control the service properties of engineered materials. This report uses the Potts-model to simulate the development of microstructures in realistic materials. Three areas of microstructural morphology simulations were studied: the development of massively parallel algorithms for Potts-model grain growth simulations, modeling of mass transport via diffusion in these simulated microstructures, and the development of a gradient-dependent Hamiltonian to simulate columnar grain growth. Potts grain growth models for massively parallel supercomputers were developed for the conventional Potts-model in both two and three dimensions. Simulations using these parallel codes showed self-similar grain growth and no finite size effects for previously unapproachable large scale problems. In addition, new enhancements to the conventional Metropolis algorithm used in the Potts-model were developed to accelerate the calculations. These techniques enable both the sequential and parallel algorithms to run faster and use essentially an infinite number of grain orientation values to avoid non-physical grain coalescence events. Mass transport phenomena in polycrystalline materials were studied in two dimensions using numerical diffusion techniques on microstructures generated using the Potts-model. The results of the mass transport modeling showed excellent quantitative agreement with one-dimensional diffusion problems; however, the results also suggest that transient multi-dimensional diffusion effects cannot be parameterized as the product of the grain boundary diffusion coefficient and the grain boundary width. Instead, both properties are required. Gradient-dependent grain growth mechanisms were included in the Potts-model by adding an extra term to the Hamiltonian. Under normal grain growth, the primary driving term is the curvature of the grain boundary, which is included in the standard Potts-model Hamiltonian.
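
    For readers unfamiliar with the method, a minimal serial Metropolis sweep for a 2-D Potts grain-growth lattice might look like the sketch below; the lattice size, state count, and temperature are arbitrary, and the report's parallel decomposition and acceleration techniques are not shown.

      import numpy as np

      rng = np.random.default_rng(1)
      Q, N = 64, 64                        # orientation states, lattice size (arbitrary)
      spins = rng.integers(0, Q, size=(N, N))

      def unlike_neighbours(s, i, j):
          """Potts site energy: count of unlike nearest neighbours (periodic)."""
          nbrs = (s[(i + 1) % N, j], s[(i - 1) % N, j],
                  s[i, (j + 1) % N], s[i, (j - 1) % N])
          return sum(n != s[i, j] for n in nbrs)

      def metropolis_sweep(s, kT=0.5):
          for _ in range(N * N):
              i, j = rng.integers(0, N, size=2)
              old = s[i, j]
              e_old = unlike_neighbours(s, i, j)
              s[i, j] = rng.integers(0, Q)           # trial reorientation
              dE = unlike_neighbours(s, i, j) - e_old
              if dE > 0 and rng.random() >= np.exp(-dE / kT):
                  s[i, j] = old                      # reject the flip
          return s

      for sweep in range(5):
          spins = metropolis_sweep(spins)
      print("grain-boundary fraction:", np.mean(spins != np.roll(spins, 1, axis=0)))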

  11. Short-run analysis of fiscal policy and the current account in a finite horizon model

    OpenAIRE

    Heng-fu Zou

    1995-01-01

    This paper utilizes a technique developed by Judd to quantify the short-run effects of fiscal policies and income shocks on the current account in a small open economy. It is found that: (1) a future increase in government spending improves the short-run current account; (2) a future tax increase worsens the short-run current account; (3) a present increase in the government spending worsens the short-run current account dollar by dollar, while a present increase in the income improves the cu...

  12. RG running in a minimal UED model in light of recent LHC Higgs mass bounds

    International Nuclear Information System (INIS)

    Blennow, Mattias; Melbéus, Henrik; Ohlsson, Tommy; Zhang, He

    2012-01-01

    We study how the recent ATLAS and CMS Higgs mass bounds affect the renormalization group running of the physical parameters in universal extra dimensions. Using the running of the Higgs self-coupling constant, we derive bounds on the cutoff scale of the extra-dimensional theory itself. We show that the running of physical parameters, such as the fermion masses and the CKM mixing matrix, is significantly restricted by these bounds. In particular, we find that the running of the gauge couplings cannot be sufficient to allow gauge unification at the cutoff scale.
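
    The flavor of such a cutoff bound can be illustrated with a toy 4-D one-loop integration that keeps only the dominant quartic and top-Yukawa terms; the power-law Kaluza-Klein enhancement that actually drives the UED running is deliberately omitted here, and the numerical inputs are rough.

      import numpy as np

      def beta_lambda(lam, yt):
          """Dominant one-loop terms of the 4-D quartic-coupling beta function."""
          return (24 * lam**2 + 12 * lam * yt**2 - 6 * yt**4) / (16 * np.pi**2)

      lam, yt = 0.13, 0.94      # rough electroweak-scale inputs (illustrative)
      t, dt = 0.0, 1e-3         # t = ln(mu / m_Z)
      while lam > 0 and t < 40:
          lam += beta_lambda(lam, yt) * dt   # yt is held fixed for simplicity
          t += dt
      print("quartic coupling turns negative near mu = m_Z * exp(%.1f)" % t)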

  13. Stereological Study on the Positive Effect of Running Exercise on the Capillaries in the Hippocampus in a Depression Model

    Directory of Open Access Journals (Sweden)

    Linmu Chen

    2017-11-01

    Full Text Available Running exercise is an effective method to improve depressive symptoms when combined with drugs. However, the underlying mechanisms are not fully clear. Cerebral blood flow perfusion in depressed patients is significantly lower in the hippocampus. Physical activity can achieve cerebrovascular benefits. The purpose of this study was to evaluate the impact of running exercise on capillaries in the hippocampal CA1 and dentate gyrus (DG) regions. The chronic unpredictable stress (CUS) depression model was used in this study. CUS rats were given 4 weeks of running exercise from the fifth week to the eighth week (20 min every day from Monday to Friday each week). The sucrose consumption test was used to measure anhedonia. Furthermore, stereological methods were used to investigate the capillary changes among the control group, CUS/Standard group and CUS/Running group. Sucrose consumption significantly increased in the CUS/Running group. Running exercise has positive effects on capillary parameters in the hippocampal CA1 and DG regions, such as the total volume, total length and total surface area. These results demonstrate that the protection of capillaries in the hippocampal CA1 and DG by running exercise might be one of the structural bases for the exercise-induced treatment of depression-like behavior, and suggest that capillaries, which respond to both drugs and behavior, may be considered a new target for depression treatment in the future.

  14. Evolution Model and Simulation of Profit Model of Agricultural Products Logistics Financing

    Science.gov (United States)

    Yang, Bo; Wu, Yan

    2018-03-01

    Agricultural products logistics financial warehousing business mainly involves three parties: agricultural production and processing enterprises, third-party logistics enterprises and financial institutions. To enable the three parties to achieve a win-win situation, the article first derives the replicator dynamics and evolutionarily stable strategies governing the three parties' business participation; it then uses the NetLogo simulation platform and an overall multi-agent modeling and simulation method to establish an evolutionary game simulation model, runs the model under different revenue parameters, and finally analyzes the simulation results. The aim is a mutually beneficial, win-win outcome for the three participants in the agricultural products logistics financing warehouse business, thus promoting the smooth flow of the agricultural products logistics business.
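
    The replicator-dynamics building block the abstract refers to can be sketched in a few lines; the two-strategy payoffs below are placeholders rather than the paper's revenue parameters, and the full three-party NetLogo model is not reproduced.

      import numpy as np

      def replicator_step(x, payoff, dt=0.01):
          """One Euler step of replicator dynamics for strategy shares x."""
          return x + dt * x * (payoff - x @ payoff)

      x = np.array([0.4, 0.6])          # shares choosing {participate, defect}
      payoff = np.array([1.5, 1.0])     # placeholder per-strategy revenues
      for _ in range(2000):
          x = replicator_step(x, payoff)
          x /= x.sum()                  # guard against numerical drift
      print(x)                          # converges toward the higher-payoff strategy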

  15. Modelling and simulation of a heat exchanger

    Science.gov (United States)

    Xia, Lei; Deabreu-Garcia, J. Alex; Hartley, Tom T.

    1991-01-01

    Two models for two different control systems are developed for a parallel heat exchanger. First, by spatially lumping a heat exchanger model, a good approximate model of high system order is produced. Model reduction techniques are then applied to these to obtain low-order models suitable for dynamic analysis and control design. The simulation method is discussed to ensure a valid simulation result.

  16. Debris flow run off simulation and verification ‒ case study of Chen-You-Lan Watershed, Taiwan

    Directory of Open Access Journals (Sweden)

    M.-L. Lin

    2005-01-01

    Full Text Available In 1996, typhoon Herb struck the central Taiwan area, causing severe debris flows in many subwatersheds of the Chen-You-Lan river watershed. More severe cases of debris flow occurred following the Chi-Chi earthquake of 1999. In order to identify the potentially affected area and its severity, the ability to simulate the flow route of debris is desirable. In this research, numerical simulation of the debris flow deposition process was carried out using FLO-2D, adopting the Chui-Sue river watershed as the study area. A sensitivity study of the parameters used in the numerical model was conducted and adjustments were made empirically. The micro-geomorphic database of the Chui-Sue river watershed was generated and analyzed to understand the terrain variations caused by the debris flow. Based on the micro-geomorphic analysis, the position and volume of the debris deposited in the downstream area of the Chui-Sue river watershed were determined. The simulated results appeared to agree fairly well with the results of the micro-geomorphic study of the area when not affected by other inflow rivers, and the trends of debris distribution in the study area appeared to be fairly consistent.

  17. Cloud radiative effects and changes simulated by the Coupled Model Intercomparison Project Phase 5 models

    Science.gov (United States)

    Shin, Sun-Hee; Kim, Ok-Yeon; Kim, Dongmin; Lee, Myong-In

    2017-07-01

    Using 32 CMIP5 (Coupled Model Intercomparison Project Phase 5) models, this study examines the fidelity of the simulated cloud amount and cloud radiative effects (CREs) in the historical run driven by observed external radiative forcing for 1850-2005, and their future changes in the RCP (Representative Concentration Pathway) 4.5 scenario runs for 2006-2100. Validation metrics for the historical run are designed to examine the accuracy of the representation of spatial patterns for the climatological mean and for annual and interannual variations of clouds and CREs. The models show large spread in the simulation of cloud amounts, particularly low cloud amount. The observed relationship between cloud amount and the controlling large-scale environment is also reproduced with widely varying skill across the models. Based on the validation metrics, four models (ACCESS1.0, ACCESS1.3, HadGEM2-CC, and HadGEM2-ES) are selected as the best models, and the average of these four performs more skillfully than the multimodel ensemble average. All models project global-mean SST warming with the increase of greenhouse gases, but the magnitude varies across the simulations between 1 and 2 K, which is largely attributable to differences in the change of cloud amount and distribution. The models that simulate more SST warming show a greater increase in the net CRE due to reduced low cloud and increased incoming shortwave radiation, particularly over the marine boundary layer regions of the subtropics. The selected best-performing models project a significant reduction in global-mean cloud amount of about -0.99% K-1 and a net radiative warming of 0.46 W m-2 K-1, suggesting a role for positive cloud feedback in global warming.
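
    For reference, the net CRE such studies evaluate is conventionally computed as clear-sky minus all-sky top-of-atmosphere fluxes; the sketch below uses the standard CMIP variable names but random placeholder arrays in place of model output, and skips area weighting.

      import numpy as np

      rng = np.random.default_rng(2)
      shape = (12, 90, 144)                   # months x lat x lon (toy grid)
      rsut = rng.uniform(90, 110, shape)      # all-sky reflected shortwave at TOA
      rsutcs = rng.uniform(70, 90, shape)     # clear-sky counterpart
      rlut = rng.uniform(230, 250, shape)     # all-sky outgoing longwave
      rlutcs = rng.uniform(250, 270, shape)   # clear-sky counterpart

      sw_cre = rsutcs - rsut                  # negative: clouds reflect sunlight
      lw_cre = rlutcs - rlut                  # positive: clouds trap longwave
      net_cre = sw_cre + lw_cre
      print("mean net CRE (toy data): %.1f W m-2" % net_cre.mean())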

  18. Simulation-optimization framework for multi-site multi-season hybrid stochastic streamflow modeling

    Science.gov (United States)

    Srivastav, Roshan; Srinivasan, K.; Sudheer, K. P.

    2016-11-01

    A simulation-optimization (S-O) framework is developed for the hybrid stochastic modeling of multi-site multi-season streamflows. The multi-objective optimization model formulated is the driver and the multi-site, multi-season hybrid matched block bootstrap model (MHMABB) is the simulation engine within this framework. The multi-site multi-season simulation model is the extension of the existing single-site multi-season simulation model. A robust and efficient evolutionary search based technique, namely the non-dominated sorting based genetic algorithm (NSGA-II), is employed as the solution technique for the multi-objective optimization within the S-O framework. The objective functions employed are related to the preservation of the multi-site critical deficit run sum, and the constraints introduced are concerned with the hybrid model parameter space and the preservation of certain statistics (such as inter-annual dependence and/or skewness of aggregated annual flows). The efficacy of the proposed S-O framework is brought out through a case example from the Colorado River basin. The proposed multi-site multi-season model AMHMABB (whose parameters are obtained from the proposed S-O framework) preserves the temporal as well as the spatial statistics of the historical flows. Also, the other multi-site deficit run characteristics, namely the number of runs, the maximum run length, the mean run sum and the mean run length, are well preserved by the AMHMABB model. Overall, the proposed AMHMABB model is able to show better streamflow modeling performance when compared with the simulation based SMHMABB model, plausibly due to the significant role played by: (i) the objective functions related to the preservation of multi-site critical deficit run sum; (ii) the huge hybrid model parameter space available for the evolutionary search and (iii) the constraint on the preservation of the inter-annual dependence. Split-sample validation results indicate that the AMHMABB model is ...

  19. Biocellion: accelerating computer simulation of multicellular biological system models.

    Science.gov (United States)

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-11-01

    Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information.

  20. VHDL simulation with access to transistor models

    Science.gov (United States)

    Gibson, J.

    1991-01-01

    Hardware description languages such as VHDL have evolved to aid in the design of systems with large numbers of elements and a wide range of electronic and logical abstractions. For high performance circuits, behavioral models may not be able to efficiently include enough detail to give designers confidence in a simulation's accuracy. One option is to provide a link between the VHDL environment and a transistor level simulation environment. The coupling of the Vantage Analysis Systems VHDL simulator and the NOVA simulator provides the combination of VHDL modeling and transistor modeling.

  1. Policy advice derived from simulation models

    NARCIS (Netherlands)

    Brenner, T.; Werker, C.

    2009-01-01

    When advising policy we face the fundamental problem that economic processes are connected with uncertainty and thus policy can err. In this paper we show how the use of simulation models can reduce policy errors. We suggest that policy is best based on so-called abductive simulation models, which ...

  2. A Lookahead Behavior Model for Multi-Agent Hybrid Simulation

    Directory of Open Access Journals (Sweden)

    Mei Yang

    2017-10-01

    Full Text Available In the military field, multi-agent simulation (MAS) plays an important role in studying wars statistically. For a military simulation system, which involves large-scale entities and generates a very large number of interactions during the runtime, the issue of how to improve the running efficiency is of great concern for researchers. Current solutions mainly use hybrid simulation to reduce the number of updates and synchronizations, where some important continuous models are maintained implicitly to keep the system dynamics, and partial resynchronization (PR) is chosen as the preferable state update mechanism. However, problems such as resynchronization interval selection and cyclic dependency remain unsolved in PR, which easily lead to low update efficiency and infinite looping of the state update process. To address these problems, this paper proposes a lookahead behavior model (LBM) to implement PR-based hybrid simulation. In LBM, a minimal safe time window is used to predict the interactions between implicit models, upon which the resynchronization interval can be efficiently determined. Moreover, the LBM gives an estimated state value in the lookahead process so as to break the state-dependent cycle. The simulation results show that, compared with traditional mechanisms, LBM requires fewer updates and synchronizations.

  3. Model Validation for Simulations of Vehicle Systems

    Science.gov (United States)

    2012-08-01

    The validation methodology draws on resampling techniques such as the bootstrap and the jackknife (Efron, 1979; Efron and Gong, 1983) and is demonstrated on models from Sandia National Laboratories and a battery model developed in the Automotive Research Center, a US Army Center of Excellence for modeling and simulation of ground vehicle systems.

  4. Transient Modeling and Simulation of Compact Photobioreactors

    OpenAIRE

    Ribeiro, Robert Luis Lara; Mariano, André Bellin; Souza, Jeferson Avila; Vargas, Jose Viriato Coelho

    2008-01-01

    In this paper, a mathematical model is developed to make possible the simulation of microalgae growth and its dependency on medium temperature and light intensity. The model is utilized to simulate a compact photobioreactor response in time with physicochemical parameters of the microalgae Phaeodactylum tricornutum. The model allows for the prediction of the transient and local evolution of the biomass concentration in the photobioreactor with low computational time. As a result, the model is ...

  5. Simulations

    CERN Document Server

    Ngada, Narcisse

    2015-06-15

    The complexity and cost of building and running high-power electrical systems make the use of simulations unavoidable. The simulations available today provide great understanding about how systems really operate. This paper helps the reader to gain an insight into simulation in the field of power converters for particle accelerators. Starting with the definition and basic principles of simulation, two simulation types, as well as their leading tools, are presented: analog and numerical simulations. Some practical applications of each simulation type are also considered. The final conclusion then summarizes the main important items to keep in mind before opting for a simulation tool or before performing a simulation.

  6. Recent updates in the aerosol component of the C-IFS model run by ECMWF

    Science.gov (United States)

    Remy, Samuel; Boucher, Olivier; Hauglustaine, Didier; Kipling, Zak; Flemming, Johannes

    2017-04-01

    The Composition-Integrated Forecast System (C-IFS) is a global atmospheric composition forecasting tool, run by ECMWF within the framework of the Copernicus Atmosphere Monitoring Service (CAMS). The aerosol model of C-IFS is a simple bulk scheme that forecasts five species: dust, sea salt, black carbon, organic matter and sulfate. Three bins represent dust and sea salt, covering the super-coarse, coarse and fine modes of these species (Morcrette et al., 2009). This talk will present recent updates of the aerosol model and introduce forthcoming developments, together with the impact of these changes as scores measured against AERONET Aerosol Optical Depth (AOD) and Airbase PM10 observations. The next cycle of C-IFS will include a mass fixer, because the semi-Lagrangian advection scheme used in C-IFS is not mass-conservative. C-IFS now offers the possibility to emit biomass-burning aerosols at an injection height provided by a new version of the Global Fire Assimilation System (GFAS). Secondary Organic Aerosol (SOA) production will be scaled on non-biomass-burning CO fluxes. This approach allows the anthropogenic contribution to SOA production to be represented; it brought a notable improvement in the skill of the model, especially over Europe. Lastly, the emissions of SO2 are now provided by the MACCity inventory instead of an older version of the EDGAR dataset. The seasonal and yearly variability of SO2 emissions is better captured by the MACCity dataset. Upcoming developments of the aerosol model of C-IFS consist mainly of the implementation of a nitrate and ammonium module, with two bins (fine and coarse) for nitrate. Nitrate and ammonium sulfate particle formation from gaseous precursors is represented following Hauglustaine et al. (2014); formation of coarse nitrate over pre-existing sea-salt or dust particles is also represented. This extension of the forward model improved scores over heavily populated areas such as Europe, China and Eastern ...
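
    A generic proportional mass fixer of the kind mentioned above (not necessarily the exact C-IFS scheme) simply rescales the advected tracer so that its mass-weighted global integral matches the pre-advection value:

      import numpy as np

      def proportional_mass_fix(field, target_mass, cell_mass):
          """Rescale a non-negative tracer field to restore its global mass."""
          return field * (target_mass / np.sum(field * cell_mass))

      rng = np.random.default_rng(3)
      cell_mass = rng.uniform(0.5, 1.5, 1000)       # air mass per grid cell
      tracer = rng.uniform(0.0, 1.0, 1000)          # mixing ratio before advection
      mass0 = np.sum(tracer * cell_mass)
      tracer = tracer * rng.uniform(0.98, 1.03, 1000)    # mimic advection error
      tracer = proportional_mass_fix(tracer, mass0, cell_mass)
      print(np.isclose(np.sum(tracer * cell_mass), mass0))   # True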

  7. Driving-Simulator-Based Test on the Effectiveness of Auditory Red-Light Running Vehicle Warning System Based on Time-To-Collision Sensor

    Directory of Open Access Journals (Sweden)

    Xuedong Yan

    2014-02-01

    Full Text Available The collision avoidance warning system is an emerging technology designed to assist drivers in avoiding red-light running (RLR) collisions at intersections. The aim of this paper is to evaluate the effect of auditory warning information on collision avoidance behaviors in RLR pre-crash scenarios and further to examine the causal relationships among the relevant factors. A driving-simulator-based experiment was designed and conducted with 50 participants. The data from the experiments were analyzed using ANOVA and structural equation modeling (SEM). The collision-avoidance-related variables were measured in terms of brake reaction time (BRT), maximum deceleration and lane deviation. It was found that the collision avoidance warning system results in lower collision rates compared to the no-warning condition and leads to shorter reaction times, larger maximum deceleration and less lane deviation. Furthermore, the SEM analysis illustrates that the auditory warning information in fact has both direct and indirect effects on the occurrence of collisions, and the indirect effect plays a more important role in collision avoidance than the direct effect. Essentially, the auditory warning information can assist drivers in detecting RLR vehicles in a timely manner, thus providing drivers more adequate time and space to decelerate and avoid collisions with conflicting vehicles.

  8. Whole-building Hygrothermal Simulation Model

    DEFF Research Database (Denmark)

    Rode, Carsten; Grau, Karl

    2003-01-01

    An existing integrated simulation tool for dynamic thermal simulation of building was extended with a transient model for moisture release and uptake in building materials. Validation of the new model was begun with comparison against measurements in an outdoor test cell furnished with single...... materials. Almost quasi-steady, cyclic experiments were used to compare the indoor humidity variation and the numerical results of the integrated simulation tool with the new moisture model. Except for the case with chipboard as furnishing, the predictions of indoor humidity with the detailed model were...

  9. Wave Run-Up on Offshore Wind Turbines

    DEFF Research Database (Denmark)

    Ramirez, Jorge Robert Rodriguez

    This study has investigated the interaction of water waves with a circular structure known as wave run-up phenomenon. This run-up phenomenon has been simulated by the use of computational fluid dynamic models. The numerical model (NS3) used in this study has been verified rigorously against a num...

  10. Modeling and simulation of normal and hemiparetic gait

    Science.gov (United States)

    Luengas, Lely A.; Camargo, Esperanza; Sanchez, Giovanni

    2015-09-01

    Gait is the collective term for the two types of bipedal locomotion, walking and running. This paper is focused on walking. The analysis of human gait is of interest to many different disciplines, including biomechanics, human-movement science, rehabilitation and medicine in general. Here we present a new model that is capable of reproducing the properties of walking, normal and pathological. The aim of this paper is to establish the biomechanical principles that underlie human walking by using the Lagrange method. Constraint forces from a Rayleigh dissipation function, through which the effect of the gait on the tissues is considered, are included. Depending on the value of the factor in the Rayleigh dissipation function, both normal and pathological gait can be simulated. We first apply the model to normal gait and then to permanent hemiparetic gait. Anthropometric data of an adult person are used in the simulation; anthropometric data for children can also be used, provided existing anthropometric tables are consulted. Validation of these models includes simulations of passive dynamic gait on level ground. The dynamic walking approach provides a new perspective on gait analysis, focusing on the kinematics and kinetics of gait. There have been studies and simulations of normal human gait, but few of them have focused on abnormal, especially hemiparetic, gait. Quantitative comparisons of the model predictions with gait measurements show that the model can reproduce the significant characteristics of normal gait.
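
    The Lagrange-with-dissipation machinery the abstract invokes has the generic form below, written here in general coordinates rather than the paper's specific gait variables:

      \frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial L}{\partial \dot q_i}\right)
        - \frac{\partial L}{\partial q_i}
        + \frac{\partial \mathcal{F}}{\partial \dot q_i} = Q_i,
      \qquad
      \mathcal{F} = \frac{1}{2}\sum_j c_j \dot q_j^{2},

    with L = T - V; the coefficients c_j play the role of the dissipation factor whose value switches the simulation between normal and hemiparetic gait.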

  11. The General-Use Nodal Network Solver (GUNNS) Modeling Package for Space Vehicle Flow System Simulation

    Science.gov (United States)

    Harvey, Jason; Moore, Michael

    2013-01-01

    The General-Use Nodal Network Solver (GUNNS) is a modeling software package that combines nodal analysis and the hydraulic-electric analogy to simulate fluid, electrical, and thermal flow systems. GUNNS is developed by L-3 Communications under the TS21 (Training Systems for the 21st Century) project for NASA Johnson Space Center (JSC), primarily for use in space vehicle training simulators at JSC. It has sufficient compactness and fidelity to model the fluid, electrical, and thermal aspects of space vehicles in real-time simulations running on commodity workstations, for vehicle crew and flight controller training. It has a reusable and flexible component and system design, and a Graphical User Interface (GUI), providing capability for rapid GUI-based simulator development, ease of maintenance, and associated cost savings. GUNNS is optimized for NASA's Trick simulation environment, but can be run independently of Trick.
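
    The nodal-analysis core that such a solver is built around reduces to assembling an admittance matrix and solving a linear system; the toy three-node network below is invented for illustration and is not GUNNS code, but the hydraulic-electric analogy means the same pattern covers fluid, electrical, and thermal flow.

      import numpy as np

      links = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 0.5)]   # (node i, node j, conductance)
      sources = {0: 1.0}                                # injected flow at node 0
      ground = 2                                        # reference node, potential 0
      n = 3

      A = np.zeros((n, n))
      b = np.zeros(n)
      for i, j, g in links:              # standard admittance-matrix stamping
          A[i, i] += g; A[j, j] += g
          A[i, j] -= g; A[j, i] -= g
      for node, flow in sources.items():
          b[node] += flow

      keep = [k for k in range(n) if k != ground]   # delete the reference row/column
      x = np.linalg.solve(A[np.ix_(keep, keep)], b[keep])
      print(dict(zip(keep, x)))          # potentials at the non-reference nodes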

  12. Test and Evaluation of the Malicious Activity Simulation Tool (MAST) in a Local Area Network (LAN) Running the Common PC Operating System Environment (COMPOSE)

    Science.gov (United States)

    2013-09-01

    The Malicious Activity Simulation Tool (MAST) must be able to run its simulations on the same network without causing a reduction in the network's operational readiness or availability. We discuss this ...

  13. Simulation modeling for the health care manager.

    Science.gov (United States)

    Kennedy, Michael H

    2009-01-01

    This article addresses the use of simulation software to solve administrative problems faced by health care managers. Spreadsheet add-ins, process simulation software, and discrete event simulation software are available at a range of costs and complexity. All use the Monte Carlo method to realistically integrate probability distributions into models of the health care environment. Problems typically addressed by health care simulation modeling are facility planning, resource allocation, staffing, patient flow and wait time, routing and transportation, supply chain management, and process improvement.
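
    As a concrete taste of the discrete-event side, a minimal patient-flow model using the open-source SimPy library might look as follows; the arrival rate, service time, and staffing level are invented.

      import random
      import simpy

      WAITS = []

      def patient(env, clinic):
          arrival = env.now
          with clinic.request() as req:              # queue for a clinician
              yield req
              WAITS.append(env.now - arrival)
              yield env.timeout(random.expovariate(1 / 15.0))   # ~15 min service

      def arrivals(env, clinic):
          while True:
              yield env.timeout(random.expovariate(1 / 10.0))   # ~every 10 min
              env.process(patient(env, clinic))

      random.seed(4)
      env = simpy.Environment()
      clinic = simpy.Resource(env, capacity=2)       # two clinicians on duty
      env.process(arrivals(env, clinic))
      env.run(until=8 * 60)                          # one 8-hour day, in minutes
      print("mean wait %.1f min over %d patients"
            % (sum(WAITS) / len(WAITS), len(WAITS)))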

  14. Run-of-River Impoundments Can Remain Unfilled While Transporting Gravel Bedload: Numerical Modeling Results

    Science.gov (United States)

    Pearson, A.; Pizzuto, J. E.

    2015-12-01

    Previous work at run-of-river (ROR) dams in northern Delaware has shown that bedload supplied to ROR impoundments can be transported over the dam when impoundments remain unfilled. Transport is facilitated by high levels of sand in the impoundment that lower the critical shear stresses for particle entrainment, and an inversely sloping sediment ramp connecting the impoundment bed (where the water depth is typically equal to the dam height) with the top of the dam (Pearson and Pizzuto, in press). We demonstrate with one-dimensional bed material transport modeling that bed material can move through impoundments and that equilibrium transport (i.e., a balance between supply to and export from the impoundment, with a constant bed elevation) is possible even when the bed elevation is below the top of the dam. Based on our field work and previous HEC-RAS modeling, we assess bed material transport capacity at the base of the sediment ramp (and ignore detailed processes carrying sediment up the ramp and over the dam). The hydraulics at the base of the ramp are computed using a weir equation, providing estimates of water depth, velocity, and friction, based on the discharge and sediment grain size distribution of the impoundment. Bedload transport rates are computed using the Wilcock-Crowe equation, and changes in the impoundment's bed elevation are determined by sediment continuity. Our results indicate that impoundments pass the gravel supplied from upstream with deep pools when gravel supply rate is low, gravel grain sizes are relatively small, sand supply is high, and discharge is high. Conversely, impoundments will tend to fill their pools when gravel supply rate is high, gravel grain sizes are relatively large, sand supply is low, and discharge is low. The rate of bedload supplied to an impoundment is the primary control on how fast equilibrium transport is reached, with discharge having almost no influence on the timing of equilibrium.
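
    The modeling chain described here - weir hydraulics at the ramp, a transport relation, and sediment continuity - can be caricatured in a few lines; the sketch below substitutes a generic excess-shear power law for the Wilcock-Crowe equation and uses invented coefficients, but it reproduces the qualitative result that the bed can equilibrate below the crest.

      rho = 1000.0                          # water density (kg m-3)
      Cf = 0.005                            # friction coefficient (assumed)

      def weir_head(q, Cd=1.7):
          """Head over the dam crest from the weir relation q = Cd * H**1.5."""
          return (q / Cd) ** (2.0 / 3.0)

      def bedload_flux(tau, tau_c=1.0, k=1e-4):
          """Excess-shear power law, a schematic stand-in for Wilcock-Crowe."""
          return k * max(tau - tau_c, 0.0) ** 1.5

      q = 1.2                               # unit discharge (m2 s-1), invented
      dam_height, bed, porosity = 2.0, 0.5, 0.4
      supply, dt = 2e-5, 3600.0             # upstream bedload supply, 1 h steps

      for hour in range(240):               # ten days of hourly updates
          depth = dam_height + weir_head(q) - bed   # ponded depth at the ramp base
          tau = rho * Cf * (q / depth) ** 2         # velocity-based shear stress (Pa)
          # sediment continuity: the bed rises when supply exceeds capacity
          bed += dt * (supply - bedload_flux(tau)) / (1 - porosity)
      print("equilibrium bed at %.2f m, below the %.1f m crest" % (bed, dam_height))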

  15. Modeling and simulation of blood collection systems.

    Science.gov (United States)

    Alfonso, Edgar; Xie, Xiaolan; Augusto, Vincent; Garraud, Olivier

    2012-03-01

    This paper addresses the modeling and simulation of blood collection systems in France for both fixed-site and mobile blood collection with walk-in whole blood donors and scheduled plasma and platelet donors. Petri net models are first proposed to precisely describe different blood collection processes, donor behaviors, their material/human resource requirements and relevant regulations. The Petri net models are then enriched with quantitative modeling of donor arrivals, donor behaviors, activity times and resource capacity. Relevant performance indicators are defined. The resulting simulation models can be straightforwardly implemented with any simulation language. Numerical experiments are performed to show how the simulation models can be used to select, for different walk-in donor arrival patterns, appropriate human resource planning and donor appointment strategies.

  16. Modeling the short-run effect of fiscal stimuli on GDP : A new semi-closed input-output model

    NARCIS (Netherlands)

    Chen, Quanrun; Dietzenbacher, Erik; Los, Bart; Yang, Cuihong

    2016-01-01

    In this study, we propose a new semi-closed input-output model, which reconciles input-output analysis with modern consumption theories. It can simulate changes in household consumption behavior when exogenous stimulus policies lead to higher disposable income levels. It is useful for quantifying ...
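
    The open input-output core that the semi-closed model extends is the standard Leontief system x = (I - A)^-1 d; the three-sector coefficients below are invented, and the paper's endogenous-consumption extension is not reproduced.

      import numpy as np

      A = np.array([[0.10, 0.20, 0.05],    # intermediate input coefficients (invented)
                    [0.15, 0.05, 0.10],
                    [0.05, 0.10, 0.15]])
      d = np.array([100.0, 80.0, 60.0])    # final demand, e.g. after a fiscal stimulus

      L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse
      x = L @ d                            # gross output needed to meet demand
      print("gross outputs:", np.round(x, 1))
      print("output multiplier of sector 0: %.2f" % L[:, 0].sum())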

  17. Modeling and Simulation of Matrix Converter

    DEFF Research Database (Denmark)

    Liu, Fu-rong; Klumpner, Christian; Blaabjerg, Frede

    2005-01-01

    This paper discusses the modeling and simulation of a matrix converter. Two models of the matrix converter are presented: one is based on indirect space vector modulation and the other is based on the power balance equation. The basis of these two models is given and the modeling process is introduced...

  18. High-resolution modeling of tsunami run-up flooding: a case study of flooding in Kamaishi city, Japan, induced by the 2011 Tohoku tsunami

    Directory of Open Access Journals (Sweden)

    R. Akoh

    2017-11-01

    Full Text Available Run-up processes of the 2011 Tohoku tsunami into the city of Kamaishi, Japan, were simulated numerically using 2-D shallow water equations with a new treatment of building footprints. The model imposes an internal hydraulic condition of permeable and impermeable walls at the building footprint outline on unstructured triangular meshes. Digital data of the building footprint approximated by polygons were overlaid on a 1.0 m resolution terrain model. The hydraulic boundary conditions were ascertained using conventional tsunami propagation calculation from the seismic center to nearshore areas. Run-up flow calculations were conducted under the same hydraulic conditions for several cases having different building permeabilities. Comparison of computation results with field data suggests that the case with a small amount of wall permeability gives better agreement than the case with impermeable condition. Spatial mapping of an indicator for run-up flow intensity (IF = (hU²)max), where h and U respectively denote the inundation depth and flow velocity during the flood, shows fairly good correlation with the distribution of houses destroyed by flooding. As a possible mitigation measure, the influence of the buildings on the flow was assessed using a numerical experiment for solid buildings arrayed alternately in two lines along the coast. Results show that the buildings can prevent seawater from flowing straight to the city center while maintaining access to the sea.

  19. High-resolution modeling of tsunami run-up flooding: a case study of flooding in Kamaishi city, Japan, induced by the 2011 Tohoku tsunami

    Science.gov (United States)

    Akoh, Ryosuke; Ishikawa, Tadaharu; Kojima, Takashi; Tomaru, Mahito; Maeno, Shiro

    2017-11-01

    Run-up processes of the 2011 Tohoku tsunami into the city of Kamaishi, Japan, were simulated numerically using 2-D shallow water equations with a new treatment of building footprints. The model imposes an internal hydraulic condition of permeable and impermeable walls at the building footprint outline on unstructured triangular meshes. Digital data of the building footprint approximated by polygons were overlaid on a 1.0 m resolution terrain model. The hydraulic boundary conditions were ascertained using conventional tsunami propagation calculation from the seismic center to nearshore areas. Run-up flow calculations were conducted under the same hydraulic conditions for several cases having different building permeabilities. Comparison of computation results with field data suggests that the case with a small amount of wall permeability gives better agreement than the case with impermeable condition. Spatial mapping of an indicator for run-up flow intensity (IF = (hU²)max), where h and U respectively denote the inundation depth and flow velocity during the flood, shows fairly good correlation with the distribution of houses destroyed by flooding. As a possible mitigation measure, the influence of the buildings on the flow was assessed using a numerical experiment for solid buildings arrayed alternately in two lines along the coast. Results show that the buildings can prevent seawater from flowing straight to the city center while maintaining access to the sea.

  20. Using Discrete Event Simulation for Programming Model Exploration at Extreme-Scale: Macroscale Components for the Structural Simulation Toolkit (SST).

    Energy Technology Data Exchange (ETDEWEB)

    Wilke, Jeremiah J [Sandia National Laboratories (SNL-CA), Livermore, CA (United States); Kenny, Joseph P. [Sandia National Laboratories (SNL-CA), Livermore, CA (United States)

    2015-02-01

    Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, design can first iterate through a simulator. This is particularly useful when test beds cannot be used, e.g. to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the structural simulation toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics like call graphs to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.
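
    The discrete event core referred to here can be reduced to a priority queue of timestamped events; the toy engine below (not SST code) shows the pattern.

      import heapq

      class Simulator:
          def __init__(self):
              self.now, self.queue = 0.0, []

          def schedule(self, delay, action):
              heapq.heappush(self.queue, (self.now + delay, id(action), action))

          def run(self):
              while self.queue:                      # pop events in timestamp order
                  self.now, _, action = heapq.heappop(self.queue)
                  action(self)

      def send(sim):
          print("t=%.1f: message sent" % sim.now)
          sim.schedule(2.5, recv)                    # model a 2.5-unit network latency

      def recv(sim):
          print("t=%.1f: message received" % sim.now)

      sim = Simulator()
      sim.schedule(1.0, send)
      sim.run()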

  1. Discharge simulations performed with a hydrological model using bias corrected regional climate model input

    Directory of Open Access Journals (Sweden)

    S. C. van Pelt

    2009-12-01

    Full Text Available Studies have demonstrated that precipitation in Northern Hemisphere mid-latitudes has increased in the last decades and that it is likely that this trend will continue. This will have an influence on the discharge of the river Meuse. The use of bias correction methods is important when the effect of precipitation change on river discharge is studied. The objective of this paper is to investigate the effect of using two different bias correction methods on output from a Regional Climate Model (RCM) simulation. In this study a Regional Atmospheric Climate Model (RACMO2) run is used, forced by ECHAM5/MPIOM under the condition of the SRES-A1B emission scenario, with a 25 km horizontal resolution. The RACMO2 runs contain a systematic precipitation bias, to which two bias correction methods are applied. The first method corrects for the wet day fraction and wet day average (WD bias correction) and the second method corrects for the mean and coefficient of variation (MV bias correction). The WD bias correction initially corrects well for the average, but it appears that too many successive precipitation days were removed with this correction. The second method performed less well on average bias correction, but the temporal precipitation pattern was better. Subsequently, the discharge was calculated by using RACMO2 output as forcing to the HBV-96 hydrological model. A large difference was found between the simulated discharge of the uncorrected RACMO2 run, the WD bias corrected run and the MV bias corrected run. These results show the importance of an appropriate bias correction.
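
    The wet-day (WD) correction idea can be sketched as thresholding away excess drizzle days and rescaling the wet-day mean; the procedure below is one common variant and may differ in detail from the method applied to RACMO2.

      import numpy as np

      def wd_bias_correct(sim, obs, wet=0.1):
          sim = sim.copy()
          target_frac = np.mean(obs >= wet)             # observed wet-day fraction
          cutoff = np.quantile(sim, 1.0 - target_frac)  # drizzle threshold
          sim[sim < cutoff] = 0.0                       # drop excess drizzle days
          wet_mask = sim >= wet
          if wet_mask.any():                            # then match the wet-day mean
              sim[wet_mask] *= np.mean(obs[obs >= wet]) / np.mean(sim[wet_mask])
          return sim

      rng = np.random.default_rng(5)
      obs = rng.gamma(0.4, 8.0, 3650)                   # observed daily precip (mm)
      sim = rng.gamma(0.7, 4.0, 3650)                   # RCM-like drizzle bias
      corrected = wd_bias_correct(sim, obs)
      print("wet-day fraction obs %.2f sim %.2f corrected %.2f"
            % ((obs >= 0.1).mean(), (sim >= 0.1).mean(), (corrected >= 0.1).mean()))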

  2. A simple running model with rolling contact and its role as a template for dynamic locomotion on a hexapod robot.

    Science.gov (United States)

    Huang, Ke-Jung; Huang, Chun-Kai; Lin, Pei-Chun

    2014-10-07

    We report on the development of a robot's dynamic locomotion based on a template which fits the robot's natural dynamics. The developed template is a low degree-of-freedom planar model for running with rolling contact, which we call rolling spring loaded inverted pendulum (R-SLIP). Originating from a reduced-order model of the RHex-style robot with compliant circular legs, the R-SLIP model also acts as the template for general dynamic running. The model has a torsional spring and a large circular arc as the distributed foot, so during locomotion it rolls on the ground with varied equivalent linear stiffness. This differs from the well-known spring loaded inverted pendulum (SLIP) model with fixed stiffness and ground contact points. Through dimensionless steps-to-fall and return map analysis, within a wide range of parameter spaces, the R-SLIP model is revealed to have self-stable gaits and a larger stability region than that of the SLIP model. The R-SLIP model is then embedded as the reduced-order 'template' in a more complex 'anchor', the RHex-style robot, via various mapping definitions between the template and the anchor. Experimental validation confirms that by merely deploying the stable running gaits of the R-SLIP model on the empirical robot with simple open-loop control strategy, the robot can easily initiate its dynamic running behaviors with a flight phase and can move with similar body state profiles to those of the model, in all five testing speeds. The robot, embedded with the SLIP model but performing walking locomotion, further confirms the importance of finding an adequate template of the robot for dynamic locomotion.
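
    For orientation, the stance-phase dynamics of the classic fixed-stiffness SLIP template that R-SLIP generalizes are, in polar coordinates (r, θ) about the foot contact point,

      m\,(\ddot r - r\dot\theta^{2}) = k\,(r_0 - r) - m g \cos\theta,
      \qquad
      m\,(r\ddot\theta + 2\dot r\dot\theta) = m g \sin\theta,

    where r_0 is the rest leg length and k the leg stiffness; R-SLIP replaces the constant k with a configuration-dependent equivalent stiffness generated by the torsional spring and the rolling circular foot.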

  3. A simple running model with rolling contact and its role as a template for dynamic locomotion on a hexapod robot

    International Nuclear Information System (INIS)

    Huang, Ke-Jung; Huang, Chun-Kai; Lin, Pei-Chun

    2014-01-01

    We report on the development of a robot’s dynamic locomotion based on a template which fits the robot’s natural dynamics. The developed template is a low degree-of-freedom planar model for running with rolling contact, which we call rolling spring loaded inverted pendulum (R-SLIP). Originating from a reduced-order model of the RHex-style robot with compliant circular legs, the R-SLIP model also acts as the template for general dynamic running. The model has a torsional spring and a large circular arc as the distributed foot, so during locomotion it rolls on the ground with varied equivalent linear stiffness. This differs from the well-known spring loaded inverted pendulum (SLIP) model with fixed stiffness and ground contact points. Through dimensionless steps-to-fall and return map analysis, within a wide range of parameter spaces, the R-SLIP model is revealed to have self-stable gaits and a larger stability region than that of the SLIP model. The R-SLIP model is then embedded as the reduced-order ‘template’ in a more complex ‘anchor’, the RHex-style robot, via various mapping definitions between the template and the anchor. Experimental validation confirms that by merely deploying the stable running gaits of the R-SLIP model on the empirical robot with simple open-loop control strategy, the robot can easily initiate its dynamic running behaviors with a flight phase and can move with similar body state profiles to those of the model, in all five testing speeds. The robot, embedded with the SLIP model but performing walking locomotion, further confirms the importance of finding an adequate template of the robot for dynamic locomotion. (paper)

  4. Physically realistic modeling of maritime training simulation

    OpenAIRE

    Cieutat , Jean-Marc

    2003-01-01

    Maritime training simulation is an important part of maritime teaching and requires a wide range of scientific and technical skills. In this framework, where the real-time constraint has to be maintained, not all physical phenomena can be studied; only the most visually significant physical phenomena relating to the natural elements and the ship's behaviour are reproduced. Our swell model, based on a surface wave simulation approach, makes it possible to simulate the shape and the propagation of a regular train of waves f...

  5. Systematic modelling and simulation of refrigeration systems

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1998-01-01

    The task of developing a simulation model of a refrigeration system can be very difficult and time consuming. In order for this process to be effective, a systematic method for developing the system model is required. This method should aim at guiding the developer to clarify the purpose of the simulation, to select appropriate component models and to set up the equations in a well-arranged way. In this paper the outline of such a method is proposed and examples showing the use of this method for simulation of refrigeration systems are given.

  6. Dark Matter Benchmark Models for Early LHC Run-2 Searches. Report of the ATLAS/CMS Dark Matter Forum

    Energy Technology Data Exchange (ETDEWEB)

    Abercrombie, Daniel [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States). et al.

    2015-07-06

    One of the guiding principles of this report is to channel the efforts of the ATLAS and CMS collaborations towards a minimal basis of dark matter models that should influence the design of the early Run-2 searches. At the same time, a thorough survey of realistic collider signals of Dark Matter is a crucial input to the overall design of the search program.

  7. Physically based dynamic run-out modelling for quantitative debris flow risk assessment: a case study in Tresenda, northern Italy

    Czech Academy of Sciences Publication Activity Database

    Quan Luna, B.; Blahůt, Jan; Camera, C.; Van Westen, C.; Apuani, T.; Jetten, V.; Sterlacchini, S.

    2014-01-01

    Roč. 72, č. 3 (2014), s. 645-661 ISSN 1866-6280 Institutional support: RVO:67985891 Keywords : debris flow * FLO-2D * run-out * quantitative hazard and risk assessment * vulnerability * numerical modelling Subject RIV: DB - Geology ; Mineralogy Impact factor: 1.765, year: 2014

  8. On the duality between long-run relations and common trends in the I(1) versus I(2) model

    DEFF Research Database (Denmark)

    Juselius, Katarina

    1994-01-01

    Long-run relations and common trends are discussed in terms of the multivariate cointegration model given in the autoregressive and the moving average form. The basic results needed for the analysis of I(1) and I(2) processes are reviewed and the results applied to Danish monetary data. The test...

  9. An integrated model to assess critical rain fall thresholds for the critical run-out distances of debris flows

    NARCIS (Netherlands)

    van Asch, Th.W.J.|info:eu-repo/dai/nl/304839558; Tang, C.; Alkema, D.; Zhu, J.; Zhou, W.

    2013-01-01

    A dramatic increase in debris flows occurred in the years after the 2008 Wenchuan earthquake in SW China due to the deposition of loose co-seismic landslide material. This paper proposes a preliminary integrated model, which describes the relationship between rain input and debris flow run-out in ...

  10. SAPS simulation with GITM/UCLA-RCM coupled model

    Science.gov (United States)

    Lu, Y.; Deng, Y.; Guo, J.; Zhang, D.; Wang, C. P.; Sheng, C.

    2017-12-01

    Ion velocities observed by satellites in the sub-auroral region during storm time often show a significant westward component. These high-speed westward streams are distinct from the convection pattern and are called sub-auroral polarization streams (SAPS). During the 17 March 2013 storm, the DMSP F18 satellite observed several SAPS cases when crossing the sub-auroral region. In this study, the Global Ionosphere Thermosphere Model (GITM) has been coupled to the UCLA-RCM model to simulate the impact of SAPS during the March 2013 event on the ionosphere/thermosphere. The particle precipitation and electric field from RCM have been used to drive GITM, and the conductance calculated from GITM is fed back to RCM to make the coupling self-consistent. GITM simulations with different SAPS specifications will be compared, the simulated neutral wind will be compared with GOCE satellite data, and the comparison between runs with and without SAPS will separate the effect of SAPS from other drivers and illustrate its impact on the TIDs/TADs propagating both poleward and equatorward.

  11. Improving Patient Access by Determining Appropriate Staff Mix in the Family Practice Clinic of Bayne-Jones Army Community Hospital at Fort Polk, Louisiana Using an Animated Computer Simulation Model

    National Research Council Canada - National Science Library

    David, R

    1997-01-01

    ... Practice Clinic in order to enhance patient satisfaction by increasing their access to care. This determination was made by developing, running, and analyzing a number of separate animated simulation models using MedModel Simulation Software...

  12. Modeling grain size adjustments in the downstream reach following run-of-river development

    Science.gov (United States)

    Fuller, Theodore K.; Venditti, Jeremy G.; Nelson, Peter A.; Palen, Wendy J.

    2016-04-01

    Disruptions to sediment supply continuity caused by run-of-river (RoR) hydropower development have the potential to cause downstream changes in surface sediment grain size which can influence the productivity of salmon habitat. The most common approach to understanding the impacts of RoR hydropower is to study channel changes in the years following project development, but by then, any impacts are manifest and difficult to reverse. Here we use a more proactive approach, focused on predicting impacts in the project planning stage. We use a one-dimensional morphodynamic model to test the hypothesis that the greatest risk of geomorphic change and impact to salmon habitat from a temporary sediment supply disruption exists where predevelopment sediment supply is high and project design creates substantial sediment storage volume. We focus on the potential impacts in the reach downstream of a powerhouse for a range of development scenarios that are typical of projects developed in the Pacific Northwest and British Columbia. Results indicate that increases in the median bed surface size (D50) are minor if development occurs on low sediment supply streams (<1 mm for supply rates 1 × 10-5 m2 s-1 or lower), and substantial for development on high sediment supply streams (8-30 mm for supply rates between 5.5 × 10-4 and 1 × 10-3 m2 s-1). However, high sediment supply streams recover rapidly to the predevelopment surface D50 (˜1 year) if sediment supply can be reestablished.

  13. Search for non-standard model signatures in the WZ/ZZ final state at CDF run II

    Energy Technology Data Exchange (ETDEWEB)

    Norman, Matthew [Univ. of California, San Diego, CA (United States)

    2009-01-01

    This thesis discusses a search for non-Standard Model physics in heavy diboson production in the dilepton-dijet final state, using 1.9 fb-1 of data from the CDF Run II detector. New limits are set on the anomalous coupling parameters for ZZ and WZ production based on limiting the production cross-section at high ŝ. Additionally, limits are set on the direct decay of new physics to ZZ and WZ diboson pairs. The nature and parameters of the CDF Run II detector are discussed, as are the influences that it has on the methods of our analysis.

  14. Running Away

    Science.gov (United States)

    ... the streets in the United States. Why Kids Run Away Remember how you felt the last time you got in ... how to express angry feelings without violence. Know how to calm yourself down after you're upset. Maybe you need to run around outside, listen to music, draw, or write ...

  15. Modeling and simulation of complex systems a framework for efficient agent-based modeling and simulation

    CERN Document Server

    Siegfried, Robert

    2014-01-01

    Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing the structure and dynamics of agent-based models in detail. Based on this evaluation the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore he presents parallel and distributed simulation approaches for the execution of agent-based models, from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize the underlying hardware...

  16. Magnetosphere Modeling: From Cartoons to Simulations

    Science.gov (United States)

    Gombosi, T. I.

    2017-12-01

    Over the last half a century, physics-based global computer simulations became a bridge between experiment and basic theory, and now they represent the "third pillar" of geospace research. Today, many of our scientific publications utilize large-scale simulations to interpret observations, test new ideas, plan campaigns, or design new instruments. Realistic simulations of the complex Sun-Earth system have been made possible by the dramatically increased power of both computing hardware and numerical algorithms. Early magnetosphere models were based on simple E&M concepts (like the Chapman-Ferraro cavity) and hydrodynamic analogies (bow shock). At the beginning of the space age, current system models were developed, culminating in the sophisticated Tsyganenko-type description of the magnetic configuration. The first 3D MHD simulations of the magnetosphere were published in the early 1980s. A decade later there were several competing global models that were able to reproduce many fundamental properties of the magnetosphere. The leading models included the impact of the ionosphere by using a height-integrated electric potential description. Dynamic coupling of global and regional models started in the early 2000s by integrating a ring current and a global magnetosphere model. It has been recognized for quite some time that plasma kinetic effects play an important role. Presently, global hybrid simulations of the dynamic magnetosphere are expected to be possible on exascale supercomputers, while fully kinetic simulations with realistic mass ratios are still decades away. In the 2010s several groups started to experiment with PIC simulations embedded in large-scale 3D MHD models. Presently this integrated MHD-PIC approach is at the forefront of magnetosphere simulations and this technique is expected to lead to some important advances in our understanding of magnetospheric physics. This talk will review the evolution of magnetosphere modeling from cartoons to current systems.

  17. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    A familiar example of a feedback loop is the business model in which part of the output or profit is fed back as input or additional capital - for instance, a company may choose to reinvest 10% of the profit for expansion of the business. Such simple models, like ..... would help scientists, engineers and managers towards better.
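
    That feedback loop is small enough to simulate directly; the 20% profit margin and starting capital below are assumed figures added for illustration, while the 10% reinvestment rate is the passage's own example.

      capital = 100.0            # starting capital (assumed)
      profit_margin = 0.20       # profit as a fraction of capital (assumed)
      reinvest_rate = 0.10       # the passage's 10% reinvestment example
      for year in range(1, 6):
          profit = profit_margin * capital
          capital += reinvest_rate * profit     # the feedback path
          print("year %d: capital = %.2f" % (year, capital))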

  18. Complex Simulation Model of Mobile Fading Channel

    Directory of Open Access Journals (Sweden)

    Tomas Marek

    2005-01-01

    Full Text Available In the mobile communication environment, the mobile channel is the main obstacle limiting the performance of a wireless system. Modeling of the radio channel comprises two basic fading mechanisms: long-term fading and short-term fading. The contribution deals with simulation of the complete mobile radio channel, i.e., the channel with all fading components. The simulation model is based on the Clarke-Gans theoretical model for the fading channel and is developed in the MATLAB environment. Simulation results have shown very good agreement with theory. This model was developed for a hybrid-adaptation 3G uplink simulator (described in this issue) during the research project VEGA 1/0140/03.
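
    A minimal sum-of-sinusoids realization of the short-term (Clarke) component can be written in a few lines of Python (the original work used MATLAB); the Doppler frequency, path count, and sampling are placeholders, and the long-term shadowing component that would multiply this envelope is omitted.

      import numpy as np

      rng = np.random.default_rng(6)
      fd = 100.0                            # maximum Doppler shift (Hz), placeholder
      N = 64                                # number of scattered paths
      t = np.arange(0, 0.5, 1e-4)           # 0.5 s at a 10 kHz sampling rate

      alpha = rng.uniform(0, 2 * np.pi, N)  # angles of arrival
      phi = rng.uniform(0, 2 * np.pi, N)    # random path phases
      r = np.zeros(t.size, dtype=complex)
      for n in range(N):                    # superpose the scattered components
          r += np.exp(1j * (2 * np.pi * fd * np.cos(alpha[n]) * t + phi[n]))
      r /= np.sqrt(N)                       # normalize the average power to 1

      envelope_db = 20 * np.log10(np.abs(r))
      print("median envelope level: %.1f dB" % np.median(envelope_db))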

  19. Simulation Model Development for Mail Screening Process

    National Research Council Canada - National Science Library

    Vargo, Trish; Marvin, Freeman; Kooistra, Scott

    2005-01-01

    STUDY OBJECTIVE: Provide decision analysis support to the Homeland Defense Business Unit, Special Projects Team, in developing a simulation model to help determine the most effective way to eliminate backlog...

  20. SEIR model simulation for Hepatitis B

    Science.gov (United States)

    Side, Syafruddin; Irwan, Mulbar, Usman; Sanusi, Wahidah

    2017-09-01

    Mathematical modelling and simulation of Hepatitis B are discussed in this paper. The population is divided into four compartments, namely: Susceptible, Exposed, Infected and Recovered (SEIR). Several factors affecting the population in this model are vaccination, immigration and emigration occurring in the population. The SEIR model yields a 4-D non-linear system of Ordinary Differential Equations (ODEs), which is then reduced to 3-D. The SEIR model simulation is undertaken to predict the number of Hepatitis B cases. The results of the simulation indicate that the number of Hepatitis B cases will increase and then decrease over several months. The simulation using case numbers from Makassar also found a basic reproduction number of less than one, which means that Makassar city is not an endemic area for Hepatitis B.
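
    As a hedged illustration of this model class (not the authors' exact equations or their Makassar parameter estimates), the sketch below integrates a 4-D SEIR system with a vaccination term using SciPy; all rates are invented for the example.

        from scipy.integrate import solve_ivp

        # Illustrative rates (per day): transmission, incubation, recovery, vaccination.
        beta, sigma, gamma, nu = 0.30, 0.10, 0.05, 0.02

        def seir(t, y):
            s, e, i, r = y                       # population fractions
            return [-beta * s * i - nu * s,      # susceptible
                    beta * s * i - sigma * e,    # exposed
                    sigma * e - gamma * i,       # infected
                    gamma * i + nu * s]          # recovered (incl. vaccinated)

        sol = solve_ivp(seir, (0, 365), [0.90, 0.05, 0.05, 0.0], max_step=1.0)
        print("final S, E, I, R:", sol.y[:, -1])
        print("R0 =", beta / gamma)  # closed form for this toy model without vaccination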

  1. Simulation Tools Model Icing for Aircraft Design

    Science.gov (United States)

    2012-01-01

    the years from strictly a research tool to one used routinely by industry and other government agencies. Glenn contractor William Wright has been the architect of this development, supported by a team of researchers investigating icing physics, creating validation data, and ensuring development according to standard software engineering practices. The program provides a virtual simulation environment for determining where water droplets strike an airfoil in flight, what kind of ice would result, and what shape that ice would take. Users can enter geometries for specific, two-dimensional cross sections of an airfoil or other airframe surface and then apply a range of inputs - different droplet sizes, temperatures, airspeeds, and more - to model how ice would build up on the surface in various conditions. The program's versatility, ease of use, and speed - LEWICE can run through complex icing simulations in only a few minutes - have contributed to it becoming a popular resource in the aviation industry.

  2. Simulation data mapping in virtual cardiac model.

    Science.gov (United States)

    Jiquan, Liu; Jingyi, Feng; Duan, Huilong; Siping, Chen

    2004-01-01

    Although the 3D heart and torso models with realistic geometry are the basis of the simulation computation in the LFX virtual cardiac model, the simulation results are mostly output in 2D format. To solve this problem and enhance the virtual reality of the LFX virtual cardiac model, methods of voxel mapping and vertex projection mapping are presented. With these methods, the excitation isochrone map (EIM) was mapped from the heart model with realistic geometry to the real visible-man heart model, and the body surface potential map (BSPM) was mapped from the torso model with realistic geometry to the real visible-man body surface. By visualizing in 4Dview, a real-time 3D medical image visualization platform, the visualization results of the EIM and BSPM simulation data before and after mapping are also provided. According to the visualization results, the output format of the EIM and BSPM simulation data of the LFX virtual cardiac model was extended from 2D to 4D (spatio-temporal) and from a cardiac model with realistic geometry to a real cardiac model, and a more realistic and effective simulation was achieved.

  3. Fully Adaptive Radar Modeling and Simulation Development

    Science.gov (United States)

    2017-04-01

    AFRL-RY-WP-TR-2017-0074, "Fully Adaptive Radar Modeling and Simulation Development," Kristine L. Bell and Anthony Kellems, Metron, Inc. Small Business Innovation Research (SBIR) Phase I report, contract FA8650-16-M-1774. Approved for public release; distribution unlimited.

  4. Theory, modeling, and simulation annual report, 1992

    Energy Technology Data Exchange (ETDEWEB)

    1993-05-01

    This report briefly discusses research on the following topics: development of electronic structure methods; modeling molecular processes in clusters; modeling molecular processes in solution; modeling molecular processes in separations chemistry; modeling interfacial molecular processes; modeling molecular processes in the atmosphere; methods for periodic calculations on solids; chemistry and physics of minerals; graphical user interfaces for computational chemistry codes; visualization and analysis of molecular simulations; integrated computational chemistry environment; and benchmark computations.

  5. CMB constraints on running non-Gaussianity

    OpenAIRE

    Oppizzi, Filippo; Liguori, Michele; Renzi, Alessandro; Arroja, Frederico; Bartolo, Nicola

    2017-01-01

    We develop a complete set of tools for CMB forecasting, simulation and estimation of primordial running bispectra, arising from a variety of curvaton and single-field (DBI) models of Inflation. We validate our pipeline using mock CMB running non-Gaussianity realizations and test it on real data by obtaining experimental constraints on the $f_{\\rm NL}$ running spectral index, $n_{\\rm NG}$, using WMAP 9-year data. Our final bounds (68\\% C.L.) read $-0.3< n_{\\rm NG}

  6. The Role of Model Configuration in Simulating Spring Final Warming

    Science.gov (United States)

    McDaniel, B.

    2017-12-01

    The author performs a study of the relation between stratospheric final warmings (SFWs) and the climatological flow of the stratosphere and the boreal extratropical tropospheric circulation. In contrast to the climatological seasonal cycle, SFW events noticeably sharpen the annual weakening of high-latitude circumpolar westerlies in both the stratosphere and troposphere. SFW events provide a strong organizing influence upon the large-scale circulation of the stratosphere and troposphere during the period of spring onset and are thus an important feature for assessing whether a model reproduces observed variability. The ability of state-of-the-art climate models to represent this stratospheric transition is crucial for accurately simulating variability in the stratosphere and stratosphere-troposphere interactions. To assess the veracity of stratospheric simulations in current models, a suite of runs from different members of the CMIP5 experiment is analyzed. For each model, the average date of spring onset and other descriptive statistics are calculated, along with the composite evolution of zonal wind anomalies, temperature and geopotential height as they propagate from the stratosphere down to the surface. These composite patterns are then compared with the canonical evolution based on observations. The results are binned separately by the stratospheric resolution of the model (so-called high-top and low-top models) and by the strength of the climatological wintertime polar vortex to identify biases present in different classes of models.

  7. Modeling of magnetic particle suspensions for simulations

    CERN Document Server

    Satoh, Akira

    2017-01-01

    The main objective of the book is to highlight the modeling of magnetic particles with different shapes and magnetic properties, to provide graduate students and young researchers information on the theoretical aspects and actual techniques for the treatment of magnetic particles in particle-based simulations. In simulation, we focus on the Monte Carlo, molecular dynamics, Brownian dynamics, lattice Boltzmann and stochastic rotation dynamics (multi-particle collision dynamics) methods. The latter two simulation methods can simulate both the particle motion and the ambient flow field simultaneously. In general, specialized knowledge can only be obtained in an effective manner under the supervision of an expert. The present book is written to play such a role for readers who wish to develop the skill of modeling magnetic particles and develop a computer simulation program using their own ability. This book is therefore a self-learning book for graduate students and young researchers. Armed with this knowledge,...

  8. Challenges for Modeling and Simulation

    National Research Council Canada - National Science Library

    Johnson, James

    2002-01-01

    This document deals with modeling and simulation. The strengths are the ability to study processes that rarely or never occur, to evaluate a wide range of alternatives, and to generate new ideas, new concepts and innovative solutions...

  9. From control to causation: Validating a 'complex systems model' of running-related injury development and prevention.

    Science.gov (United States)

    Hulme, A; Salmon, P M; Nielsen, R O; Read, G J M; Finch, C F

    2017-11-01

    There is a need for an ecological and complex systems approach to better understand the development and prevention of running-related injury (RRI). In a previous article, we proposed a prototype model of the Australian recreational distance running system based on the Systems Theoretic Accident Mapping and Processes (STAMP) method. That model included the influence of political, organisational, managerial, and sociocultural determinants alongside individual-level factors in relation to RRI development. The purpose of this study was to validate that prototype model by drawing on the expertise of both systems thinking and distance running experts. This study used a modified Delphi technique involving a series of online surveys (December 2016 to March 2017). The initial survey was divided into four sections containing a total of seven questions pertaining to different features of the prototype model. Consensus about the validity of the prototype model was reached when the number of experts who agreed or disagreed with a survey statement was ≥75% of the total number of respondents. Two Delphi rounds were needed to validate the prototype model. Of the 51 experts initially contacted, 50.9% (n = 26) completed the first round of the Delphi, and 92.3% (n = 24) of those in the first round participated in the second. Most of the 24 full participants considered themselves to be a running expert (66.7%), and approximately a third indicated their expertise as a systems thinker (33.3%). After the second round, 91.7% of the experts agreed that the prototype model was a valid description of the Australian distance running system. This is the first study to formally examine the development and prevention of RRI from an ecological and complex systems perspective. The validated model of the Australian distance running system facilitates theoretical advancement in terms of identifying practical system

  10. TransCom model simulations of hourly atmospheric CO2: Experimental overview and diurnal cycle results for 2002

    NARCIS (Netherlands)

    Law, R. M.; Peters, W.; Roedenbeck, C.; Aulagnier, C.; Baker, I.; Bergmann, D. J.; Bousquet, P.; Brandt, J.; Bruhwiler, L.; Cameron-Smith, P. J.; Christensen, J. H.; Delage, F.; Denning, A. S.; Fan, S.; Geels, C.; Houweling, S.; Imasu, R.; Karstens, U.; Kawa, S. R.; Kleist, J.; Krol, M. C.; Lin, S. -J.; Lokupitiya, R.; Maki, T.; Maksyutov, S.; Niwa, Y.; Onishi, R.; Parazoo, N.; Patra, P. K.; Pieterse, G.; Rivier, L.; Satoh, M.; Serrar, S.; Taguchi, S.; Takigawa, M.; Vautard, R.; Vermeulen, A. T.; Zhu, Z.

    2008-01-01

    [1] A forward atmospheric transport modeling experiment has been coordinated by the TransCom group to investigate synoptic and diurnal variations in CO2. Model simulations were run for biospheric, fossil, and air-sea exchange of CO2 and for SF6 and radon for 2000-2003. Twenty-five models or model

  12. Modelling and Simulation of Wave Loads

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    1985-01-01

    A simple model of the wave load on slender members of offshore structures is described. The wave elevation of the sea state is modelled by a stationary Gaussian process. A new procedure to simulate realizations of the wave loads is developed. The simulation method assumes that the wave particle velocity can be approximated by a Gaussian Markov process. Known approximate results for the first-passage density or, equivalently, the distribution of the extremes of wave loads are presented and compared with rather precise simulation results. It is demonstrated that the approximate results...
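
    A minimal Python sketch of the simulation idea, under the assumption named in the abstract (wave particle velocity as a stationary Gaussian Markov, i.e. Ornstein-Uhlenbeck, process); the Morison-type drag constants below are placeholders, not values from the paper.

        import numpy as np

        def gaussian_markov(theta, sigma, dt, n, seed=0):
            """Exact one-step recursion for a stationary Gaussian Markov
            (Ornstein-Uhlenbeck) process with std `sigma` and decay `theta`."""
            rng = np.random.default_rng(seed)
            a = np.exp(-theta * dt)                 # lag-1 autocorrelation
            s = sigma * np.sqrt(1.0 - a * a)        # innovation standard deviation
            u = np.empty(n)
            u[0] = sigma * rng.standard_normal()
            for k in range(1, n):
                u[k] = a * u[k - 1] + s * rng.standard_normal()
            return u

        u = gaussian_markov(theta=0.5, sigma=1.2, dt=0.1, n=10_000)  # velocity realization
        rho, cd, d = 1025.0, 1.0, 0.5                # water density, drag coeff., member diameter
        load = 0.5 * rho * cd * d * u * np.abs(u)    # Morison-type drag load per unit length
        # Extremes of `load` can now be compared with the approximate first-passage results.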

  15. Modeling and simulation of discrete event systems

    CERN Document Server

    Choi, Byoung Kyu

    2013-01-01

    Computer modeling and simulation (M&S) allows engineers to study and analyze complex systems. Discrete-event system (DES)-M&S is used in modern management, industrial engineering, computer science, and the military. As computer speeds and memory capacity increase, so DES-M&S tools become more powerful and more widely used in solving real-life problems. Based on over 20 years of evolution within a classroom environment, as well as on decades-long experience in developing simulation-based solutions for high-tech industries, Modeling and Simulation of Discrete-Event Systems is the only book on

  16. Water desalination price from recent performances: Modelling, simulation and analysis

    International Nuclear Information System (INIS)

    Metaiche, M.; Kettab, A.

    2005-01-01

    The subject of the present article is the technical simulation of seawater desalination by a one-stage reverse osmosis system. The objectives are a recent valuation of the cost price based on new membrane and permeator performances, the use of new means of simulation and modelling of desalination parameters, and identification of the main parameters influencing the cost price. As the simulation example we have taken the seawater desalting centre of Djannet (Boumerdes, Algeria). Present performances allow water desalting at a price of 0.5 $/m3, an interesting and promising price, with a very acceptable product water quality on the order of 269 ppm. It is important to run reverse osmosis desalting systems under high pressure, resulting in a further decrease of the desalting cost and the production of good quality water. A poor choice of operating conditions produces high prices and unacceptable quality; however, the price can also be decreased by relaxing the requirement on product quality. The seawater temperature has an effect on the cost price and quality, and the installation of big desalting centres contributes to the decrease in prices. The calculation involved is very long and tedious, and impossible to conduct without programming and informatics tools. The use of the simulation model has been very effective in the design of desalination centres that can perform at much improved prices. (author)

  17. Minimum-complexity helicopter simulation math model

    Science.gov (United States)

    Heffley, Robert K.; Mnich, Marc A.

    1988-01-01

    An example of a minimal-complexity simulation helicopter math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling qualities features which are apparent to the simulator pilot. The technical approach begins with specification of the features which are to be modeled, followed by a build-up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinements are discussed. Math model computer programs are defined and listed.

  18. A simulation model for football championships

    OpenAIRE

    Koning, Ruud H.; Koolhaas, Michael; Renes, Gusta

    2001-01-01

    In this paper we discuss a simulation/probability model that identifies the team that is most likely to win a tournament. The model can also be used to answer other questions like ‘which team had a lucky draw?’ or ‘what is the probability that two teams meet at some moment in the tournament?’. Input to the simulation/probability model are scoring intensities, which are estimated as a weighted average of goals scored. The model has been used in practice to write articles for the popular press, ...
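
    The core of such a model is easy to sketch. Below is a hypothetical four-team knockout in Python, with Poisson-distributed goals driven by assumed scoring intensities (the paper estimates its intensities as a weighted average of goals scored; the team names and numbers here are invented).

        import numpy as np

        rng = np.random.default_rng(42)
        intensity = {"A": 1.8, "B": 1.4, "C": 1.1, "D": 0.9}  # expected goals per match

        def play(home, away):
            """One knockout match with Poisson goals; a coin flip stands in for penalties."""
            gh, ga = rng.poisson(intensity[home]), rng.poisson(intensity[away])
            if gh == ga:
                return home if rng.random() < 0.5 else away
            return home if gh > ga else away

        wins = {t: 0 for t in intensity}
        n = 100_000
        for _ in range(n):
            finalists = (play("A", "B"), play("C", "D"))  # semi-finals
            wins[play(*finalists)] += 1                   # final
        print({t: w / n for t, w in wins.items()})        # tournament-win probabilities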

  19. Dezenflasyon Sürecinde Türkiye’de Enflasyonun Uzun ve Kısa Dönem Dinamiklerinin Modellenmesi(Modelling The Long Run and The Short Run Dynamics of Inflation In The Disinflation Process In Turkey

    Directory of Open Access Journals (Sweden)

    Macide ÇİÇEK

    2005-01-01

    Full Text Available In this study, an Expectations-Augmented Phillips Curve model is employed to investigate the link between inflation and unit labor costs, the output gap (a proxy for demand shocks), the real exchange rate (a proxy for supply shocks) and price expectations for Turkey, using monthly data from 2000:01 to 2004:12. The methodology uses unit root tests, the Johansen cointegration test to examine the existence of possible long-run relationships among the variables included in the model, and a single-equation error correction model for the inflation equation, estimated by OLS, to examine the short-run dynamics of inflation. It is found that in the long run, mark-up behaviour of output prices over unit labor costs is the main cause of inflation, the real exchange rate has a rather large impact on reducing inflation, and demand shocks do not lead to an increase in prices. The short-run dynamics of the inflation equation indicate that supply shocks are the determinant of inflation in the short run. It is also found that the exchange rate is the variable that triggers an inflation adjustment most rapidly in the short run.

  20. Simulating radiocarbon in the ocean model of the FAMOUS GCM

    Science.gov (United States)

    Dentith, Jennifer; Ivanovic, Ruza; Gregoire, Lauren; Tindall, Julia; Robinson, Laura F.

    2017-04-01

    Carbon isotopes are often utilised as proxies for palaeoceanographic circulation. However, discrepancies exist in the interpretation of isotopes in geological archives. A powerful approach for improving our understanding of palaeodata is to directly simulate multiple isotopic tracer fields within complex numerical models, thereby enabling model output to be compared directly to observations rather than the more uncertain climatic interpretations. We added the radioactive isotope 14C to the ocean component of the FAMOUS atmosphere-ocean General Circulation Model to examine ocean circulation, the oceanic carbon cycle, and air-sea gas exchange. The abiotic 14C tracer field is calculated based on air-sea gas exchange, advection and radioactive decay. A 10,000 year spin-up simulation was run to allow 14C concentrations in the deep ocean to equilibrate. Here, we compare the modelled 14C distributions in both the pre- and post-bomb era to published 14C compilations. We also discuss methods for overcoming model drifts in the marine hydrological cycle and their impact on deep ocean circulation. The overall aim is to use the isotope-enabled model to investigate the 14C fingerprint of different states of overturning circulation and to reach a better understanding of changes in ocean circulation and the carbon cycle at the Last Glacial Maximum (21,000 years ago) and during the last deglaciation (21,000-11,000 years ago).

  1. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Modelling Deterministic Systems. N K Srinivasan graduated from the Indian Institute of Science and obtained his Doctorate from Columbia University, New York. He has taught in several universities, and later did system analysis, wargaming and simulation for defence. His other areas of interest are reliability engineering...

  2. Modelling and simulating fire tube boiler performance

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    A model for a flue gas boiler covering the flue gas and the water/steam side has been formulated. The model has been formulated as a number of sub-models that are merged into an overall model for the complete boiler. Sub-models have been defined for the furnace, the convection zone (split in two: a zone submerged in water and a zone covered by steam), the material in the boiler (the steel) and two models for, respectively, the water/steam zone (the boiling) and the steam. The dynamic model has been developed as a number of Differential-Algebraic-Equation (DAE) systems. Subsequently, MatLab/Simulink has been applied for carrying out the simulations. To be able to verify the simulated results, experiments have been carried out on a full-scale boiler plant.

  3. Enhanced Contact Graph Routing (ECGR) MACHETE Simulation Model

    Science.gov (United States)

    Segui, John S.; Jennings, Esther H.; Clare, Loren P.

    2013-01-01

    Contact Graph Routing (CGR) for Delay/Disruption Tolerant Networking (DTN) space-based networks makes use of the predictable nature of node contacts to make real-time routing decisions given unpredictable traffic patterns. The contact graph will have been disseminated to all nodes before the start of route computation. CGR was designed for space-based networking environments where future contact plans are known or are independently computable (e.g., using known orbital dynamics). For each data item (known as a bundle in DTN), a node independently performs route selection by examining possible paths to the destination. Route computation could conceivably run thousands of times a second, so computational load is important. This work refers to the simulation software model of Enhanced Contact Graph Routing (ECGR) for DTN Bundle Protocol in JPL's MACHETE simulation tool. The simulation model was used for performance analysis of CGR and led to several performance enhancements. The simulation model was used to demonstrate the improvements of ECGR over CGR as well as other routing methods in space network scenarios. ECGR moved to using earliest arrival time because it is a global monotonically increasing metric that guarantees the safety properties needed for the solution's correctness since route re-computation occurs at each node to accommodate unpredicted changes (e.g., traffic pattern, link quality). Furthermore, using earliest arrival time enabled the use of the standard Dijkstra algorithm for path selection. The Dijkstra algorithm for path selection has a well-known inexpensive computational cost. These enhancements have been integrated into the open source CGR implementation. The ECGR model is also useful for route metric experimentation and comparisons with other DTN routing protocols particularly when combined with MACHETE's space networking models and Delay Tolerant Link State Routing (DTLSR) model.
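
    The enhancement described above reduces route selection to a standard shortest-path computation. The sketch below is a simplified, hypothetical rendering (not the MACHETE or ION source code) of Dijkstra's algorithm over a contact plan with earliest arrival time as the metric; the `contacts` structure and `owlt` (one-way light time) field are assumptions for this example.

        import heapq

        def earliest_arrival(contacts, src, dst, t0):
            """Dijkstra over a contact plan, minimizing earliest arrival time.
            `contacts` maps node -> list of (neighbor, start, end, owlt) windows."""
            best = {src: t0}
            heap = [(t0, src)]
            while heap:
                t, node = heapq.heappop(heap)
                if node == dst:
                    return t
                if t > best.get(node, float("inf")):
                    continue                      # stale heap entry
                for nbr, start, end, owlt in contacts.get(node, []):
                    depart = max(t, start)        # wait for the window to open
                    if depart > end:
                        continue                  # window already closed
                    arrive = depart + owlt
                    if arrive < best.get(nbr, float("inf")):
                        best[nbr] = arrive
                        heapq.heappush(heap, (arrive, nbr))
            return None                           # destination unreachable

        plan = {"A": [("B", 0, 100, 5)], "B": [("C", 50, 120, 8)]}
        print(earliest_arrival(plan, "A", "C", t0=10))  # -> 58

    Because earliest arrival time is globally monotonically increasing along a route, the greedy pop order of Dijkstra remains valid, which is the safety property the abstract refers to.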

  4. Model Driven Development of Simulation Models : Defining and Transforming Conceptual Models into Simulation Models by Using Metamodels and Model Transformations

    NARCIS (Netherlands)

    Küçükkeçeci Çetinkaya, D.

    2013-01-01

    Modeling and simulation (M&S) is an effective method for analyzing and designing systems and it is of interest to scientists and engineers from all disciplines. This thesis proposes the application of a model driven software development approach throughout the whole set of M&S activities and it

  5. The Challenge of Grounding Planning in Simulation with an Interactive Model Development Environment

    Science.gov (United States)

    Clement, Bradley J.; Frank, Jeremy D.; Chachere, John M.; Smith, Tristan B.; Swanson, Keith J.

    2011-01-01

    A principal obstacle to fielding automated planning systems is the difficulty of modeling. Physical systems are modeled conventionally based on specification documents and the modeler's understanding of the system. Thus, the model is developed in a way that is disconnected from the system's actual behavior and is vulnerable to manual error. Another obstacle to fielding planners is testing and validation. For a space mission, generated plans must be validated, often by translating them into command sequences that are run in a simulation testbed. Testing in this way is complex and onerous because of the large number of possible plans and states of the spacecraft. However, if used as a source of domain knowledge, the simulator can ease validation. This paper poses a challenge: to ground planning models in the system physics represented by simulation. A proposed interactive model development environment illustrates the integration of planning and simulation to meet the challenge. This integration reveals research paths for automated model construction and validation.

  6. Simulation and modeling of turbulent flows

    CERN Document Server

    Gatski, Thomas B; Lumley, John L

    1996-01-01

    This book provides students and researchers in fluid engineering with an up-to-date overview of turbulent flow research in the areas of simulation and modeling. A key element of the book is the systematic, rational development of turbulence closure models and related aspects of modern turbulent flow theory and prediction. Starting with a review of the spectral dynamics of homogenous and inhomogeneous turbulent flows, succeeding chapters deal with numerical simulation techniques, renormalization group methods and turbulent closure modeling. Each chapter is authored by recognized leaders in their respective fields, and each provides a thorough and cohesive treatment of the subject.

  7. Dynamic modeling and simulation of wind turbines

    International Nuclear Information System (INIS)

    Ghafari Seadat, M.H.; Kheradmand Keysami, M.; Lari, H.R.

    2002-01-01

    Using wind energy to generate electricity in wind turbines is a good way of using renewable energies, and it can also help to protect the environment. The main objective of this paper is computer-aided dynamic modeling, by the energy method, and simulation of a wind turbine. The equations of motion are extracted for the wind turbine system, and the behavior of the system is then obtained by solving these equations. For the simulation, the turbine is modelled with a three-blade rotor facing the wind direction, an induction generator connected to the network, and constant rotational speed. Every part of the wind turbine must be modelled for the simulation; the main parts are the blades, gearbox, shafts and generator

  8. Hybrid simulation models of production networks

    CERN Document Server

    Kouikoglou, Vassilis S

    2001-01-01

    This book is concerned with a most important area of industrial production, that of analysis and optimization of production lines and networks using discrete-event models and simulation. The book introduces a novel approach that combines analytic models and discrete-event simulation. Unlike conventional piece-by-piece simulation, this method observes a reduced number of events between which the evolution of the system is tracked analytically. Using this hybrid approach, several models are developed for the analysis of production lines and networks. The hybrid approach combines speed and accuracy for exceptional analysis of most practical situations. A number of optimization problems, involving buffer design, workforce planning, and production control, are solved through the use of hybrid models.

  9. The behaviour of adaptive boneremodeling simulation models

    NARCIS (Netherlands)

    Weinans, H.; Huiskes, R.; Grootenboer, H.J.

    1992-01-01

    The process of adaptive bone remodeling can be described mathematically and simulated in a computer model, integrated with the finite element method. In the model discussed here, cortical and trabecular bone are described as continuous materials with variable density. The remodeling rule applied to

  10. Analytical system dynamics modeling and simulation

    CERN Document Server

    Fabien, Brian C

    2008-01-01

    This book offering a modeling technique based on Lagrange's energy method includes 125 worked examples. Using this technique enables one to model and simulate systems as diverse as a six-link, closed-loop mechanism or a transistor power amplifier.

  11. Equivalent drawbead model in finite element simulations

    NARCIS (Netherlands)

    Carleer, Bart D.; Carleer, B.D.; Meinders, Vincent T.; Huetink, Han; Lee, J.K.; Kinzel, G.L.; Wagoner, R.

    1996-01-01

    In 3D simulations of the deep drawing process the drawbead geometries are seldom included. Therefore equivalent drawbeads are used. In order to investigate the drawbead behaviour a 2D plane strain finite element model was used. For verification of this model experiments were performed. The analyses

  12. A simulation model for football championships

    NARCIS (Netherlands)

    Koning, RH; Koolhaas, M; Renes, G; Ridder, G

    2003-01-01

    In this paper we discuss a simulation/probability model that identifies the team that is most likely to win a tournament. The model can also be used to answer other questions like 'which team had a lucky draw?' or 'what is the probability that two teams meet at some moment in the tournament?' Input

  14. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  15. Validity of microgravity simulation models on earth

    DEFF Research Database (Denmark)

    Regnard, J; Heer, M; Drummer, C

    2001-01-01

    Many studies have used water immersion and head-down bed rest as experimental models to simulate responses to microgravity. However, some data collected during space missions are at variance or in contrast with observations collected from experimental models. These discrepancies could reflect inc...

  16. Search for the Trilepton Signal of the Minimal Supergravity Model in D0 Run II

    Energy Technology Data Exchange (ETDEWEB)

    Binder, Meta [Munich Univ. (Germany)

    2005-06-01

    A search for associated chargino-neutralino pair production is performed in the trilepton decay channel $q\bar{q} \to \tilde{\chi}^{\pm}_{1}\,\tilde{\chi}^{0}_{2} \to \ell^{\pm}\nu\,\tilde{\chi}^{0}_{1}\,\mu^{+}\mu^{-}\,\tilde{\chi}^{0}_{1}$, using data collected with the D0 detector at a center-of-mass energy of 1.96 TeV at the Fermilab Tevatron Collider. The data sample corresponds to an integrated luminosity of ~300 pb$^{-1}$. A dedicated event selection is applied to all samples, including the data sample and the Monte Carlo simulated samples for the Standard Model background and the Supersymmetry signal. Events with two muons plus an additional isolated track, replacing the requirement of a third charged lepton in the event, are analyzed. Additionally, selected events must have a large amount of missing transverse energy due to the neutrino and the two $\tilde{\chi}^{0}_{1}$. After all selection cuts are applied, 2 data events are found, with an expected number of background events of 1.75 ± 0.34 (stat.) ± 0.46 (syst.). No evidence for Supersymmetry is found, either in this analysis alone or in combination with three other decay channels, and limits on the production cross section times leptonic branching fraction are set. A lower chargino mass limit of 117 GeV at 95% CL is then derived for the mSUGRA model in a region of parameter space with enhanced leptonic branching fractions.

  17. Voluntary Running Attenuates Memory Loss, Decreases Neuropathological Changes and Induces Neurogenesis in a Mouse Model of Alzheimer's Disease.

    Science.gov (United States)

    Tapia-Rojas, Cheril; Aranguiz, Florencia; Varela-Nallar, Lorena; Inestrosa, Nibaldo C

    2016-01-01

    Alzheimer's disease (AD) is a neurodegenerative disorder characterized by loss of memory and cognitive abilities, and the appearance of amyloid plaques composed of the amyloid-β peptide (Aβ) and neurofibrillary tangles formed of tau protein. It has been suggested that exercise might ameliorate the disease; here, we evaluated the effect of voluntary running on several aspects of AD including amyloid deposition, tau phosphorylation, inflammatory reaction, neurogenesis and spatial memory in the double transgenic APPswe/PS1ΔE9 mouse model of AD. We report that voluntary wheel running for 10 weeks decreased Aβ burden, Thioflavin-S-positive plaques and Aβ oligomers in the hippocampus. In addition, runner APPswe/PS1ΔE9 mice showed less phosphorylated tau protein and decreased astrogliosis, evidenced by lower staining of GFAP. Further, runner APPswe/PS1ΔE9 mice showed an increased number of neurons in the hippocampus and exhibited increased cell proliferation and generation of cells positive for the immature neuronal protein doublecortin, indicating that running increased neurogenesis. Finally, runner APPswe/PS1ΔE9 mice showed improved spatial memory performance in the Morris water maze. Altogether, our findings indicate that in APPswe/PS1ΔE9 mice, voluntary running reduced all the neuropathological hallmarks of AD studied, reduced neuronal loss, increased hippocampal neurogenesis and reduced spatial memory loss. These findings support the idea that voluntary exercise might have therapeutic value in AD. © 2015 International Society of Neuropathology.

  18. Landscape Modelling and Simulation Using Spatial Data

    Directory of Open Access Journals (Sweden)

    Amjed Naser Mohsin AL-Hameedawi

    2017-08-01

    Full Text Available In this paper, a procedure is presented for generating a spatial landscape model suited to realistic simulation. The procedure is based on combining spatial data and field measurements with computer graphics produced using the Blender software. From this it is possible to build a 3D simulation based on VIS ALL packages. The objective was to make a model utilising GIS, including inputs from the feature attribute data. These efforts concentrated on assembling an adequate spatial prototype, setting out a facilitation scheme and outlining the intended framework; the eventual result was used in simulation form. The procedure involves not only data gathering, fieldwork and model preparation, but extends to a new method for producing the corresponding 3D simulation mapping, which provides decision makers as well as investors with an independent navigation system for geoscience applications.

  19. Dual-use tools and systematics-aware analysis workflows in the ATLAS Run-2 analysis model

    CERN Document Server

    FARRELL, Steven; The ATLAS collaboration; Calafiura, Paolo; Delsart, Pierre-Antoine; Elsing, Markus; Koeneke, Karsten; Krasznahorkay, Attila; Krumnack, Nils; Lancon, Eric; Lavrijsen, Wim; Laycock, Paul; Lei, Xiaowen; Strandberg, Sara Kristina; Verkerke, Wouter; Vivarelli, Iacopo; Woudstra, Martin

    2015-01-01

    The ATLAS analysis model has been overhauled for the upcoming run of data collection in 2015 at 13 TeV. One key component of this upgrade was the Event Data Model (EDM), which now allows for greater flexibility in the choice of analysis software framework and provides powerful new features that can be exploited by analysis software tools. A second key component of the upgrade is the introduction of a dual-use tool technology, which provides abstract interfaces for analysis software tools to run in either the Athena framework or a ROOT-based framework. The tool interfaces, including a new interface for handling systematic uncertainties, have been standardized for the development of improved analysis workflows and consolidation of high-level analysis tools. This paper will cover the details of the dual-use tool functionality, the systematics interface, and how these features fit into a centrally supported analysis environment.

  1. The Effect of Natural or Simulated Altitude Training on High-Intensity Intermittent Running Performance in Team-Sport Athletes: A Meta-Analysis.

    Science.gov (United States)

    Hamlin, Michael J; Lizamore, Catherine A; Hopkins, Will G

    2018-02-01

    While adaptation to hypoxia at natural or simulated altitude has long been used with endurance athletes, it has only recently gained popularity for team-sport athletes. To analyse the effect of hypoxic interventions on high-intensity intermittent running performance in team-sport athletes. A systematic literature search of five journal databases was performed. Percent change in performance (distance covered) in the Yo-Yo intermittent recovery test (level 1 and level 2 were used without differentiation) in hypoxic (natural or simulated altitude) and control (sea level or normoxic placebo) groups was meta-analyzed with a mixed model. The modifying effects of study characteristics (type and dose of hypoxic exposure, training duration, post-altitude duration) were estimated with fixed effects, random effects allowed for repeated measurement within studies and residual real differences between studies, and the standard-error weighting factors were derived or imputed via standard deviations of change scores. Effects and their uncertainty were assessed with magnitude-based inference, with a smallest important improvement of 4% estimated via between-athlete standard deviations of performance at baseline. Ten studies qualified for inclusion, but two were excluded owing to small sample size and risk of publication bias. Hypoxic interventions occurred over a period of 7-28 days, and the range of total hypoxic exposure (in effective altitude-hours) was 4.5-33 km h in the intermittent-hypoxia studies and 180-710 km h in the live-high studies. There were 11 control and 15 experimental study-estimates in the final meta-analysis. Training effects were moderate and very likely beneficial in the control groups at 1 week (20 ± 14%, percent estimate, ± 90% confidence limits) and 4-week post-intervention (25 ± 23%). The intermittent and live-high hypoxic groups experienced additional likely beneficial gains at 1 week (13 ± 16%; 13 ± 15%) and 4-week post

  2. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work...

  3. Using Active Learning for Speeding up Calibration in Simulation Models.

    Science.gov (United States)

    Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2016-07-01

    Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
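
    The loop structure of such an approach can be sketched as follows; this is an assumed, toy rendering in which a cheap analytic function stands in for the expensive UWBCS runs, and a scikit-learn neural network serves as the surrogate that picks promising parameter combinations. The names `run_simulation` and `score` are placeholders, not part of the published model.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def run_simulation(p):
            return np.sin(p).sum()          # placeholder for one expensive model run

        def score(out, target=1.0):
            return -abs(out - target)       # closeness to observed data (higher is better)

        rng = np.random.default_rng(0)
        pool = rng.uniform(0, np.pi, size=(10_000, 3))  # candidate parameter combinations
        idx = rng.choice(len(pool), 50, replace=False)  # small random starting design
        X = pool[idx]
        y = np.array([score(run_simulation(p)) for p in X])

        for _ in range(5):                               # active-learning rounds
            surrogate = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                     random_state=0).fit(X, y)
            pick = pool[np.argsort(surrogate.predict(pool))[-20:]]  # most promising candidates
            X = np.vstack([X, pick])
            y = np.concatenate([y, [score(run_simulation(p)) for p in pick]])

    Only the picked candidates are ever simulated, which mirrors how the study evaluated roughly 1.49% of the full parameter grid.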

  4. A queuing model for road traffic simulation

    International Nuclear Information System (INIS)

    Guerrouahane, N.; Aissani, D.; Bouallouche-Medjkoune, L.; Farhi, N.

    2015-01-01

    We present in this article a stochastic queuing model for road traffic. The model is based on the M/G/c/c state-dependent queuing model and is inspired by the deterministic Godunov scheme for road traffic simulation. We first propose a variant of the M/G/c/c state-dependent model that works with density-flow fundamental diagrams rather than density-speed relationships. We then extend this model in order to consider upstream traffic demand as well as downstream traffic supply. Finally, we show how to model a whole road by concatenating road sections as in the deterministic Godunov scheme.
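
    For intuition on the deterministic scheme that the queuing model mirrors, here is a minimal Godunov (cell-transmission) update with a triangular density-flow fundamental diagram, using upstream demand and downstream supply as boundary terms. All constants are illustrative assumptions, not values from the article.

        import numpy as np

        # Triangular fundamental diagram (assumed): free speed, wave speed, jam density.
        v_f, w, rho_max = 30.0, 6.0, 0.2
        rho_c = rho_max * w / (v_f + w)     # critical density
        q_max = v_f * rho_c                 # capacity (veh/s)

        def demand(rho):                    # sending flow of a cell
            return np.minimum(v_f * rho, q_max)

        def supply(rho):                    # receiving capacity of a cell
            return np.minimum(w * (rho_max - rho), q_max)

        def godunov_step(rho, dt, dx, q_in, q_out_cap):
            """One cell-transmission update with boundary demand/supply."""
            flux = np.minimum(demand(rho[:-1]), supply(rho[1:]))       # inter-cell flows
            inflow = np.concatenate(([min(q_in, supply(rho[0]))], flux))
            outflow = np.concatenate((flux, [min(demand(rho[-1]), q_out_cap)]))
            return rho + dt / dx * (inflow - outflow)

        rho = np.full(100, 0.05)            # 100 cells of 50 m at moderate density
        for _ in range(600):
            rho = godunov_step(rho, dt=1.0, dx=50.0, q_in=0.4, q_out_cap=0.3)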

  5. Quantitative interface models for simulating microstructure evolution

    International Nuclear Information System (INIS)

    Zhu, J.Z.; Wang, T.; Zhou, S.H.; Liu, Z.K.; Chen, L.Q.

    2004-01-01

    To quantitatively simulate microstructural evolution in real systems, we investigated three different interface models: a sharp-interface model implemented by the software DICTRA and two diffuse-interface models which use either physical order parameters or artificial order parameters. A particular example is considered: the diffusion-controlled growth of a γ′ precipitate in a supersaturated γ matrix in Ni-Al binary alloys. All three models use thermodynamic and kinetic parameters from the same databases. The temporal evolution profiles of composition from the different models are shown to agree with each other. The focus is on examining the advantages and disadvantages of each model as applied to microstructure evolution in alloys

  6. Dark Matter Benchmark Models for Early LHC Run-2 Searches: Report of the ATLAS/CMS Dark Matter Forum

    CERN Document Server

    Abercrombie, Daniel; Akilli, Ece; Alcaraz Maestre, Juan; Allen, Brandon; Alvarez Gonzalez, Barbara; Andrea, Jeremy; Arbey, Alexandre; Azuelos, Georges; Azzi, Patrizia; Backovic, Mihailo; Bai, Yang; Banerjee, Swagato; Beacham, James; Belyaev, Alexander; Boveia, Antonio; Brennan, Amelia Jean; Buchmueller, Oliver; Buckley, Matthew R.; Busoni, Giorgio; Buttignol, Michael; Cacciapaglia, Giacomo; Caputo, Regina; Carpenter, Linda; Filipe Castro, Nuno; Gomez Ceballos, Guillelmo; Cheng, Yangyang; Chou, John Paul; Cortes Gonzalez, Arely; Cowden, Chris; D'Eramo, Francesco; De Cosa, Annapaola; De Gruttola, Michele; De Roeck, Albert; De Simone, Andrea; Deandrea, Aldo; Demiragli, Zeynep; DiFranzo, Anthony; Doglioni, Caterina; du Pree, Tristan; Erbacher, Robin; Erdmann, Johannes; Fischer, Cora; Flaecher, Henning; Fox, Patrick J.; Fuks, Benjamin; Genest, Marie-Helene; Gomber, Bhawna; Goudelis, Andreas; Gramling, Johanna; Gunion, John; Hahn, Kristian; Haisch, Ulrich; Harnik, Roni; Harris, Philip C.; Hoepfner, Kerstin; Hoh, Siew Yan; Hsu, Dylan George; Hsu, Shih-Chieh; Iiyama, Yutaro; Ippolito, Valerio; Jacques, Thomas; Ju, Xiangyang; Kahlhoefer, Felix; Kalogeropoulos, Alexis; Kaplan, Laser Seymour; Kashif, Lashkar; Khoze, Valentin V.; Khurana, Raman; Kotov, Khristian; Kovalskyi, Dmytro; Kulkarni, Suchita; Kunori, Shuichi; Kutzner, Viktor; Lee, Hyun Min; Lee, Sung-Won; Liew, Seng Pei; Lin, Tongyan; Lowette, Steven; Madar, Romain; Malik, Sarah; Maltoni, Fabio; Martinez Perez, Mario; Mattelaer, Olivier; Mawatari, Kentarou; McCabe, Christopher; Megy, Theo; Morgante, Enrico; Mrenna, Stephen; Narayanan, Siddharth M.; Nelson, Andy; Novaes, Sergio F.; Padeken, Klaas Ole; Pani, Priscilla; Papucci, Michele; Paulini, Manfred; Paus, Christoph; Pazzini, Jacopo; Penning, Bjorn; Peskin, Michael E.; Pinna, Deborah; Procura, Massimiliano; Qazi, Shamona F.; Racco, Davide; Re, Emanuele; Riotto, Antonio; Rizzo, Thomas G.; Roehrig, Rainer; Salek, David; Sanchez Pineda, Arturo; Sarkar, Subir; Schmidt, Alexander; Schramm, Steven Randolph; Shepherd, William; Singh, Gurpreet; Soffi, Livia; Srimanobhas, Norraphat; Sung, Kevin; Tait, Tim M.P.; Theveneaux-Pelzer, Timothee; Thomas, Marc; Tosi, Mia; Trocino, Daniele; Undleeb, Sonaina; Vichi, Alessandro; Wang, Fuquan; Wang, Lian-Tao; Wang, Ren-Jie; Whallon, Nikola; Worm, Steven; Wu, Mengqing; Wu, Sau Lan; Yang, Hongtao; Yang, Yong; Yu, Shin-Shan; Zaldivar, Bryan; Zanetti, Marco; Zhang, Zhiqing; Zucchetta, Alberto

    2015-01-01

    This document is the final report of the ATLAS-CMS Dark Matter Forum, a forum organized by the ATLAS and CMS collaborations with the participation of experts on theories of Dark Matter, to select a minimal basis set of dark matter simplified models that should support the design of the early LHC Run-2 searches. A prioritized, compact set of benchmark models is proposed, accompanied by studies of the parameter space of these models and a repository of generator implementations. This report also addresses how to apply the Effective Field Theory formalism for collider searches and presents the results of such interpretations.

  7. A simulation model for probabilistic analysis of Space Shuttle abort modes

    Science.gov (United States)

    Hage, R. T.

    1993-11-01

    A simulation model developed to provide a probabilistic analysis tool for studying the various space transportation system abort mode situations is presented. The simulation model is based on Monte Carlo simulation of an event-tree diagram which accounts for events during the space transportation system's ascent and its abort modes. The simulation model considers just the propulsion elements of the shuttle system (i.e., external tank, main engines, and solid boosters). The model was developed to provide a better understanding of the probability of occurrence and successful completion of abort modes during the vehicle's ascent. The results of the simulation runs discussed are for demonstration purposes only; they are not official NASA probability estimates.
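
    A stripped-down version of such a Monte Carlo event-tree walk is sketched below; the failure and abort-success probabilities are placeholders, in keeping with the paper's note that its own numbers are for demonstration only.

        import numpy as np

        rng = np.random.default_rng(7)
        # Hypothetical per-flight probabilities (not NASA estimates):
        P_SRB_FAIL, P_SSME_FAIL, P_ABORT_OK = 0.005, 0.02, 0.85

        def one_ascent():
            """Walk the event tree for a single simulated ascent."""
            if rng.random() < P_SRB_FAIL:
                return "loss"                    # solid booster failure: no abort assumed here
            if rng.random() < P_SSME_FAIL:       # main engine out: attempt an abort mode
                return "abort_ok" if rng.random() < P_ABORT_OK else "loss"
            return "nominal"

        N = 100_000
        outcomes = [one_ascent() for _ in range(N)]
        for k in ("nominal", "abort_ok", "loss"):
            print(k, outcomes.count(k) / N)      # estimated outcome probabilities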

  8. Global ice volume variations through the last glacial cycle simulated by a 3-D ice-dynamical model

    NARCIS (Netherlands)

    Bintanja, R.; Wal, R.S.W. van de; Oerlemans, J.

    2002-01-01

    A coupled ice sheet—ice shelf—bedrock model was run at 20km resolution to simulate the evolution of global ice cover during the last glacial cycle. The mass balance model uses monthly mean temperature and precipitation as input and incorporates the albedo—mass balance feedback. The model is forced

  9. Reduced Gasoline Surrogate (Toluene/n-Heptane/iso-Octane) Chemical Kinetic Model for Compression Ignition Simulations

    KAUST Repository

    Sarathy, Mani

    2018-04-03

    Toluene primary reference fuel (TPRF) (mixture of toluene, iso-octane and heptane) is a suitable surrogate to represent a wide spectrum of real fuels with varying octane sensitivity. Investigating different surrogates in engine simulations is a prerequisite to identify the best matching mixture. However, running 3D engine simulations using detailed models is currently impossible and reduction of detailed models is essential. This work presents an AramcoMech reduced kinetic model developed at King Abdullah University of Science and Technology (KAUST) for simulating complex TPRF surrogate blends. A semi-decoupling approach was used together with species and reaction lumping to obtain a reduced kinetic model. The model was widely validated against experimental data including shock tube ignition delay times and premixed laminar flame speeds. Finally, the model was utilized to simulate the combustion of a low reactivity gasoline fuel under partially premixed combustion conditions.

  10. Analyzing Strategic Business Rules through Simulation Modeling

    Science.gov (United States)

    Orta, Elena; Ruiz, Mercedes; Toro, Miguel

    Service Oriented Architecture (SOA) holds promise for business agility since it allows business process to change to meet new customer demands or market needs without causing a cascade effect of changes in the underlying IT systems. Business rules are the instrument chosen to help business and IT to collaborate. In this paper, we propose the utilization of simulation models to model and simulate strategic business rules that are then disaggregated at different levels of an SOA architecture. Our proposal is aimed to help find a good configuration for strategic business objectives and IT parameters. The paper includes a case study where a simulation model is built to help business decision-making in a context where finding a good configuration for different business parameters and performance is too complex to analyze by trial and error.

  11. Reliable low precision simulations in land surface models

    Science.gov (United States)

    Dawson, Andrew; Düben, Peter D.; MacLeod, David A.; Palmer, Tim N.

    2017-12-01

    Weather and climate models must continue to increase in both resolution and complexity in order that forecasts become more accurate and reliable. Moving to lower numerical precision may be an essential tool for coping with the demand for ever increasing model complexity in addition to increasing computing resources. However, there have been some concerns in the weather and climate modelling community over the suitability of lower precision for climate models, particularly for representing processes that change very slowly over long time-scales. These processes are difficult to represent using low precision due to time increments being systematically rounded to zero. Idealised simulations are used to demonstrate that a model of deep soil heat diffusion that fails when run in single precision can be modified to work correctly using low precision, by splitting up the model into a small higher precision part and a low precision part. This strategy retains the computational benefits of reduced precision whilst preserving accuracy. This same technique is also applied to a full complexity land surface model, resulting in rounding errors that are significantly smaller than initial condition and parameter uncertainties. Although lower precision will present some problems for the weather and climate modelling community, many of the problems can likely be overcome using a straightforward and physically motivated application of reduced precision.
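
    The failure mode and the fix described above can be demonstrated in a few lines of Python with NumPy half precision; the numbers are illustrative, not taken from the land surface model itself.

        import numpy as np

        steps = 100_000

        # All-low-precision state: the tiny increment rounds to zero every step,
        # so the slow "deep soil" state never changes.
        low = np.float16(300.0)
        for _ in range(steps):
            low += np.float16(1e-4)

        # The paper's strategy: keep a small high-precision accumulator for the
        # slow process while the increments are still computed at low precision.
        state = np.float64(300.0)
        for _ in range(steps):
            incr = np.float16(1e-4)              # produced by the low-precision model
            state += np.float64(incr)            # accumulated at high precision

        print(low, state)                        # ~300.0 versus ~310.0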

  12. Up and running with AutoCAD 2014 2D and 3D drawing and modeling

    CERN Document Server

    Gindis, Elliot

    2013-01-01

    Get ""Up and Running"" with AutoCAD using Gindis's combination of step-by-step instruction, examples, and insightful explanations. The emphasis from the beginning is on core concepts and practical application of AutoCAD in architecture, engineering and design. Equally useful in instructor-led classroom training, self-study, or as a professional reference, the book is written with the user in mind by a long-time AutoCAD professional and instructor based on what works in the industry and the classroom. Strips away complexities, both real and perceived, and reduces AutoCAD t

  13. Could running experience on SPMD computers contribute to the architectural choices for future dedicated computers for high energy physics simulation?

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Silva, J.; Auguin, M.; Boeri, F.

    1989-01-01

    Results obtained on a strongly coupled parallel computer are reported. They concern Monte-Carlo simulation and pattern recognition. Though the calculations were made on an experimental computer of rather low processing power, it is believed that the quoted figures could give useful indications on architectural choices for dedicated computers. (orig.)

  14. Could running experience on SPMD computers contribute to the architectural choices for future dedicated computers for high energy physics simulation

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Silva, J.; Auguin, M.; Boeri, F.

    1989-01-01

    Results obtained on a strongly coupled parallel computer are reported. They concern Monte-Carlo simulation and pattern recognition. Though the calculations were made on an experimental computer of rather low processing power, it is believed that the quoted figures could give useful indications on architectural choices for dedicated computers

  15. Nuclear reactor core modelling in multifunctional simulators

    Energy Technology Data Exchange (ETDEWEB)

    Puska, E.K. [VTT Energy, Nuclear Energy, Espoo (Finland)

    1999-06-01

    The thesis concentrates on the development of nuclear reactor core models for the APROS multifunctional simulation environment and the use of the core models in various kinds of applications. The work was started in 1986 as a part of the development of the entire APROS simulation system. The aim was to create core models that would serve in a reliable manner in an interactive, modular and multifunctional simulator/plant analyser environment. One-dimensional and three-dimensional core neutronics models have been developed. Both models have two energy groups and six delayed neutron groups. The three-dimensional finite difference type core model is able to describe both BWR- and PWR-type cores with quadratic fuel assemblies and VVER-type cores with hexagonal fuel assemblies. The one- and three-dimensional core neutronics models can be connected with the homogeneous, the five-equation or the six-equation thermal hydraulic models of APROS. The key feature of APROS is that the same physical models can be used in various applications. The nuclear reactor core models of APROS have been built in such a manner that the same models can be used in simulator and plant analyser applications, as well as in safety analysis. In the APROS environment the user can select the number of flow channels in the three-dimensional reactor core and either the homogeneous, the five- or the six-equation thermal hydraulic model for these channels. The thermal hydraulic model and the number of flow channels have a decisive effect on the calculation time of the three-dimensional core model and thus, at present, these particular selections make the major difference between a safety analysis core model and a training simulator core model. The emphasis of this thesis is on the three-dimensional core model and its capability to analyse symmetric and asymmetric events in the core. The factors affecting the calculation times of various three-dimensional BWR, PWR and WWER-type APROS core models have been

  16. Use of rainfall-simulator data in precipitation-runoff modeling studies

    Science.gov (United States)

    Lusby, G.C.; Lichty, R.W.

    1983-01-01

    Results of a study using a rainfall simulator to define infiltration parameters for use in watershed modeling are presented. A total of 23 rainfall-simulation runs were made on five small plots representing four representative soil-vegetation types of the study watershed in eastern Colorado. Data for three observed rainfall-runoff events were recorded by gages on four of the plots. Data from all events were used to develop best-fit parameters of the Green and Ampt infiltration equation. The hydraulic conductivity of the transmission zone, KSAT, grossly controlled the goodness of fit of all modeling attempts. Results of fitting KSAT to reproduce runoff from rainfall simulator runs and results of fitting KSAT to reproduce runoff from observed rainfall-runoff events are inconsistent. Variations in results from site to site and at different times of the year were observed. (USGS)
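
    A hedged sketch of the kind of parameter fitting described: the Green and Ampt infiltration-rate form f = KSAT (1 + psi*dtheta/F) can be fitted for KSAT against observations of infiltration rate versus cumulative infiltration, here with synthetic data and an illustrative wetting-front parameter, not the study's field measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

PSI_DTHETA = 5.0  # wetting-front suction x moisture deficit (cm), illustrative

def green_ampt_rate(F, Ks):
    """Green-Ampt infiltration rate (cm/h) as a function of cumulative
    infiltration F (cm), with KSAT as the only free parameter."""
    return Ks * (1.0 + PSI_DTHETA / F)

# Synthetic "rainfall-simulator" observations with noise (true Ks = 1.2 cm/h).
F_obs = np.linspace(0.5, 10.0, 20)
rng = np.random.default_rng(0)
f_obs = green_ampt_rate(F_obs, 1.2) * (1 + 0.05 * rng.standard_normal(F_obs.size))

Ks_fit, _cov = curve_fit(green_ampt_rate, F_obs, f_obs, p0=[1.0])
print(f"fitted KSAT = {Ks_fit[0]:.2f} cm/h")
```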

  17. Inverse dynamic modelling of jumping in the red-legged running frog, Kassina maculata.

    Science.gov (United States)

    Porro, Laura B; Collings, Amber J; Eberhard, Enrico A; Chadwick, Kyle P; Richards, Christopher T

    2017-05-15

    Although the red-legged running frog, Kassina maculata, is secondarily a walker/runner, it retains the capacity for multiple locomotor modes, including jumping at a wide range of angles (nearly 70 deg). Using simultaneous hind limb kinematics and single-foot ground reaction forces (GRFs), we performed inverse dynamics analyses to calculate moment arms and torques about the hind limb joints during jumping at different angles in K. maculata. We show that forward thrust is generated primarily at the hip and ankle, while body elevation is primarily driven by the ankle. Steeper jumps are achieved by increased thrust at the hip and ankle and greater downward rotation of the distal limb segments. Because of its proximity to the GRF vector, knee posture appears to be important in controlling torque directions about this joint and, potentially, torque magnitudes at more distal joints. Other factors correlated with higher jump angles include increased body angle in the preparatory phase, faster joint openings and increased joint excursion, higher ventrally directed force, and greater acceleration and velocity. Finally, we demonstrate that jumping performance in K. maculata does not appear to be compromised by presumed adaptation to walking/running. Our results provide new insights into how frogs engage in a wide range of locomotor behaviours and the multi-functionality of anuran limbs. © 2017. Published by The Company of Biologists Ltd.
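
    For readers unfamiliar with the inverse-dynamics step, a minimal two-dimensional sketch with invented numbers (not the study's data): the external torque a ground reaction force exerts about a joint is the cross product of the moment arm, i.e. the vector from the joint to the centre of pressure, with the GRF.

```python
import numpy as np

def joint_torque_2d(joint_xy, cop_xy, grf_xy):
    """External torque (N*m, positive counter-clockwise) about a joint from a
    ground reaction force applied at the centre of pressure (CoP)."""
    r = np.asarray(cop_xy) - np.asarray(joint_xy)   # moment arm vector
    F = np.asarray(grf_xy)
    return r[0] * F[1] - r[1] * F[0]                # 2D cross product

# Illustrative values only: ankle joint 3 cm behind/above the CoP,
# GRF mostly vertical, frog-scale forces in newtons.
print(joint_torque_2d(joint_xy=(-0.03, 0.03), cop_xy=(0.0, 0.0),
                      grf_xy=(0.2, 1.5)))
```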

  18. Tsunami Simulators in Physical Modelling - Concept to Practical Solutions

    Science.gov (United States)

    Chandler, Ian; Allsop, William; Robinson, David; Rossetto, Tiziana; McGovern, David; Todd, David

    2017-04-01

    Whilst many researchers have conducted simple 'tsunami impact' studies, few engineering tools are available to assess the onshore impacts of tsunami, with no agreed methods available to predict loadings on coastal defences, buildings or related infrastructure. Most previous impact studies have relied upon unrealistic waveforms (solitary or dam-break waves and bores) rather than full-duration tsunami waves, or have used simplified models of nearshore and over-land flows. Over the last 10+ years, pneumatic Tsunami Simulators for the hydraulic laboratory have been developed into an exciting and versatile technology, allowing the forces of real-world tsunami to be reproduced and measured in a laboratory environment for the first time. These devices have been used to model generic elevated and N-wave tsunamis up to and over simple shorelines, and at example coastal defences and infrastructure. They have also reproduced full-duration tsunamis, including Mercator 2004 and Tohoku 2011, both at 1:50 scale. Engineering scale models of these tsunamis have measured wave run-up on simple slopes, forces on idealised sea defences, pressures/forces on buildings, and scour at idealised buildings. This presentation will describe how these Tsunami Simulators work, demonstrate how they have generated tsunami waves longer than the facilities within which they operate, and will present research results from three generations of Tsunami Simulators. Highlights of direct importance to natural hazard modellers and coastal engineers include measurements of wave run-up levels, forces on single and multiple buildings, and comparison with previous theoretical predictions. Multiple buildings have two malign effects: the density of buildings relative to flow area (blockage ratio) increases water depths and flow velocities in the 'streets', and the increased building densities themselves also increase the consequences of the flow per unit area (both personal and monetary). The most recent study with the Tsunami...

  19. Large wind power plants modeling techniques for power system simulation studies

    Energy Technology Data Exchange (ETDEWEB)

    Larose, Christian; Gagnon, Richard; Turmel, Gilbert; Giroux, Pierre; Brochu, Jacques [IREQ Hydro-Quebec Research Institute, Varennes, QC (Canada); McNabb, Danielle; Lefebvre, Daniel [Hydro-Quebec TransEnergie, Montreal, QC (Canada)

    2009-07-01

    This paper presents efficient modeling techniques for the simulation of large wind power plants in the EMT domain using a parallel supercomputer. Using these techniques, large wind power plants can be simulated in detail, with each wind turbine individually represented, as well as the collector and receiving network. The simulation speed of the resulting models is fast enough to perform both EMT and transient stability studies. The techniques are applied to develop a detailed EMT model of a generic wind power plant consisting of 73 x 1.5-MW doubly-fed induction generator (DFIG) wind turbines. Validation of the modeling techniques is presented using a comparison with a Matlab/SimPowerSystems simulation. To demonstrate the simulation capabilities of these modeling techniques, simulations involving a 120-bus receiving network with two generic wind power plants (146 wind turbines) are performed. The complete system is modeled using the Hypersim simulator and Matlab/SimPowerSystems. The simulations are performed on a 32-processor supercomputer using an EMTP-like solution with a time step of 18.4 μs. The simulation performance is 10 times slower than real time, which is a huge gain in performance compared to traditional tools. The simulation is designed to run continuously without stopping, enabling thousands of tests to be performed via automatic testing tools. (orig.)

  20. Kanban simulation model for production process optimization

    Directory of Open Access Journals (Sweden)

    Golchev Riste

    2015-01-01

    Full Text Available A long time has passed since the KANBAN system was established as an efficient method for coping with excessive inventory. Still, the possibilities for its improvement through integration with other approaches should be investigated further. The basic research challenge of this paper is to present the benefits of KANBAN implementation supported with Discrete Event Simulation (DES). To that end, the basics of the KANBAN system are first presented, with emphasis on the information and material flow, together with a methodology for implementation of a KANBAN system. An analysis of combining simulation with this methodology is then presented. The paper concludes with a practical example which shows that, through understanding the philosophy of the KANBAN implementation methodology and the simulation methodology, a simulation model can be created which can serve as a basis for a variety of experiments that can be conducted within a short period of time, resulting in production process optimization.
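
    A hedged sketch of a KANBAN loop in a discrete-event simulation, written with the open-source SimPy library rather than the tooling used in the paper: a consumer returns kanban cards upstream, and the workstation produces only when a card authorises it, which caps work-in-progress at the number of cards.

```python
import simpy

N_KANBAN, SIM_TIME = 3, 50  # cards in the loop; simulated time units

def workstation(env, kanban, finished):
    """Produce one part per kanban card; the card travels with the part."""
    while True:
        yield kanban.get()                 # wait for production authorisation
        yield env.timeout(2)               # processing time
        yield finished.put(env.now)        # part (with its card) to the buffer

def customer(env, kanban, finished):
    """Consume parts at regular intervals, releasing cards back upstream."""
    while True:
        yield env.timeout(4)               # demand inter-arrival time
        part = yield finished.get()
        print(f"t={env.now:4.1f}  consumed part finished at t={part}")
        yield kanban.put(1)                # card returns -> authorises production

env = simpy.Environment()
kanban = simpy.Store(env, capacity=N_KANBAN)
finished = simpy.Store(env, capacity=N_KANBAN)
for _ in range(N_KANBAN):
    kanban.put(1)                          # all cards start free
env.process(workstation(env, kanban, finished))
env.process(customer(env, kanban, finished))
env.run(until=SIM_TIME)
```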

  1. Vermont Yankee simulator BOP model upgrade

    International Nuclear Information System (INIS)

    Alejandro, R.; Udbinac, M.J.

    2006-01-01

    The Vermont Yankee simulator has undergone significant changes in the 20 years since the original order was placed. After the move from the original Unix to the MS Windows environment, and the upgrade to the latest version of SimPort, now called MASTER, the platform was set for an overhaul and replacement of major plant system models. Over a period of a few months, the VY simulator team, in partnership with WSC engineers, replaced outdated legacy models of the main steam, condenser, condensate, circulating water, feedwater and feedwater heaters, and main turbine and auxiliaries. The timing was ideal, as the plant was undergoing a power up-rate, so the opportunity was taken to replace the legacy models with industry-leading, true on-line, object-oriented graphical models. Due to the efficiency of design and ease of use of the MASTER tools, VY staff performed the majority of the modeling work themselves with great success, with only occasional assistance from WSC, in a relatively short time period, despite having to maintain all of their 'regular' simulator maintenance responsibilities. This paper will provide a more detailed view of the VY simulator, including how it is used and how it has benefited from the enhancements and upgrades implemented during the project. (author)

  2. The effect of two training models on the average changes in running speed in 2400m races

    Directory of Open Access Journals (Sweden)

    Bolas Nikolaos

    2014-01-01

    Full Text Available Running at an even pace is, in both physical and tactical aspects, an essential factor in achieving good results in middle and long distance races. The appropriate strategy for running a tactically effective race starts with selecting the optimal running speed. Two models of training lasting six weeks were applied to a group of subjects (N=43) composed of students from the Faculty of Sport and Physical Education, University of Belgrade. The aim of the study was to determine how the applied models of training would affect the deviations of running speed from the mean values in 2400m races when running for the best result, and also how they would affect the improvement of aerobic capacity, expressed through maximal oxygen uptake. The analysis of the obtained results showed that no statistically significant differences in the average deviations of running speed from the mean values in 2400m races were recorded in either of the experimental groups, in either the initial (G1=2.44±1.74% and G2=1±0.75%) or the final measurements (G1=3.72±3.69% and G2=4.57±3.63%). Although there were no statistically significant differences after the training stimulus, the subjects achieved better results, that is, they improved their running speed in the final (G1=4.12±0.48 m/s and G2=4.23±0.31 m/s) as compared with the initial measurement (G1=3.7±0.36 m/s and G2=3.84±0.38 m/s). The results of the study showed that in both groups there was a statistically significant improvement in the final measurement (G1=56.05±6.91 ml/kg/min and G2=59.55±6.95 ml/kg/min) as compared to the initial measurement (G1=53.71±7.23 ml/kg/min and G2=54.58±6.49 ml/kg/min) regarding maximal oxygen uptake, so both training models have a significant effect on this variable. The results obtained could make a significant contribution when working with students and the school population, assuming that in the lessons of theory and

  3. Crashworthiness Simulation of Front Bumper Model of MOROLIPI V2 During Head-on Collision

    Directory of Open Access Journals (Sweden)

    Nugraha Aditya Sukma

    2016-01-01

    Full Text Available It is necessary to conduct impact tests for bumper collisions, since the bumper serves as a protective component of a vehicle during a collision. In this paper, a crashworthiness simulation of a front bumper model corresponding to the size of MOROLIPI V2 is conducted. The purpose of this study was to obtain simulation results that can serve as a reference for predicting the mechanical behaviour of the bumper due to collision. From the simulation results, the post-collision deformation and the von Mises stress under a static dummy load can be predicted. To simulate the impact on the bumper, ANSYS Explicit Dynamics is used. Simulations were run at three values of mobile robot speed (5, 10 and 20 m/s). The simulation results also show the contact force due to the collision, deformation, stress and internal energy of the bumper beam. It was found that the speed of the vehicle is the dominant parameter determining the results of the crashworthiness simulation.

  4. Wave Run-Up on Offshore Wind Turbines

    DEFF Research Database (Denmark)

    Ramirez, Jorge Robert Rodriguez

    This study has investigated the interaction of water waves with a circular structure, known as the wave run-up phenomenon. This run-up phenomenon has been simulated by the use of computational fluid dynamic models. The numerical model (NS3) used in this study has been verified rigorously against a number of cases. Regular and freak waves have been generated in a numerical wave tank with a gentle slope in order to address the study of the wave run-up on a circular cylinder. From the computational side it can be said that it is inexpensive. Furthermore, the comparison of the current numerical model ... to the cylinder. Based on appropriate analysis, the collected data has been analysed with the stream function theory to obtain the relevant parameters for the use of the predicted wave run-up formula. An analytical approach has been pursued and solved for individual waves. Maximum run-up and 2% run-up were studied...

  5. RUN COORDINATION

    CERN Multimedia

    C. Delaere

    2013-01-01

    Since the LHC ceased operations in February, a lot has been going on at Point 5, and Run Coordination continues to monitor closely the advance of maintenance and upgrade activities. In the last months, the Pixel detector was extracted and is now stored in the pixel lab in SX5; the beam pipe has been removed and ME1/1 removal has started. We regained access to the vactank and some work on the RBX of HB has started. Since mid-June, electricity and cooling are back in S1 and S2, allowing us to turn equipment back on, at least during the day. 24/7 shifts are not foreseen in the next weeks, and safety tours are mandatory to keep equipment on overnight, but re-commissioning activities are slowly being resumed. Given the (slight) delays accumulated in LS1, it was decided to merge the two global runs initially foreseen into a single exercise during the week of 4 November 2013. The aim of the global run is to check that we can run (parts of) CMS after several months switched off, with the new VME PCs installed, th...

  6. TRANSFORM - TRANsient Simulation Framework of Reconfigurable Models

    Energy Technology Data Exchange (ETDEWEB)

    2017-09-01

    Existing development tools for early stage design and scoping of energy systems are often time-consuming to use, proprietary, and do not contain the necessary functionality to model complete systems (i.e., controls, primary, and secondary systems) in a common platform. The TRANSFORM tool, based on the Modelica programming language, (1) provides a standardized, common simulation environment for early design of energy systems (i.e., power plants), (2) provides a library of baseline component modules to be assembled into full plant models using available geometry, design, and thermal-hydraulic data, (3) defines modeling conventions for interconnecting component models, and (4) establishes user interfaces and support tools to facilitate simulation development (i.e., configuration and parameterization), execution, and results display and capture.

  7. An Advanced HIL Simulation Battery Model for Battery Management System Testing

    DEFF Research Database (Denmark)

    Barreras, Jorge Varela; Fleischer, Christian; Christensen, Andreas Elkjær

    2016-01-01

    Developers and manufacturers of battery management systems (BMSs) require extensive testing of controller Hardware (HW) and Software (SW), such as the analog front-end and the performance of generated control code. In comparison with tests conducted on real batteries, tests conducted on a state-of-the-art hardware-in-the-loop (HIL) simulator can be more cost and time effective, easier to reproduce, and safer beyond the normal range of operation, especially at early stages in the development process or during fault insertion. In this paper, an HIL simulation battery model is developed for purposes of BMS testing on a commercial HIL simulator. A multicell electrothermal Li-ion battery (LIB) model is integrated in a system-level simulation. Then, the LIB system model is converted to C code and run in real time with the HIL simulator. Finally, in order to demonstrate the capabilities of the setup...
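
    As a hedged illustration of the kind of cell model that might sit inside such an HIL battery simulation, the sketch below steps a first-order Thevenin equivalent circuit (open-circuit-voltage source, series resistance, one RC pair) with forward Euler; all parameter values are invented for the sketch and are not taken from the paper.

```python
import numpy as np

# Illustrative first-order Thevenin parameters (not from the paper).
Q_AH, R0, R1, C1 = 2.5, 0.015, 0.020, 1200.0   # capacity (Ah), ohms, farads

def ocv(soc):
    """Toy open-circuit-voltage curve vs state of charge (0..1)."""
    return 3.0 + 1.2 * soc - 0.1 * np.cos(3.0 * soc)

def step_cell(soc, v_rc, i_amps, dt):
    """Advance SoC and RC-pair voltage one step; return terminal voltage.
    Discharge current is positive."""
    soc = soc - i_amps * dt / (Q_AH * 3600.0)            # coulomb counting
    v_rc = v_rc + dt * (i_amps / C1 - v_rc / (R1 * C1))  # RC relaxation
    v_term = ocv(soc) - i_amps * R0 - v_rc
    return soc, v_rc, v_term

soc, v_rc = 0.9, 0.0
for _ in range(600):                                     # 10 min of 1C discharge
    soc, v_rc, v = step_cell(soc, v_rc, i_amps=2.5, dt=1.0)
print(f"SoC={soc:.3f}, terminal voltage={v:.3f} V")
```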

  8. Modelling erosion damage from low-energy plasma gun simulations of disruptions

    International Nuclear Information System (INIS)

    Ehst, D.A.; Hassanein, A.

    1993-10-01

    Energy transfer to material surfaces is dominated by photon radiation through low temperature plasma vapors if tokamak disruptions are due to low kinetic energy particles (≲100 eV). Simple models of radiation transport are derived and incorporated into a fast-running computer routine to model this process. The results of simulations are in good agreement with plasma gun erosion tests on several metal targets

  9. Biological transportation networks: Modeling and simulation

    KAUST Repository

    Albi, Giacomo

    2015-09-15

    We present a model for biological network formation originally introduced by Cai and Hu [Adaptation and optimization of biological transport networks, Phys. Rev. Lett. 111 (2013) 138701]. The modeling of fluid transportation (e.g., leaf venation and angiogenesis) and ion transportation networks (e.g., neural networks) is explained in detail and basic analytical features like the gradient flow structure of the fluid transportation network model and the impact of the model parameters on the geometry and topology of network formation are analyzed. We also present a numerical finite-element based discretization scheme and discuss sample cases of network formation simulations.

  10. A universal simulator for ecological models

    DEFF Research Database (Denmark)

    Holst, Niels

    2013-01-01

    Software design is an often neglected issue in ecological models, even though bad software design often becomes a hindrance for re-using, sharing and even grasping an ecological model. In this paper, the methodology of agile software design was applied to the domain of ecological models. Thus, the principles for a universal design of ecological models were arrived at. To exemplify this design, the open-source software Universal Simulator was constructed using C++ and XML and is provided as a resource for inspiration.

  11. Object Oriented Modelling and Dynamical Simulation

    DEFF Research Database (Denmark)

    Wagner, Falko Jens; Poulsen, Mikael Zebbelin

    1998-01-01

    This report with appendix describes the work done in a master's project at DTU. The goal of the project was to develop a concept for simulation of dynamical systems based on object-oriented methods. The result was a library of C++ classes, for use both when building component-based models and when

  12. Preliminary multidomain modelling and simulation study of a horizontal axis wind turbine (HAWT) tower vibration

    African Journals Online (AJOL)

    Iliyasu, I.; Iliyasu, I.; Tanimu, I. K.; Obada, D. O.

    Department of Mechanical Engineering, Ahmadu Bello University, Zaria, Kaduna State, Nigeria.

  13. Reproducibility in Computational Neuroscience Models and Simulations

    Science.gov (United States)

    McDougal, Robert A.; Bulanova, Anna S.; Lytton, William W.

    2016-01-01

    Objective: Like all scientific research, computational neuroscience research must be reproducible. Big data science, including simulation research, cannot depend exclusively on journal articles as the method to provide the sharing and transparency required for reproducibility. Methods: Ensuring model reproducibility requires the use of multiple standard software practices and tools, including version control, strong commenting and documentation, and code modularity. Results: Building on these standard practices, model sharing sites and tools have been developed that fit into several categories: (1) standardized neural simulators; (2) shared computational resources; (3) declarative model descriptors, ontologies and standardized annotations; and (4) model sharing repositories and sharing standards. Conclusion: A number of complementary innovations have been proposed to enhance sharing, transparency and reproducibility. The individual user can be encouraged to make use of version control, commenting, documentation and modularity in development of models. The community can help by requiring model sharing as a condition of publication and funding. Significance: Model management will become increasingly important as multiscale models become larger, more detailed and correspondingly more difficult to manage by any single investigator or single laboratory. Additional big data management complexity will come as the models become more useful in interpreting experiments, thus increasing the need to ensure clear alignment between modeling data, both parameters and results, and experiment. PMID:27046845

  14. A mathematical model for the simulation of thermal transients in the water loop of IPEN

    International Nuclear Information System (INIS)

    Pontedeiro, A.C.

    1980-01-01

    A mathematical model for the simulation of thermal transients in the water loop at the Instituto de Pesquisas Energeticas e Nucleares, Sao Paulo, Brazil, is developed. The model is based on energy equations applied to the components of the experimental water loop. The resulting non-linear system of first-order differential equations and non-linear algebraic equations is solved using the IBM 'System/360 Continuous System Modeling Program' (CSMP). The computer running time is optimized and a typical simulation of the water loop is executed. (Author) [pt

  15. Thermohydraulic modeling and simulation of breeder reactors

    International Nuclear Information System (INIS)

    Agrawal, A.K.; Khatib-Rahbar, M.; Curtis, R.T.; Hetrick, D.L.; Girijashankar, P.V.

    1982-01-01

    This paper deals with the modeling and simulation of system-wide transients in LMFBRs. Unprotected events (i.e., those presuming failure of the plant protection system) leading to core melt are not considered in this paper. The existing computational capabilities in the area of protected transients in the US are noted. Various physical and numerical approximations that are made in these codes are discussed. Finally, future directions in the area of model verification and improvement are discussed

  16. A THC Simulator for Modeling Fluid-Rock Interactions

    Science.gov (United States)

    Hamidi, Sahar; Galvan, Boris; Heinze, Thomas; Miller, Stephen

    2014-05-01

    Fluid-rock interactions play an essential role in many earth processes, from a likely influence on earthquake nucleation and aftershocks, to enhanced geothermal systems, carbon capture and storage (CCS), and underground nuclear waste repositories. In THC models, two-way interactions between the different processes (thermal, hydraulic and chemical) are present. Fluid flow influences the permeability of the rock, especially if chemical reactions are taken into account. On the one hand, solute concentration influences fluid properties while, on the other hand, heat can drive further chemical reactions. Estimating heat production from naturally fractured geothermal systems remains a complex problem. Previous works are typically based on a local thermal equilibrium assumption and rarely consider salinity. The salt dissolved in the fluid affects the hydrodynamic and thermodynamic behavior of the system by changing the hydraulic properties of the circulating fluid. Coupled thermal-hydraulic-chemical (THC) models are important for investigating these processes, but what is needed is a coupling to mechanics, resulting in THMC models. Although similar models currently exist (e.g. PFLOTRAN), our objective here is to develop algorithms for implementation on the Graphics Processing Unit (GPU) computer architecture, to be run on GPU clusters. To that aim, we present a two-dimensional numerical simulation of a fully coupled non-isothermal non-reactive solute flow. The thermal part of the simulation models heat transfer processes for either local thermal equilibrium or non-equilibrium cases, and is coupled to a non-reactive mass transfer described by a non-linear diffusion/dispersion model. The flow part of the model includes non-linear Darcian flow for either saturated or unsaturated scenarios. For the unsaturated case, we use the Richards approximation for a mixture of liquid and gas phases. Relative permeability and capillary pressure are determined by the van Genuchten relations, as illustrated below.
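
    The van Genuchten relations referred to above take the following widely used form; the sketch evaluates effective saturation and the Mualem-van Genuchten relative permeability with illustrative, loam-like parameter values, not site-specific ones.

```python
import numpy as np

# Illustrative van Genuchten parameters (loam-like), not site-specific values.
ALPHA, N = 0.036, 1.56          # 1/cm, curve-shape exponent
M = 1.0 - 1.0 / N

def effective_saturation(psi_cm):
    """Se = [1 + (alpha*|psi|)^n]^(-m) for suction head psi < 0; Se = 1 when saturated."""
    psi = np.asarray(psi_cm, dtype=float)
    se = (1.0 + (ALPHA * np.abs(psi)) ** N) ** (-M)
    return np.where(psi >= 0.0, 1.0, se)

def relative_permeability(se):
    """Mualem-van Genuchten: kr = Se^0.5 * [1 - (1 - Se^(1/m))^m]^2."""
    return np.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / M)) ** M) ** 2

psi = np.array([0.0, -50.0, -200.0, -1000.0])   # suction heads in cm
se = effective_saturation(psi)
print(np.round(se, 3), np.round(relative_permeability(se), 4))
```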

  17. Modeling and Simulation Tools: From Systems Biology to Systems Medicine.

    Science.gov (United States)

    Olivier, Brett G; Swat, Maciej J; Moné, Martijn J

    2016-01-01

    Modeling is an integral component of modern biology. In this chapter we look into the role of the model, as it pertains to Systems Medicine, and the software that is required to instantiate and run it. We do this by comparing the development, implementation, and characteristics of tools that have been developed to work with two divergent methodologies: Systems Biology and Pharmacometrics. From the Systems Biology perspective we consider the concept of "Software as a Medical Device" and what this may imply for the migration of research-oriented simulation software into the domain of human health. In our second perspective, we see how in practice hundreds of computational tools already accompany drug discovery and development at every stage of the process. Standardized exchange formats are required to streamline model exchange between tools, which minimizes translation errors and reduces the required time. With the emergence, almost 15 years ago, of the SBML standard, a large part of the domain of interest is already covered, and models can be shared and passed from software to software without recoding them. Until recently the last stage of the process, the pharmacometric analysis used in clinical studies carried out on subject populations, lacked such an exchange medium. We describe a new, emerging exchange format in Pharmacometrics which covers non-linear mixed effects models, the standard statistical model type used in this area. By interfacing these two formats the entire domain can be covered by complementary standards and, subsequently, the corresponding tools.
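
    As a small illustration of the model-exchange workflow described here, the sketch below reads and checks an SBML file with the open-source python-libsbml bindings; the file name is hypothetical.

```python
import libsbml  # pip install python-libsbml

# Hypothetical file name; any SBML Level 2/3 model file would do.
doc = libsbml.readSBML("repressilator.xml")

# Surface any read or consistency errors before trusting the model.
if doc.getNumErrors() > 0:
    doc.printErrors()
else:
    model = doc.getModel()
    print(f"model '{model.getId()}': "
          f"{model.getNumSpecies()} species, "
          f"{model.getNumReactions()} reactions")
```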

  18. REAL STOCK PRICES AND THE LONG-RUN MONEY DEMAND FUNCTION IN MALAYSIA: Evidence from Error Correction Model

    Directory of Open Access Journals (Sweden)

    Naziruddin Abdullah

    2004-06-01

    Full Text Available This study adopts the error correction model to empirically investigate the role of real stock prices in the long-run money demand in the Malaysian financial or money market for the period 1977:Q1-1997:Q2. Specifically, an attempt is made to check whether real narrow money (M1/P) is cointegrated with selected variables such as the industrial production index (IPI), one-year T-Bill rates (TB12), and real stock prices (RSP). If cointegration between the dependent and independent variables is found to be the case, it implies that there exists a long-run co-movement among these variables in the Malaysian money market. From the empirical results it is found that the cointegration between money demand and real stock prices (RSP) is positive, implying that in the long run there is a positive association between real stock prices (RSP) and the demand for real narrow money (M1/P). The policy implication that can be extracted from this study is that an increase in stock prices is likely to necessitate an expansionary monetary policy to prevent nominal income or the inflation target from undershooting. The general form of such an error correction specification is sketched below.
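
    A hedged sketch of a single-equation error correction specification consistent with the abstract, where the bracketed term is the lagged deviation from the long-run (cointegrating) relation and γ < 0 measures the speed of adjustment; the θ coefficients are assumed long-run elasticities/semi-elasticities, not the paper's estimates:

```latex
\Delta \ln(M_1/P)_t = \alpha
  + \sum_{i} \beta_i \,\Delta x_{i,t}
  + \gamma \bigl[\ln(M_1/P)_{t-1} - \theta_1 \ln \mathrm{IPI}_{t-1}
                 - \theta_2 \,\mathrm{TB12}_{t-1}
                 - \theta_3 \ln \mathrm{RSP}_{t-1}\bigr]
  + \varepsilon_t , \qquad \gamma < 0
```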

  19. Twitter's tweet method modelling and simulation

    Science.gov (United States)

    Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.

    2015-02-01

    This paper seeks to propose the concept of Twitter marketing methods. The tools that Twitter provides are modelled and simulated using iThink in the context of a Twitter media-marketing agency. The paper has leveraged the system dynamics paradigm to model Twitter marketing tools and methods, using the iThink™ system to implement them. It uses the design science research methodology for the proof of concept of the models and modelling processes. The following models have been developed for a Twitter marketing agent/company and tested in real circumstances with real numbers. These models were finalized through a number of revisions and iterations of the design, develop, simulate, test and evaluate cycle. The paper also addresses the methods best suited to organized, targeted promotion on the Twitter social media service. The validity and usefulness of these Twitter marketing method models for day-to-day decision making are authenticated by the management of the company organization. It implements system dynamics concepts of Twitter marketing methods modelling and produces models of various Twitter marketing situations. The Tweet method that Twitter provides can be adjusted, depending on the situation, in order to maximize the profit of the company/agent.

  20. CANDU 6 steam generator thermalhydraulic modeling and simulation

    International Nuclear Information System (INIS)

    Zahedi, P.; Borairi, M.

    2009-01-01

    ... between the plant and controller. Various simulation scenarios were run in order to verify the functionality of the resulting model and implementation. In addition, several tests were performed in order to evaluate the effectiveness of the existing controller and identify its deficiencies. These tests involved numerous operational conditions, such as set-back and step-back, and also took into account several potential disturbances. Finally, a new control strategy utilizing cascade controllers is developed and tested on the steam generator model. The proposed control algorithm shows an improved response in alleviating the adverse effect of the non-minimum phase characteristic of the plant model. (author)

  1. Advances in NLTE modeling for integrated simulations

    Science.gov (United States)

    Scott, H. A.; Hansen, S. B.

    2010-01-01

    The last few years have seen significant progress in constructing the atomic models required for non-local thermodynamic equilibrium (NLTE) simulations. Along with this has come an increased understanding of the requirements for accurately modeling the ionization balance, energy content and radiative properties of different atomic species for a wide range of densities and temperatures. Much of this progress is the result of a series of workshops dedicated to comparing the results from different codes and computational approaches applied to a series of test problems. The results of these workshops emphasized the importance of atomic model completeness, especially in doubly-excited states and autoionization transitions, to calculating ionization balance, and the importance of accurate, detailed atomic data to producing reliable spectra. We describe a simple screened-hydrogenic model that calculates NLTE ionization balance with sufficient accuracy, at a low enough computational cost for routine use in radiation-hydrodynamics codes. The model incorporates term splitting, Δn = 0 transitions, and approximate UTA widths for spectral calculations, with results comparable to those of much more detailed codes. Simulations done with this model have been increasingly successful at matching experimental data for laser-driven systems and hohlraums. Accurate and efficient atomic models are just one requirement for integrated NLTE simulations. Coupling the atomic kinetics to hydrodynamics and radiation transport constrains both discretizations and algorithms to retain energy conservation, accuracy and stability. In particular, the strong coupling between radiation and populations can require either very short time steps or significantly modified radiation transport algorithms to account for NLTE material response. Considerations such as these continue to provide challenges for NLTE simulations.

  2. Advances in NLTE Modeling for Integrated Simulations

    International Nuclear Information System (INIS)

    Scott, H.A.; Hansen, S.B.

    2009-01-01

    The last few years have seen significant progress in constructing the atomic models required for non-local thermodynamic equilibrium (NLTE) simulations. Along with this has come an increased understanding of the requirements for accurately modeling the ionization balance, energy content and radiative properties of different elements for a wide range of densities and temperatures. Much of this progress is the result of a series of workshops dedicated to comparing the results from different codes and computational approaches applied to a series of test problems. The results of these workshops emphasized the importance of atomic model completeness, especially in doubly excited states and autoionization transitions, to calculating ionization balance, and the importance of accurate, detailed atomic data to producing reliable spectra. We describe a simple screened-hydrogenic model that calculates NLTE ionization balance with surprising accuracy, at a low enough computational cost for routine use in radiation-hydrodynamics codes. The model incorporates term splitting, Δn = 0 transitions, and approximate UTA widths for spectral calculations, with results comparable to those of much more detailed codes. Simulations done with this model have been increasingly successful at matching experimental data for laser-driven systems and hohlraums. Accurate and efficient atomic models are just one requirement for integrated NLTE simulations. Coupling the atomic kinetics to hydrodynamics and radiation transport constrains both discretizations and algorithms to retain energy conservation, accuracy and stability. In particular, the strong coupling between radiation and populations can require either very short timesteps or significantly modified radiation transport algorithms to account for NLTE material response. Considerations such as these continue to provide challenges for NLTE simulations.

  3. Speeding up N -body simulations of modified gravity: chameleon screening models

    Energy Technology Data Exchange (ETDEWEB)

    Bose, Sownak; Li, Baojiu; He, Jian-hua; Llinares, Claudio [Institute for Computational Cosmology, Department of Physics, Durham University, Durham DH1 3LE (United Kingdom); Barreira, Alexandre [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany); Hellwing, Wojciech A.; Koyama, Kazuya [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom); Zhao, Gong-Bo, E-mail: sownak.bose@durham.ac.uk, E-mail: baojiu.li@durham.ac.uk, E-mail: barreira@mpa-garching.mpg.de, E-mail: jianhua.he@durham.ac.uk, E-mail: wojciech.hellwing@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk, E-mail: claudio.llinares@durham.ac.uk, E-mail: gbzhao@nao.cas.cn [National Astronomy Observatories, Chinese Academy of Science, Beijing, 100012 (China)

    2017-02-01

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
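
    A toy sketch of the key algorithmic idea as described above (not the authors' implementation): if, after the variable redefinition, the discretised scalar-field equation in each cell can be rearranged into a depressed cubic u³ + pu + q = 0, it can be solved analytically with Cardano's formula instead of Newton-Gauss-Seidel iteration; here p and q are random stand-ins for the terms that would be assembled from neighbouring cells and the density field.

```python
import numpy as np

def solve_depressed_cubic(p, q):
    """Real root of u^3 + p*u + q = 0 via Cardano's formula, valid when the
    discriminant (q/2)^2 + (p/3)^3 >= 0 (a single real root)."""
    disc = (q / 2.0) ** 2 + (p / 3.0) ** 3
    if np.any(disc < 0):
        raise ValueError("three real roots; this branch handles one real root only")
    s = np.sqrt(disc)
    return np.cbrt(-q / 2.0 + s) + np.cbrt(-q / 2.0 - s)

# p, q per grid cell would come from neighbour values and the density field;
# here they are random stand-ins for a 4-cell example (p > 0 ensures disc > 0).
rng = np.random.default_rng(1)
p = np.abs(rng.standard_normal(4)) + 0.5
q = rng.standard_normal(4)
u = solve_depressed_cubic(p, q)
print(np.round(u ** 3 + p * u + q, 12))   # residuals ~ 0
```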

  4. ADHydro: A Parallel Implementation of a Large-scale High-Resolution Multi-Physics Distributed Water Resources Model Using the Charm++ Run Time System

    Science.gov (United States)

    Steinke, R. C.; Ogden, F. L.; Lai, W.; Moreno, H. A.; Pureza, L. G.

    2014-12-01

    Physics-based watershed models are useful tools for hydrologic studies, water resources management and economic analyses in the contexts of climate, land-use, and water-use changes. This poster presents a parallel implementation of a quasi-3-dimensional, physics-based, high-resolution, distributed water resources model suitable for simulating large watersheds in a massively parallel computing environment. Developing this model is one of the objectives of the NSF EPSCoR RII Track II CI-WATER project, which is joint between the Wyoming and Utah EPSCoR jurisdictions. The model, which we call ADHydro, is aimed at simulating important processes in the Rocky Mountain west, including: rainfall and infiltration, snowfall and snowmelt in complex terrain, vegetation and evapotranspiration, soil heat flux and freezing, overland flow, channel flow, groundwater flow, water management and irrigation. Model forcing is provided by the Weather Research and Forecasting (WRF) model, and ADHydro is coupled with the NOAH-MP land-surface scheme for calculating fluxes between the land and atmosphere. The ADHydro implementation uses the Charm++ parallel run-time system. Charm++ is based on location-transparent message passing between migratable C++ objects. Each object represents an entity in the model, such as a mesh element. These objects can be migrated between processors or serialized to disk, allowing the Charm++ system to automatically provide capabilities such as load balancing and checkpointing. Objects interact with each other by passing messages that the Charm++ system routes to the correct destination object regardless of its current location. This poster discusses the algorithms, communication patterns, and caching strategies used to implement ADHydro with Charm++. The ADHydro model code will be released to the hydrologic community in late 2014.

  5. SIMULATION MODELING OF IT PROJECTS BASED ON PETRI NETS

    Directory of Open Access Journals (Sweden)

    Александр Михайлович ВОЗНЫЙ

    2015-05-01

    Full Text Available An integrated simulation model of an IT project, based on a modified Petri net that combines the product model with the model of project tasks, has been proposed. A substantive interpretation of the components of the simulation model is presented, and the simulation process is described. Conclusions are drawn about the integration of the product model and the project task model.
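
    To make the mechanism concrete, a minimal token-game sketch (not the authors' model): a Petri net is a marking of tokens over places plus transitions with input and output arcs, and a transition may fire whenever every input place holds enough tokens.

```python
import random

# Places -> token counts; a tiny "task" net: todo -> in_progress -> done.
marking = {"todo": 3, "worker_free": 1, "in_progress": 0, "done": 0}

# Transitions as (input arcs, output arcs) with arc weights.
transitions = {
    "start":  ({"todo": 1, "worker_free": 1}, {"in_progress": 1}),
    "finish": ({"in_progress": 1}, {"done": 1, "worker_free": 1}),
}

def enabled(t):
    ins, _ = transitions[t]
    return all(marking[p] >= w for p, w in ins.items())

def fire(t):
    ins, outs = transitions[t]
    for p, w in ins.items():
        marking[p] -= w
    for p, w in outs.items():
        marking[p] += w

while True:
    choices = [t for t in transitions if enabled(t)]
    if not choices:          # dead marking: the simulation run ends
        break
    fire(random.choice(choices))
print(marking)               # all tasks end in 'done', the worker is freed
```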

  6. Multiphase reacting flows modelling and simulation

    CERN Document Server

    Marchisio, Daniele L

    2007-01-01

    The papers in this book describe the most widely applicable modeling approaches, organized in six groups covering everything from fundamentals to relevant applications. In the first part, some fundamentals of multiphase turbulent reacting flows are covered. In particular, the introduction focuses on basic notions of turbulence theory in single-phase and multi-phase systems as well as on the interaction between turbulence and chemistry. In the second part, models for the physical and chemical processes involved are discussed. Among other things, particular emphasis is given to turbulence modeling strategies for multiphase flows based on the kinetic theory for granular flows. Next, the different numerical methods based on Lagrangian and/or Eulerian schemes are presented. In particular, the most popular numerical approaches of computational fluid dynamics codes are described (i.e., Direct Numerical Simulation, Large Eddy Simulation, and Reynolds-Averaged Navier-Stokes approach). The book will cover particle-based meth...

  7. Fault diagnosis based on continuous simulation models

    Science.gov (United States)

    Feyock, Stefan

    1987-01-01

    The results of an investigation of techniques for using continuous simulation models as a basis for reasoning about physical systems are described, with emphasis on the diagnosis of system faults. It is assumed that a continuous simulation model of the properly operating system is available. Malfunctions are diagnosed by posing the question: how can we make the model behave like that? The adjustments that must be made to the model to produce the observed behavior usually provide definitive clues to the nature of the malfunction. A novel application of Dijkstra's weakest precondition predicate transformer is used to derive the preconditions for producing the required model behavior. To minimize the size of the search space, an envisionment generator based on interval mathematics was developed. In addition to its intended application, the ability to generate qualitative state spaces automatically from quantitative simulations proved to be a fruitful avenue of investigation in its own right. Implementations of the Dijkstra transform and the envisionment generator are reproduced in the Appendix.
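
    For context, Dijkstra's weakest precondition transformer handles assignment by substitution, which is what allows preconditions on model variables to be derived mechanically; for example:

```latex
wp(x := E,\; Q) = Q[x \mapsto E]
\qquad \text{e.g.} \qquad
wp(x := x + 1,\; x > 0) \;=\; (x + 1 > 0) \;\equiv\; (x > -1)
```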

  8. Numerical model simulation of atmospheric coolant plumes

    International Nuclear Information System (INIS)

    Gaillard, P.

    1980-01-01

    The effect of humid atmospheric coolants on the atmosphere is simulated by means of a three-dimensional numerical model. The atmosphere is defined by its natural vertical profiles of horizontal velocity, temperature, pressure and relative humidity. Effluent discharge is characterised by its vertical velocity and the temperature of air saturated with water vapour. The subject of investigation is the area in the vicinity of the point of discharge, with due allowance for the wake effect of the tower and buildings and, where applicable, wind veer with altitude. The model equations express the conservation relationships for momentum, energy, total mass and water mass, for an incompressible fluid behaving in accordance with the Boussinesq assumptions. Condensation is represented by a simple thermodynamic model, and turbulent fluxes are simulated by introducing turbulent viscosity and diffusivity data based on in-situ and experimental water model measurements. The three-dimensional problem expressed in terms of the primitive variables (u, v, w, p) is governed by an elliptic equation system which is solved numerically by application of an explicit time-marching algorithm in order to predict the steady-flow velocity distribution, temperature, water vapour concentration and the liquid-water concentration defining the visible plume. Windstill conditions are simulated by a program processing the elliptic equations in an axisymmetric revolution coordinate system. The calculated visible plumes are compared with plumes observed on site with a view to validating the models [fr

  9. Running Club

    CERN Multimedia

    Running Club

    2011-01-01

    The cross country running season has started well this autumn with two events: the traditional CERN Road Race organized by the Running Club, which took place on Tuesday 5th October, followed by the 'Cross Interentreprises', a team event at the Evaux Sports Center, which took place on Saturday 8th October. Participation in the CERN Road Race was slightly down on last year, with 65 runners; however, the participants maintained the tradition of a competitive yet friendly atmosphere. An ample supply of refreshments before the prize giving was appreciated by all after the race. Many thanks to all the runners and volunteers who ensured another successful race. The results can be found here: https://espace.cern.ch/Running-Club/default.aspx CERN participated successfully in the cross interentreprises with very good results. The teams succeeded in obtaining 2nd and 6th place in the Men's category, and 2nd place in the Mixed category. Congratulations to all. See results here: http://www.c...

  10. RUN COORDINATION

    CERN Multimedia

    Christophe Delaere

    2013-01-01

    The focus of Run Coordination during LS1 is to monitor closely the advance of maintenance and upgrade activities, to smooth interactions between subsystems and to ensure that all are ready in time to resume operations in 2015 with a fully calibrated and understood detector. After electricity and cooling were restored to all equipment, at about the time of the last CMS week, recommissioning activities were resumed for all subsystems. On 7 October, DCS shifts began 24/7 to allow subsystems to remain on to facilitate operations. That culminated with the Global Run in November (GriN), which took place as scheduled during the week of 4 November. The GriN was the first centrally managed operation since the beginning of LS1, and involved all subdetectors but the Pixel Tracker, which is presently in a lab upstairs. All nights were therefore dedicated to long stable runs with as many subdetectors as possible. Among the many achievements in that week, three items may be highlighted. First, the Strip...

  11. RUN COORDINATION

    CERN Multimedia

    M. Chamizo

    2012-01-01

      On 17th January, as soon as the services were restored after the technical stop, sub-systems started powering on. Since then, we have been running 24/7 with reduced shift crew — Shift Leader and DCS shifter — to allow sub-detectors to perform calibration, noise studies, test software upgrades, etc. On 15th and 16th February, we had the first Mid-Week Global Run (MWGR) with the participation of most sub-systems. The aim was to bring CMS back to operation and to ensure that we could run after the winter shutdown. All sub-systems participated in the readout and the trigger was provided by a fraction of the muon systems (CSC and the central RPC wheel). The calorimeter triggers were not available due to work on the optical link system. Initial checks of different distributions from Pixels, Strips, and CSC confirmed things look all right (signal/noise, number of tracks, phi distribution…). High-rate tests were done to test the new CSC firmware to cure the low efficiency ...

  12. Integration of MATLAB Simulink® Models with the Vertical Motion Simulator

    Science.gov (United States)

    Lewis, Emily K.; Vuong, Nghia D.

    2012-01-01

    This paper describes the integration of MATLAB Simulink® models into the Vertical Motion Simulator (VMS) at NASA Ames Research Center. The VMS is a high-fidelity, large-motion flight simulator that is capable of simulating a variety of aerospace vehicles. Integrating MATLAB Simulink models into the VMS needed to retain the development flexibility of the MATLAB environment and allow rapid deployment of model changes. The process developed at the VMS was used successfully in a number of recent simulation experiments. This accomplishment demonstrated that the model integrity was preserved, while working within the hard real-time run environment of the VMS architecture, and maintaining the unique flexibility of the VMS to meet diverse research requirements.

  13. Software development infrastructure for the HYBRID modeling and simulation project

    Energy Technology Data Exchange (ETDEWEB)

    Epiney, Aaron S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kinoshita, Robert A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kim, Jong Suk [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Greenwood, M. Scott [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    One of the goals of the HYBRID modeling and simulation project is to assess the economic viability of hybrid systems in a market that contains renewable energy sources like wind. The idea is that it is possible for the nuclear plant to sell non-electric energy cushions, which absorb (at least partially) the volatility introduced by the renewable energy sources. This system is currently modeled in the Modelica programming language. To assess the economics of the system, an optimization procedure tries to find the minimal cost of electricity production. The RAVEN code is used as a driver for the whole problem. It is assumed that at this stage, the HYBRID modeling and simulation framework can be classified as non-safety "research and development" software. The associated quality level is Quality Level 3 software. This imposes low requirements on quality control, testing and documentation. The quality level could change as the application development continues. Despite the low quality requirement level, a workflow for the HYBRID developers has been defined that includes a coding standard and some documentation and testing requirements. The repository performs automated unit testing of contributed models. The automated testing is achieved via an open-source Python package called BuildingsPy from Lawrence Berkeley National Lab. BuildingsPy runs Modelica simulation tests using Dymola in an automated manner, and generates and runs unit tests from Modelica scripts written by developers. In order to ensure effective communication between the different national laboratories, a biweekly videoconference has been set up where developers can report their progress and issues. In addition, periodic face-to-face meetings are organized to discuss high-level strategy decisions with management. A second means of communication is the developer email list, to which everybody can send emails that will be received by the collective of the developers and managers
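
    A hedged sketch of how such a BuildingsPy-driven regression run is typically invoked (exact method names and options depend on the BuildingsPy version, and the library root path here is hypothetical):

```python
from buildingspy.development import regressiontest

# Run the Modelica regression suite for a library checked out locally;
# BuildingsPy drives Dymola and compares results against stored reference data.
ut = regressiontest.Tester()
ut.setLibraryRoot(".")        # hypothetical path to the Modelica library
retval = ut.run()             # typically non-zero if unit tests failed
print("regression tests", "passed" if retval == 0 else "failed")
```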

  14. A Simulation Model for Extensor Tendon Repair

    Directory of Open Access Journals (Sweden)

    Elizabeth Aronstam

    2017-07-01

    Full Text Available Audience: This simulation model is designed for use by emergency medicine residents. Although we have instituted this at the PGY-2 level of our residency curriculum, it is appropriate for any level of emergency medicine residency training. It might also be adapted for use by a variety of other learners, such as practicing emergency physicians, orthopedic surgery residents, or hand surgery trainees. Introduction: Tendon injuries commonly present to the emergency department, so it is essential that emergency physicians be competent in evaluating such injuries. Indeed, extensor tendon repair is included as an ACGME Emergency Medicine Milestone (Milestone 13, Wound Management, Level 5 – "Performs advanced wound repairs, such as tendon repairs…").1 However, emergency medicine residents may have limited opportunity to develop these skills due to a lack of patients, competition from other trainees, or preexisting referral patterns. Simulation may provide an alternative means to effectively teach these skills in such settings. Previously described tendon repair simulation models that were designed for surgical trainees have used rubber worms4, licorice5, feeding tubes, catheters6,7, drinking straws8, microfoam tape9, sheep forelimbs10 and cadavers.11 These models all suffer a variety of limitations, including high cost, lack of ready availability, or lack of realism. Objectives: We sought to develop an extensor tendon repair simulation model for emergency medicine residents, designed to meet ACGME Emergency Medicine Milestone 13, Level 5. We wished this model to be simple, inexpensive, and realistic. Methods: The learner responsible content/educational handout component of our innovation teaches residents about emergency department extensor tendon repair, and includes: (1) relevant anatomy, (2) indications and contraindications for emergency department extensor tendon repair, (3) physical exam findings, (4) tendon suture techniques, and (5) aftercare. During

  15. Modelling and simulation of thermal power plants

    Energy Technology Data Exchange (ETDEWEB)

    Eborn, J.

    1998-02-01

    Mathematical modelling and simulation are important tools when dealing with engineering systems that today are becoming increasingly more complex. Integrated production and recycling of materials are trends that give rise to heterogeneous systems, which are difficult to handle within one area of expertise. Model libraries are an excellent way to package engineering knowledge of systems and units to be reused by those who are not experts in modelling. Many commercial packages provide good model libraries, but they are usually domain-specific and closed. Heterogeneous, multi-domain systems require open model libraries written in general purpose modelling languages. This thesis describes a model database for thermal power plants written in the object-oriented modelling language OMOLA. The models are based on first principles. Subunits describe volumes with pressure and enthalpy dynamics and flows of heat or different media. The subunits are used to build basic units such as pumps, valves and heat exchangers, which can be used to build system models. Several applications are described: a heat recovery steam generator, equipment for juice blending, steam generation in a sulphuric acid plant and a condensing steam plate heat exchanger. Model libraries for industrial use must be validated against measured data. The thesis describes how parameter estimation methods can be used for model validation. Results from a case study on parameter optimization of a non-linear drum boiler model show how the technique can be used. (32 refs, 21 figs)
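
    The parameter-estimation step used for model validation can be sketched in a few lines of Python; the first-order step-response model and the synthetic data below are illustrative stand-ins, not the thesis's drum boiler model:

        # Illustrative parameter estimation for model validation: fit the
        # parameters of a simple model y(t) = a*(1 - exp(-t/tau)) to
        # (synthetic) measured data. Model and data are placeholders.
        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(0)
        t_meas = np.linspace(0.0, 10.0, 30)
        y_meas = 2.0 * (1.0 - np.exp(-t_meas / 3.0)) + 0.05 * rng.normal(size=30)

        def residuals(p):
            a, tau = p
            return a * (1.0 - np.exp(-t_meas / tau)) - y_meas

        fit = least_squares(residuals, x0=[1.0, 1.0])
        print("estimated a, tau:", fit.x)   # should be close to (2.0, 3.0)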

  16. Modeling and simulation of economic processes

    Directory of Open Access Journals (Sweden)

    Bogdan Brumar

    2010-12-01

    Full Text Available In general, any activity requires a longer action that is often characterized by a degree of uncertainty and insecurity in terms of the size of the objective pursued. Because of the complexity of real economic systems and the stochastic dependencies between the different variables and parameters considered, not all systems can be adequately represented by a model that can be solved by analytical methods while covering all the issues relevant to management decision analysis over a realistic economic horizon. Often in such cases, the simulation technique is considered the only alternative available. Using simulation techniques to study real-world systems often requires laborious work. Carrying out a simulation experiment is a process that takes place in several stages.

  17. Simulation as a surgical teaching model.

    Science.gov (United States)

    Ruiz-Gómez, José Luis; Martín-Parra, José Ignacio; González-Noriega, Mónica; Redondo-Figuero, Carlos Godofredo; Manuel-Palazuelos, José Carlos

    2018-01-01

    Teaching of surgery has been affected by many factors in recent years, such as the reduction of working hours, the optimization of the use of the operating room, and patient safety. Traditional teaching methodology fails to reduce the impact of these factors on surgeons' training. Simulation as a teaching model minimizes such impact, and is more effective than traditional teaching methods for integrating knowledge and clinical-surgical skills. Simulation complements clinical assistance with training, creating a safe learning environment where patient safety is not affected, and ethical or legal conflicts are avoided. Simulation uses learning methodologies that allow teaching individualization, adapting it to the learning needs of each student. It also allows training of all kinds of technical, cognitive, or behavioural skills. Copyright © 2017 AEC. Published by Elsevier España, S.L.U. All rights reserved.

  18. Mathematical models and numerical simulation in electromagnetism

    CERN Document Server

    Bermúdez, Alfredo; Salgado, Pilar

    2014-01-01

    The book represents a basic support for a master course in electromagnetism oriented to numerical simulation. The main goal of the book is for the reader to know the boundary-value problems of partial differential equations that should be solved in order to perform computer simulation of electromagnetic processes. Moreover, it includes a part devoted to electric circuit theory based on ordinary differential equations. The book is mainly oriented to electric engineering applications, going from the general to the specific, namely, from the full Maxwell’s equations to the particular cases of electrostatics, direct current, magnetostatics and eddy currents models. Apart from standard exercises related to analytical calculus, the book includes some others oriented to real-life applications solved with the MaxFEM free simulation software.

  19. Parallel runs of a large air pollution model on a grid of Sun computers

    DEFF Research Database (Denmark)

    Alexandrov, V.N.; Owczarz, W.; Thomsen, Per Grove

    2004-01-01

    Large-scale air pollution models can successfully be used in different environmental studies. These models are described mathematically by systems of partial differential equations. Splitting procedures followed by discretization of the spatial derivatives lead to several large systems of ordin...

  20. Analysis of the Automobile Market : Modeling the Long-Run Determinants of the Demand for Automobiles : Volume 1. The Wharton EFA Automobile Demand Model

    Science.gov (United States)

    1979-12-01

    An econometric model is developed which provides long-run policy analysis and forecasting of annual trends, for U.S. auto stock, new sales, and their composition by auto size-class. The concept of "desired" (equilibrium) stock is introduced. "Desired...

  1. Analysis of the Automobile Market : Modeling the Long-Run Determinants of the Demand for Automobiles : Volume 3. Appendices to the Wharton EFA Automobile Demand Model

    Science.gov (United States)

    1979-12-01

    An econometric model is developed which provides long-run policy analysis and forecasting of annual trends, for U.S. auto stock, new sales, and their composition by auto size-class. The concept of "desired" (equilibrium) stock is introduced. "Desired...

  2. Facebook's personal page modelling and simulation

    Science.gov (United States)

    Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.

    2015-02-01

    In this paper we try to assess the utility of Facebook's Personal Page marketing method. This tool that Facebook provides is modelled and simulated using iThink in the context of a Facebook marketing agency. The paper leverages the systems dynamics paradigm to model Facebook marketing tools and methods, using the iThink™ system to implement them. It uses the design science research methodology for the proof of concept of the models and modelling processes. The model has been developed for a social media marketing agent/company, oriented to the Facebook platform and tested in real circumstances. The model was finalized through a number of revisions and iterations of the design, development, simulation, testing and evaluation processes. The validity and usefulness of this Facebook marketing model for day-to-day decision making are authenticated by the management of the company organization. Facebook's Personal Page method can be adjusted, depending on the situation, in order to maximize the total profit of the company, which is to bring in new customers, keep the interest of the old customers, and deliver traffic to its website.

  3. Modeling and simulation of photovoltaic solar panel

    International Nuclear Information System (INIS)

    Belarbi, M.; Haddouche, K.; Midoun, A.

    2006-01-01

    In this article, we present a new approach for estimating the model parameters of a photovoltaic solar panel according to the irradiance and temperature. The parameters of the one-diode model are obtained from the knowledge of three operating points: short circuit, open circuit, and maximum power. In the first step, the adopted approach solves the system of equations formed by the three operating points to express all the model parameters in terms of the series resistance. In the second step, we solve iteratively at the optimal operating point, using the Newton-Raphson method, to calculate the series resistance value as well as the model parameters. Once the panel model is identified, we consider other equations to take the irradiance and temperature effects into account. The simulation results show the convergence speed of the model parameters and the possibility of visualizing the electrical behaviour of the panel according to the irradiance and temperature. Note that a sensitivity of the algorithm at the optimal operating point was observed: a small variation of the optimal voltage value leads to a very large variation of the identified parameter values. With the identified model, we can develop maximum power point tracking algorithms and simulate a solar water pumping system. (Author)
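
    The kind of Newton-Raphson iteration the article describes (solving the implicit one-diode equation at an operating point) can be sketched as follows; the parameter values are illustrative, not those identified in the article:

        # Newton-Raphson solution of the implicit one-diode equation
        # I = I_ph - I_0*(exp((V + I*R_s)/n_Vt) - 1) for the current at a
        # given voltage. All parameter values are illustrative assumptions.
        import math

        I_ph = 5.0      # photo-generated current [A]
        I_0 = 1e-9      # diode saturation current [A]
        n_Vt = 0.7      # ideality factor times thermal voltage (whole panel) [V]
        R_s = 0.02      # series resistance [ohm]

        def cell_current(V, I_guess=0.0, tol=1e-10, max_iter=50):
            I = I_guess
            for _ in range(max_iter):
                e = math.exp((V + I * R_s) / n_Vt)
                f = I_ph - I_0 * (e - 1.0) - I          # residual, zero at solution
                df = -I_0 * e * (R_s / n_Vt) - 1.0      # derivative df/dI
                step = f / df
                I -= step
                if abs(step) < tol:
                    break
            return I

        print(cell_current(0.5))   # current [A] at V = 0.5 V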

  4. A simulation model for material accounting systems

    International Nuclear Information System (INIS)

    Coulter, C.A.; Thomas, K.E.

    1987-01-01

    A general-purpose model that was developed to simulate the operation of a chemical processing facility for nuclear materials has been extended to describe material measurement and accounting procedures as well. The model now provides descriptors for material balance areas, a large class of measurement instrument types and their associated measurement errors for various classes of materials, the measurement instruments themselves with their individual calibration schedules, and material balance closures. Delayed receipt of measurement results (as for off-line analytical chemistry assay), with interim use of a provisional measurement value, can be accurately represented. The simulation model can be used to estimate inventory difference variances for processing areas that do not operate at steady state, to evaluate the timeliness of measurement information, to determine process impacts of measurement requirements, and to evaluate the effectiveness of diversion-detection algorithms. Such information is usually difficult to obtain by other means. Use of the measurement simulation model is illustrated by applying it to estimate inventory difference variances for two material balance area structures of a fictitious nuclear material processing line

  5. Theory, modeling and simulation: Annual report 1993

    International Nuclear Information System (INIS)

    Dunning, T.H. Jr.; Garrett, B.C.

    1994-07-01

    Developing the knowledge base needed to address the environmental restoration issues of the US Department of Energy requires a fundamental understanding of molecules and their interactions in isolation and in liquids, on surfaces, and at interfaces. To meet these needs, the PNL has established the Environmental and Molecular Sciences Laboratory (EMSL) and will soon begin construction of a new, collaborative research facility devoted to advancing the understanding of environmental molecular science. Research in the Theory, Modeling, and Simulation program (TMS), which is one of seven research directorates in the EMSL, will play a critical role in understanding molecular processes important in restoring DOE's research, development and production sites, including understanding the migration and reactions of contaminants in soils and groundwater, the development of separation processes for isolation of pollutants, the development of improved materials for waste storage, understanding the enzymatic reactions involved in the biodegradation of contaminants, and understanding the interaction of hazardous chemicals with living organisms. The research objectives of the TMS program are to apply available techniques to study fundamental molecular processes involved in natural and contaminated systems; to extend current techniques to treat molecular systems of future importance and to develop techniques for addressing problems that are computationally intractable at present; to apply molecular modeling techniques to simulate molecular processes occurring in the multispecies, multiphase systems characteristic of natural and polluted environments; and to extend current molecular modeling techniques to treat complex molecular systems and to improve the reliability and accuracy of such simulations. The program contains three research activities: Molecular Theory/Modeling, Solid State Theory, and Biomolecular Modeling/Simulation. Extended abstracts are presented for 89 studies.

  6. Theory, modeling and simulation: Annual report 1993

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, T.H. Jr.; Garrett, B.C.

    1994-07-01

    Developing the knowledge base needed to address the environmental restoration issues of the US Department of Energy requires a fundamental understanding of molecules and their interactions in isolation and in liquids, on surfaces, and at interfaces. To meet these needs, the PNL has established the Environmental and Molecular Sciences Laboratory (EMSL) and will soon begin construction of a new, collaborative research facility devoted to advancing the understanding of environmental molecular science. Research in the Theory, Modeling, and Simulation program (TMS), which is one of seven research directorates in the EMSL, will play a critical role in understanding molecular processes important in restoring DOE's research, development and production sites, including understanding the migration and reactions of contaminants in soils and groundwater, the development of separation processes for isolation of pollutants, the development of improved materials for waste storage, understanding the enzymatic reactions involved in the biodegradation of contaminants, and understanding the interaction of hazardous chemicals with living organisms. The research objectives of the TMS program are to apply available techniques to study fundamental molecular processes involved in natural and contaminated systems; to extend current techniques to treat molecular systems of future importance and to develop techniques for addressing problems that are computationally intractable at present; to apply molecular modeling techniques to simulate molecular processes occurring in the multispecies, multiphase systems characteristic of natural and polluted environments; and to extend current molecular modeling techniques to treat complex molecular systems and to improve the reliability and accuracy of such simulations. The program contains three research activities: Molecular Theory/Modeling, Solid State Theory, and Biomolecular Modeling/Simulation. Extended abstracts are presented for 89 studies.

  7. Surrogate model approach for improving the performance of reactive transport simulations

    Science.gov (United States)

    Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris

    2016-04-01

    Reactive transport models can serve a large number of important geoscientific applications involving underground resources in industry and scientific research. It is common for a reactive transport simulation to consist of at least two coupled simulation models. The first is a hydrodynamics simulator responsible for simulating the flow of groundwater and the transport of solutes. Hydrodynamics simulators are well-established technology and can be very efficient; when hydrodynamics simulations are performed without coupled geochemistry, their spatial geometries can span millions of elements even when running on desktop workstations. The second is a geochemical simulation model coupled to the hydrodynamics simulator. Geochemical simulation models are much more computationally costly, which makes reactive transport simulations spanning millions of spatial elements very difficult to achieve. To address this problem we propose to replace the coupled geochemical simulation model with a surrogate model. A surrogate is a statistical model created to include only the necessary subset of simulator complexity for a particular scenario. To demonstrate the viability of such an approach we tested it on a popular reactive transport benchmark problem involving 1D calcite transport. This is a published benchmark problem (Kolditz, 2012) for simulation models, and for this reason we use it to test the surrogate model approach. To do this we tried a number of statistical models available through the caret and DiceEval packages for R as surrogate models. These were trained on a randomly sampled subset of the input-output data from the geochemical simulation model used in the original reactive transport simulation. For validation we use the surrogate model to predict the simulator output using the part of the sampled input data that was not used for training the statistical model. For this scenario we find that the multivariate adaptive regression splines
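
    The authors work in R with the caret and DiceEval packages; purely as a sketch of the same train-on-a-sample, validate-on-the-rest workflow, with a stand-in function in place of the geochemical simulator:

        # Surrogate workflow sketch: sample simulator inputs/outputs, train a
        # statistical model on one subset, validate on the held-out rest.
        # The "simulator" here is a stand-in function, not a geochemical code.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.uniform(0.0, 1.0, size=(1000, 3))       # sampled simulator inputs
        y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]         # stand-in simulator output

        X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
        surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
        surrogate.fit(X_tr, y_tr)                       # train on the sampled subset
        print("validation R^2:", surrogate.score(X_val, y_val))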

  8. A Model Management Approach for Co-Simulation Model Evaluation

    NARCIS (Netherlands)

    Zhang, X.C.; Broenink, Johannes F.; Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2011-01-01

    Simulating formal models is a common means for validating the correctness of the system design and reducing the time-to-market. In most embedded control system design, multiple engineering disciplines and various domain-specific models are often involved, such as mechanical, control, software

  9. eShopper modeling and simulation

    Science.gov (United States)

    Petrushin, Valery A.

    2001-03-01

    The advent of e-commerce gives an opportunity to shift the paradigm of customer communication into a highly interactive mode. The new generation of commercial Web servers, such as the Blue Martini server, combines the collection of data on customer behavior with real-time processing and dynamic tailoring of a feedback page. New opportunities for direct product marketing and cross selling are arriving. The key problem is what kind of information we need to achieve these goals, or in other words, how do we model the customer? The paper is devoted to customer modeling and simulation. The focus is on modeling an individual customer. The model is based on the customer's transaction data, click stream data, and demographics. The model includes the hierarchical profile of a customer's preferences for different types of products and brands; consumption models for the different types of products; the current focus, trends, and stochastic models for time intervals between purchases; product affinity models; and some generalized features, such as purchasing power, sensitivity to advertising, price sensitivity, etc. This type of model is used for predicting the date of the next visit, overall spending, and spending for different types of products and brands. For some types of stores (for example, a supermarket) and stable customers, it is possible to forecast the shopping lists rather accurately. The forecasting techniques are discussed. The forecasting results can be used for on-line direct marketing, customer retention, and inventory management. The customer model can also be used as a generative model for simulating the customer's purchasing behavior in different situations and for estimating customer's features.
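
    A toy version of one ingredient of such a customer model is fitting a stochastic model for inter-purchase times and predicting the next visit; the exponential assumption and the purchase history below are invented for illustration:

        # Toy inter-purchase model: assume exponential gaps between purchases
        # (an illustrative assumption; real systems use richer distributions)
        # and predict the expected date of the next visit.
        import numpy as np

        purchase_days = np.array([0, 6, 15, 19, 31, 38, 47])   # invented history
        gaps = np.diff(purchase_days)
        rate = 1.0 / gaps.mean()            # maximum-likelihood rate for exponential gaps
        expected_next = purchase_days[-1] + 1.0 / rate
        print(f"expected next visit around day {expected_next:.1f}")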

  10. Simulation modelling in agriculture: General considerations. | R.I. ...

    African Journals Online (AJOL)

    The computer does all the necessary arithmetic when the hypothesis is invoked to predict the future behaviour of the simulated system under given conditions. A general ... in the advisory service. Keywords: agriculture; botany; computer simulation; modelling; simulation model; simulation modelling; south africa; techniques ...

  11. Running multilevel models in MLwiN from within Stata: runmlwin

    OpenAIRE

    George Leckie; Chris Charlton

    2011-01-01

    Multilevel analysis is the statistical modeling of hierarchical and nonhierarchical clustered data. These data structures are common in social and medical sciences. Stata provides the xtmixed, xtmelogit, and xtmepoisson commands for fitting multilevel models, but these are only relevant for univariate continuous, binary, and count response variables, respectively. A much wider range of multilevel models can be fit using the user-written gllamm command, but gllamm can be computationally slow f...
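
    runmlwin itself is a Stata command driving MLwiN; purely to illustrate the simplest model in this class, a two-level random-intercept model on synthetic data can be fit in Python with statsmodels:

        # Two-level random-intercept model (an illustration of the model
        # class, not of runmlwin); the data are synthetic.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        groups = np.repeat(np.arange(30), 20)          # 30 clusters of 20 units
        u = rng.normal(0, 0.5, 30)[groups]             # cluster-level intercepts
        x = rng.normal(size=groups.size)
        y = 1.0 + 0.8 * x + u + rng.normal(0, 1.0, groups.size)
        df = pd.DataFrame({"y": y, "x": x, "g": groups})

        fit = smf.mixedlm("y ~ x", df, groups=df["g"]).fit()
        print(fit.summary())                           # fixed effects + variance components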

  12. The short-run dynamics of optimal growth models with delays

    OpenAIRE

    Collard, Fabrice; Licandro, Omar; Puch, Luis A.

    2003-01-01

    Differential equations with advanced and delayed time arguments may arise in the optimality conditions of simple growth models with delays. Models with investment gestation lags (time-to-build), consumption gestation lags (habit formation) or learning by using lie in this category. In this paper, we propose a shooting method to deal with leads and lags in the Euler system associated to dynamic general equilibrium models in continuous time. We introduce the discussion describing the dynamic...

  13. Aqueous Electrolytes: Model Parameters and Process Simulation

    DEFF Research Database (Denmark)

    Thomsen, Kaj

    This thesis deals with aqueous electrolyte mixtures. The Extended UNIQUAC model is being used to describe the excess Gibbs energy of such solutions. Extended UNIQUAC parameters for the twelve ions Na+, K+, NH4+, H+, Cl-, NO3-, SO42-, HSO4-, OH-, CO32-, HCO3-, and S2O82- are estimated. A computer program including a steady state process simulator for the design, simulation, and optimization of fractional crystallization processes is presented.

  14. A Placement Model for Flight Simulators.

    Science.gov (United States)

    1982-09-01

    ... simulator basing strategies. Captains David R. VanDenburg and Jon D. Veith developed a mathematical model to assist in the placement analysis of A-7 ...

  15. Fuzzy rule-based macroinvertebrate habitat suitability models for running waters

    NARCIS (Netherlands)

    Broekhoven, Van E.; Adriaenssens, V.; Baets, De B.; Verdonschot, P.F.M.

    2006-01-01

    A fuzzy rule-based approach was applied to a macroinvertebrate habitat suitability modelling problem. The model design was based on a knowledge base summarising the preferences and tolerances of 86 macroinvertebrate species for four variables describing river sites in springs up to small rivers in
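
    The article's knowledge base is not reproduced here, but the mechanics of a fuzzy rule-based habitat model (triangular membership functions, min for rule conjunction, max for aggregation) can be sketched generically; the variables and breakpoints are invented:

        # Generic fuzzy-rule mechanics, not the article's knowledge base:
        # triangular memberships, min for rule AND, max to aggregate rules
        # that share an output class. All variables and numbers are invented.
        def tri(x, a, b, c):
            """Triangular membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def suitability(flow_velocity, depth):
            # Rule 1: IF velocity is moderate AND depth is shallow THEN suitable
            r1 = min(tri(flow_velocity, 0.2, 0.5, 0.8), tri(depth, 0.0, 0.3, 0.6))
            # Rule 2: IF velocity is high AND depth is shallow THEN suitable
            r2 = min(tri(flow_velocity, 0.6, 1.0, 1.4), tri(depth, 0.0, 0.3, 0.6))
            return max(r1, r2)   # degree to which the habitat is "suitable"

        print(suitability(0.55, 0.25))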

  16. The Monetary Exchange Rate Model as a Long-Run Phenomenon

    NARCIS (Netherlands)

    J.J.J. Groen (Jan)

    1998-01-01

    Pure time series-based tests fail to find empirical support for monetary exchange rate models. In this paper we apply pooled time series estimation on a forward-looking monetary model, resulting in parameter estimates which are in compliance with the underlying theory. Based on a panel

  17. Fate of pesticides in field ditches: the TOXSWA simulation model

    NARCIS (Netherlands)

    Adriaanse, P.I.

    1996-01-01

    The TOXSWA model describes the fate of pesticides entering field ditches by spray drift, atmospheric deposition, surface run-off, drainage or leaching. It considers four processes: transport, transformation, sorption and volatilization. Analytical and numerical solutions corresponded well. A sample

  18. Modelling interplanetary CMEs using magnetohydrodynamic simulations

    Directory of Open Access Journals (Sweden)

    P. J. Cargill

    Full Text Available The dynamics of Interplanetary Coronal Mass Ejections (ICMEs are discussed from the viewpoint of numerical modelling. Hydrodynamic models are shown to give a good zero-order picture of the plasma properties of ICMEs, but they cannot model the important magnetic field effects. Results from MHD simulations are shown for a number of cases of interest. It is demonstrated that the strong interaction of the ICME with the solar wind leads to the ICME and solar wind velocities being close to each other at 1 AU, despite their having very different speeds near the Sun. It is also pointed out that this interaction leads to a distortion of the ICME geometry, making cylindrical symmetry a dubious assumption for the CME field at 1 AU. In the presence of a significant solar wind magnetic field, the magnetic fields of the ICME and solar wind can reconnect with each other, leading to an ICME that has solar wind-like field lines. This effect is especially important when an ICME with the right sense of rotation propagates down the heliospheric current sheet. It is also noted that a lack of knowledge of the coronal magnetic field makes such simulations of little use in space weather forecasts that require knowledge of the ICME magnetic field strength.

    Key words: interplanetary physics (interplanetary magnetic fields); solar physics, astrophysics, and astronomy (flares and mass ejections); space plasma physics (numerical simulation studies)

  19. Evaluation of air gap membrane distillation process running under sub-atmospheric conditions: Experimental and simulation studies

    KAUST Repository

    Alsaadi, Ahmad S.

    2015-04-16

    The importance of removing non-condensable gases from air gap membrane distillation (AGMD) modules in improving the water vapor flux is presented in this paper. Additionally, a previously developed AGMD mathematical model is used to predict the degree of flux enhancement under sub-atmospheric pressure conditions. Since the mathematical model prediction is expected to be very sensitive to the membrane distillation (MD) membrane resistance when the mass diffusion resistance is eliminated, the permeability of the membrane was carefully measured with two different methods (a gas permeance test and a vacuum MD permeability test). The mathematical model prediction was found to agree closely with the experimental data, which showed that the removal of non-condensable gases increased the flux by more than three-fold when the gap pressure was maintained at the saturation pressure of the feed temperature. The importance of staging the sub-atmospheric AGMD process, and how this could give better control over the gap pressure as the feed temperature decreases, are also highlighted in this paper. The effect of staging on the sub-atmospheric AGMD flux and its relation to membrane capital cost are briefly discussed.
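
    A heavily reduced series-resistance picture shows why removing non-condensable gases raises the flux: stagnant air in the gap adds a mass-transfer resistance in series with the membrane. The resistances below are invented (chosen so the enhancement comes out three-fold, for illustration), and the Antoine constants are the standard ones for water; none of these are the paper's measured values:

        # Series-resistance sketch of AGMD flux enhancement when the air gap
        # is evacuated. Resistance values are invented for illustration.
        def p_sat_water(T_c):
            """Saturation pressure of water [Pa], Antoine equation (T in deg C)."""
            return 133.322 * 10 ** (8.07131 - 1730.63 / (233.426 + T_c))

        dp = p_sat_water(70.0) - p_sat_water(25.0)   # feed-to-condensate driving force [Pa]
        R_membrane = 1.0e7                           # membrane resistance [Pa m2 s/kg], assumed
        R_airgap = 2.0e7                             # stagnant-air resistance, assumed

        flux_with_air = dp / (R_membrane + R_airgap)
        flux_degassed = dp / R_membrane              # non-condensables removed
        print(f"flux with air: {flux_with_air*3600:.2f} kg/(m2 h)")
        print(f"enhancement factor: {flux_degassed / flux_with_air:.1f}x")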

  20. MODELING AND SIMULATION OF A HYDROCRACKING UNIT

    Directory of Open Access Journals (Sweden)

    HASSAN A. FARAG

    2016-06-01

    Full Text Available Hydrocracking is used in the petroleum industry to convert low quality feedstocks into high valued transportation fuels such as gasoline, diesel, and jet fuel. The aim of the present work is to develop a rigorous steady state two-dimensional mathematical model, including conservation equations of mass and energy, for simulating the operation of a hydrocracking unit. Both the catalyst bed and the quench zone have been included in this integrated model. The model equations were numerically solved in both axial and radial directions using Matlab software. The model was tested against real plant data from Egypt. The results indicated very good agreement between the model predictions and industrial values for temperature profiles, concentration profiles, and conversion in both the radial and axial directions of the hydrocracking unit. Simulation of the conversion and temperature profiles in the quench zone was also included and showed low deviation from the actual ones. In the concentration profiles, the percentage deviation was found to be 9.28% in the first reactor and 9.6% in the second reactor. The effect of several parameters, such as the pellet heat transfer coefficient, effective radial thermal conductivity, wall heat transfer coefficient, effective radial diffusivity, and cooling medium (quench zone), has been included in this study. Variation of the wall heat transfer coefficient and of the effective radial diffusivity for the near-wall region gave no remarkable changes in the temperature profiles. On the other hand, even small variations of the effective radial thermal conductivity affected the simulated temperature profiles significantly, and this effect could not be compensated by variations of the other model parameters.

  1. Reactive transport models and simulation with ALLIANCES

    International Nuclear Information System (INIS)

    Leterrier, N.; Deville, E.; Bary, B.; Trotignon, L.; Hedde, T.; Cochepin, B.; Stora, E.

    2009-01-01

    Many chemical processes influence the evolution of nuclear waste storage. As a result, simulations based only upon transport and hydraulic processes fail to describe adequately some industrial scenarios. We need to take into account complex chemical models (mass action laws, kinetics...) which are highly non-linear. In order to simulate the coupling of these chemical reactions with transport, we use a classical Sequential Iterative Approach (SIA), with a fixed point algorithm, within the mainframe of the ALLIANCES platform. This approach allows us to use the various transport and chemical modules available in ALLIANCES, via an operator-splitting method based upon the structure of the chemical system. We present five different applications of reactive transport simulations in the context of nuclear waste storage: 1. A 2D simulation of the leaching by rain water of an underground polluted zone high in uranium oxide; 2. The degradation of the steel envelope of a package in contact with clay. Corrosion of the steel creates corrosion products and the altered package becomes a porous medium. We follow the degradation front through kinetic reactions and the coupling with transport; 3. The degradation of a cement-based material by the injection of an aqueous solution of zinc and sulphate ions. In addition to the reactive transport coupling, we take into account in this case the hydraulic retroaction of the porosity variation on the Darcy velocity; 4. The decalcification of a concrete beam in an underground storage structure. In this case, in addition to the reactive transport simulation, we take into account the interaction between chemical degradation and the mechanical forces (cracks...), and the retroactive influence of the structural changes on transport; 5. The degradation of the steel envelope of a package in contact with a clay material under a temperature gradient. In this case the reactive transport simulation is entirely directed by the temperature changes and
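
    The sequential iterative approach can be sketched in miniature: within each time step, a transport operator and a chemistry operator are alternated until the concentration field reaches a fixed point. The upwind advection and first-order decay below are simple stand-ins for the ALLIANCES transport and chemistry modules:

        # Miniature SIA sketch: alternate transport and chemistry within a
        # time step until a fixed point is reached. The upwind advection and
        # first-order decay stand in for the ALLIANCES modules.
        import numpy as np

        nx, dx, dt, v, k = 100, 1.0, 0.5, 1.0, 0.05   # grid, velocity, rate (illustrative)
        c = np.zeros(nx)
        c[0] = 1.0                                     # inlet concentration

        def transport(c):
            cn = c.copy()
            cn[1:] -= v * dt / dx * (c[1:] - c[:-1])   # explicit upwind advection
            return cn

        for _ in range(100):                           # time loop
            c_iter = c.copy()
            for _ in range(50):                        # fixed-point (SIA) loop
                c_new = transport(c) - dt * k * c_iter # chemistry sink at the iterate
                c_new[0] = 1.0                         # hold the inlet fixed
                if np.max(np.abs(c_new - c_iter)) < 1e-12:
                    break
                c_iter = c_new
            c = c_iter
        print(c[:5].round(3))                          # concentration near the inlet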

  2. Simulation models generator. Applications in scheduling

    Directory of Open Access Journals (Sweden)

    Omar Danilo Castrillón

    2013-08-01

    To test the prototype, a production system with 9 machines and 5 jobs in a job-shop configuration was used as the modeling example, with stochastic processing times and machine stoppages, measuring machine utilization rates and the average time of jobs in the system as measures of system performance. This test shows the usefulness of the prototype, which saves the user the building of the simulation model.

  3. Computer Models Simulate Fine Particle Dispersion

    Science.gov (United States)

    2010-01-01

    Through a NASA Seed Fund partnership with DEM Solutions Inc., of Lebanon, New Hampshire, scientists at Kennedy Space Center refined existing software to study the electrostatic phenomena of granular and bulk materials as they apply to planetary surfaces. The software, EDEM, allows users to import particles and obtain accurate representations of their shapes for modeling purposes, such as simulating bulk solids behavior, and was enhanced to be able to more accurately model fine, abrasive, cohesive particles. These new EDEM capabilities can be applied in many industries unrelated to space exploration and have been adopted by several prominent U.S. companies, including John Deere, Pfizer, and Procter & Gamble.

  4. Modeling and simulation of reactive flows

    CERN Document Server

    Bortoli, De AL; Pereira, Felipe

    2015-01-01

    Modelling and Simulation of Reactive Flows presents information on modeling and how to numerically solve reactive flows. The book offers a distinctive approach that combines diffusion flames and geochemical flow problems, providing users with a comprehensive resource that bridges the gap for scientists, engineers, and the industry. Specifically, the book looks at the basic concepts related to reaction rates, chemical kinetics, and the development of reduced kinetic mechanisms. It considers the most common methods used in practical situations, along with equations for reactive flows, and va

  5. RUN COORDINATION

    CERN Multimedia

    G. Rakness.

    2013-01-01

    After three years of running, in February 2013 the era of sub-10-TeV LHC collisions drew to an end. Recall, the 2012 run had been extended by about three months to achieve the full complement of high-energy and heavy-ion physics goals prior to the start of Long Shutdown 1 (LS1), which is now underway. The LHC performance during these exciting years was excellent, delivering a total of 23.3 fb–1 of proton-proton collisions at a centre-of-mass energy of 8 TeV, 6.2 fb–1 at 7 TeV, and 5.5 pb–1 at 2.76 TeV. They also delivered 170 μb–1 lead-lead collisions at 2.76 TeV/nucleon and 32 nb–1 proton-lead collisions at 5 TeV/nucleon. During these years the CMS operations teams and shift crews made tremendous strides to commission the detector, repeatedly stepping up to meet the challenges at every increase of instantaneous luminosity and energy. Although it does not fully cover the achievements of the teams, a way to quantify their success is the fact that...

  6. TMS modeling toolbox for realistic simulation.

    Science.gov (United States)

    Cho, Young Sun; Suh, Hyun Sang; Lee, Won Hee; Kim, Tae-Seong

    2010-01-01

    Transcranial magnetic stimulation (TMS) is a technique for brain stimulation using rapidly changing magnetic fields generated by coils. It has been established as an effective stimulation technique to treat patients suffering from damaged brain functions. Although TMS is known to be painless and noninvasive, it can also be harmful to the brain through incorrect focusing and excessive stimulation, which might result in seizures. Therefore there is an ongoing research effort to elucidate and better understand the effect and mechanism of TMS. Lately, the boundary element method (BEM) and the finite element method (FEM) have been used to simulate the electromagnetic phenomena of TMS. However, there is a lack of general tools to generate TMS models due to some difficulties in realistic modeling of the human head and TMS coils. In this study, we have developed a toolbox through which one can generate high-resolution FE TMS models. The toolbox allows creating FE models of the head with isotropic and anisotropic electrical conductivities in five different tissues of the head and the coils in 3D. The generated TMS model is importable to FE software packages such as ANSYS for further, efficient electromagnetic analysis. We present a set of demonstrative results of realistic simulation of TMS with our toolbox.

  7. Coupled Global-Regional Climate Model Simulations of Future Changes in Hydrology over Central America

    Science.gov (United States)

    Oglesby, R. J.; Erickson, D. J.; Hernandez, J. L.; Irwin, D.

    2005-12-01

    Central America covers a relatively small area, but is topographically very complex, has long coast-lines, large inland bodies of water, and very diverse land cover which is both natural and human-induced. As a result, Central America is plagued by hydrologic extremes, especially major flooding and drought events, in a region where many people still barely manage to eke out a living through subsistence. Therefore, considerable concern exists about whether these extreme events will change, either in magnitude or in number, as climate changes in the future. To address this concern, we have used global climate model simulations of future climate change to drive a regional climate model centered on Central America. We use the IPCC 'business as usual' scenario 21st century run made with the NCAR CCSM3 global model to drive the regional model MM5 at 12 km resolution. We chose the 'business as usual' scenario to focus on the largest possible changes that are likely to occur. Because we are most interested in near-term changes, our simulations are for the years 2010, 2015, and 2025. A long 'present-day' run (for 2005) allows us to distinguish between climate variability and any signal due to climate change. Furthermore, a multi-year run with MM5 forced by NCEP reanalyses allows an assessment of how well the coupled global-regional model performs over Central America. Our analyses suggest that the coupled model does a credible job simulating the current climate and hydrologic regime, though lack of sufficient observations strongly complicates this comparison. The suite of model runs for the future years is currently nearing completion, and key results will be presented at the meeting.

  8. Domain-Level Assessment of the Weather Running Estimate-Nowcast (WREN) Model

    Science.gov (United States)

    2016-11-01

    WRF is maintained by the National Center for Atmospheric Research, which has developed a suite of Model Evaluation Tools (MET) to evaluate the accuracy of WRF forecasts. ...

  9. Comparing the performance of SIMD computers by running large air pollution models

    DEFF Research Database (Denmark)

    Brown, J.; Hansen, Per Christian; Wasniewski, J.

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on these computers. Using a realistic large-scale model, we gained detailed insight into the performance of the computers involved when used to solve large-scale scientific problems that involve several types of numerical computations. The computers used in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216.

  10. Development of Aspen: A microanalytic simulation model of the US economy

    Energy Technology Data Exchange (ETDEWEB)

    Pryor, R.J.; Basu, N.; Quint, T.

    1996-02-01

    This report describes the development of an agent-based microanalytic simulation model of the US economy. The microsimulation model capitalizes on recent technological advances in evolutionary learning and parallel computing. Results are reported for a test problem that was run using the model. The test results demonstrate the model's ability to predict business-like cycles in an economy where prices and inventories are allowed to vary; since most economic forecasting models have difficulty predicting any kind of cyclic behavior, these results show the potential of microanalytic simulation models to improve economic policy analysis and to provide new insights into underlying economic principles. Work already has begun on a more detailed model.
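
    Aspen itself is far richer, but the mechanism the test problem exercises (prices and inventories adjusting to each other and generating cyclic behavior) can be caricatured in a few lines; every number below is invented:

        # Caricature of a price-inventory feedback: a producer raises its
        # price when stocks are low and lowers it when stocks pile up, while
        # demand responds to price. All numbers are invented; Aspen is a far
        # richer agent-based model.
        price, inventory = 1.0, 10.0
        for t in range(60):
            demand = max(0.0, 20.0 - 10.0 * price)     # consumers buy less when dear
            production = 15.0                          # fixed output per period
            sales = min(demand, inventory + production)
            inventory += production - sales
            price *= 1.0 + 0.05 * (10.0 - inventory) / 10.0   # adjust toward target stock
            if t >= 55:
                print(t, round(price, 3), round(inventory, 2))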

  11. Efficient Turbulence Modeling for CFD Wake Simulations

    DEFF Research Database (Denmark)

    van der Laan, Paul

    …that can accurately and efficiently simulate wind turbine wakes. The linear k-ε eddy viscosity model (EVM) is a popular turbulence model in RANS; however, it underpredicts the velocity wake deficit and cannot predict the anisotropic Reynolds stresses in the wake. In the current work, nonlinear eddy viscosity models (NLEVMs) are applied to wind turbine wakes. NLEVMs can model anisotropic turbulence through a nonlinear stress-strain relation, and they can improve the velocity deficit by the use of a variable eddy viscosity coefficient that delays the wake recovery. Unfortunately, all tested NLEVMs show numerically unstable behavior for fine grids, which inhibits a grid dependency study for numerical verification. Therefore, a simpler EVM is proposed, labeled the k-ε-fp EVM, that has a linear stress-strain relation but still has a variable eddy viscosity coefficient. The k-ε-fp EVM is numerically…

  12. Modeling and simulation of gamma camera

    International Nuclear Information System (INIS)

    Singh, B.; Kataria, S.K.; Samuel, A.M.

    2002-08-01

    Simulation techniques play a vital role in the design of sophisticated instruments and also in the training of operating and maintenance staff. Gamma camera systems have been used for functional imaging in nuclear medicine. Functional images are derived from the external counting of a gamma emitting radioactive tracer that, after introduction into the body, mimics the behavior of a native biochemical compound. The position sensitive detector yields the coordinates of the gamma ray interaction with the detector, which are used to estimate the point of gamma ray emission within the tracer distribution space. This advanced imaging device is thus dependent on the performance of the algorithms for coordinate computation, estimation of the point of emission, generation of the image, and display of the image data. Contemporary systems also have protocols for quality control and clinical evaluation of imaging studies. Simulation of this processing leads to an understanding of the basic camera design problems. This report describes a PC based package for design and simulation of a gamma camera, along with options for simulating data acquisition and quality control of imaging studies. Image display and data processing, the other options implemented in SIMCAM, will be described in separate reports (under preparation). Gamma camera modeling and simulation in SIMCAM has preset configurations of the design parameters for various crystal detector sizes, with the option to pack the PMTs on a hexagonal or square lattice. Different algorithms for coordinate computation and spatial distortion removal are allowed, in addition to simulation of the energy correction circuit. The user can simulate different static, dynamic, MUGA and SPECT studies. The acquired/simulated data are processed for quality control and clinical evaluation of the imaging studies. Results show that the program can be used to assess these performances. Also the variations in performance parameters can be assessed due to the induced
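
    The report does not list its coordinate-computing algorithms, but the classic Anger arithmetic such cameras build on (estimating the scintillation position as the signal-weighted centroid of the photomultiplier positions) is easy to sketch; the PMT layout and pulse heights are invented:

        # Classic Anger-logic sketch of "coordinate computation": the event
        # position is the PMT-signal-weighted centroid of the PMT centres.
        # PMT layout and signals are invented; SIMCAM's actual algorithms
        # are configurable and may differ.
        pmt_xy = [(-5.0, 0.0), (0.0, 0.0), (5.0, 0.0),
                  (-2.5, 4.3), (2.5, 4.3)]             # PMT centres [cm]
        signals = [0.10, 0.60, 0.15, 0.10, 0.05]       # relative pulse heights

        total = sum(signals)                           # proportional to deposited energy
        x = sum(s * px for s, (px, _) in zip(signals, pmt_xy)) / total
        y = sum(s * py for s, (_, py) in zip(signals, pmt_xy)) / total
        print(f"estimated event position: ({x:.2f}, {y:.2f}) cm; energy signal {total:.2f}")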

  13. Comparing Productivity Simulated with Inventory Data Using Different Modelling Technologies

    Science.gov (United States)

    Klopf, M.; Pietsch, S. A.; Hasenauer, H.

    2009-04-01

    The Lime Stone National Park in Austria was established in 1997 to protect sensitive limestone soils from degradation due to heavy forest management. Since 1997 the management activities have been successively reduced; standing volume and coarse woody debris (CWD) increased, and degraded soils began to recover. One option to study the rehabilitation process towards a natural virgin forest state is the use of modelling technology. In this study we test two different modelling approaches for their applicability to the Lime Stone National Park. We compare the standing tree volume simulated by (i) the individual tree growth model MOSES, and (ii) the species- and management-sensitive adaptation of the biogeochemical-mechanistic model Biome-BGC. The results from the two models are compared with field observations from repeated permanent forest inventory plots of the Lime Stone National Park in Austria. The simulated CWD predictions of the BGC model were compared with dead wood measurements (standing and lying dead wood) recorded at the permanent inventory plots. The inventory was established between 1994 and 1996 and remeasured from 2004 to 2005. For this analysis 40 plots of this inventory were selected which comprise the required dead wood components and are dominated by a single tree species. First we used the distance-dependent individual tree growth model MOSES to derive the standing timber and the amount of mortality per hectare. MOSES is initialized with the inventory data at plot establishment, and each sampling plot is treated as a forest stand. Biome-BGC is a process-based biogeochemical model with extensions for Austrian tree species, a self-initialization routine and a forest management tool. The initialization for the actual simulations with the BGC model was done as follows: we first used spin-up runs to derive a balanced forest vegetation, similar to an undisturbed forest. Next we considered the management history of the past centuries (heavy clear cuts

  14. Impact of treadmill running and sex on hippocampal neurogenesis in the mouse model of amyotrophic lateral sclerosis.

    Directory of Open Access Journals (Sweden)

    Xiaoxing Ma

    Full Text Available Hippocampal neurogenesis in the subgranular zone (SGZ) of the dentate gyrus (DG) occurs throughout life and is regulated by pathological and physiological processes. The role of oxidative stress in hippocampal neurogenesis and its response to exercise or neurodegenerative diseases remains controversial. The present study was designed to investigate the impact of oxidative stress, treadmill exercise and sex on hippocampal neurogenesis in a murine model of heightened oxidative stress (G93A mice). G93A and wild type (WT) mice were randomized to a treadmill running (EX) or a sedentary (SED) group for 1 or 4 wk. Immunohistochemistry was used to detect bromodeoxyuridine (BrdU) labeled proliferating cells, surviving cells, and their phenotype, as well as for determination of oxidative stress (3-NT; 8-OHdG). BDNF and IGF1 mRNA expression was assessed by in situ hybridization. Results showed that: (1) G93A-SED mice had greater hippocampal neurogenesis, BDNF mRNA, and 3-NT, as compared to WT-SED mice. (2) Treadmill running promoted hippocampal neurogenesis and BDNF mRNA content and lowered DNA oxidative damage (8-OHdG) in WT mice. (3) Male G93A mice showed significantly higher cell proliferation but a lower level of survival vs. female G93A mice. We conclude that G93A mice show higher hippocampal neurogenesis, in association with higher BDNF expression, yet running did not further enhance these phenomena in G93A mice, probably due to a 'ceiling effect' of already heightened basal levels of hippocampal neurogenesis and BDNF expression.

  15. nIFTy galaxy cluster simulations II: radiative models

    CSIR Research Space (South Africa)

    Sembolini, F

    2016-04-01

    Full Text Available …radiative simulations, dark matter is more centrally concentrated, the extent not simply depending on the presence/absence of AGN feedback. The scatter in global quantities is substantially higher than for non-radiative runs. Intriguingly, adding radiative physics seems...

  16. Model for Simulating a Spiral Software-Development Process

    Science.gov (United States)

    Mizell, Carolyn; Curley, Charles; Nayak, Umanath

    2010-01-01

    A discrete-event simulation model, and a computer program that implements the model, have been developed as means of analyzing a spiral software-development process. This model can be tailored to specific development environments for use by software project managers in making quantitative cases for deciding among different software-development processes, courses of action, and cost estimates. A spiral process can be contrasted with a waterfall process, which is a traditional process that consists of a sequence of activities that include analysis of requirements, design, coding, testing, and support. A spiral process is an iterative process that can be regarded as a repeating modified waterfall process. Each iteration includes assessment of risk, analysis of requirements, design, coding, testing, delivery, and evaluation. A key difference between a spiral and a waterfall process is that a spiral process can accommodate changes in requirements at each iteration, whereas in a waterfall process, requirements are considered to be fixed from the beginning and, therefore, a waterfall process is not flexible enough for some projects, especially those in which requirements are not known at the beginning or may change during development. For a given project, a spiral process may cost more and take more time than does a waterfall process, but may better satisfy a customer's expectations and needs. Models for simulating various waterfall processes have been developed previously, but until now, there have been no models for simulating spiral processes. The present spiral-process-simulating model and the software that implements it were developed by extending a discrete-event simulation process model of the IEEE 12207 Software Development Process, which was built using commercially available software known as the Process Analysis Tradeoff Tool (PATT). Typical inputs to PATT models include industry-average values of product size (expressed as number of lines of code
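
    PATT is a commercial tool; the core of a discrete-event spiral model (repeated iterations, each cycling through the same phases with stochastic durations) can nevertheless be sketched with the open-source simpy package, with an illustrative phase list and durations:

        # Discrete-event sketch of a spiral process using simpy (not PATT):
        # each iteration runs the same phases with random durations.
        # The phase list and duration ranges are illustrative.
        import random
        import simpy

        PHASES = ["risk assessment", "requirements", "design",
                  "coding", "testing", "evaluation"]

        def spiral(env, iterations=4):
            for i in range(1, iterations + 1):
                for phase in PHASES:
                    yield env.timeout(random.uniform(5, 15))   # phase duration [days]
                print(f"iteration {i} delivered at day {env.now:.1f}")

        random.seed(0)
        env = simpy.Environment()
        env.process(spiral(env))
        env.run()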

  17. Best Practices for Crash Modeling and Simulation

    Science.gov (United States)

    Fasanella, Edwin L.; Jackson, Karen E.

    2002-01-01

    Aviation safety can be greatly enhanced by the expeditious use of computer simulations of crash impact. Unlike automotive impact testing, which is now routine, experimental crash tests of even small aircraft are expensive and complex due to the high cost of the aircraft and the myriad of crash impact conditions that must be considered. Ultimately, the goal is to utilize full-scale crash simulations of aircraft for design evaluation and certification. The objective of this publication is to describe "best practices" for modeling aircraft impact using explicit nonlinear dynamic finite element codes such as LS-DYNA, DYNA3D, and MSC.Dytran. Although "best practices" is somewhat relative, it is hoped that the authors' experience will help others to avoid some of the common pitfalls in modeling that are not documented in one single publication. In addition, a discussion of experimental data analysis, digital filtering, and test-analysis correlation is provided. Finally, some examples of aircraft crash simulations are described in several appendices following the main report.

  18. Systematic simulations of modified gravity: chameleon models

    Energy Technology Data Exchange (ETDEWEB)

    Brax, Philippe [Institut de Physique Theorique, CEA, IPhT, CNRS, URA 2306, F-91191Gif/Yvette Cedex (France); Davis, Anne-Christine [DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Li, Baojiu [Institute for Computational Cosmology, Department of Physics, Durham University, Durham DH1 3LE (United Kingdom); Winther, Hans A. [Institute of Theoretical Astrophysics, University of Oslo, 0315 Oslo (Norway); Zhao, Gong-Bo, E-mail: philippe.brax@cea.fr, E-mail: a.c.davis@damtp.cam.ac.uk, E-mail: baojiu.li@durham.ac.uk, E-mail: h.a.winther@astro.uio.no, E-mail: gong-bo.zhao@port.ac.uk [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom)

    2013-04-01

    In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 h Mpc⁻¹, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as a guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future.

  19. Systematic simulations of modified gravity: chameleon models

    International Nuclear Information System (INIS)

    Brax, Philippe; Davis, Anne-Christine; Li, Baojiu; Winther, Hans A.; Zhao, Gong-Bo

    2013-01-01

    In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 h Mpc⁻¹, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as a guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future.

  20. Closed loop models for analyzing engineering requirements for simulators

    Science.gov (United States)

    Baron, S.; Muralidharan, R.; Kleinman, D.

    1980-01-01

    A closed-loop analytic model incorporating a model for the human pilot (namely, the optimal control model) was developed to allow certain simulation design tradeoffs to be evaluated quantitatively. This model was applied to a realistic flight control problem. The resulting model is used to analyze both overall simulation effects and the effects of individual elements. The results show that, as compared to an ideal continuous simulation, the discrete simulation can result in significant performance and/or workload penalties.
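
    The optimal control model of the pilot is built around linear-quadratic machinery; as an indicative fragment only, here is the linear-quadratic state-feedback gain for a toy double-integrator plant computed via the continuous algebraic Riccati equation (plant and weights invented):

        # Indicative fragment: LQ state-feedback gain for a toy plant via the
        # continuous algebraic Riccati equation. Plant and weights are
        # invented, not the paper's pilot/vehicle model.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double-integrator dynamics
        B = np.array([[0.0], [1.0]])
        Q = np.diag([1.0, 0.1])                  # state weighting
        R = np.array([[0.01]])                   # control-effort weighting

        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)          # optimal feedback u = -K x
        print("LQ gain:", K)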

  1. Measuring Short- and Long-run Promotional Effectiveness on Scanner Data Using Persistence Modeling

    NARCIS (Netherlands)

    M.G. Dekimpe (Marnik); D.M. Hanssens (Dominique); V.R. Nijs; J-B.E.M. Steenkamp (Jan-Benedict)

    2003-01-01

    The use of price promotions to stimulate brand and firm performance is increasing. We discuss how (i) the availability of longer scanner data time series, and (ii) persistence modeling, have led to greater insights into the dynamic effects of price promotions, as one can now quantify

  2. Renormalization group running of fermion observables in an extended non-supersymmetric SO(10) model

    Energy Technology Data Exchange (ETDEWEB)

    Meloni, Davide [Dipartimento di Matematica e Fisica, Università di Roma Tre,Via della Vasca Navale 84, 00146 Rome (Italy); Ohlsson, Tommy; Riad, Stella [Department of Physics, School of Engineering Sciences,KTH Royal Institute of Technology - AlbaNova University Center,Roslagstullsbacken 21, 106 91 Stockholm (Sweden)

    2017-03-08

    We investigate the renormalization group evolution of fermion masses, mixings and quartic scalar Higgs self-couplings in an extended non-supersymmetric SO(10) model, where the Higgs sector contains the 10_H, 120_H, and 126_H representations. The group SO(10) is spontaneously broken at the GUT scale to the Pati-Salam group and subsequently to the Standard Model (SM) at an intermediate scale M_I. We explicitly take into account the effects of the change of gauge groups in the evolution. In particular, we derive the renormalization group equations for the different Yukawa couplings. We find that the computed physical fermion observables can be successfully matched to the experimental measured values at the electroweak scale. Using the same Yukawa couplings at the GUT scale, the measured values of the fermion observables cannot be reproduced with a SM-like evolution, leading to differences in the numerical values up to around 80%. Furthermore, a similar evolution can be performed for a minimal SO(10) model, where the Higgs sector consists of the 10_H and 126_H representations only, showing an equally good potential to describe the low-energy fermion observables. Finally, for both the extended and the minimal SO(10) models, we present predictions for the three Dirac and Majorana CP-violating phases as well as three effective neutrino mass parameters.

  3. Hybrid simulation models for data-intensive systems

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00473067

    Data-intensive systems are used to access and store massive amounts of data by combining the storage resources of multiple data-centers, usually deployed all over the world, in one system. This enables users to utilize these massive storage capabilities in a simple and efficient way. However, with the growth of these systems it becomes a hard problem to estimate the effects of modifications to the system, such as data placement algorithms or hardware upgrades, and to validate these changes for potential side effects. This thesis addresses the modeling of operational data-intensive systems and presents a novel simulation model which estimates the performance of system operations. The running example used throughout this thesis is the data-intensive system Rucio, which is used as the data management system of the ATLAS experiment at CERN’s Large Hadron Collider. Existing system models in literature are not applicable to data-intensive workflows, as they only consider computational workflows or make assumpti...

  4. DynaSim: A MATLAB Toolbox for Neural Modeling and Simulation.

    Science.gov (United States)

    Sherfey, Jason S; Soplata, Austin E; Ardid, Salva; Roberts, Erik A; Stanley, David A; Pittman-Polletta, Benjamin R; Kopell, Nancy J

    2018-01-01

    DynaSim is an open-source MATLAB/GNU Octave toolbox for rapid prototyping of neural models and batch simulation management. It is designed to speed up and simplify the process of generating, sharing, and exploring network models of neurons with one or more compartments. Models can be specified by equations directly (similar to XPP or the Brian simulator) or by lists of predefined or custom model components. The higher-level specification supports arbitrarily complex population models and networks of interconnected populations. DynaSim also includes a large set of features that simplify exploring model dynamics over parameter spaces, running simulations in parallel using both multicore processors and high-performance computer clusters, and analyzing and plotting large numbers of simulated data sets in parallel. It also includes a graphical user interface (DynaSim GUI) that supports full functionality without requiring user programming. The software has been implemented in MATLAB to enable advanced neural modeling using MATLAB, given its popularity and a growing interest in modeling neural systems. The design of DynaSim incorporates a novel schema for model specification to facilitate future interoperability with other specifications (e.g., NeuroML, SBML), simulators (e.g., NEURON, Brian, NEST), and web-based applications (e.g., Geppetto) outside MATLAB. DynaSim is freely available at http://dynasimtoolbox.org. This tool promises to reduce barriers for investigating dynamics in large neural models, facilitate collaborative modeling, and complement other tools being developed in the neuroinformatics community.

  5. Addressing Thermal Model Run Time Concerns of the Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA)

    Science.gov (United States)

    Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff

    2016-01-01

    The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter, Hubble-sized telescope donated from elsewhere in the federal government for near-infrared sky surveys and exoplanet searches, to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle, leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity, leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes, reducing the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time steps than desired, which greatly increases the overall run time; the analysis of this fluid model required mitigation to drive the run time down by solving portions of the model at different time scales. Lastly, investigations were made into the impact of removing small radiation couplings on run time and accuracy. Use of these techniques allowed the models to produce meaningful results within reasonable run times and to meet project schedule deadlines.

  6. Forecasting Lightning Threat using Cloud-Resolving Model Simulations

    Science.gov (United States)

    McCaul, Eugene W., Jr.; Goodman, Steven J.; LaCasse, Katherine M.; Cecil, Daniel J.

    2008-01-01

    simulations can be in error. Although these model shortcomings presently limit the precision of lightning threat forecasts from individual runs of current generation models,the techniques proposed herein should continue to be applicable as newer and more accurate physically-based model versions, physical parameterizations, initialization techniques and ensembles of forecasts become available.

  7. Testing Lorentz Invariance Emergence in the Ising Model using Monte Carlo simulations

    CERN Document Server

    Dias Astros, Maria Isabel

    2017-01-01

    In the context of Lorentz invariance as an emergent phenomenon at low energy scales, proposed as a way to study quantum gravity, a system composed of two interacting 3D Ising models (one with an anisotropy in one direction) was considered. Two Monte Carlo simulations were run: one for the 2D Ising model and one for the target model. In both cases the observables (energy, magnetization, heat capacity and magnetic susceptibility) were computed for different lattice sizes, and a Binder cumulant was introduced in order to estimate the critical temperature of the systems. Moreover, the correlation function was calculated for the 2D Ising model.
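
    A generic illustration of the method: the sketch below (Python, not the thesis code) samples the 2D Ising model with the Metropolis algorithm and estimates the Binder cumulant U_L = 1 − ⟨m⁴⟩/(3⟨m²⟩²), whose crossing for different lattice sizes is the standard way to locate the critical temperature. Lattice sizes, temperature and sweep counts are illustrative choices.

    ```python
    import numpy as np

    def metropolis_ising_2d(L=16, T=2.3, n_sweeps=2000, seed=0):
        """Metropolis sampling of the 2D Ising model; returns magnetization samples."""
        rng = np.random.default_rng(seed)
        spins = rng.choice([-1, 1], size=(L, L))
        mags = []
        for sweep in range(n_sweeps):
            for _ in range(L * L):
                i, j = rng.integers(0, L, size=2)
                # Sum of the four nearest neighbours (periodic boundaries).
                nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                      + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
                dE = 2.0 * spins[i, j] * nb  # energy cost of a flip; J = kB = 1
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    spins[i, j] *= -1
            if sweep >= n_sweeps // 2:  # discard the first half as burn-in
                mags.append(spins.mean())
        return np.asarray(mags)

    def binder_cumulant(mags):
        """U_L = 1 - <m^4>/(3 <m^2>^2); curves for different L cross near T_c."""
        m2, m4 = np.mean(mags**2), np.mean(mags**4)
        return 1.0 - m4 / (3.0 * m2**2)

    for L in (8, 16):
        print(f"L={L}: U ~ {binder_cumulant(metropolis_ising_2d(L=L, T=2.27)):.3f}")
    ```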

  8. Development of the Colorful Balls Run Game Learning Model for Motion Reaction in Mentally Disabled Children at SLB Negeri Semarang in 2015

    Directory of Open Access Journals (Sweden)

    Rahadian Yodha Bhakti

    2016-02-01

    Full Text Available The purpose of this study was to determine the product of the development of the Colorful Balls Run game learning model for motion reaction in mentally disabled grade V children of SLB Negeri Semarang in the 2015 academic year. This is a research and development (R&D) study consisting of 10 steps: identifying potential and problems, data collection, product design, design validation, design revision, product testing, product revision, trial use, final product testing, and mass production. The product was judged feasible because the physical education teaching experts rated it 80% (good) and the learning experts rated it 92% (very good). In trial I, with a small group, the cognitive aspect scored 83.53% (good), the affective aspect 82.10% (good), and the psychomotor aspect 81.39% (good), giving a trial I average of 82.34% (good). In trial II, with a large group, the cognitive aspect scored 85.14% (good), the affective aspect 83.76% (good), and the psychomotor aspect 83.07% (good), giving a trial II average of 83.99% (good). It was concluded that the developed Colorful Balls Run game model can be used as an alternative way to teach sport, especially small-ball games, to grade V students of SLB Negeri Semarang.

  9. Quark flavour observables in the Littlest Higgs model with T-parity after LHC Run 1.

    Science.gov (United States)

    Blanke, Monika; Buras, Andrzej J; Recksiegel, Stefan

    2016-01-01

    The Littlest Higgs model with T-parity (LHT) belongs to the simplest new physics scenarios with new sources of flavour and CP violation. The latter originate in the interactions of ordinary quarks and leptons with heavy mirror quarks and leptons, mediated by new heavy gauge bosons. A heavy fermionic top partner is also present in this model; it communicates with the SM fermions by means of the standard [Formula: see text] and [Formula: see text] gauge bosons. We present a new analysis of quark flavour observables in the LHT model in view of the upcoming flavour precision era. We use all available information on the CKM parameters, lattice QCD input and experimental data on quark flavour observables and corresponding theoretical calculations, taking into account new lower bounds on the symmetry breaking scale and the mirror quark masses from the LHC. We investigate by how much the branching ratios for a number of rare K and B decays are still allowed to depart from their SM values. This includes [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text]. Taking into account the constraints from [Formula: see text] processes, significant departures from the SM predictions for [Formula: see text] and [Formula: see text] are possible, while the effects in B decays are much smaller. In particular, the LHT model favours [Formula: see text], which is not supported by the data, and the present anomalies in [Formula: see text] decays cannot be explained in this model. With the recent lattice and large N input, the imposition of the [Formula: see text] constraint implies a significant suppression of the branching ratio for [Formula: see text] with respect to its SM value, while allowing only for small modifications of [Formula: see text]. Finally, we investigate how the LHT physics could be distinguished from other models by means of indirect measurements and

  10. A Case Study of Low-Level Jets in Yerevan Simulated by the WRF Model

    Science.gov (United States)

    Gevorgyan, Artur

    2018-01-01

    The capability of high-resolution (3 km) Weather Research and Forecasting (WRF) simulations to reproduce topographically induced mountain-valley winds and low-level jets (LLJs) in Yerevan has been evaluated using high-frequency observational and modeled data. The simulated near-surface winds and the LLJ characteristics observed on 4 July 2015 proved highly sensitive to both the planetary boundary layer (PBL) setup and the initial and lateral boundary conditions. Among the nine tested PBL parameterization schemes, the MYJ, QNSE, and TEMF schemes showed the greatest skill in simulating near-surface valley winds over Yerevan, while the other PBL schemes tend to significantly underestimate the strength of the valley winds, with the BouLac PBL scheme being the worst performer. Most of the PBL schemes simulate well-defined LLJs in Yerevan associated with evening valley winds. The simulated jet cores are mostly located between 150 and 250 m above ground, with magnitudes varying from 12 to 21 m s⁻¹. However, the intensity of the observed nocturnal LLJ in Yerevan (located at 110 m above ground) is strongly underestimated by most of the WRF runs, while the Shin and Hong and YSU PBL schemes simulate nocturnal LLJs higher than the observed one. The WRF runs initialized with the newly released European Centre for Medium-Range Weather Forecasts ERA-5 data set showed improved simulation of near-surface winds and nighttime potential temperatures in Yerevan relative to those forced by Global Forecast System fields.

  11. Quark flavour observables in the Littlest Higgs model with T-parity after LHC Run 1

    CERN Document Server

    Blanke, Monika; Recksiegel, Stefan

    2016-04-02

    The Littlest Higgs Model with T-parity (LHT) belongs to the simplest new physics scenarios with new sources of flavour and CP violation. We present a new analysis of quark observables in the LHT model in view of the oncoming flavour precision era. We use all available information on the CKM parameters, lattice QCD input and experimental data on quark flavour observables and corresponding theoretical calculations, taking into account new lower bounds on the symmetry breaking scale and the mirror quark masses from the LHC. We investigate by how much the branching ratios for a number of rare $K$ and $B$ decays are still allowed to depart from their SM values. This includes $K^+\\to\\pi^+\

  12. The effect of treadmill running on passive avoidance learning in animal model of Alzheimer disease

    OpenAIRE

    Nasrin Hosseini; Hojjatallah Alaei; Parham Reisi; Maryam Radahmadi

    2013-01-01

    Background: Alzheimer's disease is a progressive neurodegenerative disorder of the elderly, characterized by dementia and severe neuronal loss in some regions of the brain, such as the nucleus basalis magnocellularis, which plays an important role in brain functions such as learning and memory. Lesioning the cholinergic neurons of the nucleus basalis magnocellularis with ibotenic acid is commonly regarded as a suitable model of Alzheimer's disease. Previous studies reported that exercise...

  13. Biomechanics trends in modeling and simulation

    CERN Document Server

    Ogden, Ray

    2017-01-01

    The book presents a state-of-the-art overview of biomechanical and mechanobiological modeling and simulation of soft biological tissues. Seven well-known scientists working in that particular field discuss topics such as biomolecules, networks and cells as well as failure, multi-scale, agent-based, bio-chemo-mechanical and finite element models appropriate for computational analysis. Applications include arteries, the heart, vascular stents and valve implants as well as adipose, brain, collagenous and engineered tissues. The mechanics of the whole cell and sub-cellular components as well as the extracellular matrix structure and mechanotransduction are described. In particular, the formation and remodeling of stress fibers, cytoskeletal contractility, cell adhesion and the mechanical regulation of fibroblast migration in healing myocardial infarcts are discussed. The essential ingredients of continuum mechanics are provided. Constitutive models of fiber-reinforced materials with an emphasis on arterial walls ...

  14. Simulations, evaluations and models. Vol. 1

    International Nuclear Information System (INIS)

    Brehmer, B.; Leplat, J.

    1992-01-01

    Papers presented at the Fourth MOHAWC (Models of Human Activities in Work Context) workshop. The general theme was simulations, evaluations and models, with emphasis on time in relation to the modelling of human activities in modern high-tech work. Such work often requires people to control dynamic systems, and the behaviour and misbehaviour of these systems in time is a principal focus of work in, for example, a modern process plant. The papers report on microworlds and their innovative uses, both in experiments and in a new application: testing a program which performs diagnostic reasoning. They present new perspectives on the problem of time in process control, showing the importance of considering the time scales of dynamic tasks, both in individual and in distributed decision making, and they provide new formalisms, both for the representation of time and for reasoning involving time in diagnosis.

  15. Traffic flow dynamics data, models and simulation

    CERN Document Server

    Treiber, Martin

    2013-01-01

    This textbook provides a comprehensive and instructive coverage of vehicular traffic flow dynamics and modeling. It makes this fascinating interdisciplinary topic, which to date was only documented in parts by specialized monographs, accessible to a broad readership. Numerous figures and problems with solutions help the reader to quickly understand and practice the presented concepts. This book is targeted at students of physics and traffic engineering and, more generally, also at students and professionals in computer science, mathematics, and interdisciplinary topics. It also offers material for project work in programming and simulation at college and university level. The main part, after presenting different categories of traffic data, is devoted to a mathematical description of the dynamics of traffic flow, covering macroscopic models which describe traffic in terms of density, as well as microscopic many-particle models in which each particle corresponds to a vehicle and its driver. Focus chapters on ...

  16. Modelling and Simulation for Major Incidents

    Directory of Open Access Journals (Sweden)

    Eleonora Pacciani

    2015-11-01

    Full Text Available In recent years, there has been a rise in Major Incidents with a large impact on citizens' health and on society. Since live experiments cannot be conducted when it comes to physical and/or toxic trauma, only an accurate in silico reconstruction allows us to identify organizational solutions with the best possible chance of success, in correlation with the limitations on available resources (e.g. medical teams, first responders, treatments, transport, and hospital availability) and with the variability of the characteristics of the event (e.g. type of incident, severity of the event and type of lesions). Utilizing modelling and simulation techniques, a simplified mathematical model of the physiological evolution of patients involved in physical and toxic trauma incident scenarios has been developed and implemented. The model formalizes the dynamics, operating standards and practices of the medical response and of the main emergency services in the chain of emergency management during a Major Incident.

  17. A Java simulator of Rescorla and Wagner's prediction error model and configural cue extensions.

    Science.gov (United States)

    Alonso, Eduardo; Mondragón, Esther; Fernández, Alberto

    2012-10-01

    In this paper we present the "R&W Simulator" (version 3.0), a Java simulator of Rescorla and Wagner's prediction error model of learning. It is able to run whole experimental designs, and compute and display the associative values of elemental and compound stimuli simultaneously, as well as use extra configural cues in generating compound values; it also permits changing the US parameters across phases. The simulator produces both numerical and graphical outputs, and includes a functionality to export the results to a data processor spreadsheet. It is user-friendly, and built with a graphical interface designed to allow neuroscience researchers to input the data in their own "language". It is a cross-platform simulator, so it does not require any special equipment, operating system or support program, and does not need installation. The "R&W Simulator" (version 3.0) is available free. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
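
    The underlying learning rule is Rescorla and Wagner's prediction-error update, ΔV_i = α_i β (λ − ΣV), applied to every stimulus present on a trial. The sketch below is a minimal Python rendition of that rule for illustration (the R&W Simulator itself is a Java application); the salience and learning-rate values are made up.

    ```python
    def rescorla_wagner(trials, alpha, beta=0.1, lam_reinforced=1.0):
        """Trial-by-trial Rescorla-Wagner updates.

        trials: list of (present_stimuli, reinforced) pairs, e.g. [({"A"}, True), ...]
        alpha:  per-stimulus salience, e.g. {"A": 0.5, "B": 0.5}
        """
        V = {s: 0.0 for s in alpha}  # associative strengths, all starting at zero
        for present, reinforced in trials:
            lam = lam_reinforced if reinforced else 0.0
            error = lam - sum(V[s] for s in present)  # prediction error for the compound
            for s in present:                          # only present cues are updated
                V[s] += alpha[s] * beta * error
        return V

    # Blocking design: A+ training followed by AB+ compound trials.
    alpha = {"A": 0.5, "B": 0.5}
    trials = [({"A"}, True)] * 20 + [({"A", "B"}, True)] * 20
    print(rescorla_wagner(trials, alpha))  # B ends far below A: it is "blocked"
    ```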

  18. Qualitative simulation in formal process modelling

    International Nuclear Information System (INIS)

    Sivertsen, Elin R.

    1999-01-01

    In relation to several different research activities at the OECD Halden Reactor Project, the usefulness of formal process models has been identified. Being represented in some appropriate representation language, the purpose of these models is to model process plants and plant automatics in a unified way to allow verification and computer aided design of control strategies. The present report discusses qualitative simulation and the tool QSIM as one approach to formal process models. In particular, the report aims at investigating how recent improvements of the tool facilitate the use of the approach in areas like process system analysis, procedure verification, and control software safety analysis. An important long term goal is to provide a basis for using qualitative reasoning in combination with other techniques to facilitate the treatment of embedded programmable systems in Probabilistic Safety Analysis (PSA). This is motivated from the potential of such a combination in safety analysis based on models comprising both software, hardware, and operator. It is anticipated that the research results from this activity will benefit V and V in a wide variety of applications where formal process models can be utilized. Examples are operator procedures, intelligent decision support systems, and common model repositories.

  19. Design of ProjectRun21

    DEFF Research Database (Denmark)

    Damsted, Camma; Parner, Erik Thorlund; Sørensen, Henrik

    2017-01-01

    ...training, the runners' running experience and pace abilities can be used as estimates of load capacity. Since no evidence-based knowledge exists of how to plan appropriate half-marathon running schedules considering the level of running experience and running pace, the aim of ProjectRun21 is to investigate [...] of three half-marathon running schedules developed for the study. Running data will be collected objectively by GPS. Injury will be based on the consensus-based time-loss definition by Yamato et al.: "Running-related (training or competition) musculoskeletal pain in the lower limbs that causes [...]". The exposure to running is pre-fixed in the running schedules and thereby conditioned by design. Time-to-event models will be used for analytical purposes. DISCUSSION: ProjectRun21 will examine if particular subgroups of runners with certain running experiences and running paces seem to sustain more running...

  20. Influential factors of red-light running at signalized intersection and prediction using a rare events logistic regression model.

    Science.gov (United States)

    Ren, Yilong; Wang, Yunpeng; Wu, Xinkai; Yu, Guizhen; Ding, Chuan

    2016-10-01

    Red light running (RLR) has become a major safety concern at signalized intersections. To prevent RLR-related crashes, it is critical to identify the factors that significantly impact drivers' RLR behavior, and to predict potential RLR in real time. In this research, nine months of RLR events extracted from high-resolution traffic data collected by loop detectors at three signalized intersections were used to identify the factors that significantly affect RLR behavior. The data analysis indicated that occupancy time, time gap, used yellow time, time left to yellow start, whether the preceding vehicle runs through the intersection during yellow, and whether there is a vehicle passing through the intersection on the adjacent lane were significant factors for RLR behavior. Furthermore, due to the rare-events nature of RLR, a modified rare events logistic regression model was developed for RLR prediction. The rare events logistic regression method has been applied in many fields and shows impressive performance, but so far no previous research has applied this method to study RLR. The results showed that the rare events logistic regression model performed significantly better than the standard logistic regression model. More importantly, the proposed RLR prediction method is purely based on loop detector data collected from a single advance loop detector located 400 feet away from the stop bar. This brings great potential for future field applications of the proposed method, since loops have been widely implemented at many intersections and can collect data in real time. This research is expected to contribute significantly to the improvement of intersection safety. Copyright © 2016 Elsevier Ltd. All rights reserved.
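
    A common rare-events correction, in the spirit of King and Zeng's estimator, fits an ordinary logit and then shifts the intercept back toward the population base rate. The sketch below illustrates only that idea in Python with scikit-learn (assuming scikit-learn ≥ 1.2 for penalty=None); it is not the authors' model, and the predictors and data are hypothetical stand-ins for factors such as occupancy time and time gap.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def rare_events_logit(X, y, tau):
        """Ordinary logit plus a King-Zeng-style prior correction of the intercept.

        tau is the true event fraction in the population, typically far below
        the event fraction in the (case-enriched) estimation sample.
        """
        model = LogisticRegression(penalty=None).fit(X, y)
        ybar = y.mean()
        # Shift the intercept back toward the population base rate.
        model.intercept_ -= np.log(((1 - tau) / tau) * (ybar / (1 - ybar)))
        return model

    # Hypothetical predictors standing in for the paper's factors
    # (occupancy time, time gap, used yellow time, time left to yellow start).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(5000, 4))
    y = (rng.random(5000) < 0.02 + 0.02 * (X[:, 1] > 1.0)).astype(int)  # rare outcome
    model = rare_events_logit(X, y, tau=0.005)
    print(model.intercept_, model.coef_)
    ```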

  1. Heinrich events modeled in transient glacial simulations

    Science.gov (United States)

    Ziemen, Florian; Kapsch, Marie; Mikolajewicz, Uwe

    2017-04-01

    Heinrich events are among the most prominent events of climate variability recorded in proxies across the northern hemisphere. They are the archetype of ice sheet-climate interactions on millennial time scales. Nevertheless, the exact mechanisms that cause Heinrich events are still under debate, and their climatic consequences are far from being fully understood. We address open questions by studying Heinrich events in a framework coupling an ice sheet model (ISM) to an atmosphere-ocean-vegetation general circulation model (AOVGCM), where this variability occurs as part of the model-generated internal variability. The framework consists of a northern hemisphere setup of the modified Parallel Ice Sheet Model (mPISM) coupled to the global AOVGCM ECHAM5/MPIOM/LPJ. The simulations were performed fully coupled and with transient orbital and greenhouse gas forcing. They span from several millennia before the last glacial maximum into the deglaciation. To make these long simulations feasible, the atmosphere is accelerated by a factor of 10 relative to the other model components using a periodical-synchronous coupling technique. To disentangle the effects of the Heinrich events and the deglaciation, we focus on the events occurring before the deglaciation. The modeled Heinrich events show a peak ice discharge of about 0.05 Sv and raise the sea level by 2.3 m on average. The resulting surface water freshening reduces the Atlantic meridional overturning circulation and ocean heat release. The reduction in ocean heat release causes a sub-surface warming and decreases the air temperature and precipitation regionally and downstream into Eurasia. The surface elevation decrease of the ice sheet enhances moisture transport onto the ice sheet and thus increases precipitation over the Hudson Bay area, thereby accelerating the recovery after an event.

  2. Simulation and Empirical Studies of the Commercial SI Engine Performance and Its Emission Levels When Running on a CNG and Hydrogen Blend

    Directory of Open Access Journals (Sweden)

    Rafaa Saaidia

    2017-12-01

    Full Text Available This article reports a simulation based on Computational Fluid Dynamics (CFD) and an empirical investigation of in-cylinder flow characteristics; in addition, it assesses the performance and emission levels of a commercial spark-ignition engine running on CNG and hydrogen blends in different ratios. The main objective was to determine the optimum hydrogen ratio that would yield the best brake torque and release the least polluting gases. The in-cylinder flow velocity and turbulence were investigated during the intake stroke in order to analyze the intake flow behavior; to this end, a 3D CFD code was adopted. Various engine speeds were investigated for engines fueled with gasoline, CNG, and a hydrogen-CNG blend (HCNG) via external mixtures, and the variation of brake torque (BT) and of NOX and CO emissions was examined. A series of tests was conducted on the engine within the speed range of 1000 to 5000 rpm. For this purpose, a commercial Hyundai Sonata S.I engine was modified to operate on a blend of CNG and hydrogen in different ratios. The experiments attempted to determine the optimum allowable hydrogen ratio with CNG for normal engine operation, and the engine performance and emission levels were analyzed. At the engine speed of 4200 rpm, the results revealed that beyond a ratio of 50% hydrogen by volume added to CNG, a backfire phenomenon appeared. Below this ratio (0~40% hydrogen by volume), the CNG and hydrogen blend proved beneficial for engine performance and for curtailing emission levels. However, at low engine speeds the NOX concentration increased with hydrogen content, whereas at high engine speeds it decreased to its lowest level compared to that reached with gasoline as the running fuel. The concentration levels of HC, CO2, and CO decreased with increasing hydrogen percentage.

  3. Simulation of arc models with the block modelling method

    NARCIS (Netherlands)

    Thomas, R.; Lahaye, D.J.P.; Vuik, C.; Van der Sluis, L.

    2015-01-01

    Simulation of current interruption is currently performed with non-ideal switching devices for large power systems. Nevertheless, for small networks, non-ideal switching devices can be substituted by arc models. However, this substitution has a negative impact on the computation time. At the same

  4. Modeling lignin polymerization. Part 1: simulation model of dehydrogenation polymers.

    NARCIS (Netherlands)

    F.R.D. van Parijs (Frederik); K. Morreel; J. Ralph; W. Boerjan; R.M.H. Merks (Roeland)

    2010-01-01

    Lignin is a heteropolymer that is thought to form in the cell wall by combinatorial radical coupling of monolignols. Here, we present a simulation model of in vitro lignin polymerization, based on the combinatorial coupling theory, which allows us to predict the reaction conditions

  5. Simulation of Mercury's magnetosheath with a combined hybrid-paraboloid model

    Science.gov (United States)

    Parunakian, David; Dyadechkin, Sergey; Alexeev, Igor; Belenkaya, Elena; Khodachenko, Maxim; Kallio, Esa; Alho, Markku

    2017-08-01

    In this paper we introduce a novel approach for modeling planetary magnetospheres that involves a combination of the hybrid model and the paraboloid magnetosphere model (PMM); we further refer to it as the combined hybrid model. While both of these individual models have been successfully applied in the past, their combination enables us both to overcome the traditional difficulties of hybrid models to develop a self-consistent magnetic field and to compensate the lack of plasma simulation in the PMM. We then use this combined model to simulate Mercury's magnetosphere and investigate the geometry and configuration of Mercury's magnetosheath controlled by various conditions in the interplanetary medium. The developed approach provides a unique comprehensive view of Mercury's magnetospheric environment for the first time. Using this setup, we compare the locations of the bow shock and the magnetopause as determined by simulations with the locations predicted by stand-alone PMM runs and also verify the magnetic and dynamic pressure balance at the magnetopause. We also compare the results produced by these simulations with observational data obtained by the magnetometer on board the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft along a dusk-dawn orbit and discuss the signatures of the magnetospheric features that appear in these simulations. Overall, our analysis suggests that combining the semiempirical PMM with a self-consistent global kinetic model creates new modeling possibilities which individual models cannot provide on their own.

  6. A regional climate model for northern Europe: model description and results from the downscaling of two GCM control simulations

    Science.gov (United States)

    Rummukainen, M.; Räisänen, J.; Bringfelt, B.; Ullerstig, A.; Omstedt, A.; Willén, U.; Hansson, U.; Jones, C.

    This work presents a regional climate model, the Rossby Centre regional Atmospheric model (RCA1), recently developed from the High Resolution Limited Area Model (HIRLAM). The changes in the HIRLAM parametrizations, necessary for climate-length integrations, are described. A regional Baltic Sea ocean model and a modeling system for the Nordic inland lake systems have been coupled with RCA1. The coupled system has been used to downscale 10-year time slices from two different general circulation model (GCM) simulations to provide high-resolution regional interpretation of large-scale modeling. A selection of the results from the control runs, i.e. the present-day climate simulations, are presented: large-scale free atmospheric fields, the surface temperature and precipitation results and results for the on-line simulated regional ocean and lake surface climates. The regional model modifies the surface climate description compared to the GCM simulations, but it is also substantially affected by the biases in the GCM simulations. The regional model also improves the representation of the regional ocean and the inland lakes, compared to the GCM results.

  7. A regional climate model for northern Europe: model description and results from the downscaling of two GCM control simulations

    Energy Technology Data Exchange (ETDEWEB)

    Rummukainen, M.; Raeisaenen, J.; Bringfelt, B.; Ullerstig, A.; Omstedt, A.; Willen, U.; Hansson, U.; Jones, C. [Rossby Centre, Swedish Meteorological and Hydrological Inst., Norrkoeping (Sweden)

    2001-03-01

    This work presents a regional climate model, the Rossby Centre regional Atmospheric model (RCA1), recently developed from the High Resolution Limited Area Model (HIRLAM). The changes in the HIRLAM parametrizations, necessary for climate-length integrations, are described. A regional Baltic Sea ocean model and a modeling system for the Nordic inland lake systems have been coupled with RCA1. The coupled system has been used to downscale 10-year time slices from two different general circulation model (GCM) simulations to provide high-resolution regional interpretation of large-scale modeling. A selection of the results from the control runs, i.e. the present-day climate simulations, are presented: large-scale free atmospheric fields, the surface temperature and precipitation results and results for the on-line simulated regional ocean and lake surface climates. The regional model modifies the surface climate description compared to the GCM simulations, but it is also substantially affected by the biases in the GCM simulations. The regional model also improves the representation of the regional ocean and the inland lakes, compared to the GCM results. (orig.)

  8. An Agent-Based Monetary Production Simulation Model

    DEFF Research Database (Denmark)

    Bruun, Charlotte

    2006-01-01

    An agent-based simulation model programmed in Objective Borland Pascal. The program and source code are downloadable.

  9. Software to Enable Modeling & Simulation as a Service

    Data.gov (United States)

    National Aeronautics and Space Administration — Develop a Modeling and Simulation as a Service (M&SaaS) software service infrastructure to enable most modeling and simulation (M&S) activities to be...

  10. On the energetics of quadrupedal running: predicting the metabolic cost of transport via a flexible-torso model.

    Science.gov (United States)

    Cao, Qu; Poulakakis, Ioannis

    2015-09-03

    In this paper, the effect of torso flexibility on the energetics of quadrupedal bounding is examined in a template setting. Two reductive sagittal-plane models, one with a rigid, non-deformable torso and one with a flexible, unactuated torso are proposed. Both models feature non-trivial leg mass and inertia to capture the energy associated with repositioning the legs after liftoff as well as the energy lost due to impacts. Bounding motions that minimize the cost of transport are generated for both models via a simple controller that coordinates leg recirculation. Comparisons reveal that torso compliance promotes locomotion efficiency by facilitating leg recirculation in anticipation of touchdown at speeds that are sufficiently high. Furthermore, by considering non-ideal torque generating and compliant elements with biologically reasonable efficiency values, it is shown that the flexible-torso model can predict the metabolic cost of transport for different animals, estimated using measurements of oxygen consumption. This way, the proposed model offers a means for approximating the energetic cost of transport of running quadrupeds in a simple and direct fashion.

  11. Fast Atmosphere-Ocean Model Runs with Large Changes in CO2

    Science.gov (United States)

    Russell, Gary L.; Lacis, Andrew A.; Rind, David H.; Colose, Christopher; Opstbaum, Roger F.

    2013-01-01

    How does climate sensitivity vary with the magnitude of climate forcing? This question was investigated with a modified coupled atmosphere-ocean model, whose stability was improved so that the model could accommodate large radiative forcings yet be fast enough to reach equilibrium rapidly. Experiments were performed in which atmospheric CO2 was multiplied by powers of 2, from 1/64 to 256 times the 1950 value. From 8 to 32 times the 1950 CO2, the climate sensitivity for doubling CO2 reaches 8°C due to increases in water vapor absorption and cloud-top height and to reductions in low-level cloud cover. As the CO2 amount increases further, sensitivity drops as cloud cover and planetary albedo stabilize. No water-vapor-induced runaway greenhouse caused by increased CO2 was found for the range of CO2 examined. With CO2 at or below 1/8 of the 1950 value, runaway sea ice does occur as the planet cascades to a snowball Earth climate with fully ice-covered oceans and global mean surface temperatures near −30°C.
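
    A back-of-envelope way to see why the CO2 multipliers are chosen as powers of 2 is the simplified forcing expression F ≈ 5.35 ln(C/C0) W m⁻² (Myhre et al. 1998), under which each doubling adds a roughly constant ~3.7 W m⁻². The snippet below merely tabulates this for the 1/64x to 256x range used in the experiments; it is an illustration, not the GCM's radiation code.

    ```python
    import numpy as np

    # Simplified CO2 radiative forcing, F = 5.35 * ln(C/C0) W/m^2 (Myhre et al. 1998).
    # Only a rough check that each doubling adds ~3.7 W/m^2 of forcing, which is
    # why equal multiplicative steps in CO2 give roughly equal forcing increments.
    for power in range(-6, 9):  # 1/64x to 256x the baseline, as in the experiments
        multiplier = 2.0 ** power
        forcing = 5.35 * np.log(multiplier)
        print(f"{multiplier:9.4f} x 1950 CO2 -> forcing {forcing:7.2f} W/m^2")
    ```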

  12. Configuring a Graphical User Interface for Managing Local HYSPLIT Model Runs Through AWIPS

    Science.gov (United States)

    Wheeler, mark M.; Blottman, Peter F.; Sharp, David W.; Hoeth, Brian; VanSpeybroeck, Kurt M.

    2009-01-01

    Responding to incidents involving the release of harmful airborne pollutants is a continual challenge for Weather Forecast Offices in the National Weather Service. When such incidents occur, current protocol recommends forecaster-initiated requests for NOAA's Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model output through the National Centers for Environmental Prediction to obtain critical dispersion guidance. Individual requests are submitted manually through a secured web site, with multiple requests submitted in sequence when desired, for the purpose of obtaining useful trajectory and concentration forecasts associated with the significant release of harmful chemical gases, radiation, wildfire smoke, etc., into the local atmosphere. To help manage local HYSPLIT runs for both routine and emergency use, a graphical user interface was designed for operational efficiency. The interface allows forecasters to quickly determine the current HYSPLIT configuration for the list of predefined sites (e.g., fixed sites and floating sites), and to make any necessary adjustments to key parameters such as Input Model, Number of Forecast Hours, etc. When using the interface, forecasters will obtain desired output more confidently and without the danger of corrupting essential configuration files.

  13. Modelling and simulation of railway cable systems

    Energy Technology Data Exchange (ETDEWEB)

    Teichelmann, G.; Schaub, M.; Simeon, B. [Technische Univ. Muenchen, Garching (Germany). Zentrum Mathematik M2

    2005-12-15

    Mathematical models and numerical methods for the computation of both static equilibria and dynamic oscillations of railroad catenaries are derived and analyzed. These cable systems form a complex network of string and beam elements and lead to coupled partial differential equations in space and time where constraints and corresponding Lagrange multipliers express the interaction between carrier, contact wire, and pantograph head. For computing static equilibria, three different algorithms are presented and compared, while the dynamic case is treated by a finite element method in space, combined with stabilized time integration of the resulting differential algebraic system. Simulation examples based on reference data from industry illustrate the potential of such computational tools.

  14. Petroleum reservoir data for testing simulation models

    Energy Technology Data Exchange (ETDEWEB)

    Lloyd, J.M.; Harrison, W.

    1980-09-01

    This report consists of reservoir pressure and production data for 25 petroleum reservoirs. Included are 5 data sets for single-phase (liquid) reservoirs, 1 data set for a single-phase (liquid) reservoir with pressure maintenance, 13 data sets for two-phase (liquid/gas) reservoirs and 6 for two-phase reservoirs with pressure maintenance. Also given are ancillary data for each reservoir that could be of value in the development and validation of simulation models. A bibliography is included that lists the publications from which the data were obtained.

  15. Simulation model for port shunting yards

    Science.gov (United States)

    Rusca, A.; Popa, M.; Rosca, E.; Rosca, M.; Dragu, V.; Rusca, F.

    2016-08-01

    Sea ports are important nodes in the supply chain, joining two high-capacity transport modes: rail and maritime transport. The huge cargo flows transiting a port require high-capacity constructions and installations such as berths, large-capacity cranes, and shunting yards. However, the specificity of port shunting yards raises several problems, such as limited access (these are terminus stations of the rail network), the input/output of large transit cargo flows relative to the scarcity of ship departures and arrivals, and the limited land available for implementing solutions to serve these flows. It is necessary to identify technological solutions that answer these problems. The paper proposes a simulation model, developed with the ARENA computer simulation software, suitable for shunting yards which serve sea ports with access to the rail network. It investigates the principal aspects of shunting yards and adequate measures to increase their transit capacity. The operation capacity of the shunting-yard subsystem is assessed taking into consideration the required operating standards, and measures of performance of the railway station (e.g. waiting time for freight wagons, number of railway lines in the station, storage area, etc.) are computed. The conclusions and results drawn from the simulation help transport and logistics specialists to test proposals for improving port management.

  16. Modeling VOC transport in simulated waste drums

    International Nuclear Information System (INIS)

    Liekhus, K.J.; Gresham, G.L.; Peterson, E.S.; Rae, C.; Hotz, N.J.; Connolly, M.J.

    1993-06-01

    A volatile organic compound (VOC) transport model has been developed to describe unsteady-state VOC permeation and diffusion within a waste drum. The model equations account for three primary mechanisms of VOC transport from a void volume within the drum: VOC permeation across a polymer boundary, VOC diffusion across an opening in a volume boundary, and VOC solubilization in a polymer boundary. A series of lab-scale experiments was performed in which the VOC concentration was measured in simulated waste drums under different conditions. A lab-scale simulated waste drum consisted of a sized-down 55-gal metal drum containing a modified rigid polyethylene drum liner. Four polyethylene bags were sealed inside a large polyethylene bag, supported by a wire cage, and placed inside the drum liner. The small bags were filled with a VOC-air gas mixture and the VOC concentration was measured throughout the drum over a period of time. Test variables included the type of VOC-air gas mixture introduced into the small bags, the small-bag closure type, and the presence or absence of a variable external heat source. Model results were calculated for those trials where the VOC permeability had been measured. Permeabilities for five VOCs [methylene chloride, 1,1,2-trichloro-1,2,2-trifluoroethane (Freon-113), 1,1,1-trichloroethane, carbon tetrachloride, and trichloroethylene] were measured across a polyethylene bag. Comparison of model and experimental results for VOC concentration as a function of time indicates that the model accurately accounts for the significant VOC transport mechanisms in a lab-scale waste drum.
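
    For a single polymer boundary, the permeation mechanism reduces to a lumped mass balance V dC/dt = −(P·A/l)·C, giving exponential decay of the bag concentration. The sketch below illustrates only this limiting case; all parameter values are hypothetical, not the measured permeabilities from the report.

    ```python
    import numpy as np

    def voc_decay(C0, P, A, l, V, t):
        """Lumped VOC mass balance for a single polymer boundary.

        V dC/dt = -(P * A / l) * C  =>  C(t) = C0 * exp(-k t),  k = P*A/(l*V).
        Assumes a negligible VOC concentration outside the boundary.
        """
        k = P * A / (l * V)
        return C0 * np.exp(-k * t)

    t = np.linspace(0.0, 48.0, 7)                      # hours
    C = voc_decay(C0=1000.0,                           # ppm in the inner bag
                  P=2e-7, A=0.5, l=1e-4, V=0.01, t=t)  # hypothetical values
    print(np.round(C, 1))
    ```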

  17. 3D Core Model for simulation of nuclear power plants: Simulation requirements, model features, and validation

    International Nuclear Information System (INIS)

    Zerbino, H.

    1999-01-01

    In 1994-1996, Thomson Training and Simulation (TT and S) carried out the D50 Project, which involved the design and construction of optimized replica simulators for one Dutch and three German Nuclear Power Plants. It was recognized early on that the faithful reproduction of the Siemens reactor control and protection systems would impose extremely stringent demands on the simulation models, particularly the Core physics and the RCS thermohydraulics. The quality of the models, and their thorough validation, were thus essential. The present paper describes the main features of the fully 3D Core model implemented by TT and S, and its extensive validation campaign, which was defined in extremely positive collaboration with the Customer and the Core Data suppliers.

  18. Running Club

    CERN Multimedia

    Running Club

    2010-01-01

    The 2010 edition of the annual CERN Road Race will be held on Wednesday 29th September at 18h. The 5.5km race takes place over 3 laps of a 1.8 km circuit in the West Area of the Meyrin site, and is open to everyone working at CERN and their families. There are runners of all speeds, with times ranging from under 17 to over 34 minutes, and the race is run on a handicap basis, by staggering the starting times so that (in theory) all runners finish together. Children (< 15 years) have their own race over 1 lap of 1.8km. As usual, there will be a “best family” challenge (judged on best parent + best child). Trophies are awarded in the usual men’s, women’s and veterans’ categories, and there is a challenge for the best age/performance. Every adult will receive a souvenir prize, financed by a registration fee of 10 CHF. Children enter free (each child will receive a medal). More information, and the online entry form, can be found at http://cern.ch/club...

  19. RUN COORDINATION

    CERN Multimedia

    Christophe Delaere

    2012-01-01

    On Wednesday 14 March, the machine group successfully injected beams into the LHC for the first time this year. Within 48 hours they managed to ramp the beams to 4 TeV and proceeded to squeeze to β*=0.6 m, settings that have been used routinely since then. This brought to an end the CMS Cosmic Run at ~Four Tesla (CRAFT), during which we collected 800k cosmic ray events with a track crossing the central Tracker. That sample has since been topped up to two million, allowing further refinements of the Tracker alignment. The LHC started delivering the first collisions on 5 April with two bunches colliding in CMS, giving a pile-up of ~27 interactions per crossing at the beginning of the fill. Since then the machine has increased the number of colliding bunches to 1380, with peak instantaneous luminosities around 6.5×10³³ cm⁻²s⁻¹ at the beginning of fills. The average bunch charges reached ~1.5×10¹¹ protons per bunch, which results in an initial pile-up of ~30 interactions per crossing. During the ...

  20. RUN COORDINATION

    CERN Multimedia

    C. Delaere

    2012-01-01

    With the analysis of the first 5 fb⁻¹ culminating in the announcement of the observation of a new particle with mass of around 126 GeV/c², the CERN directorate decided to extend the LHC run until February 2013. This adds three months to the original schedule. Since then the LHC has continued to perform extremely well, and the total luminosity delivered so far this year is 22 fb⁻¹. CMS also continues to perform excellently, recording data with efficiency higher than 95% for fills with the magnetic field at nominal value. The highest instantaneous luminosity achieved by LHC to date is 7.6×10³³ cm⁻²s⁻¹, which translates into 35 interactions per crossing. On the CMS side there has been a lot of work to handle these extreme conditions, such as a new DAQ computer farm and trigger menus to handle the pile-up, automation of recovery procedures to minimise the lost luminosity, better training for the shift crews, etc. We did suffer from a couple of infrastructure ...

  1. Modelling and Simulation of Asynchronous Real-Time Systems using Timed Rebeca

    Directory of Open Access Journals (Sweden)

    Luca Aceto

    2011-07-01

    Full Text Available In this paper we propose an extension of the Rebeca language that can be used to model distributed and asynchronous systems with timing constraints. We provide the formal semantics of the language using Structural Operational Semantics, and show its expressiveness by means of examples. We developed a tool for automated translation from timed Rebeca to the Erlang language, which provides a first implementation of timed Rebeca. We can use the tool to set the parameters of timed Rebeca models, which represent the environment and component variables, and use McErlang to run multiple simulations for different settings. Timed Rebeca restricts the modeller to a pure asynchronous actor-based paradigm, where the structure of the model represents the service-oriented architecture, while the computational model matches the network infrastructure. Simulation is shown to be an effective analysis support, especially where model checking faces almost immediate state explosion in an asynchronous setting.

  2. Regional on-road vehicle running emissions modeling and evaluation for conventional and alternative vehicle technologies.

    Science.gov (United States)

    Frey, H Christopher; Zhai, Haibo; Rouphail, Nagui M

    2009-11-01

    This study presents a methodology for estimating high-resolution, regional on-road vehicle emissions and the associated reductions in air pollutant emissions from vehicles that utilize alternative fuels or propulsion technologies. The fuels considered are gasoline, diesel, ethanol, biodiesel, compressed natural gas, hydrogen, and electricity. The technologies considered are internal combustion or compression engines, hybrids, fuel cell, and electric. Road link-based emission models are developed using modal fuel use and emission rates applied to facility- and speed-specific driving cycles. For an urban case study, passenger cars were found to be the largest sources of HC, CO, and CO(2) emissions, whereas trucks contributed the largest share of NO(x) emissions. When alternative fuel and propulsion technologies were introduced in the fleet at a modest market penetration level of 27%, their emission reductions were found to be 3-14%. Emissions for all pollutants generally decreased with an increase in the market share of alternative vehicle technologies. Turnover of the light duty fleet to newer Tier 2 vehicles reduced emissions of HC, CO, and NO(x) substantially. However, modest improvements in fuel economy may be offset by VMT growth and reductions in overall average speed.
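
    The link-based structure of such a model amounts to summing vehicle-miles travelled times fleet-weighted emission rates over all road links. The sketch below shows only that aggregation step, with hypothetical NOx rates, fleet shares and link volumes; the paper's actual rates are modal and facility- and speed-specific.

    ```python
    # Minimal link-based aggregation of running emissions: VMT times a
    # fleet-weighted per-mile rate, summed over road links. All values below
    # are hypothetical stand-ins, not the study's emission factors.
    links = [
        # (link_id, length_miles, daily_vehicles)
        ("arterial_1", 1.2, 18000),
        ("freeway_1", 3.5, 90000),
    ]
    nox_g_per_mile = {"gasoline": 0.35, "diesel": 1.10, "hybrid": 0.20}
    fleet_share = {"gasoline": 0.70, "diesel": 0.15, "hybrid": 0.15}

    fleet_rate = sum(fleet_share[v] * nox_g_per_mile[v] for v in fleet_share)
    total_nox_kg = 0.0
    for link_id, miles, vehicles in links:
        vmt = miles * vehicles                      # daily vehicle-miles travelled
        total_nox_kg += vmt * fleet_rate / 1000.0   # grams -> kilograms
    print(f"Daily running NOx: {total_nox_kg:.1f} kg")
    ```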

  3. User's guide for simulation of waste treatment (SWAT) model

    Energy Technology Data Exchange (ETDEWEB)

    Macal, C.M.

    1979-04-01

    This document is a user's guide for the Simulation of Waste Treatment (SWAT) model computer code. (A detailed description of the logic and assumptions of the model was published previously.) A flow diagram depicting the logic of the SWAT computer code is included. Several river basins or regions can be simulated in a single computer run, with each region having numerous treatment plants. Treatment plants are simulated sequentially to reduce computer storage requirements. All input to the model is in the form of cards and all output is to a line printer. The code is written in FORTRAN IV and consists of approximately 3000 statements. Using the IBM 370/195 under OS, a G1 compiler requires a region of 220K. Execution time is under two minutes for a typical run for a river basin with 23 treatment plants, with each plant having an average of one technology modification over a simulation period of 25 years. The first section of this report gives a brief description of the subroutines in the model along with an explanation of how the subroutines function in the context of the whole program. The third section indicates formatting for input data; sample input data for a test problem are also presented. Section 4 describes the output resulting from the sample input data. A program listing appears in the appendix.

  4. Molecular models and simulations of layered materials

    International Nuclear Information System (INIS)

    Kalinichev, Andrey G.; Cygan, Randall Timothy; Heinz, Hendrik; Greathouse, Jeffery A.

    2008-01-01

    The micro- to nano-sized nature of layered materials, particularly characteristic of naturally occurring clay minerals, limits our ability to fully interrogate their atomic dispositions and crystal structures. The low symmetry, multicomponent compositions, defects, and disorder phenomena of clays and related phases necessitate the use of molecular models and modern simulation methods. Computational chemistry tools based on classical force fields and quantum-chemical methods of electronic structure calculations provide a practical approach to evaluate structure and dynamics of the materials on an atomic scale. Combined with classical energy minimization, molecular dynamics, and Monte Carlo techniques, quantum methods provide accurate models of layered materials such as clay minerals, layered double hydroxides, and clay-polymer nanocomposites

  5. VISION: Verifiable Fuel Cycle Simulation Model

    Energy Technology Data Exchange (ETDEWEB)

    Jacob J. Jacobson; Abdellatif M. Yacout; Gretchen E. Matthern; Steven J. Piet; David E. Shropshire

    2009-04-01

    The nuclear fuel cycle is a very complex system that includes considerable dynamic complexity as well as detail complexity. In the nuclear power realm, there are experts and considerable research and development in nuclear fuel development, separations technology, reactor physics and waste management. What is lacking is an overall understanding of the entire nuclear fuel cycle and how the deployment of new fuel cycle technologies affects the overall performance of the fuel cycle. The Advanced Fuel Cycle Initiative’s systems analysis group is developing a dynamic simulation model, VISION, to capture the relationships, timing and delays in and among the fuel cycle components to help develop an understanding of how the overall fuel cycle works and can transition as technologies are changed. This paper is an overview of the philosophy and development strategy behind VISION. The paper includes some descriptions of the model and some examples of how to use VISION.

  6. Modeling and visual simulation of Microalgae photobioreactor

    Science.gov (United States)

    Zhao, Ming; Hou, Dapeng; Hu, Dawei

    Microalgae are nutritious, autotrophic organisms with high photosynthetic efficiency, widely distributed on land and in the sea. They are extensively used in medicine, food, aerospace, biotechnology, environmental protection and other fields. Photobioreactors are the main equipment used to cultivate microalgae massively and at high density. In this paper, based on a mathematical model of microalgae grown under different light intensities, a three-dimensional visualization model was built and implemented in 3ds Max, Virtools and other three-dimensional software. Microalgae are photosynthetic organisms that efficiently produce oxygen and absorb carbon dioxide, and the goal of the visual simulation is to display intuitively the changes in microalgal biomass and their impact on oxygen and carbon dioxide. Different temperatures and light intensities were selected to control the photobioreactor, and the dynamic changes of microalgal biomass, oxygen and carbon dioxide were observed, with the aim of providing visualization support for microalgae and photobioreactor research.

  7. An improved numerical scheme with the fully-implicit two-fluid model for a fast-running system code

    International Nuclear Information System (INIS)

    Jeong, J.J.; No, H.C.

    1987-01-01

    A new computational method is implemented in the FISA-2 (Fully-Implicit Safety Analysis-2) code to simulate the thermal-hydraulic response to hypothetical accidents in nuclear power plants. The basic field equations of FISA-2 consist of the mixture continuity equation, the void propagation equation, two phasic momentum equations, and two phasic energy equations. The fully implicit scheme is used to eliminate the time step limitation, and the computation time per time step is minimized as far as possible by reducing the size of the matrix to be solved. The phasic energy equations, written in nonconservation form, are solved after being set up to be decoupled from the other field equations. The void propagation equation is solved to obtain the void fraction. Spatial acceleration terms in the phasic momentum equations are manipulated with the phasic continuity equations so that the pseudo-phasic mass flux may be expressed in terms of pressure only. Substituting the pseudo-phasic mass flux into the mixture continuity equation yields linear equations with pressures as the only unknowns. Solving these linear equations gives the pressures at all nodes, and the other variables are in turn obtained by back-substitution. This procedure is repeated until the convergence criterion is satisfied. Reasonable accuracy, no stability limitation, and fast running are confirmed by comparing results from FISA-2 with experimental data and with results from other codes.

  8. A rainfall simulation model for agricultural development in Bangladesh

    Directory of Open Access Journals (Sweden)

    M. Sayedur Rahman

    2000-01-01

    Full Text Available A rainfall simulation model based on a first-order Markov chain has been developed to simulate the annual variation in rainfall amount observed in Bangladesh. The model has been tested in the Barind Tract of Bangladesh. Few significant differences were found between actual and simulated seasonal, annual and average monthly rainfall. The distribution of the number of successes is asymptotically normal. When actual and simulated daily rainfall data were used to drive a crop simulation model, there was no significant difference in the rice yield response. The results suggest that the rainfall simulation model performs adequately for many applications.
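
    A minimal sketch of the first-order Markov-chain construction described: a two-state (wet/dry) occurrence chain with wet-day amounts drawn from a gamma distribution. The transition probabilities and gamma parameters are illustrative, not the calibrated Barind Tract values.

```python
import random

# First-order two-state Markov chain for daily rainfall occurrence, with
# wet-day amounts drawn from a gamma distribution. All parameters are
# illustrative placeholders.
P_WW, P_DW = 0.65, 0.25        # P(wet | wet yesterday), P(wet | dry yesterday)
SHAPE, SCALE = 0.9, 12.0       # gamma parameters for wet-day amounts (mm)

def simulate_year(days=365, seed=1):
    rng = random.Random(seed)
    wet, series = False, []
    for _ in range(days):
        p_wet = P_WW if wet else P_DW       # transition depends on yesterday
        wet = rng.random() < p_wet
        series.append(rng.gammavariate(SHAPE, SCALE) if wet else 0.0)
    return series

rain = simulate_year()
print(f"annual total {sum(rain):.0f} mm over {sum(r > 0 for r in rain)} wet days")
```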

  9. Modeling lift operations with SAS® Simulation Studio

    Science.gov (United States)

    Kar, Leow Soo

    2016-10-01

    Lifts or elevators are an essential part of multistorey buildings, providing vertical transportation for their occupants. In large high-rise apartment buildings the occupants are permanent, while in buildings such as hospitals or office blocks the occupants are temporary users who come in to work or to visit; the population of such buildings is therefore much higher than that of residential apartments. It is common these days for large office blocks or hospitals to have at least 8 to 10 lifts serving their population. In order to optimize the level of service performance, different transportation schemes are devised to control the lift operations. For example, one lift may be assigned to serve only the even floors and another only the odd floors, etc. In this paper, a basic lift system is modelled using SAS Simulation Studio to study the effect of factors such as the number of floors, the capacity of the lift car, the arrival and exit rates of passengers at each floor, and peak and off-peak periods on system performance. The simulation is applied to a real lift operation in Sunway College's North Building to validate the model.
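
    The event-queue mechanics of such a model can be sketched in plain Python (the paper itself uses SAS Simulation Studio). A single lift shuttling between the lobby and requested floors is enough to show how arrival rate, travel time and boarding time combine into waiting times; all numbers are invented for the demo, not Sunway College measurements.

```python
import heapq, random

# Minimal single-lift, lobby-to-floor discrete-event sketch. The lift
# returns to the lobby for each pickup; a deliberately simplified policy.
random.seed(7)
FLOORS, TRAVEL, BOARD, RATE = 10, 2.0, 5.0, 1 / 30.0  # s/floor, s, arrivals/s

arrivals, t = [], 0.0
for _ in range(200):                     # Poisson arrivals at the lobby
    t += random.expovariate(RATE)
    heapq.heappush(arrivals, (t, random.randint(2, FLOORS)))

lift_free_at, floor, waits = 0.0, 1, []
while arrivals:
    arrive, dest = heapq.heappop(arrivals)
    start = max(arrive, lift_free_at)    # passenger waits for the lift
    waits.append(start - arrive)
    service = (floor - 1) * TRAVEL + BOARD + (dest - 1) * TRAVEL
    lift_free_at, floor = start + service, dest

print(f"mean wait {sum(waits) / len(waits):.1f} s over {len(waits)} riders")
```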

  10. Plasma simulation studies using multilevel physics models

    International Nuclear Information System (INIS)

    Park, W.; Belova, E.V.; Fu, G.Y.

    2000-01-01

    The question of how to proceed toward ever more realistic plasma simulation studies using ever-increasing computing power is addressed. The answer presented here is the M3D (Multilevel 3D) project, which has developed a code package with a hierarchy of physics levels that resolve increasingly complete subsets of phase space and are thus increasingly more realistic. The rationale for the multilevel physics models is given. Each physics level is described and examples of its application are given. The existing physics levels are fluid models (3D configuration space), namely magnetohydrodynamic (MHD) and two-fluid; and hybrid models, namely gyrokinetic-energetic-particle/MHD (5D energetic particle phase space), gyrokinetic-particle-ion/fluid-electron (5D ion phase space), and full-kinetic-particle-ion/fluid-electron (6D ion phase space). Resolving electron phase space (5D or 6D) remains a future project. Phase-space-fluid models are not used, in favor of delta-f particle models. A practical and accurate nonlinear fluid closure for noncollisional plasmas seems unlikely in the near future.

  11. Plasma simulation studies using multilevel physics models

    Energy Technology Data Exchange (ETDEWEB)

    Park, W.; Belova, E.V.; Fu, G.Y. [and others]

    2000-01-19

    The question of how to proceed toward ever more realistic plasma simulation studies using ever-increasing computing power is addressed. The answer presented here is the M3D (Multilevel 3D) project, which has developed a code package with a hierarchy of physics levels that resolve increasingly complete subsets of phase space and are thus increasingly more realistic. The rationale for the multilevel physics models is given. Each physics level is described and examples of its application are given. The existing physics levels are fluid models (3D configuration space), namely magnetohydrodynamic (MHD) and two-fluid; and hybrid models, namely gyrokinetic-energetic-particle/MHD (5D energetic particle phase space), gyrokinetic-particle-ion/fluid-electron (5D ion phase space), and full-kinetic-particle-ion/fluid-electron (6D ion phase space). Resolving electron phase space (5D or 6D) remains a future project. Phase-space-fluid models are not used, in favor of delta-f particle models. A practical and accurate nonlinear fluid closure for noncollisional plasmas seems unlikely in the near future.

  12. Simple Urban Simulation Atop Complicated Models: Multi-Scale Equation-Free Computing of Sprawl Using Geographic Automata

    Directory of Open Access Journals (Sweden)

    Yu Zou

    2013-07-01

    Full Text Available Reconciling the competing desires to build urban models that are both simple and complicated is something of a grand challenge for urban simulation. It also prompts difficulties in many urban policy situations, such as urban sprawl, where simple, actionable ideas may need to be considered in the context of the messy, complex urban processes and phenomena at work within cities. In this paper, we present a novel architecture for achieving both simple and complicated realizations of urban sprawl in simulation. Fine-scale simulations of sprawl geography are run using geographic automata to represent the geographical drivers of sprawl in intricate detail and over fine resolutions of space and time. We use Equation-Free computing to deploy population as a coarse observable of sprawl, which can be leveraged to run the automata-based models as short-burst experiments within a meta-simulation framework.
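
    The Equation-Free idea, coarse projective integration, can be sketched in a few lines: run the fine-scale model in a short burst, estimate the time derivative of the coarse observable (here, population), and leap forward without simulating the gap. The agent rule below is an invented toy stand-in, not the paper's geographic automata.

```python
import random

# Coarse projective integration: short fine-scale bursts, then a leap
# along the estimated coarse derivative. Toy settlement rule and all
# parameters are illustrative.
random.seed(3)

def micro_burst(pop, steps=10):
    # Fine-scale model: each agent settles a newcomer with probability
    # proportional to remaining room (logistic-like toy dynamics).
    history = [pop]
    for _ in range(steps):
        p = 0.01 * (1 - pop / 5000)
        births = sum(random.random() < p for _ in range(int(pop)))
        pop += births
        history.append(pop)
    return history

pop, t, leap = 500.0, 0, 40
while t < 400:
    burst = micro_burst(pop, steps=10)
    slope = (burst[-1] - burst[0]) / 10.0   # coarse d(pop)/dt estimate
    pop = burst[-1] + slope * leap          # projective leap over 'leap' steps
    t += 10 + leap
    print(f"t={t:4d}  population ~ {pop:7.1f}")
```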

  13. Modeling and numerical simulations of the influenced Sznajd model

    Science.gov (United States)

    Karan, Farshad Salimi Naneh; Srinivasan, Aravinda Ramakrishnan; Chakraborty, Subhadeep

    2017-08-01

    This paper investigates the effects of independent nonconformists, or influencers, on the behavioral dynamics of a population of agents interacting with each other according to the Sznajd model. The system is modeled on a complete graph using the master equation, and the resulting equation is solved numerically. The accuracy of the mathematical model and its underlying assumptions is validated by numerical simulations. Regions of initial magnetization are found from which the system converges to one of two unique steady-state PDFs, depending on the distribution of influencers. The scaling property and the entropy of the stationary system under varying levels of influence are presented and discussed.
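
    An agent-based counterpart of such a model can be sketched as the Sznajd rule on a complete graph, a randomly chosen concordant pair convinces a third agent, plus a fraction of pinned "influencers" who never change opinion. Population size, influencer fraction and sweep count are illustrative, not the paper's values.

```python
import random

# Sznajd rule on a complete graph with pinned influencers: an agreeing
# pair (i, j) converts a random third agent k, unless k is an influencer.
random.seed(11)
N, F_INF, INF_OPINION = 1000, 0.05, +1
N_INF = int(F_INF * N)

def run(m0, sweeps=400):
    n_up = int(N * (1 + m0) / 2)
    spins = [+1] * n_up + [-1] * (N - n_up)
    for i in range(N_INF):
        spins[i] = INF_OPINION              # influencers hold a fixed opinion
    for _ in range(sweeps * N):
        i, j, k = random.sample(range(N), 3)
        if spins[i] == spins[j] and k >= N_INF:
            spins[k] = spins[i]             # the agreeing pair convinces k
    return sum(spins) / N

for m0 in (-0.4, 0.0, 0.4):
    print(f"initial m={m0:+.1f} -> final m={run(m0):+.3f}")
```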

  14. Atmospheric Model Evaluation Tool for meteorological and air quality simulations

    Science.gov (United States)

    The Atmospheric Model Evaluation Tool compares model predictions to observed data from various meteorological and air quality observation networks to help evaluate meteorological and air quality simulations.

  15. Simulation of complex glazing products; from optical data measurements to model based predictive controls

    Energy Technology Data Exchange (ETDEWEB)

    Kohler, Christian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-04-01

    Complex glazing systems such as venetian blinds, fritted glass and woven shades require more detailed optical and thermal input data for their components than specular, non-light-redirecting glazing systems. Various methods for measuring these data sets are described in this paper. These data sets are used in multiple simulation tools to model the thermal and optical properties of complex glazing systems, and the output from these tools can be used to generate simplified rating values or as input to other simulation tools such as whole-building annual energy programs or lighting analysis tools. I also describe some of the challenges of creating a rating system for these products and the factors that affect such a rating. A potential future direction for simulation and building operations is model-based predictive control, where detailed computer models run in real time, receiving data from an actual building and providing control input to building elements such as shades.

  16. Do we need full mesoscale models to simulate the urban heat island? A study over the city of Barcelona.

    Science.gov (United States)

    García-Díez, Markel; Ballester, Joan; De Ridder, Koen; Hooyberghs, Hans; Lauwaet, Dirk; Rodó, Xavier

    2016-04-01

    As most of the population lives in urban environments, the simulation of the urban climate has become an important part of global climate change impact assessment. However, due to the high resolution required, these simulations demand a large amount of computational resources. Here we present a comparison between a simplified fast urban climate model (UrbClim) and a widely used full mesoscale model, the Weather Research and Forecasting (WRF) model, over the city of Barcelona. In order to assess the advantages and disadvantages of each approach, both simulations were compared with station data and with land surface temperature observations retrieved by satellites, focusing on the urban heat island. The effect of changing the UrbClim boundary conditions was also studied, by using low-resolution global reanalysis data (70 km) and a higher-resolution forecast model (15 km). Finally, a strict comparison of the computational resources consumed by both models was carried out. Results show that, generally, the performance of the simple model is comparable to or better than that of the mesoscale model. The exceptions are the winds and the day-to-day correlation in the reanalysis-driven run, but these problems disappear when the boundary conditions are taken from a higher-resolution global model. UrbClim was found to run 133 times faster than WRF at four times higher resolution, and is thus an efficient solution for running long climate change simulations over large city ensembles.

  17. A 1000-year simulation with the IPSL ocean-atmosphere coupled model

    Directory of Open Access Journals (Sweden)

    S. Conil

    2003-06-01

    Full Text Available A 1000-year climate simulation is run with the ocean-atmosphere coupled model developed at the Institut Pierre-Simon Laplace (IPSL), Paris. No flux adjustment is used. The drift of the model is analyzed in terms of the sea-surface temperature and the deep-ocean temperature. Once the model has reached its own equilibrium, it is found that Antarctic bottom-water production undergoes large-amplitude variations, oscillating between strong and weak episodes. This yields oceanic temperature variations in the Southern Hemisphere and in the global mean.

  18. Comparison of performance of simulation models for floor heating

    DEFF Research Database (Denmark)

    Weitzmann, Peter; Svendsen, Svend

    2005-01-01

    This paper describes a comparison of the performance of simulation models for floor heating with different levels of detail in the modelling process. The models are compared within an otherwise identical simulation model containing the room model, walls, windows, ceiling and ventilation system. By exchanging

  19. MODEL OF AIRCRAFT ELECTRICAL POWER SUPPLY SYSTEM CHANNEL OF ALTERNATING CURRENT RUNNING ON A GENERALIZED UNBALANCED THREE-PHASE LOAD

    Directory of Open Access Journals (Sweden)

    Aleksej Gennad'evich Demchenko

    2017-01-01

    Full Text Available This article is devoted to mathematical modeling of the alternating-current channel of an aircraft on-board power supply system (PSS) running on a static active-inductive load connected in "wye with neutral" and "delta" configurations. Mathematical models of the aircraft synchronous generator, the electricity distribution system, and the three-phase static active-inductive load are considered. In the mathematical description, the author used the equations for the winding voltages and flux linkages of the stator and rotor circuits of the generator in a stationary phase-coordinate system "ABC". The distribution-system model uses equations that account for the voltage drops across the active and inductive resistances of the distribution-system power wires, and the models of the three-phase static loads connected in "wye with neutral" and "delta" use equations that account for the voltage drops across the active and inductive resistances of the loads. A system of matrix equations for the AC channel of the PSS running on a generalized three-phase static active-inductive load was obtained. To simplify its solution, the three-phase static load connected according to the "delta" scheme was converted to a "wye". The phase-coordinate system "ABC" was chosen for the mathematical description of the generator, distribution system and static load because of its advantage over the "dq" coordinate system: equations written in phase coordinates are valid for both symmetric and asymmetric modes of the generator, while equations written in the "dq" system are valid only for symmetric modes. As a result of the joint solution of the generator, distribution-system and three-phase static-load equations, formulae were obtained for the generator stator winding phases, the generator phase currents, and the voltage drops on the load.
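
    The delta-to-wye conversion the paper uses to simplify the matrix equations is the standard impedance transformation, sketched below with illustrative complex impedances for an active-inductive load.

```python
# Standard delta-to-wye impedance conversion: each wye arm is the product
# of the two delta impedances meeting at that terminal, divided by the sum
# of all three. Example impedances are illustrative (R + jX form).
def delta_to_wye(z_ab, z_bc, z_ca):
    s = z_ab + z_bc + z_ca
    z_a = z_ab * z_ca / s    # wye arm attached to terminal A
    z_b = z_ab * z_bc / s    # wye arm attached to terminal B
    z_c = z_bc * z_ca / s    # wye arm attached to terminal C
    return z_a, z_b, z_c

za, zb, zc = delta_to_wye(3 + 4j, 3 + 4j, 3 + 4j)
print(za, zb, zc)            # balanced case: each wye arm equals Z_delta / 3
```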

  20. Tecnomatix Plant Simulation modeling and programming by means of examples

    CERN Document Server

    Bangsow, Steffen

    2015-01-01

    This book systematically introduces the development of simulation models as well as the implementation and evaluation of simulation experiments with Tecnomatix Plant Simulation. It addresses all users of Plant Simulation who have more complex tasks to handle, while also offering an easy entry into the program. Particular attention is paid to introducing the simulation flow language SimTalk and its use in various areas of the simulation. The author demonstrates with over 200 examples how to combine the blocks into simulation models and how to use SimTalk for complex control and analysis.

  1. Cycle Engine Modelling Of Spark Ignition Engine Processes during Wide-Open Throttle (WOT) Engine Operation Running By Gasoline Fuel

    International Nuclear Information System (INIS)

    Rahim, M F Abdul; Rahman, M M; Bakar, R A

    2012-01-01

    A one-dimensional engine model is developed to simulate spark ignition engine processes in a 4-stroke, 4-cylinder gasoline engine. Physically, the baseline engine is an inline-cylinder engine with 3 valves per cylinder. Currently, the engine's mixture is formed by external mixture formation using a piston-type carburettor. The model of the engine is based on one-dimensional equations for the gas exchange process, isentropic compression and expansion, and a progressive combustion process, and accounts for heat transfer and frictional losses as well as the effect of valve overlap. The model is tested at engine speeds of 2000, 3000 and 4000 rpm and validated using experimental engine data. Results showed that the model is able to simulate the engine's combustion process and produce reasonable predictions. However, compared with the experimental data, major discrepancies are noticeable, especially in the 2000 and 4000 rpm predictions. At low and high engine speeds, the simulated cylinder pressures tend to under-predict the measured data, whereas the cylinder temperatures tend to over-predict the measured data at all engine speeds. The most accurate prediction is obtained at the medium engine speed of 3000 rpm. An appropriate wall heat transfer setup is vital for precise calculation of cylinder pressure and temperature: more heat loss to the wall lowers the cylinder temperature, while more heat converted to useful work means an increase in cylinder pressure. Thus, alongside the wall heat transfer setup, the Wiebe combustion parameters need to be carefully evaluated for better results.
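
    The Wiebe function mentioned at the end is the usual single-equation model of mass fraction burned versus crank angle; the sketch below uses common textbook efficiency and form factors (a = 5, m = 2), not values calibrated for this engine.

```python
import math

# Wiebe mass-fraction-burned curve. theta0 is the start of combustion and
# 'duration' its length, both in crank-angle degrees; a and m are the
# efficiency and form factors to be tuned against measured pressure data.
def wiebe(theta, theta0=-10.0, duration=60.0, a=5.0, m=2.0):
    if theta < theta0:
        return 0.0
    x = min((theta - theta0) / duration, 1.0)
    return 1.0 - math.exp(-a * x ** (m + 1))

for theta in range(-20, 61, 10):
    print(f"{theta:4d} deg ATDC: x_b = {wiebe(theta):.3f}")
```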

  2. A new climate modeling framework for convection-resolving simulation at continental scale

    Science.gov (United States)

    Charpilloz, Christophe; di Girolamo, Salvatore; Arteaga, Andrea; Fuhrer, Oliver; Hoefler, Torsten; Schulthess, Thomas; Schär, Christoph

    2017-04-01

    Major uncertainties remain in our understanding of the processes that govern the water cycle in a changing climate and of their representation in weather and climate models. Of particular concern are heavy precipitation events of convective origin (thunderstorms and rain showers). The aim of the crCLIM project [1] is to propose a new climate modeling framework that alleviates the I/O bottleneck in large-scale, convection-resolving climate simulations and thus to enable new analysis techniques for climate scientists. Due to the large computational costs, convection-resolving simulations are currently restricted to small computational domains or very short time scales, unless the largest available supercomputer systems, such as hybrid CPU-GPU architectures, are used [3]. Hence, the COSMO model has been adapted to run on these architectures for research and production purposes [2]. However, the amount of generated data also increases, and storing this data becomes infeasible, making the analysis of simulation results impractical. To circumvent this problem and enable high-resolution models in climate research, we propose a data-virtualization layer (DVL) that re-runs simulations on demand and transparently manages the data for the analysis; that is, we trade off computational effort (time) for storage (space). This approach also requires a bit-reproducible version of the COSMO model that produces identical results on different architectures (CPUs and GPUs) [4], which will be coupled with a performance model in order to enable optimal re-runs depending on the requirements of the re-run and the available resources. In this contribution, we discuss the strategy for developing the DVL, a first performance model, the challenge of bit-reproducibility, and the first results of the crCLIM project. [1] http://www.c2sm.ethz.ch/research/crCLIM.html [2] O. Fuhrer, C. Osuna, X. Lapillonne, T. Gysi, M. Bianco, and T. Schulthess. "Towards gpu-accelerated operational weather forecasting." In The GPU Technology
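
    A toy sketch of the data-virtualization idea: analysis requests go through a broker that returns stored output when it exists and otherwise re-runs the (deterministic, bit-reproducible) simulation on demand. The model function and the file-based cache are invented stand-ins for the crCLIM design.

```python
import os, pickle, hashlib

# Cache-or-recompute broker: the essence of a data-virtualization layer.
# Correctness rests on the simulation being bit-reproducible, so a re-run
# is indistinguishable from reading the stored result.
CACHE = "dvl_cache"
os.makedirs(CACHE, exist_ok=True)

def simulate(config):
    # Stand-in for a deterministic model run (a logistic-map trajectory).
    x = config["x0"]
    return [x := 3.6 * x * (1 - x) for _ in range(config["steps"])]

def fetch(config):
    key = hashlib.sha256(repr(sorted(config.items())).encode()).hexdigest()
    path = os.path.join(CACHE, key + ".pkl")
    if os.path.exists(path):                  # hit: read the stored output
        with open(path, "rb") as f:
            return pickle.load(f)
    result = simulate(config)                 # miss: re-run on demand
    with open(path, "wb") as f:
        pickle.dump(result, f)
    return result

print(fetch({"x0": 0.2, "steps": 5}))         # first call simulates
print(fetch({"x0": 0.2, "steps": 5}))         # second call reads the cache
```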

  3. Nonlinear distortion in wireless systems modeling and simulation with Matlab

    CERN Document Server

    Gharaibeh, Khaled M

    2011-01-01

    This book covers the principles of modeling and simulation of nonlinear distortion in wireless communication systems, with MATLAB simulations and techniques. The author describes the modeling and simulation of nonlinear distortion in single-channel and multichannel wireless communication systems using both deterministic and stochastic signals. Models and simulation methods for nonlinear amplifiers explain in detail how to analyze and evaluate the performance of data communication links under nonlinear amplification, and the book addresses the analysis of nonlinear systems.
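
    A representative memoryless nonlinearity in this area is the Saleh AM/AM and AM/PM amplifier model; the sketch below pushes a two-tone complex baseband signal through it, after which intermodulation products appear in the spectrum. The coefficients are the widely quoted TWT example values, assumed here purely for illustration.

```python
import numpy as np

# Saleh model: amplitude-dependent gain (AM/AM) and phase shift (AM/PM)
# applied to a complex baseband signal.
def saleh(x, aa=2.1587, ba=1.1517, ap=4.0033, bp=9.1040):
    r = np.abs(x)
    gain = aa * r / (1 + ba * r**2)            # AM/AM conversion
    phase = ap * r**2 / (1 + bp * r**2)        # AM/PM conversion (radians)
    return gain * np.exp(1j * (np.angle(x) + phase))

t = np.arange(1024)
x = 0.5 * np.exp(2j * np.pi * 0.010 * t) + 0.3 * np.exp(2j * np.pi * 0.023 * t)
y = saleh(x)                                   # two-tone test through the PA
spectrum = np.abs(np.fft.fft(y))
print(np.argsort(spectrum)[-6:])               # strongest bins, incl. IMD terms
```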

  4. Modeling human response errors in synthetic flight simulator domain

    Science.gov (United States)

    Ntuen, Celestine A.

    1992-01-01

    This paper presents a control-theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as the supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling to integrate the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. The models will be verified experimentally in a flight quality handling simulation.

  5. Multiple Time Series Ising Model for Financial Market Simulations

    International Nuclear Information System (INIS)

    Takaishi, Tetsuya

    2015-01-01

    In this paper we propose an Ising model which simulates multiple financial time series. Our model introduces an interaction that couples the spins of one system to those of the other systems. Simulations of our model show that the time series exhibit the volatility clustering often observed in real financial markets. Furthermore, we find non-zero cross-correlations between the volatilities produced by our model. Thus our model can simulate stock markets in which the volatilities of stocks are mutually correlated.
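
    A hedged sketch of the coupling mechanism: two mean-field Ising "markets" whose spins feel their own magnetization plus a cross-coupling to the other market's magnetization, updated with a heat-bath rule near the critical temperature so the magnetizations keep fluctuating. This only illustrates how cross-correlated series can arise; reproducing volatility clustering requires the paper's full interaction, and all couplings here are illustrative.

```python
import math, random

random.seed(5)
N, J, K, BETA = 500, 1.0, 0.2, 1.0    # sizes and couplings are illustrative

def sweep(spins, m_self, m_other):
    # Heat-bath update: each spin sees its own magnetization plus a
    # cross-coupling K to the other market's magnetization.
    for i in range(N):
        h = J * m_self + K * m_other
        p_up = 1.0 / (1.0 + math.exp(-2.0 * BETA * h))
        new = 1 if random.random() < p_up else -1
        m_self += (new - spins[i]) / N          # incremental magnetization
        spins[i] = new
    return m_self

a = [random.choice((-1, 1)) for _ in range(N)]
b = [random.choice((-1, 1)) for _ in range(N)]
ma, mb, rets = sum(a) / N, sum(b) / N, []
for _ in range(500):
    ma2, mb2 = sweep(a, ma, mb), sweep(b, mb, ma)
    rets.append((ma2 - ma, mb2 - mb))           # magnetization changes as "returns"
    ma, mb = ma2, mb2

num = sum(ra * rb for ra, rb in rets)
den = math.sqrt(sum(ra * ra for ra, _ in rets) * sum(rb * rb for _, rb in rets))
print(f"cross-correlation of returns: {num / den:+.3f}")
```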

  6. Modeling and Simulation Techniques for Large-Scale Communications Modeling

    National Research Council Canada - National Science Library

    Webb, Steve

    1997-01-01

    Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random-number strings in simulations is easy to implement and can provide significant savings when making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.
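
    The synchronization idea, driving the system variants being compared with common random-number streams so the variance of the estimated difference collapses, is easy to demonstrate. The M/M/1 service-rate comparison below is an invented example, not a CECOM model.

```python
import random, statistics

# Mean waiting time in an M/M/1 queue, driven by separate, seedable
# arrival and service streams so the streams can be synchronized.
def mm1_mean_wait(arrival_seed, service_seed, mu, n=2000, lam=0.9):
    arr, srv = random.Random(arrival_seed), random.Random(service_seed)
    t, depart, waits = 0.0, 0.0, []
    for _ in range(n):
        t += arr.expovariate(lam)              # next arrival
        start = max(t, depart)
        waits.append(start - t)
        depart = start + srv.expovariate(mu)   # service completion
    return statistics.mean(waits)

# Compare service rates 1.0 vs 1.1: same streams (synchronized) vs
# independent streams across the 30 replications.
crn = [mm1_mean_wait(s, 1000 + s, 1.0) - mm1_mean_wait(s, 1000 + s, 1.1)
       for s in range(30)]
ind = [mm1_mean_wait(s, 1000 + s, 1.0) - mm1_mean_wait(77 + s, 9000 + s, 1.1)
       for s in range(30)]
print("stdev with synchronized streams:", statistics.stdev(crn))
print("stdev with independent streams :", statistics.stdev(ind))
```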

  7. Geologic simulation model for a hypothetical site in the Columbia Plateau. [AEGIS

    Energy Technology Data Exchange (ETDEWEB)

    Petrie, G.M.; Zellmer, J.T.; Lindberg, J.W.; Foley, M.G.

    1981-04-01

    This report describes the structure and operation of the Assessment of Effectiveness of Geologic Isolation Systems (AEGIS) Geologic Simulation Model, a computer simulation model of the geology and hydrology of an area of the Columbia Plateau, Washington. The model is used to study the long-term suitability of the Columbia Plateau Basalts for the storage of nuclear waste in a mined repository. It is also a starting point for analyses of such repositories in other geologic settings. The Geologic Simulation Model will aid in formulating design disruptive sequences (i.e. those to be used for more detailed hydrologic, transport, and dose analyses) from the spectrum of hypothetical geological and hydrological developments that could result in transport of radionuclides out of a repository. Quantitative and auditable execution of this task, however, is impossible without computer simulation. The computer simulation model aids the geoscientist by generating the wide spectrum of possible future evolutionary paths of the areal geology and hydrology, identifying those that may affect the repository integrity. This allows the geoscientist to focus on potentially disruptive processes, or series of events. Eleven separate submodels are used in the simulation portion of the model: Climate, Continental Glaciation, Deformation, Geomorphic Events, Hydrology, Magmatic Events, Meteorite Impact, Sea-Level Fluctuations, Shaft-Seal Failure, Sub-Basalt Basement Faulting, and Undetected Features. Because of the modular construction of the model, each submodel can easily be replaced with an updated or modified version as new information or developments in the state of the art become available. The model simulates the geologic and hydrologic systems of a hypothetical repository site and region for a million years following repository decommissioning. The Geologic Simulation Model operates in both single-run and Monte Carlo modes.

  8. Geologic simulation model for a hypothetical site in the Columbia Plateau

    International Nuclear Information System (INIS)

    Petrie, G.M.; Zellmer, J.T.; Lindberg, J.W.; Foley, M.G.

    1981-04-01

    This report describes the structure and operation of the Assessment of Effectiveness of Geologic Isolation Systems (AEGIS) Geologic Simulation Model, a computer simulation model of the geology and hydrology of an area of the Columbia Plateau, Washington. The model is used to study the long-term suitability of the Columbia Plateau Basalts for the storage of nuclear waste in a mined repository. It is also a starting point for analyses of such repositories in other geologic settings. The Geologic Simulation Model will aid in formulating design disruptive sequences (i.e. those to be used for more detailed hydrologic, transport, and dose analyses) from the spectrum of hypothetical geological and hydrological developments that could result in transport of radionuclides out of a repository. Quantitative and auditable execution of this task, however, is impossible without computer simulation. The computer simulation model aids the geoscientist by generating the wide spectrum of possible future evolutionary paths of the areal geology and hydrology, identifying those that may affect the repository integrity. This allows the geoscientist to focus on potentially disruptive processes, or series of events. Eleven separate submodels are used in the simulation portion of the model: Climate, Continental Glaciation, Deformation, Geomorphic Events, Hydrology, Magmatic Events, Meteorite Impact, Sea-Level Fluctuations, Shaft-Seal Failure, Sub-Basalt Basement Faulting, and Undetected Features. Because of the modular construction of the model, each submodel can easily be replaced with an updated or modified version as new information or developments in the state of the art become available. The model simulates the geologic and hydrologic systems of a hypothetical repository site and region for a million years following repository decommissioning. The Geologic Simulation Model operates in both single-run and Monte Carlo modes

  9. Study of Monte Carlo Simulation Method for Methane Phase Diagram Prediction using Two Different Potential Models

    KAUST Repository

    Kadoura, Ahmad

    2011-06-06

    Lennard-Jones (L-J) and Buckingham exponential-6 (exp-6) potential models were used to produce isotherms for methane at temperatures below and above the critical temperature. A molecular simulation approach, specifically Monte Carlo simulation, was employed to create these isotherms in both the canonical and Gibbs ensembles. Canonical-ensemble experiments with each model were conducted to estimate pressures over a range of temperatures above the methane critical temperature. Results were collected and compared to experimental data in the literature; both models showed close agreement with the experimental data. In parallel, experiments below the critical temperature were run in the Gibbs ensemble using the L-J model only. Upon comparing results with experimental ones, a good fit was obtained with small deviations. The work was further developed by adding statistical studies in order to achieve a better understanding and interpretation of the quantities estimated by the simulation. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be; further applications to more complicated systems are therefore considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determining the solubility conditions of elemental sulfur helps avoid the problems caused by its dissolution in gas production and transportation processes. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate elemental sulfur phase behavior in sour natural gas mixtures.
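
    The two pair potentials compared in the study take the forms below; the epsilon and sigma values are methane-like numbers quoted for illustration, and the exp-6 stiffness alpha is a typical choice, none of them necessarily the parameters used in the work.

```python
import math

# Lennard-Jones and Buckingham exp-6 pair potentials, both written so the
# well depth is EPS (in kelvin, i.e., epsilon/kB) with methane-like scales.
EPS, SIGMA, ALPHA = 148.0, 3.73, 15.0       # K, angstrom, dimensionless
R_MIN = SIGMA * 2 ** (1 / 6)                # exp-6 written around its minimum

def lennard_jones(r):
    x6 = (SIGMA / r) ** 6
    return 4 * EPS * (x6 * x6 - x6)

def exp6(r):
    pre = EPS / (1 - 6 / ALPHA)
    return pre * ((6 / ALPHA) * math.exp(ALPHA * (1 - r / R_MIN))
                  - (R_MIN / r) ** 6)

for r in (3.5, 3.8, 4.2, 5.0):
    print(f"r={r:.1f} A  LJ={lennard_jones(r):8.1f} K  exp-6={exp6(r):8.1f} K")
```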

  10. Intraseasonal Variability of the Indian Monsoon as Simulated by a Global Model

    Science.gov (United States)

    Joshi, Sneh; Kar, S. C.

    2018-01-01

    This study uses the Global Forecast System (GFS) model at T126 horizontal resolution to carry out seasonal simulations with prescribed sea-surface temperatures (SSTs). The main objective of the study is to evaluate the simulated Indian monsoon variability on intraseasonal timescales. The GFS model has been integrated for 29 monsoon seasons with 15-member ensembles forced with observed SSTs, and an additional 16-member ensemble has been run using climatological SSTs. The northward propagation of intraseasonal rainfall anomalies over the Indian region in the model simulations has been examined. It is found that the model is unable to simulate the observed moisture pattern when the active zone of convection is over central India. However, the model does simulate the observed pattern of specific humidity on day -10 and day +10 of maximum convection over central India during the life cycle of northward propagation. Space-time spectral analysis of the simulated equatorial waves shows that the ensemble members have varying amounts of power in each band of wavenumbers and frequencies. However, the variations among ensemble members are largest in the antisymmetric component of the westward-moving waves.

  11. Application of new simulation algorithms for modeling rf diagnostics of electron clouds

    International Nuclear Information System (INIS)

    Veitzer, Seth A.; Smithe, David N.; Stoltz, Peter H.

    2012-01-01

    Traveling-wave rf diagnostics of electron-cloud build-up show promise as a non-destructive technique for measuring plasma density and the efficacy of mitigation techniques. However, it is very difficult to derive an absolute measure of plasma density from experimental measurements, for a variety of technical reasons. Detailed numerical simulations are vital for understanding experimental data, and have successfully modeled cloud build-up. Such simulations are limited in their ability to reproduce experimental data by the large separation of scales inherent in the problem: one must resolve rf frequencies in the GHz range as well as the plasma modulation frequency of tens of MHz, while running for very long simulation times, on the order of microseconds. The application of new numerical simulation techniques allows us to bridge the scales in this problem and produce spectra that can be compared directly to experiments. The first method is to use a plasma dielectric model to measure plasma-induced phase shifts in the rf wave. The dielectric is modulated at a low frequency, simulating the effect of multiple bunch crossings. This allows simulations to be performed without kinetic particles representing the plasma, which both speeds up the simulations and reduces numerical noise from interpolation of particle charge and currents onto the computational grid. Secondly, we utilize a port boundary condition model to simultaneously absorb rf at the simulation boundaries and launch the rf into the simulation. This method restricts rf frequencies more accurately than driving the rf with an external (finite) current source and adding absorbing layers at the boundaries. We also explore the effects of non-uniform plasma densities on the simulated spectra.
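
    The quantity such a diagnostic inverts for density is, in the simplest cold-plasma picture, the phase shift of the rf wave relative to vacuum across the cloud. A uniform-slab sketch, with all numbers illustrative:

```python
import math

# Phase shift of a travelling rf wave crossing a uniform, unmagnetized
# cold-plasma slab of given length, relative to propagation in vacuum.
E_CHARGE, E_MASS, EPS0, C = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8

def phase_shift(n_e, f_rf, length):
    w = 2 * math.pi * f_rf
    wp2 = n_e * E_CHARGE**2 / (EPS0 * E_MASS)      # plasma frequency squared
    n_index = math.sqrt(max(1 - wp2 / w**2, 0.0))  # cold-plasma refractive index
    return (w / C) * length * (1 - n_index)        # radians relative to vacuum

for n_e in (1e11, 1e12, 1e13):                     # electron density, m^-3
    print(f"n_e={n_e:.0e} m^-3 -> {phase_shift(n_e, 1.5e9, 10.0):.2e} rad")
```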

  12. Friction and lubrication modelling in sheet metal forming simulations of the Volvo XC90 inner door

    Science.gov (United States)

    Sigvant, M.; Pilthammar, J.; Hol, J.; Wiebenga, J. H.; Chezan, T.; Carleer, B.; van den Boogaard, A. H.

    2016-08-01

    The quality of sheet metal formed parts is strongly dependent on the friction and lubrication conditions that are acting in the actual production process. Although friction is of key importance, it is currently not considered in detail in stamping simulations. This paper presents project results considering friction and lubrication modelling in stamping simulations of the Volvo XC90 inner door. For this purpose, the TriboForm software is used in combination with the AutoForm software. Validation of the simulation results is performed based on door-inner parts taken from the press line in a full-scale production run. The project results demonstrate the improved prediction accuracy of stamping simulations.

  13. Friction and lubrication modeling in sheet metal forming simulations of a Volvo XC90 inner door

    Science.gov (United States)

    Sigvant, M.; Pilthammar, J.; Hol, J.; Wiebenga, J. H.; Chezan, T.; Carleer, B.; van den Boogaard, A. H.

    2016-11-01

    The quality of sheet metal formed parts is strongly dependent on the tribology, friction and lubrication conditions that are acting in the actual production process. Although friction is of key importance, it is currently not considered in detail in stamping simulations. This paper presents a selection of results considering friction and lubrication modeling in sheet metal forming simulations of the Volvo XC90 right rear door inner. For this purpose, the TriboForm software is used in combination with the AutoForm software. Validation of the simulation results is performed using door inner parts taken from the press line in a full-scale production run. The results demonstrate the improved prediction accuracy of stamping simulations by accounting for accurate friction and lubrication conditions, and the strong influence of friction conditions on both the part quality and the overall production stability.

  14. Interface between Core/TH Model and Simulator for OPR1000

    International Nuclear Information System (INIS)

    Hwang, Do Hyun; Lee, Myeong Soo; Hong, Jin Hyuk; Lee, Seung Ho; Suh, Jung Kwan

    2009-01-01

    The OPR1000 simulator for ShinKori Unit 1, which will operate at a thermal core power of 2815 MWt, is being developed while ShinKori Units 1 and 2 are being built. The OPR1000 simulator adopted the RELAP5 R/T code, an adaptation of the RELAP5 and NESTLE codes that runs in real-time mode with graphical visualization, to model the Nuclear Steam Supply System (NSSS) thermal-hydraulics (TH) and the reactor core. RELAP5 is an advanced, best-estimate reactor TH simulation code developed at the Idaho National Engineering and Environmental Laboratory (INEEL), and NESTLE is a true two-energy-group neutronics code that computes the neutron flux and power for each node at every time step. The simulator environment is 3KEYMASTER™, a commercial environment tool of WSC.

  15. Failure Diameter of PBX 9502: Simulations with the SURFplus model

    Energy Technology Data Exchange (ETDEWEB)

    Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-03

    SURFplus is a reactive burn model for high explosives aimed at modelling shock initiation and propagation of detonation waves. It utilizes the SURF model for the fast hot-spot reaction plus a slow reaction for the energy released by carbon clustering. A feature of the SURF model is a partial decoupling between burn-rate parameters and detonation-wave properties. Previously, parameters for PBX 9502 that control shock initiation had been calibrated to Pop-plot data (distance of run to detonation as a function of the shock pressure initiating the detonation). Here, burn-rate parameters for the high-pressure regime are adjusted to fit the failure diameter and the limiting detonation speed just above the failure diameter. Simulated results are shown for an unconfined rate stick when the PBX 9502 diameter is slightly above and slightly below the failure diameter. Just above the failure diameter, in the rest frame of the detonation wave, the front is sonic at the PBX/air interface. As a consequence, the lead shock in the neighborhood of the interface is supported by the detonation pressure in the interior of the explosive rather than by the reaction immediately behind the front. In the interior, the sonic point occurs near the end of the fast hot-spot reaction. Consequently, the slow carbon-clustering reaction cannot affect the failure diameter. Below the failure diameter, the radial extent of the detonation front decreases, starting from the PBX/air interface; that is, the failure starts at the PBX boundary and propagates inward to the axis of the rate stick.
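
    Pop-plot calibration reduces to a straight-line fit in log-log space, since run distance versus initiating shock pressure is close to a power law; the sketch below fits log x = a + b log P. The (P, x) pairs are invented placeholders, not PBX 9502 data.

```python
import numpy as np

# Log-log fit of run distance to detonation vs. initiating shock pressure
# (a Pop plot). Data points are illustrative placeholders only.
P = np.array([8.0, 10.0, 12.0, 14.0, 16.0])   # shock pressure, GPa
x = np.array([22.0, 12.0, 7.5, 5.0, 3.6])     # run distance, mm

b, a = np.polyfit(np.log(P), np.log(x), 1)    # slope b, intercept a
print(f"log-log slope {b:.2f}; predicted run at 11 GPa: "
      f"{np.exp(a) * 11.0 ** b:.1f} mm")
```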

  16. Medium-term erosion simulation of an abandoned mine site using the SIBERIA landscape evolution model

    International Nuclear Information System (INIS)

    Hancock, G.R.; Willgoose, G.R.

    2000-01-01

    This study forms part of a collaborative project designed to validate the long-term erosion predictions of the SIBERIA landform evolution model on rehabilitated mine sites. The SIBERIA catchment evolution model can simulate the evolution of landforms resulting from runoff and erosion over many years. SIBERIA needs to be calibrated before evaluating whether it correctly models the observed evolution of rehabilitated mine landforms. A field study to collect data to calibrate SIBERIA was conducted at the abandoned Scinto 6 uranium mine located in the Kakadu Region, Northern Territory, Australia. The data were used to fit parameter values to a sediment loss model and a rainfall-runoff model. The derived runoff and erosion model parameter values were used in SIBERIA to simulate 50 years of erosion by concentrated flow on the batters of the abandoned site. The SIBERIA runs correctly simulated the geomorphic development of the gullies on the man-made batters of the waste rock dump. The observed gully position, depth, volume, and morphology on the waste rock dump were quantitatively compared with the SIBERIA simulations. The close similarities between the observed and simulated gully features indicate that SIBERIA can accurately predict the rate of gully development on a man-made post-mining landscape over periods of up to 50 years. SIBERIA is an appropriate model for assessment of erosional stability of rehabilitated mine sites over time spans of around 50 years. Copyright (2000) CSIRO Australia

  17. Modelling and Simulation of Search Engine

    Science.gov (United States)

    Nasution, Mahyuddin K. M.

    2017-01-01

    The best tool currently used to access information is a search engine. The information space, meanwhile, has its own behaviour; describing it mathematically makes it easier to identify the characteristics associated with it. This paper reveals some characteristics of search engines based on a model of the document collection, and then estimates their impact on the feasibility of the information. We derive characteristics of search engines from lemmas and theorems about singletons and doubletons, and then compute statistical characteristics to simulate the use of a search engine, in this case Google and Yahoo. There are differences in the behaviour of the two search engines, although in theory both are based on the concept of a document collection.

  18. Monte Carlo simulation of Markov unreliability models

    International Nuclear Information System (INIS)

    Lewis, E.E.; Boehm, F.

    1984-01-01

    A Monte Carlo method is formulated for evaluating the unreliability of complex systems with known component failure and repair rates. The formulation is in terms of a Markov process, allowing dependences between components to be modeled and computational efficiencies to be achieved in the Monte Carlo simulation. Two variance reduction techniques, forced transition and failure biasing, are employed to increase the computational efficiency of the random walk procedure. For an example problem these improve computational efficiency by more than three orders of magnitude over analog Monte Carlo. The method is generalized to treat problems with distributed failure and repair rate data, and a batching technique is introduced and shown to result in substantial increases in computational efficiency for an example problem. A method for separating the variance due to data uncertainty from that due to the finite number of random walks is presented. (orig.)
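
    A minimal sketch of failure biasing for a two-component parallel system with repair (the Markov state is the number of failed components; the system fails on reaching state 2): the failure branch is chosen with an inflated probability and the walk carries a likelihood-ratio weight so the estimator stays unbiased. Rates and the bias probability are illustrative, and the forced-transition trick on the holding times is omitted for brevity.

```python
import random

# Analog vs. failure-biased Monte Carlo estimate of the probability that a
# two-component parallel system (failure rate LAM each, repair rate MU)
# fails within the mission time.
LAM, MU, T_MISSION, P_BIAS = 1e-3, 0.1, 100.0, 0.5

def walk(biased, rng):
    t, n, w = 0.0, 0, 1.0                  # time, failed count, LR weight
    while True:
        rate = (2 - n) * LAM + n * MU      # total transition rate from state n
        t += rng.expovariate(rate)
        if t > T_MISSION:
            return 0.0                     # survived the mission
        p_fail = (2 - n) * LAM / rate      # true failure-branch probability
        if biased and 0.0 < p_fail < 1.0:
            take_fail = rng.random() < P_BIAS
            w *= p_fail / P_BIAS if take_fail else (1 - p_fail) / (1 - P_BIAS)
        else:
            take_fail = rng.random() < p_fail
        n += 1 if take_fail else -1
        if n == 2:
            return w                       # system failure: score the weight

rng = random.Random(42)
for biased in (False, True):
    est = sum(walk(biased, rng) for _ in range(20000)) / 20000
    print(("biased" if biased else "analog") + f" estimate: {est:.2e}")
```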

  19. Modeling and simulation technology readiness levels.

    Energy Technology Data Exchange (ETDEWEB)

    Clay, Robert L.; Shneider, Max S.; Marburger, S. J.; Trucano, Timothy Guy

    2006-01-01

    This report summarizes the results of an effort to establish a framework for assigning and communicating technology readiness levels (TRLs) for the modeling and simulation (ModSim) capabilities at Sandia National Laboratories. The effort was undertaken as a special assignment for the Weapon Simulation and Computing (WSC) program office, led by Art Hale, and lasted from January to September 2006. This report summarizes the results, conclusions, and recommendations, and is intended to help guide the program office in decisions about the future direction of this work. The work was broken into several distinct phases, starting with establishing the scope and definition of the assignment; these are characterized in a set of key assertions provided in the body of this report. Fundamentally, the assignment involved establishing an intellectual framework for TRL assignments to Sandia's modeling and simulation capabilities, including the development and testing of a process to conduct the assignments. To that end, we proposed a methodology for both assigning and understanding the TRLs, and outlined some of the restrictions that need to be placed on this process and on the expected use of the result. One of the first assumptions we overturned was the notion of a "static" TRL; rather, we concluded that problem context is essential in any TRL assignment, which leads to dynamic results (i.e., a ModSim tool's readiness level depends on how it is used, and by whom). While we leveraged the classic TRL results from NASA, DoD, and Sandia's NW program, we arrived at a substantially revised version of the TRL definitions, maintaining consistency with the classic level definitions and the Predictive Capability Maturity Model (PCMM) approach. In fact, we substantially leveraged the foundation the PCMM team provided, and augmented it as needed. Given the modeling and simulation TRL definitions and our proposed assignment methodology, we

  20. Computational Modeling and Simulation of Developmental ...

    Science.gov (United States)

    Standard practice for assessing developmental toxicity is the observation of apical endpoints (intrauterine death, fetal growth retardation, structural malformations) in pregnant rats/rabbits following exposure during organogenesis. EPA’s computational toxicology research program (ToxCast) generated vast in vitro cellular and molecular effects data on >1858 chemicals in >600 high-throughput screening (HTS) assays. The diversity of assays has been increased for developmental toxicity with several HTS platforms, including the devTOX-quickPredict assay from Stemina Biomarker Discovery utilizing the human embryonic stem cell line (H9). Translating these HTS data into higher-order predictions of developmental toxicity is a significant challenge. Here, we address the application of computational systems models that recapitulate the kinematics of dynamical cell signaling networks (e.g., SHH, FGF, BMP, retinoids) in a CompuCell3D.org modeling environment. Examples include angiogenesis (angiodysplasia) and dysmorphogenesis. Being numerically responsive to perturbation, these models are amenable to data integration for systems toxicology and Adverse Outcome Pathways (AOPs). The AOP simulation outputs predict potential phenotypes based on the in vitro ToxCast HTS data. A heuristic computational intelligence framework that recapitulates the kinematics of dynamical cell signaling networks in the embryo, together with the in vitro profiling data, produces quantitative predictions