WorldWideScience

Sample records for model simulations run

  1. The Trick Simulation Toolkit: A NASA/Open source Framework for Running Time Based Physics Models

    Science.gov (United States)

    Penn, John M.

    2016-01-01

    The Trick Simulation Toolkit is a simulation development environment used to create high-fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. Its purpose is to generate a simulation executable from a collection of user-supplied models and a simulation definition file. For each Trick-based simulation, Trick automatically provides job scheduling, numerical integration, the ability to write and restore human-readable checkpoints, data recording, interactive variable manipulation, a run-time interpreter, and many other commonly needed capabilities. This allows simulation developers to concentrate on their domain expertise and on the algorithms and equations of their models. Also included in Trick are tools for plotting recorded data and various other supporting utilities and libraries. Trick is written in C/C++ and Java and supports both the Linux and Mac OS X operating systems. This paper describes Trick's design and use at the NASA Johnson Space Center.
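    Trick's automatic job scheduling can be illustrated with a minimal sketch. This is not Trick's actual API — the class and method names below are invented — but it shows the idea of rate-based jobs driven by a fixed base time step:

```python
# Minimal rate-based job scheduler in the spirit of Trick's executive.
# Illustrative only: class and method names are invented, not Trick's API.

class Scheduler:
    def __init__(self, dt):
        self.dt = dt              # base time step, seconds
        self.time = 0.0
        self.jobs = []            # (step ratio, callable)

    def add_job(self, period, func):
        # each job period must be an integer multiple of the base step
        ratio = round(period / self.dt)
        assert abs(ratio * self.dt - period) < 1e-12
        self.jobs.append((ratio, func))

    def run(self, t_end):
        for step in range(1, round(t_end / self.dt) + 1):
            self.time = step * self.dt
            for ratio, func in self.jobs:
                if step % ratio == 0:      # fire at the job's own rate
                    func(self.time)

calls = []
sched = Scheduler(dt=0.01)
sched.add_job(0.01, lambda t: calls.append("dynamics"))   # 100 Hz model job
sched.add_job(0.05, lambda t: calls.append("logging"))    # 20 Hz recording job
sched.run(0.1)
n_dyn = calls.count("dynamics")
n_log = calls.count("logging")
print(n_dyn, n_log)   # 10 dynamics calls, 2 logging calls
```

    In Trick proper, jobs are declared in the simulation definition file and dispatched by the generated executive; the sketch only mirrors the rate-scheduling idea.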

  2. NUMERICAL SIMULATION OF SOLITARY WAVE RUN-UP AND OVERTOPPING USING BOUSSINESQ-TYPE MODEL

    Institute of Scientific and Technical Information of China (English)

    TSUNG Wen-Shuo; HSIAO Shih-Chun; LIN Ting-Chieh

    2012-01-01

    In this article, a high-order Boussinesq-type model and sets of laboratory experiments in a large-scale flume, with breaking solitary waves climbing up slopes of two inclinations, are used to study the shoreline behavior of breaking and non-breaking solitary waves on plane slopes. The scale effect on run-up height is briefly discussed. The model's simulation capability is validated against the available laboratory data and the present experiments. Serial numerical tests are then conducted to study how the shoreline motion correlates with beach slope and wave nonlinearity for breaking and non-breaking waves. The empirical formula proposed by Hsiao et al. for predicting the maximum run-up height of a breaking solitary wave on plane slopes, valid over a wide range of slope inclinations, is confirmed to be conservative. Furthermore, solitary waves impacting and overtopping an impermeable sloping seawall at various water depths are investigated. Laboratory data on run-up height, shoreline motion, free-surface elevation and overtopping discharge are presented, and comparisons of run-up, run-down, shoreline trajectory and wave overtopping discharge are made. A fairly good agreement is seen between numerical results and experimental data. This demonstrates that the present depth-integrated model can be used as an efficient tool for predicting a wide spectrum of coastal problems.
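    As a point of comparison for the kind of empirical run-up relation discussed above, the classical Synolakis (1987) law for non-breaking solitary waves on a plane beach, R/d = 2.831 (cot β)^(1/2) (H/d)^(5/4), is easy to evaluate (shown here because the Hsiao et al. coefficients are not reproduced in this abstract):

```python
import math

def synolakis_runup(H_over_d, slope_deg):
    """Maximum run-up R/d of a non-breaking solitary wave of height H in
    depth d on a plane beach of the given slope, after Synolakis (1987):
    R/d = 2.831 * sqrt(cot(beta)) * (H/d)**1.25."""
    cot_beta = 1.0 / math.tan(math.radians(slope_deg))
    return 2.831 * math.sqrt(cot_beta) * H_over_d ** 1.25

# e.g. a wave of height 0.1 d on a gentle 2.88 degree (about 1:19.85) beach
r = synolakis_runup(0.1, 2.88)
print(round(r, 3))
```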

  3. Running Parallel Discrete Event Simulators on Sierra

    Energy Technology Data Exchange (ETDEWEB)

    Barnes, P. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Jefferson, D. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-12-03

    In this proposal we consider porting the ROSS/Charm++ simulator and the discrete event models that run under its control so that they run on the Sierra architecture and make efficient use of the Volta GPUs.

  4. Modeling and simulation of Cobot based on double over-running clutches

    Institute of Scientific and Technical Information of China (English)

    DONG Yu-hong; ZHANG Li-xun

    2008-01-01

    In order to analyze the characteristics of a Cobot cooperating with a human in a shared workspace, the model of a nonholonomic constraint joint mechanism and its control model were constructed based on double over-running clutches. Simulation analysis validated the passive and constraint features of the joint mechanism. In terms of Cobot components, the control model of a Cobot following a desired trajectory was built up. The simulation studies illustrate that the Cobot can track a desired trajectory while retaining its passive and constraint features: a human supplies the operating force that makes the Cobot move, and a computer system controls its motion trajectory, so it can meet the requirements of Cobot collaboration with an operator. The Cobot model can be used in applications such as material handling, parts assembly, and other situations requiring man-machine cooperation.

  5. Population growth and economic development in the very long run: a simulation model of three revolutions.

    Science.gov (United States)

    Steinmann, G; Komlos, J

    1988-08-01

    The authors propose an economic model capable of simulating the four main historical stages of civilization: hunting, agricultural, industrial, and postindustrial. An output-maximizing society is assumed to respond to changes in factor endowments by switching technologies. Changes in factor proportions arise through population growth and capital accumulation. A slow rate of exogenous technical progress is assumed. The model synthesizes Malthusian and Boserupian notions of the effect of population growth on per capita output. Initially the capital-diluting effect of population growth dominates. As population density increases, however, and a threshold is reached, the Boserupian effect becomes crucial and a technological revolution occurs. The cycle is thereafter repeated. After the second economic revolution, however, the Malthusian constraint dissolves permanently, as population growth can continue without being constrained by diminishing returns to labor. By synthesizing Malthusian and Boserupian notions, the model is able to capture the salient features of economic development in the very long run.
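    The Malthus-Boserup interplay described above can be caricatured in a few lines. This is a toy illustration, not the authors' model; every parameter value below is invented:

```python
def simulate(periods=2000, pop=1.0, tech=1.1, threshold=4.0):
    """Toy long-run growth dynamics: Malthusian diminishing returns under a
    given technology, slow exogenous progress, and a Boserupian technology
    'revolution' whenever population density crosses a threshold.
    All parameters are invented."""
    revolutions = 0
    for _ in range(periods):
        tech *= 1.002                           # slow exogenous progress
        output_pc = tech / pop ** 0.5           # diminishing returns to labour
        pop *= 1.0 + 0.05 * (output_pc - 1.0)   # Malthusian population response
        if pop > threshold:                     # Boserupian revolution
            tech *= 2.0
            threshold *= 4.0
            revolutions += 1
    return pop, tech, revolutions

pop, tech, revs = simulate()
print(revs, round(pop, 1))
```

    Between revolutions the population converges toward the Malthusian equilibrium set by the current technology; the slow exogenous drift eventually pushes density over the threshold, triggering the next revolution, as in the paper's narrative.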

  6. Development of a simulation model for compression ignition engine running with ignition improved blend

    Directory of Open Access Journals (Sweden)

    Sudeshkumar Ponnusamy Moranahalli

    2011-01-01

    Department of Automobile Engineering, Anna University, Chennai, India. The present work describes the thermodynamic and heat-transfer models used in a computer program that simulates a direct-injection compression-ignition engine fuelled with a diesel/ignition-improver blend, predicting combustion and emission characteristics with a classical two-zone approach. One zone consists of pure air (the non-burning zone); the other contains fuel and combustion products (the burning zone). The first law of thermodynamics and equations of state are applied in each of the two zones to yield cylinder temperature and pressure histories. Using the two-zone combustion model, the combustion parameters and the chemical-equilibrium composition were determined. To validate the model, an experimental investigation was conducted on a single-cylinder direct-injection diesel engine fuelled with a 12% by volume blend of 2-ethoxyethanol in diesel fuel. Adding the ignition improver to diesel fuel decreases exhaust smoke and increases thermal efficiency across the power outputs. Good agreement was observed between simulated and experimental results, and the proposed model requires little computational time for a complete run.

  7. Simulating run-up on steep slopes with operational Boussinesq models; capabilities, spurious effects and instabilities

    Directory of Open Access Journals (Sweden)

    F. Løvholt

    2013-06-01

    Tsunamis induced by rock slides plunging into fjords constitute a severe threat to local coastal communities. The rock-slide impact may give rise to highly non-linear waves in the near field, and because the wavelengths are relatively short, frequency dispersion comes into play. Fjord systems are rugged with steep slopes, and modeling non-linear dispersive waves in this environment, with simultaneous run-up, is demanding. We have run an operational Boussinesq-type TVD (total variation diminishing) model using different run-up formulations. Two different tests are considered: inundation on steep slopes and propagation in a trapezoidal channel. In addition, a set of Lagrangian models serves as reference models. Demanding test cases with solitary-wave amplitudes ranging from 0.1 to 0.5 and slopes ranging from 10° to 50° were applied. Different run-up formulations yielded clearly different accuracy and stability, and only some provided accuracy similar to the reference models. The test cases revealed that the model was prone to instabilities for large non-linearity and fine resolution. Some of the instabilities were linked to false breaking during the first positive inundation, which was not observed for the reference models. None of the models was able to handle the bore forming during drawdown, however. The instabilities are linked to short-crested undulations on the grid scale and appear at fine resolution during inundation. As a consequence, convergence was not always obtained. There is reason to believe that the instability may be a general problem for Boussinesq models in fjords.

  8. Simulation of nonlinear wave run-up with a high-order Boussinesq model

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Madsen, Per A.

    2008-01-01

    cases involving long wave resonance in a parabolic basin, solitary wave evolution in a triangular channel, and solitary wave run-up on a circular conical island are considered. In each case the computed results compare well against available analytical solutions or experimental measurements. The ability...

  9. The Trick Simulation Toolkit: A NASA/Open source Framework for Running Time Based Physics Models

    Science.gov (United States)

    Penn, John M.; Lin, Alexander S.

    2016-01-01

    This paper describes the design and use of the Trick Simulation Toolkit, a simulation development environment for creating high-fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. It describes Trick's design goals and how the development environment attempts to achieve those goals. It describes how Trick is used in some of the many training and engineering simulations at NASA. Finally, it describes the Trick NASA/Open source project on GitHub.

  10. IPSL-CM5A2. An Earth System Model designed to run long simulations for past and future climates.

    Science.gov (United States)

    Sepulchre, Pierre; Caubel, Arnaud; Marti, Olivier; Hourdin, Frédéric; Dufresne, Jean-Louis; Boucher, Olivier

    2017-04-01

    The IPSL-CM5A model was developed and released in 2013 "to study the long-term response of the climate system to natural and anthropogenic forcings as part of the 5th Phase of the Coupled Model Intercomparison Project (CMIP5)" [Dufresne et al., 2013]. Although this model has also been used for numerous paleoclimate studies, a major limitation was its computation time, which averaged 10 model-years/day on 32 cores of the Curie supercomputer (at the TGCC computing center, France). Such performance was compatible with the experimental designs of intercomparison projects (e.g. CMIP, PMIP) but became limiting for modelling activities involving several multi-millennial experiments, which are typical of Quaternary or "deep-time" paleoclimate studies, in which a fully equilibrated deep ocean is mandatory. Here we present the Earth System model IPSL-CM5A2. Starting from IPSL-CM5A, technical developments have been performed both on the separate components and on the coupling system in order to speed up the whole coupled model. These developments include hybrid MPI-OpenMP parallelization in the LMDz atmospheric component, a new input-output library performing parallel asynchronous I/O by using computing cores as "I/O servers", and a parallel coupling library between the ocean and atmospheric components. Running on 304 cores, the model can now simulate 55 years per day, opening the way to multi-millennial simulations. Apart from obtaining better computing performance, one aim of setting up IPSL-CM5A2 was to overcome the cold bias in global surface air temperature (t2m) seen in IPSL-CM5A. We present the tuning strategy used to overcome this bias, as well as the main characteristics (including biases) of the pre-industrial climate simulated by IPSL-CM5A2. Lastly, we briefly present paleoclimate simulations run with this model, for the Holocene and for deeper timescales in the Cenozoic, for which the particular continental configuration

  11. Damage Propagation Modeling for Aircraft Engine Run-to-Failure Simulation

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper describes how damage propagation can be modeled within the modules of aircraft gas turbine engines. To that end, response surfaces of all sensors are...

  12. CMS Full Simulation for Run-2

    CERN Document Server

    Hildreth, M; Lange, D J; Kortelainen, M J

    2015-01-01

    During the LHC shutdown between Run 1 and Run 2, intensive development was carried out to improve the performance of the CMS simulation. For physics improvements, a migration from Geant4 9.4p03 to Geant4 10.0p02 was performed. CPU performance was improved by introducing the Russian roulette method inside the CMS calorimeters, optimizing the CMS simulation sub-libraries, and using a static build of the simulation executable. As a result of these efforts, the CMS simulation has been sped up by about a factor of two. In this work we describe the updates to the different software components of the CMS simulation. The development of a multi-threaded (MT) simulation approach for CMS is also discussed.
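    Russian roulette is a standard variance-reduction technique: a low-importance particle is kept with probability 1/w and its statistical weight multiplied by w, so the expected total weight — and hence the physics estimate — is unchanged. A generic sketch (not CMS's Geant4 implementation):

```python
import random

def russian_roulette(weights, w, rng):
    """Play Russian roulette with factor w: keep each particle with
    probability 1/w and multiply the survivor's weight by w. The expected
    total weight is unchanged, so tallies stay unbiased."""
    survivors = []
    for weight in weights:
        if rng.random() < 1.0 / w:
            survivors.append(weight * w)
    return survivors

rng = random.Random(42)                     # fixed seed for reproducibility
particles = [1.0] * 100_000                 # unit-weight particles
survivors = russian_roulette(particles, w=10.0, rng=rng)
# about a tenth survive, each carrying weight 10, so total weight ~ 100000
print(len(survivors), round(sum(survivors)))
```

    In a calorimeter this is typically applied to cheap, numerous secondaries (e.g. low-energy photons), trading per-particle tracking cost for a modest increase in statistical variance.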

  13. Humans running in place on water at simulated reduced gravity.

    Directory of Open Access Journals (Sweden)

    Alberto E Minetti

    BACKGROUND: On Earth only a few legged species, such as water strider insects, some aquatic birds and lizards, can run on water. For most other species, including humans, this is precluded by body size and proportions, lack of appropriate appendages, and limited muscle power. However, if gravity is reduced to less than Earth's gravity, running on water should require less muscle power. Here we use a hydrodynamic model to predict the gravity levels at which humans should be able to run on water. We test these predictions in the laboratory using a reduced-gravity simulator. METHODOLOGY/PRINCIPAL FINDINGS: We adapted a model equation, previously used by Glasheen and McMahon to explain the dynamics of the basilisk lizard, to predict the body mass, stride frequency and gravity necessary for a person to run on water. Progressive body-weight unloading of a person running in place on a wading pool confirmed the theoretical predictions that a person could run on water, at lunar (or lower) gravity levels, using relatively small rigid fins. Three-dimensional motion capture of reflective markers on major joint centers showed that humans, similarly to the basilisk lizard and the western grebe, keep the head-trunk segment at a nearly constant height, despite the high stride frequency and the intensive locomotor effort. Trunk stabilization at a nearly constant height differentiates running on water from other, more usual human gaits. CONCLUSIONS/SIGNIFICANCE: The results showed that a hydrodynamic model of lizards running on water can also be applied to humans, despite the enormous difference in body size and morphology.

  14. ATLAS simulation of boson plus jets processes in Run 2

    CERN Document Server

    The ATLAS collaboration

    2017-01-01

    This note describes the ATLAS simulation setup used to model the production of single electroweak bosons ($W$, $Z\\gamma^\\ast$ and prompt $\\gamma$) in association with jets in proton--proton collisions at centre-of-mass energies of 8 and 13 TeV. Several Monte Carlo generator predictions are compared in regions of phase space relevant for data analyses during the LHC Run-2, or compared to unfolded data distributions measured in previous Run-1 or early Run-2 ATLAS analyses. Comparisons are made for regions of phase space with or without additional requirements on the heavy-flavour content of the accompanying jets, as well as electroweak $Vjj$ production processes. Both higher-order corrections and systematic uncertainties are also discussed.

  15. Collaborative Simulation Run-time Management Environment Based on HLA

    Institute of Scientific and Technical Information of China (English)

    王江云; 柴旭东; 王行仁

    2002-01-01

    The Collaborative Simulation Run-time Management Environment based on HLA (CSRME) mainly focuses on simulation problems for the system design of the complex distributed simulation. CSRME can integrate all the simulation tools and simulation applications that comply with the well-documented interface standards defined by CSRME. CSRME supports both the interoperability of different simulations and the integration of simulation tools, as well as provides simulation run-time management, simulation time management and simulation data management. Finally, the distributed command training system is analyzed and realized to validate the theories of CSRME.

  16. Modeling of stepper motor control system and running curve simulation

    Institute of Scientific and Technical Information of China (English)

    周黎; 杨世洪; 高晓东

    2011-01-01

    In order to optimize the open-loop control of stepper motors, the influence of the running curve and transmission stiffness on the motion of a stepper-motor open-loop control system was studied. A mathematical model of the control system was established according to stepper-motor operating principles and system dynamics. A versine-based acceleration/deceleration curve with high-order smoothness was designed and compared by simulation with the common constant and exponential acceleration/deceleration curves. The results show that the versine-based curve is better at restraining shocks during motion and at reducing the amplitude of residual vibration at the end of the run. The proposed control strategy is suitable for applications where high motion accuracy and stability are required, and it has been successfully applied to the scan control of a step-framing aerial camera.
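    The versine profile has zero acceleration at both endpoints of the ramp, which is what suppresses shocks and residual vibration. A sketch with invented ramp parameters (not the authors' exact formulation):

```python
import math

def versine_accel(t, T, v_max):
    """Versine acceleration over a ramp of duration T reaching speed v_max:
    a(t) = (v_max/T) * (1 - cos(2*pi*t/T)).  Zero at t = 0 and t = T."""
    return (v_max / T) * (1.0 - math.cos(2.0 * math.pi * t / T))

def versine_speed(t, T, v_max):
    """Closed-form integral of versine_accel from 0 to t."""
    return (v_max / T) * (t - T / (2.0 * math.pi) * math.sin(2.0 * math.pi * t / T))

T, v_max = 0.5, 2000.0        # invented: 0.5 s ramp to 2000 steps/s
print(versine_accel(0.0, T, v_max),          # 0.0  (no jump at start)
      round(versine_speed(T, T, v_max), 6))  # 2000.0 (full speed at ramp end)
```

    A constant-acceleration ramp, by contrast, applies a step change of acceleration at both ends of the ramp, which excites the drivetrain's resonances.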

  17. Gravitational Baryogenesis in Running Vacuum models

    CERN Document Server

    Oikonomou, V K; Nunes, Rafael C

    2016-01-01

    We study the gravitational baryogenesis mechanism for generating baryon asymmetry in the context of running vacuum models. Regardless of whether these models can produce a viable cosmological evolution, we demonstrate that they produce a non-zero baryon-to-entropy ratio even if the Universe is filled with conformal matter. This is a marked difference between running vacuum gravitational baryogenesis and the Einstein-Hilbert case, since in the latter the predicted baryon-to-entropy ratio is zero. We consider two running vacuum models and show that the resulting baryon-to-entropy ratio is compatible with the observational data.

  18. Polarization simulations in the RHIC run 15 lattice

    Energy Technology Data Exchange (ETDEWEB)

    Meot, F. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Huang, H. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Luo, Y. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Ranjbar, V. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Robert-Demolaize, G. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; White, S. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.

    2015-05-03

    RHIC polarized proton Run 15 uses a new acceleration-ramp optics, compared to RHIC Run 13 and earlier runs, in connection with electron-lens beam-beam compensation developments. The new optics induces different strengths in the depolarizing snake resonance sequence, from injection to top energy. As a consequence, polarization transport along the new ramp has been investigated, based on spin-tracking simulations. Sample results are reported and discussed.

  19. Pessimistic Predicate/Transform Model for Long Running Business Processes

    Institute of Scientific and Technical Information of China (English)

    WANG Jinling; JIN Beihong; LI Jing

    2005-01-01

    Many business processes in enterprise applications are both long running and transactional in nature. However, no current transaction model can provide full transaction support for such long-running business processes. This paper proposes a new transaction model, the pessimistic predicate/transform (PP/T) model, which can provide full transaction support for long-running business processes. A framework was proposed on the Enterprise JavaBeans platform to implement the PP/T model. The framework enables application developers to focus on the business logic, with the underlying platform providing the required transactional semantics. The development and maintenance effort is therefore greatly reduced. Simulations show that the model has sound concurrency management for long-running business processes.

  20. A luminosity model of RHIC gold runs

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, S.Y.

    2011-11-01

    In this note, we present a luminosity model for RHIC gold runs. The model is applied to the physics fills in the 2007 run, both without cooling and with longitudinal cooling applied to one beam only. The comparison being good, the model is then used to project a fill with longitudinal cooling applied to both beams. Further development and possible applications of the model are discussed. To maximize the integrated luminosity, higher beam intensity, smaller longitudinal and transverse emittance, and smaller β* are usually the directions to work on. Over the past 10 years, the RHIC gold runs have demonstrated a path toward this goal. Most recently, the successful commissioning of bunched-beam stochastic cooling, both longitudinal and transverse, has offered a chance of further RHIC luminosity improvement. With so many factors involved, a luminosity model is useful for identifying and projecting gains in machine development. In this article, a preliminary model is proposed. In Section 2, several secondary factors, not yet included in the model, are identified based on RHIC operating conditions and experience in current runs. In Section 3, the RHIC beam-store parameters used in the model are listed and validated. In Section 4, the factors included in the model are discussed, and the luminosity model is presented. In Section 5, typical RHIC gold fills without cooling and with partial cooling are compared with the model, and a projection of fills with more cooling is shown. In Section 6, further development of the model is discussed.
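    The generic single-collision-point luminosity formula underlying such models is L = f_rev · n_b · N1·N2 / (4π σ*²), with σ*² = ε β* for round beams. The sketch below uses invented, merely RHIC-scale numbers for illustration; it is not the note's actual model, which would additionally evolve intensity and emittance over the store:

```python
import math

def luminosity(n_bunches, N1, N2, f_rev, eps_geo, beta_star):
    """Round-beam, head-on collider luminosity
    L = f_rev * n_b * N1 * N2 / (4 * pi * sigma^2),  sigma^2 = eps * beta*.
    Units: f_rev in 1/s, eps_geo in m rad, beta_star in m -> L in m^-2 s^-1."""
    sigma2 = eps_geo * beta_star
    return f_rev * n_bunches * N1 * N2 / (4.0 * math.pi * sigma2)

# invented inputs, chosen only to exercise the formula
L = luminosity(n_bunches=111, N1=1.0e9, N2=1.0e9,
               f_rev=78.0e3, eps_geo=2.5e-9, beta_star=0.7)
print(f"{L * 1e-4:.2e} cm^-2 s^-1")       # 1 m^-2 = 1e-4 cm^-2
```

    The formula makes the abstract's working directions explicit: L rises with bunch intensity squared and falls linearly with emittance and β*, which is why cooling (shrinking ε) pays off directly in luminosity.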

  1. Humans Running in Place on Water at Simulated Reduced Gravity

    OpenAIRE

    Alberto E Minetti; Ivanenko, Yuri P.; Germana Cappellini; Nadia Dominici; Francesco Lacquaniti

    2012-01-01

    BACKGROUND: On Earth only a few legged species, such as water strider insects, some aquatic birds and lizards, can run on water. For most other species, including humans, this is precluded by body size and proportions, lack of appropriate appendages, and limited muscle power. However, if gravity is reduced to less than Earth's gravity, running on water should require less muscle power. Here we use a hydrodynamic model to predict the gravity levels at which humans should be able to run on wate...

  2. Numerical Modelling of Wave Run-Up

    DEFF Research Database (Denmark)

    Ramirez, Jorge Robert Rodriguez; Frigaard, Peter; Andersen, Thomas Lykke

    2011-01-01

    Wave loads are important in problems related to offshore structures, such as wave run-up and slamming. The computation of such wave problems is carried out with CFD models. This paper presents one model, NS3, which solves the 3D Navier-Stokes equations and uses the Volume of Fluid (VOF) method to treat the free...

  4. A Novel Technique for Running the NASA Legacy Code LAPIN Synchronously With Simulations Developed Using Simulink

    Science.gov (United States)

    Vrnak, Daniel R.; Stueber, Thomas J.; Le, Dzu K.

    2012-01-01

    This report presents a method for running a dynamic legacy inlet simulation in concert with another dynamic simulation that uses a graphical interface. The legacy code, NASA's LArge Perturbation INlet (LAPIN) model, was written in FORTRAN 77 (The Portland Group, Lake Oswego, OR) to run in a command shell, like other applications written for the Microsoft Disk Operating System (MS-DOS) (Microsoft Corporation, Redmond, WA). Simulink (MathWorks, Natick, MA) is a dynamic simulation environment that runs on modern graphical operating systems. The product of this work has both simulations, LAPIN and Simulink, running synchronously on the same computer with periodic data exchanges. Implementing the method described in this paper avoided extensive changes to the legacy code and preserved its basic operating procedure. This paper presents a novel method that provides inter-task data communication between the synchronously running processes.
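    The synchronization pattern — two simulations advancing in lockstep with a data exchange every step — can be mimicked generically with a blocking message exchange. This sketch uses Python threads, queues and toy dynamics; it does not reproduce the paper's actual LAPIN/Simulink mechanism:

```python
import threading
import queue

def legacy_sim(inbox, outbox, steps):
    """Stand-in for the legacy (LAPIN-like) side: each step it blocks until
    an input arrives, advances toy first-order dynamics, and replies."""
    state = 0.0
    for _ in range(steps):
        u = inbox.get()               # blocking read = synchronization point
        state = 0.9 * state + u
        outbox.put(state)

inbox, outbox = queue.Queue(), queue.Queue()
worker = threading.Thread(target=legacy_sim, args=(inbox, outbox, 5))
worker.start()

outputs = []
for step in range(5):                 # the driving ('Simulink-like') side
    inbox.put(1.0)                    # send this step's command input
    outputs.append(outbox.get())      # wait for the legacy side's output
worker.join()
print([round(y, 4) for y in outputs])   # [1.0, 1.9, 2.71, 3.439, 4.0951]
```

    The blocking receive on each side is what keeps the two simulations in lockstep: neither can advance a step until the other has produced its data for that step.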

  5. Long-Run Properties of Large-Scale Macroeconometric Models

    OpenAIRE

    Kenneth F. WALLIS; John D. WHITLEY

    1987-01-01

    We consider alternative approaches to the evaluation of the long-run properties of dynamic nonlinear macroeconometric models, namely dynamic simulation over an extended database, or the construction and direct solution of the steady-state version of the model. An application to a small model of the UK economy is presented. The model is found to be unstable, but a stable form can be produced by simple alterations to the structure.

  6. Advanced overlay: sampling and modeling for optimized run-to-run control

    Science.gov (United States)

    Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.

    2016-03-01

    In recent years, overlay (OVL) control schemes have become more complicated in order to meet the ever-shrinking margins of advanced technology nodes. This brings up new challenges for effective run-to-run OVL control, two of which are addressed in this work by new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) the bias-variance tradeoff in modeling. The first challenge in a high-order OVL control strategy is to optimize the number of measurements and their locations on the wafer, so that the "sample plan" of measurements provides high-quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this tradeoff between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty, while avoiding wafer-to-wafer and within-wafer measurement noise caused by metrology, scanner or process. This sampling scheme, combined with an advanced field-by-field extrapolated modeling algorithm, helps to maximize model stability and minimize on-product overlay (OPO). Second, the use of higher-order overlay models means more degrees of freedom, which increases the capability to correct for complicated overlay signatures but also increases sensitivity to process- or metrology-induced noise; this is the bias-variance trade-off. A high-order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have higher variation from wafer to wafer or lot to lot, unless an advanced modeling approach is used. In this paper, we characterize the bias-variance trade-off to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, lot-to-lot and wafer-to-wafer model term monitoring to

  7. Constrained Run-to-Run Optimization for Batch Process Based on Support Vector Regression Model

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    An iterative (run-to-run) optimization method is presented for batch processes under input constraints. It is generally very difficult to acquire an accurate mechanistic model for a batch process. Because support vector machines are powerful for problems characterized by small samples, nonlinearity, high dimension and local minima, support vector regression models were developed for the end-point optimization of batch processes. Since there is no analytical way to find the optimal trajectory, an iterative method was used that exploits the repetitive nature of batch processes to determine the optimal operating policy. The optimization algorithm is proved convergent. Numerical simulation shows that the method can improve process performance through iterations.
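    The run-to-run scheme can be sketched generically: after each batch, refit a surrogate on all observed (input, end-point) pairs, then choose the next input by optimizing the surrogate over the constraint set. To keep the example self-contained, a Nadaraya-Watson kernel smoother stands in for the paper's support vector regression, and the "process" is an invented quadratic:

```python
import math

def process(u):
    """The 'true' batch process, unknown to the optimizer: end-point
    quality peaks at u = 2.0, outside the allowed input range."""
    return -(u - 2.0) ** 2

def surrogate(u, data, h=0.3):
    """Nadaraya-Watson kernel smoother -- a simple stand-in for the
    support vector regression surrogate used in the paper."""
    wsum = ysum = 0.0
    for ui, yi in data:
        w = math.exp(-((u - ui) / h) ** 2)
        wsum += w
        ysum += w * yi
    return ysum / wsum

u_lo, u_hi = 0.0, 1.5                                   # input constraint
data = [(u, process(u)) for u in (0.0, 0.75, 1.5)]      # initial batches
grid = [u_lo + i * (u_hi - u_lo) / 100 for i in range(101)]

for run in range(10):                                     # run-to-run loop
    u_next = max(grid, key=lambda u: surrogate(u, data))  # optimize surrogate
    data.append((u_next, process(u_next)))                # execute the batch
best_u, best_y = max(data, key=lambda d: d[1])
print(best_u, best_y)   # settles on the constraint boundary u = 1.5
```

    Because the true optimum lies outside the feasible set, the iteration converges to the constrained optimum on the boundary, which is the behaviour a constrained run-to-run scheme should exhibit.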

  8. Constructing predictive models of human running.

    Science.gov (United States)

    Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre

    2015-02-06

    Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
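    The SLIP stance phase mentioned above is just a point mass on a massless spring leg pinned at the foot. A minimal planar integration with invented parameters (illustrative only; the paper's contribution is the data-driven control model built on top of SLIP, not reproduced here):

```python
import math

# Planar spring-loaded inverted pendulum (SLIP), stance phase only.
# Point mass m on a massless spring leg (stiffness k, rest length L0)
# pinned at the origin. Invented, human-scale parameter values.
m, k, L0, g = 80.0, 2.0e4, 1.0, 9.81

def stance_step(x, z, vx, vz, dt):
    L = math.hypot(x, z)
    F = k * (L0 - L)                  # spring force, > 0 when compressed
    ax = F * x / (L * m)              # force acts along the leg
    az = F * z / (L * m) - g
    return x + vx * dt, z + vz * dt, vx + ax * dt, vz + az * dt

# touchdown: leg 20 degrees from vertical, CoM moving forward and down;
# start marginally compressed so the stance loop engages
ang = math.radians(20.0)
x, z = -0.999 * L0 * math.sin(ang), 0.999 * L0 * math.cos(ang)
vx, vz = 3.0, -1.0
dt = 1.0e-4
while math.hypot(x, z) <= L0:         # integrate until the leg unloads
    x, z, vx, vz = stance_step(x, z, vx, vz, dt)
print(round(vx, 2), round(vz, 2))     # forward speed kept, vertical reversed
```

    The mass enters behind the foot moving downward and leaves ahead of it moving upward, reproducing the bouncing centre-of-mass trajectory that makes SLIP a useful template for running.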

  9. The New Horizon Run Cosmological N-Body Simulations

    CERN Document Server

    Kim, Juhan; Rossi, Graziano; Lee, Sang Min; Gott, J Richard

    2011-01-01

    We present two large cosmological N-body simulations, called Horizon Run 2 (HR2) and Horizon Run 3 (HR3), made using 6000^3 = 216 billion and 7210^3 = 374 billion particles, spanning volumes of (7.200 Gpc/h)^3 and (10.815 Gpc/h)^3, respectively. These simulations improve on our previous Horizon Run 1 (HR1) by up to a factor of 4.4 in volume, and range from 2600 to over 8800 times the volume of the Millennium Run. In addition, they achieve a considerably finer mass resolution, down to 1.25x10^11 M_sun/h, allowing us to resolve galaxy-size halos, with mean particle separations of 1.2 Mpc/h and 1.5 Mpc/h, respectively. We have measured the power spectrum, correlation function, mass function and basic halo properties with percent-level accuracy, and verified that they correctly reproduce the LCDM theoretical expectations, in excellent agreement with linear perturbation theory. Our unprecedentedly large-volume N-body simulations can be used for a variety of studies in cosmology and astrophysics, ranging from large-scal...
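    The quoted mean particle separations and mass resolution follow directly from the box sizes and particle counts; a quick check (the matter density Ω_m = 0.26 is an assumed WMAP-era value, not stated in this abstract):

```python
RHO_CRIT = 2.775e11      # critical density of the universe, h^2 M_sun / Mpc^3
OMEGA_M = 0.26           # assumed matter density parameter (not in abstract)

def mean_separation(box_gpc_h, n_per_side):
    """Mean inter-particle separation in Mpc/h."""
    return box_gpc_h * 1000.0 / n_per_side

def particle_mass(box_gpc_h, n_per_side):
    """Particle mass in M_sun/h: mean matter density times the comoving
    volume per particle."""
    return OMEGA_M * RHO_CRIT * mean_separation(box_gpc_h, n_per_side) ** 3

print(mean_separation(7.200, 6000),    # HR2: 1.2 Mpc/h, as quoted
      mean_separation(10.815, 7210))   # HR3: 1.5 Mpc/h, as quoted
print(f"{particle_mass(7.200, 6000):.3g} M_sun/h")   # ~1.25e11, as quoted
```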

  10. Running vacuum cosmological models: linear scalar perturbations

    Science.gov (United States)

    Perico, E. L. D.; Tamayo, D. A.

    2017-08-01

    In cosmology, phenomenologically motivated expressions for running vacuum are commonly parameterized as linear functions, typically denoted by Λ(H^2) or Λ(R). Such models assume an equation of state for the vacuum given by P̄_Λ = −ρ̄_Λ, relating its background pressure P̄_Λ with its mean energy density ρ̄_Λ ≡ Λ/8πG. This equation of state suggests that the vacuum dynamics is due to an interaction with the matter content of the universe. Most of the approaches studying the observational impact of these models only consider the interaction between the vacuum and the transient dominant matter component of the universe. We extend such models by assuming that the running vacuum is the sum of independent contributions, namely ρ̄_Λ = Σ_i ρ̄_Λi. Each vacuum component Λ_i is associated and interacting with one of the matter components i at both the background and perturbation levels. We derive the evolution equations for the linear scalar vacuum and matter perturbations in those two scenarios, and identify the running vacuum imprints on the cosmic microwave background anisotropies as well as on the matter power spectrum. In the Λ(H^2) scenario the vacuum is coupled with every matter component, whereas the Λ(R) description only leads to a coupling between vacuum and non-relativistic matter, producing different effects on the matter power spectrum.
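At the background level, the coupling described above can be made concrete: with the vacuum equation of state P̄_Λ = −ρ̄_Λ, total energy conservation for each interacting vacuum-matter pair takes the generic form below (a sketch in the abstract's notation, with w_i the equation-of-state parameter of matter species i; the per-pair split is the paper's assumption):

```latex
% Background conservation for each vacuum--matter pair (sketch):
% the vacuum exchanges energy with the species it couples to.
\dot{\bar{\rho}}_{\Lambda i} + \dot{\bar{\rho}}_i
  + 3H\,(1 + w_i)\,\bar{\rho}_i = 0 ,
\qquad
\bar{\rho}_\Lambda = \sum_i \bar{\rho}_{\Lambda i}
```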

  11. Model for radionuclide transport in running waters

    Energy Technology Data Exchange (ETDEWEB)

    Jonsson, Karin; Elert, Mark [Kemakta Konsult AB, Stockholm (Sweden)

    2005-11-15

    Two sites in Sweden are currently under investigation by SKB for their suitability as locations for a deep repository of radioactive waste: the Forsmark and Simpevarp/Laxemar areas. As a part of the safety assessment, SKB has formulated a biosphere model with different sub-models for different parts of the ecosystem, in order to be able to predict the dose to humans following a possible radionuclide discharge from a future deep repository. In this report, a new model concept describing radionuclide transport in streams is presented. The main difference from the previous model for running water used by SKB, in which only dilution of the inflow of radionuclides was considered, is that the new model also parameterizes the exchange processes along the stream. This is done in order to investigate the effect of retention on the transport, and to estimate the resulting concentrations in the different parts of the system. The concentrations determined with this new model could later be used for order-of-magnitude predictions of the dose to humans. The presented model concept is divided into two parts: a hydraulic model and a radionuclide transport model. The hydraulic model is used to determine the flow conditions in the stream channel and is based on the assumption of uniform flow and quasi-stationary conditions. The results from the hydraulic model are used in the radionuclide transport model, where the concentration is determined in the different parts of the stream ecosystem. The exchange processes considered are exchange with the sediments due to diffusion, advective transport and sedimentation/resuspension, and uptake of radionuclides in biota. Transport of radionuclides both dissolved and sorbed onto particulates is considered. Sorption kinetics in the stream water phase is implemented, since the residence time in the stream water is probably short in comparison to the time scale of the kinetic sorption.
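As a minimal illustration of how an exchange process modifies the pure-dilution picture (a generic first-order loss sketch, not SKB's actual model; the parameter names and values are assumptions), a steady-state stream with advection velocity u, depth h and a sediment mass-transfer velocity k_ex yields an exponentially decaying downstream concentration:

```python
import math

# Steady-state 1-D balance: u * dC/dx = -(k_ex / h) * C
# =>  C(x) = C0 * exp(-k_ex * x / (u * h))
# u [m/s], h [m], k_ex [m/s] are illustrative assumed values.

def downstream_concentration(c0, x, u=0.5, h=1.0, k_ex=1e-5):
    """Concentration x metres downstream of an inflow of strength c0."""
    return c0 * math.exp(-k_ex * x / (u * h))
```

With these assumed parameters, 10 km of stream removes about 18% of the dissolved inventory to the sediments, which is the kind of retention effect the new model concept is built to quantify.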
In the sediment

  12. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code

    Directory of Open Access Journals (Sweden)

    Susanne Kunkel

    2017-06-01

    Full Text Available NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
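The dry-run idea can be caricatured in a few lines (a toy sketch, not NEST's implementation; the round-robin ownership rule and the per-neuron byte count are assumptions): a single process sizes only the share of the network it would own as one rank of a large parallel job, without ever instantiating the other ranks.

```python
def owned_neurons(n_neurons, n_procs, rank):
    """Round-robin distribution of neuron IDs across ranks (assumed scheme)."""
    return [gid for gid in range(n_neurons) if gid % n_procs == rank]

def dry_run_memory_estimate(n_neurons, n_procs, bytes_per_neuron=1000):
    """Estimate per-process memory by building only rank 0's share,
    as if this process were part of an n_procs-wide parallel run."""
    local = owned_neurons(n_neurons, n_procs, rank=0)
    return len(local) * bytes_per_neuron
```

Because every rank's share has the same size under round-robin ownership, measuring one rank suffices, which is precisely why a dry run on a laptop can predict the memory footprint of a supercomputer job.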

  13. Machine-induced Background Simulation Studies for LHC Run 1, Run 2 and HL-LHC

    CERN Document Server

    Kwee-Hinzmann, Regina; Bruce, Roderik; Cerutti, Francesco; Esposito, Luigi Salvatore; Gibson, Stephen; Lechner, Anton; Garcia Morales, Hector; Yin Vallgren, Christina

    2017-01-01

    The study of machine-induced background in the experiments is vital for several reasons. Too much background can be an issue for operation, and the difficult part is to judge when exactly "too much" is reached. It is a complex topic, as the experiments are directly or indirectly affected by conditions all around the LHC ring, e.g. collimation settings and vacuum quality. A detailed study of background can also help in understanding the machine better and identifying potential issues, and can be complemented by dedicated machine tests. Finally, such a study is relevant for the experiments in order to analyse the characteristics of machine background and make sure not to count it as a new physics signal. This report summarises simulation results for three background sources: local beam-gas, beam-halo from the betatron collimation in IR7 and, for the first time, beam-halo from momentum collimation in IR3. Two of the most dominant sources, local beam-gas and betatron halo, have been systematically studied for LHC Run 1 and Run 2 cases, and ...

  14. Thermodynamical aspects of running vacuum models

    Energy Technology Data Exchange (ETDEWEB)

    Lima, J.A.S. [Universidade de Sao Paulo, Departamento de Astronomia, Sao Paulo (Brazil); Basilakos, Spyros [Academy of Athens, Research Center for Astronomy and Applied Mathematics, Athens (Greece); Sola, Joan [Univ. de Barcelona, High Energy Physics Group, Dept. d' Estructura i Constituents de la Materia, Institut de Ciencies del Cosmos (ICC), Barcelona, Catalonia (Spain)

    2016-04-15

    The thermal history of a large class of running vacuum models, in which the effective cosmological term is described by a truncated power series of the Hubble rate with dominant term Λ(H) ∝ H^(n+2), is discussed in detail. Specifically, by assuming that the ultrarelativistic particles produced by the vacuum decay emerge into space-time in such a way that their energy density ρ_r ∝ T^4, the temperature evolution law and the increasing entropy function are analytically calculated. For the whole class of vacuum models explored here we find that the primeval value of the comoving radiation entropy density (associated with effectively massless particles) starts from zero and evolves extremely fast until reaching a maximum near the end of the vacuum decay phase, where it saturates. The late-time conservation of the radiation entropy during the adiabatic FRW phase also guarantees that the whole class of running vacuum models predicts the same correct value of the present-day entropy, S_0 ∼ 10^87-10^88 (in natural units), independently of the initial conditions. In addition, by assuming the Gibbons-Hawking temperature as an initial condition, we find that the ratio between the late-time and primordial vacuum energy densities is in agreement with naive estimates from quantum field theory, namely ρ_Λ0/ρ_ΛI ∼ 10^-123. Such results are independent of the power n and suggest that the observed Universe may evolve smoothly between two extreme, unstable, non-singular de Sitter phases. (orig.)

  15. Short-run and long-run effect of oil consumption on economic growth: ECM model

    Directory of Open Access Journals (Sweden)

    Sofyan Syahnur

    2014-04-01

    Full Text Available The aim of this study is to investigate the short-run and long-run effects of oil consumption on the economic growth of Aceh, using an Error Correction Model (ECM) over 1985-2008, the period before the fall in world commodity prices. Four types of oil consumption are considered: avtur (jet fuel), gasoline, kerosene and diesel. The data are collected from the Central Bureau of Statistics of Aceh (BPS Aceh). The results show that only diesel consumption has a positive effect on economic growth in Aceh, in both the short run and the long run.
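The error-correction mechanism underlying an ECM can be illustrated generically (a textbook-style sketch with made-up coefficients, not the paper's estimated equation): the change in y responds to short-run changes in x and to the lagged deviation from the long-run relation y = beta * x.

```python
# Delta_y[t] = gamma * Delta_x[t] - alpha * (y[t-1] - beta * x[t-1])
# gamma captures the short-run response; alpha in (0, 1] is the speed
# at which y is pulled back toward its long-run equilibrium beta * x.

def simulate_ecm(x, alpha=0.3, beta=2.0, gamma=0.5, y0=0.0):
    y = [y0]
    for t in range(1, len(x)):
        dy = gamma * (x[t] - x[t - 1]) - alpha * (y[-1] - beta * x[t - 1])
        y.append(y[-1] + dy)
    return y

# with x held constant, y converges to its long-run value beta * x
path = simulate_ecm([10.0] * 200)
```

Holding x fixed, y converges geometrically to beta * x, which is exactly the short-run/long-run separation an ECM is designed to capture.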

  16. Coupling methods for parallel running RELAPSim codes in nuclear power plant simulation

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yankai; Lin, Meng, E-mail: linmeng@sjtu.edu.cn; Yang, Yanhua

    2016-02-15

    When the plant is modeled in detail for high precision, it is hard to achieve real-time calculation with one single RELAP5 in a large-scale simulation. To improve the speed and at the same time ensure the precision of simulation, coupling methods for parallel running RELAPSim codes were proposed in this study. An explicit coupling method via coupling boundaries was realized based on a data-exchange and procedure-control environment. A compromise on synchronization frequency was carefully considered to improve the precision of simulation while guaranteeing real-time performance. The coupling methods were assessed using both single-phase flow models and two-phase flow models, and good agreement was obtained between the splitting-coupling models and the integrated model. The mitigation of SGTR was performed as an integral application of the coupling models. A large-scope NPP simulator was developed adopting six splitting-coupling models of RELAPSim and other simulation codes. The coupling models could improve the speed of simulation significantly and make real-time calculation possible. In this paper, the coupling of the models in the engineering simulator is taken as an example to expound the coupling methods, i.e., coupling between parallel running RELAPSim codes, and coupling between RELAPSim code and other types of simulation codes. However, the coupling methods are also applicable to other simulators, for example one employing ATHLET instead of RELAP5, or another logic code instead of SIMULINK. It is believed that the coupling methods are generally applicable to NPP simulators regardless of the specific codes chosen in this paper.
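The trade-off between synchronization frequency and accuracy in explicit coupling can be demonstrated with a toy problem (two scalar "codes" exchanging one boundary value each; the equations and all parameters are illustrative assumptions, not RELAP5 physics):

```python
def coupled_run(t_end=10.0, dt=1e-3, c=0.5, n_sync=10):
    """Two solvers each own one temperature; boundary values are
    exchanged only every n_sync steps (explicit coupling), so between
    synchronization points each side integrates against a stale value."""
    T1, T2 = 100.0, 0.0
    T1_seen, T2_seen = T1, T2   # last values received from the peer
    for i in range(int(t_end / dt)):
        if i % n_sync == 0:      # synchronization point
            T1_seen, T2_seen = T1, T2
        T1 += dt * c * (T2_seen - T1)
        T2 += dt * c * (T1_seen - T2)
    return T1, T2

tight = coupled_run(n_sync=1)    # exchange every step
loose = coupled_run(n_sync=100)  # exchange rarely
```

Exchanging less often speeds up a real coupled simulation (less communication and fewer blocking waits) at a small, quantifiable cost in accuracy, which is the compromise on synchronization frequency the abstract refers to.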

  17. How to help CERN to run more simulations

    CERN Multimedia

    The LHC@home team

    2016-01-01

    With LHC@home you can actively contribute to the computing capacity of the Laboratory!   You may think that CERN's large Data Centre and the Worldwide LHC Computing Grid have enough computing capacity for all the Laboratory’s users. However, given the massive amount of data coming from LHC experiments and other sources, additional computing resources are always needed, notably for simulations of physics events, or accelerator and detector upgrades. This is an area where you can help, by installing BOINC and running simulations from LHC@home on your office PC or laptop. These background simulations will not disturb your work, as BOINC can be configured to automatically stop computing when your PC is in use. As mentioned in earlier editions of the Bulletin (see here and here), contributions from LHC@home volunteers have played a major role in LHC beam simulation studies. The computing capacity they made available corresponds to about half the capacity of the CERN...

  18. Dynamic viscoelastic behavior of lower extremity tendons during simulated running.

    Science.gov (United States)

    De Zee, M; Bojsen-Moller, F; Voigt, M

    2000-10-01

    The aim of this project was to see whether the tendon would show creep during long-term dynamic loading (here referred to as dynamic creep). Pig tendons were loaded by a material-testing machine with a human Achilles tendon force profile (1.37 Hz, 3% strain, 1,600 cycles), which was obtained in an earlier in vivo experiment during running. All the pig tendons showed some dynamic creep during cyclic loading (between 0.23 +/- 0.15 and 0.42 +/- 0.21%, means +/- SD). The pig tendon data were used as an input of a model to predict dynamic creep in the human Achilles tendon during running of a marathon and to evaluate whether there might consequently be an influence on group Ia afferent-mediated length and velocity feedback from muscle spindles. The predicted dynamic creep in the Achilles tendon was considered to be too small to have a significant influence on the length and velocity feedback from soleus during running. In spite of the characteristic nonlinear viscoelastic behavior of tendons, our results demonstrate that these properties have a minor effect on the ability of tendons to act as predictable, stable, and elastic force transmitters during long-term cyclic loading.
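The notion of dynamic creep, a slow drift of peak strain under cyclic loading, can be reproduced qualitatively with a toy Kelvin-Voigt element (illustrative modulus, viscosity and load amplitude; not the pig-tendon data or the paper's protocol):

```python
import math

# Kelvin-Voigt: sigma = E*eps + eta*deps/dt, driven by a non-negative
# cyclic stress at 1.37 Hz (the loading frequency quoted above).
# "Dynamic creep" shows up as the per-cycle peak strain drifting
# upward before saturating.

def peak_strains(E=1.0e9, eta=5.0e9, f=1.37, cycles=100, dt=1e-4):
    eps, peaks = 0.0, []
    steps_per_cycle = int(1.0 / (f * dt))
    for c in range(cycles):
        peak = 0.0
        for i in range(steps_per_cycle):
            t = (c * steps_per_cycle + i) * dt
            sigma = 0.5e7 * (1.0 - math.cos(2.0 * math.pi * f * t))
            eps += dt * (sigma - E * eps) / eta   # deps/dt from the model
            peak = max(peak, eps)
        peaks.append(peak)
    return peaks
```

The peak strain rises over the early cycles and then saturates, mirroring the small, bounded dynamic creep (a few tenths of a percent) reported in the abstract.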

  19. Simulating three dimensional wave run-up over breakwaters covered by antifer units

    Directory of Open Access Journals (Sweden)

    A. Najafi-Jilani

    2014-06-01

    Full Text Available The paper presents a numerical analysis of wave run-up over rubble-mound breakwaters covered by antifer units, using a technique integrating Computer-Aided Design (CAD) and Computational Fluid Dynamics (CFD) software. Direct application of the Navier-Stokes equations within the armour blocks is used to provide a more reliable approach to simulating wave run-up over breakwaters. A well-tested Reynolds-averaged Navier-Stokes (RANS) Volume of Fluid (VOF) code (Flow-3D) was adopted for the CFD computations. The computed results were compared with experimental data to check the validity of the model. Numerical results showed that the direct three-dimensional (3D) simulation method can deliver accurate results for wave run-up over rubble-mound breakwaters. The results showed that the placement pattern of the antifer units has a great impact on wave run-up: changing the placement pattern from regular to double pyramid reduced the wave run-up by approximately 30%. Analysis was done to investigate the influences of surface roughness, energy dissipation in the pores of the armour layer, and the reduction in wave run-up due to inflow into the armour and stone layers.

  20. Simulated tsunami run-up amplification factors around Penang Island for preliminary risk assessment

    Science.gov (United States)

    Lim, Yong Hui; Kh'ng, Xin Yi; Teh, Su Yean; Koh, Hock Lye; Tan, Wai Kiat

    2017-08-01

    The mega-tsunami Andaman that struck Malaysia on 26 December 2004 affected 200 kilometers of the northwest Peninsular Malaysia coastline, from Perlis to Selangor. The tsunami scientific community anticipates that the next mega-tsunami could occur at any time. This rare catastrophic event has drawn the attention of the Malaysian government to appropriate risk reduction measures, including timely and orderly evacuation. To effectively evacuate ordinary citizens to safe ground or the nearest designated emergency shelter, a well-prepared evacuation route is essential, with the estimated tsunami run-up heights and inundation distances on land clearly indicated on the evacuation map. The run-up heights and inundation distances are simulated by the in-house model 2-D TUNA-RP, based upon credible scientific tsunami source scenarios derived from tectonic activity around the region. To provide a useful tool for estimating run-up heights along the entire coast of Penang Island, we compute tsunami amplification factors based upon 2-D TUNA-RP model simulations in this paper. The inundation map and run-up amplification factors for six domains along the entire coastline of Penang Island are provided. The comparison between measured tsunami wave heights for the 2004 Andaman tsunami and TUNA-RP simulated values demonstrates good agreement.
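For a rough feel of how amplification factors arise, Green's law is a standard first-order shoaling rule (a generic estimate, not the TUNA-RP computation): as water shallows from depth d1 to d2, wave height grows as the fourth root of the depth ratio.

```python
# Green's law:  H2 = H1 * (d1 / d2) ** 0.25
# A crude screening-level amplification factor; real run-up also
# depends on bathymetry, wave period and nonlinearity, which is why
# models such as TUNA-RP are needed for usable numbers.

def greens_law_height(h1, d1, d2):
    return h1 * (d1 / d2) ** 0.25

# a 1 m wave in 100 m of water reaching 1 m depth grows ~3.2-fold
amplified = greens_law_height(1.0, 100.0, 1.0)
```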

  1. Long-run properties of some Danish macroeconometric models

    DEFF Research Database (Denmark)

    Harck, Søren H.

    This paper provides an analytical treatment of various long-run aspects of the MONA model as well as the SMEC model of the Danish economy. More specifically, the analysis lays bare the long-run and steady-state nexus between unemployment and, respectively, inflation and the wage share implied...

  2. Simulating Ideal Assistive Devices to Reduce the Metabolic Cost of Running

    Science.gov (United States)

    Uchida, Thomas K.; Seth, Ajay; Pouya, Soha; Dembia, Christopher L.; Hicks, Jennifer L.; Delp, Scott L.

    2016-01-01

    Tools have been used for millions of years to augment the capabilities of the human body, allowing us to accomplish tasks that would otherwise be difficult or impossible. Powered exoskeletons and other assistive devices are sophisticated modern tools that have restored bipedal locomotion in individuals with paraplegia and have endowed unimpaired individuals with superhuman strength. Despite these successes, designing assistive devices that reduce energy consumption during running remains a substantial challenge, in part because these devices disrupt the dynamics of a complex, finely tuned biological system. Furthermore, designers have hitherto relied primarily on experiments, which cannot report muscle-level energy consumption and are fraught with practical challenges. In this study, we use OpenSim to generate muscle-driven simulations of 10 human subjects running at 2 and 5 m/s. We then add ideal, massless assistive devices to our simulations and examine the predicted changes in muscle recruitment patterns and metabolic power consumption. Our simulations suggest that an assistive device should not necessarily apply the net joint moment generated by muscles during unassisted running, and an assistive device can reduce the activity of muscles that do not cross the assisted joint. Our results corroborate and suggest biomechanical explanations for similar effects observed by experimentalists, and can be used to form hypotheses for future experimental studies. The models, simulations, and software used in this study are freely available at simtk.org and can provide insight into assistive device design that complements experimental approaches. PMID:27656901

  3. Pairwise velocities in the "Running FLRW" cosmological model

    Science.gov (United States)

    Bibiano, Antonio; Croton, Darren J.

    2017-01-01

    We present an analysis of the pairwise velocity statistics from a suite of cosmological N-body simulations describing the "Running Friedmann-Lemaître-Robertson-Walker" (R-FLRW) cosmological model. This model is based on quantum field theory in a curved space-time and extends ΛCDM with a time-evolving vacuum energy density, ρ_Λ. To enforce local conservation of matter, a time-evolving gravitational coupling is also included. Our results constitute the first study of velocities in the R-FLRW cosmology, and we also compare with other dark energy simulation suites, repeating the same analysis. We find a strong degeneracy between the pairwise velocity and σ8 at z = 0 for almost all scenarios considered, which remains even when we look back to epochs as early as z = 2. We also investigate various Coupled Dark Energy models, some of which show minimal degeneracy, and reveal interesting deviations from ΛCDM which could be readily exploited by future cosmological observations to test and further constrain our understanding of dark energy.

  4. Tsunami generation, propagation, and run-up with a high-order Boussinesq model

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Madsen, Per A.

    2009-01-01

    In this work we extend a high-order Boussinesq-type (finite difference) model, capable of simulating waves out to wavenumber times depth kh ..., to landslide-induced tsunamis. The extension is straightforward, requiring only ... The Boussinesq-type model is then used to simulate numerous tsunami-type events generated from submerged landslides, in both one and two horizontal dimensions. The results again compare well against previous experiments and/or numerical simulations. The new extension complements recently developed run...

  5. Thermoregulation and endurance running in extinct hominins: Wheeler's models revisited.

    Science.gov (United States)

    Ruxton, Graeme D; Wilkinson, David M

    2011-08-01

    Thermoregulation is often cited as a potentially important influence on the evolution of hominins, thanks to a highly influential series of papers in the Journal of Human Evolution in the 1980s and 1990s by Peter Wheeler. These papers developed quantitative modeling of heat balance between different potential hominins and their environment. Here, we return to these models, update them in line with new developments and measurements in animal thermal biology, and modify them to represent a running hominin rather than the stationary form considered previously. In particular, we use our modified Wheeler model to investigate thermoregulatory aspects of the evolution of endurance running ability. Our model suggests that for endurance running to be possible, a hominin would need locomotive efficiency, sweating rates, and areas of hairless skin similar to modern humans. We argue that these restrictions suggest that endurance running may have been possible (from a thermoregulatory viewpoint) for Homo erectus, but is unlikely for any earlier hominins.

  6. Modelling surface run-off and trends analysis over India

    Science.gov (United States)

    Gupta, P. K.; Chauhan, S.; Oza, M. P.

    2016-08-01

    The present study is mainly concerned with detecting the trend of run-off over the mainland of India, during a time period of 35 years, from 1971-2005 (May-October). Rainfall, soil texture, land cover types, slope, etc., were processed and run-off modelling was done using the Natural Resources Conservation Service (NRCS) model with modifications and cell size of 5×5 km. The slope and antecedent moisture corrections were incorporated in the existing model. Trend analysis of estimated run-off was done by taking into account different analysis windows such as cell, medium and major river basins, meteorological sub-divisions and elevation zones across India. It was estimated that out of the average 1012.5 mm of rainfall over India (considering the study period of 35 years), 33.8% got converted to surface run-off. An exponential model was developed between the rainfall and the run-off that predicted the run-off with an R^2 of 0.97 and RMSE of 8.31 mm. The run-off trend analysed using the Mann-Kendall test revealed that a significant pattern exists in 22 medium, two major river basins and three meteorological sub-divisions, while there was no evidence of a statistically significant trend in the elevation zones. Among the medium river basins, the highest positive rate of change in the run-off was observed in the Kameng basin (13.6 mm/yr), while the highest negative trend was observed in the Tista upstream basin (-21.4 mm/yr). Changes in run-off provide valuable information for understanding the region's sensitivity to climatic variability.
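The base relation of the NRCS curve-number model named above is simple to state (shown here without the paper's slope and antecedent-moisture corrections):

```python
# Standard NRCS curve-number method (metric form):
#   S  = 25400 / CN - 254            potential retention [mm]
#   Ia = 0.2 * S                     initial abstraction [mm]
#   Q  = (P - Ia)^2 / (P - Ia + S)   for P > Ia, else Q = 0

def nrcs_runoff(p_mm, cn):
    """Direct run-off Q [mm] for storm rainfall P [mm] and curve number CN."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

For example, CN = 75 and P = 100 mm give S ≈ 84.7 mm, Ia ≈ 16.9 mm and Q ≈ 41 mm; higher curve numbers (more impervious or wetter catchments) convert a larger fraction of rainfall to run-off.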

  7. Modelling surface run-off and trends analysis over India

    Indian Academy of Sciences (India)

    P K Gupta; S Chauhan; M P Oza

    2016-08-01

    The present study is mainly concerned with detecting the trend of run-off over the mainland of India, during a time period of 35 years, from 1971–2005 (May–October). Rainfall, soil texture, land cover types, slope, etc., were processed and run-off modelling was done using the Natural Resources Conservation Service (NRCS) model with modifications and cell size of 5×5 km. The slope and antecedent moisture corrections were incorporated in the existing model. Trend analysis of estimated run-off was done by taking into account different analysis windows such as cell, medium and major river basins, meteorological sub-divisions and elevation zones across India. It was estimated that out of the average 1012.5 mm of rainfall over India (considering the study period of 35 years), 33.8% got converted to surface run-off. An exponential model was developed between the rainfall and the run-off that predicted the run-off with an $R^2$ of 0.97 and RMSE of 8.31 mm. The run-off trend analysed using the Mann–Kendall test revealed that a significant pattern exists in 22 medium, two major river basins and three meteorological sub-divisions, while there was no evidence of a statistically significant trend in the elevation zones. Among the medium river basins, the highest positive rate of change in the run-off was observed in the Kameng basin (13.6 mm/yr), while the highest negative trend was observed in the Tista upstream basin (−21.4 mm/yr). Changes in run-off provide valuable information for understanding the region’s sensitivity to climatic variability.

  8. Dynamics of the in-run in ski jumping: a simulation study.

    Science.gov (United States)

    Ettema, Gertjan J C; Bråten, Steinar; Bobbert, Maarten F

    2005-08-01

    A ski jumper tries to maintain an aerodynamic position in the in-run despite changing environmental forces. The purpose of this study was to analyze the mechanical demands on a ski jumper taking the in-run in a static position. We simulated the in-run in ski jumping with a 4-segment forward dynamic model (foot, leg, thigh, and upper body). The curved path of the in-run was used as a kinematic constraint, and drag, lift, and snow friction were incorporated. Drag and snow friction created a forward rotating moment that had to be counteracted by a plantar flexion moment, and caused the line of action of the normal force to pass anteriorly to the center of mass continuously. The normal force increased from 0.88 G on the first straight to 1.65 G in the curve. The required knee joint moment increased more because of an altered center of pressure. During the transition from the straight to the curve there was a rapid forward shift of the center of pressure under the foot, reflecting a short but high angular acceleration. Because unrealistically high rates of change of moment are required, an athlete cannot do this without changing body configuration, which reduces the required rate of moment change.
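The quoted G-loads can be sanity-checked with a point-mass sketch (the slope angle, speed and curve radius below are assumed for illustration, not taken from the paper): on a straight of inclination θ the normal load is cos θ body weights, and in the transition curve a centripetal term v²/(r·g) is added.

```python
import math

def normal_load_g(slope_deg, v=0.0, r=None):
    """Normal force in units of body weight (G) for a point mass on
    a slope of given inclination; r is the curve radius if any."""
    g = 9.81
    load = math.cos(math.radians(slope_deg))  # straight-slope component
    if r is not None:
        load += v ** 2 / (r * g)              # centripetal component
    return load

straight = normal_load_g(28.0)                # ~0.88 G on a 28 deg slope
curve = normal_load_g(10.0, v=25.0, r=90.0)   # flatter slope + curvature
```

With these assumed numbers the straight gives about 0.88 G and the curve about 1.7 G, the same order as the 0.88 G and 1.65 G reported in the abstract.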

  9. Terror birds on the run: a mechanical model to estimate its maximum running speed

    Science.gov (United States)

    Blanco, R. Ernesto; Jones, Washington W

    2005-01-01

    ‘Terror bird’ is a common name for the family Phorusrhacidae. These large terrestrial birds were probably the dominant carnivores on the South American continent from the Middle Palaeocene to the Pliocene–Pleistocene limit. Here we use a mechanical model based on tibiotarsal strength to estimate the maximum running speeds of three species of terror birds: Mesembriornis milneedwardsi, Patagornis marshi and a specimen of Phorusrhacinae gen. The model is validated on three large living terrestrial bird species. On the basis of the tibiotarsal strength we propose that Mesembriornis could have used its legs to break long bones and access their marrow. PMID:16096087

  10. Running On-Demand Strong Ground Motion Simulations with the Second-Generation Broadband Platform

    Science.gov (United States)

    Callaghan, S.; Maechling, P. J.; Graves, R. W.; Somerville, P. G.; Collins, N.; Olsen, K. B.; Imperatori, W.; Jones, M.; Archuleta, R. J.; Schmedes, J.; Jordan, T. H.; Broadband Platform Working Group

    2010-12-01

    We have developed the second-generation Southern California Earthquake Center (SCEC) Broadband Platform by integrating scientific modeling codes into a system capable of computing broadband seismograms (0-10 Hz) for historical and scenario earthquakes in California. The SCEC Broadband Platform is a collaborative software development project involving SCEC researchers, graduate students, and the SCEC Community Modeling Environment (SCEC/CME) software development group. SCEC scientific groups have contributed software modules to the Broadband Platform including rupture generation, low-frequency deterministic seismogram synthesis, high-frequency stochastic seismogram synthesis, and non-linear site effects. These complex scientific codes have been integrated into a system that supports easy on-demand computation of broadband seismograms. The SCEC Broadband Platform is designed to be used by both scientific and engineering researchers familiar with ground motion simulations. Users may calculate broadband seismograms for both historical earthquakes (validation events including Northridge, Loma Prieta, and Landers) and user-defined earthquakes. Users may select among various codebases for rupture generation, low-frequency synthesis, high-frequency synthesis, and incorporation of site effects, with the option of running a goodness-of-fit comparison against observed or simulated seismograms. The platform produces a variety of data products, including broadband seismograms, rupture visualizations, and goodness-of-fit plots. The Broadband Platform was implemented using software development best practices that support software accuracy, reliability, and ease of use, including version control, user documentation, acceptance tests, and formal software releases. Users can install the platform on their own machine, verify that it is installed correctly, and run their own simulations on demand. 
The Broadband Platform enables users to run complex ground motion modeling codes without

  11. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    Science.gov (United States)

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach of using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs in order to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches.
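The core rescaling trick can be sketched on the CPU (the GPU port is the paper's contribution; the path-length distribution below is synthetic): store each detected photon's total path length from a single baseline run, then evaluate diffuse reflectance for any absorption coefficient by reweighting with Beer-Lambert factors instead of re-running the whole simulation.

```python
import math, random

# Baseline run assumed performed with zero absorption; each detected
# photon i contributes a stored total path length L_i [cm]. Then
#   R(mu_a) = (1/N) * sum_i exp(-mu_a * L_i)
# gives reflectance for any absorption coefficient mu_a [1/cm].

random.seed(1)
N = 50_000
# stand-in path lengths: exponentially distributed, mean 2 cm (assumed)
paths = [random.expovariate(1.0 / 2.0) for _ in range(N)]

def reflectance(mu_a):
    return sum(math.exp(-mu_a * L) for L in paths) / len(paths)

low, high = reflectance(0.1), reflectance(1.0)
```

Evaluating a new mu_a costs one pass over the stored paths, and every photon is reweighted independently, which is exactly the kind of embarrassingly parallel workload that maps well onto a GPU.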

  12. Numerical Modelling of Wave Run-Up: Regular Waves

    DEFF Research Database (Denmark)

    Ramirez, Jorge; Frigaard, Peter; Andersen, Thomas Lykke

    2011-01-01

    Wave loads are important in problems related to offshore structures, such as wave run-up and slamming. The computation of such wave problems is carried out by CFD models. This paper presents one model, NS3, which solves the 3D Navier-Stokes equations and uses the Volume of Fluid (VOF) method to treat the free...

  14. 12 weeks of simulated barefoot running changes foot-strike patterns in female runners.

    Science.gov (United States)

    McCarthy, C; Fleming, N; Donne, B; Blanksby, B

    2014-05-01

To investigate the effect of a transition program of simulated barefoot running (SBR) on running kinematics and foot-strike patterns, female recreational athletes (n=9, age 29 ± 3 yrs) without SBR experience gradually increased running distance in Vibram FiveFingers SBR footwear over 12 weeks. Matched controls (n=10, age 30 ± 4 yrs) continued running in standard footwear. A 3-D motion analysis of treadmill running at 12 km x h(-1) was performed by both groups, barefoot and shod, pre- and post-intervention. Post-intervention data indicated a more-forefoot strike pattern in the SBR group compared to controls, both running barefoot (P>0.05) and shod (P<0.05). When running barefoot, there were significant kinematic differences across time in the SBR group for ankle flexion angle at toe-off (P<0.05), indicating a shift towards "barefoot" kinematics, regardless of preferred footwear.

  15. Matter density perturbation and power spectrum in running vacuum model

    CERN Document Server

    Geng, Chao-Qiang

    2016-01-01

We investigate the matter density perturbation $\delta_m$ and power spectrum $P(k)$ in the running vacuum model (RVM) with the cosmological constant being a function of the Hubble parameter, given by $\Lambda = \Lambda_0 + 6\sigma H H_0 + 3\nu H^2$.

  16. Simulations of flow and prediction of sediment movement in Wymans Run, Cochranton Borough, Crawford County, Pennsylvania

    Science.gov (United States)

    Hittle, Elizabeth

    2011-01-01

    In small watersheds, runoff entering local waterways from large storms can cause rapid and profound changes in the streambed that can contribute to flooding. Wymans Run, a small stream in Cochranton Borough, Crawford County, experienced a large rain event in June 2008 that caused sediment to be deposited at a bridge. A hydrodynamic model, Flow and Sediment Transport and Morphological Evolution of Channels (FaSTMECH), which is incorporated into the U.S. Geological Survey Multi-Dimensional Surface-Water Modeling System (MD_SWMS) was constructed to predict boundary shear stress and velocity in Wymans Run using data from the June 2008 event. Shear stress and velocity values can be used to indicate areas of a stream where sediment, transported downstream, can be deposited on the streambed. Because of the short duration of the June 2008 rain event, streamflow was not directly measured but was estimated using U.S. Army Corps of Engineers one-dimensional Hydrologic Engineering Centers River Analysis System (HEC-RAS). Scenarios to examine possible engineering solutions to decrease the amount of sediment at the bridge, including bridge expansion, channel expansion, and dredging upstream from the bridge, were simulated using the FaSTMECH model. Each scenario was evaluated for potential effects on water-surface elevation, boundary shear stress, and velocity.
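The report maps modelled boundary shear stress and velocity to zones of likely sediment deposition; a standard way to make that link quantitative is the dimensionless Shields stress. The sketch below uses textbook rule-of-thumb values (quartz grain density, a critical Shields value near 0.045), not numbers taken from the Wymans Run study.

```python
# Shields criterion for incipient sediment motion: a common way to turn
# a modelled boundary shear stress into a mobile/immobile judgement.
# Densities and the 0.045 threshold are textbook assumptions.

def shields_parameter(tau_b_pa, d_grain_m, rho_s=2650.0, rho=1000.0, g=9.81):
    """Dimensionless Shields stress tau* = tau_b / ((rho_s - rho) * g * d)."""
    return tau_b_pa / ((rho_s - rho) * g * d_grain_m)

theta = shields_parameter(5.0, 0.002)   # 5 Pa acting on 2 mm gravel
moves = theta > 0.045                   # above threshold: grains mobilised
```

Areas where the modelled shear stress drops below the critical value for the local grain size are candidate deposition zones, which is how shear-stress maps from a model like FaSTMECH are typically read.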

  17. A VRLA battery simulation model

    Energy Technology Data Exchange (ETDEWEB)

    Pascoe, P.E.; Anbuky, A.H. [Invensys Energy Systems NZ Limited, Christchurch (New Zealand)

    2004-05-01

A valve regulated lead acid (VRLA) battery simulation model is an invaluable tool for the standby power system engineer. The obvious use for such a model is to allow the assessment of battery performance. This may involve determining the influence of cells suffering from state of health (SOH) degradation on the performance of the entire string, or the running of test scenarios to ascertain the most suitable battery size for the application. In addition, it enables the engineer to assess the performance of the overall power system. This includes, for example, running test scenarios to determine the benefits of various load shedding schemes. It also allows the assessment of other power system components, either for determining their requirements and/or vulnerabilities. Finally, a VRLA battery simulation model is vital as a stand-alone tool for educational purposes. Despite the fundamentals of the VRLA battery having been established for over 100 years, its operating behaviour is often poorly understood. An accurate simulation model enables the engineer to gain a better understanding of VRLA battery behaviour. A system-level, multipurpose VRLA battery simulation model is presented. It allows an arbitrary battery (capacity, SOH, number of cells and number of strings) to be simulated under arbitrary operating conditions (discharge rate, ambient temperature, end voltage, charge rate and initial state of charge). The model accurately reflects the VRLA battery discharge and recharge behaviour. This includes the complex start of discharge region known as the coup de fouet. (author)
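As a much simpler point of comparison to the system-level model described above (which also captures SOH, temperature and the coup de fouet), the classical Peukert relation gives a first-order run-time estimate from rated capacity alone. The exponent k = 1.2 below is a typical assumed value for lead-acid cells, not one from the paper.

```python
def peukert_runtime_h(c_rated_ah, h_rated, i_load_a, k=1.2):
    """Peukert's law, t = H * (C / (I * H)) ** k: a classical first-order
    run-time estimate, far simpler than the system-level VRLA model in
    the abstract. k = 1.2 is an assumed typical lead-acid exponent."""
    return h_rated * (c_rated_ah / (i_load_a * h_rated)) ** k

# 100 Ah battery rated over 20 h (5 A rated current):
t_rated = peukert_runtime_h(100.0, 20.0, 5.0)    # rated load: 20 h
t_double = peukert_runtime_h(100.0, 20.0, 10.0)  # double load: < 10 h
```

Doubling the load current more than halves the run time, which is the essential nonlinearity any battery sizing exercise has to account for.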

  18. Test of the classic model for predicting endurance running performance.

    Science.gov (United States)

    McLaughlin, James E; Howley, Edward T; Bassett, David R; Thompson, Dixie L; Fitzhugh, Eugene C

    2010-05-01

    To compare the classic physiological variables linked to endurance performance (VO2max, %VO2max at lactate threshold (LT), and running economy (RE)) with peak treadmill velocity (PTV) as predictors of performance in a 16-km time trial. Seventeen healthy, well-trained distance runners (10 males and 7 females) underwent laboratory testing to determine maximal oxygen uptake (VO2max), RE, percentage of maximal oxygen uptake at the LT (%VO2max at LT), running velocity at LT, and PTV. Velocity at VO2max (vVO2max) was calculated from RE and VO2max. Three stepwise regression models were used to determine the best predictors (classic vs treadmill performance protocols) for the 16-km running time trial. Simple Pearson correlations of the variables with 16-km performance showed vVO2max to have the highest correlation (r = -0.972) and %VO2max at the LT the lowest (r = 0.136). The correlation coefficients for LT, VO2max, and PTV were very similar in magnitude (r = -0.903 to r = -0.892). When VO2max, %VO2max at LT, RE, and PTV were entered into SPSS stepwise analysis, VO2max explained 81.3% of the total variance, and RE accounted for an additional 10.7%. vVO2max was shown to be the best predictor of the 16-km performance, accounting for 94.4% of the total variance. The measured velocity at VO2max (PTV) was highly correlated with the estimated velocity at vVO2max (r = 0.8867). Among well-trained subjects heterogeneous in VO2max and running performance, vVO2max is the best predictor of running performance because it integrates both maximal aerobic power and the economy of running. The PTV is linked to the same physiological variables that determine vVO2max.
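The key derived quantity in the abstract, vVO2max, is computed directly from the two measured variables it integrates. A minimal sketch, with illustrative athlete numbers rather than subject data from the study:

```python
def velocity_at_vo2max(vo2max_ml_kg_min, re_ml_kg_km):
    """vVO2max in km/h: maximal aerobic power divided by the oxygen cost
    of running, i.e. the integration of VO2max and RE described above."""
    return vo2max_ml_kg_min / re_ml_kg_km * 60.0

# Illustrative runner: VO2max of 70 ml/kg/min and a running economy of
# 200 ml/kg/km (assumed values).
v = velocity_at_vo2max(70.0, 200.0)
```

The same formula makes the paper's conclusion intuitive: two runners with equal VO2max but different economy get different vVO2max, and it is the latter that tracks 16-km performance.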

  19. Comparison of minimalist footwear strategies for simulating barefoot running: a randomized crossover study.

    Directory of Open Access Journals (Sweden)

    Karsten Hollander

    Full Text Available Possible benefits of barefoot running have been widely discussed in recent years. Uncertainty exists about which footwear strategy adequately simulates barefoot running kinematics. The objective of this study was to investigate the effects of athletic footwear with different minimalist strategies on running kinematics. Thirty-five distance runners (22 males, 13 females, 27.9 ± 6.2 years, 179.2 ± 8.4 cm, 73.4 ± 12.1 kg, 24.9 ± 10.9 km x week(-1 performed a treadmill protocol at three running velocities (2.22, 2.78 and 3.33 m x s(-1 using four footwear conditions: barefoot, uncushioned minimalist shoes, cushioned minimalist shoes, and standard running shoes. 3D kinematic analysis was performed to determine ankle and knee angles at initial foot-ground contact, rate of rear-foot strikes, stride frequency and step length. Ankle angle at foot strike, step length and stride frequency were significantly influenced by footwear conditions (p<0.001 at all running velocities. Posthoc pairwise comparisons showed significant differences (p<0.001 between running barefoot and all shod situations as well as between the uncushioned minimalistic shoe and both cushioned shoe conditions. The rate of rear-foot strikes was lowest during barefoot running (58.6% at 3.33 m x s(-1, followed by running with uncushioned minimalist shoes (62.9%, cushioned minimalist (88.6% and standard shoes (94.3%. Aside from showing the influence of shod conditions on running kinematics, this study helps to elucidate differences between footwear marked as minimalist shoes and their ability to mimic barefoot running adequately. These findings have implications on the use of footwear applied in future research debating the topic of barefoot or minimalist shoe running.

  20. Variations in Hypoxia Impairs Muscle Oxygenation and Performance during Simulated Team-Sport Running.

    Science.gov (United States)

    Sweeting, Alice J; Billaut, François; Varley, Matthew C; Rodriguez, Ramón F; Hopkins, William G; Aughey, Robert J

    2017-01-01

Purpose: To quantify the effect of acute hypoxia on muscle oxygenation and power during simulated team-sport running. Methods: Seven individuals performed repeated and single sprint efforts, embedded in a simulated team-sport running protocol, on a non-motorized treadmill in normoxia (sea level) and acute normobaric hypoxia (simulated altitudes of 2,000 and 3,000 m). Mean and peak power were quantified during all sprints and repeated sprints. Mean total work, heart rate, blood oxygen saturation, and quadriceps muscle deoxyhaemoglobin concentration (assessed via near-infrared spectroscopy) were measured over the entire protocol. A linear mixed model was used to estimate performance and physiological effects across each half of the protocol. Changes were expressed in standardized units for assessment of magnitude. Uncertainty in the changes was expressed as a 90% confidence interval and interpreted via non-clinical magnitude-based inference. Results: Mean total work was reduced at 2,000 m (-10%, 90% confidence limits ±6%) and 3,000 m (-15%, ±5%) compared with sea-level. Mean heart rate was reduced at 3,000 m compared with 2,000 m (-3, ±3 min(-1)) and sea-level (-3, ±3 min(-1)). Blood oxygen saturation was lower at 2,000 m (-8, ±3%) and 3,000 m (-15, ±2%) compared with sea-level. Sprint mean power across the entire protocol was reduced at 3,000 m compared with 2,000 m (-12%, ±3%) and sea-level (-14%, ±4%). In the second half of the protocol, sprint mean power was reduced at 3,000 m compared to 2,000 m (-6%, ±4%). Sprint mean peak power across the entire protocol was lowered at 2,000 m (-10%, ±6%) and 3,000 m (-16%, ±6%) compared with sea-level. During repeated sprints, mean peak power was lower at 2,000 m (-8%, ±7%) and 3,000 m (-8%, ±7%) compared with sea-level. In the second half of the protocol, repeated sprint mean power was reduced at 3,000 m compared to 2,000 m (-7%, ±5%) and sea-level (-9%, ±5%). Quadriceps muscle deoxyhaemoglobin concentration
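The "change, ±confidence limits" style of reporting above (e.g. -10%, ±6%) is a paired mean difference with a 90% interval half-width. A minimal sketch with hypothetical per-athlete data (the study's raw values are not in the abstract):

```python
import numpy as np

def change_with_90cl(x, y, t_crit):
    """Paired mean difference with a 90% confidence half-width, the form
    behind estimates such as "-10%, +/-6%". t_crit is the two-sided 90%
    t critical value for n-1 degrees of freedom."""
    d = np.asarray(y, float) - np.asarray(x, float)
    half = t_crit * d.std(ddof=1) / np.sqrt(d.size)
    return d.mean(), half

# Hypothetical per-athlete total work (kJ) at sea level vs 3,000 m
# (n = 7, so df = 6 and the two-sided 90% t critical value is 1.943).
sea = np.array([100.0, 102.0, 98.0, 101.0, 99.0, 103.0, 97.0])
alt = np.array([86.0, 85.0, 84.0, 88.0, 82.0, 90.0, 83.0])
mean_change, cl = change_with_90cl(sea, alt, t_crit=1.943)
```

Magnitude-based inference then compares this interval against a smallest-worthwhile-change threshold rather than against zero alone.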

  1. PEP Run Report for Simulant Shakedown/Functional Testing

    Energy Technology Data Exchange (ETDEWEB)

    Josephson, Gary B.; Geeting, John GH; Bredt, Ofelia P.; Burns, Carolyn A.; Golovich, Elizabeth C.; Guzman-Leong, Consuelo E.; Kurath, Dean E.; Sevigny, Gary J.

    2009-12-29

    Pacific Northwest National Laboratory (PNNL) has been tasked by Bechtel National Inc. (BNI) on the River Protection Project-Waste Treatment Plant (RPP-WTP) project to perform research and development activities to resolve technical issues identified for the Pretreatment Facility (PTF). The Pretreatment Engineering Platform (PEP) was designed, constructed, and operated as part of a plan to respond to issue M12, "Undemonstrated Leaching Processes." The PEP is a 1/4.5-scale test platform designed to simulate the WTP pretreatment caustic leaching, oxidative leaching, ultrafiltration solids concentration, and slurry washing processes. The PEP replicates the WTP leaching processes using prototypic equipment and control strategies. The PEP also includes non-prototypic ancillary equipment to support the core processing. Two operating scenarios are currently being evaluated for the ultrafiltration process (UFP) and leaching operations. The first scenario has caustic leaching performed in the UFP-2 ultrafiltration feed vessels (i.e., vessel UFP-VSL-T02A in the PEP; and vessels UFP-VSL-00002A and B in the WTP PTF). The second scenario has caustic leaching conducted in the UFP-1 ultrafiltration feed preparation vessels (i.e., vessels UFP-VSL-T01A and B in the PEP; vessels UFP-VSL-00001A and B in the WTP PTF). In both scenarios, 19-M sodium hydroxide solution (NaOH, caustic) is added to the waste slurry in the vessels to leach solid aluminum compounds (e.g., gibbsite, boehmite). Caustic addition is followed by a heating step that uses direct injection of steam to accelerate the leach process. Following the caustic leach, the vessel contents are cooled using vessel cooling jackets and/or external heat exchangers. The main difference between the two scenarios is that for leaching in UFP-1, the 19-M NaOH is added to un-concentrated waste slurry (3-8 wt% solids), while for leaching in UFP-2, the slurry is concentrated to nominally 20 wt% solids using cross-flow ultrafiltration

  2. Arbitrary Symmetric Running Gait Generation for an Underactuated Biped Model

    Science.gov (United States)

    Esmaeili, Mohammad; Macnab, Chris

    2017-01-01

    This paper investigates generating symmetric trajectories for an underactuated biped during the stance phase of running. We use a point mass biped (PMB) model for gait analysis that consists of a prismatic force actuator on a massless leg. The significance of this model is its ability to generate more general and versatile running gaits than the spring-loaded inverted pendulum (SLIP) model, making it more suitable as a template for real robots. The algorithm plans the necessary leg actuator force to cause the robot center of mass to undergo arbitrary trajectories in stance with any arbitrary attack angle and velocity angle. The necessary actuator forces follow from the inverse kinematics and dynamics. Then these calculated forces become the control input to the dynamic model. We compare various center-of-mass trajectories, including a circular arc and polynomials of the degrees 2, 4 and 6. The cost of transport and maximum leg force are calculated for various attack angles and velocity angles. The results show that choosing the velocity angle as small as possible is beneficial, but the angle of attack has an optimum value. We also find a new result: there exist biped running gaits with double-hump ground reaction force profiles which result in less maximum leg force than single-hump profiles. PMID:28118401
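The inverse-dynamics step described above (planning the leg actuator force from a prescribed center-of-mass trajectory) can be sketched for a point mass on a massless leg. The mass, time step and example path below are assumptions for illustration; the paper's planner additionally constrains the trajectory so that the force component perpendicular to the leg vanishes.

```python
import numpy as np

def leg_force_along_axis(x, y, dt, m=80.0, g=9.81):
    """Axial actuator force for a prescribed stance COM path (foot at
    the origin), from projecting Newton's second law on the leg axis:
    F = m * (a - g_vec) . r_hat. A massless leg can only push along
    itself, so a consistent gait needs the perpendicular part to vanish."""
    ax = np.gradient(np.gradient(x, dt), dt)   # numerical accelerations
    ay = np.gradient(np.gradient(y, dt), dt)
    leg_len = np.hypot(x, y)
    return m * (ax * x + (ay + g) * y) / leg_len

# Degenerate check: a COM held still 1 m above the foot needs F = m*g.
x = np.zeros(11)
y = np.ones(11)
force = leg_force_along_axis(x, y, dt=0.01)
```

Feeding an arc or polynomial path for (x, y) into the same projection yields the force profiles whose cost of transport the paper compares.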

  3. Data-driven modelling of vertical dynamic excitation of bridges induced by people running

    Science.gov (United States)

    Racic, Vitomir; Morin, Jean Benoit

    2014-02-01

    With increasingly popular marathon events in urban environments, structural designers face a great deal of uncertainty when assessing dynamic performance of bridges occupied and dynamically excited by people running. While the dynamic loads induced by pedestrians walking have been intensively studied since the infamous lateral sway of the London Millennium Bridge in 2000, reliable and practical descriptions of running excitation are still very rare and limited. This interdisciplinary study has addressed the issue by bringing together a database of individual running force signals recorded by two state-of-the-art instrumented treadmills and two attempts to mathematically describe the measurements. The first modelling strategy is adopted from the available design guidelines for human walking excitation of structures, featuring perfectly periodic and deterministic characterisation of pedestrian forces presentable via Fourier series. This modelling approach proved to be inadequate for running loads due to the inherent near-periodic nature of the measured signals, a great inter-personal randomness of the dominant Fourier amplitudes and the lack of strong correlation between the amplitudes and running footfall rate. Hence, utilising the database established and motivated by the existing models of wind and earthquake loading, speech recognition techniques and a method of replicating electrocardiogram signals, this paper finally presents a numerical generator of random near-periodic running force signals which can reliably simulate the measurements. Such a model is an essential prerequisite for future quality models of dynamic loading induced by individuals, groups and crowds running under a wide range of conditions, such as perceptibly vibrating bridges and different combinations of visual, auditory and tactile cues.
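The first modelling strategy discussed above, the Fourier-series force model borrowed from walking guidelines, is easy to state explicitly, and doing so shows exactly why it fails for running: it is perfectly periodic by construction. The body weight, footfall rate and harmonic coefficients below are illustrative assumptions, not values fitted from the treadmill database.

```python
import numpy as np

def fourier_force(t, weight_n=700.0, f_step=2.8,
                  alphas=(1.3, 0.3, 0.1), phases=(0.0, 0.5, 1.0)):
    """Deterministic Fourier-series force model of the kind used in
    design guidance: F(t) = W * (1 + sum_i a_i*sin(2*pi*i*f*t - p_i)).
    Coefficients here are illustrative, not fitted values."""
    f = np.ones_like(t)
    for i, (a, p) in enumerate(zip(alphas, phases), start=1):
        f += a * np.sin(2.0 * np.pi * i * f_step * t - p)
    return weight_n * f

# The model repeats exactly every footfall period 1/f_step, which is
# precisely the property the measured near-periodic signals lack.
t = np.array([0.12, 0.30])
same = fourier_force(t + 1.0 / 2.8)
```

The paper's alternative generator instead injects cycle-to-cycle randomness so that successive footfalls, like the measurements, never repeat exactly.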

  4. Comparison of minimalist footwear strategies for simulating barefoot running: a randomized crossover study.

    Science.gov (United States)

    Hollander, Karsten; Argubi-Wollesen, Andreas; Reer, Rüdiger; Zech, Astrid

    2015-01-01

    Possible benefits of barefoot running have been widely discussed in recent years. Uncertainty exists about which footwear strategy adequately simulates barefoot running kinematics. The objective of this study was to investigate the effects of athletic footwear with different minimalist strategies on running kinematics. Thirty-five distance runners (22 males, 13 females, 27.9 ± 6.2 years, 179.2 ± 8.4 cm, 73.4 ± 12.1 kg, 24.9 ± 10.9 km x week(-1)) performed a treadmill protocol at three running velocities (2.22, 2.78 and 3.33 m x s(-1)) using four footwear conditions: barefoot, uncushioned minimalist shoes, cushioned minimalist shoes, and standard running shoes. 3D kinematic analysis was performed to determine ankle and knee angles at initial foot-ground contact, rate of rear-foot strikes, stride frequency and step length. Ankle angle at foot strike, step length and stride frequency were significantly influenced by footwear conditions (pbarefoot and all shod situations as well as between the uncushioned minimalistic shoe and both cushioned shoe conditions. The rate of rear-foot strikes was lowest during barefoot running (58.6% at 3.33 m x s(-1)), followed by running with uncushioned minimalist shoes (62.9%), cushioned minimalist (88.6%) and standard shoes (94.3%). Aside from showing the influence of shod conditions on running kinematics, this study helps to elucidate differences between footwear marked as minimalist shoes and their ability to mimic barefoot running adequately. These findings have implications on the use of footwear applied in future research debating the topic of barefoot or minimalist shoe running.

  5. Linking Fish Habitat Modelling and Sediment Transport in Running Waters

    Institute of Scientific and Technical Information of China (English)

    Andreas; EISNER; Silke; WIEPRECHT; Matthias; SCHNEIDER

    2005-01-01

The assessment of ecological status for running waters is one of the major issues within an integrated river basin management and plays a key role with respect to the implementation of the European Water Framework Directive (WFD). One of the tools supporting the development of sustainable river management is physical habitat modelling, e.g., for fish, because fish populations are one of the most important indicators for the ecological integrity of rivers. Within physical habitat models hydromorphological ...

  6. Effects of running with backpack loads during simulated gravitational transitions: Improvements in postural control

    Science.gov (United States)

    Brewer, Jeffrey David

    The National Aeronautics and Space Administration is planning for long-duration manned missions to the Moon and Mars. For feasible long-duration space travel, improvements in exercise countermeasures are necessary to maintain cardiovascular fitness, bone mass throughout the body and the ability to perform coordinated movements in a constant gravitational environment that is six orders of magnitude higher than the "near weightlessness" condition experienced during transit to and/or orbit of the Moon, Mars, and Earth. In such gravitational transitions feedback and feedforward postural control strategies must be recalibrated to ensure optimal locomotion performance. In order to investigate methods of improving postural control adaptation during these gravitational transitions, a treadmill based precision stepping task was developed to reveal changes in neuromuscular control of locomotion following both simulated partial gravity exposure and post-simulation exercise countermeasures designed to speed lower extremity impedance adjustment mechanisms. The exercise countermeasures included a short period of running with or without backpack loads immediately after partial gravity running. A novel suspension type partial gravity simulator incorporating spring balancers and a motor-driven treadmill was developed to facilitate body weight off loading and various gait patterns in both simulated partial and full gravitational environments. Studies have provided evidence that suggests: the environmental simulator constructed for this thesis effort does induce locomotor adaptations following partial gravity running; the precision stepping task may be a helpful test for illuminating these adaptations; and musculoskeletal loading in the form of running with or without backpack loads may improve the locomotor adaptation process.

  7. Comparison of Particle Flow Code and Smoothed Particle Hydrodynamics Modelling of Landslide Run outs

    Science.gov (United States)

    Preh, A.; Poisel, R.; Hungr, O.

    2009-04-01

In most continuum mechanics methods modelling the run out of landslides, the moving mass is divided into a number of elements, the velocities of which can be established by numerical integration of Newton's second law (Lagrangian solution). The methods are based on fluid mechanics, modelling the movements of an equivalent fluid. In 2004, McDougall and Hungr presented a three-dimensional numerical model for rapid landslides, e.g. debris flows and rock avalanches, called DAN3D. The method is based on the previous work of Hungr (1995) and uses an integrated two-dimensional Lagrangian solution and the meshless Smoothed Particle Hydrodynamics (SPH) principle to maintain continuity. DAN3D has an open rheological kernel, allowing the use of frictional (with constant pore-pressure ratio) and Voellmy rheologies, and gives the possibility to change material rheology along the path. Discontinuum (granular) mechanics methods model the run out mass as an assembly of particles moving down a surface. Each particle is followed exactly as it moves and interacts with the surface and with its neighbours. Every particle is checked for contacts with every other particle in every time step, using a special cell logic for contact detection in order to reduce the computational effort. The Discrete Element code PFC3D was adapted in order to make discontinuum mechanics models of run outs possible. Punta Thurwieser Rock Avalanche and Frank Slide were modelled by DAN as well as by PFC3D. The simulations correspondingly showed that the parameters necessary to get results coinciding with observations in nature are completely different. The maximum velocity distributions due to DAN3D reveal that areas of different maximum flow velocity are next to each other in the Punta Thurwieser run out, whereas the distribution of maximum flow velocity shows almost constant maximum flow velocity over the width of the run out regarding Frank Slide. Some 30 percent of total kinetic energy is rotational kinetic energy in

  8. A model-experiment comparison of system dynamics for human walking and running.

    Science.gov (United States)

    Lipfert, Susanne W; Günther, Michael; Renjewski, Daniel; Grimmer, Sten; Seyfarth, Andre

    2012-01-07

    The human musculo-skeletal system comprises high complexity which makes it difficult to identify underlying basic principles of bipedal locomotion. To tackle this challenge, a common approach is to strip away complexity and formulate a reductive model. With utter simplicity a bipedal spring-mass model gives good predictions of the human gait dynamics, however, it has not been fully investigated whether center of mass motion over time of walking and running is comparable between the model and the human body over a wide range of speed. To test the model's ability in this respect, we compare sagittal center of mass trajectories of model and human data for speeds ranging from 0.5 m/s to 4 m/s. For simulations, system parameters and initial conditions are extracted from experimental observations of 28 subjects. The leg parameters stiffness and length are extracted from functional fitting to the subjects' leg force-length curves. With small variations of the touch-down angle of the leg and the vertical position of the center of mass at apex, we find successful spring-mass simulations for moderate walking and medium running speeds. Predictions of the sagittal center of mass trajectories and ground reaction forces are good, but their amplitudes are overestimated, while contact time is underestimated. At faster walking speeds and slower running speeds we do not find successful model locomotion with the extent of allowed parameter variation. We conclude that the existing limitations may be improved by adding complexity to the model.
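The bipedal spring-mass model discussed above is compact enough to integrate directly: a point mass on a massless spring leg, foot fixed at the origin during stance. The parameters and touch-down state below are typical textbook values, not ones fitted to the 28 subjects.

```python
import numpy as np

# Spring-mass (SLIP) stance dynamics: point mass M on a massless spring
# leg with the foot fixed at the origin. Illustrative parameters only.
M, K, L0, G = 80.0, 20_000.0, 1.0, 9.81

def slip_rhs(s):
    x, y, vx, vy = s
    leg = np.hypot(x, y)
    a = K * (L0 - leg) / (M * leg)     # spring acceleration per unit radius
    return np.array([vx, vy, a * x, a * y - G])

def rk4_step(s, dt):
    k1 = slip_rhs(s); k2 = slip_rhs(s + dt / 2 * k1)
    k3 = slip_rhs(s + dt / 2 * k2); k4 = slip_rhs(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def energy(s):
    x, y, vx, vy = s
    leg = np.hypot(x, y)
    return 0.5 * M * (vx**2 + vy**2) + M * G * y + 0.5 * K * (L0 - leg)**2

# Touch-down: 68 degree angle of attack, 4 m/s horizontal speed.
angle = np.radians(68.0)
s = np.array([-np.cos(angle), np.sin(angle), 4.0, 0.0])
e0, dt, steps = energy(s), 1e-4, 0
s = rk4_step(s, dt)                    # first step enters compression
while np.hypot(s[0], s[1]) < L0 and steps < 50_000:
    s = rk4_step(s, dt)                # integrate until the leg unloads
    steps += 1
```

Because the model is conservative, total energy is a convenient sanity check on the integration; comparing the resulting center-of-mass arc and ground reaction force against measured trajectories is exactly the model-experiment comparison the paper performs.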

  9. Safety evaluation of the ITP filter/stripper test runs and quiet time runs using simulant solution. Revision 3

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, M.K.

    1994-06-01

    The purpose is to provide the technical bases for the evaluation of Unreviewed Safety Question for the In-Tank Precipitation (ITP) Filter/Stripper Test Runs (Ref. 7) and Quiet Time Runs Program (described in Section 3.6). The Filter/Stripper Test Runs and Quiet Time Runs program involves a 12,000 gallon feed tank containing an agitator, a 4,000 gallon flush tank, a variable speed pump, associated piping and controls, and equipment within both the Filter and the Stripper Building.

  10. On Demand Runs Of Mesoscale Models : Météo-France multi-mission, multi-support GUI

    Science.gov (United States)

    Periard, C.; Pourret, V.; Chaupin, D.

    2009-09-01

Numerous experiment campaigns have shown the interest of mesoscale models to represent weather conditions of the atmosphere as a support to various applications, from electromagnetic propagation to wind power atlases. However, running mesoscale models requires high-level knowledge of computing and modelling to define the different parameters for a given simulation. With the increase in demand for mesoscale simulations, we decided to develop a GUI that makes it easy to define and run type-experiments (i) at any location on the globe, (ii) on different types of computers (from the Meteo-France Fujitsu to a PC cluster), and (iii) with different choices of forcing models. The GUI, developed in PHP, uses a map server to visualize the location of the experiment being defined and the different forcing models available for the simulation. The other parameters, such as time steps, resolutions, sizes and number of embedded domains, etc., can be modified through checkboxes or multiple-choice lists in the GUI. So far, the GUI has been used to run three different types of experiment: (i) for EM propagation purposes, during an experiment campaign near Toulon, where the simulations were run on a PC cluster in analysis mode; (ii) for wind profile prediction in Afghanistan, where the simulations are run on the Fujitsu in forecast mode; and (iii) for weather forecasting during the F1 race in Japan, where the simulations were run on a PC cluster in forecast mode. During the presentation, I will first give some screenshots of the different fill-in forms of the GUI and the way to define an experiment. Then I will focus on the three examples mentioned above, showing different types of graphs and maps produced. There are many other applications where this tool is going to be useful, especially in climatology: using weather-type classification and downscaling, the GUI will help run the simulations of the different cluster representatives. The last thing to accomplish is to find a name for the tool.

  11. The Effects of a Duathlon Simulation on Ventilatory Threshold and Running Economy

    Directory of Open Access Journals (Sweden)

    Nathaniel T. Berry, Laurie Wideman, Edgar W. Shields, Claudio L. Battaglini

    2016-06-01

Multisport events continue to grow in popularity among recreational, amateur, and professional athletes around the world. This study aimed to determine the compounding effects of the initial run and cycling legs of an International Triathlon Union (ITU) Duathlon simulation on maximal oxygen uptake (VO2max), ventilatory threshold (VT) and running economy (RE) within a thermoneutral, laboratory-controlled setting. Seven highly trained multisport athletes completed three trials; Trial-1 consisted of a speed-only VO2max treadmill protocol (SOVO2max) to determine VO2max, VT, and RE during a single-bout run; Trial-2 consisted of a 10 km run at 98% of VT followed by an incremental VO2max test on the cycle ergometer; Trial-3 consisted of a 10 km run and 30 km cycling bout at 98% of VT followed by a speed-only treadmill test to determine the compounding effects of the initial legs of a duathlon on VO2max, VT, and RE. A repeated-measures ANOVA was performed to determine differences between variables across trials. No difference in VO2max, VT (%VO2max), maximal HR, or maximal RPE was observed across trials. Oxygen consumption at VT was significantly lower during Trial-3 compared to Trial-1 (p = 0.01). This decrease was coupled with a significant reduction in running speed at VT (p = 0.015). A significant interaction between trial and running speed indicated that RE was significantly altered during Trial-3 compared to Trial-1 (p < 0.001). The first two legs of a laboratory-based duathlon simulation negatively impact VT and RE. Our findings may provide a useful method to evaluate multisport athletes, since a single-bout incremental treadmill test fails to reveal important alterations in physiological thresholds.

  12. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    Science.gov (United States)

    Rohmer, Jeremy

    2016-04-01

Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a high number of times (> 1000), which may become impracticable when the landslide model has a high computation time cost (> several hours); 2. Landslide model outputs are not scalar, but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model by a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during the pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low values of landslide displacements and one of high values.
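The "basis set expansion - Sobol' indices" pipeline above can be sketched on a toy problem: reduce an ensemble of time series to principal-component scores, then estimate each input's first-order Sobol' index for the dominant mode. The two-parameter closed-form "model" and the binned variance estimator below are simplifying stand-ins for the long-running landslide code and the meta-model-based estimation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_runs, t = 2000, np.linspace(0.0, 1.0, 200)

# Toy time-dependent model standing in for the landslide code:
# displacement(t) driven by two uncertain inputs a and b (illustrative;
# the cheap closed form lets us afford 2000 runs).
a = rng.uniform(0.0, 1.0, n_runs)
b = rng.uniform(0.0, 1.0, n_runs)
Y = a[:, None] * t + 0.3 * b[:, None] * np.sin(2.0 * np.pi * t)

# Basis-set expansion: PCA of the centred output ensemble via SVD.
Yc = Y - Y.mean(axis=0)
U, S, _ = np.linalg.svd(Yc, full_matrices=False)
scores = U * S                         # one column of scores per mode

def first_order_sobol(x, y, bins=20):
    """Binned estimator of S1 = Var(E[y|x]) / Var(y), a simple stand-in
    for the meta-model-based estimation used in the paper."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == k].mean() for k in range(bins)])
    return cond_means.var() / y.var()

# Sensitivity of the dominant temporal mode to each input:
s_a = first_order_sobol(a, scores[:, 0])
s_b = first_order_sobol(b, scores[:, 0])
```

Here the linear-in-time trend dominates the ensemble, so the first mode is almost entirely controlled by `a`; with an expensive model, a surrogate fitted to a few tens of runs replaces the direct Monte Carlo loop.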

  13. Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond

    CERN Document Server

    Bonacorsi, D; Giordano, D; Girone, M; Neri, M; Magini, N; Kuznetsov, V; Wildish, T

    2015-01-01

    During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collision data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, and transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the Worldwide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulation of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This...

  14. Statistics for long irregular wave run-up on a plane beach from direct numerical simulations

    Science.gov (United States)

    Didenkulova, Ira; Senichev, Dmitry; Dutykh, Denys

    2017-04-01

    Very often for global and transoceanic events, due to the initial wave transformation, refraction, diffraction and multiple reflections from coastal topography and underwater bathymetry, the tsunami approaches the beach as a very long wave train, which can be considered an irregular wave field. The prediction of possible flooding and of the properties of the water flow on the coast should in this case be done statistically, taking into account the formation of extreme (rogue) tsunami waves on a beach. When it comes to tsunami run-up on a beach, the most widely used mathematical model is the nonlinear shallow water model. For a beach of constant slope, the nonlinear shallow water equations have a rigorous analytical solution, which substantially simplifies the mathematical formulation. In (Didenkulova et al. 2011) we used this solution to study statistical characteristics of the vertical displacement of the moving shoreline and its horizontal velocity. The influence of wave nonlinearity was approached by considering modifications of the probability distributions of the moving shoreline and its horizontal velocity for waves of different amplitudes. It was shown that wave nonlinearity did not affect the probability distribution of the velocity of the moving shoreline, while the vertical displacement of the moving shoreline was affected substantially, demonstrating the longer duration of coastal floods with an increase in wave nonlinearity. However, this analysis did not take into account the actual transformation of the irregular wave field offshore into oscillations of the moving shoreline on a sloping beach. In this study we cover this gap by means of extensive numerical simulations. The modeling is performed in the framework of the nonlinear shallow water equations, which are solved using a modern shock-capturing finite volume method. Although the shallow water model does not resolve wave breaking and bore formation in a general sense (including the water surface
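    For the non-breaking solitary-wave limit of the same plane-beach problem, the nonlinear shallow water solution yields a closed-form maximum run-up, the Synolakis (1987) run-up law R/h = 2.831 (cot β)^(1/2) (H/h)^(5/4). The snippet below is a standard-formula calculator added for illustration; it is not taken from this abstract and does not apply to the irregular, breaking wave fields studied here.

```python
import math

def solitary_runup(H_over_h, cot_beta):
    """Synolakis (1987) run-up law for a non-breaking solitary wave on a
    plane beach: R/h = 2.831 * sqrt(cot(beta)) * (H/h)**(5/4).
    H/h is offshore wave height over depth; beta is the beach angle."""
    return 2.831 * math.sqrt(cot_beta) * H_over_h ** 1.25

# e.g. H/h = 0.02 on the classic 1:19.85 laboratory beach
R_over_h = solitary_runup(0.02, 19.85)
```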

  15. Diagnostic Value of Run Chart Analysis: Using Likelihood Ratios to Compare Run Chart Rules on Simulated Data Series

    Science.gov (United States)

    Anhøj, Jacob

    2015-01-01

    Run charts are widely used in healthcare improvement, but there is little consensus on how to interpret them. The primary aim of this study was to evaluate and compare the diagnostic properties of different sets of run chart rules. A run chart is a line graph of a quality measure over time. The main purpose of the run chart is to detect process improvement or process degradation, which will turn up as non-random patterns in the distribution of data points around the median. Non-random variation may be identified by simple statistical tests including the presence of unusually long runs of data points on one side of the median or if the graph crosses the median unusually few times. However, there is no general agreement on what defines “unusually long” or “unusually few”. Other tests of questionable value are frequently used as well. Three sets of run chart rules (Anhoej, Perla, and Carey rules) have been published in peer reviewed healthcare journals, but these sets differ significantly in their sensitivity and specificity to non-random variation. In this study I investigate the diagnostic values expressed by likelihood ratios of three sets of run chart rules for detection of shifts in process performance using random data series. The study concludes that the Anhoej rules have good diagnostic properties and are superior to the Perla and the Carey rules. PMID:25799549
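    The runs analysis described above is easy to reproduce. The sketch below computes the two core statistics (longest run on one side of the median, number of median crossings) and applies the longest-run signal limit round(log2(n)) + 3 associated with the Anhoej rules; the companion crossings test against a binomial lower bound is noted but not implemented, and the limit formula should be checked against the paper before serious use.

```python
import math

def run_chart_tests(data, median):
    """Longest-run and crossings statistics for a run chart.
    Points on the median are ignored, per common practice. The longest-run
    limit round(log2(n)) + 3 follows the Anhoej formulation (assumed here);
    the crossings test against a binomial lower bound is not implemented."""
    signs = [1 if y > median else -1 for y in data if y != median]
    n = len(signs)
    longest, cur, crossings = 1, 1, 0
    for prev, s in zip(signs, signs[1:]):
        if s == prev:
            cur += 1
            longest = max(longest, cur)
        else:
            crossings += 1
            cur = 1
    limit = round(math.log2(n)) + 3
    return {"n": n, "longest_run": longest, "crossings": crossings,
            "shift_signal": longest > limit}

# a clear process shift: 10 points below the median, then 10 above
shifted = run_chart_tests([-1] * 10 + [1] * 10, 0)
```

    For n = 20 useful observations the limit is round(log2(20)) + 3 = 7, so the run of 10 signals non-random variation, while a perfectly alternating series would not.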

  16. Horizon Run 4 Simulation: Coupled Evolution of Galaxies and Large-scale Structures of the Universe

    CERN Document Server

    Kim, Juhan; L'Huillier, Benjamin; Hong, Sungwook E

    2015-01-01

    The Horizon Run 4 is a cosmological $N$-body simulation designed for the study of coupled evolution between galaxies and large-scale structures of the Universe, and for the test of galaxy formation models. Using $6300^3$ gravitating particles in a cubic box of $L_{\rm box} = 3150~h^{-1}{\rm Mpc}$, we build a dense forest of halo merger trees to trace the halo merger history with a halo mass resolution scale down to $M_s = 2.7 \times 10^{11} h^{-1}{\rm M_\odot}$. We build a set of particle and halo data, which can serve as testbeds for comparison of cosmological models and gravitational theories with observations. We find that the FoF halo mass function shows a substantial deviation from the universal form with tangible redshift evolution of amplitude and shape. At higher redshifts, the amplitude of the mass function is lower, and the functional form is shifted toward larger values of $\ln (1/\sigma)$. We also find that the baryonic acoustic oscillation feature in the two-point correlation function of mock ga...

  17. Integrating spatio-temporal environmental models for planning ski runs

    NARCIS (Netherlands)

    Pfeffer, Karin

    2003-01-01

    The establishment of ski runs and ski lifts, the action of skiing and maintenance of ski runs may cause considerable environmental impact. Clearly, for improvements to be made in the planning of ski runs in alpine terrain a good understanding of the environmental system and the response of environme

  18. Simulation of accelerated strip cooling on the hot rolling mill run-out roller table

    Directory of Open Access Journals (Sweden)

    E.Makarov

    2016-07-01

    Full Text Available A mathematical model of the thermal state of the metal on the run-out roller table of a continuous wide hot strip mill is presented. The mathematical model takes into account heat generation due to the polymorphic γ → α transformation of supercooled austenite and the influence of the chemical composition of the steel on the physical properties of the metal. The model allows calculation of accelerated cooling modes for strips on the run-out roller table of a continuous wide hot strip mill. The winding (coiling) temperature calculation error does not exceed 20°C for 98.5% of strips of low-carbon and low-alloy steels.
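    As a much-reduced illustration of this kind of run-out-table model, the sketch below integrates a lumped-capacitance cooling law for a thin strip. All property values in the example are placeholders, and the physics deliberately omits what the paper adds: the latent heat of the γ → α transformation and composition-dependent properties.

```python
def strip_cooling(T0, T_coolant, h, rho, cp, thickness, dt, t_end):
    """Explicit Euler integration of lumped-capacitance strip cooling:
    dT/dt = -h * (T - T_coolant) / (rho * cp * half_thickness).
    Illustrative only: placeholder properties, no transformation heat."""
    half = thickness / 2.0
    T, t = T0, 0.0
    while t < t_end:
        T += dt * (-h * (T - T_coolant) / (rho * cp * half))
        t += dt
    return T

# 4 mm steel-like strip cooled from 900 C with h = 1000 W/m^2K (made-up values)
T_60s = strip_cooling(900.0, 30.0, 1000.0, 7800.0, 600.0, 0.004, 0.01, 60.0)
```

    With these numbers the thermal time constant is rho*cp*(thickness/2)/h ≈ 9.4 s, so after 60 s the strip has essentially relaxed to the coolant temperature.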

  19. CAUSA - An Environment For Modeling And Simulation

    Science.gov (United States)

    Dilger, Werner; Moeller, Juergen

    1989-03-01

    CAUSA is an environment for modeling and simulation of dynamic systems on a quantitative level. The environment provides a conceptual framework including primitives like objects, processes and causal dependencies which allow the modeling of a broad class of complex systems. The facility of simulation allows the quantitative and qualitative inspection and empirical investigation of the behavior of the modeled system. CAUSA is implemented in Knowledge-Craft and runs on a Symbolics 3640.

  20. Simulation modeling and arena

    CERN Document Server

    Rossetti, Manuel D

    2015-01-01

    Emphasizes a hands-on approach to learning statistical analysis and model building through the use of comprehensive examples, problems sets, and software applications With a unique blend of theory and applications, Simulation Modeling and Arena®, Second Edition integrates coverage of statistical analysis and model building to emphasize the importance of both topics in simulation. Featuring introductory coverage on how simulation works and why it matters, the Second Edition expands coverage on static simulation and the applications of spreadsheets to perform simulation. The new edition als

  1. Simulation of uphill/downhill running on a level treadmill using additional horizontal force.

    Science.gov (United States)

    Gimenez, Philippe; Arnal, Pierrick J; Samozino, Pierre; Millet, Guillaume Y; Morin, Jean-Benoit

    2014-07-18

    Tilting treadmills allow a convenient study of biomechanics during uphill/downhill running, but they are not commonly available, and tilting force-measuring treadmills are rarer still. The aim of the present study was to compare uphill/downhill running on a treadmill (inclination of ±8%) with running on a level treadmill using additional backward or forward pulling forces to simulate the effect of gravity. This comparison specifically focused on the energy cost of running, stride frequency (SF), electromyographic activity (EMG), leg and foot angles at foot strike, and ground impact shock. The main results are that SF, impact shock, and leg and foot angle parameters were very similar and significantly correlated between the two methods, the intercept and slope of the linear regression not differing significantly from zero and unity, respectively. The correlation of oxygen uptake (V̇O2) data between the methods was not significant during uphill running (r=0.42; P>0.05). V̇O2 data were correlated during downhill running (r=0.74; P<0.01) but there was a significant difference between the methods (bias=-2.51 ± 1.94 ml min⁻¹ kg⁻¹). Linear regressions for EMG of vastus lateralis, biceps femoris, gastrocnemius lateralis, soleus and tibialis anterior were not different from the identity line, but the systematic bias was elevated for this parameter. In conclusion, this method seems appropriate for the study of SF, leg and foot angle, and impact shock parameters, but is less applicable for physiological variables (EMG and energy cost) during uphill/downhill running when using a tilting force-measuring treadmill is not possible. Copyright © 2014 Elsevier Ltd. All rights reserved.
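    The pulling-force approach rests on a simple statics argument: on an incline of a given grade (rise/run), the gravity component along the running direction is m·g·sin(atan(grade)), so a level treadmill plus an equal horizontal force reproduces it to first order. Below is a quick calculator for that component; the exact force calibration used in the study is not given in this abstract, and the runner mass is illustrative.

```python
import math

def equivalent_horizontal_force(mass_kg, grade, g=9.81):
    """Horizontal pulling force approximating the along-slope gravity
    component for an incline of the given grade (e.g. 0.08 for +8%)."""
    return mass_kg * g * math.sin(math.atan(grade))

# a 70 kg runner on the study's 8% grade (illustrative mass)
F = equivalent_horizontal_force(70.0, 0.08)
```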

  2. Tsunami generation, propagation, and run-up with a high-order Boussinesq model

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Madsen, Per A.

    2009-01-01

    In this work we extend a high-order Boussinesq-type (finite difference) model, capable of simulating waves out to wavenumber times depth kh tsunamis. The extension is straightforward, requiring only...... show that the long-time (fully nonlinear) evolution of waves resulting from an upthrusted bottom can eventually result in true solitary waves, consistent with theoretical predictions. It is stressed, however, that the nonlinearity used far exceeds that typical of geophysical tsunamis in the open ocean....... The Boussinesq-type model is then used to simulate numerous tsunami-type events generated from submerged landslides, in both one and two horizontal dimensions. The results again compare well against previous experiments and/or numerical simulations. The new extension complements recently developed run...

  3. Hydrologic and water-quality characterization and modeling of the Chenoweth Run basin, Jefferson County, Kentucky

    Science.gov (United States)

    Martin, Gary R.; Zarriello, Phillip J.; Shipp, Allison A.

    2001-01-01

    Rainfall, streamflow, and water-quality data collected in the Chenoweth Run Basin during February 1996–January 1998, in combination with the available historical sampling data, were used to characterize hydrologic conditions and to develop and calibrate a Hydrological Simulation Program–FORTRAN (HSPF) model for continuous simulation of rainfall, streamflow, suspended-sediment, and total-orthophosphate (TPO4) transport relations. Study results provide an improved understanding of basin hydrology and a hydrologic-modeling framework with analytical tools for use in comprehensive water-resource planning and management. Chenoweth Run Basin, encompassing 16.5 mi² in suburban eastern Jefferson County, Kentucky, contains expanding urban development, particularly in the upper third of the basin. Historical water-quality problems have interfered with designated aquatic-life and recreation uses in the stream main channel (approximately 9 mi in length) and have been attributed to organic enrichment, nutrients, metals, and pathogens in urban runoff and wastewater inflows. Hydrologic conditions in Jefferson County are highly varied. In the Chenoweth Run Basin, as in much of the eastern third of the county, relief is moderately sloping to steep. Also, internal drainage in pervious areas is impeded by the shallow, fine-textured subsoils that contain abundant silts and clays. Thus, much of the precipitation here tends to move rapidly as overland flow and (or) shallow subsurface flow (interflow) to the stream channels. Data were collected at two streamflow-gaging stations, one rain gage, and four water-quality sampling sites in the basin. Precipitation, streamflow, and, consequently, constituent loads were above normal during the data-collection period of this study. Nonpoint sources contributed the largest portion of the sediment loads. However, the three wastewater-treatment plants (WWTPs) were the source of the majority of estimated total phosphorus (TP) and TPO4 transport

  4. Soleus H-reflex gain in humans walking and running under simulated reduced gravity

    Science.gov (United States)

    Ferris, D. P.; Aagaard, P.; Simonsen, E. B.; Farley, C. T.; Dyhre-Poulsen, P.

    2001-01-01

    The Hoffmann (H-) reflex is an electrical analogue of the monosynaptic stretch reflex, elicited by bypassing the muscle spindle and directly stimulating the afferent nerve. Studying H-reflex modulation provides insight into how the nervous system centrally modulates stretch reflex responses. A common measure of H-reflex gain is the slope of the relationship between H-reflex amplitude and EMG amplitude. To examine soleus H-reflex gain across a range of EMG levels during human locomotion, we used simulated reduced gravity to reduce muscle activity. We hypothesised that H-reflex gain would be independent of gravity level. We recorded EMG from eight subjects walking (1.25 m s-1) and running (3.0 m s-1) at four gravity levels (1.0, 0.75, 0.5 and 0.25 G (Earth gravity)). We normalised the stimulus M-wave and resulting H-reflex to the maximal M-wave amplitude (Mmax) elicited throughout the stride to correct for movement of stimulus and recording electrodes relative to nerve and muscle fibres. Peak soleus EMG amplitude decreased by 30% for walking and for running over the fourfold change in gravity. As hypothesised, slopes of linear regressions fitted to H-reflex versus EMG data were independent of gravity for walking and running (ANOVA, P > 0.8). The slopes were also independent of gait (P > 0.6), contrary to previous studies. Walking had a greater y-intercept (19.9% Mmax) than running (-2.5% Mmax). At a given EMG level, walking H-reflex amplitudes were higher than running H-reflex amplitudes by a constant amount. We conclude that the nervous system adjusts H-reflex threshold but not H-reflex gain between walking and running. These findings provide insight into potential neural mechanisms responsible for spinal modulation of the stretch reflex during human locomotion.

  5. Matter density perturbation and power spectrum in running vacuum model

    Science.gov (United States)

    Geng, Chao-Qiang; Lee, Chung-Chi

    2016-10-01

    We investigate the matter density perturbation δm and power spectrum P(k) in the running vacuum model (RVM) with the cosmological constant being a function of the Hubble parameter, given by Λ = Λ0 + 6σHH0 + 3νH², in which the linear and quadratic terms of H would originate from the QCD vacuum condensation and cosmological renormalization group, respectively. Taking the dark energy perturbation into consideration, we derive the evolution equation for δm and find a specific scale dcr = 2π/kcr, which divides the evolution of the universe into the sub and super-interaction regimes, corresponding to k ≪ kcr and k ≫ kcr, respectively. For the former, the evolution of δm has the same behavior as that in the ΛCDM model, while for the latter, the growth of δm is frozen (greatly enhanced) when ν + σ > (<) 0, due to the interaction between matter and dark energy. It is clear that the observational data rule out the cases with ν < 0 and ν + σ < 0, while the allowed window for the model parameters is extremely narrow, with ν, |σ| ≲ O(10^{-7}).
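    The vacuum law quoted above is simple enough to transcribe directly. The helper below evaluates Λ(H) = Λ0 + 6σHH0 + 3νH²; the parameter values in the example are arbitrary placeholders, not the fitted ones (the abstract constrains ν, |σ| to be of order 10⁻⁷ or smaller).

```python
def running_vacuum_lambda(H, H0, Lambda0, sigma, nu):
    """Lambda(H) = Lambda0 + 6*sigma*H*H0 + 3*nu*H**2 (RVM form from the abstract).
    Placeholder parameter values only; units are left abstract here."""
    return Lambda0 + 6.0 * sigma * H * H0 + 3.0 * nu * H ** 2

# sigma = nu = 0 recovers a constant Lambda, i.e. the LambdaCDM limit
constant_term = running_vacuum_lambda(2.0, 1.0, 5.0, 0.0, 0.0)
```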

  6. Cosmological models with running cosmological term and decaying dark matter

    Science.gov (United States)

    Szydłowski, Marek; Stachowski, Aleksander

    2017-03-01

    We investigate the dynamics of the generalized ΛCDM model, in which the Λ term runs with cosmological time. Using the example of the model Λ(t) = Λbare + α²/t², we show the existence of a mechanism modifying the scaling law for the energy density of dark matter: ρdm ∝ a^(-3+λ(t)). We use an approach developed by Urbanowski in which properties of unstable vacuum states are analyzed from the point of view of the quantum theory of unstable states. We discuss the evolution of the Λ(t) term and point out that during the cosmic evolution there is a long phase in which this term is approximately constant. We also present a statistical analysis of both the Λ(t)CDM model with dark energy and decaying dark matter and the standard ΛCDM cosmological model. We use data such as Planck, SNIa, BAO, H(z) and the AP test. For the former we find that the best-fit value of the parameter Ωα2,0 is negative (energy transfer is from the dark matter to the dark energy sector) and that Ωα2,0 belongs to the interval (-0.000040, -0.000383) at the 2σ level. The decaying dark matter lowers the mass of dark matter particles, which are lighter than CDM particles and remain relativistic. The rate of the decay process is estimated. Our model is consistent with a decaying mechanism producing unstable particles (e.g. sterile neutrinos) for which α² is negative.

  7. Building and Running the Yucca Mountain Total System Performance Model in a Quality Environment

    Energy Technology Data Exchange (ETDEWEB)

    D.A. Kalinich; K.P. Lee; J.A. McNeish

    2005-01-09

    A Total System Performance Assessment (TSPA) model has been developed to support the Safety Analysis Report (SAR) for the Yucca Mountain High-Level Waste Repository. The TSPA model forecasts repository performance over a 20,000-year simulation period. It has a high degree of complexity due to the complexity of its underlying process and abstraction models. This is reflected in the size of the model (a 27,000 element GoldSim file), its use of dynamic-linked libraries (14 DLLs), the number and size of its input files (659 files totaling 4.7 GB), and the number of model input parameters (2541 input database entries). TSPA model development and subsequent simulations with the final version of the model were performed to a set of Quality Assurance (QA) procedures. Due to the complexity of the model, comments on previous TSPAs, and the number of analysts involved (22 analysts in seven cities across four time zones), additional controls for the entire life-cycle of the TSPA model, including management, physical, model change, and input controls, were developed and documented. These controls did not replace the QA procedures; rather, they provided guidance for implementing the requirements of the QA procedures with the specific intent of ensuring that the model development process and the simulations performed with the final version of the model had sufficient checking, traceability, and transparency. Management controls were developed to ensure that only management-approved changes were implemented into the TSPA model and that only management-approved model runs were performed. Physical controls were developed to track the use of prototype software and preliminary input files, and to ensure that only qualified software and inputs were used in the final version of the TSPA model. In addition, a system was developed to name, file, and track development versions of the TSPA model as well as simulations performed with the final version of the model.

  8. Matter density perturbation and power spectrum in running vacuum model

    Science.gov (United States)

    Geng, Chao-Qiang; Lee, Chung-Chi

    2017-01-01

    We investigate the matter density perturbation δm and power spectrum P(k) in the running vacuum model, with the cosmological constant being a function of the Hubble parameter, given by Λ = Λ0 + 6σHH0 + 3νH², in which the linear and quadratic terms of H would originate from the QCD vacuum condensation and cosmological renormalization group, respectively. Taking the dark energy perturbation into consideration, we derive the evolution equation for δm and find a specific scale dcr = 2π/kcr, which divides the evolution of the universe into the sub-interaction and super-interaction regimes, corresponding to k ≪ kcr and k ≫ kcr, respectively. For the former, the evolution of δm has the same behaviour as that in the Λ cold dark matter (ΛCDM) model, while for the latter, the growth of δm is frozen (greatly enhanced) when ν + σ > (<) 0. The allowed window for the model parameters is extremely narrow, with ν, |σ| ≲ O(10^{-7}).

  9. First evidence of running cosmic vacuum: challenging the concordance model

    CERN Document Server

    Sola, Joan; Perez, Javier de Cruz

    2016-01-01

    Despite the fact that a rigid $\Lambda$-term is a fundamental building block of the concordance $\Lambda$CDM model, we show that a large class of cosmological scenarios with dynamical vacuum energy density $\rho_{\Lambda}$ and/or gravitational coupling $G$, together with a possible non-conservation of matter, are capable of seriously challenging the traditional phenomenological success of the $\Lambda$CDM. In this Letter, we discuss these "running vacuum models" (RVM's), in which $\rho_{\Lambda}=\rho_{\Lambda}(H)$ consists of a nonvanishing constant term and a series of powers of the Hubble rate. Such generic structure is potentially linked to the quantum field theoretical description of the expanding Universe. By performing an overall fit to the cosmological observables $SNIa+BAO+H(z)+LSS+BBN+CMB$ (in which the WMAP9, Planck 2013 and Planck 2015 data are taken into account), we find that the RVM's appear definitely more favored than the $\Lambda$CDM, namely at an unprecedented level of $\sim 4\sigma$, implyi...

  10. Effects of Yaw Error on Wind Turbine Running Characteristics Based on the Equivalent Wind Speed Model

    Directory of Open Access Journals (Sweden)

    Shuting Wan

    2015-06-01

    Full Text Available Natural wind is stochastic, characterized by speed and direction that change randomly and frequently. Because of a certain lag in the control system and in the yaw body itself, wind turbines cannot be accurately aligned with the wind direction when the wind speed and direction change frequently. Thus, wind turbines often suffer from a series of engineering issues during operation, including frequent yaw, vibration overruns and downtime. This paper studies the effects of yaw error on wind turbine running characteristics at different wind speeds and control stages by establishing a wind turbine model, a yaw error model and an equivalent wind speed model that includes wind shear and tower shadow effects. Formulas for the relevant effect coefficients Tc, Sc and Pc were derived. The simulation results indicate that the effects of yaw error on aerodynamic torque, rotor speed and power output differ across running stages, and that the effect rules for each coefficient are not identical as the yaw error varies. These results may provide theoretical support for optimizing the yaw control strategies for each stage to increase the running stability of wind turbines and the utilization rate of wind energy.
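    A common first-order rule of thumb for the power effect studied here, not derived in this paper, is the cosine law P(γ) ≈ P0·cosⁿ(γ) with n ≈ 3 for yaw misalignment γ; it gives a quick feel for why even modest alignment lag matters.

```python
import math

def yaw_power_factor(yaw_deg, exponent=3.0):
    """Cosine-law approximation of the power ratio under yaw error:
    P(yaw)/P(0) ~= cos(yaw)**n, with n ~= 3 as a commonly assumed exponent."""
    return math.cos(math.radians(yaw_deg)) ** exponent

loss_10deg = 1.0 - yaw_power_factor(10.0)  # fraction of power lost at 10 degrees
```

    Under this approximation a sustained 10° yaw error already costs roughly 4-5% of power output, which is why frequent small misalignments matter for energy capture.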

  11. First Evidence of Running Cosmic Vacuum: Challenging the Concordance Model

    Science.gov (United States)

    Solà, Joan; Gómez-Valent, Adrià; de Cruz Pérez, Javier

    2017-02-01

    Despite the fact that a rigid Λ-term is a fundamental building block of the concordance ΛCDM model, we show that a large class of cosmological scenarios with dynamical vacuum energy density ρΛ, together with a dynamical gravitational coupling G or a possible non-conservation of matter, are capable of seriously challenging the traditional phenomenological success of the ΛCDM. In this paper, we discuss these "running vacuum models" (RVMs), in which ρΛ = ρΛ(H) consists of a nonvanishing constant term and a series of powers of the Hubble rate. Such generic structure is potentially linked to the quantum field theoretical description of the expanding universe. By performing an overall fit to the cosmological observables SN Ia+BAO+H(z)+LSS+BBN+CMB (in which the WMAP9, Planck 2013, and Planck 2015 data are taken into account), we find that the class of RVMs appears significantly more favored than the ΛCDM, namely, at an unprecedented level of ≳ 4.2σ. Furthermore, the Akaike and Bayesian information criteria confirm that the dynamical RVMs are strongly preferred compared to the conventional rigid Λ-picture of the cosmic evolution.

  12. The running coupling of the minimal sextet composite Higgs model

    CERN Document Server

    Fodor, Zoltan; Kuti, Julius; Mondal, Santanu; Nogradi, Daniel; Wong, Chik Him

    2015-01-01

    We compute the renormalized running coupling of SU(3) gauge theory coupled to N_f = 2 flavors of massless Dirac fermions in the 2-index-symmetric (sextet) representation. This model is of particular interest as a minimal realization of the strongly interacting composite Higgs scenario. A recently proposed finite volume gradient flow scheme is used. The calculations are performed at several lattice spacings with two different implementations of the gradient flow allowing for a controlled continuum extrapolation and particular attention is paid to estimating the systematic uncertainties. For small values of the renormalized coupling our results for the beta-function agree with perturbation theory. For moderate couplings we observe a downward deviation relative to the 2-loop beta-function but in the coupling range where the continuum extrapolation is fully under control we do not observe an infrared fixed point. The explored range includes the locations of the zero of the 3-loop and the 4-loop beta-functions in ...

  13. 2013 CEF RUN - PHASE 1 DATA ANALYSIS AND MODEL VALIDATION

    Energy Technology Data Exchange (ETDEWEB)

    Choi, A.

    2014-05-08

    Phase 1 of the 2013 Cold cap Evaluation Furnace (CEF) test was completed on June 3, 2013 after a 5-day round-the-clock feeding and pouring operation. The main goal of the test was to characterize the CEF off-gas produced from a nitric-formic acid flowsheet feed and confirm whether the CEF platform is capable of producing scalable off-gas data necessary for the revision of the DWPF melter off-gas flammability model; the revised model will be used to define new safety controls on the key operating parameters for the nitric-glycolic acid flowsheet feeds including total organic carbon (TOC). Whether the CEF off-gas data were scalable for the purpose of predicting the potential flammability of the DWPF melter exhaust was determined by comparing the predicted H{sub 2} and CO concentrations using the current DWPF melter off-gas flammability model to those measured during Phase 1; data were deemed scalable if the calculated fractional conversions of TOC-to-H{sub 2} and TOC-to-CO at varying melter vapor space temperatures were found to trend and further bound the respective measured data with some margin of safety. Being scalable thus means that for a given feed chemistry the instantaneous flow rates of H{sub 2} and CO in the DWPF melter exhaust can be estimated with some degree of conservatism by multiplying those of the respective gases from a pilot-scale melter by the feed rate ratio. This report documents the results of the Phase 1 data analysis and the necessary calculations performed to determine the scalability of the CEF off-gas data. A total of six steady state runs were made during Phase 1 under non-bubbled conditions by varying the CEF vapor space temperature from near 700 to below 300°C, as measured in a thermowell (T{sub tw}). At each steady state temperature, the off-gas composition was monitored continuously for two hours using MS, GC, and FTIR in order to track mainly H{sub 2}, CO, CO{sub 2}, NO{sub x}, and organic gases such as CH{sub 4}. The standard

  15. Probabilistic landslide run-out assessment with a 2-D dynamic numerical model using a Monte Carlo method

    Science.gov (United States)

    Cepeda, Jose; Luna, Byron Quan; Nadim, Farrokh

    2013-04-01

    An essential component of a quantitative landslide hazard assessment is establishing the extent of the endangered area. This task requires accurate prediction of the run-out behaviour of a landslide, which includes the estimation of the run-out distance, run-out width, velocities, pressures, and depth of the moving mass and the final configuration of the deposits. One approach to run-out modelling is to reproduce accurately the dynamics of the propagation processes. A number of dynamic numerical models are able to compute the movement of the flow over irregular topographic terrains (3-D) controlled by a complex interaction between mechanical properties that may vary in space and time. Given the number of unknown parameters and the fact that most of the rheological parameters cannot be measured in the laboratory or field, the parametrization of run-out models is very difficult in practice. For this reason, the application of run-out models is mostly used for back-analysis of past events and very few studies have attempted to achieve forward predictions. Consequently all models are based on simplified descriptions that attempt to reproduce the general features of the failed mass motion through the use of parameters (mostly controlling shear stresses at the base of the moving mass) which account for aspects not explicitly described or oversimplified. The uncertainties involved in the run-out process have to be approached in a stochastic manner. It is of significant importance to develop methods for quantifying and properly handling the uncertainties in dynamic run-out models, in order to allow a more comprehensive approach to quantitative risk assessment. A method was developed to compute the variation in run-out intensities by using a dynamic run-out model (MassMov2D) and a probabilistic framework based on a Monte Carlo simulation in order to analyze the effect of the uncertainty of input parameters. The probability density functions of the rheological parameters
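    The probabilistic framework can be illustrated with a deliberately simplified stand-in for the dynamic model: a one-parameter "energy line" (sled) model in which run-out length is L = H/μ for drop height H and friction coefficient μ. Sampling μ in a Monte Carlo loop then yields an exceedance probability for any distance of interest; MassMov2D and the paper's rheologies are far richer, so this is a sketch of the statistical wrapper only, with made-up distribution parameters.

```python
import random

def runout_exceedance(drop_height, distance, mu_mean, mu_sd, n=10000, seed=1):
    """P(run-out length L = drop_height / mu exceeds `distance`) when the
    friction coefficient mu is sampled from a (truncated) normal distribution.
    Toy 'energy line' model; stands in for a full dynamic run-out code."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        mu = max(rng.gauss(mu_mean, mu_sd), 1e-3)  # keep friction positive
        if drop_height / mu > distance:
            hits += 1
    return hits / n

# 100 m drop with mu ~ N(0.3, 0.05): exceeding 400 m requires mu < 0.25
p_400 = runout_exceedance(100.0, 400.0, 0.3, 0.05)
```

    Because L > 400 m is equivalent to μ < 0.25 (one standard deviation below the mean), the exceedance probability should land near Φ(−1) ≈ 0.16, and closer targets are reached with correspondingly higher probability.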

  16. Simulation modelling of fynbos ecosystems: Systems analysis and conceptual models

    CSIR Research Space (South Africa)

    Kruger, FJ

    1985-03-01

    Full Text Available This report outlines progress with the development of computer based dynamic simulation models for ecosystems in the fynbos biome. The models are planned to run on a portable desktop computer with 500 kbytes of memory, extended BASIC language...

  17. Modelling of Muscle Force Distributions During Barefoot and Shod Running

    Directory of Open Access Journals (Sweden)

    Sinclair Jonathan

    2015-09-01

    Full Text Available Research interest in barefoot running has expanded considerably in recent years, based around the notion that running without shoes is associated with a reduced incidence of chronic injuries. The aim of the current investigation was to examine the differences in the forces produced by different skeletal muscles during barefoot and shod running. Fifteen male participants ran at 4.0 m·s-1 (± 5%). Kinematics were measured using an eight-camera motion analysis system alongside ground reaction force parameters. Differences in sagittal plane kinematics and muscle forces between footwear conditions were examined using repeated measures or Friedman's ANOVA. The kinematic analysis showed that the shod condition was associated with significantly more hip flexion, whilst barefoot running was linked with significantly more flexion at the knee and plantarflexion at the ankle. The examination of muscle kinetics indicated that peak forces from Rectus femoris, Vastus medialis, Vastus lateralis and Tibialis anterior were significantly larger in the shod condition, whereas Gastrocnemius forces were significantly larger during barefoot running. These observations provide further insight into the mechanical alterations that runners make when running without shoes. Such findings may also deliver important information to runners regarding their susceptibility to chronic injuries in different footwear conditions.

  18. Modelling of Muscle Force Distributions During Barefoot and Shod Running.

    Science.gov (United States)

    Sinclair, Jonathan; Atkins, Stephen; Richards, Jim; Vincent, Hayley

    2015-09-29

    Research interest in barefoot running has expanded considerably in recent years, based around the notion that running without shoes is associated with a reduced incidence of chronic injuries. The aim of the current investigation was to examine the differences in the forces produced by different skeletal muscles during barefoot and shod running. Fifteen male participants ran at 4.0 m·s-1 (± 5%). Kinematics were measured using an eight-camera motion analysis system alongside ground reaction force parameters. Differences in sagittal plane kinematics and muscle forces between footwear conditions were examined using repeated measures or Friedman's ANOVA. The kinematic analysis showed that the shod condition was associated with significantly more hip flexion, whilst barefoot running was linked with significantly more flexion at the knee and plantarflexion at the ankle. The examination of muscle kinetics indicated that peak forces from Rectus femoris, Vastus medialis, Vastus lateralis and Tibialis anterior were significantly larger in the shod condition, whereas Gastrocnemius forces were significantly larger during barefoot running. These observations provide further insight into the mechanical alterations that runners make when running without shoes. Such findings may also deliver important information to runners regarding their susceptibility to chronic injuries in different footwear conditions.

  19. An automated and reproducible workflow for running and analyzing neural simulations using Lancet and IPython Notebook.

    Science.gov (United States)

    Stevens, Jean-Luc R; Elver, Marco; Bednar, James A

    2013-01-01

    Lancet is a new, simulator-independent Python utility for succinctly specifying, launching, and collating results from large batches of interrelated computationally demanding program runs. This paper demonstrates how to combine Lancet with IPython Notebook to provide a flexible, lightweight, and agile workflow for fully reproducible scientific research. This informal and pragmatic approach uses IPython Notebook to capture the steps in a scientific computation as it is gradually automated and made ready for publication, without mandating the use of any separate application that can constrain scientific exploration and innovation. The resulting notebook concisely records each step involved in even very complex computational processes that led to a particular figure or numerical result, allowing the complete chain of events to be replicated automatically. Lancet was originally designed to help solve problems in computational neuroscience, such as analyzing the sensitivity of a complex simulation to various parameters, or collecting the results from multiple runs with different random starting points. However, because it is never possible to know in advance what tools might be required in future tasks, Lancet has been designed to be completely general, supporting any type of program as long as it can be launched as a process and can return output in the form of files. For instance, Lancet is also heavily used by one of the authors in a separate research group for launching batches of microprocessor simulations. This general design will allow Lancet to continue supporting a given research project even as the underlying approaches and tools change.
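The batch-launching pattern the abstract describes (specify a parameter space, launch one process per point, collect the output files) can be sketched generically. This is a plain-Python illustration of the idea, not the actual Lancet API; the command template and parameter names are hypothetical.

```python
import itertools
import subprocess
from pathlib import Path

def run_batch(cmd_template, param_grid, out_dir):
    """Launch one process per parameter combination and collect the
    output files, in the spirit of Lancet's declarative batch
    specification (a generic sketch, not Lancet itself)."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    results = {}
    keys = sorted(param_grid)
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        tag = "_".join(f"{k}{v}" for k, v in sorted(params.items()))
        out_file = out_dir / f"run_{tag}.txt"
        # substitute parameters into the command template, e.g. "{rate}"
        cmd = [part.format(**params) for part in cmd_template]
        with open(out_file, "w") as fh:
            subprocess.run(cmd, stdout=fh, check=True)
        results[tag] = out_file
    return results

# e.g. sweep a (hypothetical) simulator over two parameters:
# run_batch(["python", "sim.py", "--rate", "{rate}", "--seed", "{seed}"],
#           {"rate": [0.1, 0.5], "seed": [1, 2]}, "results/")
```

Because each run is identified only by its parameter tag and its output file, the collation step stays independent of what the launched program actually is, which is the property the abstract emphasizes.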

  20. Dynamical system approach to running Λ cosmological models

    Energy Technology Data Exchange (ETDEWEB)

    Stachowski, Aleksander [Jagiellonian University, Astronomical Observatory, Krakow (Poland); Szydlowski, Marek [Jagiellonian University, Astronomical Observatory, Krakow (Poland); Jagiellonian University, Mark Kac Complex Systems Research Centre, Krakow (Poland)

    2016-11-15

    We study the dynamics of cosmological models with a time-dependent cosmological term. We consider five classes of models: two with a non-covariant parametrization of the cosmological term Λ, namely Λ(H)CDM and Λ(a)CDM cosmologies; and three with a covariant parametrization of Λ: Λ(R)CDM cosmologies, where R(t) is the Ricci scalar, Λ(φ)-cosmologies with diffusion, and Λ(X)-cosmologies, where X = (1/2)g^{αβ}∇_{α}φ∇_{β}φ is the kinetic part of the density of the scalar field. We also consider the case of an emergent Λ(a) relation obtained from the behaviour of trajectories in a neighbourhood of an invariant submanifold. In the study of the dynamics we use dynamical system methods to investigate how an evolutionary scenario can depend on the choice of special initial conditions. We show that the methods of dynamical systems allow one to investigate all admissible solutions of a running Λ cosmology for all initial conditions. We interpret Alcaniz and Lima's approach as a scaling cosmology. We formulate the idea of an emergent cosmological term derived directly from an approximation of the exact dynamics. We show that some non-covariant parametrizations of the cosmological term, like Λ(a) and Λ(H), give rise to non-physical behaviour of trajectories in the phase space. This behaviour disappears if the term Λ(a) is emergent from the covariant parametrization. (orig.)
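As a minimal worked example of such a dynamical-system reduction (assuming the simple non-covariant parametrization $\Lambda(H) = \Lambda_0 + \alpha H^2$, which is illustrative rather than one of the paper's specific cases), combine the Friedmann constraint for dust, $3H^2 = \rho_m + \Lambda(H)$ (units $8\pi G = 1$), with $\dot H = -\tfrac{1}{2}\rho_m$, which holds because the $\Lambda$ component has $p_\Lambda = -\rho_\Lambda$:

```latex
\dot H \;=\; -\tfrac{1}{2}\,\rho_m
       \;=\; -\tfrac{1}{2}\bigl(3H^2 - \Lambda_0 - \alpha H^2\bigr)
       \;=\; -\tfrac{3-\alpha}{2}\,H^2 + \tfrac{\Lambda_0}{2}.
```

For $\alpha < 3$ this one-dimensional system has a stable fixed point at $H_* = \sqrt{\Lambda_0/(3-\alpha)}$, the de Sitter attractor: every expanding trajectory can be classified by its position relative to $H_*$, which is the kind of global phase-space analysis the abstract refers to.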

  1. A two-runners model: optimization of running strategies according to the physiological parameters

    CERN Document Server

    Aftalion, Amandine

    2015-01-01

    In order to describe the velocity and the anaerobic energy of two runners competing against each other in middle-distance races, we present a mathematical model relying on an optimal control problem for a system of ordinary differential equations. The model is based on energy conservation and on Newton's second law: resistive forces, propulsive forces and variations in the maximal oxygen uptake are taken into account. The interaction between the runners makes it advantageous to stay about one meter behind one's competitor. We perform numerical simulations and show how a runner can win a race against someone stronger by taking advantage of staying behind, or how he can improve his personal record by running behind someone else. Our simulations show when it is the best time to overtake, depending on the difference between the athletes. Finally, we compare our numerical results with real data from the men's 1500 m finals of different competitions.
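The benefit of drafting can be illustrated with a Keller-type energy balance of the kind the abstract describes: a runner holding a given speed pays a propulsive force against resistance, and drafting scales that resistance down. This is a hand-built sketch with assumed parameter values, not the authors' optimal-control model.

```python
def simulate_runner(target_v, tau=1.0, sigma=22.0, e0=2000.0,
                    drafting=1.0, dt=0.01, t_max=600.0):
    """Holding speed target_v requires force f = drafting * target_v / tau
    (drafting < 1 mimics reduced air resistance behind a competitor).
    The anaerobic reserve e drains at rate f*v - sigma, where sigma is
    the aerobic resupply rate. Returns distance covered before e hits 0.
    All parameter values are illustrative, not fitted to athletes."""
    f = drafting * target_v / tau
    e, x, t = e0, 0.0, 0.0
    while t < t_max and e > 0.0:
        e += (sigma - f * target_v) * dt   # energy balance
        x += target_v * dt                 # distance covered
        t += dt
    return x

x_front = simulate_runner(8.0)                  # leading the race
x_behind = simulate_runner(8.0, drafting=0.9)   # sheltered behind a rival
```

At the same pace, the sheltered runner's anaerobic reserve lasts longer, so the distance sustainable at 8 m/s is larger; this is the mechanism behind improving a personal record by running behind someone else.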

  2. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to a model's relation to the real world and its intended use. It has been argued that, because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed when the model fails. I argue that in order to disentangle verification and validation, a clear distinction needs to be made between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them). Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviating the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  3. Theory Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Shlachter, Jack [Los Alamos National Laboratory

    2012-08-23

    Los Alamos has a long history in theory, modeling and simulation. We focus on multidisciplinary teams that tackle complex problems. Theory, modeling and simulation are tools to solve problems just like an NMR spectrometer, a gas chromatograph or an electron microscope. Problems should be used to define the theoretical tools needed and not the other way around. Best results occur when theory and experiments are working together in a team.

  4. HectoMAP and Horizon Run 4: Dense Structures and Voids in the Real and Simulated Universe

    CERN Document Server

    Hwang, Ho Seong; Park, Changbom; Fabricant, Daniel G; Kurtz, Michael J; Rines, Kenneth J; Kim, Juhan; Diaferio, Antonaldo; Zahid, H Jabran; Berlind, Perry; Calkins, Michael; Tokarz, Susan; Moran, Sean

    2016-01-01

    HectoMAP is a dense redshift survey of red galaxies covering a 53 $deg^{2}$ strip of the northern sky. HectoMAP is 97% complete for galaxies with $r<21.3$, $(g-r)>1.0$, and $(r-i)>0.5$. The survey enables tests of the physical properties of large-scale structure at intermediate redshift against cosmological models. We use the Horizon Run 4, one of the densest and largest cosmological simulations based on the standard $\Lambda$ Cold Dark Matter ($\Lambda$CDM) model, to compare the physical properties of observed large-scale structures with simulated ones in a volume-limited sample covering $8\times10^6$ $h^{-3}$ Mpc$^3$ in the redshift range $0.22<z<0.44$. We apply the same criteria to observations and simulations to identify over- and under-dense large-scale features of the galaxy distribution. The richness and size distributions of observed over-dense structures agree well with the simulated ones. Observations and simulations also agree for the volume and size distributions of under-dense structures, voids. The ...

  5. AschFlow - A dynamic landslide run-out model for medium scale hazard analysis.

    Science.gov (United States)

    Luna, Byron Quan; Blahut, Jan; van Asch, Theo; van Westen, Cees; Kappes, Melanie

    2015-04-01

    Landslide and debris flow hazard assessments require a scale-dependent analysis in order to mitigate damage and other negative consequences at the respective scales of occurrence. Medium- or large-scale landslide run-out modelling for many possible landslide initiation areas has been a cumbersome task in the past. This arises from the difficulty of precisely defining the location and volume of the released mass, and from the inability of run-out models to compute the displacement for a large number of individual initiation areas (computationally exhaustive). Most existing physically based run-out models have complications in handling such situations, and empirical methods have therefore been used as a practical means to predict landslide mobility at a medium scale (1:10,000 to 1:50,000). In this context, a simple medium-scale numerical model for rapid mass movements in urban and mountainous areas was developed. The deterministic nature of the approach makes it possible to calculate the velocity, height and increase in mass by erosion, resulting in the estimation of the various forms of impact exerted by debris flows at the medium scale. The established and implemented model ("AschFlow") is a 2-D one-phase continuum model that simulates the entrainment, spreading and deposition process of a landslide or debris flow at a medium scale. The flow is treated as a single-phase material whose behavior is controlled by rheology (e.g. Voellmy or Bingham). The developed regional model "AschFlow" was applied and evaluated in well documented areas with known past debris flow events.
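The Voellmy rheology mentioned above combines a Coulomb friction term with a turbulent (velocity-squared) drag term for the basal shear resistance. A minimal sketch of that closure, with assumed density and parameter values (this is the textbook Voellmy form, not AschFlow's internal implementation):

```python
import math

RHO = 2000.0   # flow density [kg/m^3] (assumed)
G = 9.81       # gravitational acceleration [m/s^2]

def voellmy_resistance(h, v, slope_deg, mu=0.1, xi=500.0):
    """Basal shear resistance of the Voellmy rheology:
         tau = mu * rho * g * h * cos(theta)   (Coulomb friction)
             + rho * g * v**2 / xi             (turbulent drag)
    h: flow depth [m], v: flow speed [m/s], slope_deg: bed slope.
    mu (friction coefficient) and xi (turbulence coefficient [m/s^2])
    are the two calibration parameters."""
    theta = math.radians(slope_deg)
    coulomb = mu * RHO * G * h * math.cos(theta)
    turbulent = RHO * G * v * v / xi
    return coulomb + turbulent
```

The Coulomb term dominates at low speed and controls where the flow stops, while the turbulent term limits the peak velocities; this split is why the two parameters are usually calibrated jointly against run-out distance and flow speed.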

  6. Simulation modeling of carcinogenesis.

    Science.gov (United States)

    Ellwein, L B; Cohen, S M

    1992-03-01

    A discrete-time simulation model of carcinogenesis is described mathematically using recursive relationships between time-varying model variables. The dynamics of cellular behavior is represented within a biological framework that encompasses two irreversible and heritable genetic changes. Empirical data and biological supposition dealing with both control and experimental animal groups are used together to establish values for model input variables. The estimation of these variables is integral to the simulation process as described in step-by-step detail. Hepatocarcinogenesis in male F344 rats provides the basis for seven modeling scenarios which illustrate the complexity of relationships among cell proliferation, genotoxicity, and tumor risk.
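The recursive structure of such a two-event model can be sketched as a discrete-time recursion over three cell populations. The rates and initial values below are placeholders for illustration, not the fitted hepatocarcinogenesis parameters from the paper.

```python
def simulate_two_stage(n_steps, n0=1e6, growth=0.001, mu1=1e-6, mu2=1e-6):
    """Discrete-time two-stage model: normal cells acquire a first
    irreversible, heritable change at per-step rate mu1; initiated cells
    acquire the second change at rate mu2; both populations proliferate
    at net rate `growth` per step. Illustrative recursion only."""
    normal, initiated, malignant = n0, 0.0, 0.0
    history = []
    for _ in range(n_steps):
        new_initiated = normal * mu1
        new_malignant = initiated * mu2
        normal = normal * (1 + growth) - new_initiated
        initiated = initiated * (1 + growth) + new_initiated - new_malignant
        malignant += new_malignant
        history.append((normal, initiated, malignant))
    return history

history = simulate_two_stage(1000)
```

Scenario studies of the kind the abstract mentions amount to varying `growth` (cell proliferation) and `mu1`/`mu2` (genotoxicity) and comparing the resulting malignant-cell trajectories.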

  7. An automated and reproducible workflow for running and analyzing neural simulations using Lancet and IPython Notebook

    Directory of Open Access Journals (Sweden)

    Jean-Luc Richard Stevens

    2013-12-01

    Full Text Available Lancet is a new, simulator-independent Python utility for succinctly specifying, launching, and collating results from large batches of interrelated computationally demanding program runs. This paper demonstrates how to combine Lancet with IPython Notebook to provide a flexible, lightweight, and agile workflow for fully reproducible scientific research. This informal and pragmatic approach uses IPython Notebook to capture the steps in a scientific computation as it is gradually automated and made ready for publication, without mandating the use of any separate application that can constrain scientific exploration and innovation. The resulting notebook concisely records each step involved in even very complex computational processes that led to a particular figure or numerical result, allowing the complete chain of events to be replicated automatically. Lancet was originally designed to help solve problems in computational neuroscience, such as analyzing the sensitivity of a complex simulation to various parameters, or collecting the results from multiple runs with different random starting points. However, because it is never possible to know in advance what tools might be required in future tasks, Lancet has been designed to be completely general, supporting any type of program as long as it can be launched as a process and can return output in the form of files. For instance, Lancet is also heavily used by one of the authors in a separate research group for launching batches of microprocessor simulations. This general design will allow Lancet to continue supporting a given research project even as the underlying approaches and tools change.

  8. Kinetic study of run-away burn in ICF capsule using a quasi-1D model

    Science.gov (United States)

    Huang, Chengkun; Molvig, K.; Albright, B. J.; Dodd, E. S.; Hoffman, N. M.; Vold, E. L.; Kagan, G.

    2016-10-01

    The effect of reduced fusion reactivity resulting from the loss of fuel ions in the Gamow peak during the ignition, run-away burn and disassembly stages of an inertial confinement fusion D-T capsule is investigated with a quasi-1D hybrid model that includes kinetic ions, fluid electrons and Planckian radiation photons. The fuel-ion loss through the Knudsen effect at the fuel-pusher interface is accounted for by the local-loss model developed by Molvig et al. The tail refilling and relaxation of the fuel-ion distribution are evolved with a nonlinear Fokker-Planck solver. The Krokhin & Rozanov model is used for the finite alpha range beyond the fuel region, while alpha heating of the fuel ions and the fluid electrons is modeled kinetically. For an energetic pusher (40 kJ), the simulation shows that the reduced fusion reactivity can lead to a substantially lower ion temperature during run-away burn, while the final yield decreases more modestly. Possible improvements to the present model, including non-Planckian radiation emission and alpha-driven fuel disassembly, are discussed. Work performed under the auspices of the U.S. DOE by LANS, LLC, Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396. Work supported by the ASC TBI project at LANL.

  9. Approaches in highly parameterized inversion - GENIE, a general model-independent TCP/IP run manager

    Science.gov (United States)

    Muffels, Christopher T.; Schreuder, Willem A.; Doherty, John E.; Karanovic, Marinko; Tonkin, Matthew J.; Hunt, Randall J.; Welter, David E.

    2012-01-01

    GENIE is a model-independent suite of programs that can be used to generally distribute, manage, and execute multiple model runs via the TCP/IP infrastructure. The suite consists of a file distribution interface, a run manager, a run executer, and a routine that can be compiled as part of a program and used to exchange model runs with the run manager. Because communication is via a standard protocol (TCP/IP), any computer connected to the Internet can serve in any of the capacities offered by this suite. Model independence is consistent with the existing template and instruction file protocols of the widely used PEST parameter estimation program. This report describes (1) the problem addressed; (2) the approach used by GENIE to queue, distribute, and retrieve model runs; and (3) user instructions, classes, and functions developed. It also includes (4) an example to illustrate the linking of GENIE with Parallel PEST using the interface routine.
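The queue/distribute/retrieve cycle the report describes can be illustrated with a toy manager in which threads stand in for the TCP-connected run executers. This is a sketch of the scheduling pattern only; GENIE itself communicates over TCP/IP and is independent of the model being run.

```python
import queue
import threading

def run_manager(model_runs, n_workers=4):
    """Queue, distribute, and retrieve model runs, mirroring the
    manager/executer split: each worker pulls the next queued run
    as soon as it is free and reports the result back."""
    tasks = queue.Queue()
    results = {}
    lock = threading.Lock()

    for run_id, run in enumerate(model_runs):
        tasks.put((run_id, run))

    def executer():
        while True:
            try:
                run_id, run = tasks.get_nowait()
            except queue.Empty:
                return                 # queue drained: worker exits
            outcome = run()            # execute the model run
            with lock:
                results[run_id] = outcome

    workers = [threading.Thread(target=executer) for _ in range(n_workers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return results

# Each "model run" here is just a callable; in GENIE it would be a
# parameterized forward-model execution handed to a remote executer.
results = run_manager([lambda f=f: f * f for f in range(8)])
```

Because workers pull runs on demand, a slow executer simply completes fewer runs instead of stalling the whole batch, which is the property that makes this layout attractive for parameter-estimation workloads like Parallel PEST.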

  10. Predictive modelling of running and dwell times in railway traffic

    NARCIS (Netherlands)

    Kecman, P.; Goverde, R.M.P.

    2015-01-01

    Accurate estimation of running and dwell times is important for all levels of planning and control of railway traffic. The availability of historical track occupation data with a high degree of granularity inspired a data-driven approach for estimating these process times. In this paper we present

  11. Modelling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Casetti, E.; Vogt, W.G.; Mickle, M.H.

    1984-01-01

    This conference includes papers on the uses of supercomputers, multiprocessors, artificial intelligence and expert systems in various energy applications. Topics considered include knowledge-based expert systems for power engineering, a solar air conditioning laboratory computer system, multivariable control systems, the impact of power system disturbances on computer systems, simulating shared-memory parallel computers, real-time image processing with multiprocessors, and network modeling and simulation of greenhouse solar systems.

  12. Validation of simulation models

    DEFF Research Database (Denmark)

    Rehman, Muniza; Pedersen, Stig Andur

    2012-01-01

    In philosophy of science, the interest for computational models and simulations has increased heavily during the past decades. Different positions regarding the validity of models have emerged, but the views have not succeeded in capturing the diversity of validation methods. The discussion has been somewhat narrow-minded, reducing the notion of validation to the establishment of truth. The wide variety of models with regard to their purpose, character, field of application and time dimension inherently calls for a similar diversity in validation approaches. A classification of models in terms of the mentioned elements is presented and used to shed light on possible types of validation. This article puts forward the diversity in applications of simulation models that demands a corresponding diversity in the notion of validation.

  13. Hit-And-Run enables efficient weight generation for simulation-based multiple criteria decision analysis

    NARCIS (Netherlands)

    Tervonen, Tommi; van Valkenhoef, Gert; Basturk, Nalan; Postmus, Douwe

    2013-01-01

    Models for Multiple Criteria Decision Analysis (MCDA) often separate per-criterion attractiveness evaluation from weighted aggregation of these evaluations across the different criteria. In simulation-based MCDA methods, such as Stochastic Multicriteria Acceptability Analysis, uncertainty in the wei
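A minimal sketch of how Hit-And-Run draws weight vectors uniformly from the simplex (w_i >= 0, sum(w) = 1): pick a random direction inside the simplex's hyperplane, find the feasible chord through the current point, and jump to a uniform point on it. The unconstrained simplex is shown; the paper's setting also admits additional preference constraints, which would simply tighten the chord bounds.

```python
import random

def hit_and_run_weights(n_criteria, n_samples, burn_in=100, seed=1):
    """Hit-And-Run sampler for criterion weights w_i >= 0, sum(w) = 1."""
    rng = random.Random(seed)
    w = [1.0 / n_criteria] * n_criteria        # start at the centre
    samples = []
    for step in range(burn_in + n_samples):
        # random direction, projected so that sum(w) stays 1
        d = [rng.gauss(0.0, 1.0) for _ in range(n_criteria)]
        mean = sum(d) / n_criteria
        d = [di - mean for di in d]
        # chord limits from w_i + t*d_i >= 0 for every i
        lo, hi = float("-inf"), float("inf")
        for wi, di in zip(w, d):
            if di > 1e-12:
                lo = max(lo, -wi / di)
            elif di < -1e-12:
                hi = min(hi, -wi / di)
        t = rng.uniform(lo, hi)                # uniform point on the chord
        w = [wi + t * di for wi, di in zip(w, d)]
        if step >= burn_in:
            samples.append(list(w))
    return samples
```

Each accepted state is a valid weight vector, so a simulation-based MCDA method can feed the chain's output directly into its per-criterion aggregation step.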

  15. Short-run and Current Analysis Model in Statistics

    Directory of Open Access Journals (Sweden)

    Constantin Anghelache

    2006-01-01

    Full Text Available Using the short-run statistical indicators is a compulsory requirement implied in the current analysis. Therefore, a system of EUROSTAT short-run indicators has been set up in this respect, being recommended for utilization by the member countries. On the basis of these indicators, regular, usually monthly, analyses are carried out in respect of: the determination of production dynamics; the evaluation of the short-run investment volume; the development of the turnover; the wage evolution; the employment; the price indexes and the consumer price index (inflation); and the volume of exports and imports, the extent to which imports are covered by exports, and the balance of trade. The EUROSTAT system of indicators of conjuncture is conceived as an open system, so that it can be extended or restricted at any moment, allowing indicators to be amended or even removed, depending on the domestic users' requirements as well as on the specific requirements of harmonization and integration. For short-run analysis there is also the World Bank system of indicators of conjuncture, which relies on the data sources offered by the World Bank, the World Institute for Resources, or the statistics of other international organizations. The system comprises indicators of social and economic development and focuses on indicators for the following three fields: human resources, environment and economic performances. At the end of the paper, there is a case study on the situation of Romania, for which all these indicators were used.

  16. Short-run and Current Analysis Model in Statistics

    Directory of Open Access Journals (Sweden)

    Constantin Mitrut

    2006-03-01

    Full Text Available Using the short-run statistical indicators is a compulsory requirement implied in the current analysis. Therefore, a system of EUROSTAT short-run indicators has been set up in this respect, being recommended for utilization by the member countries. On the basis of these indicators, regular, usually monthly, analyses are carried out in respect of: the determination of production dynamics; the evaluation of the short-run investment volume; the development of the turnover; the wage evolution; the employment; the price indexes and the consumer price index (inflation); and the volume of exports and imports, the extent to which imports are covered by exports, and the balance of trade. The EUROSTAT system of indicators of conjuncture is conceived as an open system, so that it can be extended or restricted at any moment, allowing indicators to be amended or even removed, depending on the domestic users' requirements as well as on the specific requirements of harmonization and integration. For short-run analysis there is also the World Bank system of indicators of conjuncture, which relies on the data sources offered by the World Bank, the World Institute for Resources, or the statistics of other international organizations. The system comprises indicators of social and economic development and focuses on indicators for the following three fields: human resources, environment and economic performances. At the end of the paper, there is a case study on the situation of Romania, for which all these indicators were used.

  17. Price Dispersion and Short Run Equilibrium in a Queuing Model

    OpenAIRE

    Michael Sattinger

    2003-01-01

    Price dispersion is analyzed in the context of a queuing market where customers enter queues to acquire a good or service and may experience delays. With menu costs, price dispersion arises and can persist in the medium and long run. The queuing market rations goods in the same way whether firm prices are optimal or not. Price dispersion reduces the rate at which customers get the good and reduces customer welfare.

  18. Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond

    Science.gov (United States)

    Bonacorsi, D.; Boccali, T.; Giordano, D.; Girone, M.; Neri, M.; Magini, N.; Kuznetsov, V.; Wildish, T.

    2015-12-01

    During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collision data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, and transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the Worldwide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulations of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This is of great importance, given that the scale of the computing problem will increase far faster than the resources available to the experiments, for Run-2 and beyond. Studying data-access patterns involves the validation of the quality of the monitoring data collected on the "popularity" of each dataset, the analysis of the frequency and pattern of accesses to different datasets by analysis end-users, the exploration of different views of the popularity data (by physics activity, by region, by data type), the study of the evolution of Run-1 data exploitation over time, and the evaluation of the impact of different data placement and distribution choices on the available network and storage resources and their impact on the computing operations. This work presents some insights from studies on the popularity data from the CMS experiment. We present the properties of a range of physics analysis activities as seen by the data popularity, and make recommendations for

  19. Debris Flow Run-out Process Simulation Based on GIS Technology and the Voellmy Model

    Institute of Scientific and Technical Information of China (English)

    陈伟

    2014-01-01

    Debris flow is characterized by rapid speed and strong destructive force, and it poses a great threat to the development and construction of mountain towns and to people's lives and property. At present, analyzing debris flow run-out is a hotspot and a difficulty in debris flow research. Based on GIS technology and an approximate Voellmy solver within continuum theory, and taking a typical debris flow gully as a study example, this paper builds a debris flow dynamic model and uses numerical simulation to analyze the whole process of debris flow movement and deposition, including the maximum velocity, pressure, discharge, deposit height and deposit area, in order to support debris flow hazard assessment, zoning and prevention.

  20. Simple, efficient allocation of modelling runs on heterogeneous clusters with MPI

    Science.gov (United States)

    Donato, David I.

    2017-01-01

    In scientific modelling and computation, the choice of an appropriate method for allocating tasks for parallel processing depends on the computational setting and on the nature of the computation. The allocation of independent but similar computational tasks, such as modelling runs or Monte Carlo trials, among the nodes of a heterogeneous computational cluster is a special case that has not been specifically evaluated previously. A simulation study shows that a method of on-demand (that is, worker-initiated) pulling from a bag of tasks in this case leads to reliably short makespans for computational jobs despite heterogeneity both within and between cluster nodes. A simple reference implementation in the C programming language with the Message Passing Interface (MPI) is provided.
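The advantage of on-demand pulling over static pre-assignment on a heterogeneous cluster can be illustrated with a toy makespan comparison. The paper's reference implementation is C with MPI; the simulation below is a separate illustration with assumed task costs and worker speeds, not the author's code.

```python
import heapq
import random

def makespan_static(task_costs, worker_speeds):
    """Round-robin pre-assignment: each worker receives a fixed share
    of the tasks up front, regardless of its speed."""
    finish = [0.0] * len(worker_speeds)
    for i, cost in enumerate(task_costs):
        w = i % len(worker_speeds)
        finish[w] += cost / worker_speeds[w]
    return max(finish)

def makespan_pull(task_costs, worker_speeds):
    """On-demand (worker-initiated) pulling from a bag of tasks:
    the next free worker takes the next task."""
    free = [(0.0, w) for w in range(len(worker_speeds))]  # (free-at, id)
    heapq.heapify(free)
    for cost in task_costs:
        t, w = heapq.heappop(free)
        heapq.heappush(free, (t + cost / worker_speeds[w], w))
    return max(t for t, _ in free)

rng = random.Random(0)
costs = [rng.uniform(1.0, 10.0) for _ in range(200)]  # heterogeneous tasks
speeds = [1.0, 1.0, 4.0]                              # heterogeneous nodes

slow = makespan_static(costs, speeds)
fast = makespan_pull(costs, speeds)
```

With static assignment, the slow nodes finish long after the fast one; with pulling, the fast node simply absorbs more tasks, so the makespan tracks total work divided by total speed.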

  1. DWPF FLOWSHEET STUDIES WITH SIMULANTS TO DETERMINE MCU SOLVENT BUILD-UP IN CONTINUOUS RUNS

    Energy Technology Data Exchange (ETDEWEB)

    Lambert, D.; Williams, F.; Crump, S.; Eibling, R.; White, T.; Best, D.

    2006-05-25

    quantify the organic distribution in the CPC vessels. The earlier rounds of testing used a Sludge Batch 4 (SB4) simulant, since it was anticipated that both of these facilities would begin salt processing during SB4 processing. The same sludge simulant recipe was used in this round of MCU testing to minimize the number of changes between the three phases of testing, so that a better comparison could be made. The MCU stream simulant was fabricated to perform the testing. The MCU stream represented the "Maximum Volume" case from the material balances provided by Campbell. ARP addition was not performed during this set of runs since the ARP evaluation had been completed in earlier runs. The MCU stream was added at boiling during the normal reflux phase of the SRAT cycle. SRAT cycle completion corresponded to the end of MCU stream addition. A total of ten 4-liter SRAT runs were performed to meet the objectives of the testing. The first series of five tests evaluated the organic partitioning and mass balance for the addition of 50 mg/kg solvent. The second series of five tests evaluated the organic partitioning and mass balance for the addition of 125 mg/kg solvent. A solvent concentration of 50 mg/kg is close to the nominal concentration anticipated in the effluent from the Salt Waste Processing Facility (SWPF). The organic solvent used in the testing was fabricated by the Chemical Science & Technology section. BOBCalixC6 was not added to this solvent due to its high cost and limited availability. All runs targeted 150% acid stoichiometry and 1% Hg in the sludge slurry dried solids.

  2. Numerical simulation of the pollution formed by exhaust jets at the ground running procedure

    Science.gov (United States)

    Korotaeva, T. A.; Turchinovich, A. O.

    2016-10-01

    The paper presents an approach that is new to aviation-related ecology. The approach makes it possible to determine the spatial distribution of pollutant concentrations released during the engine ground running procedure (GRP) using full gas-dynamic models. For the first time, this problem is modeled in a three-dimensional approximation within the framework of a numerical solution of the Navier-Stokes equations, taking into account a kinetic model of the interaction between the components of the engine exhaust and air. The complex gas-dynamic flow pattern that arises around an aircraft whose exhaust jets interact with each other, with the air, with the jet blast deflector (JBD), and with the surface of the airplane is studied in the present work. The numerical technique developed for calculating the concentrations of pollutants produced at the GRP stage makes it possible to determine the level, character, and area of contamination more reliably and to increase the accuracy with which sanitary protection zones are defined.
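The full study solves the three-dimensional Navier-Stokes equations with exhaust chemistry; as a much simpler illustration of how a pollutant concentration field is marched forward in time, here is a one-dimensional explicit advection-diffusion step (all parameter values are illustrative assumptions, not taken from the paper):

```python
def advect_diffuse(c, u, D, dx, dt, steps):
    """March a 1-D pollutant concentration profile forward in time with
    upwind advection (wind speed u >= 0) and central diffusion (D).
    Explicit scheme: stable and positive for u*dt/dx + 2*D*dt/dx**2 <= 1."""
    c = list(c)
    n = len(c)
    for _ in range(steps):
        new = c[:]
        for i in range(1, n - 1):
            adv = -u * (c[i] - c[i - 1]) / dx                   # upwind
            dif = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx**2  # central
            new[i] = c[i] + dt * (adv + dif)
        new[0], new[-1] = c[0], c[-1]   # hold boundary values fixed
        c = new
    return c
```

A unit pulse released near the engine drifts downwind and spreads, while the total released mass is conserved away from the boundaries.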

  3. Multiple-step model-experiment matching allows precise definition of dynamical leg parameters in human running.

    Science.gov (United States)

    Ludwig, C; Grimmer, S; Seyfarth, A; Maus, H-M

    2012-09-21

    The spring-loaded inverted pendulum (SLIP) model is a well-established model for describing bouncy gaits like human running. The notion of spring-like leg behavior has led many researchers to compute the corresponding parameters, predominantly stiffness, in various experimental setups and in various ways. However, different methods yield different results, making comparison between studies difficult. Further, a model simulation with experimentally obtained leg parameters typically results in comparatively large differences between the model and experimental center-of-mass trajectories. Here, we pursue the opposite approach, which is to calculate the model parameters that allow reproduction of an experimental sequence of steps. In addition, to capture energy fluctuations, an extension of the SLIP (ESLIP) is required and presented. The excellent match of the models with the experiment validates the description of human running by the SLIP with the obtained parameters, which we hence call dynamical leg parameters.
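For readers unfamiliar with the SLIP, a minimal stance-phase integration looks like the sketch below: a point mass rides on a massless spring pinned at the foot, integrated until the leg returns to its rest length. The parameter values are illustrative assumptions, not the dynamical leg parameters fitted in the paper.

```python
import math

def slip_stance(m=80.0, k=20000.0, L0=1.0, g=9.81,
                x0=-0.15, vx0=2.0, vy0=-1.0, dt=1e-4):
    """Stance phase of a spring-loaded inverted pendulum (SLIP): a point
    mass on a massless linear spring whose foot is pinned at the origin.
    Semi-implicit Euler integration until takeoff (leg back at rest
    length and lengthening). Returns the takeoff state (x, y, vx, vy)."""
    x = x0
    y = math.sqrt(L0**2 - x0**2)      # touchdown with an uncompressed leg
    vx, vy = vx0, vy0
    for _ in range(int(2.0 / dt)):    # safety cap on stance duration
        L = math.hypot(x, y)
        if L >= L0 and x * vx + y * vy > 0:
            break                     # takeoff: leg extended and extending
        F = k * (L0 - L)              # spring force along the leg axis
        ax = F * x / (L * m)
        ay = F * y / (L * m) - g
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x, y, vx, vy
```

For these touchdown conditions the spring compresses, the mass passes over the foot, and the model takes off with upward vertical velocity, i.e. the bouncy stance the abstract describes.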

  4. Improving NPP availability using thermalhydraulic integral plant models. Assessment and application of turbine run back scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Reventos, F. [ANACNV, l' Hospitalet de l' Infant, Tarragona (Spain)]|[Technical University of Catalonia, UPC (Spain); Llopis, C.; Pretel, C. [Technical University of Catalonia, UPC (Spain); Posada, J.M.; Moreno, P. [Pablo Moreno S.A. (Spain)

    2001-07-01

    ANAV is the utility responsible for the Asco and Vandellos Nuclear Power Plants, a two-unit and a single-unit 1000 MW PWR plant, respectively. The plants have been in normal operation since 1983 and 1987, respectively, and have undergone important improvements such as steam generator and turbine replacement and power up-rating. Best-estimate simulation by means of thermal-hydraulic integral models of operating nuclear power plants is today extremely helpful for utilities in their effort to improve availability while maintaining safety levels. ANAV is currently using Relap5/mod3.2 models of both plants for different purposes related to safety, operation, engineering and training. The turbine run-back system is designed to avoid reactor trips, and it does so in the existing plants when its key parameters are correctly adjusted. The fine adjustment of such parameters was traditionally performed following the results of control simulators, which used a fully developed set of control equations but a quite simplified thermal-hydraulic feedback. Boundary scenarios were considered in order to overcome the difficulties generated by this simplification. (author)

  5. Non-linear structure formation in the `Running FLRW' cosmological model

    Science.gov (United States)

    Bibiano, Antonio; Croton, Darren J.

    2016-07-01

    We present a suite of cosmological N-body simulations describing the `Running Friedmann-Lemaître-Robertson-Walker' (R-FLRW) cosmological model. This model is based on quantum field theory in a curved space-time and extends Lambda cold dark matter (ΛCDM) with a time-evolving vacuum density, Λ(z), and time-evolving gravitational Newton's coupling, G(z). In this paper, we review the model and introduce the necessary analytical treatment needed to adapt a reference N-body code. Our resulting simulations represent the first realization of the full growth history of structure in the R-FLRW cosmology into the non-linear regime, and our normalization choice makes them fully consistent with the latest cosmic microwave background data. The post-processing data products also allow, for the first time, an analysis of the properties of the halo and sub-halo populations. We explore the degeneracies of many statistical observables and discuss the steps needed to break them. Furthermore, we provide a quantitative description of the deviations of R-FLRW from ΛCDM, which could be readily exploited by future cosmological observations to test and further constrain the model.

  6. Active site modeling in copper azurin molecular dynamics simulations

    NARCIS (Netherlands)

    Rizzuti, B; Swart, M; Sportelli, L; Guzzi, R

    2004-01-01

    Active site modeling in molecular dynamics simulations is investigated for the reduced state of copper azurin. Five simulation runs (5 ns each) were performed at room temperature to study the consequences of a mixed electrostatic/constrained modeling for the coordination between the metal and the po

  7. File Specification for the 7-km GEOS-5 Nature Run, Ganymed Release Non-Hydrostatic 7-km Global Mesoscale Simulation

    Science.gov (United States)

    da Silva, Arlindo M.; Putman, William; Nattala, J.

    2014-01-01

    details about variables listed in this file specification can be found in a separate document, the GEOS-5 File Specification Variable Definition Glossary. Documentation about the current access methods for products described in this document can be found on the GEOS-5 Nature Run portal: http://gmao.gsfc.nasa.gov/projects/G5NR. Information on the scientific quality of this simulation will appear in a forthcoming NASA Technical Report Series on Global Modeling and Data Assimilation to be available from http://gmao.gsfc.nasa.gov/pubs/tm/.

  8. Modeling the Frequency of Cyclists’ Red-Light Running Behavior Using Bayesian PG Model and PLN Model

    Directory of Open Access Journals (Sweden)

    Yao Wu

    2016-01-01

    Red-light running by cyclists at signalized intersections leads to a large number of traffic conflicts and a high collision potential. The primary objective of this study is to model cyclists' red-light running frequency within the framework of Bayesian statistics. Data were collected at twenty-five approaches at seventeen signalized intersections. Poisson-gamma (PG) and Poisson-lognormal (PLN) models were developed and compared, and validated using Bayesian p values based on posterior predictive checking indicators. It was found that both models fit the observed red-light running frequency well, with the PLN model outperforming the PG model. The estimation results showed that the amount of cyclists' red-light running is significantly influenced by bicycle flow, conflicting traffic flow, pedestrian signal type, vehicle speed, and e-bike rate. The validation results demonstrated the reliability of the PLN model. These results can help transportation professionals predict the expected amount of cyclist red-light running and develop effective guidelines or policies to reduce its frequency at signalized intersections.
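As a minimal illustration of the Poisson-gamma machinery underlying the PG model (stripped of the covariates used in the study, so purely a sketch), the conjugate posterior for a single approach's red-light running rate is:

```python
def poisson_gamma_posterior(counts, alpha0=1.0, beta0=1.0):
    """Conjugate Bayesian update for a Poisson rate with a
    Gamma(alpha0, beta0) prior (shape/rate parametrization):
    the posterior is Gamma(alpha0 + sum(counts), beta0 + len(counts)).
    Returns the posterior shape, rate, and mean."""
    alpha = alpha0 + sum(counts)   # shape: prior shape + total events
    beta = beta0 + len(counts)     # rate: prior rate + observation periods
    return alpha, beta, alpha / beta
```

The posterior mean is a compromise between the prior mean (alpha0/beta0) and the sample mean, shrinking less as more observation periods accumulate.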

  9. Two-Higgs-doublet model of type II confronted with the LHC run I and run II data

    Science.gov (United States)

    Wang, Lei; Zhang, Feng; Han, Xiao-Fang

    2017-06-01

    We examine the parameter space of the two-Higgs-doublet model of type II after imposing the relevant theoretical and experimental constraints from the precision electroweak data, B-meson decays, and the LHC run I and run II data. We find that the searches for Higgs bosons via the τ+τ-, WW, ZZ, γγ, hh, hZ, HZ, and AZ channels can give strong constraints on the CP-odd Higgs A and the heavy CP-even Higgs H, and the parameter space excluded by each channel is respectively carved out in detail assuming that either mA or mH is fixed to 600 or 700 GeV in the scans. The surviving samples are discussed in two different regions. (i) In the standard-model-like coupling region of the 125 GeV Higgs, mA is allowed to be as low as 350 GeV, and a strong upper limit is imposed on tan β. mH is allowed to be as low as 200 GeV for appropriate values of tan β, sin(β-α), and mA, but is required to be larger than 300 GeV for mA = 700 GeV. (ii) In the wrong-sign Yukawa coupling region of the 125 GeV Higgs, the bb̄ → A/H → τ+τ- channel can impose upper limits on tan β and sin(β-α), and the A → hZ channel can give lower limits on tan β and sin(β-α). mA and mH are allowed to be as low as 60 and 200 GeV, respectively, but 320 GeV

  10. Non-linear structure formation in the "Running FLRW" cosmological model

    CERN Document Server

    Bibiano, Antonio

    2016-01-01

    We present a suite of cosmological N-body simulations describing the "Running Friedmann-Lemaître-Robertson-Walker" (R-FLRW) cosmological model. This model is based on quantum field theory in a curved space-time and extends ΛCDM with a time-evolving vacuum density, Λ(z), and time-evolving gravitational Newton's coupling, G(z). In this paper we review the model and introduce the necessary analytical treatment needed to adapt a reference N-body code. Our resulting simulations represent the first realisation of the full growth history of structure in the R-FLRW cosmology into the non-linear regime, and our normalisation choice makes them fully consistent with the latest cosmic microwave background data. The post-processing data products also allow, for the first time, an analysis of the properties of the halo and sub-halo populations. We explore the degeneracies of many statistical observables and discuss the steps needed to break them. Furthermore, we provide a quantitative description of the...

  11. Running climate model on a commercial cloud computing environment: A case study using Community Earth System Model (CESM) on Amazon AWS

    Science.gov (United States)

    Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock

    2017-01-01

    The suites of numerical models used for simulating climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach, i.e. carrying out climate model simulations on commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Service (AWS) EC2, the cloud computing environment by Amazon.com, Inc. StarCluster is used to create virtual computing cluster on the AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent for the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50% and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup becomes nearly unchanged.
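The reported scaling behavior (near-linear speedup up to 64 cores, then communication latency taking over) can be mimicked with a toy cost model. The constants below are assumptions chosen for illustration, not measured AWS EC2 or CESM numbers:

```python
def wallclock(cores, work=3600.0, comm_per_core=0.9):
    """Toy scaling model: perfectly divisible compute time plus a
    communication cost that grows with the number of cores."""
    return work / cores + comm_per_core * cores

def speedup(cores, **kw):
    """Speedup relative to a single core under the same cost model."""
    return wallclock(1, **kw) / wallclock(cores, **kw)
```

With these constants, going from 16 to 64 cores more than halves the wall-clock time, while beyond 64 cores the communication term dominates and the speedup stops improving, qualitatively matching the behavior described above.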

  12. Modelling of flexi-coil springs with rubber-metal pads in a locomotive running gear

    Directory of Open Access Journals (Sweden)

    Michálek T.

    2015-06-01

    Full Text Available Nowadays, flexi-coil springs are commonly used in the secondary suspension stage of railway vehicles. Lateral stiffness of these springs is influenced by means of their design parameters (number of coils, height, mean diameter of coils, wire diameter etc. and it is often suitable to modify this stiffness in such way, that the suspension shows various lateral stiffness in different directions (i.e., longitudinally vs. laterally in the vehicle-related coordinate system. Therefore, these springs are often supplemented with some kind of rubber-metal pads. This paper deals with modelling of the flexi-coil springs supplemented with tilting rubber-metal tilting pads applied in running gear of an electric locomotive as well as with consequences of application of that solution of the secondary suspension from the point of view of the vehicle running performance. This analysis is performed by means of multi-body simulations and the description of lateral stiffness characteristics of the springs is based on results of experimental measurements of these characteristics performed in heavy laboratories of the Jan Perner Transport Faculty of the University of Pardubice.

  13. Long-run growth rate in a random multiplicative model

    Energy Technology Data Exchange (ETDEWEB)

    Pirjol, Dan [Institute for Physics and Nuclear Engineering, 077125 Bucharest (Romania)

    2014-08-01

    We consider the long-run growth rate of the average value of a random multiplicative process x_{i+1} = a_i x_i, where the multipliers a_i = 1 + ρ exp(σW_i - σ²t_i/2) have Markovian dependence given by the exponential of a standard Brownian motion W_i. The average value ⟨x_n⟩ is given by the grand partition function of a one-dimensional lattice gas with two-body linear attractive interactions placed in a uniform field. We study the Lyapunov exponent λ = lim_{n→∞} (1/n) log⟨x_n⟩, at fixed β = σ²t_n n/2, and show that it is given by the equation of state of the lattice gas in thermodynamic equilibrium. The Lyapunov exponent has discontinuous partial derivatives along a curve in the (ρ, β) plane ending at a critical point (ρ_C, β_C) which is related to a phase transition in the equivalent lattice gas. Using the equivalence of the lattice gas with a bosonic system, we obtain the exact solution for the equation of state in the thermodynamic limit n → ∞.
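A direct Monte Carlo check of the growth rate is straightforward: simulate the multiplicative process along sampled Brownian paths and estimate (1/n) log⟨x_n⟩. The parameter values below are illustrative, not tied to the paper's phase diagram.

```python
import math
import random

def growth_rate(rho, sigma, dt, n, trials=1000, seed=1):
    """Monte Carlo estimate of (1/n) * log<x_n> for the random
    multiplicative process x_{i+1} = a_i * x_i with multipliers
    a_i = 1 + rho * exp(sigma * W_i - sigma**2 * t_i / 2), where W is a
    standard Brownian motion sampled at times t_i = i * dt."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        w, x = 0.0, 1.0
        for i in range(n):
            t = i * dt
            x *= 1.0 + rho * math.exp(sigma * w - 0.5 * sigma**2 * t)
            w += math.sqrt(dt) * rng.gauss(0.0, 1.0)
        total += x
    return math.log(total / trials) / n
```

For σ = 0 the process is deterministic and the estimate reduces exactly to log(1 + ρ), which makes a convenient sanity check.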

  14. Design and simulation of a control system for a run-off-river power plant; Entwurf und Simulation einer Staustufenregelung

    Energy Technology Data Exchange (ETDEWEB)

    Ott, C.

    2000-07-01

    In run-off-river plants operating under head control at low discharge, changes of inflow are passed on to the downstream reach as amplified changes of outflow. In this thesis, a frequency-domain-based design procedure is introduced that allows an inflow-dependent feedforward signal to be added to the head controller of a conventional combined head- and flow-controller. This efficiently minimizes the discharge amplification. The non-linearity of the channel reach is taken into consideration by adapting the controller settings to the actual discharge. The development of a time-domain-based program system, taking into account all non-linearities of a run-off-river plant, is described. Using different test functions, the capability of the improved combined head- and flow-control is demonstrated. In both the time and the frequency domain it is shown that the quality of control is not significantly influenced by the inevitable inaccuracies in the description of the channel reach and in the measurement of the actual inflow and outflow. (orig.) [German abstract, translated: This thesis offers a solution to the problem that, in the low-flow range of head-controlled barrages, inflow changes are amplified by the barrage and passed on to the downstream reach. As a solution, a frequency-domain-based design method is presented by which the customary OW-Q control can be extended with an inflow-dependent feedforward to the level controller. Together with the feedforward of the inflow to the discharge controller, this clearly reduces the discharge amplification. The non-linearity of the controlled system, the impounded reach, is taken into account by adapting the parameters to the barrage discharge. Furthermore, the development of a program system for the non-linear time-domain simulation of a chain of barrages is described, with which the performance of the improved OW-Q control is demonstrated for various load cases. In the time and frequency domain
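The benefit of adding an inflow-dependent feedforward signal to the level controller can be illustrated with a toy discrete-time reach model. All gains and dimensions below are assumptions for illustration, not values from the thesis: without feedforward the outflow overshoots a +50 inflow step (the discharge change is amplified downstream), while with feedforward the outflow merely tracks the inflow.

```python
def simulate_reach(feedforward, steps=600, area=100.0, kp=20.0, ki=2.0):
    """Toy head-controlled reach: the water level integrates
    (inflow - outflow) / area, and a PI controller on the level deviation
    sets the outflow. With `feedforward`, the measured inflow is added to
    the controller output, so the outflow follows the inflow instead of
    over-reacting to the accumulated level error."""
    level, integral, q_base = 0.0, 0.0, 100.0
    outflows = []
    for k in range(steps):
        inflow = q_base + (50.0 if 100 <= k < 300 else 0.0)  # inflow step
        err = level - 0.0                 # deviation from the setpoint
        integral += err
        u = kp * err + ki * integral      # raise outflow when level is high
        outflow = (inflow if feedforward else q_base) + u
        level += (inflow - outflow) / area
        outflows.append(outflow)
    return outflows
```

In this sketch the pure level controller drives the outflow above 150 before the level settles (amplification), whereas the feedforward variant never exceeds the inflow step itself.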

  15. Biases in modeled surface snow BC mixing ratios in prescribed-aerosol climate model runs

    OpenAIRE

    Doherty, S. J.; Bitz, C. M.; Flanner, M. G.

    2014-01-01

    Black carbon (BC) in snow lowers its albedo, increasing the absorption of sunlight, leading to positive radiative forcing, climate warming and earlier snowmelt. A series of recent studies have used prescribed-aerosol deposition flux fields in climate model runs to assess the forcing by black carbon in snow. In these studies, the prescribed mass deposition flux of BC to surface snow is decoupled from the mass deposition flux of snow water to the surface. Here we compare progn...

  16. Modelling Energy Loss Mechanisms and a Determination of the Electron Energy Scale for the CDF Run II W Mass Measurement

    Energy Technology Data Exchange (ETDEWEB)

    Riddick, Thomas [Univ. College London, Bloomsbury (United Kingdom)

    2012-06-15

    The calibration of the calorimeter energy scale is vital to measuring the mass of the W boson at CDF Run II. For the second measurement of the W boson mass at CDF Run II, two independent simulations were developed. This thesis presents a detailed description of the modification and validation of the Bremsstrahlung and pair-production modelling in one of these simulations, UCL Fast Simulation, comparing with both geant4 and real data where appropriate. The total systematic uncertainty on the measurement of the W boson mass in the W → eν channel from residual inaccuracies in Bremsstrahlung modelling is estimated as 6.2 ± 3.2 MeV/c², and the total systematic uncertainty from residual inaccuracies in pair-production modelling is estimated as 2.8 ± 2.7 MeV/c². Two independent methods are used to calibrate the calorimeter energy scale in UCL Fast Simulation; the results of these two methods are compared to produce a measurement of the Z boson mass as a cross-check on the accuracy of the simulation.

  17. Wake modeling and simulation

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Madsen Aagaard, Helge; Larsen, Torben J.;

    , have the potential to include also mutual wake interaction phenomena. The basic conjecture behind the dynamic wake meandering (DWM) model is that wake transportation in the atmospheric boundary layer is driven by the large scale lateral- and vertical turbulence components. Based on this conjecture...... and trailed vorticity, has been approached by a simple semi-empirical model essentially based on an eddy viscosity philosophy. Contrary to previous attempts to model wake loading, the DWM approach opens for a unifying description in the sense that turbine power- and load aspects can be treated simultaneously...... methodology has been implemented in the aeroelastic code HAWC2, and example simulations of wake situations, from the small Tjæreborg wind farm, have been performed showing satisfactory agreement between predictions and measurements...

  18. Running Large-Scale Air Pollution Models on Parallel Computers

    DEFF Research Database (Denmark)

    Georgiev, K.; Zlatev, Z.

    2000-01-01

    Proceedings of the 23rd NATO/CCMS International Technical Meeting on Air Pollution Modeling and Its Application, held 28 September - 2 October 1998, in Varna, Bulgaria.

  19. Renormalisation running of masses and mixings in UED models

    CERN Document Server

    Cornell, A S; Liu, Lu-Xin; Tarhini, Ahmad

    2012-01-01

    We review the Universal Extra-Dimensional Model compactified on a S1/Z2 orbifold, and the renormalisation group evolution of quark and lepton masses, mixing angles and phases both in the UED extension of the Standard Model and of the Minimal Supersymmetric Standard Model. We consider two typical scenarios: all matter fields propagating in the bulk, and matter fields constrained to the brane. The resulting renormalisation group evolution equations in these scenarios are compared with the existing results in the literature, together with their implications.

  20. On-line simulations of models for backward masking.

    Science.gov (United States)

    Francis, Gregory

    2003-11-01

    Five simulations of quantitative models of visual backward masking are available on the Internet at http://www.psych.purdue.edu/~gfrancis/Publications/BackwardMasking/. The simulations can be run in a Web browser that supports the Java programming language. This article describes the motivation for making the simulations available and gives a brief introduction to how the simulations are used. The source code is available on the Web page, and this article describes how the code is organized.

  1. Modelling effects of acid deposition and climate change on soil and run-off chemistry at Risdalsheia, Norway

    Directory of Open Access Journals (Sweden)

    J. P. Mol-Dijkstra

    2001-01-01

    Elevated atmospheric carbon dioxide levels, caused by anthropogenic emissions, and higher temperatures may lead to increased plant growth and nitrogen uptake, but increased temperature may also enhance nitrogen mineralisation and thereby nitrogen leaching. The overall outcome of these two counteracting effects is largely unknown. To gain insight into the long-term effects, the geochemical model SMART2 was applied using data from the catchment-scale experiments of the RAIN and CLIMEX projects, conducted on boreal forest ecosystems at Risdalsheia, southern Norway. These unique ecosystem-scale experiments provide information on the short-term effects and interactions of nitrogen deposition and increased temperature and carbon dioxide on carbon and nitrogen cycling, and especially on run-off chemistry. To predict changes in soil processes in response to climate change, the model was extended to include the temperature effect on mineralisation, nitrification, denitrification, aluminium dissolution and mineral weathering. The extended model was tested on the two manipulated catchments at Risdalsheia, and long-term effects were evaluated by performing long-term runs. The effects of the climate change treatment, which increased nitrogen fluxes at both catchments, were slightly overestimated by SMART2. The temperature dependency of mineralisation was simulated adequately, but the temperature effect on nitrification was slightly overestimated. Monitored changes in base cation concentrations and pH were simulated quite well with SMART2. The long-term simulations indicate that the increase in nitrogen run-off is only a temporary effect; in the long term, no effect on total nitrogen leaching is predicted. At higher deposition levels the temporary increase in nitrogen leaching lasts longer than at low deposition. In contrast to nitrogen leaching, temperature increase leads to a permanent decrease in aluminium
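Temperature dependencies of soil process rates such as mineralisation are commonly expressed with a Q10-type response. The sketch below shows that generic formulation as an illustration; it is an assumption for exposition, not necessarily the exact equations added to SMART2:

```python
def q10_rate(rate_ref, temp, temp_ref=10.0, q10=2.0):
    """Q10 temperature scaling of a process rate (e.g. mineralisation):
    the rate multiplies by `q10` for every 10 degC of warming above the
    reference temperature. Generic formulation, not SMART2-specific."""
    return rate_ref * q10 ** ((temp - temp_ref) / 10.0)
```

With q10 = 2, a warming of 10 degC doubles the rate, which is how a treatment-induced temperature rise can boost mineralisation and, temporarily, nitrogen run-off.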

  2. Delay modeling in logic simulation

    Energy Technology Data Exchange (ETDEWEB)

    Acken, J. M.; Goldstein, L. H.

    1980-01-01

    As digital integrated circuit size and complexity increase, the need for accurate and efficient computer simulation increases. Logic simulators such as SALOGS (SAndia LOGic Simulator), which utilize transition states in addition to the normal stable states, provide more accurate analysis than is possible with traditional logic simulators. Furthermore, the computational complexity of this analysis is far lower than that of circuit simulation such as SPICE. An eight-value logic simulation environment allows the use of accurate delay models that incorporate both element response and transition times. Thus, timing simulation with an accuracy approaching that of circuit simulation can be accomplished with an efficiency comparable to that of logic simulation. 4 figures.
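The idea of evaluating gates over transition states can be sketched with a reduced value set (0, 1, R for rising, F for falling, X for unknown). SALOGS's actual eight-value algebra is richer, so this is only an illustration of the principle, with the value set and rules assumed for the example:

```python
# Possible Boolean outcomes represented by each symbolic value.
TO_SETS = {'0': {0}, '1': {1}, 'R': {0, 1}, 'F': {0, 1}, 'X': {0, 1}}

def and_gate(a, b):
    """AND over transition values: compute the set of possible Boolean
    outcomes and map it back to a symbolic value. A dominant 0 on either
    input forces the output to 0 regardless of the other input."""
    outcomes = {x & y for x in TO_SETS[a] for y in TO_SETS[b]}
    if outcomes == {0}:
        return '0'
    if outcomes == {1}:
        return '1'
    return 'X'  # could be either; a richer algebra would keep R/F distinct
```

A full transition-state algebra would preserve the direction of R and F through the gate; collapsing them to X here keeps the sketch short.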

  3. Short-Run Asset Selection using a Logistic Model

    Directory of Open Access Journals (Sweden)

    Walter Gonçalves Junior

    2011-06-01

    Investors constantly look for significant predictors and accurate models to forecast future results, whose occasional efficacy ends up being neutralized by market efficiency. Regardless, such predictors are widely used in the search for better (and more unique) perceptions. This paper investigates to what extent some of the most notorious indicators have discriminatory power to select stocks, and whether it is feasible to build models with such variables that could anticipate the stocks with good performance. To that end, logistic regressions were conducted on stocks traded at Bovespa, using the selected indicators as explanatory variables. Bovespa Index returns, liquidity, the Sharpe ratio, ROE, MB, size and age proved to be significant predictors. Half-year logistic models were also fitted in order to check whether acceptable discriminatory power for asset selection could be attained.
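A minimal version of the estimation step is fitting a logistic model by gradient descent on the log-loss. The single feature and toy data below are assumptions for illustration, not the paper's Bovespa dataset or variable set:

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit a one-feature logistic model P(y=1|x) = sigmoid(w*x + b)
    by batch gradient descent on the average log-loss."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x / n   # gradient of log-loss w.r.t. w
            gb += (p - y) / n       # gradient of log-loss w.r.t. b
        w -= lr * gw
        b -= lr * gb
    return w, b

def predict(w, b, x):
    """Predicted probability of the 'good performance' class."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))
```

On data where a higher indicator value goes with good performance, the fitted slope is positive and the predicted probabilities discriminate the two groups, which is the discriminatory power the paper tests for.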

  4. Coupled models of heat transfer and phase transformation for the run-out table in hot rolling

    Institute of Scientific and Technical Information of China (English)

    Shui-xuan CHEN; Jun ZOU; Xin FU

    2008-01-01

    Mathematical models have been proposed to simulate the thermal and metallurgical behavior of the strip on the run-out table (ROT) in a hot strip mill. A variational method is utilized for the discretization of the governing transient conduction-convection equation, with heat transfer coefficients adaptively determined from actual mill data. To consider the thermal effect of phase transformation during cooling, a constitutive equation describing the austenite decomposition kinetics of steel in the air and water cooling zones is coupled with the heat transfer model. As the basic required inputs to the numerical simulations, thermal material properties were experimentally measured for three carbon steels, and the least-squares method was used to statistically derive regression models for the properties, including specific heat and thermal conductivity. The numerical simulation and experimental results show that the setup accuracy of the ROT temperature prediction system is effectively improved.
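A minimal sketch of the heat-transfer side is an explicit finite-difference model of through-thickness cooling with convective surfaces. This uses constant, assumed material properties rather than the paper's variational discretization and regression-based temperature-dependent properties:

```python
def cool_strip(T0=900.0, T_water=30.0, thickness=0.004, n=21,
               k=30.0, rho=7800.0, cp=600.0, h=10000.0,
               dt=1e-4, steps=5000):
    """Explicit 1-D through-thickness cooling of a strip on the run-out
    table: interior conduction plus convective boundaries on both faces.
    All properties are illustrative constants (SI units)."""
    dx = thickness / (n - 1)
    alpha = k / (rho * cp)                # thermal diffusivity
    assert alpha * dt / dx**2 <= 0.5      # explicit stability limit
    T = [T0] * n
    for _ in range(steps):
        new = T[:]
        for i in range(1, n - 1):
            new[i] = T[i] + alpha * dt * (T[i+1] - 2*T[i] + T[i-1]) / dx**2
        # energy balance on the half-cells at both cooled surfaces
        for edge, nbr in ((0, 1), (n - 1, n - 2)):
            conv = h * (T_water - T[edge])          # convective flux
            cond = k * (T[nbr] - T[edge]) / dx      # conductive flux
            new[edge] = T[edge] + dt * (conv + cond) / (rho * cp * dx / 2)
        T = new
    return T
```

After half a second of simulated water cooling, the surfaces are cooler than the mid-thickness and every node stays between the coolant and the initial temperature, as the physics requires.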

  5. Dynamical system approach to running $\\Lambda$ cosmological models

    CERN Document Server

    Stachowski, Aleksander

    2016-01-01

    We discuss the dynamics of cosmological models in which the cosmological constant term is a time-dependent function through the scale factor $a(t)$, Hubble function $H(t)$, Ricci scalar $R(t)$ and scalar field $\phi(t)$. We consider five classes of models; two non-covariant parametrizations of $\Lambda$: 1) $\Lambda(H)$CDM cosmologies where $H(t)$ is the Hubble parameter, 2) $\Lambda(a)$CDM cosmologies where $a(t)$ is the scale factor, and three covariant parametrizations of $\Lambda$: 3) $\Lambda(R)$CDM cosmologies, where $R(t)$ is the Ricci scalar, 4) $\Lambda(\phi)$-cosmologies with diffusion, 5) $\Lambda(X)$-cosmologies, where $X=\frac{1}{2}g^{\alpha\beta}\

  6. Equator To Pole in the Cretaceous: A Comparison of Clumped Isotope Data and CESM Model Runs

    Science.gov (United States)

    Petersen, S. V.; Tabor, C. R.; Meyer, K.; Lohmann, K. C.; Poulsen, C. J.; Carpenter, S. J.

    2015-12-01

    An outstanding issue in the field of paleoclimate is the inability of models to reproduce the shallower equator-to-pole temperature gradients suggested by proxies for past greenhouse periods. Here, we focus on the Late Cretaceous (Maastrichtian, 72-66 Ma), when estimated CO2 levels were ~400-1000 ppm. New clumped isotope temperature data from more than 10 sites spanning 65°S to 48°N are used to reconstruct the Maastrichtian equator-to-pole temperature gradient. These data are compared to CESM model simulations of the Maastrichtian, run using the relevant paleogeography and atmospheric CO2 levels of 560 and 1120 ppm. Due to a reduced "proxy toolkit" this far in the past, much of our knowledge of Cretaceous climate comes from the oxygen isotope paleothermometer, which incorporates an assumption about the oxygen isotopic composition of seawater (δ18Osw), a quantity often related to salinity. With the clumped isotope paleothermometer, we can directly calculate δ18Osw. This will be used to test commonly applied assumptions about water composition, and will be compared to modeled ocean salinity. We also discuss basin-to-basin differences and their implications for paleo-circulation patterns.

  7. Implementation of the ATLAS Run 2 event data model

    CERN Document Server

    Buckley, Andrew; Elsing, Markus; Gillberg, Dag Ingemar; Koeneke, Karsten; Krasznahorkay, Attila; Moyse, Edward; Nowak, Marcin; Snyder, Scott; van Gemmeren, Peter

    2015-01-01

    During the 2013--2014 shutdown of the Large Hadron Collider, ATLAS switched to a new event data model for analysis, called the xAOD. A key feature of this model is the separation of the object data from the objects themselves (the `auxiliary store'). Rather than being stored as member variables of the analysis classes, all object data are stored separately, as vectors of simple values. Thus, the data are stored in a `structure of arrays' format, while the user can still access it as an `array of structures'. This organization allows for on-demand partial reading of objects, the selective removal of object properties, and the addition of arbitrary user-defined properties in a uniform manner. It also improves performance by increasing the locality of memory references in typical analysis code. The resulting data structures can be written to ROOT files with data properties represented as simple ROOT tree branches. This talk will focus on the design and implementation of the auxiliary store and its interaction with RO...
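The auxiliary-store idea can be sketched in a few lines (illustrative Python, not the ATLAS C++ implementation): each property lives in its own flat array (structure of arrays), while a lightweight proxy gives per-object attribute access (array of structures), and new user-defined properties can be attached at any time.

```python
class AuxStore:
    """Structure-of-arrays storage: one flat list per property name."""
    def __init__(self, n):
        self.n = n
        self.arrays = {}

    def add(self, name, values):
        """Attach a property; works equally for built-in and
        user-defined properties added after the fact."""
        assert len(values) == self.n
        self.arrays[name] = list(values)

class ElementProxy:
    """Array-of-structures view: looks like an object with attributes,
    but every read goes to the underlying columnar arrays."""
    def __init__(self, store, index):
        object.__setattr__(self, '_store', store)
        object.__setattr__(self, '_i', index)

    def __getattr__(self, name):
        return self._store.arrays[name][self._i]

def container(store):
    """Expose the columnar store as a list of object-like proxies."""
    return [ElementProxy(store, i) for i in range(store.n)]
```

Because each property is a separate flat array, individual columns can be read, dropped, or appended independently, which is the behavior the abstract describes for the xAOD.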

  8. mr: A C++ library for the matching and running of the Standard Model parameters

    Science.gov (United States)

    Kniehl, Bernd A.; Pikelner, Andrey F.; Veretin, Oleg L.

    2016-09-01

    We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS bar renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library.
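To illustrate what "running" means in this context, here is the textbook one-loop QCD evolution of the strong coupling. This is a sketch only, not mr's API: mr itself performs the matching and running at two- to four-loop precision with proper flavour-threshold treatment.

```python
import math

def alpha_s(mu, alpha_mz=0.1181, mz=91.1876, nf=5):
    """One-loop QCD running of the strong coupling from the Z mass to
    scale mu (GeV): 1/a(mu) = 1/a(MZ) + (b0 / (2*pi)) * ln(mu / MZ),
    with b0 = 11 - 2*nf/3. Fixed nf = 5 is assumed for simplicity."""
    b0 = 11.0 - 2.0 * nf / 3.0
    inv = 1.0 / alpha_mz + b0 / (2.0 * math.pi) * math.log(mu / mz)
    return 1.0 / inv
```

Asymptotic freedom is visible directly: the coupling shrinks as the energy scale grows, falling from about 0.118 at the Z mass to roughly 0.09 at 1 TeV at this order.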

  9. mr: a C++ library for the matching and running of the Standard Model parameters

    CERN Document Server

    Kniehl, Bernd A; Veretin, Oleg L

    2016-01-01

    We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the $\\overline{\\mathrm{MS}}$ renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library.

  10. CASY: a dynamic simulation of the gas-cooled fast breeder reactor core auxiliary cooling system. Volume II. Example computer run

    Energy Technology Data Exchange (ETDEWEB)

    1979-09-01

    A listing of a CASY computer run is presented. It was initiated from a demand terminal and, therefore, contains the identification ST0952. This run also contains an INDEX listing of the subroutine UPDATE. The run includes a simulated scram transient at 30 seconds.

  11. Preparations, models, and simulations.

    Science.gov (United States)

    Rheinberger, Hans-Jörg

    2015-01-01

    This paper proposes an outline for a typology of the different forms that scientific objects can take in the life sciences. The first section discusses preparations (or specimens)--a form of scientific object that accompanied the development of modern biology in different guises from the seventeenth century to the present: as anatomical-morphological specimens, as microscopic cuts, and as biochemical preparations. In the second section, the characteristics of models in biology are discussed. They became prominent from the end of the nineteenth century onwards. Some remarks on the role of simulations--characterising the life sciences of the turn from the twentieth to the twenty-first century--conclude the paper.

  12. Distributed computing as a virtual supercomputer: Tools to run and manage large-scale BOINC simulations

    Science.gov (United States)

    Giorgino, Toni; Harvey, M. J.; de Fabritiis, Gianni

    2010-08-01

    Distributed computing (DC) projects tackle large computational problems by exploiting the donated processing power of thousands of volunteered computers, connected through the Internet. To efficiently employ the computational resources of one of the world's largest DC efforts, GPUGRID, the project scientists require tools that handle hundreds of thousands of tasks which run asynchronously and generate gigabytes of data every day. We describe RBoinc, an interface that allows computational scientists to embed the DC methodology into the daily work-flow of high-throughput experiments. By extending the Berkeley Open Infrastructure for Network Computing (BOINC), the leading open-source middleware for current DC projects, with mechanisms to submit and manage large-scale distributed computations from individual workstations, RBoinc turns distributed grids into cost-effective virtual resources that can be employed by researchers in work-flows similar to conventional supercomputers. The GPUGRID project is currently using RBoinc for all of its in silico experiments based on molecular dynamics methods, including the determination of binding free energies and free energy profiles in all-atom models of biomolecules.

  13. Model based control for run-of-river system. Part 2: Comparison of control structures

    Directory of Open Access Journals (Sweden)

    Liubomyr Vytvytskyi

    2015-10-01

    Full Text Available Optimal operation and control of a run-of-river hydro power plant depend on good knowledge of the elements of the plant in the form of models. Both the control architecture of the system, i.e. the choice of inputs and outputs, and to what degree a model is used, will affect the achievable control performance. Here, a model of a river reach based on the Saint Venant equations for open channel flow illustrates the dynamics of the run-of-river system. The hyperbolic partial differential equations are discretized using the Kurganov-Petrova central upwind scheme - see Part I for details. A comparison is given of achievable control performance using two alternative control signals: the inlet or the outlet volumetric flow rates to the system, in combination with a number of different control structures such as PI control, PI control with Smith predictor, and predictive control. The control objective is to keep the level just in front of the dam as high as possible, and with little variation in the level to avoid overflow over the dam. With a step change in the volumetric inflow to the river reach (disturbance) and using the volumetric outflow as the control signal, PI control gives quite good performance. Model predictive control (MPC) gives superior control in the sense of constraining the variation in the water level, at a cost of longer computational time and thus constraints on possible sample time. Details on controller tuning are given. With volumetric inflow to the river reach as control signal and outflow (production) as disturbance, this introduces a considerable time delay in the control signal. Because of nonlinearity in the system (varying time delay, etc.), it is difficult to achieve stable closed loop performance using a simple PI controller. However, by combining a PI controller with a Smith predictor based on a simple integrator + fixed time delay model, stable closed loop operation is possible with decent control performance. 
Still, an MPC
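The PI-plus-Smith-predictor idea above can be sketched on a toy plant: a delayed integrator standing in for the river reach (the area, gains, delay and setpoint below are invented for the sketch, not taken from the paper):

```python
from collections import deque

def simulate(use_smith, steps=2000, dt=1.0, delay=40):
    """PI level control of a delayed integrator (toy stand-in plant).
    The Smith predictor uses an internal integrator + fixed-delay
    model to feed the controller a delay-free prediction."""
    A = 50.0                      # surface area (assumed)
    kp, ki = 2.0, 0.02            # PI gains (hand-tuned for the sketch)
    sp, h = 1.0, 0.5              # level setpoint and initial level
    hm = 0.5                      # delay-free internal model state
    u_buf = deque([0.0] * delay)  # transport delay on the plant input
    m_buf = deque([hm] * delay)   # same delay applied to the model
    integ = 0.0
    for _ in range(steps):
        hm_delayed = m_buf.popleft()
        # Smith predictor: replace the delayed measurement with the
        # delay-free model output, corrected for model mismatch
        feedback = (h - hm_delayed + hm) if use_smith else h
        err = sp - feedback
        integ += err * dt
        u = kp * err + ki * integ
        # plant: integrator driven by the delayed control input
        u_buf.append(u)
        h += dt * u_buf.popleft() / A
        # internal model: same integrator, no delay
        hm += dt * u / A
        m_buf.append(hm)
    return h

print(simulate(use_smith=True))   # settles near the setpoint 1.0
```

With `use_smith=False` the same gains act on a measurement lagging 40 steps behind, and the loop is far harder to stabilize, mirroring the paper's observation that a plain PI controller struggles once the inflow-side time delay enters the loop.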

  14. Searching For Exotic Physics Beyond the Standard Model: Extrapolation Until the End of Run-3

    CERN Document Server

    Genest, Marie-Hélène; The ATLAS collaboration

    2017-01-01

    The prospects of looking for exotic beyond-the-Standard-Model physics with the ATLAS and CMS detectors at the LHC in the rest of Run-2 and in Run-3 will be reviewed. A few selected analyses will be discussed, showing the gain in sensitivity that can be achieved by accumulating more data and comparing the current limits with the predicted reach. Some limiting factors will be identified, along with ideas on how to improve on the searches.

  15. Biases in modeled surface snow BC mixing ratios in prescribed aerosol climate model runs

    OpenAIRE

    Doherty, S. J.; C. M. Bitz; M. G. Flanner

    2014-01-01

    A series of recent studies have used prescribed aerosol deposition flux fields in climate model runs to assess forcing by black carbon in snow. In these studies, the prescribed mass deposition flux of BC to surface snow is decoupled from the mass deposition flux of snow water to the surface. Here we use a series of offline calculations to show that this approach results, on average, in a factor of about 1.5–2.5 high bias in annual-mean surface snow BC mixing ratios in three ...

  16. Modeling driver stop/run behavior at the onset of a yellow indication considering driver run tendency and roadway surface conditions.

    Science.gov (United States)

    Elhenawy, Mohammed; Jahangiri, Arash; Rakha, Hesham A; El-Shawarby, Ihab

    2015-10-01

    The ability to model driver stop/run behavior at signalized intersections considering the roadway surface condition is critical in the design of advanced driver assistance systems. Such systems can reduce intersection crashes and fatalities by predicting driver stop/run behavior. The research presented in this paper uses data collected from two controlled field experiments on the Smart Road at the Virginia Tech Transportation Institute (VTTI) to model driver stop/run behavior at the onset of a yellow indication for different roadway surface conditions. The paper offers two contributions. First, it introduces a new predictor related to driver aggressiveness and demonstrates that this measure enhances the modeling of driver stop/run behavior. Second, it applies well-known artificial intelligence techniques including adaptive boosting (AdaBoost), random forest, and support vector machine (SVM) algorithms, as well as traditional logistic regression techniques, on the data in order to develop a model that can be used by traffic signal controllers to predict driver stop/run decisions in a connected vehicle environment. The research demonstrates that by adding the proposed driver aggressiveness predictor to the model, there is a statistically significant increase in the model accuracy. Moreover, the false alarm rate is reduced, although this reduction is not statistically significant. The study demonstrates that, for the subject data, the SVM machine learning algorithm performs the best in terms of classification accuracy and false positive rate, although its advantage is clearest for classification accuracy.
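The classification task can be sketched end to end with a tiny logistic-regression model on synthetic data (every feature, coefficient and label-generating rule below is invented for illustration; the paper fits its models, including SVM and AdaBoost, to the VTTI field data):

```python
import math, random

random.seed(0)

def make_sample():
    """Synthetic driver decision at yellow onset (assumed ground truth:
    short time-to-intersection and high aggressiveness favour 'run')."""
    tti = random.uniform(0.5, 6.0)    # time to intersection, s
    aggr = random.uniform(0.0, 1.0)   # driver aggressiveness score
    p_run = 1.0 / (1.0 + math.exp(-(2.5 - tti + 2.0 * aggr)))
    return (tti, aggr), 1 if random.random() < p_run else 0

data = [make_sample() for _ in range(1000)]

# batch-gradient-descent logistic regression: stop (0) vs run (1)
w, b, lr = [0.0, 0.0], 0.0, 0.3
for _ in range(400):
    gw, gb = [0.0, 0.0], 0.0
    for (x1, x2), y in data:
        p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        gw[0] += (p - y) * x1
        gw[1] += (p - y) * x2
        gb += p - y
    n = len(data)
    w[0] -= lr * gw[0] / n
    w[1] -= lr * gw[1] / n
    b -= lr * gb / n

acc = sum(((w[0] * x1 + w[1] * x2 + b > 0) == (y == 1))
          for (x1, x2), y in data) / len(data)
print("aggressiveness weight:", w[1], "training accuracy:", acc)
```

In the same spirit as the paper, dropping the aggressiveness feature and refitting would show the accuracy gain that predictor provides.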

  17. Towards a numerical run-out model for quick-clay slides

    Science.gov (United States)

    Issler, Dieter; L'Heureux, Jean-Sébastien; Cepeda, José M.; Luna, Byron Quan; Gebreslassie, Tesfahunegn A.

    2015-04-01

    quasi-three-dimensional codes with a choice of bed-friction laws. The findings of the simulations point strongly towards the need for a different modeling approach that incorporates the essential physical features of quick-clay slides. The major requirement is a realistic description of remolding. A two-layer model is needed to describe the non-sensitive topsoil that often is passively advected by the slide. In many cases, the topography is rather complex so that 3D or quasi-3D (depth-averaged) models are required for realistic modeling of flow heights and velocities. Finally, since many Norwegian quick-clay slides run out in a fjord (and may generate a tsunami), it is also desirable to explicitly account for buoyancy and hydrodynamic drag.

  18. Modeling, simulation and optimization of bipedal walking

    CERN Document Server

    Berns, Karsten

    2013-01-01

    The model-based investigation of motions of anthropomorphic systems is an important interdisciplinary research topic involving specialists from many fields such as Robotics, Biomechanics, Physiology, Orthopedics, Psychology, Neurosciences, Sports, Computer Graphics and Applied Mathematics. The study of basic locomotion forms such as walking and running presented in this book is of particular interest due to the high demand on dynamic coordination, actuator efficiency and balance control. Mathematical models and numerical simulation and optimization techniques are explained, in combination with experimental data, which can help to better understand the basic underlying mechanisms of these motions and to improve them. Example topics treated in this book are: modeling techniques for anthropomorphic bipedal walking systems; optimized walking motions for different objective functions; identification of objective functions from measurements; simulation and optimization approaches for humanoid robots; biologically inspired con...

  19. Carbohydrate and caffeine improves high intensity running of elite rugby league interchange players during simulated match play.

    Science.gov (United States)

    Clarke, Jon S; Highton, Jamie; Close, Graeme L; Twist, Craig

    2016-11-19

    The study examined the effects of carbohydrate and caffeine ingestion on simulated rugby league interchange performance. Eight male elite rugby league forwards completed two trials of a rugby league simulation protocol for interchange players seven days apart in a randomized crossover design, ingesting either a carbohydrate (CHO; 40 g·h-1) or a carbohydrate and caffeine (CHO-C; 40 g·h-1 + 3 mg·kg-1) drink. Movement characteristics, heart rate, ratings of perceived exertion (RPE), and countermovement jump height (CMJ) were measured during the protocol. CHO-C resulted in likely to very likely higher mean running speeds (ES 0.43 to 0.75), distance in high intensity running (ES 0.41 to 0.64) and mean sprint speeds (ES 0.39 to 1.04) compared to CHO. Heart rate was possibly to very likely higher (ES 0.32 to 0.74) and RPE was likely to very likely lower (ES -0.53 to 0.86) with CHO-C. There was a likely trivial to possibly higher CMJ in CHO-C compared to CHO (ES 0.07 to 0.25). The co-ingestion of carbohydrate with caffeine has an ergogenic effect, reducing the sense of effort and increasing high-intensity running capability, which might be employed to enhance interchange running performance in elite rugby league players.

  20. Higher-order effects in asset-pricing models with long-run risks

    NARCIS (Netherlands)

    Pohl, W.; Schmedders, K.; Wilms, Ole

    2017-01-01

    This paper shows that the latest generation of asset pricing models with long-run risk exhibits economically significant nonlinearities, and thus the ubiquitous Campbell--Shiller log-linearization can generate large numerical errors. These errors in turn translate to considerable errors in the model

  1. Notes on modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Redondo, Antonio [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-10

    These notes present a high-level overview of how modeling and simulation are carried out by practitioners. The discussion is of a general nature; no specific techniques are examined but the activities associated with all modeling and simulation approaches are briefly addressed. There is also a discussion of validation and verification and, at the end, a section on why modeling and simulation are useful.

  2. Running Effects on Lepton Mixing Angles in Flavour Models with Type I Seesaw

    CERN Document Server

    Lin, Y; Paris, A

    2009-01-01

    We study renormalization group running effects on neutrino mixing patterns when a (type I) seesaw model is implemented by suitable flavour symmetries. We are particularly interested in mass-independent mixing patterns to which the widely studied tribimaximal mixing pattern belongs. In this class of flavour models, the running contribution from neutrino Yukawa coupling, which is generally dominant at energies above the seesaw threshold, can be absorbed by a small shift on neutrino mass eigenvalues leaving mixing angles unchanged. Consequently, in the whole running energy range, the change in mixing angles is due to the contribution coming from charged lepton sector. Subsequently, we analyze in detail these effects in an explicit flavour model for tribimaximal neutrino mixing based on an A4 discrete symmetry group. We find that for normally ordered light neutrinos, the tribimaximal prediction is essentially stable under renormalization group evolution. On the other hand, in the case of inverted hierarchy, the d...

  3. Physical simulation of resonant wave run-up on a beach

    CERN Document Server

    Ezersky, Alexander; Pelinovsky, Efim

    2012-01-01

    Nonlinear wave run-up on a beach caused by a harmonic wave maker located at some distance from the shoreline is studied experimentally. It is revealed that a significant increase in run-up amplification is observed at certain wave excitation frequencies. It is found that this amplification is due to the excitation of a resonant mode in the region between the shoreline and the wave maker. The frequency and magnitude of the maximum amplification are in good agreement with the numerical results presented in the paper (T.S. Stefanakis et al. PRL (2011)). These effects are very important for understanding the nature of rogue waves in the coastal zone.

  4. Models of production runs for multiple products in flexible manufacturing system

    Directory of Open Access Journals (Sweden)

    Ilić Oliver

    2011-01-01

    Full Text Available How to determine economic production runs (EPR) for multiple products in flexible manufacturing systems (FMS) is considered in this paper. Eight different, although similar, models are developed and presented. The first four models are devoted to the cases when no shortage is allowed. The other four models are some kind of generalization of the previous ones, when shortages may exist. Numerical examples are given as illustrations of the proposed models.
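For reference, the classical single-product economic production run without shortages — the textbook formula that multi-product models of this kind generalize — can be computed directly (the numbers below are made up for the example):

```python
import math

def epr_no_shortage(D, K, h, P):
    """Classic single-product economic production run (EPQ) size:
    demand rate D, setup cost K, unit holding cost h, production
    rate P. Textbook formula; the paper's eight models generalize
    it to multiple products sharing a flexible system."""
    return math.sqrt(2.0 * K * D / (h * (1.0 - D / P)))

# e.g. D=1000/yr, K=200 per setup, h=5 per unit-yr, P=4000/yr
print(round(epr_no_shortage(1000, 200, 5, 4000), 1))  # → 326.6
```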

  5. Quantitative assessment of changes in landslide risk using a regional scale run-out model

    Science.gov (United States)

    Hussin, Haydar; Chen, Lixia; Ciurean, Roxana; van Westen, Cees; Reichenbach, Paola; Sterlacchini, Simone

    2015-04-01

    The risk of landslide hazard continuously changes in time and space and is rarely a static or constant phenomenon in an affected area. However, one of the main challenges of quantitatively assessing changes in landslide risk is the availability of multi-temporal data for the different components of risk. Furthermore, a truly "quantitative" landslide risk analysis requires the modeling of the landslide intensity (e.g. flow depth, velocities or impact pressures) affecting the elements at risk. Such a quantitative approach is often lacking in medium to regional scale studies in the scientific literature or is left out altogether. In this research we modelled the temporal and spatial changes of debris flow risk in a narrow alpine valley in the North Eastern Italian Alps. The debris flow inventory from 1996 to 2011 and multi-temporal digital elevation models (DEMs) were used to assess the susceptibility of debris flow triggering areas and to simulate debris flow run-out using the Flow-R regional scale model. In order to determine debris flow intensities, we used a linear relationship that was found between back calibrated physically based Flo-2D simulations (local scale models of five debris flows from 2003) and the probability values of the Flow-R software. This gave us the possibility to assign flow depth to a total of 10 separate classes on a regional scale. Debris flow vulnerability curves from the literature and one curve specifically for our case study area were used to determine the damage for different material and building types associated with the elements at risk. The building values were obtained from the Italian Revenue Agency (Agenzia delle Entrate) and were classified per cadastral zone according to the Real Estate Observatory data (Osservatorio del Mercato Immobiliare, Agenzia Entrate - OMI). The minimum and maximum market value for each building was obtained by multiplying the corresponding land-use value (€/m²) with building area and number of floors

  6. Parallelization and Performance of the NIM Weather Model Running on GPUs

    Science.gov (United States)

    Govett, Mark; Middlecoff, Jacques; Henderson, Tom; Rosinski, James

    2014-05-01

    The Non-hydrostatic Icosahedral Model (NIM) is a global weather prediction model being developed to run on the GPU and MIC fine-grain architectures. The model dynamics, written in Fortran, was initially parallelized for GPUs in 2009 using the F2C-ACC compiler and demonstrated good results running on a single GPU. Subsequent efforts have focused on (1) running efficiently on multiple GPUs, (2) parallelization of NIM for Intel-MIC using OpenMP, (3) assessing commercial Fortran GPU compilers now available from Cray, PGI and CAPS, (4) keeping the model up to date with the latest scientific development while maintaining a single source performance portable code, and (5) parallelization of two physics packages used in the NIM: the Global Forecast System (GFS) physics used operationally, and the widely used Weather Research and Forecasting (WRF) model physics. The presentation will touch on each of these efforts, but highlight improvements in parallel performance of the NIM running on the Titan GPU cluster at ORNL, the ongoing parallelization of model physics, and a recent evaluation of commercial GPU compilers using the F2C-ACC compiler as the baseline.

  7. Evaluation of perceived motion during a simulated take-off run

    NARCIS (Netherlands)

    Groen, E.L.; Clari, M.S.V. Valenti; Hosman, R.J.A.W.

    2001-01-01

    In the National Simulator Facility (NSF), tests with an experienced airline pilot determined which simulator motions are required for a realistic sensation of linear acceleration during a simulated take-off.

  8. Search for the standard model Higgs boson produced in vector boson fusion and decaying to bottom quarks using the Run1 and 2015 Run2 data samples.

    CERN Document Server

    Chernyavskaya, Nadezda

    2016-01-01

    A search for the standard model Higgs boson is presented in the Vector Boson Fusion production channel with decay to bottom quarks. A data sample comprising 2.2 fb$^{-1}$ of proton-proton collisions at $\\sqrt{s}$ = 13 TeV collected during the 2015 running period has been analyzed. Production upper limits at 95\\% Confidence Level are derived for a Higgs boson mass of 125 GeV, as well as the fitted signal strength relative to the expectation for the standard model Higgs boson. Results are also combined with the ones obtained with the Run1 $\\sqrt{s}$ = 8 TeV data collected in 2012.

  9. Evaluating uncertainty in simulation models

    Energy Technology Data Exchange (ETDEWEB)

    McKay, M.D.; Beckman, R.J.; Morrison, J.D.; Upton, S.C.

    1998-12-01

    The authors discussed some directions for research and development of methods for assessing simulation variability, input uncertainty, and structural model uncertainty. Variance-based measures of importance for input and simulation variables arise naturally when using the quadratic loss function of the difference between the full model prediction y and the restricted prediction ỹ. They concluded that generic methods for assessing structural model uncertainty do not now exist. However, methods to analyze structural uncertainty for particular classes of models, like discrete event simulation models, may be attainable.

  10. Simulation Model of a Transient

    DEFF Research Database (Denmark)

    Jauch, Clemens; Sørensen, Poul; Bak-Jensen, Birgitte

    2005-01-01

    This paper describes the simulation model of a controller that enables an active-stall wind turbine to ride through transient faults. The simulated wind turbine is connected to a simple model of a power system. Certain fault scenarios are specified and the turbine shall be able to sustain operation...... in case of such faults. The design of the controller is described and its performance assessed by simulations. The control strategies are explained and the behaviour of the turbine discussed....

  12. An Integrated Approach to Flexible Modelling and Animated Simulation

    Institute of Scientific and Technical Information of China (English)

    Li Shuliang; Wu Zhenye

    1994-01-01

    Based on the software support of SIMAN/CINEMA, this paper presents an integrated approach to flexible modelling and simulation with animation. The methodology provides a structured way of integrating mathematical and logical models, statistical experimentation, and statistical analysis with computer animation. Within this methodology, an animated simulation study is separated into six different activities: simulation objectives identification, system model development, simulation experiment specification, animation layout construction, real-time simulation and animation run, and output data analysis. These six activities are objectives driven, relatively independent, and integrate through software organization and simulation files. The key ideas behind this methodology are objectives orientation, modelling flexibility, simulation and animation integration, and application tailorability. Though the methodology is closely related to SIMAN/CINEMA, it can be extended to other software environments.

  13. Stochastic Characteristics and Simulation of the Random Waypoint Mobility Model

    CERN Document Server

    Ahuja, A; Krishna, P Venkata

    2012-01-01

    Simulation results for Mobile Ad-Hoc Networks (MANETs) are fundamentally governed by the underlying mobility model. Thus it is imperative to find whether events functionally dependent on the mobility model 'converge' to well defined functions or constants. This ensures long-run consistency among simulations performed by disparate parties. This paper reviews work on the discrete Random Waypoint Mobility Model (RWMM), addressing its long-run stochastic stability. It is proved that each model in the targeted discrete class of the RWMM satisfies Birkhoff's pointwise ergodic theorem [13], and hence time-averaged functions of the mobility model converge almost surely. We also simulate the most common and general version of the RWMM to give insight into its working.
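The kind of long-run convergence at stake can be sketched numerically: in a toy discrete random-waypoint walk, the time-averaged node speed converges, run after run, to the harmonic mean of the per-leg speed distribution (the well-known "speed decay" of the model; all parameters below are arbitrary):

```python
import math, random

def time_average_speed(n_legs, seed, vmin=0.1, vmax=1.0):
    """Random waypoint on the unit square: pick a waypoint and a leg
    speed, travel there, repeat; return total distance / total time."""
    rng = random.Random(seed)
    x, y = rng.random(), rng.random()
    dist = t = 0.0
    for _ in range(n_legs):
        nx, ny = rng.random(), rng.random()   # next waypoint
        v = rng.uniform(vmin, vmax)           # speed for this leg
        d = math.hypot(nx - x, ny - y)
        dist += d
        t += d / v
        x, y = nx, ny
    return dist / t

# independent long runs agree, as the ergodic theorem suggests; the
# limit is the harmonic mean 1/E[1/V], not the mean leg speed 0.55
print(time_average_speed(200000, seed=1),
      time_average_speed(200000, seed=2))
```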

  14. Strong Lensing Probabilities in a Cosmological Model with a Running Primordial Power Spectrum

    CERN Document Server

    Zhang, T J; Yang, Z L; He, X T; Zhang, Tong-Jie; Chen, Da-Ming; Yang, Zhi-Liang; He, Xiang-Tao

    2004-01-01

    The combination of the first-year Wilkinson Microwave Anisotropy Probe (WMAP) data with other finer scale cosmic microwave background (CMB) experiments (CBI and ACBAR) and two structure formation measurements (2dFGRS and Lyman $\\alpha$ forest) suggests a $\\Lambda$CDM cosmological model with a running spectral power index of primordial density fluctuations. Motivated by this new result on the index of the primordial power spectrum, we present the first study of the predicted lensing probabilities of image separation in a spatially flat $\\Lambda$CDM model with a running spectral index (RSI-$\\Lambda$CDM model). It is shown that the RSI-$\\Lambda$CDM model suppresses the predicted lensing probabilities at small splitting angles of less than about 4$^{''}$ compared with those of the standard power-law $\\Lambda$CDM (PL-$\\Lambda$CDM) model.

  15. Simulation - modeling - experiment; Simulation - modelisation - experience

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    After two workshops held in 2001 on the same topics, and in order to make a status of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop and dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advance status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); methods of disturbances and sensitivity analysis of nuclear data in reactor physics, application to VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le 
Colley F

  16. Running the running

    CERN Document Server

    Cabass, Giovanni; Melchiorri, Alessandro; Pajer, Enrico; Silk, Joseph

    2016-01-01

    We use the recent observations of Cosmic Microwave Background temperature and polarization anisotropies provided by the Planck satellite experiment to place constraints on the running $\\alpha_\\mathrm{s} = \\mathrm{d}n_{\\mathrm{s}} / \\mathrm{d}\\log k$ and the running of the running $\\beta_{\\mathrm{s}} = \\mathrm{d}\\alpha_{\\mathrm{s}} / \\mathrm{d}\\log k$ of the spectral index $n_{\\mathrm{s}}$ of primordial scalar fluctuations. We find $\\alpha_\\mathrm{s}=0.011\\pm0.010$ and $\\beta_\\mathrm{s}=0.027\\pm0.013$ at $68\\%\\,\\mathrm{CL}$, suggesting the presence of a running of the running at the level of two standard deviations. We find no significant correlation between $\\beta_{\\mathrm{s}}$ and foreground parameters, with the exception of the point sources amplitude at $143\\,\\mathrm{GHz}$, $A^{PS}_{143}$, which shifts by half a sigma when the running of the running is considered. We further study the cosmological implications of this anomaly by including in the analysis the lensing amplitude $A_L$, the curvature parameter ...
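In this parametrization the runnings are just higher powers of ln(k/k_*) in the primordial spectrum; a short numerical check using the central values quoted above (the amplitude ln(10^10 A_s) = 3.044 is a Planck-like assumption added for the sketch):

```python
def ln_power(x, ln_As=3.044, ns=0.9649, alpha_s=0.011, beta_s=0.027):
    """ln P(k) with x = ln(k/k_*): tilt, running and running of the
    running enter as successive powers of x."""
    return (ln_As + (ns - 1.0) * x
            + 0.5 * alpha_s * x**2 + beta_s * x**3 / 6.0)

def effective_tilt(x, eps=1e-6, **kw):
    """n_s(k) - 1 = d ln P / d ln k, via a central difference."""
    return (ln_power(x + eps, **kw) - ln_power(x - eps, **kw)) / (2.0 * eps)

# at the pivot the tilt is ns - 1; the positive runnings found in the
# paper push the tilt upward as x = ln(k/k_*) grows
print(effective_tilt(0.0), effective_tilt(2.0))
```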

  17. The Run up Tsunami Modeling in Bengkulu using the Spatial Interpolation of Kriging Technique

    Directory of Open Access Journals (Sweden)

    Yulian Fauzi

    2014-12-01

    Full Text Available This research aims to design a tsunami hazard zone with scenarios of varying tsunami run-up heights, based on land use, slope and distance from the shoreline. The method used in this research is spatial modelling with GIS via the Ordinary Kriging interpolation technique. The best-performing kriging method in this study was Circular Kriging, which showed a good semivariogram and a smaller RMSE than the other kriging methods. The results show that the area affected by tsunami inundation depends on run-up height, slope and land use. For a run-up of 30 meters, the flooded area is about 3,148.99 hectares, or 20.7% of the total area of the city of Bengkulu.
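A minimal ordinary-kriging step can be written without any GIS stack (a spherical semivariogram is used here for brevity, whereas the study selected a circular model; the three observation points and run-up values are invented):

```python
import math

def sph_gamma(h, sill=1.0, rng=500.0):
    """Spherical semivariogram (stand-in for the study's circular model)."""
    if h >= rng:
        return sill
    r = h / rng
    return sill * (1.5 * r - 0.5 * r**3)

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def ordinary_krige(pts, vals, target):
    """Ordinary kriging: solve [Gamma 1; 1^T 0][w; mu] = [gamma0; 1]."""
    n = len(pts)
    A = [[sph_gamma(math.dist(pts[i], pts[j])) for j in range(n)] + [1.0]
         for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [sph_gamma(math.dist(p, target)) for p in pts] + [1.0]
    w = solve(A, b)[:n]
    return sum(wi * vi for wi, vi in zip(w, vals)), w

pts = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]   # observation points, m
vals = [2.0, 3.0, 4.0]                           # observed run-up, m
est, w = ordinary_krige(pts, vals, (30.0, 30.0))
print(est, sum(w))  # weights sum to 1 (unbiasedness constraint)
```

The unit-sum constraint on the weights is what makes the estimator unbiased; swapping `sph_gamma` for a circular or Gaussian variogram changes only that one function.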

  18. Usability and Information access challenges in complex simulation models

    CSIR Research Space (South Africa)

    Naidoo, S

    2008-07-01

    Full Text Available scenario as a file using an XML format to store the data. Because VGD uses a distributed simulation architecture, the tool also allows the user to specify where each model will run. In this case the user will, in addition to the scenario, also specify... example of this is the terrain configuration files that specify the type of terrain data to be used. This terrain configuration file must be set up in order for the simulation to run and is located in the same folder as the scenario configuration...

  19. IVOA Recommendation: Simulation Data Model

    CERN Document Server

    Lemson, Gerard; Cervino, Miguel; Gheller, Claudio; Gray, Norman; LePetit, Franck; Louys, Mireille; Ooghe, Benjamin; Wagner, Rick; Wozniak, Herve

    2014-01-01

    In this document and the accompanying documents we describe a data model (Simulation Data Model) describing numerical computer simulations of astrophysical systems. The primary goal of this standard is to support discovery of simulations by describing those aspects of them that scientists might wish to query on, i.e. it is a model for meta-data describing simulations. This document does not propose a protocol for using this model. IVOA protocols are being developed and are supposed to use the model, either in its original form or in a form derived from the model proposed here, but more suited to the particular protocol. The SimDM has been developed in the IVOA Theory Interest Group with assistance of representatives of relevant working groups, in particular DM and Semantics.

  20. Statistical Emulation of Climate Model Projections Based on Precomputed GCM Runs*

    KAUST Repository

    Castruccio, Stefano

    2014-03-01

    The authors describe a new approach for emulating the output of a fully coupled climate model under arbitrary forcing scenarios that is based on a small set of precomputed runs from the model. Temperature and precipitation are expressed as simple functions of the past trajectory of atmospheric CO2 concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures the nonlinear evolution of spatial patterns of climate anomalies inherent in transient climates. The approach does as well as pattern scaling in all circumstances and substantially better in many; it is not computationally demanding; and, once the statistical model is fit, it produces emulated climate output effectively instantaneously. It may therefore find wide application in climate impacts assessments and other policy analyses requiring rapid climate projections.
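
A minimal sketch of the kind of emulator described here: temperature expressed as a linear function of the current and lagged CO2 trajectory, fit by least squares on a small set of training runs and then applied to a new scenario effectively instantaneously. The feature choice and synthetic data are assumptions for illustration, not the authors' actual statistical model.

```python
import numpy as np

def features(co2):
    """Simple past-trajectory features: current CO2 and a lagged 10-step mean."""
    co2 = np.asarray(co2, float)
    lagged = np.array([co2[max(0, t - 10):t + 1].mean() for t in range(len(co2))])
    return np.column_stack([np.ones_like(co2), co2, lagged])

rng = np.random.default_rng(1)
# Synthetic "training runs": temperature responds linearly to trajectory features.
true_beta = np.array([0.5, 0.01, 0.005])
runs = []
for _ in range(3):                                    # a small set of precomputed runs
    co2 = 280 + np.cumsum(rng.uniform(0, 4, 100))     # an arbitrary forcing path
    X = features(co2)
    runs.append((X, X @ true_beta + rng.normal(0, 0.01, 100)))

X = np.vstack([X for X, _ in runs])
T = np.concatenate([T for _, T in runs])
beta, *_ = np.linalg.lstsq(X, T, rcond=None)          # fit the emulator once...

new_co2 = 280 + np.cumsum(np.full(100, 2.0))          # ...then emulate a new scenario
T_emulated = features(new_co2) @ beta
```

Once `beta` is fit, emulating any new CO2 scenario is a single matrix-vector product, which is what makes this approach attractive for rapid impacts assessments.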

  1. Can neuromuscular fatigue explain running strategies and performance in ultra-marathons?: the flush model.

    Science.gov (United States)

    Millet, Guillaume Y

    2011-06-01

    While the industrialized world adopts a largely sedentary lifestyle, ultra-marathon running races have become increasingly popular in the last few years in many countries. The ability to run long distances is also considered to have played a role in human evolution. This makes the issue of ultra-long distance physiology important. When running distances of many tens of kilometres (up to 1000 km in a single stage), fatigue resistance is critical. Fatigue is generally defined as strength loss (i.e. a decrease in maximal voluntary contraction [MVC]), which is known to be dependent on the type of exercise. Critical task variables include the intensity and duration of the activity, both of which are very specific to ultra-endurance sports. They also include the muscle groups involved and the type of muscle contraction, two variables that depend on the sport under consideration. The first part of this article focuses on the central and peripheral causes of the alterations to neuromuscular function that occur in ultra-marathon running. Evaluating neuromuscular function requires measurements of MVCs and maximal electrical/magnetic stimulations; these provide insight into the CNS and muscle factors implicated in fatigue. However, such measurements do not necessarily predict how muscle function may influence ultra-endurance running and whether this has an effect on speed regulation during a real competition (i.e. when pacing strategies are involved). In other words, the nature of the relationship between fatigue as measured using maximal contractions/stimulation and submaximal performance limitation/regulation is questionable. To investigate this issue, we suggest a holistic model in the second part of this article. This model can be applied to all endurance activities, but is specifically adapted to ultra-endurance running: the flush model. This model has the following four components: (i) the ball-cock (or buoy), which can be compared with the rate of perceived

  2. Overcoming Microsoft Excel's Weaknesses for Crop Model Building and Simulations

    Science.gov (United States)

    Sung, Christopher Teh Boon

    2011-01-01

    Using spreadsheets such as Microsoft Excel for building crop models and running simulations can be beneficial. Excel is easy to use, powerful, and versatile, and it requires the least proficiency in computer programming compared to other programming platforms. Excel, however, has several weaknesses: it does not directly support loops for iterative…

  3. Human and avian running on uneven ground: a model-based comparison

    Science.gov (United States)

    Birn-Jeffery, A. V.; Blum, Y.

    2016-01-01

    Birds and humans are successful bipedal runners, which evolved bipedalism independently, but the extent of the similarities and differences of their bipedal locomotion is unknown. Moreover, the anatomical differences of their locomotor systems complicate direct comparisons. However, a simplifying mechanical model, such as the conservative spring–mass model, can be used to describe both avian and human running and thus provides a way to compare the locomotor strategies that birds and humans use when running on level and uneven ground. Although humans run with significantly steeper leg angles at touchdown and stiffer legs when compared with cursorial ground birds, swing-leg adaptations (leg angle and leg length kinematics) used by birds and humans while running appear similar across all types of uneven ground. Nevertheless, owing to morphological restrictions, the crouched avian leg has a greater range of leg angle and leg length adaptations when coping with drops and downward steps than the straight human leg. On the other hand, the straight human leg seems to use leg stiffness adaptation when coping with obstacles and upward steps, unlike the crouched avian leg posture. PMID:27655670

  4. Modeling and Simulation with INS.

    Science.gov (United States)

    Roberts, Stephen D.; And Others

    INS, the Integrated Network Simulation language, puts simulation modeling into a network framework and automatically performs such programming activities as placing the problem into a next event structure, coding events, collecting statistics, monitoring status, and formatting reports. To do this, INS provides a set of symbols (nodes and branches)…

  5. Simulation modeling of estuarine ecosystems

    Science.gov (United States)

    Johnson, R. W.

    1980-01-01

    A simulation model has been developed of the Galveston Bay, Texas, ecosystem. Secondary productivity, measured by harvestable species (such as shrimp and fish), is evaluated in terms of man-related and controllable factors, such as the quantity and quality of inlet fresh water and pollutants. The simulation model used information from an existing physical parameters model as well as pertinent biological measurements obtained by conventional sampling techniques. Predicted results from the model compared favorably with those from comparable investigations. In addition, this paper discusses remotely sensed and conventional measurements in the framework of prospective models that may be used to study estuarine processes and ecosystem productivity.

  6. Status of the Inert Doublet Model of dark matter after Run-1 of the LHC

    CERN Document Server

    Goudelis, Andreas

    2015-01-01

    The Inert Doublet Model (IDM) is one of the simplest extensions of the Standard Model that can provide a viable dark matter (DM) candidate. Despite its simplicity, it predicts a versatile phenomenology both for cosmology and for the Large Hadron Collider. We briefly summarize the status of searches for IDM dark matter in direct DM detection experiments and the LHC, focusing on the impact of the latter on the model parameter space. In particular, we discuss the consequences of the Higgs boson discovery as well as those of searches for dileptons accompanied by missing transverse energy during the first LHC Run and comment on the prospects of probing some of the hardest to test regions of the IDM parameter space during the 13 TeV Run.

  7. Sludge batch 9 simulant runs using the nitric-glycolic acid flowsheet

    Energy Technology Data Exchange (ETDEWEB)

    Lambert, D. P. [Savannah River Site (SRS), Aiken, SC (United States); Williams, M. S. [Savannah River Site (SRS), Aiken, SC (United States); Brandenburg, C. H. [Savannah River Site (SRS), Aiken, SC (United States); Luther, M. C. [Savannah River Site (SRS), Aiken, SC (United States); Newell, J. D. [Savannah River Site (SRS), Aiken, SC (United States); Woodham, W. H. [Savannah River Site (SRS), Aiken, SC (United States)

    2016-11-01

    Testing was completed to develop a Sludge Batch 9 (SB9) nitric-glycolic acid chemical process flowsheet for the Defense Waste Processing Facility’s (DWPF) Chemical Process Cell (CPC). CPC simulations were completed using SB9 sludge simulant, Strip Effluent Feed Tank (SEFT) simulant and Precipitate Reactor Feed Tank (PRFT) simulant. Ten sludge-only Sludge Receipt and Adjustment Tank (SRAT) cycles and four SRAT/Slurry Mix Evaporator (SME) cycles, and one actual SB9 sludge (SRAT/SME cycle) were completed. As has been demonstrated in over 100 simulations, the replacement of formic acid with glycolic acid virtually eliminates the CPC’s largest flammability hazards, hydrogen and ammonia. Recommended processing conditions are summarized in section 3.5.1. Testing demonstrated that the interim chemistry and Reduction/Oxidation (REDOX) equations are sufficient to predict the composition of DWPF SRAT product and SME product. Additional reports will finalize the chemistry and REDOX equations. Additional testing developed an antifoam strategy to minimize the hexamethyldisiloxane (HMDSO) peak at boiling, while controlling foam based on testing with simulant and actual waste. Implementation of the nitric-glycolic acid flowsheet in DWPF is recommended. This flowsheet not only eliminates the hydrogen and ammonia hazards but will lead to shorter processing times, higher elemental mercury recovery, and more concentrated SRAT and SME products. The steady pH profile is expected to provide flexibility in processing the high volume of strip effluent expected once the Salt Waste Processing Facility starts up.

  8. Modeling and Simulating Environmental Effects

    OpenAIRE

    Guest, Peter S.; Murphree, Tom; Frederickson, Paul A.; Guest, Arlene A.

    2012-01-01

    MOVES Research & Education Systems Seminar: Presentation; Session 4: Collaborative NWDC/NPS M&S Research; Moderator: Curtis Blais; Modeling and Simulating Environmental Effects; speakers: Peter Guest, Paul Frederickson & Tom Murphree; Environmental Effects Group

  9. TREAT Modeling and Simulation Strategy

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, Mark David [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report summarizes a four-phase process used to describe the strategy in developing modeling and simulation software for the Transient Reactor Test Facility. The four phases of this research and development task are identified as (1) full core transient calculations with feedback, (2) experiment modeling, (3) full core plus experiment simulation and (4) quality assurance. The document describes the four phases, the relationship between these research phases, and anticipated needs within each phase.

  10. NASA SPoRT Initialization Datasets for Local Model Runs in the Environmental Modeling System

    Science.gov (United States)

    Case, Jonathan L.; LaFontaine, Frank J.; Molthan, Andrew L.; Carcione, Brian; Wood, Lance; Maloney, Joseph; Estupinan, Jeral; Medlin, Jeffrey M.; Blottman, Peter; Rozumalski, Robert A.

    2011-01-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center has developed several products for its National Weather Service (NWS) partners that can be used to initialize local model runs within the Weather Research and Forecasting (WRF) Environmental Modeling System (EMS). These real-time datasets consist of surface-based information updated at least once per day, and are produced in a composite or gridded product that is easily incorporated into the WRF EMS. The primary goal for making these NASA datasets available to the WRF EMS community is to provide timely and high-quality information at a spatial resolution comparable to that used in the local model configurations (i.e., convection-allowing scales). The current suite of SPoRT products supported in the WRF EMS includes a Sea Surface Temperature (SST) composite, a Great Lakes sea-ice extent, a Greenness Vegetation Fraction (GVF) composite, and Land Information System (LIS) gridded output. The SPoRT SST composite is a blend of primarily the Moderate Resolution Imaging Spectroradiometer (MODIS) infrared and Advanced Microwave Scanning Radiometer for Earth Observing System data for non-precipitation coverage over the oceans at 2-km resolution. The composite includes a special lake surface temperature analysis over the Great Lakes using contributions from the Remote Sensing Systems temperature data. The Great Lakes Environmental Research Laboratory Ice Percentage product is used to create a sea-ice mask in the SPoRT SST composite. The sea-ice mask is produced daily (in-season) at 1.8-km resolution and identifies ice percentage from 0 to 100% in 10% increments, with values above 90% flagged as ice.

  11. Repo Runs

    NARCIS (Netherlands)

    Martin, A.; Skeie, D.; von Thadden, E.L.

    2010-01-01

    This paper develops a model of financial institutions that borrow short-term and invest in long-term marketable assets. Because these financial intermediaries perform maturity transformation, they are subject to runs. We endogenize the profits of the intermediary and derive distinct liquidity and

  12. Stochastic modeling analysis and simulation

    CERN Document Server

    Nelson, Barry L

    1995-01-01

    A coherent introduction to the techniques for modeling dynamic stochastic systems, this volume also offers a guide to the mathematical, numerical, and simulation tools of systems analysis. Suitable for advanced undergraduates and graduate-level industrial engineers and management science majors, it proposes modeling systems in terms of their simulation, regardless of whether simulation is employed for analysis. Beginning with a view of the conditions that permit a mathematical-numerical analysis, the text explores Poisson and renewal processes, Markov chains in discrete and continuous time, se

  13. Model reduction for circuit simulation

    CERN Document Server

    Hinze, Michael; Maten, E Jan W Ter

    2011-01-01

    Simulation based on mathematical models plays a major role in computer aided design of integrated circuits (ICs). Decreasing structure sizes, increasing packing densities and driving frequencies require the use of refined mathematical models, and to take into account secondary, parasitic effects. This leads to very high dimensional problems which nowadays require simulation times too large for the short time-to-market demands in industry. Modern Model Order Reduction (MOR) techniques present a way out of this dilemma in providing surrogate models which keep the main characteristics of the devi

  14. Running Away

    Science.gov (United States)

    KidsHealth article for kids about running away and the reality of life on the streets.

  15. Kriging-approximation simulated annealing algorithm for groundwater modeling

    Science.gov (United States)

    Shen, C. H.

    2015-12-01

    Optimization algorithms are often applied to search for the best parameters of complex groundwater models. Running these complex models to evaluate the objective function can be time-consuming. This research proposes a Kriging-approximation simulated annealing algorithm. Kriging is a spatial-statistics method used to interpolate unknown variables from surrounding given data. In the algorithm, the Kriging method is used to approximate the complicated objective function and is incorporated into simulated annealing. The contribution of the Kriging-approximation simulated annealing algorithm is to reduce calculation time and increase efficiency.
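
The idea — letting a cheap surrogate stand in for most expensive objective evaluations inside simulated annealing — can be sketched as follows. For brevity the surrogate here is an inverse-distance interpolator over an archive of exact evaluations rather than a full kriging predictor, and the expensive "groundwater model" is a stand-in quadratic; both are assumptions for illustration.

```python
import math, random

def expensive_model(x):
    """Stand-in for a slow groundwater-model run."""
    return (x - 2.0) ** 2

def surrogate(x, archive):
    """Cheap estimate of the objective from archived exact runs."""
    num = den = 0.0
    for xi, fi in archive:
        d = abs(x - xi)
        if d < 1e-12:
            return fi
        w = 1.0 / d ** 2                     # inverse-distance weighting
        num += w * fi
        den += w
    return num / den

def surrogate_sa(x0, steps=500, exact_every=5, seed=7):
    rng = random.Random(seed)
    x, fx = x0, expensive_model(x0)
    archive = [(x, fx)]
    best_x, best_f = x, fx
    for k in range(steps):
        temp = 1.0 * (1 - k / steps) + 1e-3            # linear cooling schedule
        cand = min(5.0, max(-5.0, x + rng.gauss(0, 0.5)))
        if k % exact_every == 0:                       # occasional exact run ...
            fc = expensive_model(cand)
            archive.append((cand, fc))
            if fc < best_f:                            # ... only exact runs set the best
                best_x, best_f = cand, fc
        else:                                          # otherwise trust the surrogate
            fc = surrogate(cand, archive)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
    return best_x, best_f, len(archive)

best_x, best_f, n_exact = surrogate_sa(x0=-4.0)
```

Only one candidate in five pays for an exact model run; the rest are screened by the surrogate, which is where the claimed reduction in calculation time comes from.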

  16. Towards Better Coupling of Hydrological Simulation Models

    Science.gov (United States)

    Penton, D.; Stenson, M.; Leighton, B.; Bridgart, R.

    2012-12-01

    Standards for model interoperability and scientific workflow software provide techniques and tools for coupling hydrological simulation models. However, model builders are yet to realize the benefits of these and continue to write ad hoc implementations and scripts. Three case studies demonstrate different approaches to coupling models, the first using tight interfaces (OpenMI), the second using a scientific workflow system (Trident) and the third using a tailored execution engine (Delft Flood Early Warning System - Delft-FEWS). No approach was objectively better than any other approach. The foremost standard for coupling hydrological models is the Open Modeling Interface (OpenMI), which defines interfaces for models to interact. An implementation of the OpenMI standard involves defining interchange terms and writing a .NET/Java wrapper around the model. An execution wrapper such as OatC.GUI or Pipistrelle executes the models. The team built two OpenMI implementations for eWater Source river system models. Once built, it was easy to swap river system models. The team encountered technical challenges with versions of the .Net framework (3.5 calling 4.0) and with the performance of the execution wrappers when running daily simulations. By design, the OpenMI interfaces are general, leaving significant decisions around the semantics of the interfaces to the implementer. Increasingly, scientific workflow tools such as Kepler, Taverna and Trident are able to replace custom scripts. These tools aim to improve the provenance and reproducibility of processing tasks. In particular, Taverna and the myExperiment website have had success making many bioinformatics workflows reusable and sharable. The team constructed Trident activities for hydrological software including IQQM, REALM and eWater Source. They built an activity generator for model builders to build activities for particular river systems. The models were linked at a simulation level, without any daily time
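
A stripped-down analogue of the tight-interface approach described above: each model exposes get/set/update methods on named exchange items, and a driver advances both models one timestep at a time, passing values across. This is a toy protocol for illustration, not the actual OpenMI 2.0 API.

```python
class Rainfall:
    """Upstream model: produces a runoff value each step."""
    def __init__(self, series):
        self.series, self.t = list(series), 0
    def get_value(self, item):
        assert item == "runoff"
        return self.series[self.t]
    def update(self):
        self.t += 1

class River:
    """Downstream model: routes inflow through simple linear storage."""
    def __init__(self, k=0.5):
        self.storage, self.k, self.inflow = 0.0, k, 0.0
        self.outflows = []
    def set_value(self, item, value):
        assert item == "inflow"
        self.inflow = value
    def update(self):
        self.storage += self.inflow
        out = self.k * self.storage
        self.storage -= out
        self.outflows.append(out)

def run_coupled(upstream, downstream, steps):
    """Driver: the only place that knows both models exist."""
    for _ in range(steps):
        downstream.set_value("inflow", upstream.get_value("runoff"))
        upstream.update()
        downstream.update()

rain = Rainfall([10.0, 0.0, 0.0, 0.0])
river = River(k=0.5)
run_coupled(rain, river, steps=4)
# river.outflows -> [5.0, 2.5, 1.25, 0.625]: storage halves once the pulse passes
```

As in OpenMI, neither model imports the other; swapping the river model only requires another class with the same exchange-item interface, which is the property the case studies exploited when exchanging river system models.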

  17. A numerical study of tsunami wave impact and run-up on coastal cliffs using a CIP-based model

    Science.gov (United States)

    Zhao, Xizeng; Chen, Yong; Huang, Zhenhua; Hu, Zijun; Gao, Yangyang

    2017-05-01

    There is a general lack of understanding of tsunami wave interaction with complex geographies, especially the process of inundation. Numerical simulations are performed to understand the effects of several factors on tsunami wave impact and run-up in the presence of gentle submarine slopes and coastal cliffs, using an in-house code, a constrained interpolation profile (CIP)-based model. The model employs a high-order finite difference method, the CIP method, as the flow solver; utilizes a VOF-type method, the tangent of hyperbola for interface capturing/slope weighting (THINC/SW) scheme, to capture the free surface; and treats the solid boundary by an immersed boundary method. A series of incident waves are arranged to interact with varying coastal geographies. Numerical results are compared with experimental data and good agreement is obtained. The influences of gentle submarine slope, coastal cliff and incident wave height are discussed. It is found that the way the tsunami amplification factor varies with the incident wave is affected by the gradient of the cliff slope, with a critical value of about 45°. The run-up on a toe-erosion cliff is smaller than that on a normal cliff. The run-up is also related to the length of a gentle submarine slope, with a critical value of about 2.292 m in the present model for most cases. The impact pressure on the cliff is extremely large and concentrated, and the backflow effect is non-negligible. The results are accurate and helpful for inverting the tsunami source and forecasting disasters.

  18. Changes in spring-mass model parameters and energy cost during track running to exhaustion.

    Science.gov (United States)

    Slawinski, Jean; Heubert, Richard; Quievre, Jacques; Billat, Véronique; Hanon, Christine

    2008-05-01

    The purpose of this study was to determine whether exhaustion modifies the stiffness characteristics, as defined in the spring-mass model, during track running. We also investigated whether stiffer runners are also the most economical. Nine well-trained runners performed an exhaustive exercise over 2000 meters on an indoor track. This exhaustive exercise was preceded by a warm-up and was followed by an active recovery. Throughout all the exercises, the energy cost of running (Cr) was measured. Vertical and leg stiffness was measured with a force plate (Kvert and Kleg, respectively) integrated into the track. The results show that Cr increases significantly after the 2000-meter run (0.192 +/- 0.006 to 0.217 +/- 0.013 mL x kg(-1) x m(-1)). However, Kvert and Kleg remained constant (32.52 +/- 6.42 to 32.59 +/- 5.48 and 11.12 +/- 2.76 to 11.14 +/- 2.48 kN x m(-1), respectively). An inverse correlation was observed between Cr and Kleg, but only during the 2000-meter exercise (r = -0.67; P < or = 0.05). During the warm-up or the recovery, Cr and Kleg were not correlated (r = 0.354; P = 0.82 and r = 0.21; P = 0.59, respectively). On the track, exhaustion induced by a 2000-meter run has no effect on Kleg or Kvert. The inverse correlation was only observed between Cr and Kleg during the 2000-meter run and not before or after the exercise, suggesting that the stiffness of the runner may not be associated with the Cr.
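
The reported spring-mass parameters are conventionally computed from peak ground reaction force and contact geometry (Kvert = Fmax/Δy; Kleg = Fmax/ΔL, with ΔL from the half-swept contact arc, following McMahon & Cheng). A sketch with illustrative numbers — not the study's data — lands in the same range of magnitudes as the abstract:

```python
import math

def spring_mass_stiffness(f_max, delta_y, leg_length, speed, contact_time):
    """Vertical and leg stiffness from the classic spring-mass definitions:
    Kvert = Fmax / dy, Kleg = Fmax / dL, where the leg compression dL adds the
    CoM lowering to the geometric shortening over the half-swept contact arc."""
    half_sweep = speed * contact_time / 2.0
    delta_l = leg_length - math.sqrt(leg_length ** 2 - half_sweep ** 2) + delta_y
    return f_max / delta_y, f_max / delta_l

# Illustrative (not the study's) values for a runner at ~5 m/s:
k_vert, k_leg = spring_mass_stiffness(
    f_max=2000.0,      # peak vertical ground reaction force, N
    delta_y=0.06,      # peak centre-of-mass lowering, m
    leg_length=1.0,    # standing leg length, m
    speed=5.0,         # running speed, m/s
    contact_time=0.2,  # ground contact time, s
)
# k_vert ~ 33.3 kN/m, k_leg ~ 10.3 kN/m: close to the ~32.5 and ~11.1 kN/m above
```

Because ΔL ≥ Δy by construction, Kleg is always smaller than Kvert, consistent with the values reported in the abstract.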

  19. Biosensors for EVA: Muscle Oxygen and pH During Walking, Running and Simulated Reduced Gravity

    Science.gov (United States)

    Lee, S. M. C.; Ellerby, G.; Scott, P.; Stroud, L.; Norcross, J.; Pesholov, B.; Zou, F.; Gernhardt, M.; Soller, B.

    2009-01-01

    During lunar excursions in the EVA suit, real-time measurement of metabolic rate is required to manage consumables and guide activities to ensure safe return to the base. Metabolic rate, or oxygen consumption (VO2), is normally measured from pulmonary parameters but cannot be determined with standard techniques in the oxygen-rich environment of a spacesuit. Our group developed novel near infrared spectroscopic (NIRS) methods to calculate muscle oxygen saturation (SmO2), hematocrit, and pH, and we recently demonstrated that we can use our NIRS sensor to measure VO2 on the leg during cycling. Our NSBRI-funded project is looking to extend this methodology to examine activities which more appropriately represent EVA activities, such as walking and running and to better understand factors that determine the metabolic cost of exercise in both normal and lunar gravity. Our 4 year project specifically addresses risk: ExMC 4.18: Lack of adequate biomedical monitoring capability for Constellation EVA Suits and EPSP risk: Risk of compromised EVA performance and crew health due to inadequate EVA suit systems.

  20. Strange matter and strange stars in a thermodynamically self-consistent perturbation model with running coupling and running strange quark mass

    CERN Document Server

    Xu, J F; Liu, F; Hou, D F; Chen, L W

    2015-01-01

    A quark model with running coupling and running strange quark mass, which is thermodynamically self-consistent at both high and low densities, is presented and applied to study the properties of strange quark matter and the structure of compact stars. An additional term in the thermodynamic potential density is determined by requiring that the fundamental differential equation of thermodynamics be satisfied. This term plays an important role at comparatively low densities and is negligible at extremely high densities, acting as a chemical-potential-dependent bag constant. In this thermodynamically enhanced perturbative QCD model, strange quark matter still has the possibility of being absolutely stable, while a pure quark star has a sharp surface, a maximum mass as large as about twice the solar mass, and a maximum radius of about 11 kilometers.

  1. Impacts of the driver's bounded rationality on the traffic running cost under the car-following model

    Science.gov (United States)

    Tang, Tie-Qiao; Luo, Xiao-Feng; Liu, Kai

    2016-09-01

    The driver's bounded rationality has significant influences on micro driving behavior, and researchers have proposed traffic flow models that account for it. However, little effort has been made to explore its effects on trip cost. In this paper, we use our recently proposed car-following model to study the effects of the driver's bounded rationality on his own running cost and on the system's total cost under three definitions of traffic running cost. The numerical results show that considering the driver's bounded rationality increases both his running cost and the system's total cost under all three definitions.

  2. Modelling and Simulation: An Overview

    NARCIS (Netherlands)

    M.J. McAleer (Michael); F. Chan (Felix); L. Oxley (Les)

    2013-01-01

    The papers in this special issue of Mathematics and Computers in Simulation cover the following topics: improving judgmental adjustment of model-based forecasts, whether forecast updates are progressive, on a constrained mixture vector autoregressive model, whether all estimators are bor

  3. General introduction to simulation models

    DEFF Research Database (Denmark)

    Hisham Beshara Halasa, Tariq; Boklund, Anette

    2012-01-01

    Monte Carlo simulation can be defined as a representation of real life systems to gain insight into their functions and to investigate the effects of alternative conditions or actions on the modeled system. Models are a simplification of a system. Most often, it is best to use experiments and field...

  4. Modelling, simulating and optimizing Boilers

    DEFF Research Database (Denmark)

    Sørensen, Kim; Condra, Thomas Joseph; Houbak, Niels

    2003-01-01

    of the boiler has been developed and simulations carried out by means of the Matlab integration routines. The model is prepared as a dynamic model consisting of both ordinary differential equations and algebraic equations, together formulated as a Differential-Algebraic- Equation system. Being able to operate...

  5. Input data to run Landis-II

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The data are input data files to run the forest simulation model Landis-II for Isle Royale National Park. Files include: a) Initial_Comm, which includes the location...

  6. Vehicle dynamics modeling and simulation

    CERN Document Server

    Schramm, Dieter; Bardini, Roberto

    2014-01-01

    The authors examine in detail the fundamentals and mathematical descriptions of the dynamics of automobiles. In this context different levels of complexity will be presented, starting with basic single-track models up to complex three-dimensional multi-body models. A particular focus is on the process of establishing mathematical models on the basis of real cars and the validation of simulation results. The methods presented are explained in detail by means of selected application scenarios.

  7. Rating global magnetosphere model simulations through statistical data-model comparisons

    Science.gov (United States)

    Ridley, A. J.; De Zeeuw, D. L.; Rastätter, L.

    2016-10-01

    The Community Coordinated Modeling Center (CCMC) was created in 2000 to allow researchers to remotely run simulations and explore the results through online tools. Since that time, over 10,000 simulations have been conducted at CCMC through their runs-on-request service. Many of those simulations have been event studies using global magnetohydrodynamic (MHD) models of the magnetosphere. All of these simulations are available to the general public to explore and utilize. Many of these simulations have had virtual satellites flown through the model to extract the simulation results at the satellite location as a function of time. This study used 662 of these magnetospheric simulations, with a total of 2503 satellite traces, to statistically compare the magnetic field simulated by models to the satellite data. Ratings for each satellite trace were created by comparing the root-mean-square error of the trace with all of the other traces for the given satellite and magnetic field component. The 1-5 ratings, with 5 being the best quality run, are termed "stars." From these star ratings, a few conclusions were made: (1) Simulations tend to have a lower rating for higher levels of activity; (2) there was a clear bias in the Bz component of the simulations at geosynchronous orbit, implying that the models were challenged in simulating the inner magnetospheric dynamics correctly; and (3) the highest performing model included a coupled ring current model, which was about 0.15 stars better on average than the same model without the ring current model coupling.
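
The rating scheme — ranking each trace's root-mean-square error against all traces for the same satellite and field component, then binning into 1-5 "stars" — might be sketched like this; the exact quintile binning is an assumption about details the abstract does not specify.

```python
import numpy as np

def rmse(sim, obs):
    """Root-mean-square error between a simulated and observed trace."""
    return float(np.sqrt(np.mean((np.asarray(sim) - np.asarray(obs)) ** 2)))

def star_ratings(rmses):
    """Map each trace's RMSE to 1-5 stars by rank among all traces:
    lowest-error quintile gets 5 stars, highest gets 1."""
    rmses = np.asarray(rmses, float)
    rank = np.argsort(np.argsort(rmses))          # 0 = smallest RMSE
    return (5 - np.floor(5 * rank / len(rmses))).astype(int)

# Five hypothetical traces of the same magnetic-field component:
errors = [0.8, 2.1, 1.3, 4.0, 3.2]
stars = star_ratings(errors)
# stars -> [5, 3, 4, 1, 2]: best trace (0.8) gets 5 stars, worst (4.0) gets 1
```

Because the rating is relative to the pool of traces rather than an absolute error threshold, it compares models fairly across satellites and components whose typical error magnitudes differ.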

  8. Stochastic models: theory and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2008-03-01

    Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
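
For a stationary Gaussian process, the sample-generation step the report describes can be done directly via a Cholesky factorization of the covariance matrix evaluated on the time grid. A minimal sketch — the exponential covariance is an arbitrary illustrative choice:

```python
import numpy as np

def gp_samples(t, cov_fn, n_samples, rng):
    """Draw n_samples paths of a zero-mean Gaussian process at times t."""
    C = cov_fn(np.abs(t[:, None] - t[None, :]))          # covariance matrix
    L = np.linalg.cholesky(C + 1e-12 * np.eye(len(t)))   # jitter for stability
    # If z ~ N(0, I), then L z has covariance L L^T = C.
    return rng.standard_normal((n_samples, len(t))) @ L.T

cov = lambda h: np.exp(-h)              # exponential covariance, unit variance
t = np.linspace(0.0, 2.0, 5)
rng = np.random.default_rng(42)
paths = gp_samples(t, cov, n_samples=20000, rng=rng)
emp_cov = np.cov(paths, rowvar=False)   # should approach exp(-|ti - tj|)
```

These paths can then serve as random inputs or boundary conditions to a deterministic simulation code, which is exactly the Monte Carlo workflow the report outlines.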

  9. Computational Modeling of Simulation Tests.

    Science.gov (United States)

    1980-06-01

    Computational Modeling of Simulation Tests. G. Leigh, W. Chown, B. Harrison. Eric H. Wang Civil Engineering Research Facility, University of New Mexico, Albuquerque, June 1980.

  10. Large Scale Model Test Investigation on Wave Run-Up in Irregular Waves at Slender Piles

    DEFF Research Database (Denmark)

    Ramirez, Jorge Robert Rodriguez; Frigaard, Peter; Andersen, Thomas Lykke

    2013-01-01

    from high speed video recordings. Based on the measured run-up heights, different types of prediction formulae for run-up in irregular waves were evaluated. In conclusion, scale effects on run-up levels seem small except for differences in spray. However, run-up of individual waves is difficult

  11. Integrating Visualizations into Modeling NEST Simulations.

    Science.gov (United States)

    Nowke, Christian; Zielasko, Daniel; Weyers, Benjamin; Peyser, Alexander; Hentschel, Bernd; Kuhlen, Torsten W

    2015-01-01

    Modeling large-scale spiking neural networks showing realistic biological behavior in their dynamics is a complex and tedious task. Since these networks consist of millions of interconnected neurons, their simulation produces an immense amount of data. In recent years it has become possible to simulate even larger networks. However, solutions to assist researchers in understanding the simulation's complex emergent behavior by means of visualization are still lacking. While developing tools to partially fill this gap, we encountered the challenge to integrate these tools easily into the neuroscientists' daily workflow. To understand what makes this so challenging, we looked into the workflows of our collaborators and analyzed how they use the visualizations to solve their daily problems. We identified two major issues: first, the analysis process can rapidly change focus which requires to switch the visualization tool that assists in the current problem domain. Second, because of the heterogeneous data that results from simulations, researchers want to relate data to investigate these effectively. Since a monolithic application model, processing and visualizing all data modalities and reflecting all combinations of possible workflows in a holistic way, is most likely impossible to develop and to maintain, a software architecture that offers specialized visualization tools that run simultaneously and can be linked together to reflect the current workflow, is a more feasible approach. To this end, we have developed a software architecture that allows neuroscientists to integrate visualization tools more closely into the modeling tasks. In addition, it forms the basis for semantic linking of different visualizations to reflect the current workflow. In this paper, we present this architecture and substantiate the usefulness of our approach by common use cases we encountered in our collaborative work.

  12. Integrating Visualizations into Modeling NEST Simulations

    Directory of Open Access Journals (Sweden)

    Christian eNowke

    2015-12-01

    Full Text Available Modeling large-scale spiking neural networks showing realistic biological behavior in their dynamics is a complex and tedious task. Since these networks consist of millions of interconnected neurons, their simulation produces an immense amount of data. In recent years it has become possible to simulate even larger networks. However, solutions that assist researchers in understanding a simulation's complex emergent behavior by means of visualization are still lacking. While developing tools to partially fill this gap, we encountered the challenge of integrating these tools easily into the neuroscientists' daily workflow. To understand what makes this so challenging, we looked into the workflows of our collaborators and analyzed how they use visualizations to solve their daily problems. We identified two major issues: first, the analysis process can rapidly change focus, which requires switching to the visualization tool that assists with the current problem domain. Second, because simulations yield heterogeneous data, researchers need to relate different data modalities in order to investigate them effectively. Since a monolithic application that processes and visualizes all data modalities and reflects every combination of possible workflows is most likely impossible to develop and maintain, a more feasible approach is a software architecture that offers specialized visualization tools which run simultaneously and can be linked together to reflect the current workflow. To this end, we have developed a software architecture that allows neuroscientists to integrate visualization tools more closely into the modeling tasks. In addition, it forms the basis for semantically linking different visualizations to reflect the current workflow. In this paper, we present this architecture and substantiate the usefulness of our approach through common use cases we encountered in our collaborative work.
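The semantic linking of independently running visualization tools that the authors describe can be sketched as a small publish/subscribe hub; the class and method names below are purely illustrative, not the actual architecture's API:

```python
class SelectionBus:
    """Minimal publish/subscribe hub: independent visualization tools
    register callbacks and are notified whenever another tool changes
    the current selection (e.g. a neuron or population of interest)."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        """Register a tool's update callback."""
        self._subscribers.append(callback)

    def publish(self, selection, source=None):
        """Broadcast a new selection to all tools except the originator."""
        for cb in self._subscribers:
            if cb is not source:
                cb(selection)
```

Each tool only knows the bus, not the other tools, which is what lets specialized visualizations be composed to match the current workflow instead of building one monolithic application.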

  13. SIMULATION OF COLLECTIVE RISK MODEL

    Directory of Open Access Journals (Sweden)

    Viera Pacáková

    2007-12-01

    Full Text Available The article focuses on providing brief theoretical definitions of the basic terms and methods of modeling and simulating insurance risks in non-life insurance by means of mathematical and statistical methods using statistical software. While the risk assessment of an insurance company in connection with its solvency is a rather complex and comprehensive problem, its solution starts with statistical modeling of the number and amount of individual claims. Successful solution of these fundamental problems enables solving crucial problems of insurance such as modeling and simulation of collective risk, premium and reinsurance premium calculation, estimation of the probability of ruin, etc. The article also presents some essential ideas underlying Monte Carlo methods and their applications to the modeling of insurance risk. The problem to be solved is finding the probability distribution of the collective risk in a non-life insurance portfolio. Simulation of the compound distribution function of the aggregate claim amount can be carried out if the distribution functions of the claim number process and the claim size are assumed given. Monte Carlo simulation is a suitable method to confirm the results of other methods and for the treatment of catastrophic claims, when small collectives are studied. Analysis of insurance risks using risk theory is an important part of the Solvency II project. Risk theory is the analysis of the stochastic features of the non-life insurance process. The field of application of risk theory has grown rapidly, and there is a need to develop the theory into a form suitable for practical purposes and to demonstrate its application. Modern computer simulation techniques open up a wide field of practical applications for risk theory concepts, without requiring restrictive assumptions or sophisticated mathematics. This article presents some comparisons of the traditional actuarial methods and of simulation methods for the collective risk model.
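The compound distribution of the collective risk model S = X1 + ... + XN can be approximated by straightforward Monte Carlo as the abstract describes; a minimal stdlib-only sketch, assuming an illustrative Poisson claim count and exponential claim severities (the article itself does not fix these choices):

```python
import math
import random

def simulate_aggregate_claims(n_sims, lam, severity, seed=42):
    """Monte Carlo simulation of the collective risk model: for each
    scenario, draw a claim count N ~ Poisson(lam), then sum N i.i.d.
    claim sizes drawn by the `severity` sampler."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        # Draw N by inverting the Poisson cumulative distribution.
        n, p = 0, math.exp(-lam)
        cum, u = p, rng.random()
        while u > cum:
            n += 1
            p *= lam / n
            cum += p
        # Aggregate the N individual claim amounts.
        totals.append(sum(severity(rng) for _ in range(n)))
    return totals

# Illustrative portfolio: 3 claims/year on average, mean claim size 1000,
# so the expected aggregate claim is E[N] * E[X] = 3000.
claims = simulate_aggregate_claims(
    20000, lam=3.0, severity=lambda r: r.expovariate(1.0 / 1000.0))
```

The empirical distribution of `claims` approximates the compound distribution from which premiums and ruin probabilities can then be estimated.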

  14. Rasterizing geological models for parallel finite difference simulation using seismic simulation as an example

    Science.gov (United States)

    Zehner, Björn; Hellwig, Olaf; Linke, Maik; Görz, Ines; Buske, Stefan

    2016-01-01

    3D geological underground models are often represented by vector data, such as triangulated networks representing the boundaries of geological bodies and geological structures. If such models are to be used for numerical simulations based on the finite difference method, they have to be converted into a representation that discretizes the full volume of the model into hexahedral cells. Often the simulations require a high grid resolution and are run using parallel computing. Storing such a high-resolution raster model would require a large amount of space, and it is difficult to create such a model using standard geomodelling packages. Since the raster representation is only required for the calculation, not for the geometry description, we present an algorithm and concept for rasterizing geological models on the fly for use in finite difference codes that are parallelized by domain decomposition. As a proof of concept we implemented a rasterizer library and integrated it into seismic simulation software that runs as parallel code on a UNIX cluster using the Message Passing Interface. We can thus run the simulation with realistic and complicated surface-based geological models created using 3D geomodelling software, instead of a simplified representation of the geological subsurface based on mathematical functions or geometric primitives. We tested this set-up using an example model that we provide along with the implemented library.
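On-the-fly rasterization of this kind can be sketched by evaluating the geometry at each cell center of one rank's subdomain instead of ever storing a global raster; the layered depth-function geometry below is a hypothetical stand-in for querying the triangulated horizons the paper uses:

```python
def material_at(x, y, depth, tops):
    """Material index at a point. `tops` are callables z = f(x, y) giving
    the depth of each unit's upper boundary, ordered shallow to deep
    (an illustrative stand-in for intersecting triangulated surfaces)."""
    idx = -1  # -1: above the shallowest modeled unit
    for i, top in enumerate(tops):
        if depth >= top(x, y):
            idx = i
    return idx

def rasterize_subdomain(origin, shape, h, tops):
    """Fill one domain-decomposition subdomain cell by cell, evaluating
    the geometry at cell centers on the fly, so the full-volume raster
    never has to be stored or communicated between ranks."""
    x0, y0, z0 = origin
    nx, ny, nz = shape
    return [[[material_at(x0 + (i + 0.5) * h,
                          y0 + (j + 0.5) * h,
                          z0 + (k + 0.5) * h, tops)
              for k in range(nz)] for j in range(ny)] for i in range(nx)]
```

Each MPI rank would call `rasterize_subdomain` with its own origin and shape, which is what makes the approach compatible with domain decomposition.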

  15. A Performance Comparison of Different Graphics Processing Units Running Direct N-Body Simulations

    CERN Document Server

    Capuzzo-Dolcetta, Roberto

    2013-01-01

    Hybrid computational architectures based on the joint power of Central Processing Units and Graphics Processing Units (GPUs) are becoming popular and powerful hardware tools for a wide range of simulations in biology, chemistry, engineering, physics, etc. In this paper we present a comparison of the performance of various GPUs available on the market when applied to the numerical integration of the classic, gravitational N-body problem. To do this, we developed an OpenCL version of the parallel code (HiGPUs) for these tests, because this is the only version able to work on GPUs of different makes. The main general result is that we confirm the reliability, speed and cheapness of GPUs when applied to the examined kind of problems (i.e. when the forces to evaluate depend on the mutual distances, as happens in gravitational physics and molecular dynamics). More specifically, we find that even the cheap GPUs built for gaming applications are very performant in terms of computing speed...
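The kernel being benchmarked, direct O(N²) summation of pairwise gravitational forces, can be sketched as below; this is a plain serial version of what codes like HiGPUs parallelize over one GPU thread per body, and the softening parameter is a common convention rather than a detail taken from the paper:

```python
def accelerations(pos, mass, eps=1e-3, G=1.0):
    """Direct-summation gravitational accelerations: for each body i,
    sum the contribution of every other body j over the softened
    distance, an O(N^2) double loop."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            # Softening eps avoids the singularity at zero separation.
            r2 = dx[0] ** 2 + dx[1] ** 2 + dx[2] ** 2 + eps ** 2
            inv_r3 = r2 ** -1.5
            for k in range(3):
                acc[i][k] += G * mass[j] * dx[k] * inv_r3
    return acc
```

Because every (i, j) pair is independent, the outer loop maps naturally onto GPU threads, which is why this problem class benchmarks GPUs so well.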

  16. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    This paper describes the modelling, simulation and optimization work, including experimental verification, carried out as part of a Ph.D. project written and supervised by the authors. The work covers the dynamic performance of both water-tube boilers and fire-tube boilers. A detailed dynamic...... model of the boiler has been developed and simulations carried out by means of the Matlab integration routines. The model is prepared as a dynamic model consisting of both ordinary differential equations and algebraic equations, together formulated as a Differential-Algebraic-Equation system. Being able...... to operate a boiler plant dynamically means that the boiler designs must be able to absorb any fluctuations in water level and temperature gradients resulting from the pressure change in the boiler. On the one hand a large water-/steam space may be required, i.e. to build the boiler as big as possible. Due...

  17. Intelligent Mobility Modeling and Simulation

    Science.gov (United States)

    2015-03-04

    Presentation from the U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) by Dr. P. Jayakumar, S. Arepally, and colleagues. Recoverable contents: 1. Mobility - Autonomy - Latency Relationship; 2. Machine - Human Partnership; 3. Development of Shared Control. The presentation references the ACT-R cognitive architecture (cog.cs.drexel.edu/act-r/index.html) for modeling the sensory/motor performance of a human driver or teleoperator. UNCLASSIFIED: Distribution Statement A, approved for public release.

  18. Dilepton constraints in the Inert Doublet Model from Run 1 of the LHC

    CERN Document Server

    Belanger, G; Goudelis, A; Herrmann, B; Kraml, S; Sengupta, D

    2015-01-01

    Searches in final states with two leptons plus missing transverse energy, targeting supersymmetric particles or invisible decays of the Higgs boson, were performed during Run 1 of the LHC. Recasting the results of these analyses in the context of the Inert Doublet Model (IDM) using MadAnalysis 5, we show that they provide constraints on inert scalars that significantly extend previous limits from LEP. Moreover, these LHC constraints allow testing of the IDM in the limit of very small Higgs-inert scalar coupling, where the constraints from direct detection of dark matter and the invisible Higgs width vanish.

  19. Comparison of a priori calibration models for respiratory inductance plethysmography during running.

    Science.gov (United States)

    Leutheuser, Heike; Heyde, Christian; Gollhofer, Albert; Eskofier, Bjoern M

    2014-01-01

    Respiratory inductive plethysmography (RIP) has been introduced as an alternative for measuring ventilation by means of body surface displacement (diameter changes in rib cage and abdomen). Using a posteriori calibration, it has been shown that RIP may provide accurate measurements for ventilatory tidal volume under exercise conditions. Methods for a priori calibration would facilitate the application of RIP. Currently, to the best knowledge of the authors, none of the existing ambulant procedures for RIP calibration can be used a priori for valid subsequent measurements of ventilatory volume under exercise conditions. The purpose of this study is to develop and validate a priori calibration algorithms for ambulant application of RIP data recorded in running exercise. We calculated Volume Motion Coefficients (VMCs) using seven different models on resting data and compared the root mean squared error (RMSE) of each model applied on running data. Least squares approximation (LSQ) without offset of a two-degree-of-freedom model achieved the lowest RMSE value. In this work, we showed that a priori calibration of RIP exercise data is possible using VMCs calculated from 5 min resting phase where RIP and flowmeter measurements were performed simultaneously. The results demonstrate that RIP has the potential for usage in ambulant applications.
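The two-degree-of-freedom calibration without offset that performed best amounts to a small least-squares problem, V ≈ a·RC + b·AB; a sketch solving the 2x2 normal equations directly, with illustrative variable names (rc/ab: rib cage and abdomen excursions):

```python
def fit_vmcs(rc, ab, volume):
    """Least-squares estimate of the volume-motion coefficients (a, b)
    in the offset-free two-degree-of-freedom model V ~ a*RC + b*AB,
    via the normal equations of the 2-parameter linear fit."""
    saa = sum(x * x for x in rc)
    sbb = sum(y * y for y in ab)
    sab = sum(x * y for x, y in zip(rc, ab))
    sav = sum(x * v for x, v in zip(rc, volume))
    sbv = sum(y * v for y, v in zip(ab, volume))
    det = saa * sbb - sab * sab
    a = (sav * sbb - sab * sbv) / det
    b = (saa * sbv - sab * sav) / det
    return a, b
```

In the study's a priori scheme, the coefficients would be fitted once on a 5 min resting phase recorded alongside a flowmeter, then applied unchanged to the running data, with RMSE against the flowmeter quantifying validity.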

  20. Climate simulations for 1880-2003 with GISS modelE

    CERN Document Server

    Hansen, J; Bauer, S; Baum, E; Cairns, B; Canuto, V; Chandler, M; Cheng, Y; Cohen, A; Faluvegi, G; Fleming, E; Friend, A; Genio, A D; Hall, T; Jackman, C; Jonas, J; Kelley, M; Kharecha, P; Kiang, N Y; Koch, D; Labow, G; Lacis, A; Lerner, J; Lo, K; Menon, S; Miller, R; Nazarenko, L; Novakov, T; Oinas, V; Perlwitz, J; Rind, D; Romanou, A; Ruedy, R; Russell, G; Sato, M; Schmidt, G A; Schmunk, R; Shindell, D; Stone, P; Streets, D; Sun, S; Tausnev, N; Thresher, D; Unger, N; Yao, M; Zhang, S; Perlwitz, Ja.; Perlwitz, Ju.

    2006-01-01

    We carry out climate simulations for 1880-2003 with GISS modelE driven by ten measured or estimated climate forcings. An ensemble of climate model runs is carried out for each forcing acting individually and for all forcing mechanisms acting together. We compare side-by-side simulated climate change for each forcing, all forcings, observations, unforced variability among model ensemble members, and, if available, observed variability. Discrepancies between observations and simulations with all forcings are due to model deficiencies, inaccurate or incomplete forcings, and imperfect observations. Although there are notable discrepancies between model and observations, the fidelity is sufficient to encourage use of the model for simulations of future climate change. By using a fixed well-documented model and accurately defining the 1880-2003 forcings, we aim to provide a benchmark against which the effect of improvements in the model, climate forcings, and observations can be tested. Principal model deficiencies...

  1. Hubble expansion and structure formation in the "running FLRW model" of the cosmic evolution

    CERN Document Server

    Grande, Javier; Basilakos, Spyros; Plionis, Manolis

    2011-01-01

    A new class of FLRW cosmological models with time-evolving fundamental parameters should emerge naturally from a description of the expansion of the universe based on the first principles of quantum field theory and string theory. Within this general paradigm, one expects that both the gravitational Newton's coupling, G, and the cosmological term, Lambda, should not be strictly constant but appear rather as smooth functions of the Hubble rate. This scenario ("running FLRW model") predicts, in a natural way, the existence of dynamical dark energy without invoking the participation of extraneous scalar fields. In this paper, we perform a detailed study of these models in the light of the latest cosmological data, which serves to illustrate the phenomenological viability of the new dark energy paradigm as a serious alternative to the traditional scalar field approaches. By performing a joint likelihood analysis of the recent SNIa data, the CMB shift parameter, and the BAOs traced by the Sloan Digital Sky Survey,...

  2. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    DEFF Research Database (Denmark)

    Sørensen, Kim; Condra, Thomas Joseph; Houbak, Niels

    2004-01-01

    on the boiler) have been defined. Furthermore a number of constraints related to: minimum and maximum boiler load gradient, minimum boiler size, Shrinking and Swelling and Steam Space Load have been defined. For defining the constraints related to the required boiler volume a dynamic model for simulating the boiler...... performance has been developed. Outputs from the simulations are shrinking and swelling of the water level in the drum during for example a start-up of the boiler; these figures combined with the requirements with respect to allowable water level fluctuations in the drum define the requirements with respect to drum...... size. The model has been formulated with a specified building-up of the pressure during the start-up of the plant, i.e. the steam production during start-up of the boiler is output from the model. The steam outputs together with requirements with respect to steam space load have been utilized to define...

  3. Modeling and Simulation of Nanoindentation

    Science.gov (United States)

    Huang, Sixie; Zhou, Caizhi

    2017-08-01

    Nanoindentation is a hardness test method applied to small volumes of material which can provide some unique effects and spark many related research activities. To fully understand the phenomena observed during nanoindentation tests, modeling and simulation methods have been developed to predict the mechanical response of materials during nanoindentation. However, challenges remain with those computational approaches, because of their length scale, predictive capability, and accuracy. This article reviews recent progress and challenges for modeling and simulation of nanoindentation, including an overview of molecular dynamics, the quasicontinuum method, discrete dislocation dynamics, and the crystal plasticity finite element method, and discusses how to integrate multiscale modeling approaches seamlessly with experimental studies to understand the length-scale effects and microstructure evolution during nanoindentation tests, creating a unique opportunity to establish new calibration procedures for the nanoindentation technique.

  4. Multiscale Stochastic Simulation and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    James Glimm; Xiaolin Li

    2006-01-10

    Acceleration-driven instabilities of fluid mixing layers include the classical cases of Rayleigh-Taylor instability, driven by a steady acceleration, and Richtmyer-Meshkov instability, driven by an impulsive acceleration. Our program starts with high resolution methods of numerical simulation of two (or more) distinct fluids, continues with analytic analysis of these solutions, and the derivation of averaged equations. A striking achievement has been the systematic agreement we obtained between simulation and experiment by using a high resolution numerical method and improved physical modeling, with surface tension. Our study is accompanied by analysis using stochastic modeling and averaged equations for the multiphase problem. We have quantified the error and uncertainty using statistical modeling methods.

  5. Assessment of Molecular Modeling & Simulation

    Energy Technology Data Exchange (ETDEWEB)

    None

    2002-01-03

    This report reviews the development and applications of molecular and materials modeling in Europe and Japan in comparison to those in the United States. Topics covered include computational quantum chemistry, molecular simulations by molecular dynamics and Monte Carlo methods, mesoscale modeling of material domains, molecular-structure/macroscale property correlations like QSARs and QSPRs, and related information technologies like informatics and special-purpose molecular-modeling computers. The panel's findings include the following: The United States leads this field in many scientific areas. However, Canada has particular strengths in DFT methods and homogeneous catalysis; Europe in heterogeneous catalysis, mesoscale, and materials modeling; and Japan in materials modeling and special-purpose computing. Major government-industry initiatives are underway in Europe and Japan, notably in multi-scale materials modeling and in development of chemistry-capable ab-initio molecular dynamics codes.

  6. Animal models for simulating weightlessness

    Science.gov (United States)

    Morey-Holton, E.; Wronski, T. J.

    1982-01-01

    NASA has developed a rat model to simulate on earth some aspects of the weightlessness alterations experienced in space, i.e., unloading and fluid shifts. Comparison of data collected from space flight and from the head-down rat suspension model suggests that this model system reproduces many of the physiological alterations induced by space flight. Data from various versions of the rat model are virtually identical for the same parameters; thus, modifications of the model for acute, chronic, or metabolic studies do not alter the results as long as the critical components of the model are maintained, i.e., a cephalad shift of fluids and/or unloading of the rear limbs.

  7. Parameterization of rockfall source areas and magnitudes with ecological recorders: When disturbances in trees serve the calibration and validation of simulation runs

    Science.gov (United States)

    Corona, Christophe; Trappmann, Daniel; Stoffel, Markus

    2013-11-01

    On forested talus slopes that have been built up by rockfall, a strong interaction exists between the trees and the falling rocks. While the presence and density of vegetation have a profound influence on rockfall activity, the occurrence of the latter also exerts control on the presence, vitality, species composition, and age distribution of forest stands. This paper exploits the interactions between biotic (tree growth) and abiotic (rockfall) processes in a mountain forest to obtain reliable input data on rockfall for the 3D process-based simulation model RockyFor3D. We demonstrate that differences between the simulated and observed numbers of tree impacts can be minimized through (i) a careful definition of active source areas and (ii) a weighted distribution of block sizes as observed in the field. As a result of this field-based, optimized configuration, highly significant values can be obtained with RockyFor3D for the number of impacts per tree, so that results of the model runs can be converted with a high degree of certainty into real frequencies. The combination of field-based dendrogeomorphic and modeling approaches is seen as a significant advance for hazard mapping, as it allows a reliable and highly resolved spatial characterization of rockfall frequencies and a realistic representation of (past) rockfall dynamics at the slope scale.

  8. Changes in spring-mass model characteristics during repeated running sprints.

    Science.gov (United States)

    Girard, Olivier; Micallef, Jean-Paul; Millet, Grégoire P

    2011-01-01

    This study investigated fatigue-induced changes in spring-mass model characteristics during repeated running sprints. Sixteen active subjects performed 12 × 40 m sprints interspersed with 30 s of passive recovery. Vertical and anterior-posterior ground reaction forces were measured at 5-10 m and 30-35 m and used to determine spring-mass model characteristics. Contact (P Stride frequency (P  0.05) increased with time. As a result, vertical stiffness decreased (P  0.05). Changes in vertical stiffness were correlated (r > 0.7; P stride frequency. When compared to 5-10 m, most of ground reaction force-related parameters were higher (P stride frequency, vertical and leg stiffness were lower (P run-based sprints are repeated, which alters impact parameters. Maintaining faster stride frequencies through retaining higher vertical stiffness is a prerequisite to improve performance during repeated sprinting.
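The spring-mass quantities discussed above follow from standard definitions in running biomechanics; a minimal sketch of those textbook formulas (the study's exact computation from ground reaction force data may differ):

```python
def spring_mass(f_max, delta_y, leg_compression, t_contact, t_flight):
    """Standard spring-mass model quantities for running analysis:
    vertical stiffness K_vert = F_max / dy (peak vertical ground
    reaction force over center-of-mass vertical displacement), leg
    stiffness K_leg = F_max / dL (over leg-spring compression), and
    stride frequency as the inverse of contact plus flight time."""
    k_vert = f_max / delta_y
    k_leg = f_max / leg_compression
    stride_freq = 1.0 / (t_contact + t_flight)
    return k_vert, k_leg, stride_freq
```

With these definitions, the reported fatigue pattern (longer contact times, lower stride frequency) translates directly into falling vertical stiffness across the 12 sprints.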

  9. EnergyPlus Run Time Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective, as well as guidance on adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.

  10. Simulation Tool for Inventory Models: SIMIN

    OpenAIRE

    Pratiksha Saxen; Tulsi Kushwaha

    2014-01-01

    In this paper, an integrated simulation-optimization model for the inventory system is developed. An effective algorithm is developed to evaluate and analyze the simulation results stored in the back end. This paper proposes the simulation tool SIMIN (Inventory Simulation) to simulate inventory models. SIMIN is a tool which simulates and compares the results of different inventory models. To overcome various practical restrictive assumptions, SIMIN provides values for a number of performance measurement...

  11. Run off-on-out method and models for soil infiltrability on hill-slope under rainfall conditions

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The soil infiltrability of hill-slopes is important to studies and practices such as hydrological processes, crop water supply, irrigation practices, and soil erosion. A new run off-on-out method for measuring soil infiltrability on hill-slopes under rainfall conditions is presented. Based on the water (mass) balance, mathematical models were derived for estimating soil infiltrability from the advance of runoff on the soil surface and from the water running out of the slope. Experiments of two cases were conducted. Case I was done under a rainfall intensity of 20 mm/h, at a slope gradient of about 0° with a runoff/on length (area) ratio of 1:1. Case II was under a rainfall intensity of 60 mm/h and a slope of 20° with a runoff/on length (area) ratio of 1:1. The double ring method was also used to measure infiltrability for comparison purposes. The experiments were done with a soil moisture of 10%, and the required data were collected from laboratory experiments. The infiltrability curves were computed from the experimental data. The results indicate that the method can conceptually represent the transient infiltrability process well, with the capability to simulate the very high initial soil infiltrability. The rationality of the method and the models was validated. The errors of the method for the two cases were 1.82%/1.39% and 4.49%/3.529% (Experimental/Model) respectively, as estimated by comparing the rainfall amount with the infiltrated volume, demonstrating the accuracy of the method. The transient and steady infiltrability measured with the double ring method was much lower than that measured with the new method, due to the water supply limit and the breakdown of soil aggregates at the initial infiltration stage. The method overcomes the shortcomings of the traditional sprinkler method and the double ring method for soil infiltrability, and it can be used to measure the infiltrability of sloped surfaces under rainfall-runoff-erosion conditions in related studies.
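The water (mass) balance underlying the run off-on-out method can be sketched as a one-line budget: whatever enters the measuring area as rainfall plus run-on and does not leave as run-off must have infiltrated. Units and variable names below are illustrative assumptions, not the paper's notation:

```python
def infiltrability(rain_intensity_mm_h, runon_l_h, runoff_l_h, area_m2):
    """Transient infiltrability from the plot's water balance.
    Inputs: rainfall intensity in mm/h, run-on and run-off discharges
    in L/h, infiltrating area in m^2. Since 1 L spread over 1 m^2 is a
    1 mm water depth, (L/h)/m^2 converts directly to mm/h."""
    return rain_intensity_mm_h + (runon_l_h - runoff_l_h) / area_m2
```

Sampling the run-on and run-off discharges through time yields the transient infiltrability curve, including the very high initial values the double ring method cannot capture because of its water supply limit.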

  12. Statistical 3D damage accumulation model for ion implant simulators

    CERN Document Server

    Hernandez-Mangas, J M; Enriquez, L E; Bailon, L; Barbolla, J; Jaraiz, M

    2003-01-01

    A statistical 3D damage accumulation model, based on the modified Kinchin-Pease formula, for ion implant simulation has been included in our physically based ion implantation code. It has only one fitting parameter for electronic stopping and uses 3D electron density distributions for different types of targets including compound semiconductors. Also, a statistical noise reduction mechanism based on the dose division is used. The model has been adapted to be run under parallel execution in order to speed up the calculation in 3D structures. Sequential ion implantation has been modelled including previous damage profiles. It can also simulate the implantation of molecular and cluster projectiles. Comparisons of simulated doping profiles with experimental SIMS profiles are presented. Also comparisons between simulated amorphization and experimental RBS profiles are shown. An analysis of sequential versus parallel processing is provided.
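The modified Kinchin-Pease formula on which the damage model is based can be sketched in its standard (NRT) form; the displacement threshold value used here is illustrative, not the paper's fitted parameter:

```python
def frenkel_pairs(damage_energy_ev, e_d_ev=15.0):
    """Modified Kinchin-Pease (NRT) estimate of the Frenkel pairs
    created by a recoil depositing damage energy T: zero below the
    displacement threshold E_d, one up to 2.5*E_d, and 0.8*T/(2*E_d)
    in the linear cascade regime above that."""
    if damage_energy_ev < e_d_ev:
        return 0.0
    if damage_energy_ev < 2.5 * e_d_ev:
        return 1.0
    return 0.8 * damage_energy_ev / (2.0 * e_d_ev)
```

In a statistical implant simulator, this per-recoil estimate is accumulated over many ion histories to build up the 3D damage profile.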

  13. Statistical 3D damage accumulation model for ion implant simulators

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez-Mangas, J.M. E-mail: jesman@ele.uva.es; Lazaro, J.; Enriquez, L.; Bailon, L.; Barbolla, J.; Jaraiz, M

    2003-04-01

    A statistical 3D damage accumulation model, based on the modified Kinchin-Pease formula, for ion implant simulation has been included in our physically based ion implantation code. It has only one fitting parameter for electronic stopping and uses 3D electron density distributions for different types of targets including compound semiconductors. Also, a statistical noise reduction mechanism based on the dose division is used. The model has been adapted to be run under parallel execution in order to speed up the calculation in 3D structures. Sequential ion implantation has been modelled including previous damage profiles. It can also simulate the implantation of molecular and cluster projectiles. Comparisons of simulated doping profiles with experimental SIMS profiles are presented. Also comparisons between simulated amorphization and experimental RBS profiles are shown. An analysis of sequential versus parallel processing is provided.

  14. A Network Contention Model for the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2015-01-01

    The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads, while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim that eliminates the shortcomings of the existing network modeling capabilities. The approach takes a different path for implementing network contention and bandwidth capacity modeling, using a less synchronous, yet sufficiently accurate, model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overheads.
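A deliberately toy version of latency, bandwidth, and contention modeling, in the spirit of (but not identical to) the xSim feature described, could look like:

```python
def message_time(size_bytes, latency_s, bandwidth_bps, concurrent_msgs):
    """Toy link model: a message pays a fixed latency plus a
    serialization time, and concurrently active messages share the
    link bandwidth equally, so contention stretches transfer times."""
    effective_bw = bandwidth_bps / max(1, concurrent_msgs)
    return latency_s + size_bytes / effective_bw
```

A simulator applies such a cost model to every message event instead of simulating packets, which is what keeps overheads low enough to model millions of threads.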

  15. Effects of intermediate scales on renormalization group running of fermion observables in an SO(10) model

    CERN Document Server

    Meloni, Davide; Riad, Stella

    2014-01-01

    In the context of non-supersymmetric SO(10) models, we analyze the renormalization group equations for the fermions (including neutrinos) from the GUT energy scale down to the electroweak energy scale, explicitly taking into account the effects of an intermediate energy scale induced by a Pati--Salam gauge group. To determine the renormalization group running, we use a numerical minimization procedure based on a nested sampling algorithm that randomly generates the values of 19 model parameters at the GUT scale, evolves them, and finally constructs the values of the physical observables and compares them to the existing experimental data at the electroweak scale. We show that the evolved fermion masses and mixings present sizable deviations from the values obtained without including the effects of the intermediate scale.

  16. Effects of intermediate scales on renormalization group running of fermion observables in an SO(10) model

    Science.gov (United States)

    Meloni, Davide; Ohlsson, Tommy; Riad, Stella

    2014-12-01

    In the context of non-supersymmetric SO(10) models, we analyze the renormalization group equations for the fermions (including neutrinos) from the GUT energy scale down to the electroweak energy scale, explicitly taking into account the effects of an intermediate energy scale induced by a Pati-Salam gauge group. To determine the renormalization group running, we use a numerical minimization procedure based on a nested sampling algorithm that randomly generates the values of 19 model parameters at the GUT scale, evolves them, and finally constructs the values of the physical observables and compares them to the existing experimental data at the electroweak scale. We show that the evolved fermion masses and mixings present sizable deviations from the values obtained without including the effects of the intermediate scale.
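At its core, the renormalization group evolution between scales is numerical integration of the beta functions; a one-coupling, one-loop sketch with a generic beta coefficient (the actual SO(10)/Pati-Salam analysis evolves many coupled parameters at higher loop order, with stage-specific coefficients at the intermediate scale):

```python
import math

def run_coupling(g_high, b, mu_high, mu_low, steps=1000):
    """One-loop renormalization group running dg/dln(mu) =
    b * g^3 / (16 * pi^2), integrated with classical RK4 from a high
    scale mu_high down to mu_low. `b` is a generic one-loop beta
    coefficient, not a value from the paper."""
    t0, t1 = math.log(mu_high), math.log(mu_low)
    h = (t1 - t0) / steps
    beta = lambda g: b * g ** 3 / (16.0 * math.pi ** 2)
    g = g_high
    for _ in range(steps):
        k1 = beta(g)
        k2 = beta(g + 0.5 * h * k1)
        k3 = beta(g + 0.5 * h * k2)
        k4 = beta(g + h * k3)
        g += h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return g
```

In a scan like the one described, each randomly generated GUT-scale parameter point would be evolved this way down to the electroweak scale before comparing the resulting observables with data.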

  17. Minkowski space pion model inspired by lattice QCD running quark mass

    Science.gov (United States)

    Mello, Clayton S.; de Melo, J. P. B. C.; Frederico, T.

    2017-03-01

    The pion structure in Minkowski space is described in terms of an analytic model of the Bethe-Salpeter amplitude combined with Euclidean Lattice QCD results. The model is physically motivated to take into account the running quark mass, which is fitted to Lattice QCD data. The pion pseudoscalar vertex is associated to the quark mass function, as dictated by dynamical chiral symmetry breaking requirements in the limit of vanishing current quark mass. The quark propagator is analyzed in terms of a spectral representation, and it shows a violation of the positivity constraints. The integral representation of the pion Bethe-Salpeter amplitude is also built. The pion space-like electromagnetic form factor is calculated with a quark electromagnetic current, which satisfies the Ward-Takahashi identity to ensure current conservation. The results for the form factor and weak decay constant are found to be consistent with the experimental data.

  18. Standard for Models and Simulations

    Science.gov (United States)

    Steele, Martin J.

    2016-01-01

    This NASA Technical Standard establishes uniform practices in modeling and simulation to ensure essential requirements are applied to the design, development, and use of models and simulations (MS), while ensuring acceptance criteria are defined by the program/project and approved by the responsible Technical Authority. It also provides an approved set of requirements, recommendations, and criteria with which MS may be developed, accepted, and used in support of NASA activities. As the MS disciplines employed and application areas involved are broad, the common aspects of MS across all NASA activities are addressed. The discipline-specific details of a given MS should be obtained from relevant recommended practices. The primary purpose is to reduce the risks associated with MS-influenced decisions by ensuring the complete communication of the credibility of MS results.

  19. Classically conformal U(1)' extended standard model, electroweak vacuum stability, and LHC Run-2 bounds

    Science.gov (United States)

    Das, Arindam; Oda, Satsuki; Okada, Nobuchika; Takahashi, Dai-suke

    2016-06-01

    We consider the minimal U(1)' extension of the standard model (SM) with the classically conformal invariance, where an anomaly-free U(1)' gauge symmetry is introduced along with three generations of right-handed neutrinos and a U(1)' Higgs field. Since the classically conformal symmetry forbids all dimensional parameters in the model, the U(1)' gauge symmetry is broken by the Coleman-Weinberg mechanism, generating the mass terms of the U(1)' gauge boson (Z' boson) and the right-handed neutrinos. Through a mixing quartic coupling between the U(1)' Higgs field and the SM Higgs doublet field, the radiative U(1)' gauge symmetry breaking also triggers the breaking of the electroweak symmetry. In this model context, we first investigate the electroweak vacuum instability problem in the SM. Employing the renormalization group equations at the two-loop level and the central values for the world average masses of the top quark (mt=173.34 GeV) and the Higgs boson (mh=125.09 GeV), we perform parameter scans to identify the parameter region for resolving the electroweak vacuum instability problem. Next we interpret the recent ATLAS and CMS search limits at the LHC Run-2 for the sequential Z' boson to constrain the parameter region in our model. Combining the constraints from the electroweak vacuum stability and the LHC Run-2 results, we find a bound on the Z' boson mass as mZ'≳3.5 TeV. We also calculate self-energy corrections to the SM Higgs doublet field through the heavy states, the right-handed neutrinos and the Z' boson, and find the naturalness bound as mZ'≲7 TeV, in order to reproduce the right electroweak scale for a fine-tuning level better than 10%. The resultant mass range of 3.5 TeV≲mZ'≲7 TeV will be explored at the LHC Run-2 in the near future.

  20. Model for Simulation Atmospheric Turbulence

    DEFF Research Database (Denmark)

    Lundtang Petersen, Erik

    1976-01-01

    A method that produces realistic simulations of atmospheric turbulence is developed and analyzed. The procedure makes use of a generalized spectral analysis, often called a proper orthogonal decomposition or the Karhunen-Loève expansion. A set of criteria, emphasizing a realistic appearance, a correct spectral shape, and non-Gaussian statistics, is selected in order to evaluate the model turbulence. An actual turbulence record is analyzed in detail, providing both a standard for comparison and input statistics for the generalized spectral analysis, which in turn produces a set of orthonormal functions. The method is unique in modeling the three velocity components simultaneously, and it is found that important cross-statistical features are reasonably well behaved. It is concluded that the model provides a practical, operational simulator of atmospheric turbulence.
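    The Karhunen-Loève idea can be sketched in miniature: estimate a covariance from a record, decompose it into orthonormal modes, and synthesize new realizations by driving each mode with an independent coefficient whose variance equals the mode's eigenvalue. This toy uses two correlated "velocity components" (the paper handles full three-component fields); the record and all parameters are invented for illustration.

```python
import math
import random

def eig2(cov):
    """Closed-form eigen-decomposition of a symmetric 2x2 covariance matrix,
    returned as [(eigenvalue, unit eigenvector), ...], largest first."""
    a, b, c = cov[0][0], cov[0][1], cov[1][1]
    half_tr = (a + c) / 2
    disc = math.sqrt(max(half_tr ** 2 - (a * c - b * b), 0.0))
    lam1, lam2 = half_tr + disc, half_tr - disc
    v1 = (lam1 - c, b) if abs(b) > 1e-12 else ((1.0, 0.0) if a >= c else (0.0, 1.0))
    norm = math.hypot(*v1)
    v1 = (v1[0] / norm, v1[1] / norm)
    return [(lam1, v1), (lam2, (-v1[1], v1[0]))]

random.seed(1)
# Surrogate zero-mean "measured record": two correlated velocity components.
record = [(z1 + 0.6 * z2, z2)
          for z1, z2 in ((random.gauss(0, 1), random.gauss(0, 1))
                         for _ in range(20000))]
n = len(record)
cov = [[sum(r[i] * r[j] for r in record) / n for j in (0, 1)] for i in (0, 1)]
modes = eig2(cov)

# Karhunen-Loeve synthesis: independent coefficients with variances equal to
# the eigenvalues reproduce the second-order statistics of the record.
synth = []
for _ in range(20000):
    a1 = random.gauss(0, math.sqrt(modes[0][0]))
    a2 = random.gauss(0, math.sqrt(max(modes[1][0], 0.0)))
    synth.append((a1 * modes[0][1][0] + a2 * modes[1][1][0],
                  a1 * modes[0][1][1] + a2 * modes[1][1][1]))
synth_cov = [[sum(s[i] * s[j] for s in synth) / n for j in (0, 1)] for i in (0, 1)]
```

    The synthesized ensemble recovers the covariance of the input record, which is the core guarantee of the expansion; the paper's non-Gaussian refinements go beyond this second-order sketch.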

  1. A description of the FAMOUS (version XDBUA) climate model and control run

    Directory of Open Access Journals (Sweden)

    A. Osprey

    2008-12-01

    FAMOUS is an ocean-atmosphere general circulation model of low resolution, capable of simulating approximately 120 years of model climate per wallclock day using current high performance computing facilities. It uses most of the same code as HadCM3, a widely used climate model of higher resolution and computational cost, and has been tuned to reproduce the same climate reasonably well. FAMOUS is useful for climate simulations where the computational cost makes the application of HadCM3 unfeasible, either because of the length of simulation or the size of the ensemble desired. We document a number of scientific and technical improvements to the original version of FAMOUS. These improvements include changes to the parameterisations of ozone and sea-ice which alleviate a significant cold bias from high northern latitudes and the upper troposphere, and the elimination of volume-averaged drifts in ocean tracers. A simple model of the marine carbon cycle has also been included. A particular goal of FAMOUS is to conduct millennial-scale paleoclimate simulations of Quaternary ice ages; to this end, a number of useful changes to the model infrastructure have been made.

  2. Run-based multi-model interannual variability assessment of precipitation and temperature over Pakistan using two IPCC AR4-based AOGCMs

    Science.gov (United States)

    Asmat, U.; Athar, H.

    2017-01-01

    The interannual variability of precipitation and temperature is derived from all runs of the Intergovernmental Panel on Climate Change (IPCC) fourth Assessment Report (AR4)-based two Atmospheric Oceanic General Circulation Model (AOGCM) simulations, over Pakistan, on an annual basis. The models are the CM2.0 and CM2.1 versions of the Geophysical Fluid Dynamics Laboratory (GFDL)-based AOGCM. Simulations for a recent 22-year period (1979-2000) are validated using Climate Research Unit (CRU) and NCEP/NCAR datasets over Pakistan, for the first time. The study area of Pakistan is divided into three regions: all Pakistan, northern Pakistan, and southern Pakistan. Bias, root mean square error, one-sigma standard deviation, and coefficient of variance are used as validation metrics. For all Pakistan and northern Pakistan, all three runs of GFDL-CM2.0 perform better under the above metrics, both for precipitation and temperature (except for one-sigma standard deviation and coefficient of variance), whereas for southern Pakistan, the third run of GFDL-CM2.1 performs better except for the root mean square error for temperature. A mean- and variance-based bias correction is applied to the modeled precipitation and temperature variables. This resulted in a reduced bias, except for the months of June, July, and August, when the reduction in bias is relatively lower.
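    A mean- and variance-based bias correction of the kind applied above can be sketched in a few lines: shift and rescale the modeled series so its mean and standard deviation match the observed climatology. The sample values below are invented for illustration, not taken from the study.

```python
import statistics as st

def bias_correct(model_series, obs_mean, obs_std):
    """Mean-and-variance bias correction: shift and rescale the modeled
    series so its mean and standard deviation match observations."""
    m_mean = st.mean(model_series)
    m_std = st.pstdev(model_series)
    scale = obs_std / m_std if m_std > 0 else 1.0
    return [obs_mean + (x - m_mean) * scale for x in model_series]

# Hypothetical monthly precipitation (mm/day): the model runs too wet
# and too variable relative to the observed climatology.
model = [4.1, 5.0, 6.2, 3.8, 7.1, 5.5]
corrected = bias_correct(model, obs_mean=3.0, obs_std=0.8)
```

    By construction the corrected series has exactly the observed mean and standard deviation; residual seasonal biases (as the abstract notes for June-August) arise when a single correction is applied across months with different error structure.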

  3. Computer Simulation Study of Human Locomotion with a Three-Dimensional Entire-Body Neuro-Musculo-Skeletal Model

    Science.gov (United States)

    Hase, Kazunori; Yokoi, Takashi

    In the present study, a computer simulation technique to autonomously generate running motion from walking was developed using a three-dimensional entire-body neuro-musculo-skeletal model. When maximizing locomotive speed was employed as the evaluative criterion, the initial walking pattern could not transition to a valid running motion. When minimizing the period of foot-ground contact was added to this evaluative criterion, the simulation model autonomously produced appropriate three-dimensional running. Changes in the neuronal system showed that the fatigue coefficient of the neural oscillators decreased as locomotion patterns transitioned from walking to running. Then, when the running speed increased, the amplitude of the non-specific stimulus from the higher center increased. These two changes indicate that improved responsiveness of the neuronal system is important for the transition from walking to running, and that the comprehensive activation level of the neuronal system is essential in the process of increasing running speed.

  4. Modeling and simulation of LHC beam-based collimator setup

    CERN Document Server

    Valentino, G; Assmann, R W; Burkart, F; Redaelli, S; Rossi, A; Lari, L

    2012-01-01

    In the 2011 Large Hadron Collider run, collimators were aligned for proton and heavy ion beams using a semiautomatic setup algorithm. The algorithm provided a reduction in the beam time required for setup, an elimination of beam dumps during setup and better reproducibility with respect to manual alignment. A collimator setup simulator was developed based on a Gaussian model of the beam distribution as well as a parametric model of the beam losses. A time-varying beam loss signal can be simulated for a given collimator movement into the beam. The simulation results and comparison to measurement data obtained during collimator setups and dedicated fills for beam halo scraping are presented. The simulator will then be used to develop a fully automatic collimator alignment algorithm.
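    The Gaussian beam-distribution part of such a simulator can be sketched directly: with a Gaussian transverse halo, the fraction of the beam beyond a jaw at a given number of beam sigmas is a normal tail integral, and stepping the jaw inward produces a loss spike proportional to the newly intercepted fraction. The step sizes and scan range below are illustrative assumptions, not the LHC setup parameters.

```python
import math

def halo_fraction(jaw_sigma):
    """Fraction of a Gaussian transverse distribution beyond the jaw,
    i.e. the one-sided tail integral at `jaw_sigma` beam sigmas."""
    return 0.5 * math.erfc(jaw_sigma / math.sqrt(2))

# Setup scan: the jaw steps inward from 6 to 3 sigma; each step scrapes the
# halo beyond the new position, producing a loss spike proportional to the
# newly intercepted fraction of the beam.
positions = [6.0 - 0.2 * k for k in range(16)]
spikes, intercepted = [], halo_fraction(positions[0])
for p in positions[1:]:
    total = halo_fraction(p)
    spikes.append(total - intercepted)   # newly scraped particles this step
    intercepted = total
```

    The spikes grow monotonically as the jaw approaches the beam core, which is the signature an alignment algorithm looks for; a full simulator would convolve this with the time-varying loss response described in the abstract.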

  5. Advances in Intelligent Modelling and Simulation Simulation Tools and Applications

    CERN Document Server

    Oplatková, Zuzana; Carvalho, Marco; Kisiel-Dorohinicki, Marek

    2012-01-01

    The human capacity to abstract complex systems and phenomena into simplified models has played a critical role in the rapid evolution of our modern industrial processes and scientific research. As a science and an art, Modelling and Simulation have been one of the core enablers of this remarkable human trace, and have become a topic of great importance for researchers and practitioners. This book was created to compile some of the most recent concepts, advances, challenges and ideas associated with Intelligent Modelling and Simulation frameworks, tools and applications. The first chapter discusses the important aspects of a human interaction and the correct interpretation of results during simulations. The second chapter gets to the heart of the analysis of entrepreneurship by means of agent-based modelling and simulations. The following three chapters bring together the central theme of simulation frameworks, first describing an agent-based simulation framework, then a simulator for electrical machines, and...

  6. Verifying and Validating Simulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state-of-knowledge in the discipline considered. In particular, comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack-of-knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.

  7. Prosthetic model, but not stiffness or height, affects the metabolic cost of running for athletes with unilateral transtibial amputations.

    Science.gov (United States)

    Beck, Owen N; Taboga, Paolo; Grabowski, Alena M

    2017-07-01

    Running-specific prostheses enable athletes with lower limb amputations to run by emulating the spring-like function of biological legs. Current prosthetic stiffness and height recommendations aim to mitigate kinematic asymmetries for athletes with unilateral transtibial amputations. However, it is unclear how different prosthetic configurations influence the biomechanics and metabolic cost of running. Consequently, we investigated how prosthetic model, stiffness, and height affect the biomechanics and metabolic cost of running. Ten athletes with unilateral transtibial amputations each performed 15 running trials at 2.5 or 3.0 m/s while we measured ground reaction forces and metabolic rates. Athletes ran using three different prosthetic models with five different stiffness category and height combinations per model. Use of an Ottobock 1E90 Sprinter prosthesis reduced metabolic cost by 4.3 and 3.4% compared with use of the Freedom Innovations Catapult [fixed effect (β) = -0.177; P < …] and … prostheses, respectively. Lower metabolic cost was associated with lower vertical ground reaction forces, prolonged ground contact times (β = -4.349; P = 0.012), and decreased leg stiffness (β = 0.071; P < …) … vertical ground reaction forces (β = 0.007; P = 0.003) but was unrelated to stride kinematic symmetry (P ≥ 0.636). Therefore, prosthetic recommendations based on symmetric stride kinematics do not necessarily minimize the metabolic cost of running. Instead, an optimal prosthetic model, which improves overall biomechanics, minimizes the metabolic cost of running for athletes with unilateral transtibial amputations. NEW & NOTEWORTHY: The metabolic cost of running for athletes with unilateral transtibial amputations depends on prosthetic model and is associated with lower peak and stance-average vertical ground reaction forces, longer contact times, and reduced leg stiffness. Metabolic cost is unrelated to prosthetic stiffness, height, and stride kinematic symmetry. Unlike non-amputees, who decrease leg stiffness with increased in-series surface stiffness, biological limb stiffness for athletes with unilateral …

  8. Fractal model for simulation of frost formation and growth

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    A planar fractal model for simulation of frost formation and growth is proposed based on the diffusion-limited aggregation (DLA) model, and the computational simulation is carried out in this paper. By changing the number of program running cycles and the ratio of random particles generated, simulation figures were obtained under different conditions. A microscope was used to observe the shape and structure of the frost layer, and a high-resolution digital camera was used to record the pattern of the frost layer at different times. Comparison of the simulation figures with the experimental images shows that the simulation results agree well with the experimental images in shape, and that the fractal dimension of the simulation figures is nearly equal to that of the experimental images. The results indicate that it is reasonable to represent frost-layer growth time by the number of program cycles, and to simulate the variation of frost-layer density during growth by reducing the random particle generation probability. The feasibility of using the suggested model to simulate the process of frost formation and growth is thus justified. The insufficiencies of this fractal model, and their causes, are also discussed.
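    The DLA mechanism underlying such a frost model can be sketched in a few dozen lines: a seed site represents the initial frost nucleus, and random walkers released on a surrounding ring stick when they touch the growing cluster. The lattice size, walker count, and launch geometry below are illustrative choices, not the paper's parameters.

```python
import math
import random

def dla_frost(n_particles=60, seed=7):
    """Minimal on-lattice DLA: walkers launched on a ring random-walk until
    they become adjacent to the cluster and stick; the seed site stands in
    for the initial frost nucleus on the cold surface."""
    random.seed(seed)
    grid = {(0, 0)}
    r_max = 1
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(n_particles):
        r_launch = r_max + 5
        theta = random.uniform(0, 2 * math.pi)
        x, y = int(r_launch * math.cos(theta)), int(r_launch * math.sin(theta))
        while True:
            if x * x + y * y > (r_launch + 20) ** 2:   # wandered off: relaunch
                theta = random.uniform(0, 2 * math.pi)
                x = int(r_launch * math.cos(theta))
                y = int(r_launch * math.sin(theta))
            if any((x + dx, y + dy) in grid for dx, dy in steps):
                grid.add((x, y))                       # stick to the cluster
                r_max = max(r_max, int(math.hypot(x, y)) + 1)
                break
            dx, dy = random.choice(steps)
            x, y = x + dx, y + dy
    return grid

cluster = dla_frost()
```

    Reducing the sticking (or generation) probability, as the abstract describes, would let walkers penetrate deeper before attaching and hence produce denser layers; that variation is omitted here for brevity.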

  9. Analysis of the traditional vehicle’s running cost and the electric vehicle’s running cost under car-following model

    Science.gov (United States)

    Tang, Tie-Qiao; Xu, Ke-Wei; Yang, Shi-Chun; Shang, Hua-Yan

    2016-03-01

    In this paper, we use car-following theory to study the traditional vehicle’s running cost and the electric vehicle’s running cost. The numerical results illustrate that the traditional vehicle’s running cost is larger than that of the electric vehicle and that the system’s total running cost drops with the increase of the electric vehicle’s proportion, which shows that the electric vehicle is better than the traditional vehicle from the perspective of the running cost.
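    The comparison can be illustrated with a generic optimal-velocity car-following model (a common tanh form, not necessarily the authors' model) on a ring road: integrate the platoon dynamics, accumulate each vehicle's traction energy demand, and price that demand at different per-unit energy costs. All prices and coefficients here are hypothetical.

```python
import math

def optimal_velocity(s, v_max=30.0):
    """A common tanh-shaped optimal-velocity function of headway s (illustrative)."""
    return v_max / 2 * (math.tanh(0.1 * s - 2.0) + math.tanh(2.0))

def simulate_platoon(n=10, ring=400.0, dt=0.1, steps=2000, alpha=0.5):
    """Optimal-velocity car-following on a ring road; returns the total
    traction energy demand per unit mass (J/kg) accumulated over the run."""
    xs = [i * ring / n for i in range(n)]
    vs = [0.0] * n
    energy = 0.0
    for _ in range(steps):
        acc = [alpha * (optimal_velocity((xs[(i + 1) % n] - xs[i]) % ring) - vs[i])
               for i in range(n)]
        for i in range(n):
            # traction power per unit mass: acceleration plus a simple drag
            # term; negative (braking) power is discarded, i.e. no regeneration
            p = max((acc[i] + 0.05) * vs[i], 0.0)
            energy += p * dt
            vs[i] = max(vs[i] + acc[i] * dt, 0.0)
            xs[i] = (xs[i] + vs[i] * dt) % ring
    return energy

energy = simulate_platoon()
# Hypothetical unit prices per J/kg of traction demand: the electric
# vehicle's cheaper energy is what drives its lower running cost.
cost_fuel, cost_ev = energy * 4e-3, energy * 1.5e-3
```

    With identical driving dynamics, the cost gap comes entirely from the per-unit energy price; the paper's fuller treatment also lets the vehicle mix feed back into the traffic flow itself.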

  10. Study of the ion kinetic effects in ICF run-away burn using a quasi-1D hybrid model

    Science.gov (United States)

    Huang, C.-K.; Molvig, K.; Albright, B. J.; Dodd, E. S.; Vold, E. L.; Kagan, G.; Hoffman, N. M.

    2017-02-01

    The loss of fuel ions in the Gamow peak and other kinetic effects related to the α particles during ignition, run-away burn, and disassembly stages of an inertial confinement fusion D-T capsule are investigated with a quasi-1D hybrid volume ignition model that includes kinetic ions, fluid electrons, Planckian radiation photons, and a metallic pusher. The fuel ion loss due to the Knudsen effect at the fuel-pusher interface is accounted for by a local-loss model by Molvig et al. [Phys. Rev. Lett. 109, 095001 (2012)] with an albedo model for ions returning from the pusher wall. The tail refilling and relaxation of the fuel ion distribution are captured with a nonlinear Fokker-Planck solver. Alpha heating of the fuel ions is modeled kinetically while simple models for finite alpha range and electron heating are used. This dynamical model is benchmarked with a three-temperature (3-T) hydrodynamic burn model employing similar assumptions. For an energetic pusher (~40 kJ) that compresses the fuel to an areal density of ~1.07 g/cm² at ignition, the simulation shows that the Knudsen effect can substantially limit ion temperature rise in runaway burn. While the final yield decreases modestly from kinetic effects of the α particles, large reduction of the fuel reactivity during ignition and runaway burn may require a higher Knudsen loss rate compared to the rise time of the temperatures above ~25 keV, when the broad D-T Gamow peak merges into the bulk Maxwellian distribution.

  11. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    DEFF Research Database (Denmark)

    Sørensen, Kim; Condra, Thomas Joseph; Houbak, Niels

    2004-01-01

    … on the boiler) have been defined. Furthermore, a number of constraints related to minimum and maximum boiler load gradient, minimum boiler size, shrinking and swelling, and steam space load have been defined. For defining the constraints related to the required boiler volume, a dynamic model for simulating the boiler … size. The model has been formulated with a specified building-up of the pressure during the start-up of the plant, i.e. the steam production during start-up of the boiler is output from the model. The steam outputs, together with requirements with respect to steam space load, have been utilized to define … of the boiler is (with an acceptable accuracy) proportional to the volume of the boiler. For the dynamic operation capability, a cost function penalizing limited dynamic operation capability, and vice versa, has been defined. The main idea is that, by means of the parameters in this function, it is possible to fit its …

  12. A comparison between conventional and LANDSAT based hydrologic modeling: The Four Mile Run case study

    Science.gov (United States)

    Ragan, R. M.; Jackson, T. J.; Fitch, W. N.; Shubinski, R. P.

    1976-01-01

    Models designed to support the hydrologic studies associated with urban water resources planning require input parameters that are defined in terms of land cover. Estimating the land cover is a difficult and expensive task when drainage areas larger than a few sq. km are involved. Conventional and LANDSAT based methods for estimating the land cover based input parameters required by hydrologic planning models were compared in a case study of the 50.5 sq. km (19.5 sq. mi) Four Mile Run Watershed in Virginia. Results of the study indicate that the LANDSAT based approach is highly cost effective for planning model studies. The conventional approach to define inputs was based on 1:3600 aerial photos, required 110 man-days and a total cost of $14,000. The LANDSAT based approach required 6.9 man-days and cost $2,350. The conventional and LANDSAT based models gave similar results relative to discharges and estimated annual damages expected from no flood control, channelization, and detention storage alternatives.

  13. Building Simulation Modelers are we big-data ready?

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [ORNL; New, Joshua Ryan [ORNL

    2014-01-01

    Recent advances in computing and sensor technologies have pushed the amount of data we collect or generate to limits previously unheard of. Sub-minute resolution data from dozens of channels is becoming increasingly common and is expected to increase with the prevalence of non-intrusive load monitoring. Experts are running larger building simulation experiments and are faced with an increasingly complex data set to analyze and derive meaningful insight. This paper focuses on the data management challenges that building modeling experts may face in data collected from a large array of sensors, or generated from running a large number of building energy/performance simulations. The paper highlights the technical difficulties that were encountered and overcome in order to run 3.5 million EnergyPlus simulations on supercomputers and generating over 200 TBs of simulation output. This extreme case involved development of technologies and insights that will be beneficial to modelers in the immediate future. The paper discusses different database technologies (including relational databases, columnar storage, and schema-less Hadoop) in order to contrast the advantages and disadvantages of employing each for storage of EnergyPlus output. Scalability, analysis requirements, and the adaptability of these database technologies are discussed. Additionally, unique attributes of EnergyPlus output are highlighted which make data-entry non-trivial for multiple simulations. Practical experience regarding cost-effective strategies for big-data storage is provided. The paper also discusses network performance issues when transferring large amounts of data across a network to different computing devices. Practical issues involving lag, bandwidth, and methods for synchronizing or transferring logical portions of the data are presented. A cornerstone of big-data is its use for analytics; data is useless unless information can be meaningfully derived from it. In addition to technical

  14. Keeping It Real: Revisiting a Real-Space Approach to Running Ensembles of Cosmological N-body Simulations

    CERN Document Server

    Orban, Chris

    2012-01-01

    In setting up initial conditions for cosmological N-body simulations there are, fundamentally, two choices: either maximizing the correspondence of the initial density field to the assumed Fourier-space clustering or, instead, matching to the real-space clustering. As a stringent test of both approaches, I perform ensembles of simulations using power-law models and exploit the self-similarity of these initial conditions to quantify the accuracy of the results. For the real-space motivated approach, originally proposed by Pen (1997) and implemented by Sirko (2005), which allows the DC mode to vary, I show that it performs well in exhibiting the expected self-similar behavior in the mean xi(r) and P(k), and in both methods this behavior extends below the scale of the initial mean interparticle spacing. I also test the real-space method with simulations of a simplified, power-law model for baryon acoustic oscillations, again with success, and mindful of the need to generate mock catalogs using simulations I show extensive po...

  15. Simulated annealing model of acupuncture

    Science.gov (United States)

    Shang, Charles; Szu, Harold

    2015-05-01

    The growth control singularity model suggests that acupuncture points (acupoints) originate from organizers in embryogenesis. Organizers are singular points in growth control. Acupuncture can cause perturbation of a system with effects similar to simulated annealing. In clinical trials, the goal of a treatment is to relieve a certain disorder, which corresponds to reaching a certain local optimum in simulated annealing. The self-organizing effect of the system is limited and related to the person's general health and age. Perturbation at acupoints can lead to a stronger local excitation (analogous to a higher annealing temperature) compared to perturbation at non-singular points (placebo control points). Such difference diminishes as the number of perturbed points increases, due to the wider distribution of the limited self-organizing activity. This model explains the following facts from systematic reviews of acupuncture trials: 1. Properly chosen single-acupoint treatment for certain disorders can lead to highly repeatable efficacy above placebo. 2. When multiple acupoints are used, the result can be highly repeatable if the patients are relatively healthy and young, but is usually mixed if the patients are old, frail, and have multiple disorders at the same time, as the number of local optima or comorbidities increases. 3. As the number of acupoints used increases, the efficacy difference between sham and real acupuncture often diminishes. The model predicts that the efficacy of acupuncture is negatively correlated with disease chronicity, severity, and patient age. This is the first biological-physical model of acupuncture which can predict and guide clinical acupuncture research.
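    For readers unfamiliar with the optimization metaphor the abstract leans on, here is a plain simulated-annealing sketch on a rugged one-dimensional landscape: random perturbations are always accepted when they improve the objective and accepted with probability exp(-Δ/T) otherwise, with the temperature T cooled gradually. The landscape and schedule are illustrative only.

```python
import math
import random

def rugged(x):
    """1-D Rastrigin-style landscape: many local minima, one global optimum
    at x = 0 (the 'local optima' of the abstract's analogy)."""
    return x * x - 10 * math.cos(2 * math.pi * x) + 10

def anneal(f, x0, t0=5.0, t_min=1e-3, cooling=0.999, seed=3):
    """Plain simulated annealing: always accept improvements, accept uphill
    moves with probability exp(-delta/T), and cool T geometrically."""
    random.seed(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    while t > t_min:
        x_new = x + random.gauss(0, 1.0)
        f_new = f(x_new)
        if f_new < fx or random.random() < math.exp(-(f_new - fx) / t):
            x, fx = x_new, f_new
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

best_x, best_f = anneal(rugged, x0=3.5)
```

    The high-temperature phase corresponds to the abstract's "stronger local excitation": it lets the search escape shallow local minima before the cooling schedule locks it into a good basin.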

  16. Uterine Contraction Modeling and Simulation

    Science.gov (United States)

    Liu, Miao; Belfore, Lee A.; Shen, Yuzhong; Scerbo, Mark W.

    2010-01-01

    Building a training system for medical personnel to properly interpret fetal heart rate tracings requires developing accurate models that can relate various signal patterns to certain pathologies. In addition to modeling the fetal heart rate signal itself, the change of uterine pressure, which bears a strong relation to fetal heart rate and provides indications of maternal and fetal status, should also be considered. In this work, we have developed a group of parametric models to simulate uterine contractions during labor and delivery. Through analysis of real patient records, we propose to model uterine contraction signals by three major components: regular contractions, impulsive noise caused by fetal movements, and low-amplitude noise invoked by maternal breathing and the measuring apparatus. The regular contractions are modeled by an asymmetric generalized Gaussian function, and least squares estimation is used to compute the parameter values of the asymmetric generalized Gaussian function based on uterine contractions of real patients. Regular contractions are detected based on thresholding and derivative analysis of uterine contractions. Impulsive noise caused by fetal movements and low-amplitude noise by maternal breathing and the measuring apparatus are modeled by rational polynomial functions and Perlin noise, respectively. Experimental results show the synthesized uterine contractions can mimic the real uterine contractions realistically, demonstrating the effectiveness of the proposed algorithm.
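    The regular-contraction component can be sketched with an asymmetric generalized Gaussian: different widths before and after the peak give the skewed pressure profile, and a shape exponent generalizes the plain Gaussian. The peak times, amplitudes, widths, and baseline below are invented for illustration, not fitted patient parameters.

```python
import math

def contraction(t, peak_time, amp, sigma_rise, sigma_fall, shape=2.0):
    """Asymmetric generalized Gaussian: separate widths on the rising and
    falling sides of the peak; `shape` is the generalized-Gaussian exponent."""
    sigma = sigma_rise if t < peak_time else sigma_fall
    return amp * math.exp(-(abs(t - peak_time) / sigma) ** shape)

# A synthetic 10-minute tracing (1 Hz) with three regular contractions on a
# constant resting tone; noise components from the paper are omitted here.
peaks = [(120, 55.0, 30.0, 45.0), (300, 60.0, 28.0, 50.0), (480, 50.0, 25.0, 40.0)]
trace = []
for t in range(600):
    p = 8.0   # resting tone (baseline pressure, in arbitrary mmHg-like units)
    for t0, amp, s_rise, s_fall in peaks:
        p += contraction(t, t0, amp, s_rise, s_fall)
    trace.append(p)
```

    Because the falling width exceeds the rising width, the profile decays more slowly than it rises, matching the typical skew of recorded contractions; the paper's fetal-movement impulses and Perlin-noise baseline would be added on top of this clean signal.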

  17. Self-Consistent Modeling of Reionization in Cosmological Hydrodynamical Simulations

    CERN Document Server

    Oñorbe, Jose; Lukić, Zarija

    2016-01-01

    The ultraviolet background (UVB) emitted by quasars and galaxies governs the ionization and thermal state of the intergalactic medium (IGM), regulates the formation of high-redshift galaxies, and is thus a key quantity for modeling cosmic reionization. The vast majority of cosmological hydrodynamical simulations implement the UVB via a set of spatially uniform photoionization and photoheating rates derived from UVB synthesis models. We show that simulations using canonical UVB rates reionize, and perhaps more importantly, spuriously heat the IGM much earlier (z ~ 15) than they should. This problem arises because at z > 6, where observational constraints are non-existent, the UVB amplitude is far too high. We introduce a new methodology to remedy this issue, and generate self-consistent photoionization and photoheating rates to model any chosen reionization history. Following this approach, we run a suite of hydrodynamical simulations of different reionization scenarios, and explore the impact of the timing of ...

  18. Computational challenges in modeling and simulating living matter

    Science.gov (United States)

    Sena, Alexandre C.; Silva, Dilson; Marzulo, Leandro A. J.; de Castro, Maria Clicia Stelling

    2016-12-01

    Computational modeling has been successfully used to help scientists understand physical and biological phenomena. Recent technological advances allow the simulation of larger systems, with greater accuracy. However, devising those systems requires new approaches and novel architectures, such as the use of parallel programming, so that the application can run in the new high performance environments, which are often computer clusters composed of different computation devices, such as traditional CPUs, GPGPUs, Xeon Phis and even FPGAs. It is expected that scientists will take advantage of the increasing computational power to model and simulate more complex structures and even merge different models into larger and more extensive ones. This paper aims at discussing the challenges of using those devices to simulate such complex systems.

  19. A "Necklace" Model for Vesicles Simulations in 2D

    CERN Document Server

    Ismail, Mourad

    2012-01-01

    The aim of this paper is to propose a new numerical model to simulate 2D vesicles interacting with a Newtonian fluid. The inextensible membrane is modeled by a chain of circular rigid particles which are maintained in cohesion by using two different types of forces. First, a spring force is imposed between neighboring particles in the chain. Second, in order to model the bending of the membrane, each triplet of successive particles is subjected to an angular force. Numerical simulations of vesicles in shear flow have been run using the finite element method and the FreeFem++ [1] software. Exploring different ratios of inner and outer viscosities, we recover the well-known "Tank-Treading" and "Tumbling" motions predicted by theory and experiments. Moreover, for the first time, 2D simulations of the "Vacillating-Breathing" regime predicted by theory in [2] and observed experimentally in [3] are done without special ingredients such as the thermal fluctuations used in [4].
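    The two cohesion forces of such a "necklace" can be sketched on a bead ring: a linear spring between neighboring beads and a turn angle at each bead that a bending force would drive toward its rest value. This is a geometric sketch under assumed parameters, not the paper's FreeFem++ formulation.

```python
import math

def spring_force(p, q, k, rest):
    """Linear spring between neighboring beads: force on p toward q is
    restoring, proportional to the deviation of their distance from `rest`."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    d = math.hypot(dx, dy)
    mag = k * (d - rest)
    return (mag * dx / d, mag * dy / d)

def turn_angle(a, b, c):
    """Signed exterior angle at bead b for the triplet (a, b, c); a bending
    force penalizes deviations of this angle from its rest value."""
    v1 = (b[0] - a[0], b[1] - a[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    return math.atan2(v1[0] * v2[1] - v1[1] * v2[0],
                      v1[0] * v2[0] + v1[1] * v2[1])

# Beads on a circle: the rest configuration of the necklace.
n, R = 24, 1.0
beads = [(R * math.cos(2 * math.pi * i / n), R * math.sin(2 * math.pi * i / n))
         for i in range(n)]
rest = 2 * R * math.sin(math.pi / n)     # chord length between neighbors
f = spring_force(beads[0], beads[1], k=100.0, rest=rest)
angles = [turn_angle(beads[i - 1], beads[i], beads[(i + 1) % n]) for i in range(n)]
```

    On the circular rest shape the spring forces vanish and every turn angle equals 2π/n, so both force types act only when the membrane is stretched or bent, which is exactly the inextensibility-plus-bending behavior the model targets.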

  20. A Running State Analysis Model for Humanoid Robot

    Institute of Scientific and Technical Information of China (English)

    王险峰; 洪炳镕; 朴松昊; 钟秋波

    2011-01-01

    In this paper, according to the dynamics of a running humanoid robot, a probability model for analyzing the robot's running state is proposed based on the feedback of a virtual acceleration sensor. Inertial force affects the running state of the humanoid robot during the course of running, and the acceleration value can express this inertial force. Dynamic feedback can therefore be obtained from the virtual acceleration sensor built into the humanoid robot to characterize its running state, and this feedback is analyzed using the wavelet transform and the fast Fourier transform. The probability model of running state analysis is formulated from energy eigenvalues extracted in the frequency domain. Using the Mahalanobis distance as the criterion for stable running, the model can express the running state of the humanoid robot quantitatively. Simulations are conducted on a humanoid robot model built with ADAMS, with the virtual acceleration sensor placed at the robot's center of mass. The experimental results show that the model is able to describe the running of the humanoid robot and express its running state during the whole course of running, including the start and stop gaits, and that it can help the humanoid robot adjust its gait as the environment changes to ensure running stability.
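    The frequency-domain feature pipeline described above can be sketched with standard tools: compute the magnitude spectrum of an acceleration signal, sum its energy over frequency bands to form a feature vector, and score a gait against a reference with a Mahalanobis-style distance. The signals, band split, and diagonal variance estimate below are illustrative stand-ins (a real system would estimate statistics from many stable-running trials).

```python
import cmath
import math

def band_energies(signal, n_bands=4):
    """DFT magnitude spectrum (naive O(n^2) transform) split into equal
    bands; the per-band energies are the frequency-domain features."""
    n = len(signal)
    spec = [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]
    size = len(spec) // n_bands
    return [sum(s * s for s in spec[b * size:(b + 1) * size])
            for b in range(n_bands)]

def mahalanobis_diag(x, mean, var):
    """Mahalanobis distance assuming a diagonal covariance estimate."""
    return math.sqrt(sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mean, var)))

# Reference "stable running" acceleration: a clean sinusoid at the stride
# frequency; the test gait adds a high-frequency disturbance.
n = 64
stable = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]
wobbly = [math.sin(2 * math.pi * 4 * t / n)
          + 0.8 * math.sin(2 * math.pi * 20 * t / n) for t in range(n)]
f_stable, f_wobbly = band_energies(stable), band_energies(wobbly)
mean = f_stable
var = [max(0.05 * m, 1.0) for m in f_stable]   # hypothetical variances
d_stable = mahalanobis_diag(f_stable, mean, var)
d_wobbly = mahalanobis_diag(f_wobbly, mean, var)
```

    The disturbed gait's energy leaks into a high-frequency band that is empty for the reference, so its distance is large while the reference scores zero, which is the thresholding logic a stability criterion builds on.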

  1. Short-run analysis of fiscal policy and the current account in a finite horizon model

    OpenAIRE

    Heng-fu Zou

    1995-01-01

    This paper utilizes a technique developed by Judd to quantify the short-run effects of fiscal policies and income shocks on the current account in a small open economy. It is found that: (1) a future increase in government spending improves the short-run current account; (2) a future tax increase worsens the short-run current account; (3) a present increase in the government spending worsens the short-run current account dollar by dollar, while a present increase in the income improves the cu...

  2. Applications of Joint Tactical Simulation Modeling

    Science.gov (United States)

    1997-12-01

    Naval Postgraduate School (Monterey, California) thesis: Applications of Joint Tactical Simulation Modeling, by Steve VanLandingham, Lieutenant, United States Navy, December 1997. Approved for public release; distribution is unlimited.

  3. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity. © IWA Publishing 2013.

  4. SWEEPOP a simulation model for Target Simulation Mode minesweeping

    NARCIS (Netherlands)

    Keus, H.E.; Beckers, A.L.D.; Cleophas, P.L.H.

    2005-01-01

    SWEEPOP is a flexible model that simulates the physical interaction between objects in a maritime underwater environment. The model was built to analyse the deployment and the performance of a Target Simulation Mode (TSM) minesweeping system for the Royal Netherlands Navy (RNLN) and to support its p

  5. Implementation of angular response function modeling in SPECT simulations with GATE

    Energy Technology Data Exchange (ETDEWEB)

    Descourt, P; Visvikis, D [INSERM, U650, LaTIM, IFR SclnBioS, Universite de Brest, CHU Brest, Brest, F-29200 (France); Carlier, T; Bardies, M [CRCNA INSERM U892, Nantes (France); Du, Y; Song, X; Frey, E C; Tsui, B M W [Department of Radiology, J Hopkins University, Baltimore, MD (United States); Buvat, I, E-mail: dimitris@univ-brest.f [IMNC-UMR 8165 CNRS Universites Paris 7 et Paris 11, Orsay (France)

    2010-05-07

    Among Monte Carlo simulation codes in medical imaging, the GATE simulation platform is widely used today given its flexibility and accuracy, despite long run times, which in SPECT simulations are mostly spent in tracking photons through the collimators. In this work, a tabulated model of the collimator/detector response was implemented within the GATE framework to significantly reduce the simulation times in SPECT. This implementation uses the angular response function (ARF) model. The performance of the implemented ARF approach has been compared to standard SPECT GATE simulations in terms of the ARF tables' accuracy, overall SPECT system performance and run times. Considering the simulation of the Siemens Symbia T SPECT system using high-energy collimators, differences of less than 1% were measured between the ARF-based and the standard GATE-based simulations, while considering the same noise level in the projections, acceleration factors of up to 180 were obtained when simulating a planar 364 keV source seen with the same SPECT system. The ARF-based and the standard GATE simulation results also agreed very well when considering a four-head SPECT simulation of a realistic Jaszczak phantom filled with iodine-131, with a resulting acceleration factor of 100. In conclusion, the implementation of an ARF-based model of collimator/detector response for SPECT simulations within GATE significantly reduces the simulation run times without compromising accuracy. (note)
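As a rough illustration of the ARF idea (not GATE's actual implementation), a photon arriving at the collimator face can be weighted by a tabulated detection probability for its incidence angle instead of being tracked through the collimator septa. The table values below are invented for the sketch.

```python
import numpy as np

# Hypothetical tabulated ARF: detection probability vs incidence angle (deg).
angles = np.linspace(0.0, 5.0, 51)          # tabulation grid
arf_table = np.exp(-(angles / 1.5) ** 2)    # made-up fall-off with angle

def detection_probability(theta_deg):
    """ARF-style lookup: instead of Monte Carlo tracking through the
    collimator, weight the photon by the tabulated probability for its
    incidence angle (zero beyond the table's range)."""
    return float(np.interp(theta_deg, angles, arf_table, right=0.0))
```

Replacing per-photon collimator tracking with this lookup is what yields the large acceleration factors reported above.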

  7. MASADA: A Modeling and Simulation Automated Data Analysis framework for continuous data-intensive validation of simulation models

    CERN Document Server

    Foguelman, Daniel Jacob; The ATLAS collaboration

    2016-01-01

    Complex networked computer systems are usually subjected to upgrades and enhancements on a continuous basis. Modeling and simulation of such systems helps with guiding their engineering processes, in particular when testing candidate design alternatives directly on the real system is not an option. Models are built and simulation exercises are run guided by specific research and/or design questions. A vast amount of operational conditions for the real system need to be assumed in order to focus on the relevant questions at hand. A typical boundary condition for computer systems is the exogenously imposed workload. Meanwhile, in typical projects huge amounts of monitoring information are logged and stored with the purpose of studying the system's performance in search for improvements. Research questions also change as systems' operational conditions vary throughout their lifetime. This context poses many challenges to determine the validity of simulation models. As the behavioral empirical base of the sys...

  9. Simulation and Big Data Challenges in Tuning Building Energy Models

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [ORNL; New, Joshua Ryan [ORNL

    2013-01-01

    EnergyPlus is the flagship building energy simulation software used to model whole-building energy consumption for residential and commercial establishments. A typical input to the program often has hundreds, sometimes thousands, of parameters which are typically tweaked by a buildings expert to get it "right". This process can sometimes take months. Autotune is an ongoing research effort employing machine learning techniques to automate the tuning of the input parameters for an EnergyPlus input description of a building. Even with automation, the computational challenge of running the tuning simulation ensemble is daunting and requires the use of supercomputers to make it tractable in time. In this proposal, we describe the scope of the problem, the technical challenges faced and overcome, the machine learning techniques developed and employed, and the software infrastructure developed or in development for taking the EnergyPlus engine, which was primarily designed to run on desktops, and scaling it to run on shared-memory supercomputers (Nautilus) and distributed-memory supercomputers (Frost and Titan). The parametric simulations produce data on the order of tens to a couple of hundred terabytes. We describe the approaches employed to streamline and reduce bottlenecks in the workflow for this data, which is subsequently being made available for the tuning effort as well as made available publicly for open science.
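A parametric tuning ensemble of the kind described above amounts to enumerating a cartesian product over the tunable inputs, one simulation run per combination. The parameter names and values below are hypothetical, not EnergyPlus's actual input fields.

```python
from itertools import product

# Hypothetical tunable parameters for a building-energy input description.
param_grid = {
    "wall_insulation_R": [13, 19, 30],
    "infiltration_ach": [0.3, 0.5, 0.7],
    "window_u_value": [1.8, 2.5],
}

def ensemble(grid):
    """Enumerate the full cartesian ensemble of parameter combinations;
    each returned dict would parameterize one simulation run."""
    keys = list(grid)
    return [dict(zip(keys, vals)) for vals in product(*(grid[k] for k in keys))]

runs = ensemble(param_grid)   # 3 * 3 * 2 = 18 runs
```

Even this toy grid shows why the ensemble explodes: with hundreds of parameters the combination count quickly demands supercomputer-scale scheduling.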

  10. cellGPU: Massively parallel simulations of dynamic vertex models

    Science.gov (United States)

    Sussman, Daniel M.

    2017-10-01

    Vertex models represent confluent tissue by polygonal or polyhedral tilings of space, with the individual cells interacting via force laws that depend on both the geometry of the cells and the topology of the tessellation. This dependence on the connectivity of the cellular network introduces several complications to performing molecular-dynamics-like simulations of vertex models, and in particular makes parallelizing the simulations difficult. cellGPU addresses this difficulty and lays the foundation for massively parallelized, GPU-based simulations of these models. This article discusses its implementation for a pair of two-dimensional models, and compares the typical performance that can be expected between running cellGPU entirely on the CPU versus its performance when running on a range of commercial and server-grade graphics cards. By implementing the calculation of topological changes and forces on cells in a highly parallelizable fashion, cellGPU enables researchers to simulate time- and length-scales previously inaccessible via existing single-threaded CPU implementations. Program Files doi:http://dx.doi.org/10.17632/6j2cj29t3r.1 Licensing provisions: MIT Programming language: CUDA/C++ Nature of problem: Simulations of off-lattice "vertex models" of cells, in which the interaction forces depend on both the geometry and the topology of the cellular aggregate. Solution method: Highly parallelized GPU-accelerated dynamical simulations in which the force calculations and the topological features can be handled on either the CPU or GPU. Additional comments: The code is hosted at https://gitlab.com/dmsussman/cellGPU, with documentation additionally maintained at http://dmsussman.gitlab.io/cellGPUdocumentation
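The force laws in such models derive from an energy that depends on each cell's geometry; a common 2D vertex-model form penalizes deviations of cell area and perimeter from target values. A minimal sketch for a single polygonal cell (parameter names are ours):

```python
import numpy as np

def cell_energy(vertices, KA, A0, KP, P0):
    """Common 2D vertex-model energy of one polygonal cell:
    E = KA*(A - A0)^2 + KP*(P - P0)^2, with the area A from the
    shoelace formula and the perimeter P from edge lengths."""
    x, y = vertices[:, 0], vertices[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    edges = vertices - np.roll(vertices, -1, axis=0)
    perim = np.linalg.norm(edges, axis=1).sum()
    return KA * (area - A0) ** 2 + KP * (perim - P0) ** 2
```

Forces on the vertices are (minus) gradients of this energy, and it is exactly that gradient-plus-topology computation that cellGPU parallelizes on the GPU.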

  11. Techniques and Simulation Models in Risk Management

    OpenAIRE

    Mirela GHEORGHE

    2012-01-01

    In the present paper, the scientific approach of the research starts from the theoretical framework of the simulation concept and then continues in the setting of the practical reality, thus providing simulation models for a broad range of inherent risks specific to any organization and simulation of those models, using the informatics instrument @Risk (Palisade). The reason behind this research lies in the need for simulation models that will allow the person in charge with decision taking i...

  12. Assessing the debris flow run-out frequency of a catchment in the French Alps using a parameterization analysis with the RAMMS numerical run-out model

    NARCIS (Netherlands)

    Hussin, Y.A.; Quan Luna, B.; Van Westen, C.J.; Christen, M.; Malet, J.P.; Asch, Th.W.J. van

    2012-01-01

    Debris flows occurring in the European Alps frequently cause significant damage to settlements, power-lines and transportation infrastructure which has led to traffic disruptions, economic loss and even death. Estimating the debris flow run-out extent and the parameter uncertainty related to run-out

  13. Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance

    Science.gov (United States)

    2013-10-01

    ... are modeled using SPH elements. Model validation runs with monolithic SiC tiles are conducted based on the DoP experiments described in reference... Subject terms: .30cal AP M2 projectile, 7.62x39 PS projectile, SPH, Aluminum 5083, SiC, DoP experiments, AutoDyn simulations, tile gap.

  14. Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance

    Science.gov (United States)

    2013-09-09

    A quarter-symmetric model is used in AutoDyn to simulate DoP experiments on aluminum targets and ceramic-faced aluminum targets with a .30cal AP M2 projectile using SPH elements. Model validation runs were conducted based on the DoP experiments described in reference... The effect of material properties on DoP is studied. Subject terms: .30cal AP M2 projectile, 7.62x39 PS projectile, SPH, Aluminum 5083, SiC, DoP experiments.

  15. Bridging experiments, models and simulations

    DEFF Research Database (Denmark)

    Carusi, Annamaria; Burrage, Kevin; Rodríguez, Blanca

    2012-01-01

    Computational models in physiology often integrate functional and structural information from a large range of spatiotemporal scales from the ionic to the whole organ level. Their sophistication raises both expectations and skepticism concerning how computational methods can improve our understanding of living organisms and also how they can reduce, replace, and refine animal experiments. A fundamental requirement to fulfill these expectations and achieve the full potential of computational physiology is a clear understanding of what models represent and how they can be validated. The present ... of biovariability; 2) testing and developing robust techniques and tools as a prerequisite to conducting physiological investigations; 3) defining and adopting standards to facilitate the interoperability of experiments, models, and simulations; 4) and understanding physiological validation as an iterative process ... that contributes to defining the specific aspects of cardiac electrophysiology the MSE system targets, rather than being only an external test, and that this is driven by advances in experimental and computational methods and the combination of both.

  16. Effects of independently altering body weight and mass on the energetic cost of a human running model.

    Science.gov (United States)

    Ackerman, Jeffrey; Seipel, Justin

    2016-03-21

    The mechanisms underlying the metabolic cost of running, and legged locomotion in general, remain to be well understood. Prior experimental studies show that the metabolic cost of human running correlates well with the vertical force generated to support body weight, the mechanical work done, and changes in the effective leg stiffness. Further, previous work shows that the metabolic cost of running decreases with decreasing body weight, increases with increasing body weight and mass, and does not significantly change with changing body mass alone. In the present study, we seek to uncover the basic mechanism underlying this existing experimental data. We find that an actuated spring-mass mechanism representing the effective mechanics of human running provides a mechanistic explanation for the previously reported changes in the metabolic cost of human running if the dimensionless relative leg stiffness (effective stiffness normalized by body weight and leg length) is regulated to be constant. The model presented in this paper provides a mechanical explanation for the changes in metabolic cost due to changing body weight and mass which have been previously measured experimentally and highlights the importance of active leg stiffness regulation during human running.
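The regulated quantity in the model above can be computed directly: the dimensionless relative leg stiffness is the effective leg stiffness normalized by body weight and leg length. A minimal sketch (function name is ours):

```python
def relative_leg_stiffness(k_leg, leg_length, body_weight):
    """Dimensionless relative leg stiffness of a spring-mass runner:
    effective leg stiffness k (N/m) normalized by body weight W (N)
    and leg length L (m), i.e. k * L / W."""
    return k_leg * leg_length / body_weight
```

The model's explanation of the experimental data hinges on this ratio staying constant: if body weight doubles and the leg stiffness is regulated to double with it, the relative stiffness is unchanged.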

  17. Climate sensitivity runs and regional hydrologic modeling for predicting the response of the greater Florida Everglades ecosystem to climate change.

    Science.gov (United States)

    Obeysekera, Jayantha; Barnes, Jenifer; Nungesser, Martha

    2015-04-01

    It is important to understand the vulnerability of the water management system in south Florida and to determine the resilience and robustness of greater Everglades restoration plans under future climate change. The current climate models, at both global and regional scales, are not ready to deliver specific climatic datasets for water resources investigations involving future plans and therefore a scenario based approach was adopted for this first study in restoration planning. We focused on the general implications of potential changes in future temperature and associated changes in evapotranspiration, precipitation, and sea levels at the regional boundary. From these, we developed a set of six climate and sea level scenarios, used them to simulate the hydrologic response of the greater Everglades region including agricultural, urban, and natural areas, and compared the results to those from a base run of current conditions. The scenarios included a 1.5 °C increase in temperature, ±10 % change in precipitation, and a 0.46 m (1.5 feet) increase in sea level for the 50-year planning horizon. The results suggested that, depending on the rainfall and temperature scenario, there would be significant changes in water budgets, ecosystem performance, and in water supply demands met. The increased sea level scenarios also show that the ground water levels would increase significantly with associated implications for flood protection in the urbanized areas of southeastern Florida.

  18. Wave Run-Up on Offshore Wind Turbines

    DEFF Research Database (Denmark)

    Ramirez, Jorge Robert Rodriguez

    This study has investigated the interaction of water waves with a circular structure known as wave run-up phenomenon. This run-up phenomenon has been simulated by the use of computational fluid dynamic models. The numerical model (NS3) used in this study has been verified rigorously against a num...

  20. Recent updates in the aerosol component of the C-IFS model run by ECMWF

    Science.gov (United States)

    Remy, Samuel; Boucher, Olivier; Hauglustaine, Didier; Kipling, Zak; Flemming, Johannes

    2017-04-01

    The Composition-Integrated Forecast System (C-IFS) is a global atmospheric composition forecasting tool, run by ECMWF within the framework of the Copernicus Atmosphere Monitoring Service (CAMS). The aerosol model of C-IFS is a simple bulk scheme that forecasts 5 species: dust, sea salt, black carbon, organic matter and sulfate. Three bins represent dust and sea salt, for the super-coarse, coarse and fine modes of these species (Morcrette et al., 2009). This talk will present recent updates of the aerosol model and introduce forthcoming developments. It will also present the impact of these changes as measured by scores against AERONET Aerosol Optical Depth (AOD) and Airbase PM10 observations. The next cycle of C-IFS will include a mass fixer, because the semi-Lagrangian advection scheme used in C-IFS is not mass-conservative. C-IFS now offers the possibility to emit biomass-burning aerosols at an injection height provided by a new version of the Global Fire Assimilation System (GFAS). Secondary Organic Aerosol (SOA) production will be scaled on non-biomass-burning CO fluxes. This approach allows the anthropogenic contribution to SOA production to be represented; it brought a notable improvement in the skill of the model, especially over Europe. Lastly, the emissions of SO2 are now provided by the MACCity inventory instead of an older version of the EDGAR dataset. The seasonal and yearly variability of SO2 emissions is better captured by the MACCity dataset. Upcoming developments of the aerosol model of C-IFS consist mainly of the implementation of a nitrate and ammonium module, with 2 bins (fine and coarse) for nitrate. Nitrate and ammonium sulfate particle formation from gaseous precursors is represented following Hauglustaine et al. (2014); formation of coarse nitrate over pre-existing sea-salt or dust particles is also represented. This extension of the forward model improved scores over heavily populated areas such as Europe, China and Eastern

  1. Distributed simulation a model driven engineering approach

    CERN Document Server

    Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent

    2016-01-01

    Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.

  2. Dynamics Modeling of Heavy Special Driving Simulator

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Based on the dynamic characteristic parameters of the real vehicle, the modeling approach and procedure for vehicle dynamics are described. The layout of the vehicle dynamics model is proposed, and the sub-models of the diesel engine, drivetrain system and vehicle multi-body dynamics are introduced. Finally, the running characteristic data of the virtual and real vehicles are compared, showing that the dynamics model closely matches the real vehicle system.

  3. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  4. Keeping the noise down: common random numbers for disease simulation modeling.

    Science.gov (United States)

    Stout, Natasha K; Goldie, Sue J

    2008-12-01

    Disease simulation models are used to conduct decision analyses of the comparative benefits and risks associated with preventive and treatment strategies. To address increasing model complexity and computational intensity, modelers use variance reduction techniques to reduce stochastic noise and improve computational efficiency. One technique, common random numbers, further allows modelers to conduct counterfactual-like analyses with direct computation of statistics at the individual level. This technique uses synchronized random numbers across model runs to induce correlation in model output thereby making differences easier to distinguish as well as simulating identical individuals across model runs. We provide a tutorial introduction and demonstrate the application of common random numbers in an individual-level simulation model of the epidemiology of breast cancer.
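The effect of common random numbers can be demonstrated with a toy individual-level model (the disease model below is invented for illustration): seeding both strategy runs identically reproduces the same simulated individuals, so the paired difference between strategies is far less noisy than with independent streams.

```python
import numpy as np

def simulate_cohort(effect, rng, n=10_000):
    """Toy individual-level disease model: each person has one latent
    risk draw; disease occurs when the draw falls below 0.10 * effect."""
    return (rng.random(n) < 0.10 * effect).mean()

crn_diffs, ind_diffs = [], []
for s in range(200):
    # Common random numbers: the same seed in both strategy runs means
    # identical individuals, so differences are computed on matched draws.
    crn_diffs.append(simulate_cohort(1.0, np.random.default_rng(s))
                     - simulate_cohort(0.8, np.random.default_rng(s)))
    # Independent streams: the same estimator, but much noisier.
    ind_diffs.append(simulate_cohort(1.0, np.random.default_rng(s))
                     - simulate_cohort(0.8, np.random.default_rng(10_000 + s)))
```

With synchronized seeds the difference is never negative (a treated individual can only be spared, never added), which is exactly the counterfactual-like, individual-level comparison described above.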

  5. GPU accelerated numerical simulations of viscoelastic phase separation model.

    Science.gov (United States)

    Yang, Keda; Su, Jiaye; Guo, Hongxia

    2012-07-05

    We introduce a complete implementation of a viscoelastic model for numerical simulations of phase separation kinetics in dynamically asymmetric systems such as polymer blends and polymer solutions on a graphics processing unit (GPU) using the CUDA language, and discuss the algorithms and optimizations in detail. From studies of a polymer solution, we show that the GPU-based implementation correctly reproduces the accepted results and provides about a 190-fold speedup over a single central processing unit (CPU). Further accuracy analysis demonstrates that both single- and double-precision calculations on the GPU are sufficient to produce high-quality results in numerical simulations of the viscoelastic model. Therefore, the GPU-based viscoelastic model is very promising for studying many phase separation processes of experimental and theoretical interest that often take place on large length and time scales and are not easily addressed by a conventional implementation running on a single CPU.

  6. Run-time infrastructure of distributed simulation based on Web Services

    Institute of Scientific and Technical Information of China (English)

    吴泽彬; 吴慧中; 李蔚清

    2009-01-01

    To improve the reusability and interoperability of High Level Architecture (HLA), the traditional HLA distributed simulation model was decoupled by introducing a simulation application layer and a simulation communication layer to separate the simulation model from the local Run-Time Infrastructure (RTI) component. A low-coupling model of HLA distributed simulation and an HLA distributed simulation architecture based on Web Services were proposed. An RTI based on Web Services (RTI-WS) was designed and its prototype developed, and its significance and corresponding performance analysis were explained subsequently. HLA-compatible federates can communicate with the RTI on both wide area networks and local area networks via the Simple Object Access Protocol (SOAP) over HTTP, passing through most firewalls. The RTI and federates were deployed as Web Services and can be integrated easily over the Internet. At the cost of some real-time performance, RTI-WS can greatly improve the reusability and interoperability of HLA distributed simulation. The results show that the Web Services-based run-time infrastructure is suitable for coarse-grained distributed simulation applications over wide area networks.

  7. Driving-Simulator-Based Test on the Effectiveness of Auditory Red-Light Running Vehicle Warning System Based on Time-To-Collision Sensor

    Directory of Open Access Journals (Sweden)

    Xuedong Yan

    2014-02-01

    The collision avoidance warning system is an emerging technology designed to assist drivers in avoiding red-light running (RLR) collisions at intersections. The aim of this paper is to evaluate the effect of auditory warning information on collision avoidance behaviors in RLR pre-crash scenarios and further to examine the causal relationships among the relevant factors. A driving-simulator-based experiment was designed and conducted with 50 participants. The data from the experiments were analyzed by ANOVA and structural equation modeling (SEM). The collision-avoidance-related variables were measured in terms of brake reaction time (BRT), maximum deceleration and lane deviation in this study. It was found that the collision avoidance warning system can result in smaller collision rates compared to the without-warning condition and lead to shorter reaction times, larger maximum deceleration and less lane deviation. Furthermore, the SEM analysis illustrates that the auditory warning information in fact has both direct and indirect effects on the occurrence of collisions, and the indirect effect plays a more important role in collision avoidance than the direct effect. Essentially, the auditory warning information can assist drivers in detecting RLR vehicles in a timely manner, thus providing drivers more adequate time and space to decelerate to avoid collisions with conflicting vehicles.
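A time-to-collision trigger of the kind such warning systems use can be sketched as follows; the threshold value and function names are illustrative, not taken from the paper.

```python
def time_to_collision(gap_m, own_speed, other_speed):
    """Time-to-collision with a conflicting vehicle: the gap divided by
    the closing speed (infinite if the gap is not closing)."""
    closing = own_speed - other_speed
    return gap_m / closing if closing > 0 else float("inf")

def should_warn(gap_m, own_speed, other_speed, ttc_threshold=3.0):
    """Fire the auditory warning when TTC drops below a threshold (s)."""
    return time_to_collision(gap_m, own_speed, other_speed) <= ttc_threshold
```

Lowering the threshold gives fewer, later warnings; raising it warns earlier at the cost of more false alarms, which is the trade-off such sensor-based systems tune.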

  8. A Two-Stage Simulated Annealing Algorithm for the Many-to-Many Milk-Run Routing Problem with Pipeline Inventory Cost

    Directory of Open Access Journals (Sweden)

    Yu Lin

    2015-01-01

    In recent years, logistics systems with multiple suppliers and plants in neighboring regions have been flourishing worldwide. However, high logistics costs remain a problem for such systems due to lack of information sharing and cooperation. This paper proposes an extended mathematical model that minimizes transportation and pipeline inventory costs via the many-to-many Milk-run routing mode. Because the problem is NP-hard, a two-stage heuristic algorithm is developed by comprehensively considering its characteristics. More specifically, an initial satisfactory solution is generated in the first stage through a greedy heuristic algorithm, which minimizes the total number of vehicle service nodes, and the best-insertion heuristic algorithm, which determines each vehicle's route. Then, a simulated annealing algorithm (SA) with limited search scope is used to improve the initial satisfactory solution. Thirty numerical examples are employed to test the proposed algorithm. The experimental results demonstrate its effectiveness. Further, the superiority of the many-to-many transportation mode over other modes is demonstrated via two case studies.
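The two-stage scheme (greedy construction, then simulated annealing improvement) can be sketched on a generic routing instance; the nearest-neighbour construction, 2-opt neighbourhood move, and cooling schedule below are illustrative simplifications of the paper's algorithm.

```python
import math
import random

def route_cost(route, dist):
    """Total length of a closed route over a distance matrix."""
    n = len(route)
    return sum(dist[route[i]][route[(i + 1) % n]] for i in range(n))

def greedy_route(dist):
    """Stage 1: nearest-neighbour construction of an initial route."""
    n = len(dist)
    route, left = [0], set(range(1, n))
    while left:
        nxt = min(left, key=lambda j: dist[route[-1]][j])
        route.append(nxt)
        left.remove(nxt)
    return route

def anneal(route, dist, t0=1.0, cooling=0.995, steps=5000, seed=0):
    """Stage 2: simulated annealing with a 2-opt (segment reversal) move,
    accepting uphill moves with probability exp(-delta / T)."""
    rnd = random.Random(seed)
    best, cur = route[:], route[:]
    for step in range(steps):
        t = t0 * cooling ** step
        i, j = sorted(rnd.sample(range(1, len(route)), 2))
        cand = cur[:i] + cur[i:j + 1][::-1] + cur[j + 1:]
        delta = route_cost(cand, dist) - route_cost(cur, dist)
        if delta < 0 or rnd.random() < math.exp(-delta / max(t, 1e-9)):
            cur = cand
            if route_cost(cur, dist) < route_cost(best, dist):
                best = cur[:]
    return best
```

The greedy stage supplies a feasible starting point cheaply; annealing then escapes its local optimum by occasionally accepting worse routes while the temperature is high.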

  9. Structured building model reduction toward parallel simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University

    2013-08-26

    Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any interprocessor communication.
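
    The reduction idea this record relies on can be illustrated with modal truncation of a toy linear thermal network: keep only the slowest eigenmodes of the state matrix. The chain model, output definition, and number of retained modes below are assumptions for illustration, not the paper's structured method.

```python
import numpy as np

# Toy 10-node thermal chain (RC network): x' = A x + B u, output y = mean temperature
n = 10
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # symmetric, stable
B = np.zeros(n); B[0] = 1.0                               # heat input at node 0
C = np.full(n, 1.0 / n)

lam, V = np.linalg.eigh(A)         # symmetric matrix -> orthogonal eigenvectors
k = 3
slow = np.argsort(-lam)[:k]        # lam < 0, so the largest lam decay slowest
Vr = V[:, slow]
Ar, Br, Cr = np.diag(lam[slow]), Vr.T @ B, C @ Vr          # reduced (k-state) model

def dc_gain(A_, B_, C_):
    """Steady-state gain of x' = A x + B u, y = C x for a unit step input."""
    return float(C_ @ np.linalg.solve(-A_, B_))

g_full, g_red = dc_gain(A, B, C), dc_gain(Ar, Br, Cr)
print(f"DC gain full={g_full:.4f}  reduced={g_red:.4f}")
```

    The slow modes dominate the steady-state response, so a 3-state model already tracks the 10-state one closely while being much cheaper to integrate.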

  10. Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2014-01-01

    This book includes extended and revised versions of a set of selected papers from the 2012 International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2012) which was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and held in Rome, Italy. SIMULTECH 2012 was technically co-sponsored by the Society for Modeling & Simulation International (SCS), GDR I3, Lionphant Simulation, Simulation Team and IFIP and held in cooperation with AIS Special Interest Group of Modeling and Simulation (AIS SIGMAS) and the Movimento Italiano Modellazione e Simulazione (MIMOS).

  11. An introduction to enterprise modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ostic, J.K.; Cannon, C.E. [Los Alamos National Lab., NM (United States). Technology Modeling and Analysis Group

    1996-09-01

    As part of an ongoing effort to continuously improve productivity, quality, and efficiency of both industry and Department of Energy enterprises, Los Alamos National Laboratory is investigating various manufacturing and business enterprise simulation methods. A number of enterprise simulation software models are being developed to enable engineering analysis of enterprise activities. In this document the authors define the scope of enterprise modeling and simulation efforts, and review recent work in enterprise simulation at Los Alamos National Laboratory as well as at other industrial, academic, and research institutions. References of enterprise modeling and simulation methods and a glossary of enterprise-related terms are provided.

  12. The Millennium Run Observatory: First Light

    CERN Document Server

    Overzier, R; Angulo, R E; Bertin, E; Blaizot, J; Henriques, B M B; Marleau, G -D; White, S D M

    2012-01-01

    Simulations of galaxy evolution aim to capture our current understanding as well as to make predictions for testing by future experiments. Simulations and observations are often compared in an indirect fashion: physical quantities are estimated from the data and compared to models. However, many applications can benefit from a more direct approach, where the observing process is also simulated and the models are seen fully from the observer's perspective. To facilitate this, we have developed the Millennium Run Observatory (MRObs), a theoretical virtual observatory which uses virtual telescopes to `observe' semi-analytic galaxy formation models based on the suite of Millennium Run dark matter simulations. The MRObs produces data that can be processed and analyzed using the standard software packages developed for real observations. At present, we produce images in forty filters from the rest-frame UV to IR for two stellar population synthesis models, three different models of IGM absorption, and two cosmologi...

  13. Simulation-optimization framework for multi-site multi-season hybrid stochastic streamflow modeling

    Science.gov (United States)

    Srivastav, Roshan; Srinivasan, K.; Sudheer, K. P.

    2016-11-01

    A simulation-optimization (S-O) framework is developed for the hybrid stochastic modeling of multi-site multi-season streamflows. The multi-objective optimization model formulated is the driver and the multi-site, multi-season hybrid matched block bootstrap model (MHMABB) is the simulation engine within this framework. The multi-site multi-season simulation model is an extension of the existing single-site multi-season simulation model. A robust and efficient evolutionary-search-based technique, namely the non-dominated sorting genetic algorithm (NSGA-II), is employed as the solution technique for the multi-objective optimization within the S-O framework. The objective functions employed are related to the preservation of the multi-site critical deficit run sum, and the constraints introduced concern the hybrid model parameter space and the preservation of certain statistics (such as inter-annual dependence and/or skewness of aggregated annual flows). The efficacy of the proposed S-O framework is brought out through a case example from the Colorado River basin. The proposed multi-site multi-season model AMHMABB (whose parameters are obtained from the proposed S-O framework) preserves the temporal as well as the spatial statistics of the historical flows. Also, the other multi-site deficit run characteristics, namely the number of runs, the maximum run length, the mean run sum and the mean run length, are well preserved by the AMHMABB model. Overall, the proposed AMHMABB model shows better streamflow modeling performance than the simulation-based SMHMABB model, plausibly due to the significant role played by: (i) the objective functions related to the preservation of the multi-site critical deficit run sum; (ii) the large hybrid model parameter space available for the evolutionary search; and (iii) the constraint on the preservation of the inter-annual dependence. Split-sample validation results indicate that the AMHMABB model is
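
    The block-bootstrap engine at the core of such models can be sketched in a few lines: resample whole blocks of consecutive years so that within-year seasonality and short-range dependence are retained. The synthetic flow record, block length, and the plain (unmatched) block scheme below are assumptions, a simplified stand-in for the MHMABB model.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical monthly streamflow record: 30 years x 12 seasons
flows = 50 + 20 * np.sin(2 * np.pi * np.arange(360) / 12) + 5 * rng.standard_normal(360)
flows = flows.reshape(30, 12)

def block_bootstrap(series, n_years, block_len, rng):
    """Resample whole blocks of consecutive years so within- and cross-year
    dependence is retained (a simple stand-in for matched block bootstrap)."""
    out = []
    while len(out) < n_years:
        start = rng.integers(0, series.shape[0] - block_len + 1)
        out.extend(series[start:start + block_len])
    return np.array(out[:n_years])

synth = block_bootstrap(flows, 30, 5, rng)
print(synth.shape, round(float(synth.mean()), 1), round(float(flows.mean()), 1))
```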

  14. A physiological production model for cacao : results of model simulations

    NARCIS (Netherlands)

    Zuidema, P.A.; Leffelaar, P.A.

    2002-01-01

    CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.
  16. Long-term running alleviates some behavioral and molecular abnormalities in Down syndrome mouse model Ts65Dn.

    Science.gov (United States)

    Kida, Elizabeth; Rabe, Ausma; Walus, Marius; Albertini, Giorgio; Golabek, Adam A

    2013-02-01

    Running may affect the mood, behavior and neurochemistry of running animals. In the present study, we investigated whether voluntary daily running, sustained over several months, might improve cognition and motor function and modify the brain levels of selected proteins (SOD1, DYRK1A, MAP2, APP and synaptophysin) in Ts65Dn mice, a mouse model for Down syndrome (DS). Ts65Dn and age-matched wild-type mice, all females, had free access to a running wheel either from the time of weaning (post-weaning cohort) or from around 7 months of age (adult cohort). Sedentary female mice were housed in similar cages, without running wheels. Behavioral testing and evaluation of motor performance showed that running improved cognitive function and motor skills in Ts65Dn mice. However, while a dramatic improvement in the locomotor functions and learning of motor skills was observed in Ts65Dn mice from both post-weaning and adult cohorts, improved object memory was seen only in Ts65Dn mice that had free access to the wheel from weaning. The total levels of APP and MAP2ab were reduced and the levels of SOD1 were increased in the runners from the post-weaning cohort, while only the levels of MAP2ab and α-cleaved C-terminal fragments of APP were reduced in the adult group in comparison with sedentary trisomic mice. Hence, our study demonstrates that Ts65Dn females benefit from sustained voluntary physical exercise, more prominently if running starts early in life, providing further support to the idea that a properly designed physical exercise program could be a valuable adjuvant to future pharmacotherapy for DS.

  17. A simple running model with rolling contact and its role as a template for dynamic locomotion on a hexapod robot.

    Science.gov (United States)

    Huang, Ke-Jung; Huang, Chun-Kai; Lin, Pei-Chun

    2014-10-07

    We report on the development of a robot's dynamic locomotion based on a template which fits the robot's natural dynamics. The developed template is a low degree-of-freedom planar model for running with rolling contact, which we call rolling spring loaded inverted pendulum (R-SLIP). Originating from a reduced-order model of the RHex-style robot with compliant circular legs, the R-SLIP model also acts as the template for general dynamic running. The model has a torsional spring and a large circular arc as the distributed foot, so during locomotion it rolls on the ground with varied equivalent linear stiffness. This differs from the well-known spring loaded inverted pendulum (SLIP) model with fixed stiffness and ground contact points. Through dimensionless steps-to-fall and return map analysis, within a wide range of parameter spaces, the R-SLIP model is revealed to have self-stable gaits and a larger stability region than that of the SLIP model. The R-SLIP model is then embedded as the reduced-order 'template' in a more complex 'anchor', the RHex-style robot, via various mapping definitions between the template and the anchor. Experimental validation confirms that by merely deploying the stable running gaits of the R-SLIP model on the empirical robot with simple open-loop control strategy, the robot can easily initiate its dynamic running behaviors with a flight phase and can move with similar body state profiles to those of the model, in all five testing speeds. The robot, embedded with the SLIP model but performing walking locomotion, further confirms the importance of finding an adequate template of the robot for dynamic locomotion.
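
    The SLIP template that the R-SLIP model extends can be sketched as a conservative stance-phase integration in polar coordinates (foot at the origin, leg angle measured from vertical). The mass, stiffness, and touchdown state below are illustrative assumptions, not the RHex robot's parameters, and the rolling-contact feature of R-SLIP is omitted.

```python
import numpy as np

# Conservative SLIP stance phase: state s = [r, r', theta, theta']
m, k, r0, g = 1.0, 2000.0, 1.0, 9.81   # assumed mass, leg stiffness, rest length, gravity

def deriv(s):
    r, dr, th, dth = s
    ddr = r * dth**2 - g * np.cos(th) + (k / m) * (r0 - r)   # radial dynamics
    ddth = (g * np.sin(th) - 2 * dr * dth) / r               # angular dynamics
    return np.array([dr, ddr, dth, ddth])

def energy(s):
    r, dr, th, dth = s
    return (0.5 * m * (dr**2 + r**2 * dth**2)
            + 0.5 * k * (r0 - r)**2 + m * g * r * np.cos(th))

s = np.array([1.0, -1.0, -0.3, 2.0])   # assumed touchdown state
E0 = energy(s)
dt = 1e-4
for _ in range(3000):                  # RK4 integration of 0.3 s of stance
    k1 = deriv(s); k2 = deriv(s + dt / 2 * k1)
    k3 = deriv(s + dt / 2 * k2); k4 = deriv(s + dt * k3)
    s += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
print(f"relative energy drift: {abs(energy(s) - E0) / E0:.2e}")
```

    Because the model is conservative, total energy is an easy correctness check for the integrator; return-map and steps-to-fall analyses like the paper's are built on top of such stance integrations.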

  18. Dark Matter Benchmark Models for Early LHC Run-2 Searches. Report of the ATLAS/CMS Dark Matter Forum

    Energy Technology Data Exchange (ETDEWEB)

    Abercrombie, Daniel [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)]; et al.

    2015-07-06

    One of the guiding principles of this report is to channel the efforts of the ATLAS and CMS collaborations towards a minimal basis of dark matter models that should influence the design of the early Run-2 searches. At the same time, a thorough survey of realistic collider signals of Dark Matter is a crucial input to the overall design of the search program.

  19. An integrated model to assess critical rain fall thresholds for the critical run-out distances of debris flows

    NARCIS (Netherlands)

    van Asch, Th.W.J.; Tang, C.; Alkema, D.; Zhu, J.; Zhou, W.

    2013-01-01

    A dramatic increase in debris flows occurred in the years after the 2008 Wenchuan earthquake in SW China due to the deposition of loose co-seismic landslide material. This paper proposes a preliminary integrated model, which describes the relationship between rain input and debris flow run-out in or

  20. Convergent Validity of the One-Mile Run and PACER VO2MAX Prediction Models in Middle School Students

    Directory of Open Access Journals (Sweden)

    Ryan D. Burns

    2014-02-01

    Full Text Available FITNESSGRAM uses an equating method to convert Progressive Aerobic Cardiovascular Endurance Run (PACER) laps to One-mile run/walk (1MRW) times to estimate aerobic fitness (VO2MAX) in children. However, other prediction models can more directly estimate VO2MAX from PACER performance. The purpose of this study was to examine the convergent validity and relative accuracy between 1MRW and various PACER models for predicting VO2MAX in middle school students. Aerobic fitness was assessed on 134 students utilizing the 1MRW and PACER on separate testing days. Pearson correlations, Bland–Altman plots, kappa statistics, proportion of agreement, and prediction error were used to assess associations and agreement among models. Correlation coefficients were strong (r ≥ .80, p < .001), with kappa statistics > .40 and proportion of agreement > .90. The results support that PACER models display convergent validity and strong relative accuracy with the 1MRW model.
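
    The agreement statistics used here (Pearson correlation plus Bland–Altman bias and limits of agreement) are easy to sketch. The synthetic VO2MAX estimates below are assumptions, not the study's data; only the statistical procedure is illustrated.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical VO2MAX estimates (ml/kg/min) from the 1MRW and one PACER model, n = 134
vo2_1mrw = rng.uniform(35, 55, 134)
vo2_pacer = vo2_1mrw + rng.normal(0.5, 2.0, 134)   # assumed small bias plus noise

r = np.corrcoef(vo2_1mrw, vo2_pacer)[0, 1]          # Pearson correlation

# Bland-Altman: mean difference (bias) and 95% limits of agreement
diff = vo2_pacer - vo2_1mrw
bias, sd = diff.mean(), diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)
print(f"r={r:.2f}  bias={bias:.2f}  LoA=({loa[0]:.2f}, {loa[1]:.2f})")
```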

  1. Cloud radiative effects and changes simulated by the Coupled Model Intercomparison Project Phase 5 models

    Science.gov (United States)

    Shin, Sun-Hee; Kim, Ok-Yeon; Kim, Dongmin; Lee, Myong-In

    2017-07-01

    Using 32 CMIP5 (Coupled Model Intercomparison Project Phase 5) models, this study examines the fidelity of the simulation of cloud amount and cloud radiative effects (CREs) in the historical runs driven by observed external radiative forcing for 1850-2005, and their future changes in the RCP (Representative Concentration Pathway) 4.5 scenario runs for 2006-2100. Validation metrics for the historical run are designed to examine the accuracy of the representation of spatial patterns for the climatological mean, and annual and interannual variations, of clouds and CREs. The models show a large spread in the simulation of cloud amounts, specifically in the low cloud amount. The observed relationship between cloud amount and the controlling large-scale environment is also reproduced diversely by the various models. Based on the validation metrics, four models (ACCESS1.0, ACCESS1.3, HadGEM2-CC, and HadGEM2-ES) are selected as the best models, and the average of the four performs more skillfully than the multimodel ensemble average. All models project global-mean SST warming as greenhouse gases increase, but the magnitude varies across the simulations between 1 and 2 K, which is largely attributable to differences in the change of cloud amount and distribution. The models that simulate more SST warming show a greater increase in the net CRE due to reduced low cloud and increased incoming shortwave radiation, particularly over the marine-boundary-layer regions in the subtropics. The selected best-performing models project a significant reduction in global-mean cloud amount of about −0.99% K⁻¹ and a net radiative warming of 0.46 W m⁻² K⁻¹, suggesting a positive feedback to global warming.

  2. Search for non-standard model signatures in the WZ/ZZ final state at CDF run II

    Energy Technology Data Exchange (ETDEWEB)

    Norman, Matthew [Univ. of California, San Diego, CA (United States)

    2009-01-01

    This thesis discusses a search for non-Standard Model physics in heavy diboson production in the dilepton-dijet final state, using 1.9 fb⁻¹ of data from the CDF Run II detector. New limits are set on the anomalous coupling parameters for ZZ and WZ production, based on limiting the production cross-section at high ŝ. Additionally, limits are set on the direct decay of new physics to ZZ and WZ diboson pairs. The nature and parameters of the CDF Run II detector are discussed, as are the influences it has on the methods of our analysis.

  3. Simulation modeling and analysis with Arena

    CERN Document Server

    Altiok, Tayfur

    2007-01-01

    Simulation Modeling and Analysis with Arena is a highly readable textbook which treats the essentials of the Monte Carlo discrete-event simulation methodology, and does so in the context of the popular Arena simulation environment. It treats simulation modeling as an in-vitro laboratory that facilitates the understanding of complex systems and experimentation with what-if scenarios in order to estimate their performance metrics. The book contains chapters on the simulation modeling methodology and the underpinnings of discrete-event systems, as well as the relevant underlying probability, statistics, stochastic processes, input analysis, model validation and output analysis. All simulation-related concepts are illustrated in numerous Arena examples, encompassing production lines, manufacturing and inventory systems, transportation systems, and computer information systems in networked settings. · Introduces the concept of discrete-event Monte Carlo simulation, the most commonly used methodology for modeli...

  4. Nonsmooth Modeling and Simulation for Switched Circuits

    CERN Document Server

    Acary, Vincent; Brogliato, Bernard

    2011-01-01

    "Nonsmooth Modeling and Simulation for Switched Circuits" concerns the modeling and the numerical simulation of switched circuits with the nonsmooth dynamical systems (NSDS) approach, using piecewise-linear and multivalued models of electronic devices like diodes, transistors, switches. Numerous examples (ranging from introductory academic circuits to various types of power converters) are analyzed and many simulation results obtained with the INRIA open-source SICONOS software package are presented. Comparisons with SPICE and hybrid methods demonstrate the power of the NSDS approach

  5. Modeling the short-run effect of fiscal stimuli on GDP : A new semi-closed input-output model

    NARCIS (Netherlands)

    Chen, Quanrun; Dietzenbacher, Erik; Los, Bart; Yang, Cuihong

    2016-01-01

    In this study, we propose a new semi-closed input-output model, which reconciles input-output analysis with modern consumption theories. It can simulate changes in household consumption behavior when exogenous stimulus policies lead to higher disposable income levels. It is useful for quantifying
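
    The open input-output machinery that the semi-closed model extends reduces to the Leontief relation x = (I − A)⁻¹ f, which makes a stimulus experiment a one-line computation. The 3-sector coefficient matrix and demand figures below are illustrative assumptions, not the paper's semi-closed specification.

```python
import numpy as np

# Toy 3-sector Leontief system: x = A x + f, so x = (I - A)^{-1} f
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.15]])   # assumed technical coefficients
f = np.array([100.0, 80.0, 60.0])    # baseline final demand

L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse
x = L @ f                            # gross output consistent with demand f

# Short-run effect of a fiscal stimulus: raise sector-1 final demand by 10
df = np.array([10.0, 0.0, 0.0])
dx = L @ df                          # output response across all sectors
print(x.round(2), dx.round(2))
```

    The own-sector response dx[0] exceeds the stimulus itself (the multiplier effect); a semi-closed model would additionally feed induced household consumption back into f.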

  6. Juno model rheometry and simulation

    Science.gov (United States)

    Sampl, Manfred; Macher, Wolfgang; Oswald, Thomas; Plettemeier, Dirk; Rucker, Helmut O.; Kurth, William S.

    2016-10-01

    The experiment Waves aboard the Juno spacecraft, which will arrive at its target planet Jupiter in 2016, was devised to study the plasma and radio waves of the Jovian magnetosphere. We analyzed the Waves antennas, which consist of two nonparallel monopoles operated as a dipole. For this investigation we applied two independent methods: the experimental technique, rheometry, which is based on a downscaled model of the spacecraft to measure the antenna properties in an electrolytic tank and numerical simulations, based on commercial computer codes, from which the quantities of interest (antenna impedances and effective length vectors) are calculated. In this article we focus on the results for the low-frequency range up to about 4 MHz, where the antenna system is in the quasi-static regime. Our findings show that there is a significant deviation of the effective length vectors from the physical monopole directions, caused by the presence of the conducting spacecraft body. The effective axes of the antenna monopoles are offset from the mechanical axes by more than 30°, and effective lengths show a reduction to about 60% of the antenna rod lengths. The antennas' mutual capacitances are small compared to the self-capacitances, and the latter are almost the same for the two monopoles. The overall performance of the antennas in dipole configuration is very stable throughout the frequency range up to about 4-5 MHz and therefore can be regarded as the upper frequency bound below which the presented quasi-static results are applicable.

  7. On the duality between long-run relations and common trends in the I(1) versus I(2) model

    DEFF Research Database (Denmark)

    Juselius, Katarina

    1994-01-01

    Long-run relations and common trends are discussed in terms of the multivariate cointegration model given in the autoregressive and the moving average form. The basic results needed for the analysis of I(1) and I(2) processes are reviewed and applied to Danish monetary data. The test procedures reveal that nominal money stock is essentially I(2). Long-run price homogeneity is supported by the data and imposed on the system. It is found that the bond rate is weakly exogenous for the long-run parameters and therefore acts as a driving trend. Using the nonstationarity property of the data, "excess money" is estimated and its effect on the other determinants of the system is investigated. In particular, it is found that "excess money" has no effect on price inflation.

  8. Network Modeling and Simulation A Practical Perspective

    CERN Document Server

    Guizani, Mohsen; Khan, Bilal

    2010-01-01

    Network Modeling and Simulation is a practical guide to using modeling and simulation to solve real-life problems. The authors give a comprehensive exposition of the core concepts in modeling and simulation, and then systematically address the many practical considerations faced by developers in modeling complex large-scale systems. The authors provide examples from computer and telecommunication networks and use these to illustrate the process of mapping generic simulation concepts to domain-specific problems in different industries and disciplines. Key features: Provides the tools and strate

  9. Notes on 'Hit-And-Run enables efficient weight generation for simulation-based multiple criteria decision analysis'

    NARCIS (Netherlands)

    van Valkenhoef, Gert; Tervonen, Tommi; Postmus, Douwe

    2014-01-01

    In our previous work published in this journal, we showed how the Hit-And-Run (HAR) procedure enables efficient sampling of criteria weights from a space formed by restricting a simplex with arbitrary linear inequality constraints. In this short communication, we note that the method for generating
  11. Wave Run-Up on Offshore Wind Turbines

    DEFF Research Database (Denmark)

    Ramirez, Jorge Robert Rodriguez

    This study has investigated the interaction of water waves with a circular structure known as wave run-up phenomenon. This run-up phenomenon has been simulated by the use of computational fluid dynamic models. The numerical model (NS3) used in this study has been verified rigorously against...... to get a better understanding of the phenomenon. According to the results from this analysis it has been established that the run-up heights are largely influenced by the deep water wave steepness. Overall, the outcome of this research is that the simplified model presented in this thesis of the wave run...

  12. Evaluation of land surface model representation of phenology: an analysis of model runs submitted to the NACP Interim Site Synthesis

    Science.gov (United States)

    Richardson, A. D.; Nacp Interim Site Synthesis Participants

    2010-12-01

    Phenology represents a critical intersection point between organisms and their growth environment. It is for this reason that phenology is a sensitive and robust integrator of the biological impacts of year-to-year climate variability and longer-term climate change on natural systems. However, it is perhaps equally important that phenology, by controlling the seasonal activity of vegetation on the land surface, plays a fundamental role in regulating ecosystem processes, competitive interactions, and feedbacks to the climate system. Unfortunately, the phenological sub-models implemented in most state-of-the-art ecosystem models and land surface schemes are overly simplified. We quantified model errors in the representation of the seasonal cycles of leaf area index (LAI), gross ecosystem photosynthesis (GEP), and net ecosystem exchange of CO2. Our analysis was based on site-level model runs (14 different models) submitted to the North American Carbon Program (NACP) Interim Synthesis, and long-term measurements from 10 forested (5 evergreen conifer, 5 deciduous broadleaf) sites within the AmeriFlux and Fluxnet-Canada networks. Model predictions of the seasonality of LAI and GEP were unacceptable, particularly in spring, and especially for deciduous forests. This is despite an historical emphasis on deciduous forest phenology, and the perception that controls on spring phenology are better understood than autumn phenology. Errors of up to 25 days in predicting “spring onset” transition dates were common, and errors of up to 50 days were observed. For deciduous sites, virtually every model was biased towards spring onset being too early, and autumn senescence being too late. Thus, models predicted growing seasons that were far too long for deciduous forests. For most models, errors in the seasonal representation of deciduous forest LAI were highly correlated with errors in the seasonality of both GPP and NEE, indicating the importance of getting the underlying

  13. Method of Running Sines: Modeling Variability in Long-Period Variables

    CERN Document Server

    Andronov, Ivan L

    2013-01-01

    We review one of the complementary methods for time series analysis, the method of "running sines". "Crash tests" of the method include signals with a large period variation and with a large trend. The method is most effective for "nearly periodic" signals, which exhibit a "wavy shape" with a "cycle length" varying within a few dozen per cent (i.e. oscillations of low coherence). This is a typical case for brightness variations of long-period pulsating variables and resembles QPO (Quasi-Periodic Oscillations) and TPO (Transient Periodic Oscillations) in interacting binary stars: cataclysmic variables, symbiotic variables, low-mass X-ray binaries etc. The general theory of "running approximations" was described by Andronov (1997A&AS..125..207A), one realization of which is the method of "running sines". The method is related to Morlet-type wavelet analysis improved for irregularly spaced data (Andronov, 1998KFNT...14..490A, 1999sss..conf...57A), as well as to the classical "running mean" (= "moving average"). The ...
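
    The core operation, a least-squares sine fit on a sliding window, can be sketched directly. The synthetic light curve, window width, and trial period below are illustrative assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical "nearly periodic" light curve plus noise
t = np.linspace(0, 100, 2000)
period = 25.0
y = 1.5 * np.sin(2 * np.pi * t / period) + 0.1 * rng.standard_normal(t.size)

def running_sine(t, y, t0, window, period):
    """Least-squares fit of a + b sin(wt) + c cos(wt) on one sliding window."""
    m = np.abs(t - t0) < window / 2
    w = 2 * np.pi * t[m] / period
    X = np.column_stack([np.ones(m.sum()), np.sin(w), np.cos(w)])
    a, b, c = np.linalg.lstsq(X, y[m], rcond=None)[0]
    return a, np.hypot(b, c)   # local mean level and local semi-amplitude

mean_lvl, amp = running_sine(t, y, 50.0, 2 * period, period)
print(f"local mean={mean_lvl:.2f}  semi-amplitude={amp:.2f}")
```

    Sweeping t0 across the record yields the time-dependent mean level and amplitude that the method uses to track low-coherence oscillations.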

  14. Modeling the milling tool wear by using an evolutionary SVM-based model from milling runs experimental data

    Science.gov (United States)

    Nieto, Paulino José García; García-Gonzalo, Esperanza; Vilán, José Antonio Vilán; Robleda, Abraham Segade

    2015-12-01

    The main aim of this research work is to build a new practical hybrid regression model to predict milling tool wear in a regular cut as well as the entry and exit cuts of a milling tool. The model is based on Particle Swarm Optimization (PSO) in combination with support vector machines (SVMs). This optimization mechanism involves kernel-parameter setting in the SVM training procedure, which significantly influences the regression accuracy. Bearing this in mind, a PSO-SVM-based model, grounded in statistical learning theory, was successfully used here to predict the milling tool flank wear (output variable) as a function of the following input variables: the time duration of the experiment, depth of cut, feed, type of material, etc. To accomplish the objective of this study, the experimental dataset represents runs on a milling machine under various operating conditions. Data sampled by three different types of sensors (acoustic emission sensor, vibration sensor and current sensor) were acquired at several positions. A second aim is to determine the factors with the greatest bearing on milling tool flank wear, with a view to proposing improvements to the milling machine. Firstly, this hybrid PSO-SVM-based regression model captures the main insight of statistical learning theory in order to obtain a good prediction of the dependence between the flank wear (output variable) and the input variables (time, depth of cut, feed, etc.). Indeed, regression with optimal hyperparameters was performed and a determination coefficient of 0.95 was obtained. The agreement of this model with the experimental data confirmed its good performance. Secondly, the main advantages of this PSO-SVM-based model are its capacity to produce a simple, easy-to-interpret model, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, the main conclusions of this study are presented.
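
    The PSO component that tunes the SVM kernel parameters can be sketched on its own. Below, a smooth quadratic surface stands in for the cross-validation error over two log-scale hyperparameters (e.g. C and the kernel width); the swarm size, inertia, and acceleration coefficients are standard but assumed values, and no actual SVM is trained.

```python
import numpy as np

rng = np.random.default_rng(5)

def loss(p):
    """Stand-in validation-error surface over two hyperparameters;
    assumed minimum at (1.0, -2.0)."""
    return (p[..., 0] - 1.0) ** 2 + 2.0 * (p[..., 1] + 2.0) ** 2

n, dims, iters = 20, 2, 200
pos = rng.uniform(-5, 5, (n, dims))
vel = np.zeros((n, dims))
pbest, pbest_val = pos.copy(), loss(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients (assumed)
for _ in range(iters):
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    # Velocity update: inertia + pull toward personal best + pull toward global best
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = loss(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)].copy()
print("best hyperparameters:", gbest.round(3), " loss:", float(loss(gbest)))
```

    In the paper's setting, `loss` would be the SVM's validation error for a given kernel-parameter vector, evaluated by training the SVM at each particle position.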

  15. Real-time model for simulating a tracked vehicle on deformable soils

    Directory of Open Access Journals (Sweden)

    Martin Meywerk

    2016-05-01

    Full Text Available Simulation is one way to gain insight into the behaviour of tracked vehicles on deformable soils. Many publications are known on this topic, but most of the simulations described there cannot be run in real time. The ability to run a simulation in real time is necessary for driving simulators. This article describes an approach for the real-time simulation of a tracked vehicle on deformable soils. The components of the real-time model are as follows: a conventional wheeled vehicle simulated in the Multi Body System software TRUCKSim, a geometric description of the landscape, a track model, and an interaction model based on Bekker theory and the Janosi–Hanamoto equation between the track and the deformable soil, on the one hand, and between the track and the vehicle wheels, on the other. Landscape, track model, soil model and the interactions are implemented in MATLAB/Simulink. The details of the real-time model are described in this article; a detailed description of the Multi Body System part is omitted. Simulations with the real-time model are compared to measurements and to a detailed Multi Body System-finite element method model of a tracked vehicle. An application of the real-time model in a driving simulator is presented, in which 13 drivers assess the comfort of a passive and an active suspension of a tracked vehicle.

  16. VHDL simulation with access to transistor models

    Science.gov (United States)

    Gibson, J.

    1991-01-01

    Hardware description languages such as VHDL have evolved to aid in the design of systems with large numbers of elements and a wide range of electronic and logical abstractions. For high performance circuits, behavioral models may not be able to efficiently include enough detail to give designers confidence in a simulation's accuracy. One option is to provide a link between the VHDL environment and a transistor level simulation environment. The coupling of the Vantage Analysis Systems VHDL simulator and the NOVA simulator provides the combination of VHDL modeling and transistor modeling.

  17. Quantum game simulator, using the circuit model of quantum computation

    Science.gov (United States)

    Vlachos, Panagiotis; Karafyllidis, Ioannis G.

    2009-10-01

    We present a general two-player quantum game simulator that can simulate any two-player quantum game described by a 2×2 payoff matrix (two strategy games).The user can determine the payoff matrices for both players, their strategies and the amount of entanglement between their initial strategies. The outputs of the simulator are the expected payoffs of each player as a function of the other player's strategy parameters and the amount of entanglement. The simulator also produces contour plots that divide the strategy spaces of the game in regions in which players can get larger payoffs if they choose to use a quantum strategy against any classical one. We also apply the simulator to two well-known quantum games, the Battle of Sexes and the Chicken game. Program summaryProgram title: Quantum Game Simulator (QGS) Catalogue identifier: AEED_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEED_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3416 No. of bytes in distributed program, including test data, etc.: 583 553 Distribution format: tar.gz Programming language: Matlab R2008a (C) Computer: Any computer that can sufficiently run Matlab R2008a Operating system: Any system that can sufficiently run Matlab R2008a Classification: 4.15 Nature of problem: Simulation of two player quantum games described by a payoff matrix. Solution method: The program calculates the matrices that comprise the Eisert setup for quantum games based on the quantum circuit model. There are 5 parameters that can be altered. We define 3 of them as constant. We play the quantum game for all possible values for the other 2 parameters and store the results in a matrix. Unusual features: The software provides an easy way of simulating any two-player quantum games. 
Running time: Approximately
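The Eisert quantum-circuit setup named in the program summary can be sketched compactly. The snippet below is an illustrative reimplementation, not the distributed Matlab code: the function names `U` and `expected_payoffs`, and the Prisoner's Dilemma payoff values, are assumptions chosen for the example, with `gamma` playing the role of the entanglement parameter between the players' initial strategies.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])

def U(theta, phi):
    """Two-parameter strategy operator from the Eisert quantum-game scheme."""
    return np.array([[np.exp(1j * phi) * np.cos(theta / 2), np.sin(theta / 2)],
                     [-np.sin(theta / 2), np.exp(-1j * phi) * np.cos(theta / 2)]])

def expected_payoffs(payoff_a, payoff_b, strat_a, strat_b, gamma):
    """Expected payoffs of both players; gamma in [0, pi/2] sets entanglement."""
    # Entangling gate J = cos(gamma/2) I(x)I + i sin(gamma/2) X(x)X
    J = np.cos(gamma / 2) * np.kron(I2, I2) + 1j * np.sin(gamma / 2) * np.kron(X, X)
    psi = J.conj().T @ np.kron(U(*strat_a), U(*strat_b)) @ J @ np.array([1, 0, 0, 0])
    probs = np.abs(psi) ** 2                     # outcomes ordered CC, CD, DC, DD
    pa = probs @ np.asarray(payoff_a, float).ravel()
    pb = probs @ np.asarray(payoff_b, float).ravel()
    return pa, pb

# Illustrative Prisoner's Dilemma payoffs; maximal entanglement, both play Q = U(0, pi/2)
A = [[3, 0], [5, 1]]
B = [[3, 5], [0, 1]]
pa, pb = expected_payoffs(A, B, (0.0, np.pi / 2), (0.0, np.pi / 2), np.pi / 2)
```

With maximal entanglement, both players choosing the quantum move U(0, π/2) recover the mutual-cooperation payoff (3, 3), the well-known quantum Prisoner's Dilemma result, while with no entanglement mutual defection U(π, 0) yields (1, 1).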

  18. Modeling and Simulation of Low Voltage Arcs

    NARCIS (Netherlands)

    Ghezzi, L.; Balestrero, A.

    2010-01-01

    Modeling and Simulation of Low Voltage Arcs is an attempt to improve the physical understanding, mathematical modeling and numerical simulation of the electric arcs that are found during current interruptions in low voltage circuit breakers. An empirical description is gained by refined electrical

  20. Whole-building Hygrothermal Simulation Model

    DEFF Research Database (Denmark)

    Rode, Carsten; Grau, Karl

    2003-01-01

    An existing integrated simulation tool for dynamic thermal simulation of building was extended with a transient model for moisture release and uptake in building materials. Validation of the new model was begun with comparison against measurements in an outdoor test cell furnished with single mat...

  1. Simulations

    CERN Document Server

    Ngada, N M

    2015-01-01

    The complexity and cost of building and running high-power electrical systems make the use of simulations unavoidable. The simulations available today provide great understanding about how systems really operate. This paper helps the reader to gain an insight into simulation in the field of power converters for particle accelerators. Starting with the definition and basic principles of simulation, two simulation types, as well as their leading tools, are presented: analog and numerical simulations. Some practical applications of each simulation type are also considered. The final conclusion then summarizes the main important items to keep in mind before opting for a simulation tool or before performing a simulation.

  2. Modeling and simulation of normal and hemiparetic gait

    Science.gov (United States)

    Luengas, Lely A.; Camargo, Esperanza; Sanchez, Giovanni

    2015-09-01

    Gait is the collective term for the two types of bipedal locomotion, walking and running; this paper focuses on walking. The analysis of human gait is of interest to many different disciplines, including biomechanics, human-movement science, rehabilitation and medicine in general. Here we present a new model that is capable of reproducing the properties of walking, both normal and pathological. The aim of this paper is to establish the biomechanical principles that underlie human walking by using the Lagrange method. Dissipative forces, modeled with a Rayleigh dissipation function to account for the effect on the tissues during gait, are included as constraint forces. Depending on the value of the factor in the Rayleigh dissipation function, both normal and pathological gait can be simulated. We first apply the model to normal gait and then to permanent hemiparetic gait. Anthropometric data for an adult are used in the simulations; anthropometric data for children can also be used, provided the appropriate anthropometric tables are consulted. Validation of the model includes simulations of passive dynamic gait on level ground. The dynamic walking approach provides a new perspective on gait analysis, focusing on the kinematics and kinetics of gait. There have been studies and simulations of normal human gait, but few have focused on abnormal, especially hemiparetic, gait. Quantitative comparisons of the model predictions with gait measurements show that the model can reproduce the significant characteristics of normal gait.
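The core idea — that a single dissipation factor in the Rayleigh function separates normal from pathological gait — can be sketched with a damped compound-pendulum swing leg. This is a toy stand-in, assuming a simple rod model; the function name and every parameter value are invented for illustration and are not taken from the paper.

```python
import numpy as np

def swing_leg_peak(b, seconds=4.0, dt=0.001, m=8.0, l=0.45, g=9.81):
    """Swing leg as a damped compound pendulum: I q'' = -m g (l/2) sin(q) - b q'.
    The damping coefficient b plays the role of the Rayleigh dissipation factor;
    a larger b mimics the increased dissipation of a hemiparetic leg."""
    I = m * l ** 2 / 3.0          # thin rod rotating about the hip
    q, qd = 0.5, 0.0              # initial hip angle (rad) and angular velocity
    steps = int(seconds / dt)
    tail_peak = 0.0
    for i in range(steps):
        qdd = (-m * g * (l / 2.0) * np.sin(q) - b * qd) / I
        qd += qdd * dt            # semi-implicit Euler keeps the swing stable
        q += qd * dt
        if i >= steps - 1000:     # peak amplitude over the final second
            tail_peak = max(tail_peak, abs(q))
    return tail_peak

normal = swing_leg_peak(b=0.5)    # light dissipation: the leg keeps swinging
paretic = swing_leg_peak(b=5.0)   # heavy dissipation: the swing dies out
```

Varying only `b` reproduces the qualitative contrast the paper describes: the heavily damped leg loses its swing amplitude far faster than the lightly damped one.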

  3. Discharge simulations performed with a hydrological model using bias corrected regional climate model input

    Directory of Open Access Journals (Sweden)

    S. C. van Pelt

    2009-12-01

    Studies have demonstrated that precipitation in the Northern Hemisphere mid-latitudes has increased in recent decades and that this trend is likely to continue, which will influence the discharge of the river Meuse. The use of bias correction methods is important when the effect of precipitation change on river discharge is studied. The objective of this paper is to investigate the effect of using two different bias correction methods on output from a Regional Climate Model (RCM) simulation. In this study a Regional Atmospheric Climate Model (RACMO2) run is used, forced by ECHAM5/MPIOM under the SRES-A1B emission scenario, with a 25 km horizontal resolution. The RACMO2 runs contain a systematic precipitation bias, to which two bias correction methods are applied. The first method corrects for the wet-day fraction and wet-day average (WD bias correction) and the second corrects for the mean and coefficient of variation (MV bias correction). The WD bias correction initially corrects well for the average, but it appears that too many successive precipitation days were removed by this correction. The second method performed less well on average bias correction, but the temporal precipitation pattern was better. Subsequently, the discharge was calculated by using the RACMO2 output as forcing for the HBV-96 hydrological model. A large difference was found between the simulated discharge of the uncorrected RACMO2 run, the WD bias-corrected run and the MV bias-corrected run. These results show the importance of an appropriate bias correction.
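A mean-and-variability correction of the kind the abstract calls "MV bias correction" can be sketched with a simple linear rescaling that forces the simulated series to match the observed mean and standard deviation (and hence the coefficient of variation). This is a simplified stand-in for the paper's method, with synthetic data in place of RACMO2 output:

```python
import numpy as np

def mv_bias_correct(sim, obs):
    """Linearly rescale the simulated series so its mean and standard
    deviation (hence its coefficient of variation) match the observations.
    A simplified illustration of an MV-style bias correction."""
    sim = np.asarray(sim, float)
    obs = np.asarray(obs, float)
    return obs.mean() + (sim - sim.mean()) * (obs.std() / sim.std())

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 3.0, 5000)               # synthetic "observed" daily precipitation
sim = 1.4 * rng.gamma(2.0, 3.0, 5000) + 2.0   # synthetic biased RCM output
corr = mv_bias_correct(sim, obs)
```

Note one limitation a real study must handle: a linear rescaling can produce negative precipitation and does not touch the wet-day frequency, which is exactly what the separate WD correction in the paper targets.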

  4. Whole-building Hygrothermal Simulation Model

    DEFF Research Database (Denmark)

    Rode, Carsten; Grau, Karl

    2003-01-01

    An existing integrated simulation tool for dynamic thermal simulation of building was extended with a transient model for moisture release and uptake in building materials. Validation of the new model was begun with comparison against measurements in an outdoor test cell furnished with single...... materials. Almost quasi-steady, cyclic experiments were used to compare the indoor humidity variation and the numerical results of the integrated simulation tool with the new moisture model. Except for the case with chipboard as furnishing, the predictions of indoor humidity with the detailed model were...

  5. Simulation model of metallurgical production management

    Directory of Open Access Journals (Sweden)

    P. Šnapka

    2013-07-01

    This article is focused on the intensification of the metallurgical production process. Its aim is to explain a simulation model that represents a metallurgical production management system adapted to new requirements. Creating this model requires knowledge of the dynamic behavior and features of the metallurgical production system and its management. The characteristics that determine the dynamics of the metallurgical production process are described. The simulation model is structured as functional blocks and their linkages, with regard to the organizational and temporal hierarchy of their actions. The creation of the presented simulation model is based on theoretical findings on regulation, hierarchical systems and optimization.

  6. Running and addiction: precipitated withdrawal in a rat model of activity-based anorexia

    OpenAIRE

    Kanarek, Robin B.; D'Anci, Kristen E.; Jurdak, Nicole; Mathes, Wendy Foulds

    2009-01-01

    Physical activity improves cardiovascular health, strengthens muscles and bones, stimulates neuroplasticity, and promotes feelings of well-being and self-esteem. However, when taken to extremes, exercise can develop into an addictive-like behavior. To further assess the addictive potential of physical activity, the present experiments assessed whether running wheel activity in rats would lead to physical dependence similar to that observed after chronic morphine administration. Active male an...

  7. MathRun: An Adaptive Mental Arithmetic Game Using A Quantitative Performance Model

    OpenAIRE

    Chen, L.; Tang, Wen

    2016-01-01

    Pedagogy and the way children learn are changing rapidly with the introduction of widely accessible computer technologies, from mobile apps to interactive educational games. Digital games have the capacity to embed many learning supports using the widely accredited VARK (visual, auditory, reading, and kinaesthetic) learning style. In this paper, we present a mathematics educational game, MathRun, for children aged between 7 and 11 years to practice mental arithmetic. We build the game as an inte...

  8. Treadmill running improves spatial memory in an animal model of Alzheimer's disease.

    Science.gov (United States)

    Hoveida, Reihaneh; Alaei, Hojjatallah; Oryan, Shahrbanoo; Parivar, Kazem; Reisi, Parham

    2011-01-01

    Alzheimer's disease (AD) is a progressive neurodegenerative disease characterized by a decline in cognitive function and severe neuronal loss in the cerebral cortex and certain subcortical regions of the brain, including the nucleus basalis magnocellularis (NBM), which plays an important role in learning and memory. Few therapeutic regimens influence the underlying pathogenic phenotypes of AD; of the currently available therapies, however, exercise training is considered one of the best strategies for attenuating its pathological phenotypes. Here, we sought to investigate the effect of treadmill running on spatial memory in Alzheimer-induced rats. Male Wistar rats were split into two groups, shams (n=7) and lesions, with the lesion group subdivided further into lesion-rest (n=7) and lesion-exercise (n=7) subgroups. The lesion-exercise and sham groups were subjected to treadmill running at 17 meters per minute (m/min) for 60 min per day (min/day), 7 days per week (days/wk), for 60 days. Spatial memory was investigated using the Morris water maze test after the 60 days of Alzheimer induction and exercise. Our data demonstrated that spatial memory was indeed impaired in the lesion group compared with the shams. However, exercise notably improved spatial memory in the lesion-exercise rats compared with the lesion-rest group. The present results suggest that spatial memory is impaired under Alzheimer conditions, that treadmill running ameliorates these deficits, and that treadmill running thereby contributes to alleviating the cognitive decline in AD.

  9. Voluntary Running Attenuates Memory Loss, Decreases Neuropathological Changes and Induces Neurogenesis in a Mouse Model of Alzheimer's Disease.

    Science.gov (United States)

    Tapia-Rojas, Cheril; Aranguiz, Florencia; Varela-Nallar, Lorena; Inestrosa, Nibaldo C

    2016-01-01

    Alzheimer's disease (AD) is a neurodegenerative disorder characterized by loss of memory and cognitive abilities and by the appearance of amyloid plaques composed of the amyloid-β peptide (Aβ) and neurofibrillary tangles formed of tau protein. It has been suggested that exercise might ameliorate the disease; here, we evaluated the effect of voluntary running on several aspects of AD, including amyloid deposition, tau phosphorylation, inflammatory reaction, neurogenesis and spatial memory, in the double transgenic APPswe/PS1ΔE9 mouse model of AD. We report that voluntary wheel running for 10 weeks decreased Aβ burden, Thioflavin-S-positive plaques and Aβ oligomers in the hippocampus. In addition, runner APPswe/PS1ΔE9 mice showed less phosphorylated tau protein and decreased astrogliosis, evidenced by lower staining of GFAP. Further, runner APPswe/PS1ΔE9 mice showed an increased number of neurons in the hippocampus and exhibited increased cell proliferation and generation of cells positive for the immature neuronal protein doublecortin, indicating that running increased neurogenesis. Finally, runner APPswe/PS1ΔE9 mice showed improved spatial memory performance in the Morris water maze. Altogether, our findings indicate that in APPswe/PS1ΔE9 mice, voluntary running reduced all the neuropathological hallmarks of AD studied, reduced neuronal loss, increased hippocampal neurogenesis and reduced spatial memory loss. These findings support the idea that voluntary exercise might have therapeutic value in AD.

  10. Running Exercise Alleviates Pain and Promotes Cell Proliferation in a Rat Model of Intervertebral Disc Degeneration

    Directory of Open Access Journals (Sweden)

    Shuo Luan

    2015-01-01

    Chronic low back pain accompanied by intervertebral disc degeneration is a common musculoskeletal disorder. Physical exercise, which is clinically recommended by international guidelines, has proven to be effective for degenerative disc disease (DDD) patients. However, the mechanism underlying the analgesic effects of physical exercise on DDD remains largely unclear. The results of the present study showed that mechanical withdrawal thresholds of the bilateral hindpaws were significantly decreased beginning on day three after intradiscal complete Freund's adjuvant (CFA) injection, and daily running exercise remarkably reduced allodynia in the CFA exercise group beginning at day 28 compared to the spontaneous-recovery group (controls). The hindpaw withdrawal thresholds of the exercise group returned nearly to baseline by the end of the experiment, but severe pain persisted in the control group. Histological examinations performed on day 70 revealed that running exercise restored the degenerative discs and increased the cell densities of the annulus fibrosus (AF) and nucleus pulposus (NP). Furthermore, immunofluorescence labeling revealed significantly higher numbers of 5-bromo-2-deoxyuridine (BrdU)-positive cells in the exercise group on days 28, 42, 56 and 70, indicating more rapid proliferation compared to the control group at the corresponding time points. Taken together, these results suggest that running exercise might alleviate the mechanical allodynia induced by intradiscal CFA injection via disc repair and cell proliferation, which provides new evidence for future clinical use.

  11. Using Discrete Event Simulation for Programming Model Exploration at Extreme-Scale: Macroscale Components for the Structural Simulation Toolkit (SST).

    Energy Technology Data Exchange (ETDEWEB)

    Wilke, Jeremiah J [Sandia National Laboratories (SNL-CA), Livermore, CA (United States); Kenny, Joseph P. [Sandia National Laboratories (SNL-CA), Livermore, CA (United States)

    2015-02-01

    Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debugging, running, and waiting for results on an actual system, designs can first be iterated in a simulator. This is particularly useful when test beds cannot be used, e.g., to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the Structural Simulation Toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics, such as call graphs, to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.
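The discrete event core that the abstract builds on can be illustrated in a few lines: events are scheduled at future virtual times and always fire in time order, regardless of the order in which they were scheduled. This is a toy illustration of the mechanism, not SST's API; the class and method names are invented.

```python
import heapq

class Simulator:
    """Minimal discrete-event core: a priority queue keyed on virtual time."""
    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._counter = 0              # tie-breaker keeps heap comparisons stable

    def schedule(self, delay, fn):
        """Schedule fn to fire `delay` virtual-time units from now."""
        self._counter += 1
        heapq.heappush(self._queue, (self.now + delay, self._counter, fn))

    def run(self):
        """Pop and fire events in virtual-time order until none remain."""
        while self._queue:
            self.now, _, fn = heapq.heappop(self._queue)
            fn()

sim = Simulator()
log = []
sim.schedule(5.0, lambda: log.append(('recv', sim.now)))  # scheduled first...
sim.schedule(1.0, lambda: log.append(('send', sim.now)))  # ...but fires second
sim.run()
```

Even though `recv` was scheduled first, `send` fires first because its virtual timestamp is earlier; that decoupling of wall-clock order from virtual-time order is what lets a simulator model scales that do not physically exist.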

  12. Modeling and simulation of dust behaviors behind a moving vehicle

    Science.gov (United States)

    Wang, Jingfang

    Simulation of physically realistic complex dust behaviors is a difficult and attractive problem in computer graphics. A fast, interactive and visually convincing model of dust behaviors behind moving vehicles is very useful in computer simulation, training, education, art, advertising, and entertainment. In my dissertation, an experimental interactive system has been implemented for the simulation of dust behaviors behind moving vehicles. The system includes physically-based models, particle systems, rendering engines and a graphical user interface (GUI). I have employed several vehicle models, including tanks, cars, and jeeps, to test and simulate different scenarios and conditions: calm weather, windy conditions, the vehicle turning left or right, and vehicle motion controlled by users from the GUI. I have also tested the factors that affect the physical behaviors and graphical appearance of the dust particles through the GUI or off-line scripts. The simulations are done on a Silicon Graphics Octane station. The animation of dust behaviors is achieved by physically-based modeling and simulation. The flow around a moving vehicle is modeled using computational fluid dynamics (CFD) techniques. I implement a primitive-variable, pressure-correction approach to solve the three-dimensional incompressible Navier-Stokes equations in a volume covering the moving vehicle. An alternating-direction implicit (ADI) method is used for the solution of the momentum equations, with a successive over-relaxation (SOR) method for the solution of the Poisson pressure equation. Boundary conditions are defined and simplified according to their dynamic properties. The dust particle dynamics is modeled using particle systems, statistics, and procedural modeling techniques. Graphics and real-time simulation techniques, such as dynamics synchronization, motion blur, blending, and clipping, have been employed in the rendering to achieve realistic-appearing dust
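The SOR step of the pressure-correction scheme described above can be sketched for a 2-D Poisson pressure equation. This is an illustrative solver under simplified assumptions (zero-pressure boundaries, a unit grid), not the dissertation's code; the function name and constants are invented.

```python
import numpy as np

def sor_poisson(rhs, h=1.0, omega=1.7, tol=1e-6, max_iter=20000):
    """Successive over-relaxation for the 2-D Poisson equation lap(p) = rhs
    with p = 0 on the boundary -- the kind of pressure solve used in
    pressure-correction CFD schemes."""
    p = np.zeros_like(rhs, dtype=float)
    n, m = rhs.shape
    for _ in range(max_iter):
        diff = 0.0
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                # Gauss-Seidel update from the 5-point stencil...
                gs = (p[i + 1, j] + p[i - 1, j] + p[i, j + 1] + p[i, j - 1]
                      - h * h * rhs[i, j]) / 4.0
                # ...over-relaxed by omega to accelerate convergence
                new = (1.0 - omega) * p[i, j] + omega * gs
                diff = max(diff, abs(new - p[i, j]))
                p[i, j] = new
        if diff < tol:
            break
    return p

rhs = np.zeros((17, 17))
rhs[8, 8] = -1.0          # a point sink, loosely like the low-pressure wake region
p = sor_poisson(rhs)
```

With 1 < omega < 2 the iteration converges markedly faster than plain Gauss-Seidel, which is why SOR is a common choice for the Poisson pressure equation inside each time step.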

  13. Dual-use tools and systematics-aware analysis workflows in the ATLAS Run-2 analysis model

    CERN Document Server

    FARRELL, Steven; The ATLAS collaboration; Calafiura, Paolo; Delsart, Pierre-Antoine; Elsing, Markus; Koeneke, Karsten; Krasznahorkay, Attila; Krumnack, Nils; Lancon, Eric; Lavrijsen, Wim; Laycock, Paul; Lei, Xiaowen; Strandberg, Sara Kristina; Verkerke, Wouter; Vivarelli, Iacopo; Woudstra, Martin

    2015-01-01

    The ATLAS analysis model has been overhauled for the upcoming run of data collection in 2015 at 13 TeV. One key component of this upgrade was the Event Data Model (EDM), which now allows for greater flexibility in the choice of analysis software framework and provides powerful new features that can be exploited by analysis software tools. A second key component of the upgrade is the introduction of a dual-use tool technology, which provides abstract interfaces for analysis software tools to run in either the Athena framework or a ROOT-based framework. The tool interfaces, including a new interface for handling systematic uncertainties, have been standardized for the development of improved analysis workflows and consolidation of high-level analysis tools. This paper will cover the details of the dual-use tool functionality, the systematics interface, and how these features fit into a centrally supported analysis environment.
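The dual-use idea — one tool object driven through an abstract interface by either of two frameworks — can be sketched generically. All class, method and variable names below are invented for illustration; they are not ATLAS API names, and the systematic-variation hook is a deliberately simplified stand-in for the standardized systematics interface the abstract describes.

```python
from abc import ABC, abstractmethod

class AnalysisTool(ABC):
    """Abstract dual-use tool interface: any framework that knows only this
    interface can drive any concrete tool."""
    @abstractmethod
    def initialize(self): ...
    @abstractmethod
    def execute(self, event): ...

    def apply_systematic(self, name, sigma):
        """Simplified hook standing in for a systematics interface."""
        self.variation = (name, sigma)

class JetCalibrationTool(AnalysisTool):
    """Toy concrete tool: rescales jet transverse momenta."""
    def initialize(self):
        self.scale = 1.0
    def execute(self, event):
        shift = getattr(self, 'variation', ('nominal', 0.0))[1]
        return [pt * (self.scale + 0.01 * shift) for pt in event]

def run_in_framework(tool, events):
    """Either framework (Athena-like or ROOT-like) needs only the interface."""
    tool.initialize()
    return [tool.execute(e) for e in events]

out = run_in_framework(JetCalibrationTool(), [[50.0, 30.0]])
```

Because both hosting environments call only `initialize`/`execute`, the same tool, including its systematic variations, runs unchanged in either one, which is the consolidation benefit the abstract claims for the dual-use technology.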

  14. Simulation modeling for the health care manager.

    Science.gov (United States)

    Kennedy, Michael H

    2009-01-01

    This article addresses the use of simulation software to solve administrative problems faced by health care managers. Spreadsheet add-ins, process simulation software, and discrete event simulation software are available at a range of costs and complexity. All use the Monte Carlo method to realistically integrate probability distributions into models of the health care environment. Problems typically addressed by health care simulation modeling are facility planning, resource allocation, staffing, patient flow and wait time, routing and transportation, supply chain management, and process improvement.
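One of the problem classes listed above, patient flow and wait time, reduces to a Monte Carlo queueing sketch. The model below is a single-server clinic with exponential arrivals and service, using the Lindley recursion for successive waits; the function name and all parameter values are illustrative, not from the article.

```python
import random

def clinic_mean_wait(n_patients=10000, mean_interarrival=6.0,
                     mean_service=5.0, seed=1):
    """Monte Carlo mean wait (minutes) for a single-server clinic.
    Uses the Lindley recursion W' = max(0, W + S - A): each patient's wait
    is the previous wait plus service time minus the next interarrival gap."""
    rng = random.Random(seed)
    wait, total = 0.0, 0.0
    for _ in range(n_patients):
        a = rng.expovariate(1.0 / mean_interarrival)  # gap to next arrival
        s = rng.expovariate(1.0 / mean_service)       # service duration
        wait = max(0.0, wait + s - a)
        total += wait
    return total / n_patients

avg = clinic_mean_wait()
```

Because the simulated utilisation here is 5/6, waits are long and highly variable; re-running with a shorter mean service time shows immediately how sensitive wait time is to capacity, which is the kind of staffing question the article says managers use such models to answer.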

  15. Warehouse Simulation Through Model Configuration

    NARCIS (Netherlands)

    Verriet, J.H.; Hamberg, R.; Caarls, J.; Wijngaarden, B. van

    2013-01-01

    The pre-build development of warehouse systems leads from a specific customer request to a specific customer quotation. This involves a process of configuring a warehouse system using a sequence of steps that contain increasingly more details. Simulation is a helpful tool in analyzing warehouse desi

  16. Improving vacuum gas oil hydrotreating operation via a lumped parameter dynamic simulation modeling approach

    Energy Technology Data Exchange (ETDEWEB)

    Remesat, D.

    2008-07-01

    Although hydrotreating has become a large part of refining operations for sour crudes, refiners rarely achieve their run-length and crude-throughput objectives for vacuum gas oil (VGO) hydrotreaters. This shortfall in performance can be attributed to crude flow changes, feed compositional changes, sulphur and metals changes, or hydrogen partial pressure changes, all of which reduce the effectiveness of the catalysts that remove sulphur from the crude oil streams. Although some proprietary steady state models exist to indicate performance enhancement during operation, they have not been widely used, and it is not certain whether they would be effective in simulating the process with disturbances over the run length. This study used data not publicly available, gathered from 14 operating hydrotreaters, to develop a lumped parameter dynamic model, using both Excel and HYSYS software, for industrial refinery/upgrader VGO hydrotreaters. The model takes proprietary and public steady state hydrotreater models and successfully applies them in a commercial dynamic simulation package. The model tracks changes in intrinsic reaction rate based on catalyst deactivation, wetting efficiency, feed properties and operating conditions to determine operating temperature, outlet sulphur composition and chemical hydrogen consumed. The model simulates local disturbances and represents the start, middle and end operating zones during the hydrotreater run length. This correlative, partially predictive model demonstrates the economic benefits of increasing hydrogen to improve the operation of a hydrotreater by increasing run length and/or improving crude processing.
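The run-length logic described above can be sketched as a toy lumped-parameter loop: catalyst activity decays day by day, bed temperature is raised to compensate so the outlet sulphur stays on spec, and the run ends at a temperature limit. The deactivation law and every constant below are illustrative assumptions, not the study's proprietary correlations.

```python
import numpy as np

def run_length_days(h2_pressure, t_start=340.0, t_limit=390.0, kd0=0.004):
    """Days until the bed temperature needed to hold conversion reaches the
    metallurgical limit. Higher hydrogen partial pressure is assumed to slow
    coking, hence slower deactivation and a longer run."""
    kd = kd0 / h2_pressure                        # assumed deactivation rate
    activity, temp, days = 1.0, t_start, 0
    while temp < t_limit and days < 5000:
        activity *= np.exp(-kd)                   # first-order daily decay
        temp = t_start + 60.0 * (1.0 - activity)  # temperature make-up for lost activity
        days += 1
    return days

base = run_length_days(h2_pressure=1.0)
boosted = run_length_days(h2_pressure=2.0)        # more hydrogen -> longer run
```

Even this crude sketch reproduces the qualitative economics in the abstract: doubling the assumed hydrogen partial pressure substantially extends the run length before the temperature limit is hit.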

  17. Modeling and Simulation of Matrix Converter

    DEFF Research Database (Denmark)

    Liu, Fu-rong; Klumpner, Christian; Blaabjerg, Frede

    2005-01-01

    This paper discusses the modeling and simulation of the matrix converter. Two models of the matrix converter are presented: one is based on indirect space vector modulation and the other on the power balance equation. The basis of these two models is given and the modeling process is introduced...

  18. Quantum simulation of the t- J model

    Science.gov (United States)

    Yamaguchi, Fumiko; Yamamoto, Yoshihisa

    2002-12-01

    Computer simulation of a many-particle quantum system is bound to reach the inevitable limits of its ability as the system size increases. The primary reason for this is that the memory size used in a classical simulator grows polynomially whereas the Hilbert space of the quantum system does so exponentially. Replacing the classical simulator by a quantum simulator would be an effective method of surmounting this obstacle. The prevailing techniques for simulating quantum systems on a quantum computer have been developed for purposes of computing numerical algorithms designed to obtain approximate physical quantities of interest. The method suggested here requires no numerical algorithms; it is a direct isomorphic translation between a quantum simulator and the quantum system to be simulated. In the quantum simulator, physical parameters of the system, which are the fixed parameters of the simulated quantum system, are under the control of the experimenter. A method of simulating a model for high-temperature superconducting oxides, the t- J model, by optical control, as an example of such a quantum simulation, is presented.

  19. CAMBIO: software for modelling and simulation of bioprocesses.

    Science.gov (United States)

    Farza, M; Chéruy, A

    1991-07-01

    CAMBIO, a software package devoted to bioprocess modelling, which runs on Apollo computers, is described. This software enables bioengineers to easily and interactively design appropriate mathematical models directly from their perception of the process. CAMBIO provides the user with a set of design symbols and mnemonic icons in order to interactively design a functional diagram. This diagram has to exhibit the most relevant components with their related interactions through biological and physico-chemical reactions. CAMBIO then automatically generates the dynamical material balance equations of the process in the form of an algebraic-differential system by taking advantage of the knowledge involved in the functional diagram. The model may be used for control design purposes or completed with kinetic expressions with a view to simulation. CAMBIO offers facilities to generate a simulation model (for coding of kinetics, introducing auxiliary variables, etc.). This model is automatically interfaced with a specialized simulation software package which allows immediate visualization of the process's dynamical behaviour under various operational conditions (possibly involving feedback control strategies). An example of an application dealing with yeast fermentation is given.
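The kind of material-balance model CAMBIO generates for its yeast-fermentation example can be sketched directly: a batch growth/consumption ODE pair with Monod kinetics. This is an illustrative hand-written model, not CAMBIO output; the rate law and constants are typical textbook values chosen for the example.

```python
def ferment(mu_max=0.4, Ks=0.5, Y=0.5, X0=0.1, S0=10.0, dt=0.01, hours=30.0):
    """Batch yeast fermentation material balances:
        dX/dt = mu(S) * X          (biomass growth)
        dS/dt = -mu(S) * X / Y     (substrate consumption)
    with Monod kinetics mu(S) = mu_max * S / (Ks + S), integrated by Euler."""
    X, S = X0, S0
    for _ in range(int(hours / dt)):
        mu = mu_max * S / (Ks + S)
        dX = mu * X
        dS = -dX / Y
        X += dX * dt
        S += dS * dt
        S = max(S, 0.0)            # substrate cannot go negative
    return X, S

X_end, S_end = ferment()
```

The balance structure guarantees conservation: X + Y·S stays at its initial value, so when the substrate is exhausted the biomass ends near X0 + Y·S0 = 5.1 g/L.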

  20. Dark Matter Benchmark Models for Early LHC Run-2 Searches: Report of the ATLAS/CMS Dark Matter Forum

    CERN Document Server

    Abercrombie, Daniel; Akilli, Ece; Alcaraz Maestre, Juan; Allen, Brandon; Alvarez Gonzalez, Barbara; Andrea, Jeremy; Arbey, Alexandre; Azuelos, Georges; Azzi, Patrizia; Backovic, Mihailo; Bai, Yang; Banerjee, Swagato; Beacham, James; Belyaev, Alexander; Boveia, Antonio; Brennan, Amelia Jean; Buchmueller, Oliver; Buckley, Matthew R.; Busoni, Giorgio; Buttignol, Michael; Cacciapaglia, Giacomo; Caputo, Regina; Carpenter, Linda; Filipe Castro, Nuno; Gomez Ceballos, Guillelmo; Cheng, Yangyang; Chou, John Paul; Cortes Gonzalez, Arely; Cowden, Chris; D'Eramo, Francesco; De Cosa, Annapaola; De Gruttola, Michele; De Roeck, Albert; De Simone, Andrea; Deandrea, Aldo; Demiragli, Zeynep; DiFranzo, Anthony; Doglioni, Caterina; du Pree, Tristan; Erbacher, Robin; Erdmann, Johannes; Fischer, Cora; Flaecher, Henning; Fox, Patrick J.; Fuks, Benjamin; Genest, Marie-Helene; Gomber, Bhawna; Goudelis, Andreas; Gramling, Johanna; Gunion, John; Hahn, Kristian; Haisch, Ulrich; Harnik, Roni; Harris, Philip C.; Hoepfner, Kerstin; Hoh, Siew Yan; Hsu, Dylan George; Hsu, Shih-Chieh; Iiyama, Yutaro; Ippolito, Valerio; Jacques, Thomas; Ju, Xiangyang; Kahlhoefer, Felix; Kalogeropoulos, Alexis; Kaplan, Laser Seymour; Kashif, Lashkar; Khoze, Valentin V.; Khurana, Raman; Kotov, Khristian; Kovalskyi, Dmytro; Kulkarni, Suchita; Kunori, Shuichi; Kutzner, Viktor; Lee, Hyun Min; Lee, Sung-Won; Liew, Seng Pei; Lin, Tongyan; Lowette, Steven; Madar, Romain; Malik, Sarah; Maltoni, Fabio; Martinez Perez, Mario; Mattelaer, Olivier; Mawatari, Kentarou; McCabe, Christopher; Megy, Theo; Morgante, Enrico; Mrenna, Stephen; Narayanan, Siddharth M.; Nelson, Andy; Novaes, Sergio F.; Padeken, Klaas Ole; Pani, Priscilla; Papucci, Michele; Paulini, Manfred; Paus, Christoph; Pazzini, Jacopo; Penning, Bjorn; Peskin, Michael E.; Pinna, Deborah; Procura, Massimiliano; Qazi, Shamona F.; Racco, Davide; Re, Emanuele; Riotto, Antonio; Rizzo, Thomas G.; Roehrig, Rainer; Salek, David; Sanchez Pineda, Arturo; Sarkar, Subir; 
Schmidt, Alexander; Schramm, Steven Randolph; Shepherd, William; Singh, Gurpreet; Soffi, Livia; Srimanobhas, Norraphat; Sung, Kevin; Tait, Tim M.P.; Theveneaux-Pelzer, Timothee; Thomas, Marc; Tosi, Mia; Trocino, Daniele; Undleeb, Sonaina; Vichi, Alessandro; Wang, Fuquan; Wang, Lian-Tao; Wang, Ren-Jie; Whallon, Nikola; Worm, Steven; Wu, Mengqing; Wu, Sau Lan; Yang, Hongtao; Yang, Yong; Yu, Shin-Shan; Zaldivar, Bryan; Zanetti, Marco; Zhang, Zhiqing; Zucchetta, Alberto

    2015-01-01

    This document is the final report of the ATLAS-CMS Dark Matter Forum, a forum organized by the ATLAS and CMS collaborations with the participation of experts on theories of Dark Matter, to select a minimal basis set of dark matter simplified models that should support the design of the early LHC Run-2 searches. A prioritized, compact set of benchmark models is proposed, accompanied by studies of the parameter space of these models and a repository of generator implementations. This report also addresses how to apply the Effective Field Theory formalism for collider searches and presents the results of such interpretations.

  1. Dark Matter Benchmark Models for Early LHC Run-2 Searches: Report of the ATLAS/CMS Dark Matter Forum

    OpenAIRE

    Abercrombie, Daniel; Akchurin, Nural; Akilli, Ece; Maestre, Juan Alcaraz; Allen, Brandon; Gonzalez, Barbara Alvarez; Andrea, Jeremy; Arbey, Alexandre; Azuelos, Georges; Azzi, Patrizia; Backović, Mihailo; Bai, Yang; Banerjee, Swagato; Beacham, James; Belyaev, Alexander

    2015-01-01

    This document is the final report of the ATLAS-CMS Dark Matter Forum, a forum organized by the ATLAS and CMS collaborations with the participation of experts on theories of Dark Matter, to select a minimal basis set of dark matter simplified models that should support the design of the early LHC Run-2 searches. A prioritized, compact set of benchmark models is proposed, accompanied by studies of the parameter space of these models and a repository of generator implementations. This report als...

  2. Modeling and Simulation of Matrix Converter

    DEFF Research Database (Denmark)

    Liu, Fu-rong; Klumpner, Christian; Blaabjerg, Frede

    2005-01-01

    This paper discusses the modeling and simulation of the matrix converter. Two models of the matrix converter are presented: one is based on indirect space vector modulation and the other on the power balance equation. The basis of these two models is given and the modeling process is introduced...... in detail. The results of simulations developed for different research purposes reveal that different models may be suitable for different purposes; thus the model should be chosen carefully. Some details and tricks in modeling are also introduced, which give a reference for further research....

  3. Simulation-based Manufacturing System Modeling

    Institute of Scientific and Technical Information of China (English)

    卫东; 金烨; 范秀敏; 严隽琪

    2003-01-01

    In recent years, computer simulation has proven to be a very advantageous technique for researching resource-constrained manufacturing systems. This paper presents an object-oriented simulation modeling method, which combines the merits of traditional methods such as IDEF0 and Petri nets. In this paper, a four-layer-one-angle hierarchical modeling framework based on OOP is defined, and the modeling description of these layers, such as hybrid production control modeling and human resource dispatch modeling, is expounded. To validate the modeling method, a case study of an auto product line in a motor manufacturing company has been carried out.

  4. Oil shale project run summary for small retort Run S-10

    Energy Technology Data Exchange (ETDEWEB)

    Ackerman, F.J.; Sandholtz, W.A.; Raley, J.H.; Laswell, B.H. (eds.)

    1978-06-01

    A combustion run using sidewall heaters to control heat loss and computer control to set heater power was conducted to study the effectiveness of the heater control system, to compare results with a one-dimensional retort model when radial heat loss is not significant, and to determine the effects of recycling off-gas to the retort (by comparison with future runs). It is concluded that adequate simulation of in-situ processing in laboratory retorts requires control of heat losses. (JRD)

  5. Monte Carlo probabilistic sensitivity analysis for patient level simulation models: efficient estimation of mean and variance using ANOVA.

    Science.gov (United States)

    O'Hagan, Anthony; Stevenson, Matt; Madan, Jason

    2007-10-01

    Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
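    The two-level structure the authors exploit can be sketched with a toy patient-level model (the `simulate_patient` function and all numbers below are hypothetical stand-ins, not the paper's health economic model): outer Monte Carlo draws sample the uncertain model inputs, inner draws simulate individual patients, and a one-way ANOVA decomposition separates input uncertainty from patient-level sampling noise.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_patient(theta):
        # hypothetical patient-level model: cost depends on input theta plus patient noise
        return theta + rng.normal(0.0, 5.0)

    N_OUTER, N_INNER = 50, 200                    # PSA input draws x patients per draw
    thetas = rng.normal(100.0, 10.0, N_OUTER)     # randomly sampled model inputs
    runs = np.array([[simulate_patient(t) for _ in range(N_INNER)] for t in thetas])

    run_means = runs.mean(axis=1)
    grand_mean = run_means.mean()                 # estimate of the mean model output

    # One-way ANOVA: separate between-draw (input) variance from within-draw noise.
    ms_within = runs.var(axis=1, ddof=1).mean()
    ms_between = N_INNER * run_means.var(ddof=1)
    var_inputs = max((ms_between - ms_within) / N_INNER, 0.0)
    print(grand_mean, var_inputs)
    ```

    The point of the subtraction is that the raw variance of the run means overstates input uncertainty by the patient-level noise term, which the within-run mean square estimates and removes.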

  6. TransCom model simulations of hourly atmospheric CO2: Experimental overview and diurnal cycle results for 2002

    NARCIS (Netherlands)

    Law, R. M.; Peters, W.; RöDenbeck, C.; Aulagnier, C.; Baker, I.; Bergmann, D. J.; Bousquet, P.; Brandt, J.; Bruhwiler, L.; Cameron-Smith, P. J.; Christensen, J. H.; Delage, F.; Denning, A. S.; Fan, S.; Geels, C.; Houweling, S.; Imasu, R.; Karstens, U.; Kawa, S. R.; Kleist, J.; Krol, M. C.; Lin, S.-J.; Lokupitiya, R.; Maki, T.; Maksyutov, S.; Niwa, Y.; Onishi, R.; Parazoo, N.; Patra, P. K.; Pieterse, G.; Rivier, L.; Satoh, M.; Serrar, S.; Taguchi, S.; Takigawa, M.; Vautard, R.; Vermeulen, A. T.; Zhu, Z.

    2008-01-01

    A forward atmospheric transport modeling experiment has been coordinated by the TransCom group to investigate synoptic and diurnal variations in CO2. Model simulations were run for biospheric, fossil, and air-sea exchange of CO2 and for SF6 and radon for 2000-2003. Twenty-five models or model variants...

  7. Climate simulations for 1880-2003 with GISS modelE

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, J. [NASA Goddard Inst. for Space Studies, New York, NY (United States)]|[Columbia Univ. Earth Inst., New York, NY (United States); Sato, M.; Kharecha, P.; Nazarenko, L.; Aleinov, I.; Bauer, S.; Chandler, M.; Faluvegi, G.; Jonas, J.; Lerner, J.; Perlwitz, J.; Unger, N.; Zhang, S. [Columbia Univ. Earth Inst., New York, NY (United States); Ruedy, R.; Lo, K.; Cheng, Y.; Oinas, V.; Schmunk, R.; Tausnev, N.; Yao, M. [Sigma Space Partners LLC, New York, NY (United States); Lacis, A.; Schmidt, G.A.; Del Genio, A.; Rind, D.; Romanou, A.; Shindell, D. [NASA Goddard Inst. for Space Studies, New York, NY (United States)]|[Columbia Univ., Dept. of Earth and Environmental Sciences, New York, NY (United States); Miller, R.; Hall, T. [NASA Goddard Inst. for Space Studies, New York, NY (United States)]|[Columbia Univ., Dept. of Applied Physics and Applied Mathematics, New York, NY (United States); Russell, G.; Canuto, V.; Kiang, N.Y. [NASA Goddard Inst. for Space Studies, New York, NY (United States); Baum, E.; Cohen, A. [Clean Air Task Force, Boston, MA (United States); Cairns, B.; Perlwitz, J. [Columbia Univ., Dept. of Applied Physics and Applied Mathematics, New York, NY (United States); Fleming, E.; Jackman, C.; Labow, G. [NASA Goddard Space Flight Center, Greenbelt, MD (United States); Friend, A.; Kelley, M. [Lab. des Sciences du Climat et de l' Environnement, Gif-sur-Yvette (France); Koch, D. [Columbia Univ. Earth Inst., New York, NY (United States)]|[Yale Univ., Dept. of Geology, New Haven, CT (United States); Menon, S.; Novakov, T. [Lawrence Berkeley National Lab., CA (United States); Stone, P. [Massachusetts Inst. of Tech., Cambridge, MA (United States); Sun, S. [NASA Goddard Inst. for Space Studies, New York, NY (United States)]|[Massachusetts Inst. of Tech., Cambridge, MA (United States); Streets, D. [Argonne National Lab., IL (United States); Thresher, D. [Columbia Univ., Dept. of Earth and Environmental Sciences, New York, NY (United States)

    2007-12-15

    We carry out climate simulations for 1880-2003 with GISS modelE driven by ten measured or estimated climate forcings. An ensemble of climate model runs is carried out for each forcing acting individually and for all forcing mechanisms acting together. We compare side-by-side simulated climate change for each forcing, all forcings, observations, unforced variability among model ensemble members, and, if available, observed variability. Discrepancies between observations and simulations with all forcings are due to model deficiencies, inaccurate or incomplete forcings, and imperfect observations. Although there are notable discrepancies between model and observations, the fidelity is sufficient to encourage use of the model for simulations of future climate change. By using a fixed well-documented model and accurately defining the 1880-2003 forcings, we aim to provide a benchmark against which the effect of improvements in the model, climate forcings, and observations can be tested. Principal model deficiencies include unrealistically weak tropical El Nino-like variability and a poor distribution of sea ice, with too much sea ice in the Northern Hemisphere and too little in the Southern Hemisphere. Greatest uncertainties in the forcings are the temporal and spatial variations of anthropogenic aerosols and their indirect effects on clouds. (orig.)

  8. Multiscale Model Approach for Magnetization Dynamics Simulations

    CERN Document Server

    De Lucia, Andrea; Tretiakov, Oleg A; Kläui, Mathias

    2016-01-01

    Simulations of magnetization dynamics in a multiscale environment enable rapid evaluation of the Landau-Lifshitz-Gilbert equation in a mesoscopic sample with nanoscopic accuracy in areas where such accuracy is required. We have developed a multiscale magnetization dynamics simulation approach that can be applied to large systems with spin structures that vary locally on small length scales. To implement this, the conventional micromagnetic simulation framework has been expanded to include a multiscale solving routine. The software selectively simulates different regions of a ferromagnetic sample according to the spin structures located within in order to employ a suitable discretization and use either a micromagnetic or an atomistic model. To demonstrate the validity of the multiscale approach, we simulate the spin wave transmission across the regions simulated with the two different models and different discretizations. We find that the interface between the regions is fully transparent for spin waves with f...
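    The micromagnetic side of such a framework reduces to integrating the Landau-Lifshitz-Gilbert equation. A minimal single-macrospin sketch in the Landau-Lifshitz form (the field value, damping constant and time step are illustrative, not taken from the paper):

    ```python
    import numpy as np

    gamma, alpha = 1.76e11, 0.1            # gyromagnetic ratio (rad s^-1 T^-1), Gilbert damping
    H = np.array([0.0, 0.0, 1.0])          # effective field along z (T)
    m = np.array([1.0, 0.0, 0.0])          # unit magnetisation, initially along x
    dt = 1e-13                             # time step (s)

    for _ in range(20000):
        # LLG torque: precession about H plus damping toward H
        prec = -gamma * np.cross(m, H)
        damp = -alpha * gamma * np.cross(m, np.cross(m, H))
        m = m + dt * (prec + damp) / (1.0 + alpha**2)
        m /= np.linalg.norm(m)             # renormalise: |m| must stay 1
    print(m)
    ```

    With the field along z, the magnetisation precesses while relaxing toward +z; a multiscale solver applies this update on an atomistic lattice in critical regions and on coarser micromagnetic cells elsewhere.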

  9. System-level modeling and simulation of the cell culture microfluidic biochip ProCell

    DEFF Research Database (Denmark)

    Minhass, Wajid Hassan; Pop, Paul; Madsen, Jan

    2010-01-01

    -defined micro-channels using valves and pumps. We present an approach to the system-level modeling and simulation of a cell culture microfluidic biochip called ProCell, Programmable Cell Culture Chip. ProCell contains a cell culture chamber, which is envisioned to run 256 simultaneous experiments (viewed...

  10. Systematic modelling and simulation of refrigeration systems

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1998-01-01

    The task of developing a simulation model of a refrigeration system can be very difficult and time consuming. In order for this process to be effective, a systematic method for developing the system model is required. This method should aim at guiding the developer to clarify the purpose of the simulation, to select appropriate component models and to set up the equations in a well-arranged way. In this paper the outline of such a method is proposed and examples showing the use of this method for simulation of refrigeration systems are given.

  11. Software-Engineering Process Simulation (SEPS) model

    Science.gov (United States)

    Lin, C. Y.; Abdel-Hamid, T.; Sherif, J. S.

    1992-01-01

    The Software Engineering Process Simulation (SEPS) model, developed at JPL, is described. SEPS is a dynamic simulation model of the software project development process. It uses the feedback principles of system dynamics to simulate the dynamic interactions among various software life cycle development activities and management decision-making processes. The model is designed to be a planning tool to examine tradeoffs of cost, schedule, and functionality, and to test the implications of different managerial policies on a project's outcome. Furthermore, SEPS will enable software managers to gain a better understanding of the dynamics of software project development and perform postmortem assessments.

  12. Systematic modelling and simulation of refrigeration systems

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1998-01-01

    The task of developing a simulation model of a refrigeration system can be very difficult and time consuming. In order for this process to be effective, a systematic method for developing the system model is required. This method should aim at guiding the developer to clarify the purpose of the simulation, to select appropriate component models and to set up the equations in a well-arranged way. In this paper the outline of such a method is proposed and examples showing the use of this method for simulation of refrigeration systems are given.

  13. HVDC System Characteristics and Simulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Moon, S.I.; Han, B.M.; Jang, G.S. [Electric Enginnering and Science Research Institute, Seoul (Korea)

    2001-07-01

    This report deals with the AC-DC power system simulation method by PSS/E and EUROSTAG for the development of a strategy for the reliable operation of the Cheju-Haenam interconnected system. The simulation using both programs is performed to analyze HVDC simulation models. In addition, the control characteristics of the Cheju-Haenam HVDC system as well as Cheju AC system characteristics are described in this work. (author). 104 figs., 8 tabs.

  14. Physical modeling of long-wave run-up mitigation using submerged breakwaters

    Science.gov (United States)

    Lee, Yu-Ting; Wu, Yun-Ta; Hwung, Hwung-Hweng; Yang, Ray-Yeng

    2016-04-01

    Natural hazards due to tsunami inundation are a crucial issue for the coastal engineering community. The 2004 Indian Ocean tsunami and the 2011 Tohoku earthquake tsunami were caused by mega-scale earthquakes that brought tremendous catastrophe to the disaster regions. It is thus of great importance to develop innovative approaches to the reduction and mitigation of tsunami hazards. In this study, new laboratory-scale experiments have been carried out to investigate the physical processes of long waves passing over submerged breakwaters built upon a mild slope. A solitary wave is employed to represent the characteristics of a long wave with infinite wavelength and wave period. Our goal is twofold. First, by changing the positions of a single breakwater and of multiple breakwaters upon a mild slope, the optimal breakwater locations can be identified in terms of maximum run-up reduction. Secondly, using a state-of-the-art measuring technique, Bubble Image Velocimetry, which features non-intrusive, image-based measurement, the wave kinematics in the highly aerated region due to solitary-wave shoaling, breaking and uprush can be quantified. The mitigation of long waves by submerged breakwaters built upon a mild slope can therefore be evaluated not only by imaging run-up and run-down characteristics but also by measuring turbulent velocity fields due to wave breaking. Although the most devastating tsunami hazards cannot be fully mitigated, this study provides quantitative information on which kinds of artificial coastal structures can withstand which levels of wave load.

  15. Classical running and symmetry breaking in models with two extra dimensions

    CERN Document Server

    Papineau, C

    2007-01-01

    We consider a codimension two scalar theory with brane-localised Higgs type potential. The six-dimensional field has Dirichlet boundary condition on the bounds of the transverse compact space. The regularisation of the brane singularity yields renormalisation group evolution for the localised couplings at the classical level. In particular, a tachyonic mass term grows at large distances and hits a Landau pole. We exhibit a peculiar value of the bare coupling such that the running mass parameter becomes large precisely at the compactification scale, and the effective four-dimensional zero mode is massless. Above the critical coupling, spontaneous symmetry breaking occurs and there is a very light state.

  16. Up and running with AutoCAD 2014 2D and 3D drawing and modeling

    CERN Document Server

    Gindis, Elliot

    2013-01-01

    Get "Up and Running" with AutoCAD using Gindis's combination of step-by-step instruction, examples, and insightful explanations. The emphasis from the beginning is on core concepts and practical application of AutoCAD in architecture, engineering and design. Equally useful in instructor-led classroom training, self-study, or as a professional reference, the book is written with the user in mind by a long-time AutoCAD professional and instructor, based on what works in the industry and the classroom. It strips away complexities, both real and perceived, and reduces AutoCAD t...

  17. Simulation modeling and analysis with Arena

    Energy Technology Data Exchange (ETDEWEB)

    Tayfur Altiok; Benjamin Melamed [Rutgers University, NJ (United States). Department of Industrial and Systems Engineering

    2007-06-15

    The textbook which treats the essentials of the Monte Carlo discrete-event simulation methodology, and does so in the context of a popular Arena simulation environment. It treats simulation modeling as an in-vitro laboratory that facilitates the understanding of complex systems and experimentation with what-if scenarios in order to estimate their performance metrics. The book contains chapters on the simulation modeling methodology and the underpinnings of discrete-event systems, as well as the relevant underlying probability, statistics, stochastic processes, input analysis, model validation and output analysis. All simulation-related concepts are illustrated in numerous Arena examples, encompassing production lines, manufacturing and inventory systems, transportation systems, and computer information systems in networked settings. Chapter 13.3.3 is on coal loading operations on barges/tugboats.

  18. Running Linux

    CERN Document Server

    Dalheimer, Matthias Kalle

    2006-01-01

    The fifth edition of Running Linux is greatly expanded, reflecting the maturity of the operating system and the teeming wealth of software available for it. Hot consumer topics such as audio and video playback applications, groupware functionality, and spam filtering are covered, along with the basics in configuration and management that always made the book popular.

  19. RUN COORDINATION

    CERN Multimedia

    C. Delaere

    2013-01-01

    Since the LHC ceased operations in February, a lot has been going on at Point 5, and Run Coordination continues to monitor closely the advance of maintenance and upgrade activities. In the last months, the Pixel detector was extracted and is now stored in the pixel lab in SX5; the beam pipe has been removed and ME1/1 removal has started. We regained access to the vactank and some work on the RBX of HB has started. Since mid-June, electricity and cooling are back in S1 and S2, allowing us to turn equipment back on, at least during the day. 24/7 shifts are not foreseen in the next weeks, and safety tours are mandatory to keep equipment on overnight, but re-commissioning activities are slowly being resumed. Given the (slight) delays accumulated in LS1, it was decided to merge the two global runs initially foreseen into a single exercise during the week of 4 November 2013. The aim of the global run is to check that we can run (parts of) CMS after several months switched off, with the new VME PCs installed, th...

  20. Integrating Geo-Spatial Data for Regional Landslide Susceptibility Modeling in Consideration of Run-Out Signature

    Science.gov (United States)

    Lai, J.-S.; Tsai, F.; Chiang, S.-H.

    2016-06-01

    This study implements a data mining-based algorithm, the random forests classifier, with geo-spatial data to construct a regional, rainfall-induced landslide susceptibility model. The developed model also takes account of landslide regions (source, non-occurrence and run-out signatures) from the original landslide inventory in order to increase the reliability of the susceptibility modelling. A total of ten causative factors were collected and used in this study, including aspect, curvature, elevation, slope, faults, geology, NDVI (Normalized Difference Vegetation Index), rivers, roads and soil data. This study transforms the landslide inventory and vector-based causative factors into a pixel-based format in order to overlay them with other raster data for constructing the random forests based model. Both original and edited topographic data are used in the analysis to understand their impacts on the susceptibility modeling. Experimental results demonstrate that after identifying the run-out signatures, the overall accuracy and Kappa coefficient reach more than 85% and 0.8, respectively. In addition, correcting unreasonable topographic features of the digital terrain model also produces more reliable modelling results.
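    The pixel-based workflow described above (rasterise the causative factors, stack them into a per-pixel feature matrix, label pixels from the landslide inventory, fit a random forests classifier) can be sketched as follows. The rasters and the labelling rule are synthetic stand-ins, and only three of the ten factors are mocked up:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)

    # hypothetical 50x50 rasters for three of the ten causative factors
    slope = rng.uniform(0, 60, (50, 50))          # degrees
    elevation = rng.uniform(0, 3000, (50, 50))    # metres
    ndvi = rng.uniform(-1, 1, (50, 50))           # vegetation index

    # stack pixel-based factors into an (n_pixels, n_features) matrix
    X = np.column_stack([slope.ravel(), elevation.ravel(), ndvi.ravel()])

    # synthetic inventory labels: 1 = landslide source pixel, 0 = non-occurrence
    # (toy rule: steep, sparsely vegetated pixels are landslide-prone)
    y = ((slope.ravel() > 30) & (ndvi.ravel() < 0.3)).astype(int)

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    acc = clf.score(X, y)
    print(acc)
    ```

    In the paper's setting the label raster would additionally distinguish run-out pixels from source pixels, which is what the authors credit for the accuracy gain.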

  1. A Modeling Method of Agent Based on Milk-run in Automobile Parts

    Institute of Scientific and Technical Information of China (English)

    屈新怀; 盛敏; 丁必荣

    2013-01-01

    Milk-run, a new method of supply system management for automobile parts inbound logistics, can be considered a kind of complex adaptive system composed of suppliers, 3PLs and the automobile firm. Following its conceptual model, an agent-based modeling method is used. After defining the research purpose, the abstraction level of the agents is set at the corporate departments in the milk-run system, such as the production Agent, purchase Agent and schedule Agent. The internal model of each agent is analyzed first, then a formal description method is adopted to describe agent behaviors. Finally, the dynamic interactions between the agents are explained in Agent UML. The agent-based modeling method captures the operating mechanism of the milk-run system well and provides a basis for the subsequent computer simulation of milk-run based on the agent model.
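    The agent decomposition described above can be sketched minimally with hypothetical supplier and 3PL truck agents (agent names, quantities and the capacity rule are invented for illustration, not taken from the paper):

    ```python
    # minimal agent sketch: a 3PL truck agent visits supplier agents on a milk-run loop
    class SupplierAgent:
        def __init__(self, name, parts_ready):
            self.name, self.ready = name, parts_ready

        def hand_over(self):
            # supplier releases everything staged at its dock
            picked, self.ready = self.ready, 0
            return picked

    class TruckAgent:
        def __init__(self, capacity):
            self.capacity, self.load = capacity, 0

        def milk_run(self, suppliers):
            # visit suppliers in route order, picking up until capacity is reached
            collected = []
            for s in suppliers:
                qty = min(s.hand_over(), self.capacity - self.load)
                self.load += qty
                collected.append((s.name, qty))
            return collected

    suppliers = [SupplierAgent("S1", 40), SupplierAgent("S2", 30), SupplierAgent("S3", 50)]
    route = TruckAgent(capacity=100).milk_run(suppliers)
    print(route)
    ```

    A schedule agent in the full model would decide route order and frequency; here the route is fixed to keep the interaction pattern visible.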

  2. Object Oriented Modelling and Dynamical Simulation

    DEFF Research Database (Denmark)

    Wagner, Falko Jens; Poulsen, Mikael Zebbelin

    1998-01-01

    This report with appendix describes the work done in a master project at DTU. The goal of the project was to develop a concept for simulation of dynamical systems based on object-oriented methods. The result was a library of C++ classes, for use both when building component-based models and when conducting simulation experiments.

  3. Modeling and simulation for RF system design

    CERN Document Server

    Frevert, Ronny; Jancke, Roland; Knöchel, Uwe; Schwarz, Peter; Kakerow, Ralf; Darianian, Mohsen

    2005-01-01

    Focusing on RF specific modeling and simulation methods, and system and circuit level descriptions, this work contains application-oriented training material. Accompanied by a CD- ROM, it combines the presentation of a mixed-signal design flow, an introduction into VHDL-AMS and Verilog-A, and the application of commercially available simulators.

  4. Modeling and simulation of complex systems a framework for efficient agent-based modeling and simulation

    CERN Document Server

    Siegfried, Robert

    2014-01-01

    Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing structure and dynamics of agent-based models in detail. Based on this evaluation the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore he presents parallel and distributed simulation approaches for execution of agent-based models -from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize underlying hard

  5. The Tourism Market of Australia – A Model of Managerial Performance in Running an Exotic Tourist Destination

    Directory of Open Access Journals (Sweden)

    Mihai Daniela

    2012-12-01

    Full Text Available The purpose of this paper is to illustrate the performance management that government decision-making bodies apply in organizing tourism in Australia. The proposed quantitative indicators evaluate the managerial performance in running this system: macroeconomic indicators of domestic and international tourist flows and their impact on the Australian economy. The conclusion is that the national tourism development strategy adopted in Australia, through its objectives and identified strategic options, offers the potential to enhance the competitiveness of the tourism industry. The interim results of its implementation demonstrate its effectiveness: in Australia, tourism has become a real driver of socioeconomic progress, and thus a model of performance management in running potentially valuable tourist destinations.

  6. A sand wave simulation model

    NARCIS (Netherlands)

    Nemeth, A.A.; Hulscher, S.J.M.H.; Damme, van R.M.J.

    2003-01-01

    Sand waves form a prominent regular pattern in the offshore seabeds of sandy shallow seas. A two dimensional vertical (2DV) flow and morphological numerical model describing the behaviour of these sand waves has been developed. The model contains the 2DV shallow water equations, with a free water su

  7. Modeling Fall Run Chinook Salmon Populations in the San Joaquin River Basin Using an Artificial Neural Network

    Science.gov (United States)

    Keyantash, J.; Quinn, N. W.; Hidalgo, H. G.; Dracup, J. A.

    2002-12-01

    The number of chinook salmon returning to spawn during the fall run (September-November) were separately modeled for three San Joaquin River tributaries-the Stanislaus, Tuolumne, and Merced Rivers-to determine the sensitivity of salmon populations to hydrologic alterations associated with potential climate change. The modeling was accomplished using a feed-forward artificial neural network (ANN) with error backpropagation. Inputs to the ANN included modeled monthly river temperature and streamflow data for each tributary, and were lagged multiple years to include the effects of antecedent environmental conditions upon populations of salmon throughout their life histories. Temperature and streamflow conditions at downstream locations in each tributary were computed using the California Dept. of Water Resources' DSM-2 model. Inputs to the DSM-2 model originated from regional climate modeling under a CO2 doubling scenario. Annual population data for adult chinook salmon (1951-present) were provided by the California Dept. of Fish and Game, and were used for supervised training of the ANN. It was determined that Stanislaus, Tuolumne and Merced River chinook runs could be impacted by alterations to the hydroclimatology of the San Joaquin basin.
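    The feed-forward ANN with error backpropagation that the authors describe can be sketched with a one-hidden-layer network trained on synthetic lagged inputs (the data, layer sizes and learning rate here are invented placeholders, not the DSM-2 outputs or the Fish and Game counts):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # hypothetical lagged environmental inputs: 6 flow/temperature features per year
    X = rng.uniform(0, 1, (60, 6))
    y_true = (X @ np.array([0.8, -0.5, 0.3, 0.2, -0.4, 0.6]))[:, None]  # toy population signal

    # one hidden layer, tanh activation, trained by gradient descent (error backpropagation)
    W1 = rng.normal(0, 0.5, (6, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
    lr = 0.1
    for _ in range(5000):
        h = np.tanh(X @ W1 + b1)                     # forward pass
        pred = h @ W2 + b2
        err = pred - y_true
        gW2 = h.T @ err / len(X); gb2 = err.mean(0)  # backpropagate squared-error gradient
        dh = (err @ W2.T) * (1 - h**2)
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

    mse = float((err**2).mean())
    print(mse)
    ```

    Supervised training against the historical salmon counts plays the role of `y_true` in the paper; climate-change scenarios are then fed through the trained network.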

  8. Modelling Reactive and Proactive Behaviour in Simulation

    CERN Document Server

    Majid, Mazlina Abdul; Aickelin, Uwe

    2010-01-01

    This research investigated the behaviour of traditional discrete-event as well as combined discrete-event and agent-based simulation models when modelling human reactive and proactive behaviour in human-centric complex systems. A departmental store was chosen as the human-centric complex case study, where the operation of a fitting room in the WomensWear department was investigated. We looked at ways to determine the efficiency of new management policies for the fitting-room operation by simulating the reactive and proactive behaviour of staff towards customers. Once development of the simulation models and their verification had been done, we carried out a validation experiment in the form of a sensitivity analysis. Subsequently, we executed a statistical analysis in which the mixed reactive and proactive behaviour experimental results were compared with reactive experimental results from previously published work. Generally, this case study discovered that simple proactive individual behaviou...

  9. Challenges in SysML Model Simulation

    Directory of Open Access Journals (Sweden)

    Mara Nikolaidou

    2016-07-01

    Full Text Available Systems Modeling Language (SysML is a standard proposed by the OMG for systems-of-systems (SoS modeling and engineering. To this end, it provides the means to depict SoS components and their behavior in a hierarchical, multi-layer fashion, facilitating alternative engineering activities, such as system design. To explore the performance of SysML, simulation is one of the preferred methods. There are many efforts targeting simulation code generation from SysML models. Numerous simulation methodologies and tools are employed, while different SysML diagrams are utilized. Nevertheless, this process is not standardized, although most of current approaches tend to follow the same steps, even if they employ different tools. The scope of this paper is to provide a comprehensive understanding of the similarities and differences of existing approaches and identify current challenges in fully automating SysML models simulation process.

  10. SIMULATION MODELING SLOW SPATIALLY HETEROGENEOUS COAGULATION

    Directory of Open Access Journals (Sweden)

    P. A. Zdorovtsev

    2013-01-01

    Full Text Available A new model of spatially inhomogeneous coagulation, i.e. formation of larger clusters by joint interaction of smaller ones, is under study. The results of simulation are compared with known analytical and numerical solutions.

  11. Spectral Running and Non-Gaussianity from Slow-Roll Inflation in Generalised Two--Field Models

    CERN Document Server

    Choi, Ki-Young; van de Bruck, Carsten

    2008-01-01

    Theories beyond the standard model such as string theory motivate low energy effective field theories with several scalar fields which are not only coupled through a potential but also through their kinetic terms. For such theories we derive the general formulae for the running of the spectral indices for the adiabatic, isocurvature and correlation spectra in the case of two field inflation. We also compute the expected non-Gaussianity in such models for specific forms of the potentials. We find that the coupling has little impact on the level of non-Gaussianity during inflation.

  12. Theory, modeling, and simulation annual report, 1992

    Energy Technology Data Exchange (ETDEWEB)

    1993-05-01

    This report briefly discusses research on the following topics: development of electronic structure methods; modeling molecular processes in clusters; modeling molecular processes in solution; modeling molecular processes in separations chemistry; modeling interfacial molecular processes; modeling molecular processes in the atmosphere; methods for periodic calculations on solids; chemistry and physics of minerals; graphical user interfaces for computational chemistry codes; visualization and analysis of molecular simulations; integrated computational chemistry environment; and benchmark computations.

  13. Application of Chebyshev Polynomial to simulated modeling

    Institute of Scientific and Technical Information of China (English)

    CHI Hai-hong; LI Dian-pu

    2006-01-01

    Chebyshev polynomials are widely used in many fields, usually for function approximation in numerical calculation. In this paper, the Chebyshev polynomial expression of the propeller properties across four quadrants is given first; the Chebyshev polynomial expression is then transformed into an ordinary polynomial, as needed for simulation of propeller dynamics. On this basis, the dynamical models of the propeller across four quadrants are given. The simulation results show the efficiency of the mathematical model.
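    The conversion step from Chebyshev coefficients to ordinary power-series coefficients is directly available in NumPy; the coefficients below are illustrative, not the propeller data from the paper:

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C
    from numpy.polynomial import polynomial as P

    # hypothetical Chebyshev coefficients, e.g. fitted to a propeller thrust characteristic
    cheb_coeffs = np.array([0.5, 1.2, -0.3, 0.05])

    # convert to ordinary power-series coefficients for direct use in a dynamics model
    poly_coeffs = C.cheb2poly(cheb_coeffs)
    print(poly_coeffs)

    # both forms evaluate identically on the approximation interval [-1, 1]
    x = np.linspace(-1.0, 1.0, 5)
    print(np.allclose(C.chebval(x, cheb_coeffs), P.polyval(x, poly_coeffs)))
    ```

    For example, the T2 term contributes both an x^2 and a constant part after expansion, which is why the ordinary coefficients differ from the Chebyshev ones.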

  14. Collisionless Electrostatic Shock Modeling and Simulation

    Science.gov (United States)

    2016-10-21

    Briefing charts (Air Force Research Laboratory, 30 September 2016 - 21 October 2016; approved for public release, distribution unlimited, PA#16490) by Daniel W. Crews on collisionless electrostatic shock modeling and simulation. Overview topics: motivation and background; what is a collisionless shock wave; features of the collisionless shock; the shock simulation.

  15. Defining epidemics in computer simulation models: How do definitions influence conclusions?

    Directory of Open Access Journals (Sweden)

    Carolyn Orbann

    2017-06-01

    Full Text Available Computer models have proven to be useful tools in studying epidemic disease in human populations. Such models are being used by a broader base of researchers, and it has become more important to ensure that descriptions of model construction and data analyses are clear and communicate important features of model structure. Papers describing computer models of infectious disease often lack a clear description of how the data are aggregated and whether or not non-epidemic runs are excluded from analyses. Given that there is no concrete quantitative definition of what constitutes an epidemic within the public health literature, each modeler must decide on a strategy for identifying epidemics during simulation runs. Here, an SEIR model was used to test the effects of how varying the cutoff for considering a run an epidemic changes potential interpretations of simulation outcomes. Varying the cutoff from 0% to 15% of the model population ever infected with the illness generated significant differences in numbers of dead and timing variables. These results are important for those who use models to form public health policy, in which questions of timing or implementation of interventions might be answered using findings from computer simulation models.
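    The sensitivity to the epidemic cutoff can be reproduced with a minimal stochastic SEIR model (all rates, population size and day counts below are hypothetical, not the paper's parameterisation): runs that fizzle out have tiny attack rates, so the cutoff decides whether they enter the analysis at all.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def seir_run(n=1000, beta=0.3, sigma=0.2, gamma=0.1, days=300):
        # stochastic SEIR with one initial exposed individual; returns the attack rate
        s, e, i, r = n - 1, 1, 0, 0
        for _ in range(days):
            new_e = rng.binomial(s, 1 - np.exp(-beta * i / n))  # S -> E infections
            new_i = rng.binomial(e, 1 - np.exp(-sigma))         # E -> I progression
            new_r = rng.binomial(i, 1 - np.exp(-gamma))         # I -> R recovery
            s -= new_e; e += new_e - new_i; i += new_i - new_r; r += new_r
        return (n - s) / n                                      # fraction ever infected

    attack = np.array([seir_run() for _ in range(200)])

    # fraction of runs counted as "epidemics" under different cutoffs
    frac_above = {cut: float((attack > cut).mean()) for cut in (0.0, 0.05, 0.15)}
    print(frac_above)
    ```

    With a cutoff of 0% every run counts (even those where the single index case infects nobody), while higher cutoffs discard the stochastic fizzles, shifting summary statistics such as mean deaths and epidemic timing.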

  16. Modeling of magnetic particle suspensions for simulations

    CERN Document Server

    Satoh, Akira

    2017-01-01

    The main objective of the book is to highlight the modeling of magnetic particles with different shapes and magnetic properties, to provide graduate students and young researchers information on the theoretical aspects and actual techniques for the treatment of magnetic particles in particle-based simulations. In simulation, we focus on the Monte Carlo, molecular dynamics, Brownian dynamics, lattice Boltzmann and stochastic rotation dynamics (multi-particle collision dynamics) methods. The latter two simulation methods can simulate both the particle motion and the ambient flow field simultaneously. In general, specialized knowledge can only be obtained in an effective manner under the supervision of an expert. The present book is written to play such a role for readers who wish to develop the skill of modeling magnetic particles and develop a computer simulation program using their own ability. This book is therefore a self-learning book for graduate students and young researchers. Armed with this knowledge,...

  17. Modelling and Simulation of Wave Loads

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    A simple model of the wave load on slender members of offshore structures is described. The wave elevation of the sea state is modelled by a stationary Gaussian process. A new procedure to simulate realizations of the wave loads is developed. The simulation method assumes that the wave particle velocity can be approximated by a Gaussian Markov process. Known approximate results for the first-passage density or, equivalently, the distribution of the extremes of wave loads are presented and compared with rather precise simulation results. It is demonstrated that the approximate results...

  18. Modelling and Simulation of Wave Loads

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    1985-01-01

    A simple model of the wave load on slender members of offshore structures is described. The wave elevation of the sea state is modelled by a stationary Gaussian process. A new procedure to simulate realizations of the wave loads is developed. The simulation method assumes that the wave particle velocity can be approximated by a Gaussian Markov process. Known approximate results for the first-passage density or, equivalently, the distribution of the extremes of wave loads are presented and compared with rather precise simulation results. It is demonstrated that the approximate results...
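The simulation approach described in these two records, approximating the wave particle velocity by a stationary Gaussian Markov process, can be illustrated with an exact-discretization sketch of an Ornstein-Uhlenbeck process; the parameter values below are illustrative only:

```python
import math
import random

def simulate_gauss_markov(n_steps, dt, tau, sigma, seed=1):
    """Realization of a stationary zero-mean Gaussian Markov (OU) process.

    Exact discretization: u[k+1] = a*u[k] + sigma*sqrt(1 - a^2)*w,
    with a = exp(-dt/tau), which preserves the stationary variance
    sigma^2 at every step regardless of dt.
    """
    rng = random.Random(seed)
    a = math.exp(-dt / tau)
    b = sigma * math.sqrt(1.0 - a * a)
    u = rng.gauss(0.0, sigma)          # start in the stationary distribution
    path = [u]
    for _ in range(n_steps - 1):
        u = a * u + b * rng.gauss(0.0, 1.0)
        path.append(u)
    return path

# hypothetical correlation time and standard deviation for a velocity record
path = simulate_gauss_markov(n_steps=20000, dt=0.1, tau=2.0, sigma=1.5)
mean = sum(path) / len(path)
var = sum((x - mean) ** 2 for x in path) / len(path)
```

Because the update is exact for the OU process, the sample mean and variance converge to 0 and sigma² respectively as the record lengthens, which is the property a load-simulation procedure of this kind relies on.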

  19. Modeling and simulation of multiport RF switch

    Energy Technology Data Exchange (ETDEWEB)

    Vijay, J [Student, Department of Instrumentation and Control Engineering, National Institute of Technology, Tiruchirappalli-620015 (India); Saha, Ivan [Scientist, Indian Space Research Organisation (ISRO) (India); Uma, G [Lecturer, Department of Instrumentation and Control Engineering, National Institute of Technology, Tiruchirappalli-620015 (India); Umapathy, M [Assistant Professor, Department of Instrumentation and Control Engineering, National Institute of Technology, Tiruchirappalli-620015 (India)

    2006-04-01

    This paper describes the modeling and simulation of a multiport RF switch in which the latching mechanism is realized with two hot-arm electrothermal actuators and the switching action is realized with electrostatic actuators. It can act as a single-pole single-throw as well as a single-pole multi-throw switch. The proposed structure is modeled analytically and the required parameters are simulated using MATLAB. The analytical simulation results are validated using finite element analysis of the same structure in the COVENTORWARE software.

  20. Modeling and simulation of discrete event systems

    CERN Document Server

    Choi, Byoung Kyu

    2013-01-01

    Computer modeling and simulation (M&S) allows engineers to study and analyze complex systems. Discrete-event system (DES)-M&S is used in modern management, industrial engineering, computer science, and the military. As computer speeds and memory capacity increase, so DES-M&S tools become more powerful and more widely used in solving real-life problems. Based on over 20 years of evolution within a classroom environment, as well as on decades-long experience in developing simulation-based solutions for high-tech industries, Modeling and Simulation of Discrete-Event Systems is the only book on

  1. Ingesting a high-dose carbohydrate solution during the cycle section of a simulated Olympic-distance triathlon improves subsequent run performance.

    Science.gov (United States)

    McGawley, Kerry; Shannon, Oliver; Betts, James

    2012-08-01

    The well-established ergogenic benefit of ingesting carbohydrates during single-discipline endurance sports has only been tested once within an Olympic-distance (OD) triathlon. The aim of the present study was to compare the effect of ingesting a 2:1 maltodextrin/fructose solution with a placebo on simulated OD triathlon performance. Six male and 4 female amateur triathletes (age, 25 ± 7 years; body mass, 66.8 ± 9.2 kg; peak oxygen uptake, 4.2 ± 0.6 L·min(-1)) completed a 1500-m swim time-trial and an incremental cycle test to determine peak oxygen uptake before performing 2 simulated OD triathlons. The swim and cycle sections of the main trials were of fixed intensities, while the run section was completed as a time-trial. Two minutes prior to completing every quarter of the cycle, participants consumed 202 ± 20 mL of either a solution containing 1.2 g·min(-1) of maltodextrin plus 0.6 g·min(-1) of fructose at 14.4% concentration (CHO) or a sugar-free, fruit-flavored drink (PLA). The time-trial was 4.0% ± 1.3% faster during the CHO versus the PLA trial, with run times of 38:43 ± 1:10 min:s and 40:22 ± 1:18 min:s, respectively (p = 0.010). Blood glucose concentrations were also higher in the CHO versus the PLA trial. In conclusion, ingesting a carbohydrate solution during the cycle section of a simulated OD triathlon enhances subsequent 10-km run performance in triathletes.

  2. Traffic Modeling in WCDMA System Level Simulations

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Traffic modeling is a crucial element in WCDMA system-level simulations. A clear understanding of the nature of traffic in the WCDMA system, and the subsequent selection of an appropriate random traffic model, are critical to the success of the modeling enterprise. The resulting performance will evidently be a function of how well the design has been adapted to the traffic, channel, and user-mobility models, and of how accurate these models are. In this article, our attention is focused on modeling voice and WWW data traffic with the SBBP model and the Victor model, respectively.

  3. Simulation and analysis of a model dinoflagellate predator-prey system

    Science.gov (United States)

    Mazzoleni, M. J.; Antonelli, T.; Coyne, K. J.; Rossi, L. F.

    2015-12-01

    This paper analyzes the dynamics of a model dinoflagellate predator-prey system and uses simulations to validate theoretical and experimental studies. A simple model for predator-prey interactions is derived by drawing upon analogies from chemical kinetics. This model is then modified to account for inefficiencies in predation. Simulation results are shown to closely match the model predictions. Additional simulations are then run which are based on experimental observations of predatory dinoflagellate behavior, and this study specifically investigates how the predatory dinoflagellate Karlodinium veneficum uses toxins to immobilize its prey and increase its feeding rate. These simulations account for complex dynamics that were not included in the basic models, and the results from these computational simulations closely match the experimentally observed predatory behavior of K. veneficum and reinforce the notion that predatory dinoflagellates utilize toxins to increase their feeding rate.
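The paper derives its predator-prey equations by analogy with chemical kinetics, then adds an inefficiency factor on predation. A minimal sketch of that kind of mass-action model is given below; the equations and parameter values are a generic Lotka-Volterra illustration with a conversion efficiency, not the authors' calibrated dinoflagellate model:

```python
def simulate_predator_prey(a, b, eps, m, x0, y0, dt, steps):
    """Mass-action predator-prey model (chemical-kinetics analogy).

    dx/dt = a*x - b*x*y          prey growth minus predator encounters
    dy/dt = eps*b*x*y - m*y      encounters converted to predators at
                                 efficiency eps (predation inefficiency)
    Integrated with forward Euler for simplicity.
    """
    x, y = x0, y0
    xs, ys = [x], [y]
    for _ in range(steps):
        dx = (a * x - b * x * y) * dt
        dy = (eps * b * x * y - m * y) * dt
        x, y = max(x + dx, 0.0), max(y + dy, 0.0)
        xs.append(x)
        ys.append(y)
    return xs, ys

# hypothetical rates; equilibrium sits at x* = m/(eps*b), y* = a/b
xs, ys = simulate_predator_prey(a=1.0, b=0.1, eps=0.5, m=0.5,
                                x0=20.0, y0=5.0, dt=0.001, steps=50000)
```

Lowering `eps` models inefficient predation: the predator needs more encounters per new predator, shifting the equilibrium prey density upward, which is the kind of modification the abstract describes.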

  4. The effect of 'running-in' on the tribology and surface morphology of metal-on-metal Birmingham hip resurfacing device in simulator studies.

    Science.gov (United States)

    Vassiliou, K; Elfick, A P D; Scholes, S C; Unsworth, A

    2006-02-01

    It is well documented that hard bearing combinations show a running-in phenomenon in vitro, and there is also some evidence of this from retrieval studies. In order to investigate this phenomenon, five Birmingham hip resurfacing devices were tested in a hip wear simulator. One of these (joint 1) was also tested in a friction simulator before, during, and after the wear test, and surface analysis was conducted throughout portions of the testing. The wear showed the classical running-in, with the wear rate falling from 1.84 mm³ per 10⁶ cycles for the first 10⁶ cycles of testing to 0.24 mm³ per 10⁶ cycles over the final 2 × 10⁶ cycles of testing. The friction tests suggested boundary lubrication initially, but at 1 × 10⁶ cycles a mixed lubrication regime was evident. By 2 × 10⁶ cycles the classical Stribeck curve had formed, indicating a considerable contribution from the fluid film at higher viscosities. This continued to be evident at both 3 × 10⁶ and 5 × 10⁶ cycles. The surface study complements these findings.

  5. Estimation of infiltration rate, run-off and sediment yield under simulated rainfall experiments in upper Pravara Basin, India: Effect of slope angle and grass-cover

    Indian Academy of Sciences (India)

    Veena U Joshi; Devidas T Tambe

    2010-12-01

    The main objective of this study is to measure the effect of slope and grass-cover on infiltration rate, run-off and sediment yield under simulated rainfall conditions in a badland area located in the upper Pravara Basin in western India. An automatic rainfall simulator was designed following Dunne et al (1980), taking local conditions into account. Experiments were conducted on six selected experimental fields of 2 × 2 m within the catchment, with distinct variations in surface characteristics: a grass-covered area with gentle slope, a recently ploughed gently sloping area, an area covered by crop residue (moderate slope), bare badland with steep slope, a gravelly surface with near-flat slope, and a steep slope with grass-cover. The results indicate subtle to noteworthy variations amongst the plots depending on their slope angle and surface characteristics. An important finding that emerges from the study is that grass-cover is the most effective measure for inducing infiltration and, in turn, minimizing run-off and sediment yield. Sediment yields are lowest on gently sloping grass-covered surfaces and highest on bare badland surfaces with steep slopes. These findings have enormous implications for this area, because over two-thirds of it is characterized by bare and steep slopes.

  6. A mechanistic model on the role of "radially-running" collagen fibers on dissection properties of human ascending thoracic aorta.

    Science.gov (United States)

    Pal, Siladitya; Tsamis, Alkiviadis; Pasta, Salvatore; D'Amore, Antonio; Gleason, Thomas G; Vorp, David A; Maiti, Spandan

    2014-03-21

    Aortic dissection (AoD) is a common condition that often leads to life-threatening cardiovascular emergency. From a biomechanics viewpoint, AoD involves failure of load-bearing microstructural components of the aortic wall, mainly elastin and collagen fibers. Delamination strength of the aortic wall depends on the load-bearing capacity and local micro-architecture of these fibers, which may vary with age, disease and aortic location. Therefore, quantifying the role of fiber micro-architecture on the delamination strength of the aortic wall may lead to improved understanding of AoD. We present an experimentally-driven modeling paradigm towards this goal. Specifically, we utilize collagen fiber micro-architecture, obtained in a parallel study from multi-photon microscopy, in a predictive mechanistic framework to characterize the delamination strength. We then validate our model against peel test experiments on human aortic strips and utilize the model to predict the delamination strength of separate aortic strips and compare with experimental findings. We observe that the number density and failure energy of the radially-running collagen fibers control the peel strength. Furthermore, our model suggests that the lower delamination strength previously found for the circumferential direction in human aorta is related to a lower number density of radially-running collagen fibers in that direction. Our model sets the stage for an expanded future study that could predict AoD propagation in patient-specific aortic geometries and better understand factors that may influence propensity for occurrence.

  7. Applying Reduced Generator Models in the Coarse Solver of Parareal in Time Parallel Power System Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Duan, Nan [ORNL; Dimitrovski, Aleksandar D [ORNL; Simunovic, Srdjan [ORNL; Sun, Kai [University of Tennessee (UT)

    2016-01-01

    The development of high-performance computing techniques and platforms has provided many opportunities for real-time or even faster-than-real-time implementation of power system simulations. One approach uses the Parareal in time framework. The Parareal algorithm has shown promising theoretical simulation speedups by temporally decomposing a simulation run into a coarse simulation on the entire simulation interval and fine simulations on sequential sub-intervals linked through the coarse simulation. However, it has been found that the time cost of the coarse solver needs to be reduced to fully exploit the potential of the Parareal algorithm. This paper studies a Parareal implementation using reduced generator models for the coarse solver and reports the testing results on the IEEE 39-bus system and a 327-generator, 2383-bus Polish system model.
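The Parareal idea summarized above, a cheap coarse sweep over the whole interval corrected by accurate fine solves on sub-intervals, can be sketched for a scalar ODE. In the sketch both propagators are simple Euler integrators with different step sizes, standing in for the paper's reduced-model coarse solver and full-model fine solver:

```python
import math

def parareal(f, y0, t0, t1, n_slices, n_iters, fine_substeps):
    """Parareal sketch for y' = f(y).

    Coarse propagator G = one Euler step per slice; fine propagator F =
    many Euler sub-steps. Iterative correction:
        y[j+1] = G(y_new[j]) + F(y_old[j]) - G(y_old[j])
    """
    dt = (t1 - t0) / n_slices

    def coarse(y):                       # cheap, inaccurate
        return y + dt * f(y)

    def fine(y):                         # expensive, accurate
        h = dt / fine_substeps           # (run in parallel in practice)
        for _ in range(fine_substeps):
            y = y + h * f(y)
        return y

    # initial serial coarse sweep over the whole interval
    Y = [y0]
    for j in range(n_slices):
        Y.append(coarse(Y[-1]))

    for _ in range(n_iters):             # Parareal corrections
        F_old = [fine(Y[j]) for j in range(n_slices)]
        G_old = [coarse(Y[j]) for j in range(n_slices)]
        new = [y0]
        for j in range(n_slices):
            new.append(coarse(new[j]) + F_old[j] - G_old[j])
        Y = new
    return Y

# exponential decay y' = -y on [0, 2]; exact solution exp(-2)
Y = parareal(lambda y: -y, 1.0, 0.0, 2.0,
             n_slices=10, n_iters=4, fine_substeps=100)
exact = math.exp(-2.0)
```

The fine solves inside each iteration are independent and parallelizable; the serial coarse sweep is the bottleneck, which is why the paper works on making the coarse solver cheaper via reduced generator models.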

  8. Running Club

    CERN Multimedia

    Running Club

    2011-01-01

    The cross country running season has started well this autumn with two events: the traditional CERN Road Race organized by the Running Club, which took place on Tuesday 5th October, followed by the ‘Cross Interentreprises’, a team event at the Evaux Sports Center, which took place on Saturday 8th October. The participation at the CERN Road Race was slightly down on last year, with 65 runners, however the participants maintained the tradition of a competitive yet friendly atmosphere. An ample supply of refreshments before the prize giving was appreciated by all after the race. Many thanks to all the runners and volunteers who ensured another successful race. The results can be found here: https://espace.cern.ch/Running-Club/default.aspx CERN participated successfully at the cross interentreprises with very good results. The teams succeeded in obtaining 2nd and 6th place in the Mens category, and 2nd place in the Mixed category. Congratulations to all. See results here: http://www.c...

  9. RUN COORDINATION

    CERN Multimedia

    M. Chamizo

    2012-01-01

      On 17th January, as soon as the services were restored after the technical stop, sub-systems started powering on. Since then, we have been running 24/7 with reduced shift crew — Shift Leader and DCS shifter — to allow sub-detectors to perform calibration, noise studies, test software upgrades, etc. On 15th and 16th February, we had the first Mid-Week Global Run (MWGR) with the participation of most sub-systems. The aim was to bring CMS back to operation and to ensure that we could run after the winter shutdown. All sub-systems participated in the readout and the trigger was provided by a fraction of the muon systems (CSC and the central RPC wheel). The calorimeter triggers were not available due to work on the optical link system. Initial checks of different distributions from Pixels, Strips, and CSC confirmed things look all right (signal/noise, number of tracks, phi distribution…). High-rate tests were done to test the new CSC firmware to cure the low efficiency ...

  10. RUN COORDINATION

    CERN Multimedia

    Christophe Delaere

    2013-01-01

    The focus of Run Coordination during LS1 is to monitor closely the advance of maintenance and upgrade activities, to smooth interactions between subsystems and to ensure that all are ready in time to resume operations in 2015 with a fully calibrated and understood detector. After electricity and cooling were restored to all equipment, at about the time of the last CMS week, recommissioning activities were resumed for all subsystems. On 7 October, DCS shifts began 24/7 to allow subsystems to remain on to facilitate operations. That culminated with the Global Run in November (GriN), which   took place as scheduled during the week of 4 November. The GriN has been the first centrally managed operation since the beginning of LS1, and involved all subdetectors but the Pixel Tracker presently in a lab upstairs. All nights were therefore dedicated to long stable runs with as many subdetectors as possible. Among the many achievements in that week, three items may be highlighted. First, the Strip...

  11. Enhanced Contact Graph Routing (ECGR) MACHETE Simulation Model

    Science.gov (United States)

    Segui, John S.; Jennings, Esther H.; Clare, Loren P.

    2013-01-01

    Contact Graph Routing (CGR) for Delay/Disruption Tolerant Networking (DTN) space-based networks makes use of the predictable nature of node contacts to make real-time routing decisions given unpredictable traffic patterns. The contact graph will have been disseminated to all nodes before the start of route computation. CGR was designed for space-based networking environments where future contact plans are known or are independently computable (e.g., using known orbital dynamics). For each data item (known as a bundle in DTN), a node independently performs route selection by examining possible paths to the destination. Route computation could conceivably run thousands of times a second, so computational load is important. This work refers to the simulation software model of Enhanced Contact Graph Routing (ECGR) for DTN Bundle Protocol in JPL's MACHETE simulation tool. The simulation model was used for performance analysis of CGR and led to several performance enhancements. The simulation model was used to demonstrate the improvements of ECGR over CGR as well as other routing methods in space network scenarios. ECGR moved to using earliest arrival time because it is a global monotonically increasing metric that guarantees the safety properties needed for the solution's correctness since route re-computation occurs at each node to accommodate unpredicted changes (e.g., traffic pattern, link quality). Furthermore, using earliest arrival time enabled the use of the standard Dijkstra algorithm for path selection. The Dijkstra algorithm for path selection has a well-known inexpensive computational cost. These enhancements have been integrated into the open source CGR implementation. The ECGR model is also useful for route metric experimentation and comparisons with other DTN routing protocols particularly when combined with MACHETE's space networking models and Delay Tolerant Link State Routing (DTLSR) model.
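The earliest-arrival-time metric described above is monotonically increasing along a path, which is what lets ECGR use the standard Dijkstra algorithm. The sketch below runs Dijkstra over a hypothetical three-node contact plan; the plan format and node names are invented for illustration and are not MACHETE's data model:

```python
import heapq

def earliest_arrival(contacts, src, dst, t_start):
    """Dijkstra on a DTN contact plan using earliest arrival time as the metric.

    contacts: list of (from, to, start, end, owlt) windows; a bundle that
    departs within [start, end] arrives owlt (one-way light time) later.
    """
    best = {src: t_start}
    pq = [(t_start, src)]
    while pq:
        t, node = heapq.heappop(pq)
        if node == dst:
            return t
        if t > best.get(node, float("inf")):
            continue                     # stale queue entry
        for u, v, s, e, owlt in contacts:
            if u != node:
                continue
            depart = max(t, s)           # wait for the contact to open
            if depart > e:
                continue                 # contact closes before we can use it
            arrive = depart + owlt
            if arrive < best.get(v, float("inf")):
                best[v] = arrive
                heapq.heappush(pq, (arrive, v))
    return None                          # destination unreachable

# hypothetical plan: relaying A->B->C beats waiting for the late A->C contact
plan = [
    ("A", "B", 0, 10, 1),
    ("B", "C", 20, 30, 2),
    ("A", "C", 50, 60, 1),
]
t = earliest_arrival(plan, "A", "C", t_start=0)
```

Here the direct A->C contact does not open until t=50, so routing via B (arriving at t=22) wins, exactly the kind of decision a contact-graph router makes from the predictable contact plan.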

  12. REAL STOCK PRICES AND THE LONG-RUN MONEY DEMAND FUNCTION IN MALAYSIA: Evidence from Error Correction Model

    Directory of Open Access Journals (Sweden)

    Naziruddin Abdullah

    2004-06-01

    Full Text Available This study adopts the error correction model to empirically investigate the role of real stock prices in the long-run money demand function in the Malaysian financial (money) market for the period 1977:Q1-1997:Q2. Specifically, an attempt is made to check whether real narrow money (M1/P) is cointegrated with selected variables such as the industrial production index (IPI), one-year T-Bill rates (TB12), and real stock prices (RSP). If cointegration is found among the dependent and independent variables, it may imply that there exists a long-run co-movement among these variables in the Malaysian money market. From the empirical results it is found that the cointegration between money demand and real stock prices (RSP) is positive, implying that in the long run there is a positive association between real stock prices (RSP) and demand for real narrow money (M1/P). The policy implication that can be extracted from this study is that an increase in stock prices is likely to necessitate an expansionary monetary policy to prevent nominal income or the inflation target from undershooting.
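The error correction methodology referred to in this abstract is commonly implemented as the Engle-Granger two-step procedure: estimate the long-run relation by OLS, then regress short-run changes on the lagged residual (the error-correction term). The sketch below runs those two steps on synthetic cointegrated data; the series and coefficients are invented for illustration and are not the Malaysian data:

```python
import random

def ols(y, x):
    """OLS of y on a constant and one regressor; returns (alpha, beta, residuals)."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    alpha = my - beta * mx
    resid = [yi - alpha - beta * xi for xi, yi in zip(x, y)]
    return alpha, beta, resid

rng = random.Random(3)
# Synthetic cointegrated pair: 'money' tracks 'stock prices' in the long run.
rsp = [0.0]
for _ in range(399):
    rsp.append(rsp[-1] + rng.gauss(0.0, 1.0))          # random walk, I(1)
m1p = [0.8 * p + rng.gauss(0.0, 0.5) for p in rsp]     # stationary deviation

# Step 1: long-run relation; residuals form the error-correction term.
alpha, beta, ect = ols(m1p, rsp)

# Step 2: short-run dynamics; the change in money regressed on the
# lagged error-correction term. gamma < 0 means deviations are corrected.
dm = [m1p[t] - m1p[t - 1] for t in range(1, len(m1p))]
_, gamma, _ = ols(dm, ect[:-1])
```

A negative, significant `gamma` is the error-correction evidence: when money sits above its long-run relation with stock prices, it subsequently falls back toward it.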

  13. Stability Criterion for Humanoid Running

    Institute of Scientific and Technical Information of China (English)

    LI Zhao-Hui; HUANG Qiang; LI Ke-Jie

    2005-01-01

    A humanoid robot has high mobility but also a potential risk of tipping over. Until now, the main topic in humanoid robotics has been walking stability; the issue of running stability has rarely been investigated. Running is different from walking, and its dynamic stability is more difficult to maintain. The objective of this paper is to study a stability criterion for humanoid running based on the whole dynamics. First, the cycle and the dynamics of running are analyzed. Then, the stability criterion of humanoid running is presented. Finally, the effectiveness of the proposed stability criterion is illustrated by a dynamic simulation example using a dynamic analysis and design system (DADS).

  14. Modeling and simulation of luminescence detection platforms.

    Science.gov (United States)

    Salama, Khaled; Eltoukhy, Helmy; Hassibi, Arjang; El-Gamal, Abbas

    2004-06-15

    Motivated by the design of an integrated CMOS-based detection platform, a simulation model for CCD and CMOS imager-based luminescence detection systems is developed. The model comprises four parts. The first portion models the process of photon flux generation from luminescence probes using ATP-based and luciferase label-based assay kinetics. An optics simulator is then used to compute the incident photon flux on the imaging plane for a given photon flux and system geometry. Subsequently, the output image is computed using a detailed imaging sensor model that accounts for photodetector spectral response, dark current, conversion gain, and various noise sources. Finally, signal processing algorithms are applied to the image to enhance detection reliability and hence increase the overall system throughput. To validate the model, simulation results are compared to experimental results obtained from a CCD-based system that was built to emulate the integrated CMOS-based platform.
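The imaging-sensor part of such a model, photon flux converted to electrons with shot noise, dark current, conversion gain, and read noise, can be sketched as a toy pixel model. All parameter values below are hypothetical and the structure is a generic sensor model, not the paper's four-part simulator:

```python
import math
import random

def sense_pixel(photon_flux, t_exp, qe, dark_e_per_s,
                gain_uv_per_e, read_noise_e, rng):
    """Toy imager pixel: photons -> electrons (quantum efficiency), plus
    dark-current electrons, shot noise (Gaussian approximation of Poisson),
    conversion gain to microvolts, and additive read noise."""
    mean_e = photon_flux * t_exp * qe + dark_e_per_s * t_exp
    electrons = max(0.0, rng.gauss(mean_e, math.sqrt(mean_e)))  # shot noise
    return (gain_uv_per_e * electrons
            + rng.gauss(0.0, gain_uv_per_e * read_noise_e))     # read noise

rng = random.Random(4)
kwargs = dict(t_exp=1.0, qe=0.6, dark_e_per_s=10.0,
              gain_uv_per_e=5.0, read_noise_e=8.0, rng=rng)
bright = [sense_pixel(5000.0, **kwargs) for _ in range(500)]  # luminescent spot
dark = [sense_pixel(0.0, **kwargs) for _ in range(500)]       # background pixel
mean_bright = sum(bright) / len(bright)
mean_dark = sum(dark) / len(dark)
```

Averaging many frames of the dark pixels estimates the dark-current pedestal, which is the kind of signal-processing step the abstract says is applied to enhance detection reliability.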

  15. Standard Model Higgs boson production in the decay mode H->bb in association with a W or Z boson for High Luminosity LHC Running

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00176100; The ATLAS collaboration

    2015-01-01

    A key outstanding observation is the decay of the Higgs boson to b-quarks, motivating a study into the prospects of this channel in future LHC runs. This poster summarises a simulated analysis of Standard Model H->bb decay, produced in association with a vector boson at the ATLAS detector for high-luminosity, 14 TeV proton-proton LHC collisions. Efficiency and resolution smearing functions were applied to generator-level Monte Carlo samples to reproduce the expected performance of the upgraded ATLAS detector, for the foreseen amount of pile-up due to multiple overlapping proton-proton collisions. The expected signal significance and signal strength is presented for 300 fb-1 and 3000 fb-1 with an average pile-up of 60 and 140 respectively.

  16. Standard Model Higgs boson production in the decay mode H->bb in association with a W or Z boson for High Luminosity LHC Running

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00176100

    2016-01-01

    A key outstanding observation is the decay of the Higgs boson to b-quarks, motivating a study into the prospects of this channel in future LHC runs. This proceeding summarises a simulated analysis of Standard Model H->bb decay, produced in association with a vector boson at the ATLAS detector for 14 TeV proton-proton collisions at the high-luminosity LHC. Efficiency and resolution smearing functions were applied to generator-level Monte Carlo samples to reproduce the expected performance of the upgraded ATLAS detector, for the foreseen amount of pile-up due to multiple overlapping proton-proton collisions. The expected signal significance and signal strength is presented for 300/fb and 3000/fb with an average pile-up of 60 and 140 respectively.

  17. SOFT MODELLING AND SIMULATION IN STRATEGY

    Directory of Open Access Journals (Sweden)

    Luciano Rossoni

    2006-06-01

    Full Text Available A certain resistance exists on the part of the managers responsible for strategy to using modeling and simulation techniques and tools. Many find them excessively complicated, while others see them as too rigid and mathematical for use in strategy under uncertain and turbulent environments. However, some interpretative approaches exist that meet, in part, the needs of these decision makers. The objective of this work is to demonstrate, in a clear and simple form, some of the most powerful interpretative (soft) approaches, methodologies and tools for modeling and simulation in the area of business strategy. We first define what models and simulation are, together with some aspects of modeling and simulation in the strategy area. We then review some soft modeling approaches, which see the modeling process as much more than a simply mechanical one since, as Simon observed, human beings are boundedly rational and their decisions are influenced by a series of subjective issues related to the environment in which they operate. Keywords: strategy, modeling and simulation, soft systems methodology, cognitive map, systems dynamics.

  18. Modeling and Simulation of Hydraulic Engine Mounts

    Institute of Scientific and Technical Information of China (English)

    DUAN Shanzhong; Marshall McNea

    2012-01-01

    Hydraulic engine mounts are widely used in automotive powertrains for vibration isolation. A lumped mechanical parameter model is a traditional approach to model and simulate such mounts. This paper presents a dynamical model of a passive hydraulic engine mount with a double chamber, an inertia track, a decoupler, and a plunger. The model is developed based on the analogy between electrical systems and mechanical-hydraulic systems, and is established to capture both the low- and high-frequency dynamic behavior of the hydraulic mount. The model will be further used to find the approximate pulse responses of the mounts in terms of force transmission and top chamber pressure. The closed-form solution from the simplified linear model may provide some insight into the highly nonlinear behavior of the mounts. Based on the model, computer simulation has been carried out to study the dynamic performance of the hydraulic mount.
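As a much-simplified stand-in for the lumped-parameter model described above, the sketch below computes the pulse response and transmitted force of a single degree-of-freedom mass-spring-damper mount. This is the simplest lumped analog, without the double chamber, inertia track or decoupler, and all parameter values are illustrative, not the paper's:

```python
def pulse_response(m, c, k, force, dt, steps):
    """Pulse response of a single-DOF mass-spring-damper mount.

    State (x, v) advanced with semi-implicit Euler; the force passed
    through the mount to the chassis is F_transmitted = c*v + k*x.
    """
    x, v = 0.0, 0.0
    transmitted = []
    for n in range(steps):
        f = force(n * dt)                # external (engine-side) force
        a = (f - c * v - k * x) / m
        v += a * dt                      # semi-implicit: update v first
        x += v * dt
        transmitted.append(c * v + k * x)
    return transmitted

# hypothetical mount: 1 kg, 40 N·s/m, 10 kN/m; 10 ms rectangular 100 N pulse
trans = pulse_response(m=1.0, c=40.0, k=1.0e4,
                       force=lambda t: 100.0 if t < 0.01 else 0.0,
                       dt=1e-4, steps=4000)
peak = max(abs(f) for f in trans)
```

The transmitted-force history rings at the mount's natural frequency and decays with its damping ratio; comparing the peak transmitted force against the applied pulse is the basic force-transmissibility question such mount models answer.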

  19. Modelling and simulating fire tube boiler performance

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels;

    2003-01-01

    A model for a flue gas boiler covering the flue gas and the water/steam side has been formulated. The model has been formulated as a number of sub-models that are merged into an overall model for the complete boiler. Sub-models have been defined for the furnace, the convection zone (split in two: a zone submerged in water and a zone covered by steam), a model for the material in the boiler (the steel), and two models for, respectively, the water/steam zone (the boiling) and the steam. The dynamic model has been developed as a number of Differential-Algebraic-Equation systems (DAE). Subsequently MatLab/Simulink has been applied for carrying out the simulations. To be able to verify the simulated results, experiments have been carried out on a full-scale boiler plant.

  20. Modelling and simulating fire tube boiler performance

    DEFF Research Database (Denmark)

    Sørensen, Kim; Karstensen, Claus; Condra, Thomas Joseph;

    2003-01-01

    A model for a flue gas boiler covering the flue gas and the water/steam side has been formulated. The model has been formulated as a number of sub-models that are merged into an overall model for the complete boiler. Sub-models have been defined for the furnace, the convection zone (split in two: a zone submerged in water and a zone covered by steam), a model for the material in the boiler (the steel), and two models for, respectively, the water/steam zone (the boiling) and the steam. The dynamic model has been developed as a number of Differential-Algebraic-Equation systems (DAE). Subsequently MatLab/Simulink has been applied for carrying out the simulations. To be able to verify the simulated results, experiments have been carried out on a full-scale boiler plant.

  1. Global ice volume variations through the last glacial cycle simulated by a 3-D ice-dynamical model

    NARCIS (Netherlands)

    Bintanja, R.; Wal, R.S.W. van de; Oerlemans, J.

    2002-01-01

    A coupled ice sheet—ice shelf—bedrock model was run at 20 km resolution to simulate the evolution of global ice cover during the last glacial cycle. The mass balance model uses monthly mean temperature and precipitation as input and incorporates the albedo—mass balance feedback. The model is forced by...

  2. The 14 TeV LHC Takes Aim at SUSY: A No-Scale Supergravity Model for LHC Run 2

    CERN Document Server

    Li, Tianjun; Nanopoulos, Dimitri V; Walker, Joel W

    2015-01-01

    The Supergravity model named No-Scale ${\cal F}$-$SU(5)$, which is based upon the flipped $SU(5)$ Grand Unified Theory (GUT) with additional TeV-scale vector-like flippon multiplets, has been partially probed during LHC Run 1 at 7-8 TeV, though the majority of its model space remains viable and should be accessible to the 13-14 TeV LHC during Run 2. The model framework possesses the rather unique capacity to provide a light CP-even Higgs boson mass in the favored 124-126 GeV window while simultaneously retaining a testably light supersymmetry (SUSY) spectrum. We summarize the outlook for No-Scale ${\cal F}$-$SU(5)$ at the 13-14 TeV LHC and review a promising methodology for the discrimination of its long-chain cascade decay signature. We further show that proportional dependence of all model scales upon the unified gaugino mass $M_{1/2}$ minimizes electroweak fine-tuning, allowing the $Z$-boson mass $M_Z$ to be expressed as an explicit function of $M_{1/2}$, $M_Z^2 = M_Z^2 (M_{1/2}^2)$, with implicit depe...

  3. Computer simulations of the random barrier model

    DEFF Research Database (Denmark)

    Schrøder, Thomas; Dyre, Jeppe

    2002-01-01

    A brief review of experimental facts regarding ac electronic and ionic conduction in disordered solids is given followed by a discussion of what is perhaps the simplest realistic model, the random barrier model (symmetric hopping model). Results from large scale computer simulations are presented......, focusing on universality of the ac response in the extreme disorder limit. Finally, some important unsolved problems relating to hopping models for ac conduction are listed....
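A minimal Monte Carlo sketch of the symmetric hopping (random barrier) model mentioned above: walkers on a one-dimensional ring attempt jumps over quenched random barriers with Arrhenius-like acceptance, and the mean squared displacement drops as disorder becomes extreme. The lattice size, barrier distribution, and inverse temperatures are illustrative choices, not those of the paper's large-scale simulations:

```python
import math
import random

def hop_msd(n_sites, n_walkers, n_steps, beta, seed=2):
    """Mean squared displacement for Monte Carlo hopping on a 1-D ring
    with quenched random barriers (symmetric hopping model).

    Barrier i sits between sites i and i+1; a jump over barrier E is
    accepted with probability exp(-beta*E), so large beta corresponds
    to the extreme disorder limit.
    """
    rng = random.Random(seed)
    barriers = [rng.random() for _ in range(n_sites)]   # quenched disorder
    disp2 = 0.0
    for _ in range(n_walkers):
        pos, x = rng.randrange(n_sites), 0
        for _ in range(n_steps):
            step = rng.choice((-1, 1))
            b = barriers[pos] if step == 1 else barriers[(pos - 1) % n_sites]
            if rng.random() < math.exp(-beta * b):      # Arrhenius acceptance
                pos = (pos + step) % n_sites
                x += step                               # unwrapped displacement
        disp2 += x * x
    return disp2 / n_walkers

msd_low = hop_msd(200, 200, 500, beta=1.0)    # mild disorder
msd_high = hop_msd(200, 200, 500, beta=8.0)   # toward the extreme disorder limit
```

At large beta the walkers spend long stretches trapped between high barriers, which is the mechanism behind the universal ac response in the extreme disorder limit that the abstract refers to.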

  4. Modeling and simulating of unloading welding transformer

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The simulation model of an unloading welding transformer was established on the basis of MATLAB software, and the modeling principle is described in detail in this paper. The model is made up of three sub-models, i.e. the linear inductor sub-model, the non-linear inductor sub-model and the current-controlled series connection sub-model, and these sub-models were joined together by means of segmented linearization. The simulation results showed that, under the conditions of a high converter frequency and a large cross-section of the magnet core of a welding transformer, the non-linear inductor sub-model can be substituted by a linear inductor sub-model; and that the leakage reactance in the welding transformer is one of the main causes of over-current and over-voltage in the inverter. The simulation results demonstrate that the over-voltage produced by the leakage reactance is nearly twice the input voltage supplied to the transformer, and that the duration of the over-voltage depends on the time constant τ1. As τ1 decreases, the amplitude of the over-current increases and its duration becomes shorter; conversely, as τ1 increases, the amplitude of the over-current decreases and its duration becomes longer. The model has played an important role in the development of the inverter resistance welding machine.

  5. Revolutions in energy through modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Tatro, M.; Woodard, J.

    1998-08-01

    The development and application of energy technologies, from generation to storage, have improved dramatically with the advent of advanced computational tools, particularly modeling and simulation. Modeling and simulation are not new to energy technology development, and have been used extensively ever since the first commercial computers were available. However, recent advances in computing power and access have broadened their extent and use, and, through the increased fidelity (i.e., accuracy) of models made possible by greatly enhanced computing power, the balance point between modeling and experimentation has shifted toward increased reliance on modeling and simulation. The complex nature of energy technologies has motivated researchers to use these tools to better understand performance, reliability and cost issues related to energy. The tools originated in sciences such as the strength of materials (nuclear reactor containment vessels); physics, heat transfer and fluid flow (oil production); chemistry, physics, and electronics (photovoltaics); and geosciences and fluid flow (oil exploration and reservoir storage). Other tools include mathematics, such as statistics, for assessing project risks. This paper describes a few advancements made possible by these tools and explores the benefits and costs of their use, particularly as they relate to the acceleration of energy technology development. The computational complexity ranges from basic spreadsheets to complex numerical simulations using hardware ranging from personal computers (PCs) to Cray computers. In all cases, the benefits of using modeling and simulation relate to lower risks, accelerated technology development, or lower cost projects.

  6. Inventory Reduction Using Business Process Reengineering and Simulation Modeling.

    Science.gov (United States)

    1996-12-01

    center is analyzed using simulation modeling and business process reengineering (BPR) concepts. The two simulation models were designed and evaluated by...reengineering and simulation modeling offer powerful tools to aid the manager in reducing cycle time and inventory levels.

  7. A Comparison of Biased Simulation Schemes for Stochastic Volatility Models

    NARCIS (Netherlands)

    R. Lord (Roger); R. Koekkoek (Remmert); D.J.C. van Dijk (Dick)

    2006-01-01

    textabstractWhen using an Euler discretisation to simulate a mean-reverting square root process, one runs into the problem that while the process itself is guaranteed to be nonnegative, the discretisation is not. Although an exact and efficient simulation algorithm exists for this process, at presen
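The negativity problem described in this abstract can be sketched concretely. Below is a minimal, illustrative implementation of one commonly discussed remedy, a "full truncation" Euler scheme for a mean-reverting square-root (CIR-type) process dv = κ(θ − v)dt + σ√v dW: the state is floored at zero inside the drift and diffusion terms so the square root stays defined, even though the raw discretised state may dip below zero. The parameter values are assumptions for illustration only, not taken from the paper.

```python
import math
import random

def cir_full_truncation_euler(v0, kappa, theta, sigma, dt, n_steps, rng=random):
    """Simulate one path of a mean-reverting square-root process with a
    full-truncation Euler scheme: the value fed into the drift and the
    square root is max(v, 0), so the scheme never takes sqrt of a
    negative number even when the raw discretised state goes negative."""
    v = v0
    path = [v0]
    sqrt_dt = math.sqrt(dt)
    for _ in range(n_steps):
        v_plus = max(v, 0.0)  # truncate before use
        v = v + kappa * (theta - v_plus) * dt \
              + sigma * math.sqrt(v_plus) * rng.gauss(0.0, 1.0) * sqrt_dt
        path.append(v)
    return path

random.seed(1)
# Illustrative parameters: one year of daily steps.
path = cir_full_truncation_euler(v0=0.04, kappa=1.5, theta=0.04,
                                 sigma=0.3, dt=1 / 252, n_steps=252)
```

The bias of such schemes comes precisely from how the truncation (or reflection, or absorption) is applied, which is what comparisons of this kind quantify.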

  8. Simulation and modeling of turbulent flows

    CERN Document Server

    Gatski, Thomas B; Lumley, John L

    1996-01-01

    This book provides students and researchers in fluid engineering with an up-to-date overview of turbulent flow research in the areas of simulation and modeling. A key element of the book is the systematic, rational development of turbulence closure models and related aspects of modern turbulent flow theory and prediction. Starting with a review of the spectral dynamics of homogenous and inhomogeneous turbulent flows, succeeding chapters deal with numerical simulation techniques, renormalization group methods and turbulent closure modeling. Each chapter is authored by recognized leaders in their respective fields, and each provides a thorough and cohesive treatment of the subject.

  9. A simulation model for probabilistic analysis of Space Shuttle abort modes

    Science.gov (United States)

    Hage, R. T.

    1993-01-01

    A simulation model which was developed to provide a probabilistic analysis tool to study the various space transportation system abort mode situations is presented. The simulation model is based on Monte Carlo simulation of an event-tree diagram which accounts for events during the space transportation system's ascent and its abort modes. The simulation model considers just the propulsion elements of the shuttle system (i.e., external tank, main engines, and solid boosters). The model was developed to provide a better understanding of the probability of occurrence and successful completion of abort modes during the vehicle's ascent. The results of the simulation runs discussed are for demonstration purposes only; they are not official NASA probability estimates.
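The Monte Carlo event-tree approach described above can be sketched in a few lines. The branch probabilities below are invented placeholders for illustration (emphatically not NASA estimates, echoing the abstract's own caveat); the structure, sampling each branch point and tallying outcome frequencies, is the essential idea.

```python
import random

# Illustrative, made-up branch probabilities -- NOT official estimates.
P_ENGINE_FAIL = 0.01     # propulsion failure occurs during ascent
P_ABORT_SUCCESS = 0.90   # abort mode completes successfully given a failure

def one_ascent(rng):
    """Walk one branch of a simple two-level event tree."""
    if rng.random() >= P_ENGINE_FAIL:
        return "nominal"
    return "abort_ok" if rng.random() < P_ABORT_SUCCESS else "abort_fail"

def monte_carlo(n_trials, seed=0):
    """Estimate outcome probabilities by repeated sampling of the tree."""
    rng = random.Random(seed)
    counts = {"nominal": 0, "abort_ok": 0, "abort_fail": 0}
    for _ in range(n_trials):
        counts[one_ascent(rng)] += 1
    return {k: v / n_trials for k, v in counts.items()}

est = monte_carlo(100_000)
```

A real abort-mode model would have many more branch points (one per ascent phase and abort option), but each is sampled the same way.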

  10. Modeling & Simulation Executive Agent Panel

    Science.gov (United States)

    2007-11-02

    Richard W. Performing organization: Office of the Oceanographer of the Navy ... acquisition, and training communities.” MSEA role: facilitator in the project startup phase, catalyst during development, certifier in the ... Acoustic models: Parabolic Equation 5.0, ASTRAL 5.0, ASPM 4.3, Gaussian Ray Bundle 1.0, High Freq Env Acoustic (HFEVA) 1.0, COLOSSUS II 1.0, Low Freq Bottom Loss.

  11. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    , and the total stress level (i.e. stresses introduced due to internal pressure plus stresses introduced due to temperature gradients) must always be kept below the allowable stress level. In this way, the increased water-/steam space that should allow for better dynamic performance in the end causes limited freedom with respect to dynamic operation of the plant. By means of an objective function including the price of the plant as well as a quantification of the value of dynamic operation of the plant, an optimization is carried out. The dynamic model of the boiler plant is applied to define parts...

  12. Modelling, simulating and optimizing Boilers

    DEFF Research Database (Denmark)

    Sørensen, Kim; Condra, Thomas Joseph; Houbak, Niels

    2003-01-01

    , and the total stress level (i.e. stresses introduced due to internal pressure plus stresses introduced due to temperature gradients) must always be kept below the allowable stress level. In this way, the increased water-/steam space that should allow for better dynamic performance in the end causes limited freedom with respect to dynamic operation of the plant. By means of an objective function including the price of the plant as well as a quantification of the value of dynamic operation of the plant, an optimization is carried out. The dynamic model of the boiler plant is applied to define parts...

  13. Simulation of daylight in digital models (Simulering af dagslys i digitale modeller)

    DEFF Research Database (Denmark)

    Villaume, René Domine; Ørstrup, Finn Rude

    2004-01-01

    Through various daylight simulations, the project investigates the quality of visualisations of complex lighting conditions in digital models used to communicate architecture via the web. In a digital 3D model of Utzon Associates' Paustian House, natural daylight is simulated with different rendering methods, such as "shaded render", "ray tracing", "Final Gather" and "Global Illumination"...

  14. Validity of microgravity simulation models on earth

    DEFF Research Database (Denmark)

    Regnard, J; Heer, M; Drummer, C

    2001-01-01

    Many studies have used water immersion and head-down bed rest as experimental models to simulate responses to microgravity. However, some data collected during space missions are at variance or in contrast with observations collected from experimental models. These discrepancies could reflect inc...

  15. Molecular simulation and modeling of complex I.

    Science.gov (United States)

    Hummer, Gerhard; Wikström, Mårten

    2016-07-01

    Molecular modeling and molecular dynamics simulations play an important role in the functional characterization of complex I. With its large size and complicated function, linking quinone reduction to proton pumping across a membrane, complex I poses unique modeling challenges. Nonetheless, simulations have already helped in the identification of possible proton transfer pathways. Simulations have also shed light on the coupling between electron and proton transfer, thus pointing the way in the search for the mechanistic principles underlying the proton pump. In addition to reviewing what has already been achieved in complex I modeling, we aim here to identify pressing issues and to provide guidance for future research to harness the power of modeling in the functional characterization of complex I. This article is part of a Special Issue entitled Respiratory complex I, edited by Volker Zickermann and Ulrich Brandt. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Hybrid simulation models of production networks

    CERN Document Server

    Kouikoglou, Vassilis S

    2001-01-01

    This book is concerned with a most important area of industrial production, that of analysis and optimization of production lines and networks using discrete-event models and simulation. The book introduces a novel approach that combines analytic models and discrete-event simulation. Unlike conventional piece-by-piece simulation, this method observes a reduced number of events between which the evolution of the system is tracked analytically. Using this hybrid approach, several models are developed for the analysis of production lines and networks. The hybrid approach combines speed and accuracy for exceptional analysis of most practical situations. A number of optimization problems, involving buffer design, workforce planning, and production control, are solved through the use of hybrid models.

  17. Investigating Output Accuracy for a Discrete Event Simulation Model and an Agent Based Simulation Model

    CERN Document Server

    Majid, Mazlina Abdul; Siebers, Peer-Olaf

    2010-01-01

    In this paper, we investigate output accuracy for a Discrete Event Simulation (DES) model and Agent Based Simulation (ABS) model. The purpose of this investigation is to find out which of these simulation techniques is the best one for modelling human reactive behaviour in the retail sector. In order to study the output accuracy in both models, we have carried out a validation experiment in which we compared the results from our simulation models to the performance of a real system. Our experiment was carried out using a large UK department store as a case study. We had to determine an efficient implementation of management policy in the store's fitting room using DES and ABS. Overall, we have found that both simulation models were a good representation of the real system when modelling human reactive behaviour.

  18. Parallel runs of a large air pollution model on a grid of Sun computers

    DEFF Research Database (Denmark)

    Alexandrov, V.N.; Owczarz, W.; Thomsen, Per Grove

    2004-01-01

    Large-scale air pollution models can successfully be used in different environmental studies. These models are described mathematically by systems of partial differential equations. Splitting procedures followed by discretization of the spatial derivatives lead to several large systems of ordin...

  19. Power electronics system modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Lai, Jih-Sheng

    1994-12-31

    This paper introduces the control-system design software packages SIMNON and MATLAB/SIMULINK for power electronics system simulation. A complete power electronics system typically consists of a rectifier bridge along with its smoothing capacitor, an inverter, and a motor. The system components, whether discrete or continuous, linear or nonlinear, are modeled with mathematical equations. Inverter control methods, such as pulse-width modulation and hysteresis current control, are expressed either as computer algorithms or as digital circuits. After describing the component models and control methods, computer programs are developed for complete system simulation. Simulation results are mainly used for studying system performance, such as input and output current harmonics, torque ripple, and speed response. Key computer programs and simulation results are demonstrated for educational purposes.
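The hysteresis current control mentioned above has a simple bang-bang structure that is easy to sketch outside SIMNON or SIMULINK. The following is an illustrative Python sketch for a single-phase RL load with assumed parameter values (not from the paper): the applied voltage flips to +Vdc when the current falls below the lower band and to −Vdc when it exceeds the upper band, with the current integrated by forward Euler.

```python
def hysteresis_current_control(i_ref, i0, band, v_dc, R, L, dt, n_steps):
    """Bang-bang hysteresis controller on an RL load:
    switch to +Vdc below (i_ref - band), to -Vdc above (i_ref + band),
    then integrate di/dt = (v - R*i)/L with forward Euler."""
    i, v = i0, v_dc
    currents = []
    for _ in range(n_steps):
        if i < i_ref - band:
            v = +v_dc
        elif i > i_ref + band:
            v = -v_dc
        i += (v - R * i) / L * dt
        currents.append(i)
    return currents

# Illustrative values: 10 A reference, +/-0.5 A band, 100 V DC link,
# R = 1 ohm, L = 10 mH, 10 us time step, 0.2 s of simulated time.
out = hysteresis_current_control(i_ref=10.0, i0=0.0, band=0.5,
                                 v_dc=100.0, R=1.0, L=0.01,
                                 dt=1e-5, n_steps=20000)
```

The resulting current chatters inside the band around the reference; narrowing the band raises the switching frequency, which is the classic trade-off of this method.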

  20. Simulation of Gravity Currents Using VOF Model

    Institute of Scientific and Technical Information of China (English)

    邹建锋; 黄钰期; 应新亚; 任安禄

    2002-01-01

    Using the Volume of Fluid (VOF) multiphase flow model, two-dimensional gravity currents with three phases, including air, are numerically simulated in this article. The necessity of accounting for turbulence effects at high Reynolds numbers is demonstrated quantitatively with the Large Eddy Simulation (LES) turbulence model. The gravity currents are simulated for h ≠ H as well as h = H, where h is the depth of the gravity current before release and H is the depth of the intruded fluid. Uprising of swell occurs when a current flows horizontally into another, lighter one for h ≠ H. The questions of under what conditions the uprising of swell occurs and how long it lasts are considered in this article. All the simulated results are in reasonable agreement with the available experimental results.

  1. Snowmelt runoff modeling in simulation and forecasting modes with the Martinec-Rango model

    Science.gov (United States)

    Shafer, B.; Jones, E. B.; Frick, D. M. (Principal Investigator)

    1982-01-01

    The Martinec-Rango snowmelt runoff model was applied to two watersheds in the Rio Grande basin, Colorado: the South Fork Rio Grande, a drainage encompassing 216 sq mi without reservoirs or diversions, and the Rio Grande above Del Norte, a drainage encompassing 1,320 sq mi without major reservoirs. The model was successfully applied to both watersheds when run in a simulation mode for the period 1973-79. This period included both high and low runoff seasons. Central to the adaptation of the model to run in a forecast mode was the need to develop a technique to forecast the shape of the snow-cover depletion curves between satellite data points. Four separate approaches were investigated: simple linear estimation, multiple regression, parabolic exponential, and type curve. Only the parabolic exponential and type curve methods were run on the South Fork and Rio Grande watersheds for the 1980 runoff season, using satellite snow-cover updates when available. Although reasonable forecasts were obtained in certain situations, neither method seemed ready for truly operational forecasts, possibly due to a large amount of estimated climatic data for one or two primary base stations during the 1980 season.

  2. nIFTy galaxy cluster simulations II: radiative models

    CERN Document Server

    Sembolini, Federico; Pearce, Frazer R; Power, Chris; Knebe, Alexander; Kay, Scott T; Cui, Weiguang; Yepes, Gustavo; Beck, Alexander M; Borgani, Stefano; Cunnama, Daniel; Davé, Romeel; February, Sean; Huang, Shuiyao; Katz, Neal; McCarthy, Ian G; Murante, Giuseppe; Newton, Richard D A; Perret, Valentin; Saro, Alexandro; Schaye, Joop; Teyssier, Romain

    2015-01-01

    We have simulated the formation of a massive galaxy cluster (M$_{200}^{\rm crit}$ = 1.1$\times$10$^{15}h^{-1}M_{\odot}$) in a $\Lambda$CDM universe using 10 different codes (RAMSES, 2 incarnations of AREPO and 7 of GADGET), modeling hydrodynamics with full radiative subgrid physics. These codes include Smoothed-Particle Hydrodynamics (SPH), spanning traditional and advanced SPH schemes, adaptive mesh and moving mesh codes. Our goal is to study the consistency between simulated clusters modeled with different radiative physical implementations, such as cooling, star formation and AGN feedback. We compare images of the cluster at $z=0$, global properties such as mass, and radial profiles of various dynamical and thermodynamical quantities. We find that, with respect to non-radiative simulations, dark matter is more centrally concentrated, the extent not simply depending on the presence/absence of AGN feedback. The scatter in global quantities is substantially higher than for non-radiative runs. Intriguingly, a...

  3. Development of NASA's Models and Simulations Standard

    Science.gov (United States)

    Bertch, William J.; Zang, Thomas A.; Steele, Martin J.

    2008-01-01

    From the Space Shuttle Columbia Accident Investigation, there were several NASA-wide actions that were initiated. One of these actions was to develop a standard for development, documentation, and operation of Models and Simulations. Over the course of two-and-a-half years, a team of NASA engineers, representing nine of the ten NASA Centers developed a Models and Simulation Standard to address this action. The standard consists of two parts. The first is the traditional requirements section addressing programmatics, development, documentation, verification, validation, and the reporting of results from both the M&S analysis and the examination of compliance with this standard. The second part is a scale for evaluating the credibility of model and simulation results using levels of merit associated with 8 key factors. This paper provides an historical account of the challenges faced by and the processes used in this committee-based development effort. This account provides insights into how other agencies might approach similar developments. Furthermore, we discuss some specific applications of models and simulations used to assess the impact of this standard on future model and simulation activities.

  4. RUN COORDINATION

    CERN Multimedia

    G. Rakness.

    2013-01-01

    After three years of running, in February 2013 the era of sub-10-TeV LHC collisions drew to an end. Recall that the 2012 run had been extended by about three months to achieve the full complement of high-energy and heavy-ion physics goals prior to the start of Long Shutdown 1 (LS1), which is now underway. The LHC performance during these exciting years was excellent, delivering a total of 23.3 fb–1 of proton-proton collisions at a centre-of-mass energy of 8 TeV, 6.2 fb–1 at 7 TeV, and 5.5 pb–1 at 2.76 TeV. They also delivered 170 μb–1 lead-lead collisions at 2.76 TeV/nucleon and 32 nb–1 proton-lead collisions at 5 TeV/nucleon. During these years the CMS operations teams and shift crews made tremendous strides to commission the detector, repeatedly stepping up to meet the challenges at every increase of instantaneous luminosity and energy. Although it does not fully cover the achievements of the teams, a way to quantify their success is the fact that...

  5. Modelling and Simulation of Crude Oil Dispersion

    Directory of Open Access Journals (Sweden)

    Abdulfatai JIMOH

    2006-01-01

    Full Text Available This research work was carried out to develop a model equation for the dispersion of crude oil in water. Seven different crude oils (Bonny Light, Antan Terminal, Bonny Medium, Qua Iboe Light, Brass Light Mbede, Forcados Blend and Heavy H) were used as the subject crude oils. The developed model equation in this project, which is given as...It was developed starting from the equation for the oil dispersion rate in water, which is given as...The developed equation was then simulated with the aid of MathCAD 2000 Professional software. The experimental and model results obtained from the simulation of the model equation were plotted on the same axis against time of dispersion. The model results revealed close fittings between the experimental and the model results because the correlation coefficients and the r-square values calculated using a spreadsheet program were both found to be unity (1.00).
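The goodness-of-fit measure used above, the r-square (coefficient of determination) between experimental and model dispersion values, is straightforward to compute without a spreadsheet. A minimal sketch with hypothetical data (the paper's actual equation and measurements are elided in the abstract, so the numbers below are placeholders):

```python
def r_squared(y_obs, y_model):
    """Coefficient of determination: 1 - SS_res / SS_tot,
    comparing observed values against model predictions."""
    mean = sum(y_obs) / len(y_obs)
    ss_tot = sum((y - mean) ** 2 for y in y_obs)
    ss_res = sum((o - m) ** 2 for o, m in zip(y_obs, y_model))
    return 1.0 - ss_res / ss_tot

# Hypothetical observed vs. modelled dispersion values for illustration.
obs = [0.10, 0.21, 0.29, 0.41, 0.50]
model = [0.10, 0.20, 0.30, 0.40, 0.50]
r2 = r_squared(obs, model)
```

An r-square of exactly 1.00, as reported, means the residual sum of squares is zero, i.e. the model passes through every experimental point.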

  6. Simulation Modeling of Software Development Processes

    Science.gov (United States)

    Calavaro, G. F.; Basili, V. R.; Iazeolla, G.

    1996-01-01

    A simulation modeling approach is proposed for the prediction of software process productivity indices, such as cost and time-to-market, and the sensitivity analysis of such indices to changes in the organization parameters and user requirements. The approach uses a timed Petri Net and Object Oriented top-down model specification. Results demonstrate the model representativeness, and its usefulness in verifying process conformance to expectations, and in performing continuous process improvement and optimization.

  7. Incorporation of RAM techniques into simulation modeling

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, S.C. Jr.; Haire, M.J.; Schryver, J.C.

    1995-07-01

    This work concludes that reliability, availability, and maintainability (RAM) analytical techniques can be incorporated into computer network simulation modeling to yield an important new analytical tool. This paper describes the incorporation of failure and repair information into network simulation to build a stochastic computer model that represents the RAM performance of two vehicles being developed for the US Army: the Advanced Field Artillery System (AFAS) and the Future Armored Resupply Vehicle (FARV). The AFAS is the US Army's next generation self-propelled cannon artillery system. The FARV is a resupply vehicle for the AFAS. Both vehicles utilize automation technologies to improve the operational performance of the vehicles and reduce manpower. The network simulation model used in this work is task based. The model programmed in this application represents a typical battle mission and the failures and repairs that occur during that battle. Each task that the FARV performs--upload, travel to the AFAS, refuel, perform tactical/survivability moves, return to logistic resupply, etc.--is modeled. Such a model reproduces operational phenomena (e.g., failures and repairs) that are likely to occur in actual performance. Simulation tasks are modeled as discrete chronological steps; after the completion of each task, decisions are programmed that determine the next path to be followed. The result is a complex logic diagram or network. The network simulation model is developed within a hierarchy of vehicle systems, subsystems, and equipment and includes failure management subnetworks. RAM information and other performance measures are collected which have impact on design requirements. Design changes are evaluated through "what if" questions, sensitivity studies, and battle scenario changes.
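The task-based structure described above, discrete chronological tasks, each of which may fail and incur repair time, can be sketched very compactly. The task list, durations, and probabilities below are invented placeholders, not actual AFAS/FARV data; the point is the mechanism of folding failure/repair draws into a mission timeline.

```python
import random

# Hypothetical task network: (name, duration_h, failure_prob, repair_h).
# Values are illustrative assumptions, not actual AFAS/FARV data.
TASKS = [
    ("upload",             1.0, 0.02, 0.5),
    ("travel_to_afas",     0.5, 0.05, 1.0),
    ("refuel",             0.3, 0.01, 0.2),
    ("tactical_move",      0.8, 0.04, 0.7),
    ("return_to_resupply", 0.5, 0.05, 1.0),
]

def run_mission(rng):
    """Step through the tasks in order; each failure adds repair time."""
    t, failures = 0.0, 0
    for _, duration, p_fail, repair in TASKS:
        t += duration
        if rng.random() < p_fail:
            failures += 1
            t += repair
    return t, failures

def mean_mission_time(n, seed=0):
    """Monte Carlo estimate of expected mission duration."""
    rng = random.Random(seed)
    return sum(run_mission(rng)[0] for _ in range(n)) / n

avg = mean_mission_time(50_000)
```

Collecting failure counts and downtime per task from such runs is exactly the kind of RAM output that feeds back into design requirements.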

  8. Testing turbulent closure models with convection simulations

    CERN Document Server

    Snellman, J E; Mantere, M J; Rheinhardt, M; Dintrans, B

    2012-01-01

    Aims: To compare simple analytical closure models of turbulent Boussinesq convection for stellar applications with direct three-dimensional simulations both in homogeneous and inhomogeneous (bounded) setups. Methods: We use simple analytical closure models to compute the fluxes of angular momentum and heat as a function of rotation rate measured by the Taylor number. We also investigate cases with varying angles between the angular velocity and gravity vectors, corresponding to locating the computational domain at different latitudes ranging from the pole to the equator of the star. We perform three-dimensional numerical simulations in the same parameter regimes for comparison. The free parameters appearing in the closure models are calibrated by two fit methods using simulation data. Unique determination of the closure parameters is possible only in the non-rotating case and when the system is placed at the pole. In the other cases the fit procedures yield somewhat differing results. The quality of the closu...

  9. Analyzing Strategic Business Rules through Simulation Modeling

    Science.gov (United States)

    Orta, Elena; Ruiz, Mercedes; Toro, Miguel

    Service Oriented Architecture (SOA) holds promise for business agility since it allows business processes to change to meet new customer demands or market needs without causing a cascade of changes in the underlying IT systems. Business rules are the instrument chosen to help business and IT collaborate. In this paper, we propose the use of simulation models to model and simulate strategic business rules that are then disaggregated at different levels of an SOA architecture. Our proposal is aimed at helping find a good configuration for strategic business objectives and IT parameters. The paper includes a case study in which a simulation model is built to support business decision-making in a context where finding a good configuration for different business parameters and performance is too complex to analyze by trial and error.

  10. Fuzzy rule-based macroinvertebrate habitat suitability models for running waters

    NARCIS (Netherlands)

    Broekhoven, Van E.; Adriaenssens, V.; Baets, De B.; Verdonschot, P.F.M.

    2006-01-01

    A fuzzy rule-based approach was applied to a macroinvertebrate habitat suitability modelling problem. The model design was based on a knowledge base summarising the preferences and tolerances of 86 macroinvertebrate species for four variables describing river sites in springs up to small rivers in t

  11. A production scheduling simulation model for improving production efficiency

    Directory of Open Access Journals (Sweden)

    Cheng-Liang Yang

    2014-12-01

    Full Text Available A real manufacturing system of an electronics company was mimicked using a simulation model. The effects of dispatching rules and resource allocations on performance measures were explored. The results indicated that the dispatching rules of shortest processing time (SPT) and earliest due date are superior to the current first-in-first-out rule adopted by the company. A new combined rule, the smallest quotient of dividing the shortest remaining processing time (SRPT) by SPT (SRPT/SPT_Min), has been proposed and demonstrated the best performance on mean tardiness under the current resource situation. The results also showed that using fewer resources can increase their utilization, but it also increases the risk of delivery tardiness, which in turn will damage the organization's reputation in the long run. Some suggestions for future work are presented.
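Dispatching rules like those compared above are just priority keys used to order a job queue. The sketch below illustrates SPT, earliest due date, and one plausible reading of the combined SRPT/SPT_Min rule (rank each job by its remaining processing time divided by its processing time); the job data are hypothetical, invented for illustration.

```python
def spt_key(job):
    """Shortest processing time: prioritize the quickest job."""
    return job["proc"]

def edd_key(job):
    """Earliest due date: prioritize the most urgent job."""
    return job["due"]

def srpt_spt_min_key(job):
    """One reading of SRPT/SPT_Min: smallest quotient of remaining
    processing time over processing time goes first."""
    return job["remaining"] / job["proc"]

# Hypothetical jobs: processing time, remaining work, due date (hours).
jobs = [
    {"id": 1, "proc": 4.0, "remaining": 6.0, "due": 10.0},
    {"id": 2, "proc": 2.0, "remaining": 8.0, "due": 7.0},
    {"id": 3, "proc": 5.0, "remaining": 5.0, "due": 12.0},
]

spt_order = [j["id"] for j in sorted(jobs, key=spt_key)]            # [2, 1, 3]
edd_order = [j["id"] for j in sorted(jobs, key=edd_key)]            # [2, 1, 3]
combined = [j["id"] for j in sorted(jobs, key=srpt_spt_min_key)]    # [3, 1, 2]
```

In a full simulation, the chosen key re-ranks the queue each time a machine frees up, and performance measures such as mean tardiness are accumulated over the run.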

  12. Modeling and simulation with operator scaling

    CERN Document Server

    Cohen, Serge; Rosinski, Jan

    2009-01-01

    Self-similar processes are useful in modeling diverse phenomena that exhibit scaling properties. Operator scaling allows a different scale factor in each coordinate. This paper develops practical methods for modeling and simulating stochastic processes with operator scaling. A simulation method for operator stable Levy processes is developed, based on a series representation, along with a Gaussian approximation of the small jumps. Several examples are given to illustrate practical applications. A classification of operator stable Levy processes in two dimensions is provided according to their exponents and symmetry groups. We conclude with some remarks and extensions to general operator self-similar processes.

  13. Hemispherical sky simulator for daylighting model studies

    Energy Technology Data Exchange (ETDEWEB)

    Selkowitz, S.

    1981-07-01

    The design of a 24-foot-diameter hemispherical sky simulator recently completed at LBL is described. The goal was to produce a facility in which large models could be tested; which was suitable for research, teaching, and design; which could provide a uniform sky, an overcast sky, and several clear-sky luminance distributions, as well as accommodating an artificial sun. Initial operating experience with the facility is described, the sky simulator capabilities are reviewed, and its strengths and weaknesses relative to outdoor modeling tests are discussed.

  14. Wind Shear Target Echo Modeling and Simulation

    Directory of Open Access Journals (Sweden)

    Xiaoyang Liu

    2015-01-01

    Full Text Available Wind shear is a dangerous atmospheric phenomenon in aviation, defined as a sudden change in the speed or direction of the wind. In order to analyze the influence of wind shear on the efficiency of the airplane, this paper proposes a mathematical model of point-target rain echo and weather-target signal echo based on the Doppler effect. A wind field model is developed in this paper, and the antenna model is also studied using Bessel functions. The spectrum distribution of symmetric and asymmetric wind fields is investigated using the proposed mathematical model. The simulation results are in accordance with the radial velocity component, and they also confirm the correctness of the established antenna model.

  15. Nuclear reactor core modelling in multifunctional simulators

    Energy Technology Data Exchange (ETDEWEB)

    Puska, E.K. [VTT Energy, Nuclear Energy, Espoo (Finland)

    1999-06-01

    The thesis concentrates on the development of nuclear reactor core models for the APROS multifunctional simulation environment and the use of the core models in various kinds of applications. The work was started in 1986 as a part of the development of the entire APROS simulation system. The aim was to create core models that would serve in a reliable manner in an interactive, modular and multifunctional simulator/plant analyser environment. One-dimensional and three-dimensional core neutronics models have been developed. Both models have two energy groups and six delayed neutron groups. The three-dimensional finite difference type core model is able to describe both BWR- and PWR-type cores with quadratic fuel assemblies and VVER-type cores with hexagonal fuel assemblies. The one- and three-dimensional core neutronics models can be connected with the homogeneous, the five-equation or the six-equation thermal hydraulic models of APROS. The key feature of APROS is that the same physical models can be used in various applications. The nuclear reactor core models of APROS have been built in such a manner that the same models can be used in simulator and plant analyser applications, as well as in safety analysis. In the APROS environment the user can select the number of flow channels in the three-dimensional reactor core and either the homogeneous, the five- or the six-equation thermal hydraulic model for these channels. The thermal hydraulic model and the number of flow channels have a decisive effect on the calculation time of the three-dimensional core model and thus, at present, these particular selections make the major difference between a safety analysis core model and a training simulator core model. The emphasis on this thesis is on the three-dimensional core model and its capability to analyse symmetric and asymmetric events in the core. 
The factors affecting the calculation times of various three-dimensional BWR, PWR and WWER-type APROS core models have been

  16. Battery thermal models for hybrid vehicle simulations

    Science.gov (United States)

    Pesaran, Ahmad A.

    This paper summarizes battery thermal modeling capabilities for: (1) an advanced vehicle simulator (ADVISOR); and (2) battery module and pack thermal design. The National Renewable Energy Laboratory's (NREL's) ADVISOR is developed in the Matlab/Simulink environment. There are several battery models in ADVISOR for various chemistry types. Each of these models requires a thermal model to predict the temperature change that could affect battery performance parameters, such as resistance, capacity and state of charge. A lumped-capacitance battery thermal model was developed in the Matlab/Simulink environment and integrated with the ADVISOR battery performance models. For thermal evaluation and design of battery modules and packs, NREL has been using various computer-aided engineering tools, including commercial finite element analysis software. This paper discusses the ADVISOR battery thermal model and its results, along with the results of finite element modeling that were presented at the workshop on "Development of Advanced Battery Engineering Models" in August 2001.
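The lumped-capacitance idea mentioned above treats the whole module as a single thermal node: heat generation minus convective loss drives one temperature state. A minimal sketch follows, with illustrative parameter values that are assumptions, not NREL data; the governing balance is m·cp·dT/dt = q_gen − h·A·(T − T_amb).

```python
def lumped_battery_temp(T0, T_amb, m, cp, h, A, q_gen, dt, n_steps):
    """Single-node (lumped-capacitance) battery thermal model:
    m*cp*dT/dt = q_gen - h*A*(T - T_amb), integrated with forward Euler.
    A single node is valid when internal conduction is fast relative to
    surface convection (small Biot number)."""
    T = T0
    temps = [T0]
    for _ in range(n_steps):
        T += (q_gen - h * A * (T - T_amb)) / (m * cp) * dt
        temps.append(T)
    return temps

# Illustrative module: 1 kg, cp = 900 J/(kg*K), h = 10 W/(m^2*K),
# A = 0.05 m^2, 5 W of ohmic heating, 2 hours at 1 s steps.
temps = lumped_battery_temp(T0=25.0, T_amb=25.0, m=1.0, cp=900.0,
                            h=10.0, A=0.05, q_gen=5.0,
                            dt=1.0, n_steps=7200)
```

The temperature approaches the steady state T_amb + q_gen/(h·A) (here 35 °C) with time constant m·cp/(h·A); when the Biot number is not small, this is where finite element models take over.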

  17. Impact of treadmill running and sex on hippocampal neurogenesis in the mouse model of amyotrophic lateral sclerosis.

    Directory of Open Access Journals (Sweden)

    Xiaoxing Ma

    Full Text Available Hippocampal neurogenesis in the subgranular zone (SGZ of dentate gyrus (DG occurs throughout life and is regulated by pathological and physiological processes. The role of oxidative stress in hippocampal neurogenesis and its response to exercise or neurodegenerative diseases remains controversial. The present study was designed to investigate the impact of oxidative stress, treadmill exercise and sex on hippocampal neurogenesis in a murine model of heightened oxidative stress (G93A mice. G93A and wild type (WT mice were randomized to a treadmill running (EX or a sedentary (SED group for 1 or 4 wk. Immunohistochemistry was used to detect bromodeoxyuridine (BrdU labeled proliferating cells, surviving cells, and their phenotype, as well as for determination of oxidative stress (3-NT; 8-OHdG. BDNF and IGF1 mRNA expression was assessed by in situ hybridization. Results showed that: (1 G93A-SED mice had greater hippocampal neurogenesis, BDNF mRNA, and 3-NT, as compared to WT-SED mice. (2 Treadmill running promoted hippocampal neurogenesis and BDNF mRNA content and lowered DNA oxidative damage (8-OHdG in WT mice. (3 Male G93A mice showed significantly higher cell proliferation but a lower level of survival vs. female G93A mice. We conclude that G93A mice show higher hippocampal neurogenesis, in association with higher BDNF expression, yet running did not further enhance these phenomena in G93A mice, probably due to a 'ceiling effect' of already heightened basal levels of hippocampal neurogenesis and BDNF expression.

  18. Comparing the performance of SIMD computers by running large air pollution models

    DEFF Research Database (Denmark)

    Brown, J.; Hansen, Per Christian; Wasniewski, J.

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on these computers. Using a realistic large-scale model, we gained detailed insight about the performance of the computers involved when used to solve large-scale scientific...... problems that involve several types of numerical computations. The computers used in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216...

  19. runmlwin : A Program to Run the MLwiN Multilevel Modeling Software from within Stata

    Directory of Open Access Journals (Sweden)

    George Leckie

    2013-03-01

    Full Text Available We illustrate how to fit multilevel models in the MLwiN package seamlessly from within Stata using the Stata program runmlwin. We argue that using MLwiN and Stata in combination allows researchers to capitalize on the best features of both packages. We provide examples of how to use runmlwin to fit continuous, binary, ordinal, nominal and mixed response multilevel models by both maximum likelihood and Markov chain Monte Carlo estimation.

  20. Kanban simulation model for production process optimization

    Directory of Open Access Journals (Sweden)

    Golchev Riste

    2015-01-01

    Full Text Available A long time has passed since the KANBAN system was established as an efficient method for coping with excessive inventory. Still, the possibilities for its improvement through its integration with other different approaches should be investigated further. The basic research challenge of this paper is to present benefits of KANBAN implementation supported with Discrete Event Simulation (DES. In that direction, at the beginning, the basics of KANBAN system are presented with emphasis on the information and material flow, together with a methodology for implementation of KANBAN system. An analysis of combining the simulation with this methodology is presented. The paper is concluded with a practical example which shows that through understanding the philosophy of the implementation methodology of KANBAN system and the simulation methodology, a simulation model can be created which can serve as a basis for a variety of experiments that can be conducted within a short period of time, resulting in production process optimization.
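
    As a sketch of what such a DES might look like, consider a single-stage Kanban loop with Poisson demand and exponential service; the single-stage structure, the rates, and the lost-sales assumption are illustrative choices, not the paper's case study:

```python
import heapq
import random

def simulate_kanban(n_cards, sim_time=1000.0, seed=42,
                    demand_rate=0.9, service_rate=1.0):
    """Single-stage Kanban loop as a discrete event simulation.
    Finished-goods inventory is capped at n_cards; each withdrawal
    frees a card that authorizes one replenishment job. Demands that
    arrive at an empty supermarket are lost.
    Returns (fill_rate, time-average inventory)."""
    rng = random.Random(seed)
    inventory = n_cards          # start with a full supermarket
    jobs_queued = 0              # replenishment orders waiting/in service
    served = lost = 0
    area = 0.0                   # time-integral of inventory
    last_t = 0.0
    events = [(rng.expovariate(demand_rate), 'demand')]
    while events:
        t, kind = heapq.heappop(events)
        if t > sim_time:
            break
        area += inventory * (t - last_t)
        last_t = t
        if kind == 'demand':
            if inventory > 0:
                inventory -= 1
                served += 1
                jobs_queued += 1          # freed card -> production order
                if jobs_queued == 1:      # machine was idle: start it
                    heapq.heappush(
                        events, (t + rng.expovariate(service_rate), 'done'))
            else:
                lost += 1
            heapq.heappush(
                events, (t + rng.expovariate(demand_rate), 'demand'))
        else:  # 'done': one replenishment finished
            jobs_queued -= 1
            inventory += 1
            if jobs_queued > 0:           # more cards waiting: keep running
                heapq.heappush(
                    events, (t + rng.expovariate(service_rate), 'done'))
    fill_rate = served / (served + lost) if served + lost else 1.0
    return fill_rate, area / last_t
```

    Sweeping `n_cards` exposes the classic Kanban trade-off: more cards raise the fill rate at the cost of higher average inventory.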

  1. Modeling and Simulation Tools: From Systems Biology to Systems Medicine.

    Science.gov (United States)

    Olivier, Brett G; Swat, Maciej J; Moné, Martijn J

    2016-01-01

    Modeling is an integral component of modern biology. In this chapter we look into the role of the model, as it pertains to Systems Medicine, and the software that is required to instantiate and run it. We do this by comparing the development, implementation, and characteristics of tools that have been developed to work with two divergent methodologies: Systems Biology and Pharmacometrics. From the Systems Biology perspective we consider the concept of "Software as a Medical Device" and what this may imply for the migration of research-oriented simulation software into the domain of human health. In our second perspective, we see how in practice hundreds of computational tools already accompany drug discovery and development at every stage of the process. Standardized exchange formats are required to streamline the model exchange between tools, which would minimize translation errors and reduce the required time. With the emergence, almost 15 years ago, of the SBML standard, a large part of the domain of interest is already covered and models can be shared and passed from software to software without recoding them. Until recently the last stage of the process, the pharmacometric analysis used in clinical studies carried out on subject populations, lacked such an exchange medium. We describe a new emerging exchange format in Pharmacometrics which covers the non-linear mixed effects models, the standard statistical model type used in this area. By interfacing these two formats the entire domain can be covered by complementary standards and, subsequently, by the corresponding tools.

  2. Evaluation of air gap membrane distillation process running under sub-atmospheric conditions: Experimental and simulation studies

    KAUST Repository

    Alsaadi, Ahmad S.

    2015-04-16

    The importance of removing non-condensable gases from air gap membrane distillation (AGMD) modules in improving the water vapor flux is presented in this paper. Additionally, a previously developed AGMD mathematical model is used to predict the degree of flux enhancement under sub-atmospheric pressure conditions. Since the mathematical model prediction is expected to be very sensitive to membrane distillation (MD) membrane resistance when the mass diffusion resistance is eliminated, the permeability of the membrane was carefully measured with two different methods (gas permeance test and vacuum MD permeability test). The mathematical model prediction was found to agree closely with the experimental data, which showed that the removal of non-condensable gases increased the flux by more than three-fold when the gap pressure was maintained at the saturation pressure of the feed temperature. The importance of staging the sub-atmospheric AGMD process and how this could give better control over the gap pressure as the feed temperature decreases are also highlighted in this paper. The effect of staging on the sub-atmospheric AGMD flux and its relation to membrane capital cost are briefly discussed.

  3. Speeding up N-body simulations of modified gravity: chameleon screening models

    Science.gov (United States)

    Bose, Sownak; Li, Baojiu; Barreira, Alexandre; He, Jian-hua; Hellwing, Wojciech A.; Koyama, Kazuya; Llinares, Claudio; Zhao, Gong-Bo

    2017-02-01

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
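
    The Gauss-Seidel relaxation the abstract refers to can be illustrated on a toy problem. The sketch below relaxes a linear 1D Poisson equation; it only shows the sweep structure of the iteration, not the actual nonlinear 3D multigrid f(R) solver:

```python
def gauss_seidel_poisson(rho, h, tol=1e-8, max_iter=20000):
    """Gauss-Seidel relaxation for the 1D Poisson equation u'' = rho
    with u = 0 at both ends, on a uniform grid of spacing h.
    Discretisation: (u[i-1] - 2*u[i] + u[i+1]) / h**2 = rho[i].
    Returns the solution and the number of sweeps used."""
    n = len(rho)
    u = [0.0] * n
    for it in range(max_iter):
        max_diff = 0.0
        for i in range(1, n - 1):          # sweep, updating in place
            new = 0.5 * (u[i - 1] + u[i + 1] - h * h * rho[i])
            max_diff = max(max_diff, abs(new - u[i]))
            u[i] = new
        if max_diff < tol:                 # converged
            return u, it + 1
    return u, max_iter
```

    Even this linear toy case needs hundreds of sweeps on a modest grid, which hints at why full nonlinear relaxation dominates the cost of high-resolution modified-gravity runs.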

  4. Addressing Thermal Model Run Time Concerns of the Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA)

    Science.gov (United States)

    Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff

    2016-01-01

    The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter Hubble-sized telescope donated from elsewhere in the federal government for near-infrared sky surveys and Exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes to reduce the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time-steps than desired, which greatly increased the overall run time. The analysis of this fluid model required mitigation to drive the run time down by solving portions of the model at different time scales. Lastly, investigations were made into the impact of the removal of small radiation couplings on run time and accuracy. Use of these techniques allowed the models to produce meaningful results within reasonable run times to meet project schedule deadlines.

  5. TRANSFORM - TRANsient Simulation Framework of Reconfigurable Models

    Energy Technology Data Exchange (ETDEWEB)

    2017-09-01

    Existing development tools for early stage design and scoping of energy systems are often time consuming to use, proprietary, and do not contain the necessary function to model complete systems (i.e., controls, primary, and secondary systems) in a common platform. The Modelica programming language based TRANSFORM tool (1) provides a standardized, common simulation environment for early design of energy systems (i.e., power plants), (2) provides a library of baseline component modules to be assembled into full plant models using available geometry, design, and thermal-hydraulic data, (3) defines modeling conventions for interconnecting component models, and (4) establishes user interfaces and support tools to facilitate simulation development (i.e., configuration and parameterization), execution, and results display and capture.

  6. EXACT SIMULATION OF A BOOLEAN MODEL

    Directory of Open Access Journals (Sweden)

    Christian Lantuéjoul

    2013-06-01

    Full Text Available A Boolean model is a union of independent objects (compact random subsets) located at Poisson points. Two algorithms are proposed for simulating a Boolean model in a bounded domain. The first one applies only to stationary models. It generates the objects prior to their Poisson locations. Two examples illustrate its applicability. The second algorithm applies to stationary and non-stationary models. It generates the Poisson points prior to the objects. Its practical difficulties of implementation are discussed. Both algorithms are based on importance sampling techniques, and the generated objects are weighted.
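
    The second algorithm (Poisson points first, then grains) can be sketched for the simplest grain, a disk with random radius. The intensity, window size, and uniform radius law below are illustrative choices, and the sketch omits the paper's importance-sampling weights:

```python
import math
import random

def simulate_boolean_disks(intensity, r_max, width, height, seed=0):
    """Boolean model of random disks observed in [0,width] x [0,height].
    Germs: a Poisson point process, generated first; grains: independent
    disks with radius uniform on (0, r_max). The window is dilated by
    r_max so disks centred just outside can still intersect it."""
    rng = random.Random(seed)
    area = (width + 2 * r_max) * (height + 2 * r_max)
    mean = intensity * area
    # Sample the Poisson count by CDF inversion (fine for moderate means)
    n, p, u = 0, math.exp(-mean), rng.random()
    cdf = p
    while u > cdf:
        n += 1
        p *= mean / n
        cdf += p
    return [(rng.uniform(-r_max, width + r_max),
             rng.uniform(-r_max, height + r_max),
             rng.uniform(0.0, r_max)) for _ in range(n)]

def covered_fraction(disks, width, height, n_samples=20000, seed=1):
    """Monte Carlo estimate of the area fraction covered by the union."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        px, py = rng.uniform(0, width), rng.uniform(0, height)
        if any((px - x) ** 2 + (py - y) ** 2 <= r * r
               for x, y, r in disks):
            hits += 1
    return hits / n_samples
```

    For a stationary model the covered fraction should fluctuate around the theoretical value 1 - exp(-λ·π·E[r²]), which gives a quick sanity check on the generator.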

  7. Biological transportation networks: Modeling and simulation

    KAUST Repository

    Albi, Giacomo

    2015-09-15

    We present a model for biological network formation originally introduced by Cai and Hu [Adaptation and optimization of biological transport networks, Phys. Rev. Lett. 111 (2013) 138701]. The modeling of fluid transportation (e.g., leaf venation and angiogenesis) and ion transportation networks (e.g., neural networks) is explained in detail and basic analytical features like the gradient flow structure of the fluid transportation network model and the impact of the model parameters on the geometry and topology of network formation are analyzed. We also present a numerical finite-element based discretization scheme and discuss sample cases of network formation simulations.

  8. Modeling and Simulation of Nuclear Fuel Materials

    Energy Technology Data Exchange (ETDEWEB)

    Devanathan, Ramaswami; Van Brutzel, Laurent; Chartier, Alan; Gueneau, Christine; Mattsson, Ann E.; Tikare, Veena; Bartel, Timothy; Besmann, T. M.; Stan, Marius; Van Uffelen, Paul

    2010-10-01

    We review the state of modeling and simulation of nuclear fuels with emphasis on the most widely used nuclear fuel, UO2. The hierarchical scheme presented represents a science-based approach to modeling nuclear fuels by progressively passing information in several stages from ab initio to continuum levels. Such an approach is essential to overcome the challenges posed by radioactive materials handling, experimental limitations in modeling extreme conditions and accident scenarios, and the small time and distance scales of fundamental defect processes. When used in conjunction with experimental validation, this multiscale modeling scheme can provide valuable guidance to development of fuel for advanced reactors to meet rising global energy demand.

  9. Simulation modeling of health care policy.

    Science.gov (United States)

    Glied, Sherry; Tilipman, Nicholas

    2010-01-01

    Simulation modeling of health reform is a standard part of policy development and, in the United States, a required element in enacting health reform legislation. Modelers use three types of basic structures to build models of the health system: microsimulation, individual choice, and cell-based. These frameworks are filled in with data on baseline characteristics of the system and parameters describing individual behavior. Available data on baseline characteristics are imprecise, and estimates of key empirical parameters vary widely. A comparison of estimated and realized consequences of several health reform proposals suggests that models provided reasonably accurate estimates, with confidence bounds of approximately 30%.
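
    Of the three structures, the cell-based approach is the simplest to sketch: partition the population into cells and apply behavioral parameters per cell. The cells and take-up rates below are invented for illustration, not drawn from any actual reform analysis:

```python
# Toy "cell-based" policy model: the population is partitioned into
# cells (here by age band and insurance status), and a policy parameter
# (an assumed take-up rate among the uninsured) is applied per cell.
# All numbers are illustrative.

cells = [
    # (age band, insured?, population in thousands)
    ("18-34", False, 9000),
    ("18-34", True, 28000),
    ("35-64", False, 12000),
    ("35-64", True, 89000),
]

def newly_insured(cells, take_up_by_age):
    """Apply per-cell take-up rates to the uninsured cells only."""
    total = 0.0
    for age, insured, pop in cells:
        if not insured:
            total += pop * take_up_by_age[age]
    return total

# 9000*0.4 + 12000*0.55 ~ 10200 (thousands)
estimate = newly_insured(cells, {"18-34": 0.4, "35-64": 0.55})
```

    The imprecision the abstract describes enters through both the baseline cell counts and the assumed behavioral parameters, which is why estimated consequences carry wide confidence bounds.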

  10. Renormalization group running of fermion observables in an extended non-supersymmetric SO(10) model

    Science.gov (United States)

    Meloni, Davide; Ohlsson, Tommy; Riad, Stella

    2017-03-01

    We investigate the renormalization group evolution of fermion masses, mixings and quartic scalar Higgs self-couplings in an extended non-supersymmetric SO(10) model, where the Higgs sector contains the 10_H, 120_H, and 126_H representations. The group SO(10) is spontaneously broken at the GUT scale to the Pati-Salam group and subsequently to the Standard Model (SM) at an intermediate scale M_I. We explicitly take into account the effects of the change of gauge groups in the evolution. In particular, we derive the renormalization group equations for the different Yukawa couplings. We find that the computed physical fermion observables can be successfully matched to the experimental measured values at the electroweak scale. Using the same Yukawa couplings at the GUT scale, the measured values of the fermion observables cannot be reproduced with a SM-like evolution, leading to differences in the numerical values up to around 80%. Furthermore, a similar evolution can be performed for a minimal SO(10) model, where the Higgs sector consists of the 10_H and 126_H representations only, showing an equally good potential to describe the low-energy fermion observables. Finally, for both the extended and the minimal SO(10) models, we present predictions for the three Dirac and Majorana CP-violating phases as well as three effective neutrino mass parameters.

  11. Measuring Short- and Long-run Promotional Effectiveness on Scanner Data Using Persistence Modeling

    NARCIS (Netherlands)

    M.G. Dekimpe (Marnik); D.M. Hanssens (Dominique); V.R. Nijs; J-B.E.M. Steenkamp (Jan-Benedict)

    2003-01-01

    The use of price promotions to stimulate brand and firm performance is increasing. We discuss how (i) the availability of longer scanner data time series, and (ii) persistence modeling, have led to greater insights into the dynamic effects of price promotions, as one can now quantify th

  12. Modeling and simulation of epidemic spread

    DEFF Research Database (Denmark)

    Shatnawi, Maad; Lazarova-Molnar, Sanja; Zaki, Nazar

    2013-01-01

    and control such epidemics. This paper presents an overview of the epidemic spread modeling and simulation, and summarizes the main technical challenges in this field. It further investigates the most relevant recent approaches carried out towards this perspective and provides a comparison and classification...

  13. Object Oriented Modelling and Dynamical Simulation

    DEFF Research Database (Denmark)

    Wagner, Falko Jens; Poulsen, Mikael Zebbelin

    1998-01-01

    This report with appendix describes the work done in master project at DTU.The goal of the project was to develop a concept for simulation of dynamical systems based on object oriented methods.The result was a library of C++-classes, for use when both building componentbased models and when...

  14. Modeling and Simulating Virtual Anatomical Humans

    NARCIS (Netherlands)

    Madehkhaksar, Forough; Luo, Zhiping; Pronost, Nicolas; Egges, Arjan

    2014-01-01

    This chapter presents human musculoskeletal modeling and simulation as a challenging field that lies between biomechanics and computer animation. One of the main goals of computer animation research is to develop algorithms and systems that produce plausible motion. On the other hand, the main chall

  15. Modeling and Simulation in Healthcare Future Directions

    Science.gov (United States)

    2010-07-13

    Quantify performance (competency-based); simulate before practice (digital libraries); classic education and examination. What is the REVOLUTION in...av $800,000 yr. Actor patients: $250,000–$400,000/yr. Digital libraries or synthetic tissue models: subscription vs. up-front costs.

  16. Simulation Versus Models: Which One and When?

    Science.gov (United States)

    Dorn, William S.

    1975-01-01

    Describes two types of computer-based experiments: simulation (which assumes no student knowledge of the workings of the computer program) is recommended for experiments aimed at inductive reasoning; and modeling (which assumes student understanding of the computer program) is recommended for deductive processes. (MLH)

  17. Love Kills: Simulations in Penna Ageing Model

    Science.gov (United States)

    Stauffer, Dietrich; Cebrat, Stanisław; Penna, T. J. P.; Sousa, A. O.

    The standard Penna ageing model with sexual reproduction is enlarged by adding additional bit-strings for love: Marriage happens only if the male love strings are sufficiently different from the female ones. We simulate at what level of required difference the population dies out.

  18. Inverse modeling for Large-Eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.

    1998-01-01

    Approximate higher order polynomial inversion of the top-hat filter is developed with which the turbulent stress tensor in Large-Eddy Simulation can be consistently represented using the filtered field. Generalized (mixed) similarity models are proposed which improved the agreement with the kinetic

  19. Microdata Simulation Modeling After Twenty Years.

    Science.gov (United States)

    Haveman, Robert H.

    1986-01-01

    This article describes the method and the development of microdata simulation modeling over the past two decades. After tracing a brief history of this evaluation method, its problems and prospects are assessed. The effects of this research method on the development of the social sciences are examined. (JAZ)

  20. Simulation Modeling on the Macintosh using STELLA.

    Science.gov (United States)

    Costanza, Robert

    1987-01-01

    Describes a new software package for the Apple Macintosh computer which can be used to create elaborate simulation models in a fraction of the time usually required without using a programming language. Illustrates the use of the software which relates to water usage. (TW)

  1. Simulation Modeling of Radio Direction Finding Results

    Directory of Open Access Journals (Sweden)

    K. Pelikan

    1994-12-01

    Full Text Available It is sometimes difficult to determine analytically the error probabilities of direction-finding results for algorithms of practical interest. Probabilistic simulation models are described in this paper that can be used to study the error performance of new direction-finding systems or of geographical modifications to existing configurations.

  2. A Prison/Parole System Simulation Model,

    Science.gov (United States)

    parole system on future prison and parole populations. A simulation model is presented, viewing a prison/parole system as a feedback process for...criminal offenders. Transitions among the states in which an offender might be located (imprisoned, paroled, and discharged) are assumed to be in accordance with a discrete-time semi-Markov process. Projected prison and parole populations for sample data and applications of the model are discussed. (Author)

  3. Integration of MATLAB Simulink® Models with the Vertical Motion Simulator

    Science.gov (United States)

    Lewis, Emily K.; Vuong, Nghia D.

    2012-01-01

    This paper describes the integration of MATLAB Simulink® models into the Vertical Motion Simulator (VMS) at NASA Ames Research Center. The VMS is a high-fidelity, large motion flight simulator that is capable of simulating a variety of aerospace vehicles. Integrating MATLAB Simulink models into the VMS required retaining the development flexibility of the MATLAB environment while allowing rapid deployment of model changes. The process developed at the VMS was used successfully in a number of recent simulation experiments. This accomplishment demonstrated that the model integrity was preserved, while working within the hard real-time run environment of the VMS architecture, and maintaining the unique flexibility of the VMS to meet diverse research requirements.

  4. A modeling and simulation framework for electrokinetic nanoparticle treatment

    Science.gov (United States)

    Phillips, James

    2011-12-01

    The focus of this research is to model and provide a simulation framework for the packing of differently sized spheres within a hard boundary. The novel contributions of this dissertation are the cylinders of influence (COI) method and sectoring method implementations. The impetus for this research stems from modeling electrokinetic nanoparticle (EN) treatment, which packs concrete pores with differently sized nanoparticles. We show an improved speed of the simulation compared to previously published results of EN treatment simulation while obtaining similar porosity reduction results. We mainly focused on readily, commercially available particle sizes of 2 nm and 20 nm particles, but have the capability to model other sizes. Our simulation has graphical capabilities and can provide additional data unobtainable from physical experimentation. The data collected has a median of 0.5750 and a mean of 0.5504. The standard error is 0.0054 at alpha = 0.05 for a 95% confidence interval of 0.5504 +/- 0.0054. The simulation has produced maximum packing densities of 65% and minimum packing densities of 34%. Simulation data are analyzed using linear regression via the R statistical language to obtain two equations: one that describes porosity reduction based on all cylinder and particle characteristics, and another that focuses on describing porosity reduction based on cylinder diameter for 2 and 20 nm particles into pores of 100 nm height. Simulation results are similar to most physical results obtained from MIP and WLR. Some MIP results do not fall within the simulation limits; however, this is expected as MIP has been documented to be an inaccurate measure of pore distribution and porosity of concrete. Despite the disagreement between WLR and MIP, there is a trend that porosity reduction is higher two inches from the rebar as compared to the rebar-concrete interface. The simulation also detects a higher porosity reduction further from the rebar. 
This may be due to particles
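
    The summary statistics quoted above (mean, standard error, normal-approximation confidence interval) follow from standard formulas; a sketch on synthetic packing densities, which are simulated stand-ins and not the dissertation's actual outputs:

```python
import math
import random

def mean_se_ci(samples, z=1.96):
    """Sample mean, standard error, and normal-approximation 95% CI."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # unbiased
    se = math.sqrt(var / n)
    return mean, se, (mean - z * se, mean + z * se)

# Synthetic packing densities, clipped to the reported 34%-65% range,
# for illustration only.
rng = random.Random(0)
densities = [min(0.65, max(0.34, rng.gauss(0.55, 0.08)))
             for _ in range(200)]
mean, se, (lo, hi) = mean_se_ci(densities)
```

    A regression of density on cylinder and particle characteristics, as done in the dissertation via R, would then use these per-run densities as the response variable.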

  5. Comparing Simulation Results with Traditional PRA Model on a Boiling Water Reactor Station Blackout Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Zhegang Ma; Diego Mandelli; Curtis Smith

    2011-07-01

    A previous study used RELAP and RAVEN to conduct a boiling water reactor station black-out (SBO) case study in a simulation based environment to show the capabilities of the risk-informed safety margin characterization methodology. This report compares the RELAP/RAVEN simulation results with traditional PRA model results. The RELAP/RAVEN simulation run results were reviewed for their input parameters and output results. The input parameters for each simulation run include various timing information such as diesel generator or offsite power recovery time, Safety Relief Valve stuck open time, High Pressure Core Injection or Reactor Core Isolation Cooling fail to run time, extended core cooling operation time, depressurization delay time, and firewater injection time. The output results include the maximum fuel clad temperature, the outcome, and the simulation end time. A traditional SBO PRA model in this report contains four event trees that are linked together with the transferring feature in SAPHIRE software. Unlike the usual Level 1 PRA quantification process in which only core damage sequences are quantified, this report quantifies all SBO sequences, whether they are core damage sequences or success (i.e., non core damage) sequences, in order to provide a full comparison with the simulation results. Three different approaches were used to solve event tree top events and quantify the SBO sequences: “W” process flag, default process flag without proper adjustment, and default process flag with adjustment to account for the success branch probabilities. Without post-processing, the first two approaches yield incorrect results with a total conditional probability greater than 1.0. The last approach accounts for the success branch probabilities and provides correct conditional sequence probabilities that are to be used for comparison. To better compare the results from the PRA model and the simulation runs, a simplified SBO event tree was developed with only four
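
    The consistency requirement discussed here, that success and core-damage sequence probabilities together sum to one, can be illustrated with a toy event tree. Real SBO trees have conditional branch probabilities and linked transfers, and the four top events and failure probabilities below are hypothetical:

```python
from itertools import product

def sequence_probabilities(top_event_failure_probs):
    """Enumerate all sequences of a simple event tree in which each top
    event independently succeeds (prob 1-p) or fails (prob p).
    Returns a dict mapping outcome tuples (0=success, 1=failure) to
    sequence probabilities. Hypothetical, fully independent branches."""
    seqs = {}
    n = len(top_event_failure_probs)
    for outcome in product((0, 1), repeat=n):
        p = 1.0
        for bit, pf in zip(outcome, top_event_failure_probs):
            p *= pf if bit else (1.0 - pf)
        seqs[outcome] = p
    return seqs

# Four hypothetical top events, e.g. offsite power recovery, SRV,
# HPCI/RCIC, firewater injection.
probs = sequence_probabilities([0.1, 0.02, 0.05, 0.2])
total = sum(probs.values())   # must be 1.0 when success branches count
```

    Dropping the success-branch factors, as in the first two SAPHIRE quantification approaches described above, is exactly what lets the sequence probabilities sum to more than one.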

  7. Quark flavour observables in the Littlest Higgs model with T-parity after LHC Run 1

    Science.gov (United States)

    Blanke, Monika; Buras, Andrzej J.; Recksiegel, Stefan

    2016-04-01

    The Littlest Higgs model with T-parity (LHT) belongs to the simplest new physics scenarios with new sources of flavour and CP violation. The latter originate in the interactions of ordinary quarks and leptons with heavy mirror quarks and leptons that are mediated by new heavy gauge bosons. Also a heavy fermionic top partner is present in this model which communicates with the SM fermions by means of standard W^± and Z^0 gauge bosons. We present a new analysis of quark flavour observables in the LHT model in view of the oncoming flavour precision era. We use all available information on the CKM parameters, lattice QCD input and experimental data on quark flavour observables and corresponding theoretical calculations, taking into account new lower bounds on the symmetry breaking scale and the mirror quark masses from the LHC. We investigate by how much the branching ratios for a number of rare K and B decays are still allowed to depart from their SM values. This includes K^+ → π^+ ν ν̄, K_L → π^0 ν ν̄, K_L → μ^+ μ^-, B → X_s γ, B_{s,d} → μ^+ μ^-, B → K^(*) ℓ^+ ℓ^-, B → K^(*) ν ν̄, and ε'/ε. Taking into account the constraints from ΔF = 2 processes, significant departures from the SM predictions for K^+ → π^+ ν ν̄ and K_L → π^0 ν ν̄ are possible, while the effects in B decays are much smaller. In particular, the LHT model favours B(B_s → μ^+ μ^-) ≥ B(B_s → μ^+ μ^-)_SM, which is not supported by the data, and the present anomalies in B → K^(*) ℓ^+ ℓ^- decays cannot be explained in this model. With the recent lattice and large-N input the imposition of the ε'/ε constraint implies a significant suppression of the branching ratio for K_L → π^0 ν ν̄ with respect to its SM value while allowing only for small modifications of K^+ → π^+ ν ν̄. Finally, we investigate how the LHT physics could be distinguished from other models by means of

  8. Stochastic Simulations of Cellular Biological Processes

    Science.gov (United States)

    2007-06-01

A common way to model the kinetics of a system of chemical reactions is to use a stochastic approach in terms of the Chemical Master Equation, sampled with the Stochastic Simulation Algorithm. Such simulations place heavy demands on computational resources (number of processors and running time) and can create disk-space and memory-management problems for simulations involving large reaction networks. Therefore, in addition to running in an interactive mode, the software can also run in batch mode for a number of long runs or for large reaction networks.

  9. Software development infrastructure for the HYBRID modeling and simulation project

    Energy Technology Data Exchange (ETDEWEB)

    Aaron S. Epiney; Robert A. Kinoshita; Jong Suk Kim; Cristian Rabiti; M. Scott Greenwood

    2016-09-01

One of the goals of the HYBRID modeling and simulation project is to assess the economic viability of hybrid systems in a market that contains renewable energy sources like wind. The idea is that the nuclear plant can sell non-electric energy cushions, which absorb (at least partially) the volatility introduced by the renewable energy sources. This system is currently modeled in the Modelica programming language. To assess the economics of the system, an optimization procedure searches for the minimal cost of electricity production. The RAVEN code is used as a driver for the whole problem. It is assumed that at this stage, the HYBRID modeling and simulation framework can be classified as non-safety “research and development” software. The associated quality level is Quality Level 3 software, which imposes low requirements on quality control, testing and documentation. The quality level could change as application development continues. Despite the low required quality level, a workflow has been defined for the HYBRID developers that includes a coding standard and some documentation and testing requirements. The repository performs automated unit testing of contributed models. The automated testing is achieved via an open-source Python script called BuildingsPy from Lawrence Berkeley National Laboratory. BuildingsPy runs Modelica simulation tests using Dymola in an automated manner, and generates and runs unit tests from Modelica scripts written by developers. To ensure effective communication between the different national laboratories, a biweekly videoconference has been set up where developers can report their progress and issues. In addition, periodic face-to-face meetings are organized to discuss high-level strategy decisions with management. A second means of communication is the developer email list, to which anybody can send emails that will be received by the collective of the developers and managers.

  10. Influential factors of red-light running at signalized intersection and prediction using a rare events logistic regression model.

    Science.gov (United States)

    Ren, Yilong; Wang, Yunpeng; Wu, Xinkai; Yu, Guizhen; Ding, Chuan

    2016-10-01

Red-light running (RLR) has become a major safety concern at signalized intersections. To prevent RLR-related crashes, it is critical to identify the factors that significantly affect drivers' RLR behavior and to predict potential RLR in real time. In this research, nine months of RLR events, extracted from high-resolution traffic data collected by loop detectors at three signalized intersections, were used to identify the factors that significantly affect RLR behavior. The data analysis indicated that occupancy time, time gap, used yellow time, time left to yellow start, whether the preceding vehicle runs through the intersection during yellow, and whether there is a vehicle passing through the intersection in the adjacent lane were significant factors for RLR behavior. Furthermore, because of the rare-events nature of RLR, a modified rare events logistic regression model was developed for RLR prediction. The rare events logistic regression method has been applied in many fields of rare-events study with impressive performance, but no previous research has applied it to RLR. The results showed that the rare events logistic regression model performed significantly better than the standard logistic regression model. More importantly, the proposed RLR prediction method is based purely on data from a single advance loop detector located 400 feet upstream of the stop bar. This offers great potential for future field applications of the proposed method, since loop detectors are already installed at many intersections and can collect data in real time. This research is expected to contribute significantly to the improvement of intersection safety.
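The "rare events" adjustment referred to in the abstract is commonly implemented as the King–Zeng prior correction, which shifts the intercept of a logistic regression fitted on an event-enriched sample back to the true population event rate. The abstract does not give the authors' exact estimator, so the function and numbers below are a hypothetical sketch of that standard correction, not the paper's model:

```python
import math

def rare_event_intercept_correction(beta0, ybar, tau):
    """King & Zeng (2001) prior correction for the intercept of a logistic
    regression fitted on a sample whose event rate ybar differs from the
    true population event rate tau (events oversampled for estimation)."""
    return beta0 - math.log(((1.0 - tau) / tau) * (ybar / (1.0 - ybar)))

# Hypothetical numbers: a balanced estimation sample (ybar = 0.5) drawn from
# RLR events that occur at a 1% population rate (tau = 0.01).
beta0_corrected = rare_event_intercept_correction(-0.3, 0.5, 0.01)
```

The slope coefficients are left unchanged by this correction; only the intercept (and hence the predicted probabilities) is rescaled to the rare population rate.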

  11. Quark flavour observables in the Littlest Higgs model with T-parity after LHC Run 1

    CERN Document Server

    Blanke, Monika; Recksiegel, Stefan

    2016-01-01

    The Littlest Higgs Model with T-parity (LHT) belongs to the simplest new physics scenarios with new sources of flavour and CP violation. We present a new analysis of quark observables in the LHT model in view of the oncoming flavour precision era. We use all available information on the CKM parameters, lattice QCD input and experimental data on quark flavour observables and corresponding theoretical calculations, taking into account new lower bounds on the symmetry breaking scale and the mirror quark masses from the LHC. We investigate by how much the branching ratios for a number of rare $K$ and $B$ decays are still allowed to depart from their SM values. This includes $K^+\\to\\pi^+\

  12. Running Club

    CERN Multimedia

    Running Club

    2010-01-01

    The 2010 edition of the annual CERN Road Race will be held on Wednesday 29th September at 18h. The 5.5km race takes place over 3 laps of a 1.8 km circuit in the West Area of the Meyrin site, and is open to everyone working at CERN and their families. There are runners of all speeds, with times ranging from under 17 to over 34 minutes, and the race is run on a handicap basis, by staggering the starting times so that (in theory) all runners finish together. Children (< 15 years) have their own race over 1 lap of 1.8km. As usual, there will be a “best family” challenge (judged on best parent + best child). Trophies are awarded in the usual men’s, women’s and veterans’ categories, and there is a challenge for the best age/performance. Every adult will receive a souvenir prize, financed by a registration fee of 10 CHF. Children enter free (each child will receive a medal). More information, and the online entry form, can be found at http://cern.ch/club...

  13. RUN COORDINATION

    CERN Multimedia

    Christophe Delaere

    2012-01-01

On Wednesday 14 March, the machine group successfully injected beams into the LHC for the first time this year. Within 48 hours they managed to ramp the beams to 4 TeV and proceeded to squeeze to β* = 0.6 m, settings that have been used routinely ever since. This brought to an end the CMS Cosmic Run at ~Four Tesla (CRAFT), during which we collected 800k cosmic ray events with a track crossing the central Tracker. That sample has since been topped up to two million, allowing further refinement of the Tracker alignment. The LHC started delivering the first collisions on 5 April with two bunches colliding in CMS, giving a pile-up of ~27 interactions per crossing at the beginning of the fill. Since then the machine has increased the number of colliding bunches to 1380, with peak instantaneous luminosities around 6.5×10³³ cm⁻²s⁻¹ at the beginning of fills. The average bunch charge has reached ~1.5×10¹¹ protons per bunch, which results in an initial pile-up of ~30 interactions per crossing. During the ...

  14. RUN COORDINATION

    CERN Multimedia

    C. Delaere

    2012-01-01

With the analysis of the first 5 fb⁻¹ culminating in the announcement of the observation of a new particle with a mass of around 126 GeV/c², the CERN directorate decided to extend the LHC run until February 2013. This adds three months to the original schedule. Since then the LHC has continued to perform extremely well, and the total luminosity delivered so far this year is 22 fb⁻¹. CMS also continues to perform excellently, recording data with efficiency higher than 95% for fills with the magnetic field at nominal value. The highest instantaneous luminosity achieved by the LHC to date is 7.6×10³³ cm⁻²s⁻¹, which translates into 35 interactions per crossing. On the CMS side there has been a lot of work to handle these extreme conditions, such as a new DAQ computer farm and trigger menus to handle the pile-up, automation of recovery procedures to minimise lost luminosity, better training for the shift crews, etc. We did suffer from a couple of infrastructure ...

  15. The Effect of Treadmill Running on Passive Avoidance Learning in Animal Model of Alzheimer Disease

    OpenAIRE

    Nasrin Hosseini; Hojjatallah Alaei; Parham Reisi; Maryam Radahmadi

    2013-01-01

Background: Alzheimer's disease is a progressive neurodegenerative disorder of the elderly, characterized by dementia and severe neuronal loss in some regions of the brain, such as the nucleus basalis magnocellularis, which plays an important role in brain functions such as learning and memory. Loss of cholinergic neurons of the nucleus basalis magnocellularis induced by ibotenic acid is commonly regarded as a suitable model of Alzheimer's disease. Previous studies reported that exercise...

  16. Classically conformal U(1)' extended standard model, electroweak vacuum stability, and LHC Run-2 bounds

    CERN Document Server

    Das, Arindam; Okada, Nobuchika; Takahashi, Dai-suke

    2016-01-01

We consider the minimal U(1)' extension of the Standard Model (SM) with classically conformal invariance, where an anomaly-free U(1)' gauge symmetry is introduced along with three generations of right-handed neutrinos and a U(1)' Higgs field. Since the classically conformal symmetry forbids all dimensional parameters in the model, the U(1)' gauge symmetry is broken through the Coleman-Weinberg mechanism, generating the mass terms of the U(1)' gauge boson (Z' boson) and the right-handed neutrinos. Through a mixing quartic coupling between the U(1)' Higgs field and the SM Higgs doublet field, the radiative U(1)' gauge symmetry breaking also triggers the breaking of the electroweak symmetry. In this model context, we first investigate the electroweak vacuum instability problem in the SM. Employing the renormalization group equations at the two-loop level and the central values for the world average masses of the top quark ($m_t=173.34$ GeV) and the Higgs boson ($m_h=125.09$ GeV), we perform parameter scans t...

  17. Twitter's tweet method modelling and simulation

    Science.gov (United States)

    Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.

    2015-02-01

This paper seeks to propose the concept of Twitter marketing methods. The tools that Twitter provides are modelled and simulated using iThink in the context of a Twitter media-marketing agency. The paper leverages the system dynamics paradigm to model Twitter marketing tools and methods, using the iThink™ system to implement them, and uses the design science research methodology for the proof of concept of the models and modelling processes. The models have been developed for a Twitter marketing agent/company and tested in real circumstances with real numbers; they were finalized through a number of revisions and iterations of design, development, simulation, testing and evaluation. The paper also addresses the methods that best suit organized promotion through targeting on the Twitter social media service. The validity and usefulness of these Twitter marketing method models for day-to-day decision making are authenticated by the management of the company organization. The work implements system dynamics concepts of Twitter marketing methods modelling and produces models of various Twitter marketing situations. The Tweet method that Twitter provides can be adjusted, depending on the situation, in order to maximize the profit of the company/agent.

  18. Co-simulation of dynamic systems in parallel and serial model configurations

    Energy Technology Data Exchange (ETDEWEB)

    Sweafford, Trevor [General Motors, Milford (United States); Yoon, Hwan Sik [The University of Alabama, Tuscaloosa (United States)

    2013-12-15

Recent advancements in simulation software and computational hardware make it feasible to simulate complex dynamic systems comprised of multiple submodels developed in different modeling languages. This so-called co-simulation enables one to study various aspects of a complex dynamic system with heterogeneous submodels in a cost-effective manner. Among several model configurations for co-simulation, the synchronized parallel configuration is regarded as expediting the simulation process by simulating multiple submodels concurrently on a multi-core processor. In this paper, computational accuracy as well as computation time are studied for three different co-simulation frameworks: integrated, serial, and parallel. For this purpose, analytical evaluations of the three methods are made using the explicit Euler method, and they are then applied to a two-DOF mass-spring system. The results show that while the parallel configuration produces the same accurate results as the integrated configuration, the results of the serial configuration show a slight deviation. It is also shown that computation time can be reduced by running the simulation in the parallel configuration. Therefore, it can be concluded that the synchronized parallel simulation methodology is best for both simulation accuracy and time efficiency.
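The three configurations can be sketched on a two-DOF mass-spring system with the explicit Euler method. In this minimal sketch (masses, stiffnesses and step size are illustrative, not taken from the paper), the parallel (Jacobi) coupling reproduces the monolithic integrated solution exactly, while the serial (Gauss–Seidel) coupling deviates slightly because the second submodel sees an already-updated state:

```python
# Two-DOF mass-spring chain (wall -k1- m1 -k2- m2), split into two submodels
# coupled through the spring k2. Explicit Euler with step dt.
m1 = m2 = 1.0
k1 = k2 = 10.0
dt, steps = 1.0e-3, 5000

def accel1(x1, x2):   # submodel 1: mass m1
    return (-k1 * x1 - k2 * (x1 - x2)) / m1

def accel2(x1, x2):   # submodel 2: mass m2
    return -k2 * (x2 - x1) / m2

def simulate(mode):
    x1, v1, x2, v2 = 1.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        a1 = accel1(x1, x2)
        x1n, v1n = x1 + dt * v1, v1 + dt * a1
        if mode == "serial":          # Gauss-Seidel: submodel 2 sees new x1
            a2 = accel2(x1n, x2)
        else:                         # "integrated" or "parallel" (Jacobi):
            a2 = accel2(x1, x2)       # both submodels use the old states
        x2n, v2n = x2 + dt * v2, v2 + dt * a2
        x1, v1, x2, v2 = x1n, v1n, x2n, v2n
    return x1, x2

xi = simulate("integrated")   # monolithic explicit Euler
xp = simulate("parallel")     # Jacobi coupling: bitwise identical to xi
xs = simulate("serial")       # Gauss-Seidel coupling: slight deviation
```

With explicit Euler, the Jacobi exchange performs exactly the same floating-point operations as the monolithic step, which is why the parallel results match the integrated ones while the serial results drift by O(dt) per step.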

  19. Notes of Numerical Simulation of Summer Rainfall in China with a Regional Climate Model REMO

    Institute of Scientific and Technical Information of China (English)

    CUI Xuefeng; HUANG Gang; CHEN Wen

    2008-01-01

Regional climate models are major tools for regional climate simulation, and their output is mostly used for climate impact studies. Notes are reported from a series of numerical simulations of summer rainfall in China with a regional climate model, with domain size and running mode as the major foci. The results reveal that the model in forecast mode, driven by "perfect" boundaries, can reasonably represent the inter-annual differences: heavy rainfall along the Yangtze River in 1998 and dry conditions in 1997. Model simulation in climate mode differs from observation to a greater extent than in forecast mode. This may be because in climate mode the model departs further from the driving fields and relies more on internal model dynamical processes. A smaller domain in climate mode outperforms a larger one. Further development of model parameterizations, including dynamic vegetation, is encouraged for future studies.

  20. nIFTy galaxy cluster simulations - II. Radiative models

    Science.gov (United States)

    Sembolini, Federico; Elahi, Pascal Jahan; Pearce, Frazer R.; Power, Chris; Knebe, Alexander; Kay, Scott T.; Cui, Weiguang; Yepes, Gustavo; Beck, Alexander M.; Borgani, Stefano; Cunnama, Daniel; Davé, Romeel; February, Sean; Huang, Shuiyao; Katz, Neal; McCarthy, Ian G.; Murante, Giuseppe; Newton, Richard D. A.; Perret, Valentin; Puchwein, Ewald; Saro, Alexandro; Schaye, Joop; Teyssier, Romain

    2016-07-01

We have simulated the formation of a massive galaxy cluster (M₂₀₀^crit = 1.1 × 10¹⁵ h⁻¹ M⊙) in a Λ cold dark matter universe using 10 different codes (RAMSES, 2 incarnations of AREPO and 7 of GADGET), modelling hydrodynamics with full radiative subgrid physics. These codes include smoothed-particle hydrodynamics (SPH), spanning traditional and advanced SPH schemes, adaptive mesh and moving mesh codes. Our goal is to study the consistency between simulated clusters modelled with different radiative physical implementations - such as cooling, star formation and thermal active galactic nucleus (AGN) feedback. We compare images of the cluster at z = 0, global properties such as mass, and radial profiles of various dynamical and thermodynamical quantities. We find that, with respect to non-radiative simulations, dark matter is more centrally concentrated, the extent not simply depending on the presence/absence of AGN feedback. The scatter in global quantities is substantially higher than for non-radiative runs. Intriguingly, adding radiative physics seems to have washed away the marked code-based differences present in the entropy profile seen for non-radiative simulations in Sembolini et al.: radiative physics + classic SPH can produce entropy cores, at least in the case of non-cool-core clusters. Furthermore, the inclusion/absence of AGN feedback is not the dividing line - as in the case of describing the stellar content - for whether a code produces an unrealistic temperature inversion and a falling central entropy profile. However, AGN feedback does strongly affect the overall stellar distribution, limiting the effect of overcooling and appreciably reducing the stellar fraction.

  1. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, C; Abdulla, G; Critchlow, T

    2003-01-31

This paper discusses using the wavelets modeling technique as a mechanism for querying large-scale spatio-temporal scientific simulation data. Wavelets have been used successfully in time series analysis and in answering surprise and trend queries. Our approach, however, is driven by the need for compression, which is necessary for viable throughput given the size of the targeted data, along with the end user requirements from the discovery process. Our users would like to run fast queries to check the validity of the simulation algorithms used. In some cases users are willing to accept approximate results if the answer comes back within a reasonable time. In other cases they might want to identify a certain phenomenon and track it over time. We face a unique problem because of the data set sizes. It may take months to generate one set of the targeted data; because of its sheer size, the data cannot be stored on disk for long and thus needs to be analyzed immediately before it is sent to tape. We integrated wavelets within AQSIM, a system that we are developing to support exploration and analyses of tera-scale data sets. We will discuss the way we utilized wavelet decomposition in our domain to facilitate compression and to answer a specific class of queries that is harder to answer with any other modeling technique. We will also discuss some of the shortcomings of our implementation and how to address them.
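The compression-for-querying idea can be illustrated with a single level of the Haar wavelet transform: small detail coefficients are thresholded away so the representation becomes sparse, and approximate queries are answered from the compressed form. This is only a generic sketch (not the AQSIM implementation; the signal and threshold below are illustrative):

```python
import numpy as np

def haar_decompose(signal):
    """One level of the orthonormal Haar wavelet transform."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)   # smooth coefficients
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # difference coefficients
    return approx, detail

def haar_reconstruct(approx, detail):
    """Exact inverse of haar_decompose."""
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / np.sqrt(2.0)
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out

# Compress by zeroing small detail coefficients, then answer an
# approximate "reconstruct the field" query from the compressed form.
t = np.linspace(0.0, 1.0, 1024)
data = np.sin(2.0 * np.pi * 5.0 * t)   # stand-in for simulation output
a, d = haar_decompose(data)
d[np.abs(d) < 1e-2] = 0.0              # lossy thresholding -> sparse array
err = np.max(np.abs(haar_reconstruct(a, d) - data))
```

Because each output sample depends on exactly one detail coefficient at this level, the pointwise reconstruction error is bounded by threshold/√2, which is the kind of accuracy guarantee an approximate-query system can report to the user.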

  2. Fast Atmosphere-Ocean Model Runs with Large Changes in CO2

    Science.gov (United States)

    Russell, Gary L.; Lacis, Andrew A.; Rind, David H.; Colose, Christopher; Opstbaum, Roger F.

    2013-01-01

How does climate sensitivity vary with the magnitude of climate forcing? This question was investigated with the use of a modified coupled atmosphere-ocean model, whose stability was improved so that the model would accommodate large radiative forcings yet be fast enough to reach rapid equilibrium. Experiments were performed in which atmospheric CO2 was multiplied by powers of 2, from 1/64 to 256 times the 1950 value. From 8 to 32 times the 1950 CO2 value, the climate sensitivity for doubling CO2 reaches 8 °C due to increases in water vapor absorption and cloud top height and to reductions in low-level cloud cover. As the CO2 amount increases further, sensitivity drops as cloud cover and planetary albedo stabilize. No water vapor-induced runaway greenhouse caused by increased CO2 was found for the range of CO2 examined. With CO2 at or below 1/8 of the 1950 value, runaway sea ice does occur as the planet cascades to a snowball Earth climate with fully ice-covered oceans and global mean surface temperatures near −30 °C.

  3. Fault diagnosis based on continuous simulation models

    Science.gov (United States)

    Feyock, Stefan

    1987-01-01

The results are described of an investigation of techniques for using continuous simulation models as a basis for reasoning about physical systems, with emphasis on the diagnosis of system faults. It is assumed that a continuous simulation model of the properly operating system is available. Malfunctions are diagnosed by posing the question: how can we make the model behave like that? The adjustments that must be made to the model to produce the observed behavior usually provide definitive clues to the nature of the malfunction. A novel application of Dijkstra's weakest precondition predicate transformer is used to derive the preconditions for producing the required model behavior. To minimize the size of the search space, an envisionment generator based on interval mathematics was developed. In addition to its intended application, the ability to generate qualitative state spaces automatically from quantitative simulations proved to be a fruitful avenue of investigation in its own right. Implementations of the Dijkstra transform and the envisionment generator are reproduced in the Appendix.
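For the assignment statement, Dijkstra's weakest precondition transformer is wp(x := e, Q) = Q[e/x], i.e. the postcondition with e substituted for x. A toy sketch of that rule over state dictionaries (not the report's implementation, which is reproduced in its Appendix) looks like this:

```python
def wp_assign(var, expr, post):
    """Weakest precondition of the assignment `var := expr` w.r.t. `post`.
    `expr` maps a state dict to a value; `post` maps a state dict to a bool.
    wp holds in state s exactly when post holds after the assignment runs."""
    return lambda s: post({**s, var: expr(s)})

# Example: wp(x := x + 1, x > 10) is equivalent to x > 9.
pre = wp_assign("x", lambda s: s["x"] + 1, lambda s: s["x"] > 10)
```

Evaluating `pre` on a state answers "would the postcondition hold after this step?", which is the question the diagnosis procedure asks when searching for model adjustments that reproduce the observed behavior.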

  4. Modelling and simulation of affinity membrane adsorption.

    Science.gov (United States)

    Boi, Cristiana; Dimartino, Simone; Sarti, Giulio C

    2007-08-24

A mathematical model for the adsorption of biomolecules on affinity membranes is presented. The model considers convection, diffusion and adsorption kinetics in the membrane module as well as the influence of dead-end volumes and lag times; an analysis of flow distribution in the whole system is also included. The parameters used in the simulations were obtained from equilibrium and dynamic experimental data measured for the adsorption of human IgG on A2P-Sartoepoxy affinity membranes. The identification of a bi-Langmuir kinetic mechanism for the experimental system investigated was paramount for a correct process description, and the simulated breakthrough curves were in good agreement with the experimental data. The proposed model provides new insight into the phenomena involved in adsorption on affinity membranes, and it is a valuable tool to assess the use of membrane adsorbers in large-scale processes.
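Bi-Langmuir kinetics treat two independent classes of binding sites, each with its own capacity and rate constants. The following minimal batch-uptake sketch integrates only the kinetic part of such a model; the rate constants and capacities are illustrative placeholders (not the fitted IgG/A2P-Sartoepoxy parameters), and convection, diffusion and liquid-phase depletion are ignored:

```python
import numpy as np

# Bi-Langmuir kinetics: two independent site classes, each obeying
#   dq_i/dt = ka_i * c * (qmax_i - q_i) - kd_i * q_i
# at constant liquid-phase concentration c. All numbers are illustrative.
ka   = np.array([0.5, 0.05])     # adsorption rate constants
kd   = np.array([0.01, 0.002])   # desorption rate constants
qmax = np.array([20.0, 10.0])    # site capacities
c    = 1.0                       # liquid-phase concentration, held constant

q = np.zeros(2)                  # bound amounts on the two site classes
dt, steps = 0.01, 100000         # explicit Euler in time
for _ in range(steps):
    q = q + dt * (ka * c * (qmax - q) - kd * q)

# At long times each site class relaxes to its Langmuir isotherm value:
q_eq = qmax * ka * c / (ka * c + kd)
```

The closed-form equilibrium `q_eq` is the sum of two Langmuir isotherms evaluated site by site, which is what makes the bi-Langmuir form identifiable from combined equilibrium and dynamic data.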

  5. Multiphase reacting flows modelling and simulation

    CERN Document Server

    Marchisio, Daniele L

    2007-01-01

    The papers in this book describe the most widely applicable modeling approaches and are organized in six groups covering from fundamentals to relevant applications. In the first part, some fundamentals of multiphase turbulent reacting flows are covered. In particular the introduction focuses on basic notions of turbulence theory in single-phase and multi-phase systems as well as on the interaction between turbulence and chemistry. In the second part, models for the physical and chemical processes involved are discussed. Among other things, particular emphasis is given to turbulence modeling strategies for multiphase flows based on the kinetic theory for granular flows. Next, the different numerical methods based on Lagrangian and/or Eulerian schemes are presented. In particular the most popular numerical approaches of computational fluid dynamics codes are described (i.e., Direct Numerical Simulation, Large Eddy Simulation, and Reynolds-Averaged Navier-Stokes approach). The book will cover particle-based meth...

  6. A Superbubble Feedback Model for Galaxy Simulations

    CERN Document Server

    Keller, B W; Benincasa, S M; Couchman, H M P

    2014-01-01

    We present a new stellar feedback model that reproduces superbubbles. Superbubbles from clustered young stars evolve quite differently to individual supernovae and are substantially more efficient at generating gas motions. The essential new components of the model are thermal conduction, sub-grid evaporation and a sub-grid multi-phase treatment for cases where the simulation mass resolution is insufficient to model the early stages of the superbubble. The multi-phase stage is short compared to superbubble lifetimes. Thermal conduction physically regulates the hot gas mass without requiring a free parameter. Accurately following the hot component naturally avoids overcooling. Prior approaches tend to heat too much mass, leaving the hot ISM below $10^6$ K and susceptible to rapid cooling unless ad-hoc fixes were used. The hot phase also allows feedback energy to correctly accumulate from multiple, clustered sources, including stellar winds and supernovae. We employ high-resolution simulations of a single star ...

  7. Vmax estimate from three-parameter critical velocity models: validity and impact on 800 m running performance prediction.

    Science.gov (United States)

    Bosquet, Laurent; Duchene, Antoine; Lecot, François; Dupont, Grégory; Leger, Luc

    2006-05-01

The purpose of this study was to evaluate the validity of maximal velocity (Vmax) estimated from three-parameter systems models, and to compare the predictive value of two- and three-parameter models for the 800 m. Seventeen trained male subjects (VO2max = 66.54 ± 7.29 ml·min⁻¹·kg⁻¹) performed five randomly ordered constant velocity tests (CVT), a maximal velocity test (mean velocity over the last 10 m portion of a 40 m sprint) and an 800 m time trial (V800m). Five systems models (two three-parameter and three two-parameter) were used to compute Vmax (three-parameter models), critical velocity (CV), anaerobic running capacity (ARC) and V800m from times to exhaustion during the CVT. Vmax estimates were significantly lower than the measured maximal velocity. Critical velocity alone explained 40-62% of the variance in V800m; combining CV with the other parameters of each model to produce a calculated V800m clearly improved this relationship (r ≥ 0.83), with the three-parameter models showing the best association (r ≥ 0.93). Although the three-parameter models thus appear to have a better predictive value for short-duration events such as the 800 m, the fact that Vmax is not associated with the ability it is supposed to reflect suggests that they are more empirical than systems models.
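The two-parameter critical velocity model referred to above is the linear distance-time relationship d_lim = ARC + CV · t_lim: CV and ARC come from a straight-line fit to the constant-velocity tests, and a predicted 800 m time follows by solving for t at d = 800 m. A sketch with hypothetical test data (not the study's subjects):

```python
import numpy as np

# Two-parameter critical velocity model: d_lim = ARC + CV * t_lim
#   CV  = critical velocity (slope, m/s)
#   ARC = anaerobic running capacity (intercept, m)
# Hypothetical constant-velocity test results for one runner:
t_lim = np.array([120.0, 180.0, 300.0, 600.0, 900.0])  # times to exhaustion (s)
v     = np.array([6.2, 5.9, 5.5, 5.1, 4.95])           # test velocities (m/s)
d_lim = v * t_lim                                      # distances covered (m)

CV, ARC = np.polyfit(t_lim, d_lim, 1)   # straight-line fit: slope, intercept

# Predicted 800 m performance: solve 800 = ARC + CV * t for t.
t800 = (800.0 - ARC) / CV               # predicted time (s)
v800 = 800.0 / t800                     # predicted mean velocity (m/s)
```

The three-parameter models discussed in the abstract add Vmax as an asymptote of the velocity-time relationship, which changes the fit but not the basic idea of predicting V800m from parameters estimated on the CVT.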

  8. Improved storage efficiency through geologic modeling and reservoir simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ammer, J.R.; Mroz, T.H.; Covatch, G.L.

    1997-11-01

The US Department of Energy (DOE), through partnerships with industry, is demonstrating the importance of geologic modeling and reservoir simulation for optimizing the development and operation of gas storage fields. The geologic modeling and reservoir simulation study for the Natural Fuel Gas Supply Corporation CRADA was completed in September 1995. The results of this study were presented at the 1995 Society of Petroleum Engineers' (SPE) Eastern Regional Meeting. Although there has been no field verification of the modeling results, the study has shown the potential advantages and cost savings opportunities of using horizontal wells for storage enhancement. The geologic modeling for the Equitrans' CRADA was completed in September 1995 and was also presented at the 1995 SPE Eastern Regional Meeting. The reservoir modeling of past field performance was completed in November 1996 and prediction runs are currently being made to investigate the potential of offering either a 10-day or 30-day peaking service in addition to the existing 110-day base load service. Initial results have shown that peaking services can be provided through remediation of well damage and by drilling either several new vertical wells or one new horizontal well. The geologic modeling for the Northern Indiana Public Service Company CRADA was completed in November 1996 with a horizontal well being completed in January 1997. Based on well test results, the well will significantly enhance gas deliverability from the field and will allow the utilization of gas from an area of the storage field that was not accessible from their existing vertical wells. Results are presented from these three case studies.

  9. Advancing Material Models for Automotive Forming Simulations

    Science.gov (United States)

    Vegter, H.; An, Y.; ten Horn, C. H. L. J.; Atzema, E. H.; Roelofsen, M. E.

    2005-08-01

Simulations in the automotive industry need more advanced material models to achieve highly reliable forming and springback predictions. Conventional material models implemented in FEM simulation codes cannot describe the plastic material behaviour during monotonic strain paths with sufficient accuracy. Recently, ESI and Corus have co-operated on the implementation of an advanced material model in the FEM code PAMSTAMP 2G. This applies to the strain hardening model, the influence of strain rate, and the description of the yield locus in these models. A subsequent challenge is the description of the material after a change of strain path. The use of advanced high-strength steels in the automotive industry requires a description of the plastic behaviour of multiphase steels. The simplest variant is dual-phase steel, consisting of a ferritic and a martensitic phase; multiphase materials also contain a bainitic phase in addition to the ferritic and martensitic phases. More physical descriptions of strain hardening than simple fitted Ludwik/Nadai curves are necessary. Methods to predict the plastic behaviour of single-phase materials use a simple dislocation interaction model based only on the formed cell structures. A new method proposed at Corus to predict the plastic behaviour of multiphase materials has to take into account hard phases, which deform less easily. The resulting deformation gradients create geometrically necessary dislocations. Additional microstructural information, such as the morphology and size of hard-phase particles or grains, is necessary to derive strain hardening models for this type of material. Measurements available from the Numisheet benchmarks allow these models to be validated. At Corus, additional measured values are available from cross-die tests. This laboratory test can attain critical deformations through large variations in blank size and processing conditions. The tests are a powerful tool in optimising forming simulations.
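The "simple fitted Ludwik/Nadai curves" the abstract contrasts against take the power-law form σ = C(ε₀ + ε)ⁿ, which becomes linear in log space once the pre-strain ε₀ is fixed. A sketch on synthetic data (all constants are hypothetical, not fitted to any real flow curve):

```python
import numpy as np

# Nadai/Ludwik strain hardening: sigma = C * (eps0 + eps)**n
# With eps0 fixed, log(sigma) = log(C) + n * log(eps0 + eps): a linear fit.
eps0  = 0.005                              # pre-strain offset (hypothetical)
eps   = np.linspace(0.01, 0.30, 30)        # plastic strain
sigma = 530.0 * (eps0 + eps) ** 0.21       # synthetic flow stress (MPa)

n_fit, logC = np.polyfit(np.log(eps0 + eps), np.log(sigma), 1)
C_fit = np.exp(logC)                       # recovers C; slope recovers n
```

The abstract's point is that such a fit carries no microstructural content: for multiphase steels the hardening has to be built from dislocation-based arguments instead of simply re-fitting C and n.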

  10. Applying profile- and catchment-based mathematical models for evaluating the run-off from a Nordic catchment

    Directory of Open Access Journals (Sweden)

    Farkas Csilla

    2016-09-01

Knowledge of hydrological processes and water balance elements is important for climate-adaptive water management as well as for introducing mitigation measures aimed at improving surface water quality. Mathematical models have the potential to estimate changes in hydrological processes under changing climatic or land use conditions. These models, however, need careful calibration and testing before being applied in decision making. The aim of this study was to compare the capability of five different hydrological models to predict the runoff and the soil water balance elements of a small catchment in Norway. The models were harmonised and calibrated against the same data set. Overall, good agreement between the measured and simulated runoff was obtained for the different models when integrating the results over a week or longer periods. Model simulations indicate that forest appears to be very important for the water balance in the catchment, and that there is a lack of information on land-use-specific water balance elements. We concluded that the joint application of hydrological models serves as a good basis for ensemble modelling of water transport processes within a catchment and can highlight the uncertainty of model forecasts.

  11. Modelling and simulation of thermal power plants

    Energy Technology Data Exchange (ETDEWEB)

    Eborn, J.

    1998-02-01

    Mathematical modelling and simulation are important tools when dealing with engineering systems that are becoming increasingly complex. Integrated production and recycling of materials are trends that give rise to heterogeneous systems, which are difficult to handle within one area of expertise. Model libraries are an excellent way to package engineering knowledge of systems and units for reuse by those who are not modelling experts. Many commercial packages provide good model libraries, but they are usually domain-specific and closed. Heterogeneous, multi-domain systems require open model libraries written in general-purpose modelling languages. This thesis describes a model database for thermal power plants written in the object-oriented modelling language OMOLA. The models are based on first principles. Subunits describe volumes with pressure and enthalpy dynamics and flows of heat or different media. The subunits are used to build basic units such as pumps, valves and heat exchangers, which in turn can be used to build system models. Several applications are described: a heat recovery steam generator, equipment for juice blending, steam generation in a sulphuric acid plant, and a condensing steam plate heat exchanger. Model libraries for industrial use must be validated against measured data. The thesis describes how parameter estimation methods can be used for model validation. Results from a case study on parameter optimisation of a non-linear drum boiler model show how the technique can be used. 32 refs, 21 figs
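
    The parameter estimation approach to model validation can be illustrated with a toy first-order model; the step-response form, the "true" parameter values, and the brute-force grid search below are all assumptions for illustration, not the thesis's method:

```python
import numpy as np

def step_response(t, gain, tau):
    """First-order model response to a unit step, a stand-in for a plant submodel."""
    return gain * (1.0 - np.exp(-t / tau))

# Synthetic "measurements": true gain 2.0, true time constant 5.0 s, plus noise
t = np.linspace(0.0, 30.0, 61)
rng = np.random.default_rng(0)
measured = step_response(t, 2.0, 5.0) + rng.normal(0.0, 0.02, t.size)

# Least-squares estimate by exhaustive search over a parameter grid
gains = np.linspace(1.0, 3.0, 101)
taus = np.linspace(1.0, 10.0, 91)
sse = np.array([[np.sum((measured - step_response(t, g, tau)) ** 2)
                 for tau in taus] for g in gains])
ig, it = np.unravel_index(np.argmin(sse), sse.shape)
print(f"estimated gain = {gains[ig]:.2f}, tau = {taus[it]:.1f} s")
```

    If the estimated parameters land at physically implausible values, or the residual stays large, the model structure itself is suspect, which is what makes estimation useful for validation.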

  12. Dynamics modeling and simulation of flexible airships

    Science.gov (United States)

    Li, Yuwen

    The resurgence of airships has created a need for dynamics models and simulation capabilities of these lighter-than-air vehicles. The focus of this thesis is a theoretical framework that integrates the flight dynamics, structural dynamics, aerostatics and aerodynamics of flexible airships. The study begins with a dynamics model based on a rigid-body assumption. A comprehensive computation of aerodynamic effects is presented, where the aerodynamic forces and moments are categorized into various terms based on different physical effects. A series of prediction approaches for different aerodynamic effects are unified and applied to airships. The numerical results of aerodynamic derivatives and the simulated responses to control surface deflection inputs are verified by comparing to existing wind-tunnel and flight test data. With the validated aerodynamics and rigid-body modeling, the equations of motion of an elastic airship are derived by the Lagrangian formulation. The airship is modeled as a free-free Euler-Bernoulli beam and the bending deformations are represented by shape functions chosen as the free-free normal modes. In order to capture the coupling between the aerodynamic forces and the structural elasticity, local velocity on the deformed vehicle is used in the computation of aerodynamic forces. Finally, with the inertial, gravity, aerostatic and control forces incorporated, the dynamics model of a flexible airship is represented by a single set of nonlinear ordinary differential equations. The proposed model is implemented as a dynamics simulation program to analyze the dynamics characteristics of the Skyship-500 airship. Simulation results are presented to demonstrate the influence of structural deformation on the aerodynamic forces and the dynamics behavior of the airship. The nonlinear equations of motion are linearized numerically for the purpose of frequency domain analysis and for aeroelastic stability analysis. The results from the latter for the

  13. Simulation modeling perspectives of the Bangladesh family planning and female education system.

    Science.gov (United States)

    Teel, J H; Ragade, R K

    1984-07-01

    A systems dynamics simulation study of the interaction of various social subsystems in the People's Republic of Bangladesh is chosen to address integrated planning concerns. It is concluded that one should not underestimate the potential of noneconomic societal forces: They can have a positive impact on slowing population growth and improving the quality of life. Methodologies included: fuzzy profiles for choosing primary variables; interpretive impact matrices to generate the systems dynamics equations; interactive computer capabilities for purposes other than simulation runs; modeling log file to note modeling assumptions, changes, and redefinitions; and microcomputer portability.
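
    The stock-and-flow style of a system dynamics model can be sketched as follows; all coefficients and the education-fertility coupling below are invented for illustration and are not the study's values:

```python
# Minimal system-dynamics sketch: a population stock with birth/death flows,
# where a rising female-education index damps fertility.
def simulate(years=30, pop=90.0, education=0.2):
    history = []
    for _ in range(years):
        birth_rate = 0.045 * (1.0 - 0.5 * education)  # education lowers fertility
        death_rate = 0.015
        pop += pop * (birth_rate - death_rate)
        education = min(1.0, education + 0.01)        # slow rise in education index
        history.append(pop)
    return history

trajectory = simulate()
print(f"population after 30 years: {trajectory[-1]:.1f} million")
```

    Even this toy version shows the study's qualitative point: a noneconomic variable (education) feeds back into the demographic stock.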

  14. Clouds and Precipitation Simulated by the US DOE Accelerated Climate Modeling for Energy (ACME)

    Science.gov (United States)

    Xie, S.; Lin, W.; Yoon, J. H.; Ma, P. L.; Rasch, P. J.; Ghan, S.; Zhang, K.; Zhang, Y.; Zhang, C.; Bogenschutz, P.; Gettelman, A.; Larson, V. E.; Neale, R. B.; Park, S.; Zhang, G. J.

    2015-12-01

    A new US Department of Energy (DOE) climate modeling effort, the Accelerated Climate Model for Energy (ACME), aims to accelerate the development and application of fully coupled, state-of-the-art Earth system models for science and energy applications. ACME is a high-resolution climate model, with 0.25-degree horizontal resolution and more than 60 vertical levels. It starts from the Community Earth System Model (CESM), with notable changes to its physical parameterizations and other components. This presentation provides an overview of the ACME model's capability in simulating clouds and precipitation and its sensitivity to convection schemes. Results using several state-of-the-art cumulus convection schemes, including unified parameterizations being developed in the climate community, will be presented. These convection schemes are evaluated in a multi-scale framework of both short-range hindcasts and free-running climate simulations, against both satellite data and ground-based measurements. Running a climate model in short-range hindcast mode has proven to be an efficient way to understand model deficiencies. The analysis focuses on systematic errors in simulated clouds and precipitation that are shared by many climate models, with the goal of understanding which model deficiencies are primarily responsible for these errors.
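
    Evaluating a hindcast against observations typically reduces to simple skill metrics; a sketch with hypothetical precipitation data (values invented for illustration):

```python
import numpy as np

def bias_and_rmse(simulated, observed):
    """Mean bias and root-mean-square error, standard hindcast skill metrics."""
    diff = np.asarray(simulated) - np.asarray(observed)
    return float(diff.mean()), float(np.sqrt((diff ** 2).mean()))

# Hypothetical daily precipitation (mm/day): hindcast vs. ground measurements
obs = np.array([0.0, 1.2, 3.4, 0.5, 0.0, 7.8, 2.1])
sim = np.array([0.3, 1.0, 2.8, 1.1, 0.2, 6.0, 2.5])
bias, rmse = bias_and_rmse(sim, obs)
print(f"bias = {bias:+.2f} mm/day, RMSE = {rmse:.2f} mm/day")
```

    Because hindcasts start from analyzed initial states, errors like these can be attributed to fast parameterized processes (e.g., convection) before slow feedbacks develop.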

  15. Surrogate model approach for improving the performance of reactive transport simulations

    Science.gov (United States)

    Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris

    2016-04-01

    Reactive transport models serve a large number of important geoscientific applications involving underground resources in industry and scientific research. A reactive transport simulation typically consists of at least two coupled simulation models. The first is a hydrodynamics simulator responsible for simulating groundwater flow and solute transport. Hydrodynamics simulators are well-established technology and can be very efficient: when run without coupled geochemistry, their spatial geometries can span millions of elements even on desktop workstations. The second is a geochemical simulation model coupled to the hydrodynamics simulator. Geochemical simulation models are much more computationally costly, which makes reactive transport simulations spanning millions of spatial elements very difficult to achieve. To address this problem we propose to replace the coupled geochemical simulation model with a surrogate model: a statistical model created to include only the necessary subset of simulator complexity for a particular scenario. To demonstrate the viability of this approach we tested it on a popular published benchmark problem for simulation models (Kolditz, 2012) involving 1D calcite transport. We trained a number of statistical models, available through the caret and DiceEval packages for R, as surrogate models on a randomly sampled subset of the input-output data from the geochemical simulation model used in the original reactive transport simulation. For validation we used each surrogate model to predict the simulator output on the part of the sampled input data that was not used for training. For this scenario we find that the multivariate adaptive regression splines
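
    The train/validate workflow described above can be sketched in Python (the study itself used R with caret/DiceEval); the one-input toy "simulator" and the polynomial surrogate below are stand-ins for the geochemical code and the fitted statistical model:

```python
import numpy as np

# Stand-in for an expensive geochemical simulator: one input, one output.
def simulator(x):
    return np.sin(3.0 * x) + 0.5 * x  # placeholder response, not real chemistry

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 2.0, 200)
y = simulator(x)

# Random train/validation split, as in the abstract's workflow
idx = rng.permutation(x.size)
train, valid = idx[:150], idx[150:]

# Cheap surrogate: a degree-9 polynomial fitted to the training subset only
coeffs = np.polyfit(x[train], y[train], deg=9)
pred = np.polyval(coeffs, x[valid])

rmse = float(np.sqrt(np.mean((pred - y[valid]) ** 2)))
print(f"validation RMSE of surrogate: {rmse:.4f}")
```

    Held-out validation error is the key acceptance test: the surrogate is only useful if it reproduces the simulator on inputs it never saw during training.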

  16. Mathematical models and numerical simulation in electromagnetism

    CERN Document Server

    Bermúdez, Alfredo; Salgado, Pilar

    2014-01-01

    The book represents a basic support for a master course in electromagnetism oriented to numerical simulation. Its main goal is for the reader to know the boundary-value problems of partial differential equations that must be solved in order to perform computer simulation of electromagnetic processes. It also includes a part devoted to electric circuit theory based on ordinary differential equations. The book is mainly oriented to electrical engineering applications, going from the general to the specific: from the full Maxwell's equations to the particular cases of the electrostatics, direct current, magnetostatics and eddy currents models. Apart from standard exercises related to analytical calculus, the book includes others oriented to real-life applications solved with the MaxFEM free simulation software.
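
    The simplest boundary-value problem of the kind such a course treats is the 1D electrostatics model problem; a finite-difference sketch (the book itself works with finite elements via MaxFEM, so this discretization is only an analogy):

```python
import numpy as np

# Solve -d2u/dx2 = f on (0, 1) with u(0) = u(1) = 0 by central differences.
n = 99                      # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.ones(n)              # uniform source term (scaled charge density)

# Tridiagonal matrix for the negative second derivative
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
u = np.linalg.solve(A, f)

# The exact solution of this problem is u(x) = x(1 - x)/2
err = float(np.max(np.abs(u - x * (1.0 - x) / 2.0)))
print("max error vs analytic solution:", err)
```

    Since the exact solution is quadratic, the second-order scheme reproduces it at the nodes up to rounding error, a handy sanity check for any discretization code.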

  17. Modeling and simulation of economic processes

    Directory of Open Access Journals (Sweden)

    Bogdan Brumar

    2010-12-01

    Full Text Available In general, any activity requires a sustained course of action, often characterized by a degree of uncertainty and insecurity with respect to the size of the objective pursued. Because of the complexity of real economic systems and the stochastic dependencies between the variables and parameters considered, not all systems can be adequately represented by a model that can be solved by analytical methods while covering all the issues relevant to management decision analysis over a realistic economic horizon. In such cases, the simulation technique is often the only available alternative. Using simulation techniques to study real-world systems often requires laborious work, and carrying out a simulation experiment is a process that takes place in several stages.

  18. Sediment management of run-of-river hydroelectric power project in the Himalayan region using hydraulic model studies

    Indian Academy of Sciences (India)

    NEENA ISAAC; T I ELDHO

    2017-07-01

    Storage capacity of hydropower reservoirs is lost to sediment deposition. The problem is severe in projects located on rivers with high sediment concentrations during the flood season. Removing the deposited sediment hydraulically by drawdown flushing is one of the most effective methods of restoring storage capacity. The effectiveness of flushing depends on various factors, most of which are site-specific. Physical and mathematical models can be used to simulate the flushing operation, and based on the simulation results, the layout design and operation schedule of such projects can be modified for better sediment management. This paper presents drawdown flushing studies of the reservoir of a Himalayan river hydroelectric project, Kotlibhel, in Uttarakhand, India. For the hydraulic model studies, a 1:100-scale geometrically similar model was constructed. Simulation studies in the model indicated that drawdown flushing for a duration of 12 h with a discharge of 500 m3/s or more is effective in removing the annual sediment deposition in the reservoir. The model studies show that the sedimentation problem of the reservoir can be effectively managed through hydraulic flushing.
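
    In a geometrically similar free-surface model like this, quantities translate between model and prototype by standard Froude similarity relations: discharge scales with the length ratio to the power 2.5 and time with the power 0.5. A sketch applying them to the study's 1:100 scale and flushing figures (the relations are standard; the application here is illustrative):

```python
# Froude similarity for a geometrically similar hydraulic model.
def prototype_to_model_discharge(q_prototype, length_ratio):
    return q_prototype / length_ratio ** 2.5

def prototype_to_model_time(t_prototype, length_ratio):
    return t_prototype / length_ratio ** 0.5

Lr = 100.0                                           # 1:100 scale, as in the study
q_model = prototype_to_model_discharge(500.0, Lr)    # 500 m3/s flushing discharge
t_model = prototype_to_model_time(12.0, Lr)          # 12 h flushing duration
print(f"model discharge: {q_model * 1000:.1f} l/s, model duration: {t_model:.2f} h")
```

    So the 500 m3/s, 12 h prototype flushing event corresponds to only a few litres per second sustained for roughly an hour in the laboratory flume.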

  19. An improved Peronnet-Thibault mathematical model of human running performance.

    Science.gov (United States)

    Alvarez-Ramirez, Jose

    2002-04-01

    Using an improved Peronnet-Thibault model to analyse the maximal power available during exercise, it was found that a 3rd-order relaxation process for the decreasing dynamics of aerobic power can describe accurately the data available for world track records and aerobic-to-total energy ratio (ATER). It was estimated that the time-scales for the decreasing dynamics are around 25 s for anaerobic power output and that they range from 2.12 h to 7.8 days for aerobic power output. In agreement with experimental evidence, the ATER showed a rapid increase during the first 300 s of exercise duration, to achieve an asymptote close to 100% after 1,000 s. In addition, the transition time when the ATER rose above 50% was found to be at a race duration of about 100 s, which would correspond to race distances of about 800 m. The results suggest that the aerobic power output achieves its maximal value at 300-400 s, and reaches a plateau at 26-28 W.kg(-1) that lasts about 5,000 s. After this period, the aerobic power output decreases slowly due to the contribution of long time-scale metabolic processes having smaller energy contributions (about 30% to 40% of the total aerobic power output).
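
    The flavour of such a relaxation model can be sketched with two exponential power components and a cumulative-energy ratio; only the 25 s anaerobic time constant is taken from the abstract, and all amplitudes and the aerobic rise time are invented for illustration:

```python
import numpy as np

# Toy two-component power model: anaerobic output decays with a 25 s time
# constant (from the abstract); aerobic output rises toward a plateau.
t = np.arange(1.0, 2001.0)                    # exercise duration, s
p_anaerobic = 60.0 * np.exp(-t / 25.0)        # W/kg, decaying anaerobic power
p_aerobic = 27.0 * (1.0 - np.exp(-t / 30.0))  # W/kg, rising aerobic power

# Aerobic-to-total energy ratio (ATER) from cumulative energies (dt = 1 s)
e_aer = np.cumsum(p_aerobic)
e_tot = np.cumsum(p_aerobic + p_anaerobic)
ater = e_aer / e_tot

crossover = float(t[np.argmax(ater > 0.5)])
print(f"ATER exceeds 50% at about {crossover:.0f} s")
```

    With these made-up amplitudes the 50% crossover lands in the same neighbourhood as the abstract's ~100 s transition, illustrating how the ratio is driven almost entirely by the two time constants.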

  20. Regional on-road vehicle running emissions modeling and evaluation for conventional and alternative vehicle technologies.

    Science.gov (United States)

    Frey, H Christopher; Zhai, Haibo; Rouphail, Nagui M

    2009-11-01

    This study presents a methodology for estimating high-resolution, regional on-road vehicle emissions and the associated reductions in air pollutant emissions from vehicles that utilize alternative fuels or propulsion technologies. The fuels considered are gasoline, diesel, ethanol, biodiesel, compressed natural gas, hydrogen, and electricity. The technologies considered are internal combustion or compression engines, hybrids, fuel cell, and electric. Road link-based emission models are developed using modal fuel use and emission rates applied to facility- and speed-specific driving cycles. For an urban case study, passenger cars were found to be the largest sources of HC, CO, and CO(2) emissions, whereas trucks contributed the largest share of NO(x) emissions. When alternative fuel and propulsion technologies were introduced in the fleet at a modest market penetration level of 27%, their emission reductions were found to be 3-14%. Emissions for all pollutants generally decreased with an increase in the market share of alternative vehicle technologies. Turnover of the light duty fleet to newer Tier 2 vehicles reduced emissions of HC, CO, and NO(x) substantially. However, modest improvements in fuel economy may be offset by VMT growth and reductions in overall average speed.
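
    The link-based inventory described above reduces to a volume-weighted sum of facility-specific emission rates over road links; a sketch with invented link volumes and rates (not the study's values):

```python
# Link-based emission inventory sketch: each road link carries a traffic
# activity level and a facility/speed-specific emission rate; the regional
# total is the activity-weighted sum over links.
links = [
    # (link id, vehicle-miles traveled, NOx grams per vehicle-mile)
    ("freeway-1",  120_000, 0.45),
    ("freeway-2",   80_000, 0.40),
    ("arterial-1",  50_000, 0.60),
    ("local-1",     15_000, 0.75),
]

total_g = sum(vmt * rate for _, vmt, rate in links)
print(f"regional NOx: {total_g / 1e6:.2f} metric tons")
```

    Alternative-fuel scenarios slot in naturally: scaling a share of each link's activity by a different rate table gives the fleet-mix sensitivity the study reports.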