WorldWideScience

Sample records for models simulate fine

  1. Computer Models Simulate Fine Particle Dispersion

    Science.gov (United States)

    2010-01-01

    Through a NASA Seed Fund partnership with DEM Solutions Inc., of Lebanon, New Hampshire, scientists at Kennedy Space Center refined existing software to study the electrostatic phenomena of granular and bulk materials as they apply to planetary surfaces. The software, EDEM, allows users to import particles and obtain accurate representations of their shapes for modeling purposes, such as simulating bulk solids behavior, and was enhanced to be able to more accurately model fine, abrasive, cohesive particles. These new EDEM capabilities can be applied in many industries unrelated to space exploration and have been adopted by several prominent U.S. companies, including John Deere, Pfizer, and Procter & Gamble.

  2. Charge-Spot Model for Electrostatic Forces in Simulation of Fine Particulates

    Science.gov (United States)

    Walton, Otis R.; Johnson, Scott M.

    2010-01-01

    The charge-spot technique for modeling the static electric forces acting between charged fine particles treats the electric charges on individual particles as small sets of discrete point charges located near their surfaces. This is in contrast to existing models, which assume a single charge per particle. The charge-spot technique more accurately describes the forces, torques, and moments that act on triboelectrically charged particles, especially image-charge forces acting near conducting surfaces. The discrete element method (DEM) simulation uses a truncation range to limit the number of near-neighbor charge spots, via a shifted and truncated Coulomb potential. The model can be readily adapted to account for induced dipoles in uncharged particles (and thus dielectrophoretic forces) by allowing two charge spots of opposite sign to be created in response to an external electric field. To account for virtual overlap during contacts, the model can be set to automatically scale down the effective charge in proportion to the amount of virtual overlap of the charge spots, either by mimicking the behavior of two real overlapping spherical charge clouds or with other approximate forms. The charge-spot method resembles the real, non-uniform surface charge distributions that result from tribocharging much more closely than simpler approaches that assign only a single total charge to a particle. With the charge-spot model, a single particle may have zero net charge yet still carry both positive and negative charge spots, which can produce substantial forces on the particle when it is close to other charges, in an external electric field, or near a conducting surface. Because the model can contain any number of charge spots per particle, it can be used with only one or two spots per particle to simulate charging from solar wind bombardment, or with several spots to simulate triboelectric charging.
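
    As a rough illustration of the pairwise interaction described above, the sketch below evaluates the force between the charge spots of two particles using a simple shifted-force truncation (one possible reading of the shifted, truncated Coulomb interaction). It is not the authors' implementation; the function name, constants, and the particular truncation form are assumptions of this sketch.

    ```python
    # Minimal sketch (not the authors' code): pairwise Coulomb force between the
    # "charge spots" of two particles, with a shifted-force truncation so that
    # the interaction goes smoothly to zero at a cutoff radius r_cut.
    import numpy as np

    K_E = 8.9875517923e9  # Coulomb constant, N m^2 / C^2

    def charge_spot_force(spots_a, q_a, spots_b, q_b, r_cut):
        """Net force on particle A from particle B.

        spots_a, spots_b : (n,3) arrays of charge-spot positions [m]
        q_a, q_b         : (n,) arrays of spot charges [C]
        r_cut            : truncation range [m]
        """
        force = np.zeros(3)
        for ra, qa in zip(spots_a, q_a):
            for rb, qb in zip(spots_b, q_b):
                d = ra - rb
                r = np.linalg.norm(d)
                if r == 0.0 or r >= r_cut:
                    continue
                # Shifted-force truncation: F(r) - F(r_cut), zero at the cutoff.
                mag = K_E * qa * qb * (1.0 / r**2 - 1.0 / r_cut**2)
                force += mag * d / r
        return force
    ```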

  3. QUIESCENT PROMINENCES IN THE ERA OF ALMA: SIMULATED OBSERVATIONS USING THE 3D WHOLE-PROMINENCE FINE STRUCTURE MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Gunár, Stanislav; Heinzel, Petr [Astronomical Institute, The Czech Academy of Sciences, 25165 Ondřejov (Czech Republic); Mackay, Duncan H. [School of Mathematics and Statistics, University of St Andrews, North Haugh, St Andrews KY16 9SS (United Kingdom); Anzer, Ulrich [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, D-85740 Garching bei München (Germany)

    2016-12-20

    We use the detailed 3D whole-prominence fine structure model to produce the first simulated high-resolution ALMA observations of a modeled quiescent solar prominence. The maps of synthetic brightness temperature and optical thickness shown in the present paper are produced using a visualization method for synthesis of the submillimeter/millimeter radio continua. We have obtained simulated observations of both the prominence at the limb and the filament on the disk at wavelengths covering a broad range that encompasses the full potential of ALMA. We demonstrate the extent to which the small-scale and large-scale prominence and filament structures will be visible in ALMA observations spanning both the optically thin and thick regimes. We analyze the relationship between the brightness and kinetic temperature of the prominence plasma. We also illustrate the opportunities ALMA will provide for studying the thermal structure of the prominence plasma, from the cores of the cool prominence fine structure to the prominence–corona transition region. In addition, we show that detailed 3D modeling of entire prominences with their numerous fine structures will be important for the correct interpretation of future ALMA observations of prominences.
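
    For orientation, in the Rayleigh–Jeans regime the brightness temperature of an isothermal slab relates to the kinetic temperature through the optical thickness via the standard radiative-transfer relation below; the paper's synthesis uses the full 3D model rather than this single-slab approximation.

    ```latex
    % Standard isothermal-slab relation (background neglected), shown only to
    % indicate how brightness temperature saturates toward the kinetic
    % temperature as the continuum becomes optically thick.
    T_\mathrm{b}(\nu) \simeq T_\mathrm{kin}\left(1 - e^{-\tau_\nu}\right)
    ```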

  4. A triple-scale crystal plasticity modeling and simulation on size effect due to fine-graining

    International Nuclear Information System (INIS)

    Kurosawa, Eisuke; Aoyagi, Yoshiteru; Tadano, Yuichi; Shizawa, Kazuyuki

    2010-01-01

    In this paper, a triple-scale crystal plasticity model bridging three hierarchical material structures, i.e., the dislocation structure, the grain aggregate and the practical macroscopic structure, is developed. Geometrically necessary (GN) dislocation density and GN incompatibility are employed so as to describe isolated dislocations and dislocation pairs in a grain, respectively. The homogenization method is then introduced into the GN dislocation-crystal plasticity model to derive the governing equation of the macroscopic structure with mathematical and physical consistency. Using the present model, a triple-scale FE simulation bridging the above three hierarchical structures is carried out for f.c.c. polycrystals with different mean grain sizes. It is shown that the present model can qualitatively reproduce size effects in macroscopic ultrafine-grained specimens, i.e., the increase of initial yield stress, the decrease of hardening ratio after reaching tensile strength and the reduction of tensile ductility with decreasing grain size. Moreover, the relationship between macroscopic yielding of the specimen and microscopic grain yielding is discussed, and the mechanism of the poor tensile ductility due to fine-graining is clarified. (author)

  5. Haptic rendering for simulation of fine manipulation

    CERN Document Server

    Wang, Dangxiao; Zhang, Yuru

    2014-01-01

    This book introduces the latest progress in six degrees of freedom (6-DoF) haptic rendering with the focus on a new approach for simulating force/torque feedback in performing tasks that require dexterous manipulation skills. One of the major challenges in 6-DoF haptic rendering is to resolve the conflict between high speed and high fidelity requirements, especially in simulating a tool interacting with both rigid and deformable objects in a narrow space and with fine features. The book presents a configuration-based optimization approach to tackle this challenge. Addressing a key issue in man

  6. Vortex Filaments in Grids for Scalable, Fine Smoke Simulation.

    Science.gov (United States)

    Meng, Zhang; Weixin, Si; Yinling, Qian; Hanqiu, Sun; Jing, Qin; Heng, Pheng-Ann

    2015-01-01

    Vortex modeling can produce attractive visual effects of dynamic fluids, which are widely applicable to dynamic media, computer games, special effects, and virtual reality systems. However, it is challenging to efficiently simulate intense, finely detailed fluids such as smoke with rapidly increasing numbers of vortex filaments and smoke particles. The authors propose a novel vortex-filaments-in-grids scheme in which uniform grids dynamically bridge the vortex filaments and smoke particles for scalable, fine smoke simulation with macroscopic vortex structures. Using the vortex model, their approach supports a trade-off between simulation speed and scale of detail. After the whole velocity field is computed, external control can easily be exerted on the embedded grid to guide the vortex-based smoke motion. The experimental results demonstrate the efficiency of the proposed scheme for visually plausible smoke simulation with macroscopic vortex structures.
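
    Filament-based vortex methods reconstruct the velocity field from the circulation carried by the filaments via the Biot–Savart law. The sketch below illustrates that step only (it is not the authors' grid-bridging scheme); the midpoint rule and the smoothing core are assumptions of this sketch.

    ```python
    # Illustrative sketch: regularized Biot-Savart velocity induced at a set of
    # evaluation points (e.g. grid nodes) by straight vortex-filament segments.
    import numpy as np

    def filament_velocity(points, seg_start, seg_end, gamma, core=1e-2):
        """points: (m,3) evaluation points; seg_start/seg_end: (n,3) segment ends;
        gamma: (n,) circulation per segment; core: smoothing length."""
        u = np.zeros_like(points, dtype=float)
        for a, b, g in zip(seg_start, seg_end, gamma):
            dl = b - a                       # segment vector
            mid = 0.5 * (a + b)              # segment midpoint (midpoint rule)
            r = points - mid                 # (m,3) separation vectors
            r2 = np.sum(r * r, axis=1) + core**2
            u += (g / (4.0 * np.pi)) * np.cross(dl, r) / r2[:, None]**1.5
        return u
    ```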

  7. Multiscale modeling and nested simulations of three-dimensional ionospheric plasmas: Rayleigh–Taylor turbulence and nonequilibrium layer dynamics at fine scales

    International Nuclear Information System (INIS)

    Mahalov, Alex

    2014-01-01

    Multiscale modeling and high resolution three-dimensional simulations of nonequilibrium ionospheric dynamics are major frontiers in the field of space sciences. The latest developments in fast computational algorithms and novel numerical methods have advanced reliable forecasting of ionospheric environments at fine scales. These new capabilities include improved physics-based predictive modeling, nesting and implicit relaxation techniques that are designed to integrate models of disparate scales. A range of scales, from mesoscale to ionospheric microscale, are included in a 3D modeling framework. Analyses and simulations of primary and secondary Rayleigh–Taylor instabilities in the equatorial spread F (ESF), the response of the plasma density to the neutral turbulent dynamics, and wave breaking in the lower region of the ionosphere and nonequilibrium layer dynamics at fine scales are presented for coupled systems (ions, electrons and neutral winds), thus enabling studies of mesoscale/microscale dynamics for a range of altitudes that encompass the ionospheric E and F layers. We examine the organizing mixing patterns for plasma flows, which occur due to polarized gravity wave excitations in the neutral field, using Lagrangian coherent structures (LCS). LCS objectively depict the flow topology and the extracted scintillation-producing irregularities that indicate a generation of ionospheric density gradients, due to the accumulation of plasma. The scintillation effects in propagation, through strongly inhomogeneous ionospheric media, are induced by trapping electromagnetic (EM) waves in parabolic cavities, which are created by the refractive index gradients along the propagation paths. (paper)

  8. The influence of model spatial resolution on simulated ozone and fine particulate matter for Europe: implications for health impact assessments

    Science.gov (United States)

    Fenech, Sara; Doherty, Ruth M.; Heaviside, Clare; Vardoulakis, Sotiris; Macintyre, Helen L.; O'Connor, Fiona M.

    2018-04-01

    We examine the impact of model horizontal resolution on simulated concentrations of surface ozone (O3) and particulate matter less than 2.5 µm in diameter (PM2.5), and the associated health impacts over Europe, using the HadGEM3-UKCA chemistry-climate model to simulate pollutant concentrations at a coarse (˜ 140 km) and a finer (˜ 50 km) resolution. The attributable fraction (AF) of total mortality due to long-term exposure to warm season daily maximum 8 h running mean (MDA8) O3 and annual-average PM2.5 concentrations is then calculated for each European country using pollutant concentrations simulated at each resolution. Our results highlight a seasonal variation in simulated O3 and PM2.5 differences between the two model resolutions in Europe. Compared to the finer resolution results, simulated European O3 concentrations at the coarse resolution are higher on average in winter and spring (˜ 10 and ˜ 6 %, respectively). In contrast, simulated O3 concentrations at the coarse resolution are lower in summer and autumn (˜ -1 and ˜ -4 %, respectively). These differences may be partly explained by differences in nitrogen dioxide (NO2) concentrations simulated at the two resolutions. Compared to O3, we find the opposite seasonality in simulated PM2.5 differences between the two resolutions. In winter and spring, simulated PM2.5 concentrations are lower at the coarse compared to the finer resolution (˜ -8 and ˜ -6 %, respectively) but higher in summer and autumn (˜ 29 and ˜ 8 %, respectively). Differences in simulated PM2.5 are mostly related to differences in convective rainfall between the two resolutions in all seasons. These differences between the two resolutions exhibit clear spatial patterns for both pollutants that vary by season, and exert a strong influence on country-to-country variations in estimated AF for the two resolutions. Warm season MDA8 O3 levels are higher in most of southern Europe, but lower in areas of northern and eastern Europe, when simulated at the coarse compared to the finer resolution.
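
    The abstract does not give the exact health impact function, so the sketch below only illustrates a commonly used log-linear form for the attributable fraction; the coefficient and counterfactual concentration are placeholders, not values from the paper.

    ```python
    # Illustrative only: attributable fraction (AF) of mortality under a
    # log-linear concentration-response function, AF = 1 - exp(-beta * dX).
    import numpy as np

    def attributable_fraction(conc, counterfactual, beta):
        """conc: population-weighted pollutant metric (e.g. annual-mean PM2.5 in
        ug/m3); counterfactual: low-concentration cutoff; beta: concentration-
        response coefficient, ln(relative risk) per unit concentration."""
        dx = np.maximum(conc - counterfactual, 0.0)
        return 1.0 - np.exp(-beta * dx)

    # Example with placeholder numbers: a relative risk of 1.06 per 10 ug/m3
    # PM2.5 gives beta = ln(1.06)/10 per ug/m3 (illustration only).
    af = attributable_fraction(conc=18.0, counterfactual=0.0, beta=np.log(1.06) / 10.0)
    ```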

  9. Internal variability of fine-scale components of meteorological fields in extended-range limited-area model simulations with atmospheric and surface nudging

    Science.gov (United States)

    Separovic, Leo; Husain, Syed Zahid; Yu, Wei

    2015-09-01

    Internal variability (IV) in dynamical downscaling with limited-area models (LAMs) represents a source of error inherent to the downscaled fields, which originates from the sensitive dependence of the models on arbitrarily small modifications. If IV is large, it may impose the need for probabilistic verification of the downscaled information. Atmospheric spectral nudging (ASN) can reduce IV in LAMs as it constrains the large-scale components of LAM fields in the interior of the computational domain and thus prevents any considerable penetration of sensitively dependent deviations into the range of large scales. Using initial-condition ensembles, the present study quantifies the impact of ASN on IV in LAM simulations in the range of fine scales that are not controlled by spectral nudging. Four simulation configurations that all include strong ASN but differ in the nudging settings are considered. In the fifth configuration, grid nudging of land surface variables toward high-resolution surface analyses is applied. The results show that the IV at scales larger than 300 km can be suppressed by selecting an appropriate ASN setup. At scales between 300 and 30 km, however, in all configurations, the hourly near-surface temperature, humidity, and winds are only partly reproducible. Nudging the land surface variables is found to have the potential to significantly reduce IV, particularly for fine-scale temperature and humidity. On the other hand, hourly precipitation accumulations at these scales are generally irreproducible in all configurations, and a probabilistic approach to downscaling is therefore recommended.
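
    Schematically, spectral nudging relaxes only the low-wavenumber part of a limited-area field toward the driving large-scale field. The sketch below illustrates that idea; it is not the operational implementation used in the study, and the FFT-based filter, cutoff wavenumber, and relaxation time are assumptions.

    ```python
    # Schematic spectral nudging: relax only the low-wavenumber part of a
    # limited-area field toward the driving (large-scale) field each time step.
    import numpy as np

    def spectral_nudge(field, driver, dt, tau, k_cut):
        """field, driver: 2D arrays on the same grid; dt: time step [s];
        tau: relaxation time [s]; k_cut: cutoff wavenumber index."""
        nx, ny = field.shape
        kx = np.fft.fftfreq(nx) * nx
        ky = np.fft.fftfreq(ny) * ny
        kk = np.sqrt(kx[:, None]**2 + ky[None, :]**2)
        mask = kk <= k_cut                       # large scales only
        f_hat = np.fft.fft2(field)
        d_hat = np.fft.fft2(driver)
        f_hat[mask] += (dt / tau) * (d_hat[mask] - f_hat[mask])
        return np.real(np.fft.ifft2(f_hat))
    ```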

  10. An Efficient Upscaling Process Based on a Unified Fine-scale Multi-Physics Model for Flow Simulation in Naturally Fractured Carbonate Karst Reservoirs

    KAUST Repository

    Bi, Linfeng

    2009-01-01

    The main challenges in modeling fluid flow through naturally fractured carbonate karst reservoirs are how to address the various flow physics in complex geological architectures due to the presence of vugs and caves, which are connected via fracture networks at multiple scales. In this paper, we present a unified multi-physics model that adapts to the complex flow regimes of naturally fractured carbonate karst reservoirs. This approach generalizes the Stokes-Brinkman model (Popov et al. 2007). The fracture networks provide the essential connection between the caves in carbonate karst reservoirs, so it is very important to resolve the flow in the fracture network and the interaction between fractures and caves to better understand the complex flow behavior. The idea is to use the Stokes-Brinkman model to represent flow through the rock matrix, void caves, and intermediate very high permeability regions, and to use an idea similar to the discrete fracture network model to represent flow in the fracture network. Consequently, various numerical solution strategies can be applied to greatly improve the computational efficiency of flow simulations. We have applied this unified multi-physics model as a fine-scale flow solver in scale-up computations. Both local and global scale-up are considered. Global scale-up is found to be much more accurate than local scale-up, but it requires the solution of global flow problems on the fine grid, which is generally computationally expensive. The proposed model has the ability to deal with large numbers of fractures and caves, which facilitates the application of the Stokes-Brinkman model in global scale-up computations. The proposed model flexibly adapts to the different flow physics in naturally fractured carbonate karst reservoirs in a simple and effective way, and extends modeling and predicting capability for efficient development of this important type of reservoir.
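
    For reference, the Stokes–Brinkman system that this approach generalizes can be written schematically as below (notation illustrative): in caves and vugs the permeability K is effectively infinite and the equations reduce to Stokes flow, while in the rock matrix the Brinkman viscous term is negligible and Darcy's law is recovered.

    ```latex
    % Schematic Stokes--Brinkman system; u velocity, p pressure, K permeability,
    % mu fluid viscosity, mu-tilde effective (Brinkman) viscosity, f body force.
    \mu K^{-1}\mathbf{u} + \nabla p - \tilde{\mu}\,\nabla^{2}\mathbf{u} = \mathbf{f},
    \qquad \nabla\cdot\mathbf{u} = 0
    ```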

  11. How fine is fine enough when doing CFD terrain simulations

    DEFF Research Database (Denmark)

    Sørensen, Niels N.; Bechmann, Andreas; Réthoré, Pierre-Elouan

    2012-01-01

    The present work addresses the problem of establishing the necessary grid resolution to obtain a given level of numerical accuracy using a CFD model for prediction of flow over terrain. It is illustrated that a very high resolution may be needed if the numerical difference between consecutive...
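
    One standard way to quantify "fine enough" is a grid-convergence study on successively refined grids. The sketch below is illustrative only (not the paper's procedure): it computes the observed order of convergence and a Richardson-extrapolated estimate from results on three grids; the input numbers are made up.

    ```python
    # Observed order of convergence and Richardson extrapolation from results on
    # three grids with a constant refinement ratio r (illustrative only).
    import math

    def grid_convergence(f_coarse, f_medium, f_fine, r):
        """f_*: scalar result (e.g. a speed-up value at a mast location) on the
        coarse, medium and fine grids; r: refinement ratio between grids."""
        p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
        f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)   # Richardson estimate
        return p, f_exact

    p, f_rich = grid_convergence(1.32, 1.28, 1.265, r=2.0)      # made-up numbers
    ```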

  12. Dynamic Simulation of Random Packing of Polydispersive Fine Particles

    Science.gov (United States)

    Ferraz, Carlos Handrey Araujo; Marques, Samuel Apolinário

    2018-02-01

    In this paper, we perform molecular dynamics (MD) simulations to study the two-dimensional packing process of both monosized and randomly sized particles with radii ranging from 1.0 to 7.0 μm. The initial positions as well as the radii of five thousand fine particles were defined inside a rectangular box by using a random number generator. Both the translational and rotational movements of each particle were considered in the simulations. In order to deal with interacting fine particles, we take into account both the contact forces and the long-range dispersive forces. We account for normal and static/sliding tangential friction forces between particles and between particles and the wall by means of a linear model approach, while the long-range dispersive forces are computed by using a Lennard-Jones-like potential. The packing processes were studied assuming different long-range interaction strengths. We carry out statistical calculations of the different quantities studied, such as packing density, mean coordination number, kinetic energy, and radial distribution function, as the system evolves over time. We find that the long-range dispersive forces can strongly influence the packing process dynamics, as they might form large particle clusters, depending on the intensity of the long-range interaction strength.
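
    The sketch below illustrates the class of pairwise force described above: a linear spring-dashpot normal contact when spheres overlap plus a Lennard-Jones-like long-range term when they do not. It is not the authors' code; all parameter values and the exact functional forms are assumptions, and tangential friction is omitted for brevity.

    ```python
    # Illustrative pairwise force for two spheres i and j (SI-like units, values
    # are placeholders): linear normal contact in overlap, LJ-like term otherwise.
    import numpy as np

    def pair_force(x_i, x_j, v_i, v_j, r_i, r_j,
                   k_n=1.0e3, eta=1.0e-6, eps=1.0e-19, sigma=0.2e-6):
        d = x_i - x_j
        dist = np.linalg.norm(d)
        n = d / dist                              # unit normal, j -> i
        overlap = (r_i + r_j) - dist
        if overlap > 0.0:                         # linear spring-dashpot contact
            v_n = np.dot(v_i - v_j, n)
            return (k_n * overlap - eta * v_n) * n
        # Lennard-Jones-like dispersive force between the particle surfaces
        s = dist - (r_i + r_j)                    # surface separation (> 0 here)
        mag = 24.0 * eps * (2.0 * (sigma / s)**12 - (sigma / s)**6) / s
        return mag * n
    ```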

  13. A comparison of three approaches for simulating fine-scale surface winds in support of wildland fire management: Part I. Model formulation and comparison against measurements

    Science.gov (United States)

    Jason M. Forthofer; Bret W. Butler; Natalie S. Wagenbrenner

    2014-01-01

    For this study three types of wind models have been defined for simulating surface wind flow in support of wildland fire management: (1) a uniform wind field (typically acquired from coarse-resolution (~4 km) weather service forecast models); (2) a newly developed mass-conserving model and (3) a newly developed mass- and momentum-conserving model (referred to as the...

  14. Future changes in the climatology of the Great Plains low-level jet derived from fine resolution multi-model simulations.

    Science.gov (United States)

    Tang, Ying; Winkler, Julie; Zhong, Shiyuan; Bian, Xindi; Doubler, Dana; Yu, Lejiang; Walters, Claudia

    2017-07-10

    The southerly Great Plains low-level jet (GPLLJ) is one of the most significant circulation features of the central U.S. linking large-scale atmospheric circulation with the regional climate. GPLLJs transport heat and moisture, contribute to thunderstorm and severe weather formation, provide a corridor for the springtime migration of birds and insects, enhance wind energy availability, and disperse air pollution. We assess future changes in GPLLJ frequency using an eight member ensemble of dynamically-downscaled climate simulations for the mid-21st century. Nocturnal GPLLJ frequency is projected to increase in the southern plains in spring and in the central plains in summer, whereas current climatological patterns persist into the future for daytime and cool season GPLLJs. The relationship between future GPLLJ frequency and the extent and strength of anticyclonic airflow over eastern North America varies with season. Most simulations project a westward shift of anticyclonic airflow in summer, but uncertainty is larger for spring with only half of the simulations suggesting a westward expansion. The choice of regional climate model and the driving lateral boundary conditions have a large influence on the projected future changes in GPLLJ frequency and highlight the importance of multi-model ensembles to estimate the uncertainty surrounding the future GPLLJ climatology.

  15. Censored rainfall modelling for estimation of fine-scale extremes

    Science.gov (United States)

    Cross, David; Onof, Christian; Winter, Hugo; Bernardara, Pietro

    2018-01-01

    Reliable estimation of rainfall extremes is essential for drainage system design, flood mitigation, and risk quantification. However, traditional techniques lack physical realism and extrapolation can be highly uncertain. In this study, we improve the physical basis for short-duration extreme rainfall estimation by simulating the heavy portion of the rainfall record mechanistically using the Bartlett-Lewis rectangular pulse (BLRP) model. Mechanistic rainfall models have had a tendency to underestimate rainfall extremes at fine temporal scales. Despite this, the simple process representation of rectangular pulse models is appealing in the context of extreme rainfall estimation because it emulates the known phenomenology of rainfall generation. A censored approach to Bartlett-Lewis model calibration is proposed and performed for single-site rainfall from two gauges in the UK and Germany. Extreme rainfall estimation is performed for each gauge at the 5, 15, and 60 min resolutions, and considerations for censor selection are discussed.
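
    The sketch below illustrates the general idea of censoring a rainfall record so that calibration statistics target the heavy portion of the series; the specific censoring rule and the statistics used in the paper may well differ.

    ```python
    # Sketch only: summary statistics of a rainfall series after censoring
    # depths below a threshold, for use as calibration targets.
    import numpy as np

    def censored_stats(rain, threshold):
        """rain: 1D array of depths at a fixed timescale; threshold: censor level."""
        censored = np.where(rain >= threshold, rain, 0.0)
        return {
            "mean": censored.mean(),
            "var": censored.var(),
            "wet_fraction": np.mean(censored > 0.0),
            "acf1": np.corrcoef(censored[:-1], censored[1:])[0, 1],
        }
    ```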

  16. Hydrological modelling of fine sediments in the Odzi River, Zimbabwe

    African Journals Online (AJOL)

    Hydrological modelling of fine sediments in the Odzi River, Zimbabwe. ... An analysis of the model structure and a comparison with the rating curve function ... model validation through split sample and proxy basin comparison was performed.

  17. Numerical modelling of hydro-morphological processes dominated by fine suspended sediment in a stormwater pond

    Science.gov (United States)

    Guan, Mingfu; Ahilan, Sangaralingam; Yu, Dapeng; Peng, Yong; Wright, Nigel

    2018-01-01

    Fine sediment plays crucial and multiple roles in the hydrological, ecological and geomorphological functioning of river systems. This study employs a two-dimensional (2D) numerical model to track hydro-morphological processes dominated by fine suspended sediment, including the prediction of sediment concentration in the flow body, and of the erosion and deposition caused by sediment transport. The model is governed by the 2D full shallow water equations, with which an advection-diffusion equation for fine sediment is coupled. Bed erosion and sedimentation are updated by a bed deformation model based on the local sediment entrainment and settling fluxes in the flow body. The model is first validated against three laboratory-scale experimental events in which suspended load plays a dominant role. Satisfactory simulation results confirm the model's capability in capturing hydro-morphodynamic processes dominated by fine suspended sediment at laboratory scale. Applications to sedimentation in a stormwater pond are then conducted to develop a process-based understanding of fine sediment dynamics over a variety of flow conditions. Urban flows with 5-year, 30-year and 100-year return periods and the extreme flood event of 2012 are simulated. The modelled results deliver a step change in understanding fine sediment dynamics in stormwater ponds. The model is capable of quantitatively simulating and qualitatively assessing the performance of a stormwater pond in managing urban water quantity and quality.
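
    Schematically, the governing system described above can be written as 2D shallow water flow coupled to a depth-averaged advection–diffusion equation for suspended sediment, with bed change driven by the local entrainment (E) and deposition (D) fluxes; the notation below is illustrative rather than the paper's exact formulation.

    ```latex
    % Schematic governing equations: depth h, velocity u, free surface eta,
    % bed shear tau_b, depth-averaged concentration C, eddy diffusivity eps,
    % bed level z_b, bed porosity p.
    \frac{\partial h}{\partial t} + \nabla\cdot(h\mathbf{u}) = 0, \qquad
    \frac{\partial (h\mathbf{u})}{\partial t} + \nabla\cdot(h\mathbf{u}\otimes\mathbf{u})
      + gh\,\nabla\eta = -\frac{\boldsymbol{\tau}_b}{\rho}
    \\[4pt]
    \frac{\partial (hC)}{\partial t} + \nabla\cdot(h\mathbf{u}\,C)
      = \nabla\cdot\bigl(h\,\varepsilon\,\nabla C\bigr) + E - D, \qquad
    (1-p)\,\frac{\partial z_b}{\partial t} = D - E
    ```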

  18. Modelling coupled sedimentation and consolidation of fine slurries

    Energy Technology Data Exchange (ETDEWEB)

    Masala, S. [Klohn Crippen Berger, Calgary, AB (Canada)

    2010-07-01

    This article presented a model to simulate and successfully predict the essential elements of sedimentation and consolidation as a coupled process, bringing together separately developed models from chemistry and geology/geotechnical engineering, respectively. The derived model is for a 1-dimensional simultaneous sedimentation and consolidation of a solid-liquid suspension that uses permeability as the unifying concept for the hydrodynamic interaction between solid and liquid in a suspension. The numerical solution relies on an explicit finite difference procedure in material coordinates, and an Euler forward-marching scheme was used for advancing the solution in time. The problem of internal discontinuities was solved by way of convenient numerical solutions and Lagrangian coordinates. Java-based SECO software with a user-friendly graphical user interface (GUI) was used to implement the model, allowing the solution process to be visualized and animated. The software functionality along with GUI and programming issues were discussed at length. A fine-grained suspension data set was used to validate the model. 10 refs., 12 figs.

  19. Strained spiral vortex model for turbulent fine structure

    Science.gov (United States)

    Lundgren, T. S.

    1982-01-01

    A model for the intermittent fine structure of high Reynolds number turbulence is proposed. The model consists of slender axially strained spiral vortex solutions of the Navier-Stokes equation. The tightening of the spiral turns by the differential rotation of the induced swirling velocity produces a cascade of velocity fluctuations to smaller scale. The Kolmogorov energy spectrum is a result of this model.
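
    The inertial-range spectrum recovered by the strained spiral vortex solution is the familiar Kolmogorov form:

    ```latex
    E(k) \;\propto\; \varepsilon^{2/3}\, k^{-5/3}
    ```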

  20. Transforming an educational virtual reality simulation into a work of fine art.

    Science.gov (United States)

    Panaiotis; Addison, Laura; Vergara, Víctor M; Hakamata, Takeshi; Alverson, Dale C; Saiki, Stanley M; Caudell, Thomas Preston

    2008-01-01

    This paper outlines user interface and interaction issues, technical considerations, and problems encountered in transforming an educational VR simulation of a reified kidney nephron into an interactive artwork appropriate for a fine arts museum.

  1. 3D visualization of ultra-fine ICON climate simulation data

    Science.gov (United States)

    Röber, Niklas; Spickermann, Dela; Böttinger, Michael

    2016-04-01

    Advances in high performance computing and model development allow the simulation of finer and more detailed climate experiments. The new ICON model is based on an unstructured triangular grid and can be used for a wide range of applications, ranging from global coupled climate simulations down to very detailed and high resolution regional experiments. It consists of an atmospheric and an oceanic component and scales very well to high numbers of cores. This allows us to conduct very detailed climate experiments with ultra-fine resolutions. ICON is jointly developed in partnership with DKRZ by the Max Planck Institute for Meteorology and the German Weather Service. This presentation discusses our current workflow for analyzing and visualizing this high resolution data. The ICON model has been used for eddy-resolving simulations, and we have developed specific plugins for the freely available visualization software ParaView and Vapor, which allow us to read and handle this much data. Within ParaView, we can additionally compare prognostic variables with performance data side by side to investigate the performance and scalability of the model. With the simulation running in parallel on several hundred nodes, an equal load balance is imperative. In our presentation we show visualizations of high-resolution ICON oceanographic and HDCP2 atmospheric simulations that were created using ParaView and Vapor. Furthermore, we discuss our current efforts to improve our visualization capabilities, thereby exploring the potential of regular in-situ visualization, as well as of in-situ compression / post visualization.

  2. Simulating Fine-Scale Marine Pollution Plumes for Autonomous Robotic Environmental Monitoring

    Directory of Open Access Journals (Sweden)

    Muhammad Fahad

    2018-05-01

    Marine plumes exhibit characteristics such as intermittency, sinuous structure, shape and flow field coherency, and a time-varying concentration profile. Due to the lack of experimental quantification of these characteristics for marine plumes, existing work often assumes that marine plumes behave similarly to aerial plumes, and they are commonly modeled by filament-based Lagrangian models. Our previous field experiments with Rhodamine dye plumes at Makai Research Pier on Oahu, Hawaii, revealed that marine plumes show characteristics qualitatively similar to aerial plumes, but quantitatively the two are disparate. Based on the field data collected, this paper presents a calibrated Eulerian plume model that exhibits the qualitative and quantitative characteristics of experimentally generated marine plumes. We propose a modified model with an intermittent source and implement it in a Robot Operating System (ROS) based simulator. Concentration time series of stationary sampling points and dynamic sampling points across cross-sections and plume fronts are collected and analyzed for statistical parameters of the simulated plume. These parameters are then compared with statistical parameters from experimentally generated plumes. The comparison validates that the simulated plumes exhibit fine-scale qualitative and quantitative characteristics similar to experimental plumes. The ROS plume simulator facilitates future evaluations of environmental monitoring strategies by marine robots, and is made available for community use.

  3. A Modelling Approach on Fine Particle Spatial Distribution for Street Canyons in Asian Residential Community

    Science.gov (United States)

    Ling, Hong; Lung, Shih-Chun Candice; Uhrner, Ulrich

    2016-04-01

    Rapidly increasing urban pollution poses severe health risks. Fine particle pollution in particular is considered to be closely related to respiratory and cardiovascular disease. In this work, ambient fine particles are studied in the street canyons of a typical Asian residential community using a computational fluid dynamics (CFD) dispersion modelling approach. The community is characterised by an artery road with a busy traffic flow of about 4000 light vehicles (mainly cars and motorcycles) per hour at rush hours, three streets with hundreds of light vehicles per hour at rush hours, and several small lanes with less traffic. The objective is to study the spatial distribution of ambient fine particle concentrations within micro-environments, in order to assess the fine particle exposure of the people living in the community. The GRAL modelling system is used to simulate and assess the emission and dispersion of traffic-related fine particles within the community. Traffic emission factors and the traffic situation are assigned using both field observations and local emissions inventory data. High resolution digital elevation model (DEM) data and building height data are used to resolve the topographical features. Air quality monitoring and mobile monitoring within the community are used to validate the simulation results. By using this modelling approach, the dispersion of fine particles in street canyons is simulated; the impact of wind conditions and street orientation is investigated; the contributions of car and motorcycle emissions are quantified; and the residents' exposure to fine particles is assessed. The study is funded by "Taiwan Megacity Environmental Research (II) - chemistry and environmental impacts of boundary layer aerosols (Year 2-3) (103-2111-M-001-001-); Spatial variability and organic markers of aerosols (Year 3) (104-2111-M-001-005-)".

  4. Statistical modelling of fine red wine production

    Directory of Open Access Journals (Sweden)

    María Rosa Castro

    2010-01-01

    Producing wine is a very important economic activity in the province of San Juan in Argentina; it is therefore most important to predict production in terms of the quantity of raw material needed. This work was aimed at obtaining a model relating kilograms of crushed grape to the litres of wine so produced. Such a model will be used for predicting future values and confidence intervals for given quantities of crushed grape. Data from a vineyard in the province of San Juan was thus used in this work. The sample correlation coefficient was calculated and a dispersion diagram was then constructed; this indicated a linear relationship between the litres of wine obtained and the kilograms of crushed grape. Two linear models were then adopted and analysis of variance was carried out because the data came from normal populations having the same variance. The most appropriate model was obtained from this analysis; it was validated with experimental values, a good fit being obtained.
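
    A minimal sketch of the kind of model described above is an ordinary least-squares fit of wine volume against crushed grape mass with a prediction interval for a new batch; the data values below are placeholders, not the vineyard's records (requires NumPy and SciPy).

    ```python
    # Ordinary least-squares fit with a prediction interval for a new value.
    import numpy as np
    from scipy import stats

    x = np.array([1000., 2500., 4000., 5500., 7000., 8500.])   # kg crushed grape
    y = np.array([650., 1640., 2610., 3580., 4560., 5530.])    # litres of wine

    n = len(x)
    b1, b0 = np.polyfit(x, y, 1)                 # slope, intercept
    resid = y - (b0 + b1 * x)
    s2 = np.sum(resid**2) / (n - 2)              # residual variance

    def prediction_interval(x_new, alpha=0.05):
        t = stats.t.ppf(1.0 - alpha / 2.0, n - 2)
        se = np.sqrt(s2 * (1.0 + 1.0 / n
                           + (x_new - x.mean())**2 / np.sum((x - x.mean())**2)))
        y_hat = b0 + b1 * x_new
        return y_hat - t * se, y_hat + t * se

    low, high = prediction_interval(6000.0)      # 95% interval for 6000 kg
    ```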

  5. Fine-root mortality rates in a temperate forest: Estimates using radiocarbon data and numerical modeling

    Energy Technology Data Exchange (ETDEWEB)

    Riley, W.J.; Gaudinski, J.B.; Torn, M.S.; Joslin, J.D.; Hanson, P.J.

    2009-09-01

    We used an inadvertent whole-ecosystem ¹⁴C label at a temperate forest in Oak Ridge, Tennessee, USA to develop a model (Radix 1.0) of fine-root dynamics. Radix simulates two live-root pools, two dead-root pools, non-normally distributed root mortality turnover times, a stored carbon (C) pool, and seasonal growth and respiration patterns. We applied Radix to analyze measurements from two root size classes (< 0.5 and 0.5-2.0 mm diameter) and three soil-depth increments (O horizon, 0-15 cm and 30-60 cm). Predicted live-root turnover times were < 1 yr and 10 yr for short- and long-lived pools, respectively. Dead-root pools had decomposition turnover times of 2 yr and 10 yr. Realistic characterization of C flows through fine roots requires a model with two live fine-root populations, two dead fine-root pools, and root respiration. These are the first fine-root turnover time estimates that take into account respiration, storage, seasonal growth patterns, and non-normal turnover time distributions. The presence of a root population with decadal turnover times implies a lower amount of belowground net primary production used to grow fine-root tissue than is currently predicted by models with a single annual turnover pool.
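
    The sketch below shows only the first-order turnover skeleton of a two-live-pool, two-dead-pool structure; the actual Radix model additionally represents stored carbon, respiration, seasonal growth, and non-normally distributed turnover times. Pool sizes, the allocation fraction, and the turnover times are placeholders.

    ```python
    # Schematic two-live-pool / two-dead-pool fine-root turnover (forward Euler).
    def step(pools, npp, dt=0.1, f_short=0.7,
             tau_ls=0.8, tau_ll=10.0, tau_ds=2.0, tau_dl=10.0):
        """pools = [live_short, live_long, dead_short, dead_long] in gC m-2;
        npp = fine-root production in gC m-2 yr-1; turnover times in years."""
        Ls, Ll, Ds, Dl = pools
        dLs = f_short * npp - Ls / tau_ls
        dLl = (1.0 - f_short) * npp - Ll / tau_ll
        dDs = Ls / tau_ls - Ds / tau_ds      # short-lived roots die into the fast dead pool
        dDl = Ll / tau_ll - Dl / tau_dl      # long-lived roots die into the slow dead pool
        return [Ls + dt * dLs, Ll + dt * dLl, Ds + dt * dDs, Dl + dt * dDl]

    pools = [50.0, 150.0, 30.0, 100.0]
    for _ in range(200):                      # 20 years at dt = 0.1 yr
        pools = step(pools, npp=120.0)
    ```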

  6. Monte Carlo Simulations of Electron Energy-Loss Spectra with the Addition of Fine Structure from Density Functional Theory Calculations.

    Science.gov (United States)

    Attarian Shandiz, Mohammad; Guinel, Maxime J-F; Ahmadi, Majid; Gauvin, Raynald

    2016-02-01

    A new approach is presented to introduce the fine structure of core-loss excitations into the electron energy-loss spectra of ionization edges by Monte Carlo simulations based on an optical oscillator model. The optical oscillator strength is refined using the calculated electron energy-loss near-edge structure by density functional theory calculations. This approach can predict the effects of multiple scattering and thickness on the fine structure of ionization edges. In addition, effects of the fitting range for background removal and the integration range under the ionization edge on signal-to-noise ratio are investigated.

  7. Rosé Wine Fining Using Polyvinylpolypyrrolidone: Colorimetry, Targeted Polyphenomics, and Molecular Dynamics Simulations.

    Science.gov (United States)

    Gil, Mélodie; Avila-Salas, Fabian; Santos, Leonardo S; Iturmendi, Nerea; Moine, Virginie; Cheynier, Véronique; Saucier, Cédric

    2017-12-06

    Polyvinylpolypyrrolidone (PVPP) is a fining agent polymer used in winemaking to adjust rosé wine color and to prevent organoleptic degradations by reducing polyphenol content. The impact of this polymer on color parameters and polyphenols of rosé wines was investigated, and the binding specificity of polyphenols toward PVPP was determined. Color measured by colorimetry decreased after treatment, thus confirming the adsorption of anthocyanins and other pigments. Phenolic composition was determined before and after fining by targeted polyphenomics (Ultra Performance Liquid Chromatography (UPLC)-Electrospray Ionization (ESI)-Mass Spectrometry (MS/MS)). MS analysis showed adsorption differences among polyphenol families. Flavonols (42%) and flavanols (64%) were the most affected. Anthocyanins were not strongly adsorbed on average (12%), but a specific adsorption of coumaroylated anthocyanins was observed (37%). Intermolecular interactions were also studied using molecular dynamics simulations. Relative adsorptions of flavanols were correlated with the calculated interaction energies. The specific affinity of coumaroylated anthocyanins toward PVPP was also well explained by the molecular modeling.

  8. Micromagnetic simulation of thermally activated switching in fine particles

    International Nuclear Information System (INIS)

    Scholz, Werner; Schrefl, Thomas; Fidler, J.

    2001-01-01

    Effects of thermal activation are included in micromagnetic simulations by adding a random thermal field to the effective magnetic field. As a result, the Landau-Lifshitz equation is converted into a stochastic differential equation of Langevin type with multiplicative noise. The Stratonovich interpretation of the stochastic Landau-Lifshitz equation leads to the correct thermal equilibrium properties. The proper generalization of Taylor expansions to stochastic calculus gives suitable time integration schemes. For a single rigid magnetic moment the thermal equilibrium properties are investigated. It is found that the Heun scheme is a good compromise between numerical stability and computational complexity. Small cubic and spherical ferromagnetic particles are studied.
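
    The sketch below illustrates a Heun (predictor-corrector) step for a single macrospin with a random thermal field, in the spirit of the stochastic Landau-Lifshitz approach described above. It is not the authors' code: the noise strength D must in practice be set from the fluctuation-dissipation theorem, and here it is simply a free parameter; field values and units are placeholders.

    ```python
    # Heun integration of a damped Landau-Lifshitz macrospin with thermal noise
    # (Stratonovich interpretation: the same noise realisation is used in the
    # predictor and corrector evaluations).
    import numpy as np

    def ll_rhs(m, h, gamma=1.76e11, alpha=0.1):
        """Damped precession: dm/dt for unit moment m in field h."""
        pre = -gamma / (1.0 + alpha**2)
        return pre * (np.cross(m, h) + alpha * np.cross(m, np.cross(m, h)))

    def heun_step(m, h_eff, dt, D, rng, gamma=1.76e11, alpha=0.1):
        h_th = rng.normal(0.0, np.sqrt(2.0 * D / dt), size=3)  # thermal field
        h = h_eff + h_th
        f0 = ll_rhs(m, h, gamma, alpha)
        m_pred = m + dt * f0                     # Euler predictor
        m_pred /= np.linalg.norm(m_pred)
        f1 = ll_rhs(m_pred, h, gamma, alpha)     # corrector slope, same noise
        m_new = m + 0.5 * dt * (f0 + f1)
        return m_new / np.linalg.norm(m_new)

    rng = np.random.default_rng(0)
    m = np.array([0.0, 0.0, 1.0])
    for _ in range(1000):
        m = heun_step(m, h_eff=np.array([0.0, 0.0, 8.0e4]), dt=1e-13, D=1.0e3, rng=rng)
    ```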

  9. Fine modeling of energy exchanges between buildings and urban atmosphere

    International Nuclear Information System (INIS)

    Daviau-Pellegrin, Noelie

    2016-01-01

    This thesis work concerns the effect of buildings on the urban atmosphere and, more precisely, the energy exchanges that take place between these two systems. In order to model more finely the thermal effects of buildings on atmospheric flows in simulations run under the CFD software Code-Saturne, we couple this tool with the building model BuildSysPro. This library runs under Dymola and can generate matrices describing the building thermal properties that can be used outside this software. To carry out the coupling, we use these matrices in a code that allows the building thermal calculations and the CFD to exchange their results. After a review of the physical phenomena and the existing models, we explain the interactions between the atmosphere and urban elements, especially buildings. The latter affect the air flows dynamically, as they act as obstacles, and thermally, through their surface temperatures. First, we analyse the data obtained from the measurement campaign EM2PAU, which we use to validate the coupled model. EM2PAU was carried out in Nantes in 2011 and represents a canyon street with two rows of four containers. Its distinctive feature lies in the simultaneous measurement of air and wall temperatures as well as wind speeds, with anemometers located on a 10 m high mast for the reference wind and at six locations in the canyon, with the aim of studying the thermal influence of buildings on the air flows. The numerical simulations of the air flows in EM2PAU are then carried out with different methods that allow us to calculate or impose the surface temperature used for each of the container walls. The first method consists in imposing their temperatures from the measurements: for each wall, we set the temperature to the surface temperature measured during the EM2PAU campaign. The second method involves imposing the outdoor air temperature measured at a given time on all the walls.

  10. Evaluation of global fine-resolution precipitation products and their uncertainty quantification in ensemble discharge simulations

    Science.gov (United States)

    Qi, W.; Zhang, C.; Fu, G.; Sweetapple, C.; Zhou, H.

    2016-02-01

    The applicability of six fine-resolution precipitation products, including precipitation radar, infrared, microwave and gauge-based products using different precipitation computation recipes, is evaluated using statistical and hydrological methods in northeastern China. In addition, a framework quantifying the uncertainty contributions of precipitation products, hydrological models, and their interactions to uncertainties in ensemble discharges is proposed. The investigated precipitation products are the Tropical Rainfall Measuring Mission (TRMM) products (TRMM3B42 and TRMM3B42RT), Global Land Data Assimilation System (GLDAS)/Noah, Asian Precipitation - Highly-Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE), Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN), and a Global Satellite Mapping of Precipitation (GSMAP-MVK+) product. Two hydrological models of different complexities, i.e. a water and energy budget-based distributed hydrological model and a physically based semi-distributed hydrological model, are employed to investigate the influence of hydrological models on simulated discharges. Results show that APHRODITE has high accuracy at a monthly scale compared with the other products, and that GSMAP-MVK+ shows a clear advantage over TRMM3B42 in relative bias (RB), Nash-Sutcliffe coefficient of efficiency (NSE), root mean square error (RMSE), correlation coefficient (CC), false alarm ratio, and critical success index. These findings could be very useful for the validation, refinement, and future development of satellite-based products (e.g. NASA Global Precipitation Measurement). Although large uncertainty exists in heavy precipitation, hydrological models contribute most of the uncertainty in extreme discharges. Interactions between precipitation products and hydrological models can contribute to discharge uncertainty with a magnitude similar to that of the hydrological models.
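
    The verification metrics named above (relative bias, NSE, RMSE, and the correlation coefficient) have standard definitions; a compact sketch for a pair of simulated and observed discharge series is given below.

    ```python
    # Standard discharge verification metrics for simulated vs. observed series.
    import numpy as np

    def metrics(sim, obs):
        sim, obs = np.asarray(sim, float), np.asarray(obs, float)
        rb = (sim.sum() - obs.sum()) / obs.sum()                       # relative bias
        nse = 1.0 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)
        rmse = np.sqrt(np.mean((sim - obs)**2))
        cc = np.corrcoef(sim, obs)[0, 1]                               # correlation
        return {"RB": rb, "NSE": nse, "RMSE": rmse, "CC": cc}
    ```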

  11. Simple Model with Time-Varying Fine-Structure ``Constant''

    Science.gov (United States)

    Berman, M. S.

    2009-10-01

    Extending the original version written in collaboration with L.A. Trevisan, we study the generalisation of Dirac's LNH, so that time-variation of the fine-structure constant, due to varying electrical and magnetic permittivities, is included along with variations of other quantities (the cosmological and gravitational ``constants''), etc. We consider the present Universe, and also an inflationary scenario. Rotation of the Universe is a possibility in this model.

  12. Simulation modeling and arena

    CERN Document Server

    Rossetti, Manuel D

    2015-01-01

    Emphasizes a hands-on approach to learning statistical analysis and model building through the use of comprehensive examples, problem sets, and software applications. With a unique blend of theory and applications, Simulation Modeling and Arena®, Second Edition integrates coverage of statistical analysis and model building to emphasize the importance of both topics in simulation. Featuring introductory coverage of how simulation works and why it matters, the Second Edition expands coverage of static simulation and the applications of spreadsheets to perform simulation. The new edition also...

  13. Simulations of fine structures on the zero field steps of Josephson tunnel junctions

    DEFF Research Database (Denmark)

    Scheuermann, M.; Chi, C. C.; Pedersen, Niels Falsig

    1986-01-01

    Fine structures on the zero field steps of long Josephson tunnel junctions are simulated for junctions with the bias current injected at the junction edges. These structures are due to the coupling between self-generated plasma oscillations and the traveling fluxon. The plasma oscillations are generated by the interaction of the bias current with the fluxon at the junction edges. On the first zero field step, the voltages of successive fine structures are given by V_n = (ℏ/2e)(2ω_p/n), where n is an even integer.

  14. Aviation Safety Simulation Model

    Science.gov (United States)

    Houser, Scott; Yackovetsky, Robert (Technical Monitor)

    2001-01-01

    The Aviation Safety Simulation Model is a software tool that enables users to configure a terrain, a flight path, and an aircraft and simulate the aircraft's flight along the path. The simulation monitors the aircraft's proximity to terrain obstructions, and reports when the aircraft violates accepted minimum distances from an obstruction. This model design facilitates future enhancements to address other flight safety issues, particularly air and runway traffic scenarios. This report shows the user how to build a simulation scenario and run it. It also explains the model's output.

  15. An innovative computer design for modeling forest landscape change in very large spatial extents with fine resolutions

    Science.gov (United States)

    Jian Yang; Hong S. He; Stephen R. Shifley; Frank R. Thompson; Yangjian. Zhang

    2011-01-01

    Although forest landscape models (FLMs) have benefited greatly from ongoing advances of computer technology and software engineering, computing capacity remains a bottleneck in the design and development of FLMs. Computer memory overhead and run time efficiency are primary limiting factors when applying forest landscape models to simulate large landscapes with fine resolutions.

  16. Source Term Model for Fine Particle Resuspension from Indoor Surfaces

    National Research Council Canada - National Science Library

    Kim, Yoojeong; Gidwani, Ashok; Sippola, Mark; Sohn, Chang W

    2008-01-01

    This Phase I effort developed a source term model for particle resuspension from indoor surfaces to be used as a source term boundary condition for CFD simulation of particle transport and dispersion in a building...

  17. Simulation in Complex Modelling

    DEFF Research Database (Denmark)

    Nicholas, Paul; Ramsgaard Thomsen, Mette; Tamke, Martin

    2017-01-01

    This paper will discuss the role of simulation in extended architectural design modelling. As a framing paper, the aim is to present and discuss the role of integrated design simulation and feedback between design and simulation in a series of projects under the Complex Modelling framework. Complex Modelling … performance, engage with high degrees of interdependency and allow the emergence of design agency and feedback between the multiple scales of architectural construction. This paper presents examples of integrated design simulation from a series of projects including Lace Wall, A Bridge Too Far and Inflated Restraint, developed for the research exhibition Complex Modelling, Meldahls Smedie Gallery, Copenhagen, in 2016. Where the direct project aims and outcomes have been reported elsewhere, the aim of this paper is to discuss overarching strategies for working with design-integrated simulation.

  18. Scientific Modeling and simulations

    CERN Document Server

    Diaz de la Rubia, Tomás

    2009-01-01

    Showcases the conceptual advantages of modeling which, coupled with unprecedented computing power, allow scientists to use simulations to tackle the formidable problems of our society, such as the search for hydrocarbons, understanding the structure of a virus, or the intersection between simulations and real data in extreme environments

  19. Numerical modeling of fine particle fractal aggregates in turbulent flow

    Directory of Open Access Journals (Sweden)

    Cao Feifeng

    2015-01-01

    A method for prediction of fine particle transport in a turbulent flow is proposed, the interaction between particles and fluid is studied numerically, and fractal aggregates of fine particles are analyzed using the Taylor-expansion moment method. The paper provides a better understanding of fine particle dynamics in the evolved flows.

  20. Quantum Big Bang without fine-tuning in a toy-model

    International Nuclear Information System (INIS)

    Znojil, Miloslav

    2012-01-01

    The question of possible physics before Big Bang (or after Big Crunch) is addressed via a schematic non-covariant simulation of the loss of observability of the Universe. Our model is drastically simplified by the reduction of its degrees of freedom to the mere finite number. The Hilbert space of states is then allowed time-dependent and singular at the critical time t = t_c. This option circumvents several traditional theoretical difficulties in a way illustrated via solvable examples. In particular, the unitary evolution of our toy-model quantum Universe is shown interruptible, without any fine-tuning, at the instant of its bang or collapse t = t_c.

  1. Quantum Big Bang without fine-tuning in a toy-model

    Science.gov (United States)

    Znojil, Miloslav

    2012-02-01

    The question of possible physics before Big Bang (or after Big Crunch) is addressed via a schematic non-covariant simulation of the loss of observability of the Universe. Our model is drastically simplified by the reduction of its degrees of freedom to the mere finite number. The Hilbert space of states is then allowed time-dependent and singular at the critical time t = t_c. This option circumvents several traditional theoretical difficulties in a way illustrated via solvable examples. In particular, the unitary evolution of our toy-model quantum Universe is shown interruptible, without any fine-tuning, at the instant of its bang or collapse t = t_c.

  2. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of the scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model’s relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model’s general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  3. Automated Simulation Model Generation

    NARCIS (Netherlands)

    Huang, Y.

    2013-01-01

    One of today's challenges in the field of modeling and simulation is to model increasingly larger and more complex systems. Complex models take a long time to develop and incur high costs. With the advances in data collection technologies and the more widespread use of computer-aided systems, more data has become available.

  4. AEGIS geologic simulation model

    International Nuclear Information System (INIS)

    Foley, M.G.

    1982-01-01

    The Geologic Simulation Model (GSM) is used by the AEGIS (Assessment of Effectiveness of Geologic Isolation Systems) program at the Pacific Northwest Laboratory to simulate the dynamic geology and hydrology of a geologic nuclear waste repository site over a million-year period following repository closure. The GSM helps to organize geologic/hydrologic data; to focus attention on active natural processes by requiring their simulation; and, through interactive simulation and calibration, to reduce subjective evaluations of the geologic system. During each computer run, the GSM produces a million-year geologic history that is possible for the region and the repository site. In addition, the GSM records in permanent history files everything that occurred during that time span. Statistical analyses of data in the history files of several hundred simulations are used to classify typical evolutionary paths, to establish the probabilities associated with deviations from the typical paths, and to determine which types of perturbations of the geologic/hydrologic system, if any, are most likely to occur. These simulations will be evaluated by geologists familiar with the repository region to determine validity of the results. Perturbed systems that are determined to be the most realistic, within whatever probability limits are established, will be used for the analyses that involve radionuclide transport and dose models. The GSM is designed to be continuously refined and updated. Simulation models are site specific, and, although the submodels may have limited general applicability, the input data requirements necessitate detailed characterization of each site before application

  5. Validation of simulation models

    DEFF Research Database (Denmark)

    Rehman, Muniza; Pedersen, Stig Andur

    2012-01-01

    In philosophy of science, the interest for computational models and simulations has increased heavily during the past decades. Different positions regarding the validity of models have emerged but the views have not succeeded in capturing the diversity of validation methods. The wide variety...

  6. Consolidation and atmospheric drying of fine oil sand tailings : Comparison of blind simulations and field scale results

    NARCIS (Netherlands)

    Vardon, P.J.; Yao, Y.; van Paassen, L.A.; van Tol, A.F.; Sego, D.C.; Wilson, G.W.; Beier, N.A.

    2016-01-01

    This paper presents a comparison between blind predictions of field tests of atmospheric drying of mature fine tailings (MFT) presented in IOSTC 2014 and field results. The numerical simulation of the consolidation and atmospheric drying of self-weight consolidating fine material is challenging.

  7. Network model for fine coal dewatering. Part I. The model

    Energy Technology Data Exchange (ETDEWEB)

    Qamar, I.; Tierney, J.W.; Chiang, S.H.

    1985-08-01

    There is a body of well-established research in filtration and related subjects, but much of it has been empirical, based on correlations from experimental data. This approach has the disadvantage that it lacks generality, and it is difficult to predict the behavior of new or different systems. A more general method for studying dewatering is needed, one which will include the microscopic characteristics of the filter cake, which, like other porous media, contains a complicated network of interconnected pores through which the fluid must flow. These pores play an important role in dewatering because they give rise to capillary forces when one fluid is displacing another. In this report, we describe a network model which we believe satisfies these requirements. In the main body of this report, the model is described in detail. Background information is given where appropriate, and a brief description is given of the experimental work being done in our laboratories to verify the model. A detailed description of the experimental procedures and results is given in other DOE reports. The computer programs which are needed to solve the model are described in detail in the Appendices and are accompanied by flow charts, sample problems, and sample outputs. Sufficient detail is given in order to use the model programs on other computer systems. 32 refs., 7 figs., 5 tabs.

  8. [Evaluation of Cellular Effects Caused by Lunar Regolith Simulant Including Fine Particles].

    Science.gov (United States)

    Horie, Masanori; Miki, Takeo; Honma, Yoshiyuki; Aoki, Shigeru; Morimoto, Yasuo

    2015-06-01

    The National Aeronautics and Space Administration has announced a plan to establish a manned colony on the surface of the moon, and our country, Japan, has declared its participation. The surface of the moon is covered with soil called lunar regolith, which includes fine particles. It is possible that humans will inhale lunar regolith if it is brought into the spaceship. Therefore, an evaluation of the pulmonary effects caused by lunar regolith is important for exploration of the moon. In the present study, we examine the cellular effects of lunar regolith simulant, whose components are similar to those of lunar regolith. We focused on the chemical component and particle size in particular. The regolith simulant was fractionated by particle size, and we evaluated cellular effects of the simulant such as cell membrane damage, induction of oxidative stress, and proinflammatory responses.

  9. Cobra-IE Evaluation by Simulation of the NUPEC BWR Full-Size Fine-Mesh Bundle Test (BFBT)

    International Nuclear Information System (INIS)

    Burns, C. J.; Aumiler, D.L.

    2006-01-01

    The COBRA-IE computer code is a thermal-hydraulic subchannel analysis program capable of simulating phenomena present in both PWRs and BWRs. As part of ongoing COBRA-IE assessment efforts, the code has been evaluated against experimental data from the NUPEC BWR Full-Size Fine-Mesh Bundle Tests (BFBT). The BFBT experiments utilized an 8 x 8 rod bundle to simulate BWR operating conditions and power profiles, providing an excellent database for investigation of the capabilities of the code. Benchmarks performed included steady-state and transient void distribution, single-phase and two-phase pressure drop, and steady-state and transient critical power measurements. COBRA-IE effectively captured the trends seen in the experimental data with acceptable prediction error. Future sensitivity studies are planned to investigate the effects of enabling and/or modifying optional code models dealing with void drift, turbulent mixing, rewetting, and CHF

  10. Models and simulations

    International Nuclear Information System (INIS)

    Lee, M.J.; Sheppard, J.C.; Sullenberger, M.; Woodley, M.D.

    1983-09-01

    On-line mathematical models have been used successfully for computer controlled operation of SPEAR and PEP. The same model control concept is being implemented for the operation of the LINAC and for the Damping Ring, which will be part of the Stanford Linear Collider (SLC). The purpose of this paper is to describe the general relationships between models, simulations and the control system for any machine at SLAC. The work we have done on the development of the empirical model for the Damping Ring will be presented as an example

  11. PSH Transient Simulation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Muljadi, Eduard [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-12-21

    PSH Transient Simulation Modeling presentation from the WPTO FY14 - FY16 Peer Review. Transient effects are an important consideration when designing a PSH system, yet numerical techniques for hydraulic transient analysis still need improvements for adjustable-speed (AS) reversible pump-turbine applications.

  12. Wake modeling and simulation

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Madsen Aagaard, Helge; Larsen, Torben J.

    We present a consistent, physically based theory for the wake meandering phenomenon, which we consider of crucial importance for the overall description of wind turbine loadings in wind farms. In its present version the model is confined to single wake situations. The model philosophy does, howev...... methodology has been implemented in the aeroelastic code HAWC2, and example simulations of wake situations, from the small Tjæreborg wind farm, have been performed showing satisfactory agreement between predictions and measurements...

  13. Findings and Challenges in Fine-Resolution Large-Scale Hydrological Modeling

    Science.gov (United States)

    Her, Y. G.

    2017-12-01

    Fine-resolution large-scale (FL) modeling can provide the overall picture of the hydrological cycle and transport while taking into account unique local conditions in the simulation. It can also help develop water resources management plans consistent across spatial scales by describing the spatial consequences of decisions and hydrological events extensively. FL modeling is expected to be common in the near future as global-scale remotely sensed data are emerging, and computing resources have advanced rapidly. There are several spatially distributed models available for hydrological analyses. Some of them rely on numerical methods such as finite difference/element methods (FDM/FEM), which require excessive computing resources (implicit scheme) to manipulate large matrices or small simulation time intervals (explicit scheme) to maintain the stability of the solution, to describe two-dimensional overland processes. Others make unrealistic assumptions such as constant overland flow velocity to reduce the computational loads of the simulation. Thus, simulation efficiency often comes at the expense of precision and reliability in FL modeling. Here, we introduce a new FL continuous hydrological model and its application to four watersheds in different landscapes and sizes from 3.5 km2 to 2,800 km2 at the spatial resolution of 30 m on an hourly basis. The model provided acceptable accuracy statistics in reproducing hydrological observations made in the watersheds. The modeling outputs, including the maps of simulated travel time, runoff depth, soil water content, and groundwater recharge, were animated, visualizing the dynamics of hydrological processes occurring in the watersheds during and between storm events. Findings and challenges were discussed in the context of modeling efficiency, accuracy, and reproducibility, which we found can be improved by employing advanced computing techniques and hydrological understanding, by using remotely sensed hydrological
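
    The stability trade-off for explicit schemes mentioned above can be made concrete with a Courant-type check; the short sketch below (illustrative numbers only, not the new model described in this record) shows why a 30 m grid combined with fast overland or channel flow forces very small time steps in explicit finite-difference routing.

        # Courant-Friedrichs-Lewy (CFL) constraint for explicit overland-flow routing:
        # dt <= C * dx / v, with Courant number C <= 1.
        def max_stable_dt(dx_m, velocity_ms, courant=0.9):
            return courant * dx_m / velocity_ms

        for dx in (30.0, 100.0, 1000.0):      # grid resolutions, m
            for v in (0.5, 2.0):              # flow velocities, m/s (illustrative)
                print(f"dx={dx:6.0f} m, v={v:3.1f} m/s -> dt <= {max_stable_dt(dx, v):7.1f} s")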

  14. Sunspot Modeling: From Simplified Models to Radiative MHD Simulations

    Directory of Open Access Journals (Sweden)

    Rolf Schlichenmaier

    2011-09-01

    We review our current understanding of sunspots from the scales of their fine structure to their large-scale (global) structure, including the processes of their formation and decay. Recently, sunspot models have undergone a dramatic change. In the past, several aspects of sunspot structure have been addressed by static MHD models with parametrized energy transport. Models of sunspot fine structure have been relying heavily on strong assumptions about flow and field geometry (e.g., flux-tubes, "gaps", convective rolls), which were motivated in part by the observed filamentary structure of penumbrae or the necessity of explaining the substantial energy transport required to maintain the penumbral brightness. However, none of these models could self-consistently explain all aspects of penumbral structure (energy transport, filamentation, Evershed flow). In recent years, 3D radiative MHD simulations have been advanced dramatically to the point at which models of complete sunspots with sufficient resolution to capture sunspot fine structure are feasible. Here overturning convection is the central element responsible for energy transport, filamentation leading to fine structure, and the driving of strong outflows. On the larger scale these models are also in the process of addressing the subsurface structure of sunspots as well as sunspot formation. With this shift in modeling capabilities and the recent advances in high resolution observations, the future research will be guided by comparing observation and theory.

  15. Precision tests and fine tuning in twin Higgs models

    Science.gov (United States)

    Contino, Roberto; Greco, Davide; Mahbubani, Rakhi; Rattazzi, Riccardo; Torre, Riccardo

    2017-11-01

    We analyze the parametric structure of twin Higgs (TH) theories and assess the gain in fine tuning which they enable compared to extensions of the standard model with colored top partners. Estimates show that, at least in the simplest realizations of the TH idea, the separation between the mass of new colored particles and the electroweak scale is controlled by the coupling strength of the underlying UV theory, and that a parametric gain is achieved only for strongly-coupled dynamics. Motivated by this consideration we focus on one of these simple realizations, namely composite TH theories, and study how well such constructions can reproduce electroweak precision data. The most important effect of the twin states is found to be the infrared contribution to the Higgs quartic coupling, while direct corrections to electroweak observables are subleading and negligible. We perform a careful fit to the electroweak data including the leading-logarithmic corrections to the Higgs quartic up to three loops. Our analysis shows that agreement with electroweak precision tests can be achieved with only a moderate amount of tuning, in the range 5%-10%, in theories where colored states have mass of order 3-5 TeV and are thus out of reach of the LHC. For these levels of tuning, larger masses are excluded by a perturbativity bound, which makes these theories possibly discoverable, hence falsifiable, at a future 100 TeV collider.
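
    For readers unfamiliar with how such tuning percentages are quantified, a commonly used sensitivity measure (the paper may use a closely related variant; this is given here only as background) is

        \Delta = \max_i \left| \frac{\partial \ln m_h^2}{\partial \ln p_i} \right| , \qquad \text{tuning} \sim \frac{1}{\Delta} ,

    where the p_i are the fundamental parameters of the theory; a quoted tuning of 5%-10% then corresponds to \Delta of roughly 10-20.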

  16. Mathematical modeling of atmospheric fine particle-associated primary organic compound concentrations

    Science.gov (United States)

    Rogge, Wolfgang F.; Hildemann, Lynn M.; Mazurek, Monica A.; Cass, Glen R.; Simoneit, Bernd R. T.

    1996-08-01

    An atmospheric transport model has been used to explore the relationship between source emissions and ambient air quality for individual particle phase organic compounds present in primary aerosol source emissions. An inventory of fine particulate organic compound emissions was assembled for the Los Angeles area in the year 1982. Sources characterized included noncatalyst- and catalyst-equipped autos, diesel trucks, paved road dust, tire wear, brake lining dust, meat cooking operations, industrial oil-fired boilers, roofing tar pots, natural gas combustion in residential homes, cigarette smoke, fireplaces burning oak and pine wood, and plant leaf abrasion products. These primary fine particle source emissions were supplied to a computer-based model that simulates atmospheric transport, dispersion, and dry deposition based on the time series of hourly wind observations and mixing depths. Monthly average fine particle organic compound concentrations that would prevail if the primary organic aerosol were transported without chemical reaction were computed for more than 100 organic compounds within an 80 km × 80 km modeling area centered over Los Angeles. The monthly average compound concentrations predicted by the transport model were compared to atmospheric measurements made at monitoring sites within the study area during 1982. The predicted seasonal variation and absolute values of the concentrations of the more stable compounds are found to be in reasonable agreement with the ambient observations. While model predictions for the higher molecular weight polycyclic aromatic hydrocarbons (PAH) are in agreement with ambient observations, lower molecular weight PAH show much higher predicted than measured atmospheric concentrations in the particle phase, indicating atmospheric decay by chemical reactions or evaporation from the particle phase. The atmospheric concentrations of dicarboxylic acids and aromatic polycarboxylic acids greatly exceed the contributions that
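
    Because the primary organics are assumed to be transported without chemical reaction, the monthly average concentration of each compound at a receptor is a linear superposition of source contributions. A minimal sketch of this source-receptor bookkeeping is given below; the source list, emission rates, and transport coefficients are invented placeholders, not values from the 1982 Los Angeles inventory.

        import numpy as np

        sources = ["diesel_trucks", "meat_cooking", "wood_burning", "road_dust"]   # hypothetical subset
        compounds = ["PAH_heavy", "n_alkane_C29", "dicarboxylic_acid"]

        # emissions[i, k]: emission rate of compound k from source i (kg/day, invented numbers)
        emissions = np.array([[0.8, 2.0, 0.0],
                              [0.1, 1.5, 0.2],
                              [1.2, 3.0, 0.1],
                              [0.3, 0.8, 0.0]])

        # transfer[i]: monthly-mean transport/dispersion coefficient linking source i to the
        # receptor, in (ug/m^3) per (kg/day). In the real model this would come from hourly
        # winds and mixing depths; here it is purely illustrative.
        transfer = np.array([2.0e-3, 1.5e-3, 1.0e-3, 2.5e-3])

        concentration = transfer @ emissions   # ug/m^3 per compound, no chemical decay
        for name, c in zip(compounds, concentration):
            print(f"{name:>18s}: {c:.4f} ug/m^3 (predicted, non-reactive)")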

  17. A model for fine mapping in family based association studies.

    Science.gov (United States)

    Boehringer, Stefan; Pfeiffer, Ruth M

    2009-01-01

    Genome wide association studies for complex diseases are typically followed by more focused characterization of the identified genetic region. We propose a latent class model to evaluate a candidate region with several measured markers using observations on families. The main goal is to estimate linkage disequilibrium (LD) between the observed markers and the putative true but unobserved disease locus in the region. Based on this model, we estimate the joint distribution of alleles at the observed markers and the unobserved true disease locus, and a penetrance parameter measuring the impact of the disease allele on disease risk. A family specific random effect allows for varying baseline disease prevalences for different families. We present a likelihood framework for our model and assess its properties in simulations. We apply the model to an Alzheimer data set and confirm previous findings in the ApoE region.

  18. A reanalysis of MODIS fine mode fraction over ocean using OMI and daily GOCART simulations

    Directory of Open Access Journals (Sweden)

    T. A. Jones

    2011-06-01

    Using daily Goddard Chemistry Aerosol Radiation and Transport (GOCART) model simulations and columnar retrievals of 0.55 μm aerosol optical thickness (AOT) and fine mode fraction (FMF) from the Moderate Resolution Imaging Spectroradiometer (MODIS), we estimate the satellite-derived aerosol properties over the global oceans between June 2006 and May 2007 due to black carbon (BC), organic carbon (OC), dust (DU), sea-salt (SS), and sulfate (SU) components. Using Aqua-MODIS aerosol properties embedded in the CERES-SSF product, we find that the mean MODIS FMF values for each aerosol type are SS: 0.31 ± 0.09, DU: 0.49 ± 0.13, SU: 0.77 ± 0.16, and (BC + OC): 0.80 ± 0.16. We further combine information from the ultraviolet spectrum using the Ozone Monitoring Instrument (OMI) onboard the Aura satellite to improve the classification process, since dust and carbonaceous aerosols have positive Aerosol Index (AI) values >0.5 while other aerosol types have near zero values. By combining MODIS and OMI datasets, we were able to identify and remove data in the SU, OC, and BC regions that were not associated with those aerosol types.

    The same methods used to estimate aerosol size characteristics from MODIS data within the CERES-SSF product were applied to Level 2 (L2) MODIS aerosol data from both Terra and Aqua satellites for the same time period. As expected, FMF estimates from L2 Aqua data agreed well with the CERES-SSF dataset from Aqua. However, the FMF estimate for DU from Terra data was significantly lower (0.37 vs. 0.49), indicating that sensor calibration, sampling differences, and/or diurnal changes in DU aerosol size characteristics were occurring. Differences for other aerosol types were generally smaller. Sensitivity studies show that a difference of 0.1 in the estimate of the anthropogenic component of FMF produces a corresponding change of 0.2 in the anthropogenic component of AOT (assuming a unit value of AOT). This uncertainty would then be passed
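
    The component-wise fine mode fractions can be combined into a column estimate by AOT-weighting each aerosol type. The sketch below uses the mean FMF values quoted above together with invented component AOTs, and also restates the quoted sensitivity of anthropogenic AOT to a 0.1 uncertainty in anthropogenic FMF.

        # Mean MODIS fine mode fractions per aerosol type (from the record above)
        fmf = {"SS": 0.31, "DU": 0.49, "SU": 0.77, "BC_OC": 0.80}

        # Hypothetical component AOTs at 0.55 um for a single ocean grid box
        aot = {"SS": 0.06, "DU": 0.10, "SU": 0.05, "BC_OC": 0.03}

        total_aot = sum(aot.values())
        composite_fmf = sum(fmf[k] * aot[k] for k in aot) / total_aot
        print(f"composite FMF = {composite_fmf:.2f} for total AOT = {total_aot:.2f}")

        # Quoted sensitivity: a 0.1 change in anthropogenic FMF changes anthropogenic AOT
        # by about 0.2 per unit AOT, i.e. by 0.2 * total_aot here.
        print(f"anthropogenic AOT uncertainty ~ {0.2 * total_aot:.3f} per 0.1 FMF uncertainty")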

  19. Simulation - modeling - experiment

    International Nuclear Information System (INIS)

    2004-01-01

    After two workshops held in 2001 on the same topics, and in order to make a status of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop and dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advance status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); methods of disturbances and sensitivity analysis of nuclear data in reactor physics, application to VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F

  20. Wake modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, G.C.; Aagaard Madsen, H.; Larsen, T.J.; Troldborg, N.

    2008-07-15

    We present a consistent, physically based theory for the wake meandering phenomenon, which we consider of crucial importance for the overall description of wind turbine loadings in wind farms. In its present version the model is confined to single wake situations. The model philosophy does, however, have the potential to include also mutual wake interaction phenomena. The basic conjecture behind the dynamic wake meandering (DWM) model is that wake transportation in the atmospheric boundary layer is driven by the large scale lateral- and vertical turbulence components. Based on this conjecture a stochastic model of the downstream wake meandering is formulated. In addition to the kinematic formulation of the dynamics of the 'meandering frame of reference', models characterizing the mean wake deficit as well as the added wake turbulence, described in the meandering frame of reference, are an integrated part of the DWM model complex. For design applications, the computational efficiency of wake deficit prediction is a key issue. A computationally low cost model is developed for this purpose. Likewise, the character of the added wake turbulence, generated by the up-stream turbine in the form of shed and trailed vorticity, has been approached by a simple semi-empirical model essentially based on an eddy viscosity philosophy. Contrary to previous attempts to model wake loading, the DWM approach opens for a unifying description in the sense that turbine power- and load aspects can be treated simultaneously. This capability is a direct and attractive consequence of the model being based on the underlying physical process, and it potentially opens for optimization of wind farm topology, of wind farm operation as well as of control strategies for the individual turbine. To establish an integrated modeling tool, the DWM methodology has been implemented in the aeroelastic code HAWC2, and example simulations of wake situations, from the small Tjaereborg wind farm, have

  1. Comparative effects of simulated acid rain of different ratios of SO42- to NO3- on fine root in subtropical plantation of China.

    Science.gov (United States)

    Liu, Xin; Zhao, Wenrui; Meng, Miaojing; Fu, Zhiyuan; Xu, Linhao; Zha, Yan; Yue, Jianmin; Zhang, Shuifeng; Zhang, Jinchi

    2018-03-15

    The influence of acid rain on forest trees includes direct effects on foliage as well as indirect soil-mediated effects that cause a reduction in fine-root growth. In addition, the concentration of NO3- in acid rain increases with the rapid growth of nitrogen deposition. In this study, we investigated the impact of simulated acid rain with different SO42-/NO3- (S/N) ratios, which were 5:1 (S), 1:1 (SN) and 1:5 (N), on fine-root growth from March 2015 to February 2016. Results showed that fine roots were more sensitive to the effects of acid rain than soils in the short-term. Both soil pH and fine root biomass (FRB) significantly decreased as acid rain pH decreased, and also decreased as the percentage of NO3- in acid rain increased. Acid rain pH significantly influenced soil total carbon and available potassium in summer. The higher acidity level (pH=2.5), especially in the N treatments, had the strongest inhibitory impact on soil microbial activity after summer. The structural equation modelling results showed that acid rain S/N ratio and pH had stronger direct effects on FRB than indirect effects via changed soil and fine root properties. Fine-root element contents and antioxidant enzyme activities were significantly affected by acid rain S/N ratio and pH during most seasons. Fine-root Al ion content, Ca/Al, Mg/Al ratios and catalase activity were used as better indicators than soil parameters for evaluating the effects of different acid rain S/N ratios and pH on forests. Our results suggest that the ratio of SO42- to NO3- in acid rain is an important factor which could affect fine-root growth in subtropical forests of China. Copyright © 2017. Published by Elsevier B.V.

  2. Facilitating Fine Grained Data Provenance using Temporal Data Model

    NARCIS (Netherlands)

    Huq, M.R.; Wombacher, Andreas; Apers, Peter M.G.

    2010-01-01

    E-science applications use fine grained data provenance to maintain the reproducibility of scientific results, i.e., for each processed data tuple, the source data used to process the tuple as well as the used approach is documented. Since most of the e-science applications perform on-line

  3. Biomolecular modelling and simulations

    CERN Document Server

    Karabencheva-Christova, Tatyana

    2014-01-01

    Published continuously since 1944, the Advances in Protein Chemistry and Structural Biology series is the essential resource for protein chemists. Each volume brings forth new information about protocols and analysis of proteins. Each thematically organized volume is guest edited by leading experts in a broad range of protein-related topics. Describes advances in biomolecular modelling and simulations. Chapters are written by authorities in their field. Targeted to a wide audience of researchers, specialists, and students. The information provided in the volume is well supported by a number of high quality illustrations, figures, and tables.

  4. Numerical modelling of the dehydration of waste concrete fines : An attempt to close the recycling loop

    NARCIS (Netherlands)

    Teklay, Abraham; Vahidi, A.; Lotfi, Somayeh; Di Maio, F.; Rem, P.C.; Di Maio, F.; Lotfi, S.; Bakker, M.; Hu, M.; Vahidi, A.

    2017-01-01

    The ever-increasing interest in sustainable raw materials has spurred the quest for recycled materials that can be used as a partial or total replacement of fine fractions in the production of concrete. This paper demonstrates a modelling study of recycled concrete waste fines and the possibility of

  5. Numerical simulation of fine oil sand tailings drying in test cells

    NARCIS (Netherlands)

    Vardon, P.J.; Nijssen, T.; Yao, Y.; Van Tol, A.F.

    2014-01-01

    As a promising technology in disposal of mature fine tailings (MFT), atmospheric fines drying (AFD) is currently being implemented on a commercial scale at Shell Canada’s Muskeg River Mine near Fort McMurray, Alberta. AFD involves the use of a polymer flocculent to bind fine particles in MFT

  6. Simulation of fine organic aerosols in the western Mediterranean area during the ChArMEx 2013 summer campaign

    Science.gov (United States)

    Cholakian, Arineh; Beekmann, Matthias; Colette, Augustin; Coll, Isabelle; Siour, Guillaume; Sciare, Jean; Marchand, Nicolas; Couvidat, Florian; Pey, Jorge; Gros, Valerie; Sauvage, Stéphane; Michoud, Vincent; Sellegri, Karine; Colomb, Aurélie; Sartelet, Karine; Langley DeWitt, Helen; Elser, Miriam; Prévot, André S. H.; Szidat, Sonke; Dulac, François

    2018-05-01

    The simulation of fine organic aerosols with CTMs (chemistry-transport models) in the western Mediterranean basin has not been studied until recently. The ChArMEx (the Chemistry-Aerosol Mediterranean Experiment) SOP 1b (Special Observation Period 1b) intensive field campaign in summer of 2013 gathered a large and comprehensive data set of observations, allowing the study of different aspects of the Mediterranean atmosphere including the formation of organic aerosols (OAs) in 3-D models. In this study, we used the CHIMERE CTM to perform simulations for the duration of the SAFMED (Secondary Aerosol Formation in the MEDiterranean) period (July to August 2013) of this campaign. In particular, we evaluated four schemes for the simulation of OA, including the CHIMERE standard scheme, the VBS (volatility basis set) standard scheme with two parameterizations including aging of biogenic secondary OA, and a modified version of the VBS scheme which includes fragmentation and formation of nonvolatile OA. The results from these four schemes are compared to observations at two stations in the western Mediterranean basin, located on Ersa, Cap Corse (Corsica, France), and at Cap Es Pinar (Mallorca, Spain). These observations include OA mass concentration, PMF (positive matrix factorization) results of different OA fractions, and 14C observations showing the fossil or nonfossil origins of carbonaceous particles. Because of the complex orography of the Ersa site, an original method for calculating an orographic representativeness error (ORE) has been developed. It is concluded that the modified VBS scheme is close to observations in all three aspects mentioned above; the standard VBS scheme without BSOA (biogenic secondary organic aerosol) aging also has a satisfactory performance in simulating the mass concentration of OA, but not for the source origin analysis comparisons. In addition, the OA sources over the western Mediterranean basin are explored. OA shows a major biogenic

  7. Constructing a consumption model of fine dining from the perspective of behavioral economics.

    Science.gov (United States)

    Hsu, Sheng-Hsun; Hsiao, Cheng-Fu; Tsai, Sang-Bing

    2018-01-01

    Numerous factors affect how people choose a fine dining restaurant, including food quality, service quality, food safety, and hedonic value. A conceptual framework for evaluating restaurant selection behavior has not yet been developed. This study surveyed 150 individuals with fine dining experience and proposed the use of mental accounting and axiomatic design to construct a consumer economic behavior model. Linear and logistic regressions were employed to determine model correlations and the probability of each factor affecting behavior. The most crucial factor was food quality, followed by service and dining motivation, particularly regarding family dining. Safe ingredients, high cooking standards, and menu innovation all increased the likelihood of consumers choosing fine dining restaurants.
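
    A minimal sketch of the kind of logistic regression used to estimate the probability of choosing a fine dining restaurant from rated factors is given below; the predictors and responses are synthetic placeholders generated so that food quality dominates, and they are not the authors' 150-respondent data set.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n = 150
        # Synthetic 1-5 ratings for food quality, service quality, food safety, hedonic value
        X = rng.integers(1, 6, size=(n, 4)).astype(float)
        # Synthetic choice outcome: food quality weighted most heavily (invented weights)
        logits = 1.2 * X[:, 0] + 0.6 * X[:, 1] + 0.3 * X[:, 2] + 0.2 * X[:, 3] - 7.0
        y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

        model = LogisticRegression().fit(X, y)
        print("coefficients (food, service, safety, hedonic):", np.round(model.coef_[0], 2))
        print("P(choose fine dining | all ratings = 4):",
              round(model.predict_proba([[4, 4, 4, 4]])[0, 1], 2))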

  8. GIS Modeling of Solar Neighborhood Potential at a Fine Spatiotemporal Resolution

    Directory of Open Access Journals (Sweden)

    Annie Chow

    2014-05-01

    This research presents a 3D geographic information systems (GIS) modeling approach at a fine spatiotemporal resolution to assess solar potential for the development of smart net-zero energy communities. It is important to be able to accurately identify the key areas on the facades and rooftops of buildings that receive maximum solar radiation, in order to prevent losses in solar gain due to obstructions from surrounding buildings and topographic features. A model was created in ArcGIS, in order to efficiently compute and iterate the hourly solar modeling and mapping process over a simulated year. The methodology was tested on a case study area located in southern Ontario, where two different 3D models of the site plan were analyzed. The accuracy of the work depends on the resolution and sky size of the input model. Future work is needed in order to create an efficient iterative function to speed the extraction process of the pixelated solar radiation data.
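
    The hourly iteration over a simulated year amounts to accumulating, for every roof or facade pixel, the shading-adjusted irradiance across 8760 hours. The sketch below uses an invented clear-sky curve and a random shading factor purely to show the bookkeeping; it does not reproduce the ArcGIS solar radiation tool or the 3D shading analysis.

        import numpy as np

        n_pixels, hours_per_year = 1_000, 8760
        rng = np.random.default_rng(42)

        # Toy clear-sky global irradiance (W/m^2): half-sine over each daylight period
        hour_of_day = np.arange(hours_per_year) % 24
        clear_sky = np.where((hour_of_day >= 6) & (hour_of_day <= 18),
                             900.0 * np.sin(np.pi * (hour_of_day - 6) / 12.0), 0.0)

        # Hypothetical per-pixel, per-hour shading factor (0 = fully shaded, 1 = unshaded);
        # in the real workflow this would come from the 3D building/terrain model.
        shading = rng.uniform(0.3, 1.0, size=(n_pixels, hours_per_year))

        annual_wh_per_m2 = (shading * clear_sky).sum(axis=1)   # Wh/m^2 per pixel
        best = np.argsort(annual_wh_per_m2)[-5:][::-1]
        print("top-5 pixels by annual insolation (kWh/m^2):",
              np.round(annual_wh_per_m2[best] / 1000.0, 1))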

  9. Incorporation of Fine-Grained Sediment Erodibility Measurements into Sediment Transport Modeling, Capitol Lake, Washington

    Science.gov (United States)

    Stevens, Andrew W.; Gelfenbaum, Guy; Elias, Edwin; Jones, Craig

    2008-01-01

    Capitol Lake was created in 1951 with the construction of a concrete dam and control gate that prevented salt-water intrusion into the newly formed lake and regulated flow of the Deschutes River into southern Puget Sound. Physical processes associated with the former tidally dominated estuary were altered, and the dam structure itself likely caused an increase in retention of sediment flowing into the lake from the Deschutes River. Several efforts to manage sediment accumulation in the lake, including dredging and the construction of sediment traps upriver, failed to stop the lake from filling with sediment. The Deschutes Estuary Feasibility Study (DEFS) was carried out to evaluate the possibility of removing the dam and restoring estuarine processes as an alternative to ongoing lake management. An important component of DEFS was the creation of a hydrodynamic and sediment transport model of the restored Deschutes Estuary. Results from model simulations indicated that estuarine processes would be restored under each of four restoration alternatives, and that over time, the restored estuary would have morphological features similar to the predam estuary. The model also predicted that after dam-removal, a large portion of the sediment eroded from the lake bottom would be deposited near the Port of Olympia and a marina located in lower Budd Inlet seaward of the present dam. The volume of sediment transported downstream was a critical piece of information that managers needed to estimate the total cost of the proposed restoration project. However, the ability of the model to predict the magnitude of sediment transport in general and, in particular, the volume of sediment deposition in the port and marina was limited by a lack of information on the erodibility of fine-grained sediments in Capitol Lake. Cores at several sites throughout Capitol Lake were collected between October 31 and November 1, 2007. The erodibility of sediments in the cores was later determined in the

  10. Doubly stochastic Poisson process models for precipitation at fine time-scales

    Science.gov (United States)

    Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao

    2012-09-01

    This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, in the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely-known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site but an extension to multiple sites is illustrated which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.
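
    The defining feature of a doubly stochastic (Cox) process is that the Poisson arrival rate is itself a random process. A simple special case is a two-state Markov-modulated Poisson process, sketched below with invented rates to show how clustered fine-scale rain-cell arrivals can be generated; this illustrates the model class, not the authors' fitted model.

        import random

        def simulate_mmpp(t_end_h=240.0, rates=(0.05, 2.0), switch=(1/20.0, 1/2.0), seed=0):
            """Two-state Markov-modulated Poisson process.
            rates[i]  : rain-event arrival rate in state i (events per hour)
            switch[i] : rate of leaving state i (per hour); state 0 = dry regime, 1 = wet regime."""
            rng = random.Random(seed)
            t, state, arrivals = 0.0, 0, []
            while t < t_end_h:
                # times to the next regime switch and the next candidate arrival (both exponential;
                # by memorylessness the unused one can simply be redrawn on the next pass)
                dt_switch = rng.expovariate(switch[state])
                dt_event = rng.expovariate(rates[state])
                if dt_event < dt_switch:
                    t += dt_event
                    if t < t_end_h:
                        arrivals.append(t)
                else:
                    t += dt_switch
                    state = 1 - state
            return arrivals

        events = simulate_mmpp()
        print(f"{len(events)} simulated arrivals over 240 h; first five (h):",
              [round(x, 1) for x in events[:5]])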

  11. Notes on modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Redondo, Antonio [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-10

    These notes present a high-level overview of how modeling and simulation are carried out by practitioners. The discussion is of a general nature; no specific techniques are examined but the activities associated with all modeling and simulation approaches are briefly addressed. There is also a discussion of validation and verification and, at the end, a section on why modeling and simulation are useful.

  12. Photoionization modeling of the LWS fine-structure lines in IR bright galaxies

    Science.gov (United States)

    Satyapal, S.; Luhman, M. L.; Fischer, J.; Greenhouse, M. A.; Wolfire, M. G.

    1997-01-01

    The long wavelength spectrometer (LWS) fine structure line spectra from infrared luminous galaxies were modeled using stellar evolutionary synthesis models combined with photoionization and photodissociation region models. The calculations were carried out by using the computational code CLOUDY. Starburst and active galactic nuclei models are presented. The effects of dust in the ionized region are examined.

  13. Setup of a Parameterized FE Model for the Die Roll Prediction in Fine Blanking using Artificial Neural Networks

    Science.gov (United States)

    Stanke, J.; Trauth, D.; Feuerhack, A.; Klocke, F.

    2017-09-01

    Die roll is a morphological feature of fine blanked sheared edges. The die roll reduces the functional part of the sheared edge. To compensate for the die roll, thicker sheet metal strips and secondary machining must be used. However, in order to avoid this, the influence of various fine blanking process parameters on the die roll has been experimentally and numerically studied, but there is still a lack of knowledge on the effects of some factors and especially factor interactions on the die roll. Recent changes in the field of artificial intelligence motivate the hybrid use of the finite element method and artificial neural networks to account for these non-considered parameters. Therefore, a set of simulations using a validated finite element model of fine blanking is firstly used to train an artificial neural network. Then the artificial neural network is trained with thousands of experimental trials. Thus, the objective of this contribution is to develop an artificial neural network that reliably predicts the die roll. Therefore, in this contribution, the setup of a fully parameterized 2D FE model is presented that will be used for batch training of an artificial neural network. The FE model enables an automatic variation of the edge radii of blank punch and die plate, the counter and blank holder force, the sheet metal thickness and part diameter, V-ring height and position, cutting velocity as well as material parameters covered by the Hensel-Spittel model for 16MnCr5 (1.7131, AISI/SAE 5115). The FE model is validated using experimental trials. The result of this contribution is an FE model suitable to perform 9,623 simulations and to pass the simulated die roll width and height automatically to an artificial neural network.
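
    A scaled-down sketch of the surrogate idea (simulation results used to train a neural network that predicts die roll from process parameters) is given below. The "FE data" are generated from an invented analytic stand-in so the example runs without a solver, and the parameter ranges and network settings are placeholders rather than values from the paper.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(7)
        n = 2000
        # Process parameters: punch edge radius (mm), V-ring height (mm), blank holder force (kN),
        # sheet thickness (mm) -- a subset of the parameters varied in the FE model.
        X = np.column_stack([rng.uniform(0.01, 0.2, n),
                             rng.uniform(0.3, 1.2, n),
                             rng.uniform(20, 120, n),
                             rng.uniform(2.0, 8.0, n)])
        # Stand-in for the simulated die roll height (mm): an invented smooth response plus noise.
        y = (0.08 * X[:, 3] + 0.5 * X[:, 0] - 0.03 * X[:, 1]
             - 0.0004 * X[:, 2] + rng.normal(0, 0.01, n))

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        ann = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0))
        ann.fit(X_tr, y_tr)
        print("surrogate R^2 on held-out samples:", round(ann.score(X_te, y_te), 3))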

  14. PROPERTIES AND MODELING OF UNRESOLVED FINE STRUCTURE LOOPS OBSERVED IN THE SOLAR TRANSITION REGION BY IRIS

    Energy Technology Data Exchange (ETDEWEB)

    Brooks, David H. [College of Science, George Mason University, 4400 University Drive, Fairfax, VA 22030 (United States); Reep, Jeffrey W.; Warren, Harry P. [Space Science Division, Naval Research Laboratory, Washington, DC 20375 (United States)

    2016-08-01

    Recent observations from the Interface Region Imaging Spectrograph ( IRIS ) have discovered a new class of numerous low-lying dynamic loop structures, and it has been argued that they are the long-postulated unresolved fine structures (UFSs) that dominate the emission of the solar transition region. In this letter, we combine IRIS measurements of the properties of a sample of 108 UFSs (intensities, lengths, widths, lifetimes) with one-dimensional non-equilibrium ionization simulations, using the HYDRAD hydrodynamic model to examine whether the UFSs are now truly spatially resolved in the sense of being individual structures rather than being composed of multiple magnetic threads. We find that a simulation of an impulsively heated single strand can reproduce most of the observed properties, suggesting that the UFSs may be resolved, and the distribution of UFS widths implies that they are structured on a spatial scale of 133 km on average. Spatial scales of a few hundred kilometers appear to be typical for a range of chromospheric and coronal structures, and we conjecture that this could be an important clue for understanding the coronal heating process.

  15. Checking Fine and Gray subdistribution hazards model with cumulative sums of residuals

    DEFF Research Database (Denmark)

    Li, Jianing; Scheike, Thomas; Zhang, Mei Jie

    2015-01-01

    Recently, Fine and Gray (J Am Stat Assoc 94:496–509, 1999) proposed a semi-parametric proportional regression model for the subdistribution hazard function which has been used extensively for analyzing competing risks data. However, failure of model adequacy could lead to severe bias in parameter...... estimation, and only a limited contribution has been made to check the model assumptions. In this paper, we present a class of analytical methods and graphical approaches for checking the assumptions of Fine and Gray’s model. The proposed goodness-of-fit test procedures are based on the cumulative sums...

  16. Simulation Model of a Transient

    DEFF Research Database (Denmark)

    Jauch, Clemens; Sørensen, Poul; Bak-Jensen, Birgitte

    2005-01-01

    This paper describes the simulation model of a controller that enables an active-stall wind turbine to ride through transient faults. The simulated wind turbine is connected to a simple model of a power system. Certain fault scenarios are specified and the turbine shall be able to sustain operati...

  17. Implications for new physics from fine-tuning arguments: II. Little Higgs models

    International Nuclear Information System (INIS)

    Casas, J.A.; Espinosa, J.R.; Hidalgo, I.

    2005-01-01

    We examine the fine-tuning associated to electroweak breaking in Little Higgs scenarios and find it to be always substantial and, generically, much higher than suggested by the rough estimates usually made. This is due to implicit tunings between parameters that can be overlooked at first glance but show up in a more systematic analysis. Focusing on four popular and representative Little Higgs scenarios, we find that the fine-tuning is essentially comparable to that of the Little Hierarchy problem of the Standard Model (which these scenarios attempt to solve) and higher than in supersymmetric models. This does not demonstrate that all Little Higgs models are fine-tuned, but stresses the need of a careful analysis of this issue in model-building before claiming that a particular model is not fine-tuned. In this respect we identify the main sources of potential fine-tuning that should be watched out for, in order to construct a successful Little Higgs model, which seems to be a non-trivial goal. (author)

  18. Cognitive models embedded in system simulation models

    International Nuclear Information System (INIS)

    Siegel, A.I.; Wolf, J.J.

    1982-01-01

    If we are to discuss and consider cognitive models, we must first come to grips with two questions: (1) What is cognition; (2) What is a model. Presumably, the answers to these questions can provide a basis for defining a cognitive model. Accordingly, this paper first places these two questions into perspective. Then, cognitive models are set within the context of computer simulation models and a number of computer simulations of cognitive processes are described. Finally, pervasive issues are discussed vis-a-vis cognitive modeling in the computer simulation context

  19. Mathematical and Computational Aspects Related to Soil Modeling and Simulation

    Science.gov (United States)

    2017-09-26

    and simulation challenges at the interface of applied math (homogenization, handling of discontinuous behavior, discrete vs. continuum representations) ... topics: a) Visco-elasto-plastic continuum models of geo-surface materials; b) Discrete models of geo-surface materials (rocks/gravel/sand); c) Mixed ... continuum-discrete representations, coarse-graining and fine-graining mathematical formulations; d) Multi-physics aspects related to the modeling of

  20. A general coarse and fine mesh solution scheme for fluid flow modeling in VHTRS

    International Nuclear Information System (INIS)

    Clifford, I; Ivanov, K; Avramova, M.

    2011-01-01

    Coarse mesh Computational Fluid Dynamics (CFD) methods offer several advantages over traditional coarse mesh methods for the safety analysis of helium-cooled graphite-moderated Very High Temperature Reactors (VHTRs). This relatively new approach opens up the possibility for system-wide calculations to be carried out using a consistent set of field equations throughout the calculation, and subsequently the possibility for hybrid coarse/fine mesh or hierarchical multi scale CFD simulations. To date, a consistent methodology for hierarchical multi-scale CFD has not been developed. This paper describes work carried out in the initial development of a multi scale CFD solver intended to be used for the safety analysis of VHTRs. The VHTR is considered on any scale to consist of a homogenized two-phase mixture of fluid and stationary solid material of varying void fraction. A consistent set of conservation equations was selected such that they reduce to the single-phase conservation equations for the case where void fraction is unity. The discretization of the conservation equations uses a new pressure interpolation scheme capable of capturing the discontinuity in pressure across relatively large changes in void fraction. Based on this, a test solver was developed which supports fully unstructured meshes for three-dimensional time-dependent compressible flow problems, including buoyancy effects. For typical VHTR flow phenomena the new solver shows promise as an effective candidate for predicting the flow behavior on multiple scales, as it is capable of modeling both fine mesh single phase flows as well as coarse mesh flows in homogenized regions containing both fluid and solid materials. (author)
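
    As an illustration of the kind of homogenized conservation equation meant here (written in a generic porous-medium form; the closure terms of the actual test solver are not reproduced), mass conservation for the fluid in a region of void fraction \gamma reads

        \frac{\partial (\gamma \rho)}{\partial t} + \nabla \cdot (\gamma \rho \mathbf{u}) = 0 ,

    which reduces to the single-phase continuity equation when \gamma = 1; momentum and energy are treated analogously, with drag and heat-exchange source terms representing the embedded solid material.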

  1. General introduction to simulation models

    DEFF Research Database (Denmark)

    Hisham Beshara Halasa, Tariq; Boklund, Anette

    2012-01-01

    Monte Carlo simulation can be defined as a representation of real life systems to gain insight into their functions and to investigate the effects of alternative conditions or actions on the modeled system. Models are a simplification of a system. Most often, it is best to use experiments and field trials to investigate the effect of alternative conditions or actions on a specific system. Nonetheless, field trials are expensive and sometimes not possible to conduct, as in case of foot-and-mouth disease (FMD). Instead, simulation models can be a good and cheap substitute for experiments and field trials. However, if simulation models would be used, good quality input data must be available. To model FMD, several disease spread models are available. For this project, we chose three simulation models: Davis Animal Disease Spread (DADS), which has been upgraded to DTU-DADS, InterSpread Plus (ISP), ...

  2. Demonstrating the Uneven Importance of Fine-Scale Forest Structure on Snow Distributions using High Resolution Modeling

    Science.gov (United States)

    Broxton, P. D.; Harpold, A. A.; van Leeuwen, W.; Biederman, J. A.

    2016-12-01

    Quantifying the amount of snow in forested mountainous environments, as well as how it may change due to warming and forest disturbance, is critical given its importance for water supply and ecosystem health. Forest canopies affect snow accumulation and ablation in ways that are difficult to observe and model. Furthermore, fine-scale forest structure can accentuate or diminish the effects of forest-snow interactions. Despite decades of research demonstrating the importance of fine-scale forest structure (e.g. canopy edges and gaps) on snow, we still lack a comprehensive understanding of where and when forest structure has the largest impact on snowpack mass and energy budgets. Here, we use a hyper-resolution (1 meter spatial resolution) mass and energy balance snow model called the Snow Physics and Laser Mapping (SnowPALM) model along with LIDAR-derived forest structure to determine where spatial variability of fine-scale forest structure has the largest influence on large scale mass and energy budgets. SnowPALM was set up and calibrated at sites representing diverse climates in New Mexico, Arizona, and California. Then, we compared simulations at different model resolutions (i.e. 1, 10, and 100 m) to elucidate the effects of including versus not including information about fine scale canopy structure. These experiments were repeated for different prescribed topographies (i.e. flat, 30% slope north, and south-facing) at each site. Higher resolution simulations had more snow at lower canopy cover, with the opposite being true at high canopy cover. Furthermore, there is considerable scatter, indicating that different canopy arrangements can lead to different amounts of snow, even when the overall canopy coverage is the same. This modeling is contributing to the development of a high resolution machine learning algorithm called the Snow Water Artificial Network (SWANN) model to generate predictions of snow distributions over much larger domains, which has implications
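
    The resolution experiments amount to comparing a field simulated at 1 m with the same field represented at 10 m and 100 m. A minimal sketch of such block aggregation of a synthetic 1 m canopy-cover grid is shown below; it is meant only to show how between-cell variability collapses at coarser resolution, not to emulate SnowPALM.

        import numpy as np

        def block_mean(field, factor):
            """Aggregate a 2-D field to a coarser resolution by block averaging."""
            n = (field.shape[0] // factor) * factor
            f = field[:n, :n]
            return f.reshape(n // factor, factor, n // factor, factor).mean(axis=(1, 3))

        rng = np.random.default_rng(3)
        canopy_1m = (rng.random((1000, 1000)) < 0.45).astype(float)   # synthetic 45% canopy cover

        for res in (10, 100):
            coarse = block_mean(canopy_1m, res)
            print(f"{res:3d} m grid: mean cover = {coarse.mean():.3f}, "
                  f"between-cell std = {coarse.std():.3f}")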

  3. Impact of biogenic emission uncertainties on the simulated response of ozone and fine particulate matter to anthropogenic emission reductions.

    Science.gov (United States)

    Hogrefe, Christian; Isukapalli, Sastry S; Tang, Xiaogang; Georgopoulos, Panos G; He, Shan; Zalewsky, Eric E; Hao, Winston; Ku, Jia-Yeong; Key, Tonalee; Sistla, Gopal

    2011-01-01

    The role of emissions of volatile organic compounds and nitric oxide from biogenic sources is becoming increasingly important in regulatory air quality modeling as levels of anthropogenic emissions continue to decrease and stricter health-based air quality standards are being adopted. However, considerable uncertainties still exist in the current estimation methodologies for biogenic emissions. The impact of these uncertainties on ozone and fine particulate matter (PM2.5) levels for the eastern United States was studied, focusing on biogenic emissions estimates from two commonly used biogenic emission models, the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and the Biogenic Emissions Inventory System (BEIS). Photochemical grid modeling simulations were performed for two scenarios: one reflecting present day conditions and the other reflecting a hypothetical future year with reductions in emissions of anthropogenic oxides of nitrogen (NOx). For ozone, the use of MEGAN emissions resulted in a higher ozone response to hypothetical anthropogenic NOx emission reductions compared with BEIS. Applying the current U.S. Environmental Protection Agency guidance on regulatory air quality modeling in conjunction with typical maximum ozone concentrations, the differences in estimated future year ozone design values (DVF) stemming from differences in biogenic emissions estimates were on the order of 4 parts per billion (ppb), corresponding to approximately 5% of the daily maximum 8-hr ozone National Ambient Air Quality Standard (NAAQS) of 75 ppb. For PM2.5, the differences were 0.1-0.25 microg/m3 in the summer total organic mass component of DVFs, corresponding to approximately 1-2% of the value of the annual PM2.5 NAAQS of 15 microg/m3. Spatial variations in the ozone and PM2.5 differences also reveal that the impacts of different biogenic emission estimates on ozone and PM2.5 levels are dependent on ambient levels of anthropogenic emissions.
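
    The design-value arithmetic referred to above follows the relative response factor (RRF) approach of the EPA modeled attainment test; the sketch below uses invented concentrations to show how a few-percent difference in the modeled ozone response maps onto a few-ppb difference in the future design value.

        def future_design_value(dv_current_ppb, model_future_ppb, model_base_ppb):
            """EPA-style relative response factor (RRF) scaling of an ozone design value."""
            rrf = model_future_ppb / model_base_ppb
            return dv_current_ppb * rrf

        dv_current = 82.0    # current-year 8-hr ozone design value, ppb (illustrative)
        model_base = 80.0    # modeled base-case ozone, ppb (illustrative)

        # Two biogenic-emission scenarios giving slightly different modeled responses (illustrative)
        for name, model_future in (("BEIS-based", 72.0), ("MEGAN-based", 68.0)):
            dvf = future_design_value(dv_current, model_future, model_base)
            print(f"{name:>12s}: RRF = {model_future/model_base:.3f}, DVF = {dvf:.1f} ppb")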

  4. Simulation - modeling - experiment; Simulation - modelisation - experience

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    After two workshops held in 2001 on the same topics, and in order to make a status of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop and dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advance status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); methods of disturbances and sensitivity analysis of nuclear data in reactor physics, application to VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F

  5. Quantitative rainfall metrics for comparing volumetric rainfall retrievals to fine scale models

    Science.gov (United States)

    Collis, Scott; Tao, Wei-Kuo; Giangrande, Scott; Fridlind, Ann; Theisen, Adam; Jensen, Michael

    2013-04-01

    Precipitation processes play a significant role in the energy balance of convective systems, for example through latent heating and evaporative cooling. Heavy precipitation "cores" can also be a proxy for vigorous convection and vertical motions. However, comparisons between rainfall rate retrievals from volumetric remote sensors with forecast rain fields from high-resolution numerical weather prediction simulations are complicated by differences in the location and timing of storm morphological features. This presentation will outline a series of metrics for diagnosing the spatial variability and statistical properties of precipitation maps produced both from models and retrievals. We include existing metrics such as Contoured Frequency by Altitude Diagrams (Yuter and Houze 1995) and Statistical Coverage Products (May and Lane 2009) and propose new metrics based on morphology and cell- and feature-based statistics. Work presented focuses on observations from the ARM Southern Great Plains radar network consisting of three agile X-band radar systems with a very dense coverage pattern and a C-band system providing site-wide coverage. By combining multiple sensors, resolutions of 250 m2 can be achieved, allowing improved characterization of fine-scale features. Analyses compare data collected during the Midlatitude Continental Convective Clouds Experiment (MC3E) with simulations of observed systems using the NASA Unified Weather Research and Forecasting model. May, P. T., and T. P. Lane, 2009: A method for using weather radar data to test cloud resolving models. Meteorological Applications, 16, 425-425, doi:10.1002/met.150. Yuter, S. E., and R. A. Houze, 1995: Three-Dimensional Kinematic and Microphysical Evolution of Florida Cumulonimbus. Part II: Frequency Distributions of Vertical Velocity, Reflectivity, and Differential Reflectivity. Mon. Wea. Rev., 123, 1941-1963, doi:10.1175/1520-0493(1995)1232.0.CO;2.
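
    One of the metrics named above, the contoured frequency by altitude diagram (CFAD), is essentially a two-dimensional histogram of a radar or model field by altitude, normalized at each level. A minimal numpy version with synthetic reflectivity profiles is sketched below; the data and bin choices are illustrative only.

        import numpy as np

        rng = np.random.default_rng(11)
        n_profiles, n_levels = 5000, 40
        altitude_km = np.linspace(0.25, 10.0, n_levels)

        # Synthetic reflectivity (dBZ): decreasing with height plus scatter, NaN below a floor
        dbz = 35.0 - 2.5 * altitude_km + rng.normal(0.0, 6.0, size=(n_profiles, n_levels))
        dbz[dbz < -5.0] = np.nan

        dbz_bins = np.arange(-5.0, 60.0, 2.5)
        cfad = np.empty((n_levels, dbz_bins.size - 1))
        for k in range(n_levels):
            counts, _ = np.histogram(dbz[:, k][~np.isnan(dbz[:, k])], bins=dbz_bins)
            cfad[k] = counts / max(counts.sum(), 1)   # frequency normalized per altitude level

        print("CFAD shape (levels x dBZ bins):", cfad.shape)
        level_1km = np.argmin(np.abs(altitude_km - 1.0))
        print("modal dBZ bin (left edge) near 1 km:", dbz_bins[np.argmax(cfad[level_1km])])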

  6. Microbial and Organic Fine Particle Transport Dynamics in Streams - a Combined Experimental and Stochastic Modeling Approach

    Science.gov (United States)

    Drummond, Jen; Davies-Colley, Rob; Stott, Rebecca; Sukias, James; Nagels, John; Sharp, Alice; Packman, Aaron

    2014-05-01

    Transport dynamics of microbial cells and organic fine particles are important to stream ecology and biogeochemistry. Cells and particles continuously deposit and resuspend during downstream transport owing to a variety of processes including gravitational settling, interactions with in-stream structures or biofilms at the sediment-water interface, and hyporheic exchange and filtration within underlying sediments. Deposited cells and particles are also resuspended following increases in streamflow. Fine particle retention influences biogeochemical processing of substrates and nutrients (C, N, P), while remobilization of pathogenic microbes during flood events presents a hazard to downstream uses such as water supplies and recreation. We are conducting studies to gain insights into the dynamics of fine particles and microbes in streams, with a campaign of experiments and modeling. The results improve understanding of fine sediment transport, carbon cycling, nutrient spiraling, and microbial hazards in streams. We developed a stochastic model to describe the transport and retention of fine particles and microbes in rivers that accounts for hyporheic exchange and transport through porewaters, reversible filtration within the streambed, and microbial inactivation in the water column and subsurface. This model framework is an advance over previous work in that it incorporates detailed transport and retention processes that are amenable to measurement. Solute, particle, and microbial transport were observed both locally within sediment and at the whole-stream scale. A multi-tracer whole-stream injection experiment compared the transport and retention of a conservative solute, fluorescent fine particles, and the fecal indicator bacterium Escherichia coli. Retention occurred within both the underlying sediment bed and stands of submerged macrophytes. The results demonstrate that the combination of local measurements, whole-stream tracer experiments, and advanced modeling
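
    A stripped-down version of such a stochastic mobile-immobile transport model, in which particles advect in the water column, can be temporarily immobilized in the bed or macrophyte stands, and (for microbes) undergo first-order inactivation, can be written as a particle-tracking loop. All rate constants below are invented for illustration and are not the fitted values from the experiments.

        import random

        def track_particles(n=5000, dt=60.0, t_end=6 * 3600.0, u=0.3,
                            k_dep=2e-4, k_res=5e-5, k_inact=1e-5, seed=2):
            """Stochastic mobile-immobile transport with first-order inactivation.
            u: stream velocity (m/s); k_dep, k_res: deposition/resuspension rates (1/s);
            k_inact: inactivation rate (1/s). All values are illustrative."""
            rng = random.Random(seed)
            x = [0.0] * n               # downstream position (m)
            mobile = [True] * n
            alive = [True] * n
            t = 0.0
            while t < t_end:
                for i in range(n):
                    if not alive[i]:
                        continue
                    if rng.random() < k_inact * dt:
                        alive[i] = False
                        continue
                    if mobile[i]:
                        x[i] += u * dt
                        if rng.random() < k_dep * dt:
                            mobile[i] = False
                    elif rng.random() < k_res * dt:
                        mobile[i] = True
                t += dt
            surviving = [xi for xi, a in zip(x, alive) if a]
            n_mobile = sum(mobile[i] and alive[i] for i in range(n))
            return surviving, n_mobile

        positions, n_mobile = track_particles()
        print(f"{len(positions)} of 5000 particles still viable; {n_mobile} still mobile")
        print(f"mean downstream displacement of viable particles: {sum(positions)/len(positions):.0f} m")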

  7. Model of the fine-grain component of martian soil based on Viking lander data

    International Nuclear Information System (INIS)

    Nussinov, M.D.; Chernyak, Y.B.; Ettinger, J.L.

    1978-01-01

    A model of the fine-grain component of the Martian soil is proposed. The model is based on well-known physical phenomena, and enables an explanation of the evolution of the gases released in the GEX (gas exchange experiments) and GCMS (gas chromatography-mass spectrometer experiments) of the Viking landers. (author)

  8. An effective anisotropic poroelastic model for elastic wave propagation in finely layered media

    NARCIS (Netherlands)

    Kudarova, A.; van Dalen, K.N.; Drijkoningen, G.G.

    2016-01-01

    Mesoscopic-scale heterogeneities in porous media cause attenuation and dispersion at seismic frequencies. Effective models are often used to account for this. We have developed a new effective poroelastic model for finely layered media, and we evaluated its impact focusing on the angle-dependent

  9. Fine-Grained Energy Modeling for the Source Code of a Mobile Application

    DEFF Research Database (Denmark)

    Li, Xueliang; Gallagher, John Patrick

    2016-01-01

    The goal of an energy model for source code is to lay a foundation for the application of energy-aware programming techniques. State of the art solutions are based on source-line energy information. In this paper, we present an approach to constructing a fine-grained energy model which is able...

  10. The standard model and the fine structure constant at Planck distances in Bennet-Brene-Nielsen-Picek random dynamics

    International Nuclear Information System (INIS)

    Laperashvili, L.V.

    1994-01-01

    An overview of papers by Nielsen, Bennet, Brene, and Picek, forming the basis of the model called random dynamics, is given in the first part of this work. The fine structure constant is calculated in the second part of this work by using the technique of path integration in the U(1) lattice gauge theory. It is shown that \alpha_{U(1),crit}^{-1} ∼ 19.8. This value is in agreement with the prediction of random dynamics. The obtained results are compared with the results of Monte Carlo simulations. 20 refs., 3 figs., 1 tab

  11. Modeling of meteorology, chemistry and aerosol for the 2017 Utah Winter Fine Particle Study

    Science.gov (United States)

    McKeen, S. A.; Angevine, W. M.; McDonald, B.; Ahmadov, R.; Franchin, A.; Middlebrook, A. M.; Fibiger, D. L.; McDuffie, E. E.; Womack, C.; Brown, S. S.; Moravek, A.; Murphy, J. G.; Trainer, M.

    2017-12-01

The Utah Winter Fine Particle Study (UWFPS-17) field project took place during January and February of 2017 within the populated region of the Great Salt Lake, Utah. The study focused on understanding the meteorology and chemistry associated with high particulate matter (PM) levels often observed near Salt Lake City during stable wintertime conditions. Detailed composition and meteorological observations were taken from the NOAA Twin-Otter aircraft and several surface sites during the study period, and extremely high aerosol conditions were encountered for two cold-pool episodes occurring in the last 2 weeks of January. A clear understanding of the photochemical and aerosol processes leading to these high PM events is still lacking. Here we present high spatiotemporal resolution simulations of meteorology, PM and chemistry over Utah from January 13 to February 1, 2017 using the WRF/Chem photochemical model. Correctly characterizing the meteorology is difficult due to the complex terrain and shallow inversion layers. We discuss the approach and limitations of the simulated meteorology, and evaluate low-level pollutant mixing using vertical profiles from missed airport approaches by the NOAA Twin-Otter performed routinely during each flight. Full photochemical simulations are calculated using NOx, ammonia and VOC emissions from the U.S. EPA NEI-2011 emissions inventory. Comparisons of the observed vertical column amounts of NOx, ammonia, aerosol nitrate and ammonium with model results show that the inventory estimates for ammonia emissions are low by a factor of four and NOx emissions are low by nearly a factor of two. The partitioning of both nitrate and NH3 between gas and particle phase depends strongly on the NH3 and NOx emissions to the model and calculated NOx to nitrate conversion rates. These rates are underestimated by gas-phase chemistry alone, even though surface snow albedo increases photolysis rates by nearly a factor of two. Several additional conversion

  12. Fine-tuning problem in renormalized perturbation theory: Spontaneously-broken gauge models

    Energy Technology Data Exchange (ETDEWEB)

    Foda, O.E. (Purdue Univ., Lafayette, IN (USA). Dept. of Physics)

    1983-04-28

    We study the stability of tree-level gauge hierarchies at higher orders in renormalized perturbation theory, in a model with spontaneously-broken gauge symmetries. We confirm previous results indicating that if the model is renormalized using BPHZ, then the tree-level hierarchy is not upset by the radiative corrections. Consequently, no fine-tuning of the initial parameters is required to maintain it, in contrast to the result obtained using Dimensional Renormalization. This verifies the conclusion that the need for fine-tuning, when it arises, is an artifact of the application of a certain class of renormalization schemes.

  13. The fine-tuning problem in renormalized perturbation theory: Spontaneously-broken gauge models

    International Nuclear Information System (INIS)

    Foda, O.E.

    1983-01-01

    We study the stability of tree-level gauge hierarchies at higher orders in renormalized perturbation theory, in a model with spontaneously-broken gauge symmetries. We confirm previous results indicating that if the model is renormalized using BPHZ, then the tree-level hierarchy is not upset by the radiative corrections. Consequently, no fine-tuning of the initial parameters is required to maintain it, in contrast to the result obtained using Dimensional Renormalization. This verifies the conclusion that the need for fine-tuning, when it arises, is an artifact of the application of a certain class of renormalization schemes. (orig.)

  14. ECONOMIC MODELING STOCKS CONTROL SYSTEM: SIMULATION MODEL

    OpenAIRE

    Климак, М.С.; Войтко, С.В.

    2016-01-01

Theoretical and applied aspects of developing simulation models to predict the optimal development of production systems that create tangible products and services are considered. It is shown that the process of inventory control requires economic and mathematical modelling in view of the complexity of theoretical studies. A simulation model of stock control is presented that supports management decisions in production logistics.

  15. Aviation Model: A Fine-Scale Numerical Weather Prediction System for Aviation Applications at the Hong Kong International Airport

    Directory of Open Access Journals (Sweden)

    Wai-Kin Wong

    2013-01-01

Full Text Available The Hong Kong Observatory (HKO) is planning to implement a fine-resolution Numerical Weather Prediction (NWP) model for supporting the aviation weather applications at the Hong Kong International Airport (HKIA). This new NWP model system, called Aviation Model (AVM), is configured at a horizontal grid spacing of 600 m and 200 m. It is based on the WRF-ARW (Advanced Research WRF) model and has sufficient computation efficiency to produce hourly updated forecasts up to 9 hours ahead on a future high performance computer system with a theoretical peak performance of around 10 TFLOPS. AVM will be nested inside the operational mesoscale NWP model of HKO with a horizontal resolution of 2 km. In this paper, initial numerical experiment results in forecasts of windshear events due to sea breeze and terrain effects are discussed. The simulation of sea-breeze-related windshear is quite successful, and the headwind change observed from flight data could be reproduced in the model forecast. Some impacts of physical processes on generating the fine-scale wind circulation and the development of significant convection are illustrated. The paper also discusses the limitations in the current model setup and proposes methods for the future development of AVM.

  16. Progress in modeling and simulation.

    Science.gov (United States)

    Kindler, E

    1998-01-01

For the modeling of systems, computers are used more and more, while the other "media" (including the human intellect) that carry models are being abandoned. For the modeling of knowledge, i.e. of more or less general concepts (possibly used to model systems composed of instances of such concepts), object-oriented programming is nowadays widely used. For the modeling of processes that exist and develop in time, computer simulation is used, and its results are often presented by means of animation (graphical pictures moving and changing in time). Unfortunately, object-oriented programming tools are commonly not designed to be of great use for simulation, while programming tools for simulation do not enable their users to exploit the advantages of object-oriented programming. Nevertheless, there are exceptions that make it possible to use general concepts represented in a computer for constructing simulation models and for modifying them easily. They are described in the present paper, together with precise definitions of modeling, simulation and object-oriented programming (including cases that do not satisfy the definitions and risk introducing misunderstanding), and an outline of their applications and further development. Since computing systems are being introduced as control components into a large spectrum of (technological, social and biological) systems, attention is directed to models of systems containing modeling components.

  17. Constructing a consumption model of fine dining from the perspective of behavioral economics

    Science.gov (United States)

    Tsai, Sang-Bing

    2018-01-01

    Numerous factors affect how people choose a fine dining restaurant, including food quality, service quality, food safety, and hedonic value. A conceptual framework for evaluating restaurant selection behavior has not yet been developed. This study surveyed 150 individuals with fine dining experience and proposed the use of mental accounting and axiomatic design to construct a consumer economic behavior model. Linear and logistic regressions were employed to determine model correlations and the probability of each factor affecting behavior. The most crucial factor was food quality, followed by service and dining motivation, particularly regarding family dining. Safe ingredients, high cooking standards, and menu innovation all increased the likelihood of consumers choosing fine dining restaurants. PMID:29641554

  18. Fine numerical modelling of thermohydraulic phenomena in EDF PWR reactors

    International Nuclear Information System (INIS)

    Boulot, F.

    1993-01-01

Over the last 20 years, EDF has developed a family of 2D and 3D industrial thermohydraulics software to solve problems encountered in existing PWR power plants and to design new reactors for the future. The equations used in the models are the averaged Navier-Stokes and energy equations. A brief description is given of the four main codes developed for single-phase and two-phase water-steam flows, some of which use finite difference or finite volume methods, while others make use of finite element methods. An example of application is given for each code. (author). 4 figs., 4 refs

  19. The fine-tuning cost of the likelihood in SUSY models

    International Nuclear Information System (INIS)

    Ghilencea, D.M.; Ross, G.G.

    2013-01-01

In SUSY models, the fine-tuning of the electroweak (EW) scale with respect to their parameters γ_i = {m_0, m_{1/2}, μ_0, A_0, B_0, …} and the maximal likelihood L to fit the experimental data are usually regarded as two different problems. We show that, if one regards the EW minimum conditions as constraints that fix the EW scale, this commonly held view is not correct and that the likelihood contains all the information about fine-tuning. In this case we show that the corrected likelihood is equal to the ratio L/Δ of the usual likelihood L and the traditional fine-tuning measure Δ of the EW scale. A similar result is obtained for the integrated likelihood over the set {γ_i}, that can be written as a surface integral of the ratio L/Δ, with the surface in γ_i space determined by the EW minimum constraints. As a result, a large likelihood actually demands a large ratio L/Δ or equivalently, a small χ²_new = χ²_old + 2 ln Δ. This shows the fine-tuning cost to the likelihood (χ²_new) of the EW scale stability enforced by SUSY, that is ignored in data fits. A good χ²_new/d.o.f. ≈ 1 thus demands SUSY models have a fine-tuning amount Δ ≪ exp(d.o.f./2), which provides a model-independent criterion for acceptable fine-tuning. If this criterion is not met, one can thus rule out SUSY models without a further χ²/d.o.f. analysis. Numerical methods to fit the data can easily be adapted to account for this effect.
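    As an illustration of the abstract's bookkeeping, the short sketch below evaluates χ²_new = χ²_old + 2 ln Δ and the acceptability criterion Δ ≪ exp(d.o.f./2); the values of χ²_old, Δ and the number of degrees of freedom are made-up examples, not results from the paper.

```python
# Hypothetical illustration of the fine-tuning correction to the likelihood
# described above; chi2_old, delta and dof are invented example values.
import math

chi2_old = 20.0   # usual chi-square of a fit (hypothetical)
delta    = 500.0  # traditional EW fine-tuning measure Delta (hypothetical)
dof      = 22     # degrees of freedom of the fit (hypothetical)

chi2_new = chi2_old + 2.0 * math.log(delta)   # fine-tuning cost added to chi^2
print(f"chi2_old/dof = {chi2_old / dof:.2f}")
print(f"chi2_new/dof = {chi2_new / dof:.2f}")

# Model-independent acceptability criterion quoted in the abstract:
# Delta << exp(dof / 2)
print(f"Delta = {delta:.0f}, exp(dof/2) = {math.exp(dof / 2):.3g}")
print("criterion satisfied" if delta < math.exp(dof / 2) else "criterion violated")
```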

  20. Stochastic modeling analysis and simulation

    CERN Document Server

    Nelson, Barry L

    1995-01-01

    A coherent introduction to the techniques for modeling dynamic stochastic systems, this volume also offers a guide to the mathematical, numerical, and simulation tools of systems analysis. Suitable for advanced undergraduates and graduate-level industrial engineers and management science majors, it proposes modeling systems in terms of their simulation, regardless of whether simulation is employed for analysis. Beginning with a view of the conditions that permit a mathematical-numerical analysis, the text explores Poisson and renewal processes, Markov chains in discrete and continuous time, se

  1. FASTBUS simulation models in VHDL

    International Nuclear Information System (INIS)

    Appelquist, G.

    1992-11-01

    Four hardware simulation models implementing the FASTBUS protocol are described. The models are written in the VHDL hardware description language to obtain portability, i.e. without relations to any specific simulator. They include two complete FASTBUS devices, a full-duplex segment interconnect and ancillary logic for the segment. In addition, master and slave models using a high level interface to describe FASTBUS operations, are presented. With these models different configurations of FASTBUS systems can be evaluated and the FASTBUS transactions of new devices can be verified. (au)

  2. Model reduction for circuit simulation

    CERN Document Server

    Hinze, Michael; Maten, E Jan W Ter

    2011-01-01

    Simulation based on mathematical models plays a major role in computer aided design of integrated circuits (ICs). Decreasing structure sizes, increasing packing densities and driving frequencies require the use of refined mathematical models, and to take into account secondary, parasitic effects. This leads to very high dimensional problems which nowadays require simulation times too large for the short time-to-market demands in industry. Modern Model Order Reduction (MOR) techniques present a way out of this dilemma in providing surrogate models which keep the main characteristics of the devi

  3. Simulation and design method in advanced nanomaterials fine-tuning for some perovskites type AHE study

    International Nuclear Information System (INIS)

    Mohorianu, S.; Lozovan, M.; Rusu, F.-V.

    2009-01-01

Nanostructured materials with tailored properties are now essential for future applications in current industrial manufacturing. Extracting valuable information from data by using distributed computer processing and storage technologies, as well as the Artificial Neural Network (ANN) and the development of advanced algorithms for knowledge discovery, is the purpose of our work. We describe how a Simulation and Design Method (SDM) approach, based on our latest results, is applied to two perovskite-type materials, La_{0.7}Ca_{0.3}MnO_3 and La_{0.7}Sr_{0.3}MnO_3, in order to study the Anomalous Hall Effect (AHE). Our new ANN model is intended to contribute to the effort to improve some properties of new materials. It implements and uses the basic building blocks of neural computation, such as multi-layer perceptrons. An ANN can learn associative patterns and approximate the functional relationship between a set of inputs and outputs. Modeling and simulation techniques affect all stages in the development and improvement of new materials, from the initial formation of concepts to synthesis and characterization of properties. A new SDM with ANN for some nanomagnetic materials is given. Neural networks have been applied successfully in the identification and classification of some nanomagnetic characteristics from a large amount of data. (authors)

  4. Greenhouse simulation models.

    NARCIS (Netherlands)

    Bot, G.P.A.

    1989-01-01

    A model is a representation of a real system to describe some properties i.e. internal factors of that system (out-puts) as function of some external factors (inputs). It is impossible to describe the relation between all internal factors (if even all internal factors could be defined) and all

  5. Theory and Examples of Mathematical Modeling for Fine Weave Pierced Fabric

    Directory of Open Access Journals (Sweden)

    ZHOU Yu-bo

    2017-04-01

    Full Text Available A mathematical abstraction and three-dimensional modeling method of three-dimensional woven fabric structure was developed for the fine weave pierced fabric, taking parametric continuity splines as the track function of tow. Based on the significant parameters of fine weave pierced fabric measured by MicroCT, eight kinds of the three-dimensional digital models of the fabric structure were established with two kinds of tow sections and four kinds of tow trajectory characteristic functions. There is a good agreement between the three-dimensional digital models and real fabric by comparing their structures and porosities. This mathematical abstraction and three-dimensional modeling method can be applied in micro models for sub unit cell and macro models for macroscopic scale fabrics, with high adaptability.

  6. A VRLA battery simulation model

    International Nuclear Information System (INIS)

    Pascoe, Phillip E.; Anbuky, Adnan H.

    2004-01-01

    A valve regulated lead acid (VRLA) battery simulation model is an invaluable tool for the standby power system engineer. The obvious use for such a model is to allow the assessment of battery performance. This may involve determining the influence of cells suffering from state of health (SOH) degradation on the performance of the entire string, or the running of test scenarios to ascertain the most suitable battery size for the application. In addition, it enables the engineer to assess the performance of the overall power system. This includes, for example, running test scenarios to determine the benefits of various load shedding schemes. It also allows the assessment of other power system components, either for determining their requirements and/or vulnerabilities. Finally, a VRLA battery simulation model is vital as a stand alone tool for educational purposes. Despite the fundamentals of the VRLA battery having been established for over 100 years, its operating behaviour is often poorly understood. An accurate simulation model enables the engineer to gain a better understanding of VRLA battery behaviour. A system level multipurpose VRLA battery simulation model is presented. It allows an arbitrary battery (capacity, SOH, number of cells and number of strings) to be simulated under arbitrary operating conditions (discharge rate, ambient temperature, end voltage, charge rate and initial state of charge). The model accurately reflects the VRLA battery discharge and recharge behaviour. This includes the complex start of discharge region known as the coup de fouet

  7. Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2009-01-01

    This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial
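    As a minimal illustration of the designs and metamodels named in this record, the sketch below runs a toy stand-in "simulation" at the points of a two-level full factorial design and fits a first-order polynomial metamodel by least squares; the toy response and its coefficients are invented for illustration and are not taken from the contribution.

```python
# Minimal sketch of a two-level factorial design with a first-order
# polynomial metamodel; the "simulation" below is a toy stand-in.
import itertools
import numpy as np

def simulation(x1, x2, x3):
    # Toy response standing in for an expensive simulation model.
    return 5.0 + 2.0 * x1 - 1.5 * x2 + 0.5 * x3 + 0.1 * np.random.randn()

levels = [-1.0, 1.0]                                          # coded factor levels
design = np.array(list(itertools.product(levels, repeat=3)))  # 2^3 full factorial
y = np.array([simulation(*row) for row in design])

X = np.column_stack([np.ones(len(design)), design])           # intercept + main effects
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated metamodel coefficients (intercept, b1, b2, b3):")
print(np.round(beta, 3))
```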

  8. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

Computer Based Modelling and Simulation – Modelling Deterministic Systems. N K Srinivasan. General Article. Resonance – Journal of Science Education, Volume 6, Issue 3, March 2001, pp. 46-54.

  9. Vehicle dynamics modeling and simulation

    CERN Document Server

    Schramm, Dieter; Bardini, Roberto

    2014-01-01

    The authors examine in detail the fundamentals and mathematical descriptions of the dynamics of automobiles. In this context different levels of complexity will be presented, starting with basic single-track models up to complex three-dimensional multi-body models. A particular focus is on the process of establishing mathematical models on the basis of real cars and the validation of simulation results. The methods presented are explained in detail by means of selected application scenarios.

  10. Modeling the potential area of occupancy at fine resolution may reduce uncertainty in species range estimates

    DEFF Research Database (Denmark)

    Jiménez-Alfaro, Borja; Draper, David; Nogues, David Bravo

    2012-01-01

    and maximum entropy modeling to assess whether different sampling (expert versus systematic surveys) may affect AOO estimates based on habitat suitability maps, and the differences between such measurements and traditional coarse-grid methods. Fine-scale models performed robustly and were not influenced...... by survey protocols, providing similar habitat suitability outputs with high spatial agreement. Model-based estimates of potential AOO were significantly smaller than AOO measures obtained from coarse-scale grids, even if the first were obtained from conservative thresholds based on the Minimal Predicted...... permit comparable measures among species. We conclude that estimates of AOO based on fine-resolution distribution models are more robust tools for risk assessment than traditional systems, allowing a better understanding of species ranges at habitat level....

  11. Numerical simulation of Higgs models

    International Nuclear Information System (INIS)

    Jaster, A.

    1995-10-01

    The SU(2) Higgs and the Schwinger model on the lattice were analysed. Numerical simulations of the SU(2) Higgs model were performed to study the finite temperature electroweak phase transition. With the help of the multicanonical method the distribution of an order parameter at the phase transition point was measured. This was used to obtain the order of the phase transition and the value of the interface tension with the histogram method. Numerical simulations were also performed at zero temperature to perform renormalization. The measured values for the Wilson loops were used to determine the static potential and from this the renormalized gauge coupling. The Schwinger model was simulated at different gauge couplings to analyse the properties of the Kaplan-Shamir fermions. The prediction that the mass parameter gets only multiplicative renormalization was tested and verified. (orig.)

  12. Stochastic models: theory and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2008-03-01

    Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
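    As a concrete (and deliberately simple) instance of the sample-generation algorithms this report is concerned with, the sketch below draws independent realizations of an Ornstein–Uhlenbeck process by Euler–Maruyama time stepping; the choice of process and all parameters are illustrative assumptions, not algorithms taken from the report.

```python
# Illustrative Monte Carlo sample generation for a simple stochastic process
# (Ornstein-Uhlenbeck), not a specific algorithm from the report.
import numpy as np

def ou_samples(n_paths=2000, n_steps=1000, dt=0.01,
               theta=1.0, mu=0.0, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros((n_paths, n_steps + 1))
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)   # Brownian increments
        x[:, k + 1] = x[:, k] + theta * (mu - x[:, k]) * dt + sigma * dw
    return x

paths = ou_samples()
print("sample std at final time:     ", paths[:, -1].std())
print("theoretical stationary std:   ", 0.5 / np.sqrt(2 * 1.0))  # sigma / sqrt(2*theta)
```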

  13. Deep learning-based fine-grained car make/model classification for visual surveillance

    Science.gov (United States)

    Gundogdu, Erhan; Parıldı, Enes Sinan; Solmaz, Berkan; Yücesoy, Veysel; Koç, Aykut

    2017-10-01

    Fine-grained object recognition is a potential computer vision problem that has been recently addressed by utilizing deep Convolutional Neural Networks (CNNs). Nevertheless, the main disadvantage of classification methods relying on deep CNN models is the need for considerably large amount of data. In addition, there exists relatively less amount of annotated data for a real world application, such as the recognition of car models in a traffic surveillance system. To this end, we mainly concentrate on the classification of fine-grained car make and/or models for visual scenarios by the help of two different domains. First, a large-scale dataset including approximately 900K images is constructed from a website which includes fine-grained car models. According to their labels, a state-of-the-art CNN model is trained on the constructed dataset. The second domain that is dealt with is the set of images collected from a camera integrated to a traffic surveillance system. These images, which are over 260K, are gathered by a special license plate detection method on top of a motion detection algorithm. An appropriately selected size of the image is cropped from the region of interest provided by the detected license plate location. These sets of images and their provided labels for more than 30 classes are employed to fine-tune the CNN model which is already trained on the large scale dataset described above. To fine-tune the network, the last two fully-connected layers are randomly initialized and the remaining layers are fine-tuned in the second dataset. In this work, the transfer of a learned model on a large dataset to a smaller one has been successfully performed by utilizing both the limited annotated data of the traffic field and a large scale dataset with available annotations. Our experimental results both in the validation dataset and the real field show that the proposed methodology performs favorably against the training of the CNN model from scratch.
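    A hedged sketch of the transfer-learning recipe described above, with PyTorch/torchvision assumed; the VGG backbone, layer indices, learning rates, checkpoint path and class count are illustrative stand-ins, not the authors' exact configuration.

```python
# Sketch of the recipe in the abstract: start from a CNN trained on the large
# web-collected car dataset, randomly re-initialize the last two fully-connected
# layers, and fine-tune the network on the ~30-class surveillance data.
import torch
import torch.nn as nn
from torchvision import models

num_surveillance_classes = 30          # assumption: "more than 30 classes"

model = models.vgg16(weights=None)     # stand-in backbone; the paper's CNN may differ
# state = torch.load("pretrained_on_900k_cars.pth")   # hypothetical checkpoint path
# model.load_state_dict(state)

# Randomly re-initialize the last two fully-connected layers of the classifier.
model.classifier[3] = nn.Linear(4096, 4096)
model.classifier[6] = nn.Linear(4096, num_surveillance_classes)

# Fine-tune: all parameters remain trainable; a smaller learning rate is commonly
# used for the pretrained layers than for the re-initialized ones (an assumption here).
pretrained_params = [p for name, p in model.named_parameters()
                     if not (name.startswith("classifier.3") or name.startswith("classifier.6"))]
new_params = list(model.classifier[3].parameters()) + list(model.classifier[6].parameters())
optimizer = torch.optim.SGD([
    {"params": pretrained_params, "lr": 1e-4},
    {"params": new_params, "lr": 1e-3},
], momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of license-plate-region crops.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_surveillance_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```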

  14. Plasma modelling and numerical simulation

    International Nuclear Information System (INIS)

    Van Dijk, J; Kroesen, G M W; Bogaerts, A

    2009-01-01

    Plasma modelling is an exciting subject in which virtually all physical disciplines are represented. Plasma models combine the electromagnetic, statistical and fluid dynamical theories that have their roots in the 19th century with the modern insights concerning the structure of matter that were developed throughout the 20th century. The present cluster issue consists of 20 invited contributions, which are representative of the state of the art in plasma modelling and numerical simulation. These contributions provide an in-depth discussion of the major theories and modelling and simulation strategies, and their applications to contemporary plasma-based technologies. In this editorial review, we introduce and complement those papers by providing a bird's eye perspective on plasma modelling and discussing the historical context in which it has surfaced. (editorial review)

  15. Model for Simulation Atmospheric Turbulence

    DEFF Research Database (Denmark)

    Lundtang Petersen, Erik

    1976-01-01

    A method that produces realistic simulations of atmospheric turbulence is developed and analyzed. The procedure makes use of a generalized spectral analysis, often called a proper orthogonal decomposition or the Karhunen-Loève expansion. A set of criteria, emphasizing a realistic appearance...... eigenfunctions and estimates of the distributions of the corresponding expansion coefficients. The simulation method utilizes the eigenfunction expansion procedure to produce preliminary time histories of the three velocity components simultaneously. As a final step, a spectral shaping procedure is then applied....... The method is unique in modeling the three velocity components simultaneously, and it is found that important cross-statistical features are reasonably well-behaved. It is concluded that the model provides a practical, operational simulator of atmospheric turbulence....

  16. The fine-tuning cost of the likelihood in SUSY models

    CERN Document Server

    Ghilencea, D M

    2013-01-01

    In SUSY models, the fine tuning of the electroweak (EW) scale with respect to their parameters gamma_i={m_0, m_{1/2}, mu_0, A_0, B_0,...} and the maximal likelihood L to fit the experimental data are usually regarded as two different problems. We show that, if one regards the EW minimum conditions as constraints that fix the EW scale, this commonly held view is not correct and that the likelihood contains all the information about fine-tuning. In this case we show that the corrected likelihood is equal to the ratio L/Delta of the usual likelihood L and the traditional fine tuning measure Delta of the EW scale. A similar result is obtained for the integrated likelihood over the set {gamma_i}, that can be written as a surface integral of the ratio L/Delta, with the surface in gamma_i space determined by the EW minimum constraints. As a result, a large likelihood actually demands a large ratio L/Delta or equivalently, a small chi^2_{new}=chi^2_{old}+2*ln(Delta). This shows the fine-tuning cost to the likelihood ...

  17. Mechanical Behavior Analysis of Y-Type S-SRC Column in a Large-Space Vertical Hybrid Structure Using Local Fine Numerical Simulation Method

    Directory of Open Access Journals (Sweden)

    Jianguang Yue

    2018-01-01

Full Text Available In a large spatial structure, the most important members are normally of a special type and are key to the safety of the global structure. It is difficult for common test methods to reproduce the complex spatial loading state of such a local member, which is needed to study the details of its mechanical behavior. Therefore, a local-fine finite element model was proposed and a large-space vertical hybrid structure was numerically simulated. The seismic responses of the global structure and of the Y-type S-SRC column were analyzed under El Centro seismic motions with peak accelerations of 35 gal and 220 gal. The numerical model was verified against the results of a seismic shaking table test of the structure model. The failure mechanism and stiffness damage evolution of the Y-type S-SRC column were analyzed. The calculated results agreed well with the test results, indicating that the local-fine FEM can reflect the mechanical details of local members in a large spatial structure.

  18. A novel approach to finely tuned supersymmetric standard models: The case of the non-universal Higgs mass model

    Science.gov (United States)

    Yamaguchi, Masahiro; Yin, Wen

    2018-02-01

Discarding the prejudice about fine tuning, we propose a novel and efficient approach to identify relevant regions of fundamental parameter space in supersymmetric models with some amount of fine tuning. The essential idea is the mapping of experimental constraints at a low-energy scale, rather than the parameter sets, to those of the fundamental parameter space. Applying this method to the non-universal Higgs mass model, we identify a new interesting superparticle mass pattern where some of the first two generation squarks are light whilst the stops are kept as heavy as 6 TeV. Furthermore, as another application of this method, we show that the discrepancy of the muon anomalous magnetic dipole moment can be accounted for by a supersymmetric contribution within the 1σ level of the experimental and theoretical errors, which was overlooked by previous studies due to the extreme fine tuning required.

  19. Validation process of simulation model

    International Nuclear Information System (INIS)

    San Isidro, M. J.

    1998-01-01

A methodology for the empirical validation of any detailed simulation model is presented. This kind of validation is always tied to an experimental case. Empirical validation is residual in nature, because the conclusions are based on comparisons between simulated outputs and experimental measurements. The methodology guides the detection of failures of the simulation model and can also be used as a guide in the design of subsequent experiments. Three steps can be clearly differentiated: sensitivity analysis, which can be performed with DSA (differential sensitivity analysis) and with MCSA (Monte-Carlo sensitivity analysis); finding the optimal domains of the input parameters, for which a procedure based on Monte-Carlo methods and cluster techniques has been developed; and residual analysis, carried out in the time domain and in the frequency domain using correlation analysis and spectral analysis. As an application of this methodology, the validation carried out on a thermal simulation model of buildings is presented, studying the behavior of building components in a Test Cell of LECE of CIEMAT (Spain). (Author) 17 refs.

  20. Modeling and Simulation for Safeguards

    International Nuclear Information System (INIS)

    Swinhoe, Martyn T.

    2012-01-01

The purpose of this talk is to give an overview of the role of modeling and simulation in Safeguards R and D and introduce you to (some of) the tools used. Some definitions are: (1) Modeling - the representation, often mathematical, of a process, concept, or operation of a system, often implemented by a computer program; (2) Simulation - the representation of the behavior or characteristics of one system through the use of another system, especially a computer program designed for the purpose; and (3) Safeguards - the timely detection of diversion of significant quantities of nuclear material. The roles of modeling and simulation are: (1) to calculate amounts of material (plant modeling); (2) to calculate signatures of nuclear material etc. (source terms); and (3) to assess detector performance (radiation transport and detection). Plant modeling software (e.g. FACSIM) gives the flows and amount of material stored at all parts of the process. In safeguards this allows us to calculate the expected uncertainty of the mass and evaluate the expected MUF. We can determine the measurement accuracy required to achieve a certain performance.
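    To make the MUF bookkeeping mentioned above concrete, here is a toy material-balance sketch using the standard definition MUF = (beginning inventory + receipts) − (ending inventory + shipments); all quantities and uncertainties below are invented for illustration and are not from any facility model.

```python
# Toy illustration of a safeguards material balance: MUF (material
# unaccounted for) and its measurement uncertainty by simple error propagation.
# All numbers are hypothetical.
import math

beginning_inventory = 120.00   # kg, previous physical inventory (hypothetical)
receipts            = 45.00    # kg received during the balance period
shipments           = 40.00    # kg shipped / removed
ending_inventory    = 124.70   # kg, current physical inventory

muf = (beginning_inventory + receipts) - (ending_inventory + shipments)

# Assumed 1-sigma measurement uncertainties for each term (kg), combined in
# quadrature under the simplifying assumption of independent errors.
sigmas = {"begin": 0.25, "receipts": 0.10, "shipments": 0.10, "end": 0.25}
sigma_muf = math.sqrt(sum(s ** 2 for s in sigmas.values()))

print(f"MUF = {muf:+.2f} kg, sigma_MUF = {sigma_muf:.2f} kg")
print("significant at 3 sigma" if abs(muf) > 3 * sigma_muf else "not significant at 3 sigma")
```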

  1. Bayesian network modelling on data from fine needle aspiration cytology examination for breast cancer diagnosis

    OpenAIRE

    Ding, Xuemei; Cao, Yi; Zhai, Jia; Maguire, Liam; Li, Yuhua; Yang, Hongqin; Wang, Yuhua; Zeng, Jinshu; Liu, Shuo

    2017-01-01

The paper employed a Bayesian network (BN) modelling approach to discover causal dependencies among different data features of the Breast Cancer Wisconsin Dataset (BCWD) derived from the openly sourced UCI repository. The K2 learning algorithm and k-fold cross validation were used to construct and optimize the BN structure. Compared to Naïve Bayes (NB), the obtained BN presented better performance for breast cancer diagnosis based on fine needle aspiration cytology (FNAC) examination. It also showed that, amon...
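    The evaluation protocol referred to above (k-fold cross validation against a Naïve Bayes baseline) can be sketched with scikit-learn and its bundled Breast Cancer Wisconsin (Diagnostic) data; the Bayesian-network construction with the K2 algorithm itself is not reproduced here, so this shows only the baseline side of the comparison.

```python
# Sketch of the evaluation protocol only: 10-fold cross-validation of a Naive
# Bayes baseline on the Breast Cancer Wisconsin (Diagnostic) data bundled with
# scikit-learn. The paper's BN construction (K2 structure learning) is not shown.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)   # FNA-derived features, benign/malignant labels
scores = cross_val_score(GaussianNB(), X, y, cv=10, scoring="accuracy")
print(f"10-fold NB accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```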

  2. Modeling and Simulation of Nanoindentation

    Science.gov (United States)

    Huang, Sixie; Zhou, Caizhi

    2017-11-01

    Nanoindentation is a hardness test method applied to small volumes of material which can provide some unique effects and spark many related research activities. To fully understand the phenomena observed during nanoindentation tests, modeling and simulation methods have been developed to predict the mechanical response of materials during nanoindentation. However, challenges remain with those computational approaches, because of their length scale, predictive capability, and accuracy. This article reviews recent progress and challenges for modeling and simulation of nanoindentation, including an overview of molecular dynamics, the quasicontinuum method, discrete dislocation dynamics, and the crystal plasticity finite element method, and discusses how to integrate multiscale modeling approaches seamlessly with experimental studies to understand the length-scale effects and microstructure evolution during nanoindentation tests, creating a unique opportunity to establish new calibration procedures for the nanoindentation technique.

  3. Assessment of Molecular Modeling & Simulation

    Energy Technology Data Exchange (ETDEWEB)

    None

    2002-01-03

    This report reviews the development and applications of molecular and materials modeling in Europe and Japan in comparison to those in the United States. Topics covered include computational quantum chemistry, molecular simulations by molecular dynamics and Monte Carlo methods, mesoscale modeling of material domains, molecular-structure/macroscale property correlations like QSARs and QSPRs, and related information technologies like informatics and special-purpose molecular-modeling computers. The panel's findings include the following: The United States leads this field in many scientific areas. However, Canada has particular strengths in DFT methods and homogeneous catalysis; Europe in heterogeneous catalysis, mesoscale, and materials modeling; and Japan in materials modeling and special-purpose computing. Major government-industry initiatives are underway in Europe and Japan, notably in multi-scale materials modeling and in development of chemistry-capable ab-initio molecular dynamics codes.

  4. NRTA simulation by modeling PFPF

    International Nuclear Information System (INIS)

    Asano, Takashi; Fujiwara, Shigeo; Takahashi, Saburo; Shibata, Junichi; Totsu, Noriko

    2003-01-01

In PFPF, the NRTA system has been applied since 1991. By evaluating the facility material accountancy data provided by the operator at each IIV, it has been confirmed that no significant MUF was generated. For a throughput of PFPF scale, MUF can be evaluated with a sufficient detection probability by the present NRTA evaluation manner. However, as throughput increases, the uncertainty of the material accountancy will increase and the detection probability will decline. The relationship between increasing throughput and declining detection probability, and the maximum throughput for which the following measures maintain a sufficient detection probability, were evaluated by simulation of the NRTA system. The simulation was performed by modeling PFPF. The measures for increasing the detection probability are: shortening of the evaluation interval, and segmentation of the evaluation area. This report shows the results of these simulations. (author)

  5. Evolution of the fine-structure constant in runaway dilaton models

    Energy Technology Data Exchange (ETDEWEB)

    Martins, C.J.A.P., E-mail: Carlos.Martins@astro.up.pt [Centro de Astrofísica, Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal); Instituto de Astrofísica e Ciências do Espaço, CAUP, Rua das Estrelas, 4150-762 Porto (Portugal); Vielzeuf, P.E., E-mail: pvielzeuf@ifae.es [Institut de Física d' Altes Energies, Universitat Autònoma de Barcelona, E-08193 Bellaterra (Barcelona) (Spain); Martinelli, M., E-mail: martinelli@thphys.uni-heidelberg.de [Institute for Theoretical Physics, University of Heidelberg, Philosophenweg 16, 69120, Heidelberg (Germany); Calabrese, E., E-mail: erminia.calabrese@astro.ox.ac.uk [Sub-department of Astrophysics, University of Oxford, Keble Road, Oxford OX1 3RH (United Kingdom); Pandolfi, S., E-mail: stefania@dark-cosmology.dk [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, DK-2100 Copenhagen (Denmark)

    2015-04-09

    We study the detailed evolution of the fine-structure constant α in the string-inspired runaway dilaton class of models of Damour, Piazza and Veneziano. We provide constraints on this scenario using the most recent α measurements and discuss ways to distinguish it from alternative models for varying α. For model parameters which saturate bounds from current observations, the redshift drift signal can differ considerably from that of the canonical ΛCDM paradigm at high redshifts. Measurements of this signal by the forthcoming European Extremely Large Telescope (E-ELT), together with more sensitive α measurements, will thus dramatically constrain these scenarios.

  6. Evolution of the fine-structure constant in runaway dilaton models

    International Nuclear Information System (INIS)

    Martins, C.J.A.P.; Vielzeuf, P.E.; Martinelli, M.; Calabrese, E.; Pandolfi, S.

    2015-01-01

    We study the detailed evolution of the fine-structure constant α in the string-inspired runaway dilaton class of models of Damour, Piazza and Veneziano. We provide constraints on this scenario using the most recent α measurements and discuss ways to distinguish it from alternative models for varying α. For model parameters which saturate bounds from current observations, the redshift drift signal can differ considerably from that of the canonical ΛCDM paradigm at high redshifts. Measurements of this signal by the forthcoming European Extremely Large Telescope (E-ELT), together with more sensitive α measurements, will thus dramatically constrain these scenarios

  7. Predictive Modelling to Identify Near-Shore, Fine-Scale Seabird Distributions during the Breeding Season.

    Science.gov (United States)

    Warwick-Evans, Victoria C; Atkinson, Philip W; Robinson, Leonie A; Green, Jonathan A

    2016-01-01

During the breeding season seabirds are constrained to coastal areas and are restricted in their movements, spending much of their time in near-shore waters either loafing or foraging. However, in using these areas they may be threatened by anthropogenic activities such as fishing, watersports and coastal developments including marine renewable energy installations. Although many studies describe large scale interactions between seabirds and the environment, the drivers behind near-shore, fine-scale distributions are not well understood. For example, Alderney is an important breeding ground for many species of seabird and has a diversity of human uses of the marine environment, thus providing an ideal location to investigate the near-shore fine-scale interactions between seabirds and the environment. We used vantage point observations of seabird distribution, collected during the 2013 breeding season in order to identify and quantify some of the environmental variables affecting the near-shore, fine-scale distribution of seabirds in Alderney's coastal waters. We validate the models with observation data collected in 2014 and show that water depth, distance to the intertidal zone, and distance to the nearest seabird nest are key predictors in the distribution of Alderney's seabirds. AUC values for each species suggest that these models perform well, although the model for shags performed better than those for auks and gulls. While further unexplained underlying localised variation in the environmental conditions will undoubtedly affect the fine-scale distribution of seabirds in near-shore waters, we demonstrate the potential of this approach in marine planning and decision making.
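    A minimal sketch of the AUC-based validation step mentioned above, with scikit-learn assumed and synthetic placeholder data standing in for the Alderney survey observations and habitat-suitability outputs.

```python
# Illustrative AUC check of a habitat-suitability model against independent
# presence/absence observations (all data below are synthetic placeholders).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 200
presence = rng.integers(0, 2, size=n)                 # validation-year observations (0/1)
# Hypothetical habitat-suitability scores: higher where the bird was observed.
suitability = np.clip(0.5 * presence + rng.normal(0.35, 0.2, size=n), 0.0, 1.0)

print(f"AUC = {roc_auc_score(presence, suitability):.2f}")
```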

  8. Predictive Modelling to Identify Near-Shore, Fine-Scale Seabird Distributions during the Breeding Season.

    Directory of Open Access Journals (Sweden)

    Victoria C Warwick-Evans

Full Text Available During the breeding season seabirds are constrained to coastal areas and are restricted in their movements, spending much of their time in near-shore waters either loafing or foraging. However, in using these areas they may be threatened by anthropogenic activities such as fishing, watersports and coastal developments including marine renewable energy installations. Although many studies describe large scale interactions between seabirds and the environment, the drivers behind near-shore, fine-scale distributions are not well understood. For example, Alderney is an important breeding ground for many species of seabird and has a diversity of human uses of the marine environment, thus providing an ideal location to investigate the near-shore fine-scale interactions between seabirds and the environment. We used vantage point observations of seabird distribution, collected during the 2013 breeding season in order to identify and quantify some of the environmental variables affecting the near-shore, fine-scale distribution of seabirds in Alderney's coastal waters. We validate the models with observation data collected in 2014 and show that water depth, distance to the intertidal zone, and distance to the nearest seabird nest are key predictors in the distribution of Alderney's seabirds. AUC values for each species suggest that these models perform well, although the model for shags performed better than those for auks and gulls. While further unexplained underlying localised variation in the environmental conditions will undoubtedly affect the fine-scale distribution of seabirds in near-shore waters, we demonstrate the potential of this approach in marine planning and decision making.

  9. Repository simulation model: Final report

    International Nuclear Information System (INIS)

    1988-03-01

    This report documents the application of computer simulation for the design analysis of the nuclear waste repository's waste handling and packaging operations. The Salt Repository Simulation Model was used to evaluate design alternatives during the conceptual design phase of the Salt Repository Project. Code development and verification was performed by the Office of Nuclear Waste Isolation (ONWL). The focus of this report is to relate the experience gained during the development and application of the Salt Repository Simulation Model to future repository design phases. Design of the repository's waste handling and packaging systems will require sophisticated analysis tools to evaluate complex operational and logistical design alternatives. Selection of these design alternatives in the Advanced Conceptual Design (ACD) and License Application Design (LAD) phases must be supported by analysis to demonstrate that the repository design will cost effectively meet DOE's mandated emplacement schedule and that uncertainties in the performance of the repository's systems have been objectively evaluated. Computer simulation of repository operations will provide future repository designers with data and insights that no other analytical form of analysis can provide. 6 refs., 10 figs

  10. Modelling the fine and coarse fraction of Pb, Cd, As and Ni air concentration in Spain

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, M. A.; Vivanco, M. G.

    2015-07-01

Lead, cadmium, arsenic and nickel are present in the air due to natural and anthropogenic emissions, normally attached to particles. Human health and ecosystems can be damaged by high atmospheric levels of these metals, since they can be introduced into organisms via inhalation or ingestion. Small particles are inhaled and embedded in lungs and alveoli more easily than coarse particles. The CHIMERE model is a Eulerian air quality model extensively used in air quality modelling. Metals have recently been included in this model in a special version developed in the CIEMAT modelling group (Madrid, Spain). Vivanco et al. (2011) and Gonzalez et al. (2012) showed an evaluation of the model performance for some metals in Spain and Europe. In these studies, metals were considered as fine particles. Nevertheless there is some observational evidence of the presence of some metals also in the coarse fraction. For this reason, a new attempt at modelling metals considering a fine (<2.5 μm) and coarse (2.5-10 μm) fraction has been made. Measurements of metal concentration in PM10, PM2.5 and PM1 recorded in Spain were used to obtain the new metal particle size distribution. On the other hand, natural emissions, not considered in the above mentioned studies, were implemented in the model by considering metal emissions associated with dust resuspension. An evaluation of the new version is presented and discussed for two domains in Spain, centered on Barcelona and Huelva respectively. (Author)

  11. Modelling the fine and coarse fraction of Pb, Cd, As and Ni air concentration in Spain

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, M.A.; Vivanco, M.

    2015-07-01

Lead, cadmium, arsenic and nickel are present in the air due to natural and anthropogenic emissions, normally attached to particles. Human health and ecosystems can be damaged by high atmospheric levels of these metals, since they can be introduced into organisms via inhalation or ingestion. Small particles are inhaled and embedded in lungs and alveoli more easily than coarse particles. The CHIMERE model is a Eulerian air quality model extensively used in air quality modelling. Metals have recently been included in this model in a special version developed in the CIEMAT modelling group (Madrid, Spain). Vivanco et al. (2011) and González et al. (2012) showed an evaluation of the model performance for some metals in Spain and Europe. In these studies, metals were considered as fine particles. Nevertheless there is some observational evidence of the presence of some metals also in the coarse fraction. For this reason, a new attempt at modelling metals considering a fine (<2.5 μm) and coarse (2.5-10 μm) fraction has been made. Measurements of metal concentration in PM10, PM2.5 and PM1 recorded in Spain were used to obtain the new metal particle size distribution. On the other hand, natural emissions, not considered in the above mentioned studies, were implemented in the model by considering metal emissions associated with dust resuspension. An evaluation of the new version is presented and discussed for two domains in Spain, centered on Barcelona and Huelva respectively. (Author)

  12. Modelling the fine and coarse fraction of heavy metals in Spain

    Science.gov (United States)

    García Vivanco, Marta; González, M. Angeles

    2014-05-01

Heavy metals, such as cadmium, lead, nickel, arsenic, copper, chrome, zinc and selenium, are present in the air due to natural and anthropogenic emissions, normally attached to particles. These metals can affect living organisms via inhalation or ingestion, causing damage to human health and ecosystems. Small particles are inhaled and embedded in lungs and alveoli more easily than coarse particles. The CHIMERE model is a Eulerian air quality model extensively used in air quality modelling. Metals have recently been included in this model in a special version developed in the CIEMAT (Madrid, Spain) modelling group. Vivanco et al. (2011) and González et al. (2012) showed the model performance for some metals in Spain and Europe. However, in these studies, metals were considered as fine particles. Some studies based on observed heavy metal air concentrations indicate the presence of metals also in the coarse fraction, especially for Cu and Zn. For this reason, a new attempt at modelling metals considering a fine (<2.5 μm) and coarse (2.5-10 μm) fraction has been made. References: Arsenic, Lead, Cadmium and Nickel Ambient Air Concentrations in Spain, Proceedings of the 11th International Conference on Computational Science and Its Applications (ICCSA 11), 243-246, 2011. González, M.A.; Vivanco, Marta; Palomino, Inmaculada; Garrido, Juan; Santiago, Manuel; Bessagnet, Bertrand: Modelling Some Heavy Metals Air Concentration in Europe. Water, Air & Soil Pollution, Vol. 223, Issue 8, p. 5227, Sep 2012.

  13. Modelling the fine and coarse fraction of Pb, Cd, As and Ni air concentration in Spain

    International Nuclear Information System (INIS)

    Gonzalez, M. A.; Vivanco, M. G.

    2015-01-01

Lead, cadmium, arsenic and nickel are present in the air due to natural and anthropogenic emissions, normally attached to particles. Human health and ecosystems can be damaged by high atmospheric levels of these metals, since they can be introduced into organisms via inhalation or ingestion. Small particles are inhaled and embedded in lungs and alveoli more easily than coarse particles. The CHIMERE model is a Eulerian air quality model extensively used in air quality modelling. Metals have recently been included in this model in a special version developed in the CIEMAT modelling group (Madrid, Spain). Vivanco et al. (2011) and Gonzalez et al. (2012) showed an evaluation of the model performance for some metals in Spain and Europe. In these studies, metals were considered as fine particles. Nevertheless there is some observational evidence of the presence of some metals also in the coarse fraction. For this reason, a new attempt at modelling metals considering a fine (<2.5 μm) and coarse (2.5-10 μm) fraction has been made. Measurements of metal concentration in PM10, PM2.5 and PM1 recorded in Spain were used to obtain the new metal particle size distribution. On the other hand, natural emissions, not considered in the above mentioned studies, were implemented in the model by considering metal emissions associated with dust resuspension. An evaluation of the new version is presented and discussed for two domains in Spain, centered on Barcelona and Huelva respectively. (Author)
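    The fine/coarse split used in these records reduces to simple arithmetic on collocated PM2.5 and PM10 measurements; the sketch below uses invented concentrations purely to illustrate how the coarse (2.5-10 μm) contribution is derived by difference.

```python
# Simple arithmetic behind the fine/coarse split: the coarse (2.5-10 um) metal
# concentration is estimated as the PM10 value minus the PM2.5 value at the
# same site. The measurements below are invented examples.
measurements_ng_m3 = {
    # metal: (concentration in PM10, concentration in PM2.5)
    "Pb": (8.0, 6.5),
    "Cd": (0.30, 0.25),
    "As": (0.60, 0.45),
    "Ni": (2.0, 1.1),
}

for metal, (pm10, pm25) in measurements_ng_m3.items():
    coarse = max(pm10 - pm25, 0.0)          # guard against measurement noise
    fine_share = pm25 / pm10 if pm10 else float("nan")
    print(f"{metal}: fine = {pm25:.2f} ng/m3, coarse = {coarse:.2f} ng/m3, "
          f"fine fraction = {fine_share:.0%}")
```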

  14. Simulating spin models on GPU

    Science.gov (United States)

    Weigel, Martin

    2011-09-01

    Over the last couple of years it has been realized that the vast computational power of graphics processing units (GPUs) could be harvested for purposes other than the video game industry. This power, which at least nominally exceeds that of current CPUs by large factors, results from the relative simplicity of the GPU architectures as compared to CPUs, combined with a large number of parallel processing units on a single chip. To benefit from this setup for general computing purposes, the problems at hand need to be prepared in a way to profit from the inherent parallelism and hierarchical structure of memory accesses. In this contribution I discuss the performance potential for simulating spin models, such as the Ising model, on GPU as compared to conventional simulations on CPU.
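    To make the parallelism argument concrete, the sketch below shows the checkerboard (two-sublattice) Metropolis update for the 2D Ising model in vectorized NumPy; updating all sites of one color at once is precisely the data-parallel pattern a GPU kernel exploits. This is a CPU illustration of the scheme rather than GPU code, and the lattice size, temperature and sweep count are arbitrary choices.

```python
# NumPy illustration of the checkerboard Metropolis update for the 2D Ising
# model: all sites of one sublattice color are updated simultaneously, which
# maps directly onto one GPU thread per site.
import numpy as np

def checkerboard_sweep(spins, beta, rng):
    L = spins.shape[0]
    ii, jj = np.indices((L, L))
    for color in (0, 1):                       # the two sublattices
        mask = (ii + jj) % 2 == color
        # Sum of the four nearest neighbours with periodic boundaries.
        nb = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
              np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nb                  # energy change if the spin is flipped
        accept = rng.random((L, L)) < np.exp(-beta * dE)
        spins = np.where(mask & accept, -spins, spins)
    return spins

rng = np.random.default_rng(1)
L, beta = 64, 0.5                              # beta above the critical value ~0.4407
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(500):
    spins = checkerboard_sweep(spins, beta, rng)
print("magnetization per spin:", abs(spins.mean()))
```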

  15. Standard for Models and Simulations

    Science.gov (United States)

    Steele, Martin J.

    2016-01-01

This NASA Technical Standard establishes uniform practices in modeling and simulation to ensure essential requirements are applied to the design, development, and use of models and simulations (M&S), while ensuring acceptance criteria are defined by the program/project and approved by the responsible Technical Authority. It also provides an approved set of requirements, recommendations, and criteria with which M&S may be developed, accepted, and used in support of NASA activities. As the M&S disciplines employed and application areas involved are broad, the common aspects of M&S across all NASA activities are addressed. The discipline-specific details of a given M&S should be obtained from relevant recommended practices. The primary purpose is to reduce the risks associated with M&S-influenced decisions by ensuring the complete communication of the credibility of M&S results.

  16. Deep Learning versus Professional Healthcare Equipment: A Fine-Grained Breathing Rate Monitoring Model

    Directory of Open Access Journals (Sweden)

    Bang Liu

    2018-01-01

Full Text Available In the mHealth field, accurate breathing rate monitoring techniques have benefited a broad array of healthcare-related applications. Many approaches try to use a smartphone or wearable device with a fine-grained monitoring algorithm to accomplish a task that previously could only be done by professional medical equipment. However, such schemes usually perform poorly in comparison to professional medical equipment. In this paper, we propose DeepFilter, a deep learning-based fine-grained breathing rate monitoring algorithm that works on a smartphone and achieves professional-level accuracy. DeepFilter is a bidirectional recurrent neural network (RNN) stacked with convolutional layers and sped up by batch normalization. Moreover, we collect 16.17 GB of breathing sound recordings, totalling 248 hours, from 109 volunteers to train and another 10 volunteers to test our model. The results show a reasonably good accuracy of breathing rate monitoring.
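    A hedged architectural sketch inspired by the description above (convolutional front-end, batch normalization, bidirectional recurrent layers), written in PyTorch; the layer sizes, the GRU choice, the input spectrogram shape and the regression head are assumptions, not the published DeepFilter configuration.

```python
# Illustrative conv + bidirectional-RNN architecture with batch normalization,
# loosely following the abstract; all dimensions are assumed, not published.
import torch
import torch.nn as nn

class BreathingRateNet(nn.Module):
    def __init__(self, n_mels=40, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(             # feature extractor over audio frames
            nn.Conv1d(n_mels, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
        )
        self.rnn = nn.GRU(64, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # per-recording breathing-rate estimate

    def forward(self, x):                      # x: (batch, n_mels, time)
        h = self.conv(x).transpose(1, 2)       # -> (batch, time, channels)
        h, _ = self.rnn(h)
        return self.head(h.mean(dim=1))        # pool over time, regress the rate

model = BreathingRateNet()
dummy = torch.randn(4, 40, 300)                # 4 spectrogram snippets (assumed shape)
print(model(dummy).shape)                      # torch.Size([4, 1])
```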

  17. Fine-Tuning Neural Patient Question Retrieval Model with Generative Adversarial Networks.

    Science.gov (United States)

    Tang, Guoyu; Ni, Yuan; Wang, Keqiang; Yong, Qin

    2018-01-01

The online patient question and answering (Q&A) system attracts an increasing number of users in China. Patients post their questions and wait for doctors' responses. To avoid the lag time involved with the waiting and to reduce the workload on the doctors, a better method is to automatically retrieve a semantically equivalent question from the archive. We present a Generative Adversarial Network (GAN) based approach to automatically retrieve patient questions. We apply supervised deep learning based approaches to determine the similarity between patient questions. Then a GAN framework is used to fine-tune the pre-trained deep learning models. The experimental results show that fine-tuning by GAN can improve performance.

  18. Improving Shade Modelling in a Regional River Temperature Model Using Fine-Scale LIDAR Data

    Science.gov (United States)

    Hannah, D. M.; Loicq, P.; Moatar, F.; Beaufort, A.; Melin, E.; Jullian, Y.

    2015-12-01

Air temperature is often considered as a proxy of stream temperature to model the distribution areas of aquatic species when water temperature is not available at a regional scale. To simulate the water temperature at a regional scale (10^5 km²), a physically-based model using the equilibrium temperature concept and including upstream-downstream propagation of the thermal signal was developed and applied to the entire Loire basin (Beaufort et al., submitted). This model, called T-NET (Temperature-NETwork), is based on a hydrographical network topology. Computations are made hourly on 52,000 reaches, which average 1.7 km in length, in the Loire drainage basin. The model gives a median Root Mean Square Error of 1.8°C at an hourly time step on the basis of 128 water temperature stations (2008-2012). In that version of the model, tree shading is modelled by a constant factor proportional to the vegetation cover within 10 meters on each side of the river reaches. According to a sensitivity analysis, improving the shade representation would enhance T-NET accuracy, especially for the maximum daily temperatures, which are currently not very well modelled. This study evaluates the most efficient way (accuracy/computing time) to improve the shade model using 1-m resolution LIDAR data available on a tributary of the Loire River (317 km long, with a drainage area of 8280 km²). Two methods are tested and compared: the first is a spatially explicit computation of the cast shadow for every LIDAR pixel; the second is based on averaged vegetation cover characteristics of buffers and reaches of variable size. Validation of the water temperature model is made against 4 temperature sensors well spread along the stream, as well as two airborne thermal infrared images acquired in summer 2014 and winter 2015 over an 80 km reach. The poster will present the optimal lengthwise and crosswise scales to characterize the vegetation from LIDAR data.

  19. Projected future vegetation changes for the northwest United States and southwest Canada at a fine spatial resolution using a dynamic global vegetation model.

    Science.gov (United States)

    Shafer, Sarah; Bartlein, Patrick J.; Gray, Elizabeth M.; Pelltier, Richard T.

    2015-01-01

    Future climate change may significantly alter the distributions of many plant taxa. The effects of climate change may be particularly large in mountainous regions where climate can vary significantly with elevation. Understanding potential future vegetation changes in these regions requires methods that can resolve vegetation responses to climate change at fine spatial resolutions. We used LPJ, a dynamic global vegetation model, to assess potential future vegetation changes for a large topographically complex area of the northwest United States and southwest Canada (38.0–58.0°N latitude by 136.6–103.0°W longitude). LPJ is a process-based vegetation model that mechanistically simulates the effect of changing climate and atmospheric CO2 concentrations on vegetation. It was developed and has been mostly applied at spatial resolutions of 10-minutes or coarser. In this study, we used LPJ at a 30-second (~1-km) spatial resolution to simulate potential vegetation changes for 2070–2099. LPJ was run using downscaled future climate simulations from five coupled atmosphere-ocean general circulation models (CCSM3, CGCM3.1(T47), GISS-ER, MIROC3.2(medres), UKMO-HadCM3) produced using the A2 greenhouse gases emissions scenario. Under projected future climate and atmospheric CO2 concentrations, the simulated vegetation changes result in the contraction of alpine, shrub-steppe, and xeric shrub vegetation across the study area and the expansion of woodland and forest vegetation. Large areas of maritime cool forest and cold forest are simulated to persist under projected future conditions. The fine spatial-scale vegetation simulations resolve patterns of vegetation change that are not visible at coarser resolutions and these fine-scale patterns are particularly important for understanding potential future vegetation changes in topographically complex areas.

  20. Verifying and Validating Simulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state of knowledge in the discipline considered. In particular, comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack of knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.
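
    The statistical sampling mentioned above for propagating variability and randomness can be illustrated with a minimal Monte Carlo sketch; the placeholder model and input distributions below are invented for the example, not taken from the presentation.

      # Forward propagation of input variability by Monte Carlo sampling.
      import numpy as np

      rng = np.random.default_rng(0)

      def model(e_modulus, load):
          """Placeholder prediction, e.g. a simple compliance-type response."""
          return load / e_modulus

      # Assumed input variability (normal distributions chosen for illustration)
      e_samples = rng.normal(loc=200e9, scale=10e9, size=10_000)   # Pa
      load_samples = rng.normal(loc=1e3, scale=50.0, size=10_000)  # N

      predictions = model(e_samples, load_samples)
      print("mean prediction:", predictions.mean())
      print("95% interval:", np.percentile(predictions, [2.5, 97.5]))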

  1. Advances in Intelligent Modelling and Simulation Simulation Tools and Applications

    CERN Document Server

    Oplatková, Zuzana; Carvalho, Marco; Kisiel-Dorohinicki, Marek

    2012-01-01

    The human capacity to abstract complex systems and phenomena into simplified models has played a critical role in the rapid evolution of our modern industrial processes and scientific research. As a science and an art, Modelling and Simulation have been one of the core enablers of this remarkable human trace, and have become a topic of great importance for researchers and practitioners. This book was created to compile some of the most recent concepts, advances, challenges and ideas associated with Intelligent Modelling and Simulation frameworks, tools and applications. The first chapter discusses the important aspects of a human interaction and the correct interpretation of results during simulations. The second chapter gets to the heart of the analysis of entrepreneurship by means of agent-based modelling and simulations. The following three chapters bring together the central theme of simulation frameworks, first describing an agent-based simulation framework, then a simulator for electrical machines, and...

  2. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    DEFF Research Database (Denmark)

    Sørensen, Kim; Condra, Thomas Joseph; Houbak, Niels

    2004-01-01

    In the present work a framework for optimizing the design of boilers for dynamic operation has been developed. A cost function to be minimized during the optimization has been formulated, and for the present, design variables related to the Boiler Volume and the Boiler Load Gradient (i.e. firing rate...... on the boiler) have been defined. Furthermore a number of constraints related to: minimum and maximum boiler load gradient, minimum boiler size, Shrinking and Swelling and Steam Space Load have been defined. For defining the constraints related to the required boiler volume a dynamic model for simulating the boiler...... performance has been developed. Outputs from the simulations are the shrinking and swelling of the water level in the drum during, for example, a start-up of the boiler; these figures combined with the requirements with respect to allowable water level fluctuations in the drum define the requirements with respect to drum...
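
    As an illustration of the kind of constrained design optimization described above, a minimal sketch using scipy is given below; the cost function, the stand-in for the dynamic water-level simulation, and all numbers are invented for the example rather than taken from the paper.

      # Toy constrained design optimization over (boiler volume, load gradient).
      import numpy as np
      from scipy.optimize import minimize

      def cost(x):
          volume, load_gradient = x
          return 1.0 * volume + 5.0 / load_gradient      # big drums cost, slow ramping costs

      def water_level_swing(x):
          volume, load_gradient = x
          return 0.2 * load_gradient / np.sqrt(volume)   # stand-in for the dynamic simulation

      constraints = [
          {"type": "ineq", "fun": lambda x: 0.05 - water_level_swing(x)},  # max level swing [m]
          {"type": "ineq", "fun": lambda x: x[0] - 2.0},                   # minimum volume [m^3]
      ]
      result = minimize(cost, x0=[5.0, 1.0], constraints=constraints,
                        bounds=[(2.0, 20.0), (0.1, 5.0)])
      print(result.x)   # candidate (volume, load gradient)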

  3. SEMI Modeling and Simulation Roadmap

    Energy Technology Data Exchange (ETDEWEB)

    Hermina, W.L.

    2000-10-02

    With the exponential growth in the power of computing hardware and software, modeling and simulation is becoming a key enabler for the rapid design of reliable Microsystems. One vision of the future microsystem design process would include the following primary software capabilities: (1) The development of 3D part design, through standard CAD packages, with automatic design rule checks that guarantee the manufacturability and performance of the microsystem. (2) Automatic mesh generation, for 3D parts as manufactured, that permits computational simulation of the process steps, and the performance and reliability analysis for the final microsystem. (3) Computer generated 2D layouts for process steps that utilize detailed process models to generate the layout and process parameter recipe required to achieve the desired 3D part. (4) Science-based computational tools that can simulate the process physics, and the coupled thermal, fluid, structural, solid mechanics, electromagnetic and material response governing the performance and reliability of the microsystem. (5) Visualization software that permits the rapid visualization of 3D parts including cross-sectional maps, performance and reliability analysis results, and process simulation results. In addition to these desired software capabilities, a desired computing infrastructure would include massively parallel computers that enable rapid high-fidelity analysis, coupled with networked compute servers that permit computing at a distance. We now discuss the individual computational components that are required to achieve this vision. There are three primary areas of focus: design capabilities, science-based capabilities and computing infrastructure. Within each of these areas, there are several key capability requirements.

  4. Photovoltaic array performance simulation models

    Energy Technology Data Exchange (ETDEWEB)

    Menicucci, D. F.

    1986-09-15

    The experience of the solar industry confirms that, despite recent cost reductions, the profitability of photovoltaic (PV) systems is often marginal and the configuration and sizing of a system is a critical problem for the design engineer. Construction and evaluation of experimental systems are expensive and seldom justifiable. A mathematical model or computer-simulation program is a desirable alternative, provided reliable results can be obtained. Sandia National Laboratories, Albuquerque (SNLA), has been studying PV-system modeling techniques in an effort to develop an effective tool to be used by engineers and architects in the design of cost-effective PV systems. This paper reviews two of the sources of error found in previous PV modeling programs, presents the remedies developed to correct these errors, and describes a new program that incorporates these improvements.

  5. A Nonlinear Transmission Line Model of the Cochlea With Temporal Integration Accounts for Duration Effects in Threshold Fine Structure

    DEFF Research Database (Denmark)

    Verhey, Jesko L.; Mauermann, Manfred; Epp, Bastian

    2017-01-01

    For normal-hearing listeners, auditory pure-tone thresholds in quiet often show quasi-periodic fluctuations when measured with a high frequency resolution, referred to as threshold fine structure. Threshold fine structure is dependent on the stimulus duration, with smaller fluctuations for short...... than for long signals. The present study demonstrates how this effect can be captured by a nonlinear and active model of the cochlea in combination with a temporal integration stage. Since this cochlear model also accounts for fine structure and connected level-dependent effects, it is superior...

  6. Simulated annealing model of acupuncture

    Science.gov (United States)

    Shang, Charles; Szu, Harold

    2015-05-01

    The growth control singularity model suggests that acupuncture points (acupoints) originate from organizers in embryogenesis. Organizers are singular points in growth control. Acupuncture can cause perturbation of a system with effects similar to simulated annealing. In a clinical trial, the goal of a treatment is to relieve a certain disorder, which corresponds to reaching a certain local optimum in simulated annealing. The self-organizing effect of the system is limited and related to the person's general health and age. Perturbation at acupoints can lead to a stronger local excitation (analogous to a higher annealing temperature) compared to perturbation at non-singular points (placebo control points). Such difference diminishes as the number of perturbed points increases, due to the wider distribution of the limited self-organizing activity. This model explains the following facts from systematic reviews of acupuncture trials: 1. Properly chosen single-acupoint treatment for a certain disorder can lead to highly repeatable efficacy above placebo. 2. When multiple acupoints are used, the result can be highly repeatable if the patients are relatively healthy and young, but is usually mixed if the patients are old, frail and have multiple disorders at the same time, as the number of local optima or comorbidities increases. 3. As the number of acupoints used increases, the efficacy difference between sham and real acupuncture often diminishes. The model predicts that the efficacy of acupuncture is negatively correlated with disease chronicity, severity and patient's age. This is the first biological-physical model of acupuncture that can predict and guide clinical acupuncture research.
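
    Because the abstract leans on the simulated annealing analogy, a generic sketch of the algorithm itself is shown below, applied to a toy one-dimensional energy landscape; it has nothing to do with the acupuncture data and is only meant to make the analogy concrete.

      # Generic simulated annealing on a toy energy landscape with local minima.
      import math
      import random

      def energy(x):
          return 0.1 * x * x + math.sin(3.0 * x)     # toy landscape

      def simulated_annealing(x0, t_start=5.0, t_end=1e-3, cooling=0.995):
          x, t = x0, t_start
          while t > t_end:
              candidate = x + random.gauss(0.0, 1.0)              # perturbation
              delta = energy(candidate) - energy(x)
              if delta < 0 or random.random() < math.exp(-delta / t):
                  x = candidate          # accept downhill moves, uphill with prob e^(-dE/T)
              t *= cooling               # gradually reduce the "annealing temperature"
          return x

      print(simulated_annealing(x0=4.0))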

  7. Operations planning simulation: Model study

    Science.gov (United States)

    1974-01-01

    The use of simulation modeling for the identification of system sensitivities to internal and external forces and variables is discussed. The technique provides a means of exploring alternate system procedures and processes, so that these alternatives may be considered on a mutually comparative basis, permitting the selection of a mode or modes of operation which have potential advantages to the system user and the operator. These advantages are measured in terms of system efficiency: (1) the ability to meet specific schedules for operations, mission or mission-readiness requirements, or performance standards, and (2) the ability to accomplish the objectives within cost-effective limits.

  8. Feasibility Assessment of a Fine-Grained Access Control Model on Resource Constrained Sensors.

    Science.gov (United States)

    Uriarte Itzazelaia, Mikel; Astorga, Jasone; Jacob, Eduardo; Huarte, Maider; Romaña, Pedro

    2018-02-13

    Upcoming smart scenarios enabled by the Internet of Things (IoT) envision smart objects that provide services that can adapt to user behavior or be managed to achieve greater productivity. In such environments, smart things are inexpensive and, therefore, constrained devices. However, they are also critical components because of the importance of the information that they provide. Given this, strong security is a requirement, but not all security mechanisms in general and access control models in particular are feasible. In this paper, we present the feasibility assessment of an access control model that utilizes a hybrid architecture and a policy language that provides dynamic fine-grained policy enforcement in the sensors, which requires an efficient message exchange protocol called Hidra. This experimental performance assessment includes a prototype implementation, a performance evaluation model, the measurements and related discussions, which demonstrate the feasibility and adequacy of the analyzed access control model.

  9. Constraining spatial variations of the fine-structure constant in symmetron models

    Directory of Open Access Journals (Sweden)

    A.M.M. Pinho

    2017-06-01

    Full Text Available We introduce a methodology to test models with spatial variations of the fine-structure constant α, based on the calculation of the angular power spectrum of these measurements. This methodology enables comparisons of observations and theoretical models through their predictions on the statistics of the α variation. Here we apply it to the case of symmetron models. We find no indications of deviations from the standard behavior, with current data providing an upper limit to the strength of the symmetron coupling to gravity (log β² < −0.9) when this is the only free parameter, and unable to constrain the model when the symmetry-breaking scale factor a_SSB is also free to vary.

  10. Modeling of episodic particulate matter events using a 3-D air quality model with fine grid: Applications to a pair of cities in the US/Mexico border

    Science.gov (United States)

    Choi, Yu-Jin; Hyde, Peter; Fernando, H. J. S.

    High (episodic) particulate matter (PM) events over the sister cities of Douglas (AZ) and Agua Prieta (Sonora), located in the US-Mexico border, were simulated using the 3D Eulerian air quality model, MODELS-3/CMAQ. The best available input information was used for the simulations, with pollution inventory specified on a fine grid. In spite of inherent uncertainties associated with the emission inventory as well as the chemistry and meteorology of the air quality simulation tool, model evaluations showed acceptable PM predictions, while demonstrating the need for including the interaction between meteorology and emissions in an interactive mode in the model, a capability currently unavailable in MODELS-3/CMAQ when dealing with PM. Sensitivity studies on boundary influence indicate an insignificant regional (advection) contribution of PM to the study area. The contribution of secondary particles to the occurrence of high PM events was trivial. High PM episodes in the study area, therefore, are purely local events that largely depend on local meteorological conditions. The major PM emission sources were identified as vehicular activities on unpaved/paved roads and wind-blown dust. The results will be of immediate utility in devising PM mitigation strategies for the study area, which is one of the US EPA-designated non-attainment areas with respect to PM.

  11. Study on fine geological modelling of the fluvial sandstone reservoir in Daqing oilfield

    Energy Technology Data Exchange (ETDEWEB)

    Zhoa Han-Qing [Daqing Research Institute, Helongjiang (China)

    1997-08-01

    This paper aims at developing a method for fine reservoir description in maturing oilfields by using closely spaced well logging data. The main productive reservoirs in the Daqing oilfield are a set of large fluvial-deltaic deposits in the Songliao Lake Basin, characterized by multiple layers and serious heterogeneities. Various fluvial channel sandstone reservoirs account for a fairly important proportion of the reserves. After a long period of water flooding, most of them have turned into high water cut layers, but considerable residual reserves remain within them, which are difficult to find and tap. Making a fine reservoir description and developing a sound geological model is essential for tapping residual oil and enhancing oil recovery. The principal reason for the relatively low precision of predictive models developed using geostatistics is incomplete recognition of the complex distribution of fluvial reservoirs and their internal architectures. Taking advantage of limited outcrop data from other regions (supposing no outcrop data are available in the oilfield) can only provide knowledge of the subtle changes of reservoir parameters and internal architecture. The specific geometric distribution and internal architecture of subsurface reservoirs (such as in produced regions) can be gained only from continuous infill logging well data available from the studied areas. For developing a geological model, we think the first important thing is to characterize the sandbody geometries and their general architecture, which form the framework of the model, and then the slight changes of interwell parameters and internal architecture, which are the contents and cells of the model. An excellent model should possess both of them, but the geometry is the key to the model, because it controls the distribution of contents and cells within the model.

  12. Classical Causal Models for Bell and Kochen-Specker Inequality Violations Require Fine-Tuning

    Directory of Open Access Journals (Sweden)

    Eric G. Cavalcanti

    2018-04-01

    Full Text Available Nonlocality and contextuality are at the root of conceptual puzzles in quantum mechanics, and they are key resources for quantum advantage in information-processing tasks. Bell nonlocality is best understood as the incompatibility between quantum correlations and the classical theory of causality, applied to relativistic causal structure. Contextuality, on the other hand, is on a more controversial foundation. In this work, I provide a common conceptual ground between nonlocality and contextuality as violations of classical causality. First, I show that Bell inequalities can be derived solely from the assumptions of no signaling and no fine-tuning of the causal model. This removes two extra assumptions from a recent result from Wood and Spekkens and, remarkably, does not require any assumption related to independence of measurement settings—unlike all other derivations of Bell inequalities. I then introduce a formalism to represent contextuality scenarios within causal models and show that all classical causal models for violations of a Kochen-Specker inequality require fine-tuning. Thus, the quantum violation of classical causality goes beyond the case of spacelike-separated systems and already manifests in scenarios involving single systems.

  13. Impulse pumping modelling and simulation

    International Nuclear Information System (INIS)

    Pierre, B; Gudmundsson, J S

    2010-01-01

    Impulse pumping is a new pumping method based on propagation of pressure waves. Of particular interest is the application of impulse pumping to artificial lift situations, where fluid is transported from wellbore to wellhead using pressure waves generated at wellhead. The motor driven element of an impulse pumping apparatus is therefore located at wellhead and can be separated from the flowline. Thus operation and maintenance of an impulse pump are facilitated. The paper describes the different elements of an impulse pumping apparatus, reviews the physical principles and details the modelling of the novel pumping method. Results from numerical simulations of propagation of pressure waves in water-filled pipelines are then presented for illustrating impulse pumping physical principles, and validating the described modelling with experimental data.

  14. Simulation model of a PWR power plant

    International Nuclear Information System (INIS)

    Larsen, N.

    1987-03-01

    A simulation model of a hypothetical PWR power plant is described. A large number of disturbances and failures in plant function can be simulated. The model is written as seven modules to the modular simulation system for continuous processes DYSIM and serves also as a user example of this system. The model runs in Fortran 77 on the IBM-PC-AT. (author)

  15. Modelization and numerical simulation of atmospheric aerosols dynamics

    International Nuclear Information System (INIS)

    Debry, Edouard

    2004-01-01

    Chemical-transport models are now able to describe in a realistic way gaseous pollutants behavior in the atmosphere. Nevertheless atmospheric pollution also exists as a fine suspended particles, called aerosols which interact with gaseous phase, solar radiation, and have their own dynamic behavior. The goal of this thesis is the modelization and numerical simulation of the General Dynamic Equation of aerosols (GDE). Part I deals with some theoretical aspects of aerosol modelization. Part II is dedicated to the building of one size resolved aerosol model (SIREAM). In part III we perform the reduction of this model in order to use it in dispersion models as POLAIR3D. Several modelization issues are still opened: organic aerosol matter, externally mixed aerosols, coupling with turbulent mixing, and nano-particles. (author) [fr

  16. Measurement of the fine-structure constant as a test of the Standard Model

    Science.gov (United States)

    Parker, Richard H.; Yu, Chenghui; Zhong, Weicheng; Estey, Brian; Müller, Holger

    2018-04-01

    Measurements of the fine-structure constant α require methods from across subfields and are thus powerful tests of the consistency of theory and experiment in physics. Using the recoil frequency of cesium-133 atoms in a matter-wave interferometer, we recorded the most accurate measurement of the fine-structure constant to date: α = 1/137.035999046(27) at 2.0 × 10‑10 accuracy. Using multiphoton interactions (Bragg diffraction and Bloch oscillations), we demonstrate the largest phase (12 million radians) of any Ramsey-Bordé interferometer and control systematic effects at a level of 0.12 part per billion. Comparison with Penning trap measurements of the electron gyromagnetic anomaly ge ‑ 2 via the Standard Model of particle physics is now limited by the uncertainty in ge ‑ 2; a 2.5σ tension rejects dark photons as the reason for the unexplained part of the muon’s magnetic moment at a 99% confidence level. Implications for dark-sector candidates and electron substructure may be a sign of physics beyond the Standard Model that warrants further investigation.
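
    For reference, the photon-recoil route to α used in this type of measurement rests on the standard relation (quoted here from textbook knowledge rather than from the paper itself)

      \alpha^{2} \;=\; \frac{2 R_{\infty}}{c}\,\frac{m_{\mathrm{Cs}}}{m_{e}}\,\frac{h}{m_{\mathrm{Cs}}},

    where R_∞ is the Rydberg constant, the mass ratio m_Cs/m_e is known from independent measurements, and h/m_Cs follows from the measured recoil frequency of the cesium atoms in the interferometer.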

  17. Modelling Soil-Landscapes in Coastal California Hills Using Fine Scale Terrestrial Lidar

    Science.gov (United States)

    Prentice, S.; Bookhagen, B.; Kyriakidis, P. C.; Chadwick, O.

    2013-12-01

    Digital elevation models (DEMs) are the dominant input to spatially explicit digital soil mapping (DSM) efforts due to their increasing availability and the tight coupling between topography and soil variability. Accurate characterization of this coupling is dependent on DEM spatial resolution and soil sampling density, both of which may limit analyses. For example, DEM resolution may be too coarse to accurately reflect scale-dependent soil properties yet downscaling introduces artifactual uncertainty unrelated to deterministic or stochastic soil processes. We tackle these limitations through a DSM effort that couples moderately high density soil sampling with a very fine scale terrestrial lidar dataset (20 cm) implemented in a semiarid rolling hillslope domain where terrain variables change rapidly but smoothly over short distances. Our guiding hypothesis is that in this diffusion-dominated landscape, soil thickness is readily predicted by continuous terrain attributes coupled with catenary hillslope segmentation. We choose soil thickness as our keystone dependent variable for its geomorphic and hydrologic significance, and its tendency to be a primary input to synthetic ecosystem models. In defining catenary hillslope position we adapt a logical rule-set approach that parses common terrain derivatives of curvature and specific catchment area into discrete landform elements (LE). Variograms and curvature-area plots are used to distill domain-scale terrain thresholds from short range order noise characteristic of very fine-scale spatial data. The revealed spatial thresholds are used to condition LE rule-set inputs, rendering a catenary LE map that leverages the robustness of fine-scale terrain data to create a generalized interpretation of soil geomorphic domains. Preliminary regressions show that continuous terrain variables alone (curvature, specific catchment area) only partially explain soil thickness, and only in a subset of soils. For example, at spatial

  18. Galaxy Alignments: Theory, Modelling & Simulations

    Science.gov (United States)

    Kiessling, Alina; Cacciato, Marcello; Joachimi, Benjamin; Kirk, Donnacha; Kitching, Thomas D.; Leonard, Adrienne; Mandelbaum, Rachel; Schäfer, Björn Malte; Sifón, Cristóbal; Brown, Michael L.; Rassat, Anais

    2015-11-01

    The shapes of galaxies are not randomly oriented on the sky. During the galaxy formation and evolution process, environment has a strong influence, as tidal gravitational fields in the large-scale structure tend to align nearby galaxies. Additionally, events such as galaxy mergers affect the relative alignments of both the shapes and angular momenta of galaxies throughout their history. These "intrinsic galaxy alignments" are known to exist, but are still poorly understood. This review will offer a pedagogical introduction to the current theories that describe intrinsic galaxy alignments, including the apparent difference in intrinsic alignment between early- and late-type galaxies and the latest efforts to model them analytically. It will then describe the ongoing efforts to simulate intrinsic alignments using both N-body and hydrodynamic simulations. Due to the relative youth of this field, there is still much to be done to understand intrinsic galaxy alignments and this review summarises the current state of the field, providing a solid basis for future work.

  19. Constitutive modelling of the undrained shear strength of fine grained soils containing gas

    Energy Technology Data Exchange (ETDEWEB)

    Grozic, J.L.H. [Calgary Univ., AB (Canada); Nadim, F.; Kvalstad, T.J. [Norwegian Geotechnical Inst., Oslo (Norway)

    2002-07-01

    The behaviour of fine grained gassy soils was studied in order to develop a technique to quantitatively evaluate geohazards. Gas can occur in seabeds either in solution in pore water, undissolved in the form of gas filled voids, or as gas hydrates. In offshore soils, the degree of saturation is generally greater than 90 per cent, resulting in a soil structure with a continuous water phase and a discontinuous gas phase. The presence of methane gas will impact the strength of the soil, which alters its resistance to submarine sliding. This paper presents a constitutive model for determining the undrained shear strength of fine-grained gassy soils to assess the stability of deep water marine slopes for offshore developments. Methane gas is shown to have a beneficial effect on the soil strength in compressive loading, but the peak strength is achieved at larger deformations. The increased strength is a result of compression and solution gas which cause partial drainage and reduced pore pressures. The undrained shear strength of gassy soils was shown to increase with increasing initial consolidation stress, increasing volumetric coefficient of solubility, and increasing initial void ratio. 9 refs., 3 tabs., 6 figs.

  20. THE MARK I BUSINESS SYSTEM SIMULATION MODEL

    Science.gov (United States)

    of a large-scale business simulation model as a vehicle for doing research in management controls. The major results of the program were the...development of the Mark I business simulation model and the Simulation Package (SIMPAC). SIMPAC is a method and set of programs facilitating the construction...of large simulation models. The object of this document is to describe the Mark I Corporation model, state why parts of the business were modeled as they were, and indicate the research applications of the model. (Author)

  1. Fast Multiscale Reservoir Simulations using POD-DEIM Model Reduction

    KAUST Repository

    Ghasemi, Mohammadreza

    2015-02-23

    In this paper, we present a global-local model reduction for fast multiscale reservoir simulations in highly heterogeneous porous media, with applications to optimization and history matching. Our proposed approach identifies a low-dimensional structure of the solution space. We introduce an auxiliary variable (the velocity field) in our model reduction that allows achieving a high degree of model reduction. The latter is due to the fact that the velocity field is conservative for any low-order reduced model in our framework, whereas a typical global model reduction based on POD is a Galerkin finite element method and thus cannot guarantee local mass conservation. This can be observed in numerical simulations that use finite volume based approaches. The Discrete Empirical Interpolation Method (DEIM) is used to approximate the nonlinear functions of fine-grid functions in Newton iterations. This approach allows achieving a computational cost that is independent of the fine-grid dimension. POD snapshots are inexpensively computed using local model reduction techniques based on the Generalized Multiscale Finite Element Method (GMsFEM), which provides (1) a hierarchical approximation of snapshot vectors, (2) adaptive computations by using coarse grids, and (3) inexpensive global POD operations in small-dimensional spaces on a coarse grid. By balancing the errors of the global and local reduced-order models, our new methodology can provide an error bound in simulations. Our numerical results, utilizing a two-phase immiscible flow, show a substantial speed-up, and we compare our results to the standard POD-DEIM in a finite volume setup.
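
    A bare-bones sketch of the two ingredients named above, a POD basis from snapshots and greedy DEIM point selection, is given below; the snapshot matrices are random placeholders rather than reservoir states, and the sketch omits the local GMsFEM construction entirely.

      # POD basis via SVD of a snapshot matrix, plus greedy DEIM index selection.
      import numpy as np

      def pod_basis(snapshots, r):
          """Leading left singular vectors of the snapshot matrix (columns = snapshots)."""
          u, _, _ = np.linalg.svd(snapshots, full_matrices=False)
          return u[:, :r]

      def deim_indices(u_nl):
          """Greedy DEIM point selection from a basis of nonlinear-term snapshots."""
          indices = [int(np.argmax(np.abs(u_nl[:, 0])))]
          for j in range(1, u_nl.shape[1]):
              p = u_nl[indices, :j]
              c = np.linalg.solve(p, u_nl[indices, j])
              residual = u_nl[:, j] - u_nl[:, :j] @ c
              indices.append(int(np.argmax(np.abs(residual))))
          return indices

      snapshots = np.random.rand(1000, 50)            # stand-in for fine-grid states
      basis = pod_basis(snapshots, r=10)              # global reduced basis
      points = deim_indices(pod_basis(np.random.rand(1000, 50), r=10))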

  2. Distributed simulation a model driven engineering approach

    CERN Document Server

    Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent

    2016-01-01

    Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.

  3. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  4. Simulation modelling of fynbos ecosystems: Systems analysis and conceptual models

    CSIR Research Space (South Africa)

    Kruger, FJ

    1985-03-01

    Full Text Available -animal interactions. An additional two models, which expand aspects of the FYNBOS model, are described: a model for simulating canopy processes; and a Fire Recovery Simulator. The canopy process model will simulate ecophysiological processes in more detail than FYNBOS...

  5. Fine reservoir structure modeling based upon 3D visualized stratigraphic correlation between horizontal wells: methodology and its application

    Science.gov (United States)

    Chenghua, Ou; Chaochun, Li; Siyuan, Huang; Sheng, James J.; Yuan, Xu

    2017-12-01

    As the platform-based horizontal well production mode has been widely applied in the petroleum industry, building a reliable fine reservoir structure model by using horizontal well stratigraphic correlation has become very important. Horizontal wells usually extend between the upper and bottom boundaries of the target formation, with limited penetration points. Using these limited penetration points to conduct well deviation correction means the formation depth information obtained is not accurate, which makes it hard to build a fine structure model. In order to solve this problem, a method of fine reservoir structure modeling, based on 3D visualized stratigraphic correlation among horizontal wells, is proposed. This method can increase the accuracy when estimating the depth of the penetration points, and can also effectively predict the top and bottom interfaces in the horizontal penetrating section. Moreover, this method will greatly increase not only the number of depth data points available, but also the accuracy of these data, which achieves the goal of building a reliable fine reservoir structure model by using the stratigraphic correlation among horizontal wells. Using this method, four 3D fine structure layer models have been successfully built for a specimen shale gas field developed with the platform-based horizontal well production mode. The shale gas field is located to the east of the Sichuan Basin, China; the successful application of the method has proven its feasibility and reliability.

  6. An introduction to enterprise modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ostic, J.K.; Cannon, C.E. [Los Alamos National Lab., NM (United States). Technology Modeling and Analysis Group

    1996-09-01

    As part of an ongoing effort to continuously improve productivity, quality, and efficiency of both industry and Department of Energy enterprises, Los Alamos National Laboratory is investigating various manufacturing and business enterprise simulation methods. A number of enterprise simulation software models are being developed to enable engineering analysis of enterprise activities. In this document the authors define the scope of enterprise modeling and simulation efforts, and review recent work in enterprise simulation at Los Alamos National Laboratory as well as at other industrial, academic, and research institutions. References of enterprise modeling and simulation methods and a glossary of enterprise-related terms are provided.

  7. Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2014-01-01

    This book includes extended and revised versions of a set of selected papers from the 2012 International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2012) which was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and held in Rome, Italy. SIMULTECH 2012 was technically co-sponsored by the Society for Modeling & Simulation International (SCS), GDR I3, Lionphant Simulation, Simulation Team and IFIP and held in cooperation with AIS Special Interest Group of Modeling and Simulation (AIS SIGMAS) and the Movimento Italiano Modellazione e Simulazione (MIMOS).

  8. Structured building model reduction toward parallel simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University

    2013-08-26

    Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any interprocessor communication.

  9. Fine particulate matter estimated by mathematical model and hospitalizations for pneumonia and asthma in children

    Directory of Open Access Journals (Sweden)

    Ana Cristina Gobbo César

    2016-03-01

    Full Text Available Abstract Objective: To estimate the association between exposure to fine particulate matter with an aerodynamic diameter <2.5 microns (PM2.5) and hospitalizations for pneumonia and asthma in children. Methods: An ecological study of time series was performed, with daily indicators of hospitalization for pneumonia and asthma in children up to 10 years of age living in Taubaté (SP), and estimated concentrations of PM2.5, between August 2011 and July 2012. A generalized additive model of Poisson regression was used to estimate the relative risk, with lags of zero up to five days after exposure; the single-pollutant model was adjusted for apparent temperature (defined from the temperature and relative air humidity), seasonality and weekday. Results: The relative risks for hospitalization for pneumonia and asthma were significant for lag 0 (RR=1.051, 95%CI: 1.016 to 1.088), lag 2 (RR=1.066, 95%CI: 1.023 to 1.113), lag 3 (RR=1.053, 95%CI: 1.015 to 1.092), lag 4 (RR=1.043, 95%CI: 1.004 to 1.088) and lag 5 (RR=1.061, 95%CI: 1.018 to 1.106). An increase of 5 µg/m³ in PM2.5 contributes to raising the relative risk for hospitalization by 20.3 to 38.4 percentage points; conversely, a reduction of 5 µg/m³ in the PM2.5 concentration would result in 38 fewer hospital admissions. Conclusions: Exposure to PM2.5 was associated with hospitalizations for pneumonia and asthma in children younger than 10 years of age, showing the role of fine particulate matter in child health and providing support for the implementation of preventive measures to decrease these outcomes.
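
    A schematic version of the lagged Poisson regression behind such relative risks is sketched below; it is simplified (no smooth seasonality term, synthetic data, hypothetical variable names) and only illustrates how a per-5-µg/m³ relative risk falls out of the fitted coefficient.

      # Lagged Poisson regression on synthetic daily series.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      df = pd.DataFrame({
          "admissions": np.random.poisson(3, 365),
          "pm25": np.random.gamma(4.0, 5.0, 365),
          "app_temp": 20 + 8 * np.sin(np.linspace(0, 2 * np.pi, 365)),
      })
      df["dow"] = np.arange(365) % 7

      lag = 2                                       # e.g. exposure two days before admission
      df["pm25_lag"] = df["pm25"].shift(lag)
      df = df.dropna()

      X = sm.add_constant(df[["pm25_lag", "app_temp", "dow"]])
      fit = sm.GLM(df["admissions"], X, family=sm.families.Poisson()).fit()

      beta = fit.params["pm25_lag"]
      print("RR per 5 ug/m3 increase:", np.exp(5 * beta))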

  10. A physiological production model for cacao : results of model simulations

    NARCIS (Netherlands)

    Zuidema, P.A.; Leffelaar, P.A.

    2002-01-01

    CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.

  11. Simulation modeling and analysis with Arena

    CERN Document Server

    Altiok, Tayfur

    2007-01-01

    Simulation Modeling and Analysis with Arena is a highly readable textbook which treats the essentials of the Monte Carlo discrete-event simulation methodology, and does so in the context of a popular Arena simulation environment. It treats simulation modeling as an in-vitro laboratory that facilitates the understanding of complex systems and experimentation with what-if scenarios in order to estimate their performance metrics. The book contains chapters on the simulation modeling methodology and the underpinnings of discrete-event systems, as well as the relevant underlying probability, statistics, stochastic processes, input analysis, model validation and output analysis. All simulation-related concepts are illustrated in numerous Arena examples, encompassing production lines, manufacturing and inventory systems, transportation systems, and computer information systems in networked settings. Introduces the concept of discrete event Monte Carlo simulation, the most commonly used methodology for modeli...

  12. Fine chemistry

    International Nuclear Information System (INIS)

    Laszlo, P.

    1988-01-01

    The 1988 progress report of the Fine Chemistry laboratory (Polytechnic School, France) is presented. The research programs are centered on the renewal of the most important reactions of organic chemistry and on the invention of new, highly efficient and highly selective reactions, using low-cost reagents and solvents. An important research domain concerns the study and fabrication of new catalysts, which are obtained by means of the reactive sputtering of thin films of metals and metal oxides. The Monte Carlo simulations of the long-range electrostatic interaction in a clay and the preparation of acrylamides from the anhydride or the acrylic ester are summarized. Moreover, the results obtained in the field of catalysis are also given. The published papers and the congress communications are included [fr

  13. Network Modeling and Simulation A Practical Perspective

    CERN Document Server

    Guizani, Mohsen; Khan, Bilal

    2010-01-01

    Network Modeling and Simulation is a practical guide to using modeling and simulation to solve real-life problems. The authors give a comprehensive exposition of the core concepts in modeling and simulation, and then systematically address the many practical considerations faced by developers in modeling complex large-scale systems. The authors provide examples from computer and telecommunication networks and use these to illustrate the process of mapping generic simulation concepts to domain-specific problems in different industries and disciplines. Key features: Provides the tools and strate

  14. Deficits in fine motor skills in a genetic animal model of ADHD

    Directory of Open Access Journals (Sweden)

    Qian Yu

    2010-09-01

    Full Text Available Abstract Background In an attempt to model some behavioral aspects of Attention Deficit/Hyperactivity Disorder (ADHD), we examined whether an existing genetic animal model of ADHD is valid for investigating not only locomotor hyperactivity, but also more complex motor coordination problems displayed by the majority of children with ADHD. Methods We subjected young adolescent Spontaneously Hypertensive Rats (SHRs), the most commonly used genetic animal model of ADHD, to a battery of tests for motor activity, gross motor coordination, and skilled reaching. Wistar (WIS) rats were used as controls. Results Similar to children with ADHD, young adolescent SHRs displayed locomotor hyperactivity in a familiar, but not in a novel environment. They also had lower performance scores in a complex skilled reaching task when compared to WIS rats, especially in the most sensitive measure of skilled performance (i.e., single attempt success). In contrast, their gross motor performance on a Rota-Rod test was similar to that of WIS rats. Conclusion The results support the notion that the SHR strain is a useful animal model system to investigate potential molecular mechanisms underlying fine motor skill problems in children with ADHD.

  15. Modelling and simulation of a heat exchanger

    Science.gov (United States)

    Xia, Lei; Deabreu-Garcia, J. Alex; Hartley, Tom T.

    1991-01-01

    Two models for two different control systems are developed for a parallel heat exchanger. First, by spatially lumping a heat exchanger model, a good approximate model with a high system order is produced. Model reduction techniques are then applied to this model to obtain low-order models that are suitable for dynamic analysis and control design. The simulation method is discussed to ensure a valid simulation result.

  16. Modeling and simulation of large HVDC systems

    Energy Technology Data Exchange (ETDEWEB)

    Jin, H.; Sood, V.K.

    1993-01-01

    This paper addresses the complexity and the amount of work involved in preparing simulation data and in implementing various converter control schemes, as well as the excessive simulation time involved in the modelling and simulation of large HVDC systems. The Power Electronic Circuit Analysis program (PECAN) is used to address these problems, and a large HVDC system with two dc links is simulated using PECAN. A benchmark HVDC system is studied to compare the simulation results with those from other packages. The simulation time and results are provided in the paper.

  17. MATH MODELING OF CAST FINE-GRAINED CONCRETE WITH INDUSTRIAL WASTES OF COPPER PRODUCTION

    Directory of Open Access Journals (Sweden)

    Tsybakin Sergey Valerievich

    2017-10-01

    Full Text Available Subject: the application of mineral microfillers based on technogenic wastes of non-ferrous metallurgy in the technology of cast and self-compacting concrete. The results of experiments by scientists from Russia, Kazakhstan, Poland and India show that granulated copper smelting slag can be used as a mineral additive of up to 30 % when grinding construction cements without significantly reducing the activity of the cements. However, there are no results of a comprehensive study of the influence of the slag on plastic concrete mixtures. Research objectives: establishment of a mathematical relationship describing the influence of copper slag on the compressive strength and density of concrete after 28 days of hardening in normal conditions, using the method of mathematical design of experiments; statistical processing of the results and verification of the adequacy of the developed model. Materials and methods: the mathematical experimental design was carried out as a full 4-factor experiment using a rotatable central composite design. The mathematical model is selected in the form of a second-degree polynomial in the four factors of the response function. Results: a 4-factor mathematical model of concrete strength and density after curing is created, and a regression equation is derived for the dependence of the 28-day strength and density on the concentration of the cement stone, the true water-cement ratio, the dosage of fine copper slag and the dosage of a superplasticizer based on polycarboxylate ethers. Statistical processing of the results of the mathematical design of experiments is carried out, and an estimate of the adequacy of the constructed mathematical model is obtained. Conclusions: it is established that the introduction of copper smelting slag in the range of 30…50 % by weight of cement positively affects the strength of concrete when used together with the superplasticizer. Increasing the dosage of superplasticizer above 0.16 % of the dry component leads to a decrease in the strength of cast
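
    A sketch of fitting the second-degree polynomial model to a coded four-factor design is given below; the design matrix and strength values are entirely synthetic placeholders standing in for the measured data.

      # Quadratic response surface fit on a coded 4-factor design.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.preprocessing import PolynomialFeatures

      # Coded factor levels: cement-stone concentration, true W/C ratio,
      # copper-slag dosage, superplasticizer dosage (placeholder design matrix).
      X = np.random.uniform(-1, 1, size=(31, 4))
      strength = 40 + 5 * X[:, 2] - 8 * X[:, 1] + np.random.normal(0, 1, 31)  # fake response

      quad = PolynomialFeatures(degree=2, include_bias=False)
      model = LinearRegression().fit(quad.fit_transform(X), strength)

      # Predict 28-day strength at a new factor combination (coded units)
      x_new = np.array([[0.0, -0.5, 0.3, 0.1]])
      print(model.predict(quad.transform(x_new)))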

  18. Numerical modeling of drying and consolidation of fine sediments and tailings

    NARCIS (Netherlands)

    Van der Meulen, J.; Van Tol, A.F.; Van Paassen, L.A.; Heimovaara, T.J.

    2012-01-01

    The extraction and processing of many mineral ores result in the generation of large volumes of fine-grained residue or tailings. These fine sediments are deposited as a slurry with very high water contents and lose water after deposition due to self-weight consolidation. When the surface is exposed

  19. Modeling and Simulation of Low Voltage Arcs

    NARCIS (Netherlands)

    Ghezzi, L.; Balestrero, A.

    2010-01-01

    Modeling and Simulation of Low Voltage Arcs is an attempt to improve the physical understanding, mathematical modeling and numerical simulation of the electric arcs that are found during current interruptions in low voltage circuit breakers. An empirical description is gained by refined electrical

  20. Dynamic magnetization models for soft ferromagnetic materials with coarse and fine domain structures

    Energy Technology Data Exchange (ETDEWEB)

    Zirka, S.E., E-mail: zirka@email.dp.ua [Department of Physics and Technology, Dnepropetrovsk National University, Gagarin 72, 49050 Dnepropetrovsk (Ukraine); Moroz, Y.I. [Department of Physics and Technology, Dnepropetrovsk National University, Gagarin 72, 49050 Dnepropetrovsk (Ukraine); Steentjes, S.; Hameyer, K. [Institute of Electrical Machines, RWTH Aachen University, Schinkelstr. 4, 52056 Aachen (Germany); Chwastek, K. [Faculty of Electrical Engineering, Czestochowa University of Technology, al. AK 17, 42-201 Czestochowa (Poland); Zurek, S. [Megger Instruments Ltd., Archcliffe Road, Dover, Kent, CT17 9EN (United Kingdom); Harrison, R.G. [Department of Electronics, Carleton University, Ottawa, Canada K1S 5B6 (Canada)

    2015-11-15

    We consider dynamic models, both numerical and analytical, that reproduce the magnetization field H(B) and the energy loss in ferromagnetic sheet materials with different domain structures. Conventional non-oriented (NO) and grain-oriented (GO) electrical steels are chosen as typical representatives of fine-domain and coarse-domain materials. The commonly-accepted loss separation procedures in these materials are critically analyzed. The use of a well-known simplified (“classical”) expression for the eddy-current loss is identified as the primary source of mistaken evaluations of excess loss in NO steel, in which the loss components can only be evaluated using the Maxwell (penetration) equation. The situation is quite different in GO steel, in which the loss separation is uncertain, but the total dynamic loss is several times higher than that explained by any version (numerical or analytical) of the classical approach. To illustrate the uncertainty of the loss separation in GO steel, we show that the magnetization field, and thus the total loss, in this material can be represented with equal accuracy using either the existing three-component approach or our proposed two-component technique, which makes no distinction between classical eddy-current and excess fields and losses. - Highlights: • Critical analysis of a ferromagnetic-material loss-separation principle. • This is to warn materials-science engineers about the inaccuracies resulting from this principle. • A transient model having a single dynamic component is proposed.
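
    For context, the “classical” eddy-current loss referred to above is usually written, for a lamination of thickness d and conductivity σ, in the standard textbook form (not taken from this paper)

      P_{\mathrm{cl}} \;=\; \frac{\sigma d^{2}}{12}\,\overline{\left(\frac{\mathrm{d}B}{\mathrm{d}t}\right)^{2}}
                    \;=\; \frac{\pi^{2}\sigma d^{2} f^{2} B_{p}^{2}}{6}
      \quad \text{(per unit volume, sinusoidal } B \text{ with peak } B_{p}\text{),}

    and the abstract's point is that this simplified expression leads to mistaken excess-loss evaluations in NO steel and falls well short of the observed total dynamic loss in GO steel.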

  1. Model improvements to simulate charging in SEM

    Science.gov (United States)

    Arat, K. T.; Klimpel, T.; Hagen, C. W.

    2018-03-01

    Charging of insulators is a complex phenomenon to simulate since the accuracy of the simulations is very sensitive to the interaction of electrons with matter and electric fields. In this study, we report model improvements for a previously developed Monte-Carlo simulator to more accurately simulate samples that charge. The improvements include both modelling of low energy electron scattering and charging of insulators. The new first-principle scattering models provide a more realistic charge distribution cloud in the material, and a better match between non-charging simulations and experimental results. Improvements on charging models mainly focus on redistribution of the charge carriers in the material with an induced conductivity (EBIC) and a breakdown model, leading to a smoother distribution of the charges. Combined with a more accurate tracing of low energy electrons in the electric field, we managed to reproduce the dynamically changing charging contrast due to an induced positive surface potential.

  2. Whole-building Hygrothermal Simulation Model

    DEFF Research Database (Denmark)

    Rode, Carsten; Grau, Karl

    2003-01-01

    An existing integrated simulation tool for dynamic thermal simulation of building was extended with a transient model for moisture release and uptake in building materials. Validation of the new model was begun with comparison against measurements in an outdoor test cell furnished with single...... materials. Almost quasi-steady, cyclic experiments were used to compare the indoor humidity variation and the numerical results of the integrated simulation tool with the new moisture model. Except for the case with chipboard as furnishing, the predictions of indoor humidity with the detailed model were...

  3. Standard model and fine structure constant at Planck distances in the Bennett-Brene-Nielsen-Picek random dynamics

    International Nuclear Information System (INIS)

    Laperashvili, L.V.

    1994-01-01

    The first part of the present paper contains a review of papers by Nielsen, Bennett, Brene and Picek which underlie the model called random dynamics. The second part of the paper is devoted to calculating the fine structure constant by means of path integration in U(1) lattice gauge theory

  4. Modeling Of In-Vehicle Human Exposure to Ambient Fine Particulate Matter

    Science.gov (United States)

    Liu, Xiaozhen; Frey, H. Christopher

    2012-01-01

    A method for estimating in-vehicle PM2.5 exposure as part of a scenario-based population simulation model is developed and assessed. In existing models, such as the Stochastic Exposure and Dose Simulation model for Particulate Matter (SHEDS-PM), in-vehicle exposure is estimated using linear regression based on area-wide ambient PM2.5 concentration. An alternative modeling approach is explored based on estimation of near-road PM2.5 concentration and an in-vehicle mass balance. Near-road PM2.5 concentration is estimated using a dispersion model and fixed site monitor (FSM) data. In-vehicle concentration is estimated based on air exchange rate and filter efficiency. In-vehicle concentration varies with road type, traffic flow, windspeed, stability class, and ventilation. Average in-vehicle exposure is estimated to contribute 10 to 20 percent of average daily exposure. The contribution of in-vehicle exposure to total daily exposure can be higher for some individuals. Recommendations are made for updating exposure models and implementation of the alternative approach. PMID:23101000
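
    A minimal form of the in-vehicle mass balance mentioned above (the notation is assumed here, not necessarily the authors') relates the cabin concentration to the near-road concentration through the air exchange rate a, the filter efficiency η and a deposition rate k:

      \frac{\mathrm{d}C_{\mathrm{in}}}{\mathrm{d}t} \;=\; a\,(1-\eta)\,C_{\mathrm{out}} \;-\; (a + k)\,C_{\mathrm{in}},
      \qquad
      C_{\mathrm{in}}^{\mathrm{ss}} \;=\; \frac{a\,(1-\eta)}{a + k}\,C_{\mathrm{out}}.

    In this form, ventilation setting enters through a and η, which is consistent with the abstract's statement that the in-vehicle concentration varies with ventilation in addition to road type, traffic and meteorology.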

  5. Simulation modeling for the health care manager.

    Science.gov (United States)

    Kennedy, Michael H

    2009-01-01

    This article addresses the use of simulation software to solve administrative problems faced by health care managers. Spreadsheet add-ins, process simulation software, and discrete event simulation software are available at a range of costs and complexity. All use the Monte Carlo method to realistically integrate probability distributions into models of the health care environment. Problems typically addressed by health care simulation modeling are facility planning, resource allocation, staffing, patient flow and wait time, routing and transportation, supply chain management, and process improvement.

  6. Protein Simulation Data in the Relational Model.

    Science.gov (United States)

    Simms, Andrew M; Daggett, Valerie

    2012-10-01

    High performance computing is leading to unprecedented volumes of data. Relational databases offer a robust and scalable model for storing and analyzing scientific data. However, these features do not come without a cost: significant design effort is required to build a functional and efficient repository. Modeling protein simulation data in a relational database presents several challenges: the data captured from individual simulations are large, multi-dimensional, and must integrate with both simulation software and external data sites. Here we present the dimensional design and relational implementation of a comprehensive data warehouse for storing and analyzing molecular dynamics simulations using SQL Server.

  7. Modeling and simulation of blood collection systems.

    Science.gov (United States)

    Alfonso, Edgar; Xie, Xiaolan; Augusto, Vincent; Garraud, Olivier

    2012-03-01

    This paper addresses the modeling and simulation of blood collection systems in France for both fixed site and mobile blood collection with walk in whole blood donors and scheduled plasma and platelet donors. Petri net models are first proposed to precisely describe different blood collection processes, donor behaviors, their material/human resource requirements and relevant regulations. Petri net models are then enriched with quantitative modeling of donor arrivals, donor behaviors, activity times and resource capacity. Relevant performance indicators are defined. The resulting simulation models can be straightforwardly implemented with any simulation language. Numerical experiments are performed to show how the simulation models can be used to select, for different walk in donor arrival patterns, appropriate human resource planning and donor appointment strategies.

  8. Modeling and Simulation of Matrix Converter

    DEFF Research Database (Denmark)

    Liu, Fu-rong; Klumpner, Christian; Blaabjerg, Frede

    2005-01-01

    This paper discusses the modeling and simulation of the matrix converter. Two models of the matrix converter are presented: one is based on indirect space vector modulation and the other is based on the power balance equation. The basis of these two models is given and the modeling process is introduced...

  9. Representation of fine scale atmospheric variability in a nudged limited area quasi-geostrophic model: application to regional climate modelling

    Science.gov (United States)

    Omrani, H.; Drobinski, P.; Dubos, T.

    2009-09-01

    In this work, we consider the effect of indiscriminate nudging time on the large and small scales of an idealized limited area model simulation. The limited area model is a two layer quasi-geostrophic model on the beta-plane driven at its boundaries by its « global » version with periodic boundary condition. This setup mimics the configuration used for regional climate modelling. Compared to a previous study by Salameh et al. (2009) who investigated the existence of an optimal nudging time minimizing the error on both large and small scale in a linear model, we here use a fully non-linear model which allows us to represent the chaotic nature of the atmosphere: given the perfect quasi-geostrophic model, errors in the initial conditions, concentrated mainly in the smaller scales of motion, amplify and cascade into the larger scales, eventually resulting in a prediction with low skill. To quantify the predictability of our quasi-geostrophic model, we measure the rate of divergence of the system trajectories in phase space (Lyapunov exponent) from a set of simulations initiated with a perturbation of a reference initial state. Predictability of the "global", periodic model is mostly controlled by the beta effect. In the LAM, predictability decreases as the domain size increases. Then, the effect of large-scale nudging is studied by using the "perfect model” approach. Two sets of experiments were performed: (1) the effect of nudging is investigated with a « global » high resolution two layer quasi-geostrophic model driven by a low resolution two layer quasi-geostrophic model. (2) similar simulations are conducted with the two layer quasi-geostrophic LAM where the size of the LAM domain comes into play in addition to the first set of simulations. In the two sets of experiments, the best spatial correlation between the nudge simulation and the reference is observed with a nudging time close to the predictability time.
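
    The Lyapunov-exponent diagnostic described in this record can be illustrated generically; the sketch below estimates the largest Lyapunov exponent from the divergence of a perturbed trajectory, using a logistic map only as a toy stand-in for the quasi-geostrophic model, which is not reproduced here.

```python
import math

# Generic sketch of estimating the largest Lyapunov exponent from the
# rate of divergence of a perturbed trajectory. The logistic map is a
# toy stand-in for the quasi-geostrophic dynamics discussed above.

def logistic(x, r=3.9):
    return r * x * (1.0 - x)

def largest_lyapunov(x0=0.4, eps=1e-9, n_steps=2000):
    x, y = x0, x0 + eps            # reference and perturbed trajectories
    log_growth = 0.0
    for _ in range(n_steps):
        x, y = logistic(x), logistic(y)
        d = abs(y - x)
        log_growth += math.log(d / eps)
        y = x + math.copysign(eps, y - x)   # renormalize the perturbation
    return log_growth / n_steps            # mean exponential growth rate per step

print(f"estimated largest Lyapunov exponent: {largest_lyapunov():.3f}")
```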

  10. Diagnostic Air Quality Model Evaluation of Source-Specific Primary and Secondary Fine Particulate Carbon

    Science.gov (United States)

    Ambient measurements of 78 source-specific tracers of primary and secondary carbonaceous fine particulate matter collected at four midwestern United States locations over a full year (March 2004–February 2005) provided an unprecedented opportunity to diagnostically evaluate...

  11. Fine Scale ANUClimate Data for Ecosystem Modeling and Assessment of Plant Functional Types

    Science.gov (United States)

    Hutchinson, M. F.; Kesteven, J. L.; Xu, T.; Evans, B. J.; Togashi, H. F.; Stein, J. L.

    2015-12-01

    High resolution spatially extended values of climate variables play a central role in the assessment of climate and projected future climate in ecosystem modeling. The ground based meteorological network remains a key resource for deriving these spatially extended climate variables. We report on the production, and applications, of new anomaly based fine scale spatial interpolations of key climate variables at daily and monthly time scale, across the Australian continent. The methods incorporate several innovations that have significantly improved spatial predictive accuracy, as well as providing a platform for the incorporation of additional remotely sensed data. The interpolated climate data are supporting many continent-wide ecosystem modeling applications and are playing a key role in testing optimality hypotheses associated with plant functional types (PFTs). The accuracy, and robustness to data error, of anomaly-based interpolation has been enhanced by incorporating physical process aspects of the different climate variables and employing robust statistical methods implemented in the ANUSPLIN package. New regression procedures have also been developed to estimate "background" monthly climate normals from all stations with minimal records to substantially increase the density of supporting spatial networks. Monthly mean temperature interpolation has been enhanced by incorporating process based coastal effects that have reduced predictive error by around 10%. Overall errors in interpolated monthly temperature fields are around 25% less than errors reported by an earlier study. For monthly and daily precipitation, a new anomaly structure has been devised to take account of the skewness in precipitation data and the large proportion of zero values that present significant challenges to standard interpolation methods. The many applications include continent-wide Gross Primary Production modeling and assessing constraints on light and water use efficiency derived

  12. Simulation models for tokamak plasmas

    International Nuclear Information System (INIS)

    Dimits, A.M.; Cohen, B.I.

    1992-01-01

    Two developments in the nonlinear simulation of tokamak plasmas are described: (A) Simulation algorithms that use quasiballooning coordinates have been implemented in a 3D fluid code and a 3D partially linearized (Δf) particle code. In quasiballooning coordinates, one of the coordinate directions is closely aligned with that of the magnetic field, allowing both optimal use of the grid resolution for structures highly elongated along the magnetic field and implementation of the correct periodicity conditions with no discontinuities in the toroidal direction. (B) Progress on the implementation of a like-particle collision operator suitable for use in partially linearized particle codes is reported. The binary collision approach is shown to be unusable for this purpose. The algorithm under development is a complete version of the test-particle plus source-field approach that was suggested and partially implemented by Xu and Rosenbluth.

  13. A model management system for combat simulation

    OpenAIRE

    Dolk, Daniel R.

    1986-01-01

    The design and implementation of a model management system to support combat modeling is discussed. Structured modeling is introduced as a formalism for representing mathematical models. A relational information resource dictionary system is developed which can accommodate structured models. An implementation is described. Structured modeling is then compared to Jackson System Development (JSD) as a methodology for facilitating discrete event simulation. JSD is currently better at representin...

  14. HVDC System Characteristics and Simulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Moon, S.I.; Han, B.M.; Jang, G.S. [Electric Engineering and Science Research Institute, Seoul (Korea)]

    2001-07-01

    This report deals with the AC-DC power system simulation method by PSS/E and EUROSTAG for the development of a strategy for the reliable operation of the Cheju-Haenam interconnected system. The simulation using both programs is performed to analyze HVDC simulation models. In addition, the control characteristics of the Cheju-Haenam HVDC system as well as Cheju AC system characteristics are described in this work. (author). 104 figs., 8 tabs.

  15. Physically realistic modeling of maritime training simulation

    OpenAIRE

    Cieutat , Jean-Marc

    2003-01-01

    Maritime training simulation is an important part of maritime teaching and requires a wide range of scientific and technical skills. In this framework, where the real-time constraint has to be maintained, not all physical phenomena can be studied; only the most visually significant phenomena, relating to the natural elements and to ship behaviour, are reproduced. Our swell model, based on a surface wave simulation approach, makes it possible to simulate the shape and the propagation of a regular train of waves f...

  16. Software-Engineering Process Simulation (SEPS) model

    Science.gov (United States)

    Lin, C. Y.; Abdel-Hamid, T.; Sherif, J. S.

    1992-01-01

    The Software Engineering Process Simulation (SEPS) model, developed at JPL, is described. SEPS is a dynamic simulation model of the software project development process. It uses the feedback principles of system dynamics to simulate the dynamic interactions among various software life cycle development activities and management decision-making processes. The model is designed to be a planning tool to examine tradeoffs of cost, schedule, and functionality, and to test the implications of different managerial policies on a project's outcome. Furthermore, SEPS will enable software managers to gain a better understanding of the dynamics of software project development and to perform postmortem assessments.

  17. Systematic modelling and simulation of refrigeration systems

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1998-01-01

    The task of developing a simulation model of a refrigeration system can be very difficult and time consuming. In order for this process to be effective, a systematic method for developing the system model is required. This method should aim at guiding the developer to clarify the purpose of the simulation, to select appropriate component models and to set up the equations in a well-arranged way. In this paper the outline of such a method is proposed and examples showing the use of this method for simulation of refrigeration systems are given.

  18. Development of fine-resolution analyses and expanded large-scale forcing properties: 2. Scale awareness and application to single-column model experiments

    Science.gov (United States)

    Feng, Sha; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Vogelmann, Andrew M.; Endo, Satoshi

    2015-01-01

    three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multiscale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component over the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.

  19. Validity of a traffic air pollutant dispersion model to assess exposure to fine particles.

    Science.gov (United States)

    Kostrzewa, Aude; Reungoat, Patrice; Raherison, Chantal

    2009-08-01

    Fine particles (PM(2.5)) are an important component of air pollution. Epidemiological studies have shown health effects due to ambient air particles, particularly allergies in children. Since the main difficulty is to determine exposure to such pollution, traffic air pollutant (TAP) dispersions models have been developed to improve the estimation of individual exposure levels. One such model, the ExTra index, has been validated for nitrogen oxide concentrations but not for other pollutants. The purpose of this study was to assess the validity of the ExTra index to assess PM(2.5) exposure. We compared PM(2.5) concentrations calculated by the ExTra index to reference measures (passive samplers situated under the covered part of the playground), in 15 schools in Bordeaux, in 2000. First, we collected the input data required by the ExTra index: background and local pollution depending on traffic, meteorology and topography. Second, the ExTra index was calculated for each school. Statistical analysis consisted of a graphic description; then, we calculated an intraclass correlation coefficient. Concentrations calculated with the ExTra index and the reference method were similar. The ExTra index underestimated exposure by 2.2 microg m(-3) on average compared to the reference method. The intraclass correlation coefficient was 0.85 and its 95% confidence interval was [0.62; 0.95]. The results suggest that the ExTra index provides an assessment of PM(2.5) exposure similar to that of the reference method. Although caution is required in interpreting these results owing to the small number of sites, the ExTra index could be a useful epidemiological tool for reconstructing individual exposure, an important challenge in epidemiology.

  20. Deriving simulators for hybrid Chi models

    NARCIS (Netherlands)

    Beek, van D.A.; Man, K.L.; Reniers, M.A.; Rooda, J.E.; Schiffelers, R.R.H.

    2006-01-01

    The hybrid Chi language is a formalism for modeling, simulation and verification of hybrid systems. The formal semantics of hybrid Chi allows the definition of provably correct implementations for simulation, verification and real-time control. This paper discusses the principles of deriving an...

  1. Modeling and simulation for RF system design

    CERN Document Server

    Frevert, Ronny; Jancke, Roland; Knöchel, Uwe; Schwarz, Peter; Kakerow, Ralf; Darianian, Mohsen

    2005-01-01

    Focusing on RF-specific modeling and simulation methods, and on system and circuit level descriptions, this work contains application-oriented training material. Accompanied by a CD-ROM, it combines the presentation of a mixed-signal design flow, an introduction to VHDL-AMS and Verilog-A, and the application of commercially available simulators.

  2. Magnetosphere Modeling: From Cartoons to Simulations

    Science.gov (United States)

    Gombosi, T. I.

    2017-12-01

    Over the last half a century physics-based global computer simulations became a bridge between experiment and basic theory and now it represents the "third pillar" of geospace research. Today, many of our scientific publications utilize large-scale simulations to interpret observations, test new ideas, plan campaigns, or design new instruments. Realistic simulations of the complex Sun-Earth system have been made possible by the dramatically increased power of both computing hardware and numerical algorithms. Early magnetosphere models were based on simple E&M concepts (like the Chapman-Ferraro cavity) and hydrodynamic analogies (bow shock). At the beginning of the space age current system models were developed culminating in the sophisticated Tsyganenko-type description of the magnetic configuration. The first 3D MHD simulations of the magnetosphere were published in the early 1980s. A decade later there were several competing global models that were able to reproduce many fundamental properties of the magnetosphere. The leading models included the impact of the ionosphere by using a height-integrated electric potential description. Dynamic coupling of global and regional models started in the early 2000s by integrating a ring current and a global magnetosphere model. It has been recognized for quite some time that plasma kinetic effects play an important role. Presently, global hybrid simulations of the dynamic magnetosphere are expected to be possible on exascale supercomputers, while fully kinetic simulations with realistic mass ratios are still decades away. In the 2010s several groups started to experiment with PIC simulations embedded in large-scale 3D MHD models. Presently this integrated MHD-PIC approach is at the forefront of magnetosphere simulations and this technique is expected to lead to some important advances in our understanding of magnetosheric physics. This talk will review the evolution of magnetosphere modeling from cartoons to current systems

  3. Modeling and simulation of complex systems a framework for efficient agent-based modeling and simulation

    CERN Document Server

    Siegfried, Robert

    2014-01-01

    Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares in detail different approaches for describing the structure and dynamics of agent-based models. Based on this evaluation the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore he presents parallel and distributed simulation approaches for the execution of agent-based models, from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize underlying hard...

  4. Simulation and Modeling of Flow in a Gas Compressor

    Directory of Open Access Journals (Sweden)

    Anna Avramenko

    2015-01-01

    The presented research demonstrates the results of a series of numerical simulations of gas flow through a single-stage centrifugal compressor with a vaneless diffuser. Numerical results were validated with experiments consisting of eight regimes with different mass flow rates. The steady-state and unsteady simulations were done in ANSYS FLUENT 13.0 and NUMECA FINE/TURBO 8.9.1 for one-period geometry due to periodicity of the problem. First-order discretization is insufficient due to strong dissipation effects. Results obtained with second-order discretization agree with the experiments for the steady-state case in the region of high mass flow rates. In the area of low mass flow rates, nonstationary effects significantly influence the flow, leading the stationary model to poor predictions. Therefore, the unsteady simulations were performed in the region of low mass flow rates. The calculated results were compared with experimental data. The numerical simulation method in this paper can be used to predict compressor performance.

  5. NUMERICAL SIMULATION AND MODELING OF UNSTEADY FLOW ...

    African Journals Online (AJOL)

    2014-06-30

    The objective of this study is to control the simulation of unsteady flows around structures. ... Our results were in good agreement with experimental ... Two-Equation Eddy-Viscosity Turbulence Models for Engineering.

  6. SEIR model simulation for Hepatitis B

    Science.gov (United States)

    Side, Syafruddin; Irwan, Mulbar, Usman; Sanusi, Wahidah

    2017-09-01

    Mathematical modelling and simulation for Hepatitis B are discussed in this paper. The population is divided into four compartments, namely Susceptible, Exposed, Infected and Recovered (SEIR). Several factors affect the population in this model: vaccination, immigration and emigration occurring in the population. The SEIR model yields a four-dimensional system of non-linear ordinary differential equations (ODEs), which is then reduced to three dimensions. The SEIR model simulation is undertaken to predict the number of Hepatitis B cases. The simulation results indicate that the number of Hepatitis B cases will increase and then decrease over several months. A simulation using the case numbers from Makassar also found a basic reproduction number of less than one, which means that Makassar city is not an endemic area for Hepatitis B.
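
    A minimal sketch of the kind of SEIR system described in this record is given below; the rate constants and the constant-rate vaccination term are illustrative assumptions, not the values fitted for Makassar in the paper.

```python
# Minimal SEIR sketch with vaccination treated as a constant removal of
# susceptibles, integrated with a simple Euler step. All rate constants
# are illustrative assumptions, not the paper's fitted values.

def seir_step(s, e, i, r, beta=0.4, sigma=0.1, gamma=0.05, vacc=0.001, dt=1.0):
    n = s + e + i + r
    new_inf = beta * s * i / n          # new infections per time step
    ds = -new_inf - vacc * s
    de = new_inf - sigma * e            # sigma: rate of becoming infectious
    di = sigma * e - gamma * i          # gamma: recovery rate
    dr = gamma * i + vacc * s
    return s + ds * dt, e + de * dt, i + di * dt, r + dr * dt

# Start with 1% infected; ignoring vaccination, R0 is roughly beta/gamma here.
s, e, i, r = 0.99, 0.0, 0.01, 0.0
for day in range(365):
    s, e, i, r = seir_step(s, e, i, r)
print(f"after one year: S={s:.3f} E={e:.3f} I={i:.3f} R={r:.3f}")
```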

  7. Maintenance Personnel Performance Simulation (MAPPS) model

    International Nuclear Information System (INIS)

    Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Knee, H.E.; Haas, P.M.

    1984-01-01

    A stochastic computer model for simulating the actions and behavior of nuclear power plant maintenance personnel is described. The model considers personnel, environmental, and motivational variables to yield predictions of maintenance performance quality and time to perform. The model has been fully developed and sensitivity tested. Additional evaluation of the model is now taking place.

  8. Computer simulations of the random barrier model

    DEFF Research Database (Denmark)

    Schrøder, Thomas; Dyre, Jeppe

    2002-01-01

    A brief review of experimental facts regarding ac electronic and ionic conduction in disordered solids is given followed by a discussion of what is perhaps the simplest realistic model, the random barrier model (symmetric hopping model). Results from large scale computer simulations are presented...

  9. Turbine modelling for real time simulators

    International Nuclear Information System (INIS)

    Oliveira Barroso, A.C. de; Araujo Filho, F. de

    1992-01-01

    A model for vapor turbines and their peripherals has been developed. All the important variables have been included, and emphasis has been placed on computational efficiency to obtain a model able to simulate all the modelled equipment. (A.C.A.S.)

  10. Theory, modeling, and simulation annual report, 1992

    Energy Technology Data Exchange (ETDEWEB)

    1993-05-01

    This report briefly discusses research on the following topics: development of electronic structure methods; modeling molecular processes in clusters; modeling molecular processes in solution; modeling molecular processes in separations chemistry; modeling interfacial molecular processes; modeling molecular processes in the atmosphere; methods for periodic calculations on solids; chemistry and physics of minerals; graphical user interfaces for computational chemistry codes; visualization and analysis of molecular simulations; integrated computational chemistry environment; and benchmark computations.

  11. Modeling and simulation with operator scaling

    OpenAIRE

    Cohen, Serge; Meerschaert, Mark M.; Rosiński, Jan

    2010-01-01

    Self-similar processes are useful in modeling diverse phenomena that exhibit scaling properties. Operator scaling allows a different scale factor in each coordinate. This paper develops practical methods for modeling and simulating stochastic processes with operator scaling. A simulation method for operator stable Levy processes is developed, based on a series representation, along with a Gaussian approximation of the small jumps. Several examples are given to illustrate practical application...

  12. Mathematical Model of Transfer and Deposition of Finely Dispersed Particles in a Turbulent Flow of Emulsions and Suspensions

    Science.gov (United States)

    Laptev, A. G.; Basharov, M. M.

    2018-05-01

    The problem of modeling the turbulent transfer of finely dispersed particles in liquids is considered. An approach is used in which particle transport is represented as a form of diffusion process with a coefficient of turbulent transfer toward the wall. Differential transfer equations are written for different cases, and a solution of the cell model is obtained for calculating the separation efficiency in a channel. Based on the theory of turbulent particle transfer and on the boundary layer model, an expression has been obtained for calculating the rate of turbulent deposition of finely dispersed particles. The application of this expression to determining the efficiency of physical coagulation of emulsions in different channels and on the surface of random packings is shown.

  13. Modeling of magnetic particle suspensions for simulations

    CERN Document Server

    Satoh, Akira

    2017-01-01

    The main objective of the book is to highlight the modeling of magnetic particles with different shapes and magnetic properties, to provide graduate students and young researchers information on the theoretical aspects and actual techniques for the treatment of magnetic particles in particle-based simulations. In simulation, we focus on the Monte Carlo, molecular dynamics, Brownian dynamics, lattice Boltzmann and stochastic rotation dynamics (multi-particle collision dynamics) methods. The latter two simulation methods can simulate both the particle motion and the ambient flow field simultaneously. In general, specialized knowledge can only be obtained in an effective manner under the supervision of an expert. The present book is written to play such a role for readers who wish to develop the skill of modeling magnetic particles and develop a computer simulation program using their own ability. This book is therefore a self-learning book for graduate students and young researchers. Armed with this knowledge,...

  14. Parameter and model uncertainty in a life-table model for fine particles (PM2.5): a statistical modeling study.

    Science.gov (United States)

    Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha

    2007-08-23

    The estimation of health impacts involves often uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with model, and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and parameter uncertainties (iii) exposure-response coefficients for different mortality outcomes, and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation on the health effect uncertainties. The magnitude of the health effects costs depended mostly on discount rate, exposure-response coefficient, and plausibility of the cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in the fine particle impact assessment when compared with other uncertainties. When estimating life-expectancy, the estimates used for cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful

  15. Parameter and model uncertainty in a life-table model for fine particles (PM2.5: a statistical modeling study

    Directory of Open Access Journals (Sweden)

    Jantunen Matti J

    2007-08-01

    Background: The estimation of health impacts involves often uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model, and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Methods: Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and parameter uncertainties (iii) exposure-response coefficients for different mortality outcomes, and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation on the health effect uncertainties. Results: The magnitude of the health effects costs depended mostly on discount rate, exposure-response coefficient, and plausibility of the cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in the fine particle impact assessment when compared with other uncertainties. Conclusion: When estimating life-expectancy, the estimates used for cardiopulmonary exposure...
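
    The central calculation in such a life-table model can be sketched as follows: baseline mortality hazards are scaled by a relative risk derived from an exposure-response coefficient, and the resulting life expectancies are compared. All numbers below are illustrative placeholders, not the study's inputs.

```python
# Schematic life-table calculation: scale baseline mortality hazards by a
# relative risk from an exposure-response coefficient, then compare life
# expectancies. All numbers are illustrative, not the study's inputs.

def life_expectancy(hazards):
    """Life expectancy at birth from annual mortality hazards (one value per year of age)."""
    survival, total = 1.0, 0.0
    for h in hazards:
        total += survival * (1.0 - 0.5 * h)   # approximate person-years lived this year
        survival *= (1.0 - h)
    return total

baseline = [0.0005 * (1.05 ** age) for age in range(100)]   # toy, Gompertz-like hazards

rr_per_10ug = 1.062        # e.g. a 6.2% increase per 10 ug/m3 PM2.5
exposure = 8.0             # ug/m3, assumed population exposure
rr = rr_per_10ug ** (exposure / 10.0)
exposed = [min(h * rr, 1.0) for h in baseline]

print(f"life expectancy lost: {life_expectancy(baseline) - life_expectancy(exposed):.2f} years")
```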

  16. Modelling and Simulation of Wave Loads

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    A simple model of the wave load on slender members of offshore structures is described. The wave elevation of the sea state is modelled by a stationary Gaussian process. A new procedure to simulate realizations of the wave loads is developed. The simulation method assumes that the wave particle velocity can be approximated by a Gaussian Markov process. Known approximate results for the first-passage density or, equivalently, the distribution of the extremes of wave loads are presented and compared with rather precise simulation results. It is demonstrated that the approximate results...

  17. Modelling and Simulation of Wave Loads

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    1985-01-01

    A simple model of the wave load on slender members of offshore structures is described. The wave elevation of the sea state is modelled by a stationary Gaussian process. A new procedure to simulate realizations of the wave loads is developed. The simulation method assumes that the wave particle velocity can be approximated by a Gaussian Markov process. Known approximate results for the first-passage density or, equivalently, the distribution of the extremes of wave loads are presented and compared with rather precise simulation results. It is demonstrated that the approximate results...

  18. Modeling and simulation of discrete event systems

    CERN Document Server

    Choi, Byoung Kyu

    2013-01-01

    Computer modeling and simulation (M&S) allows engineers to study and analyze complex systems. Discrete-event system (DES)-M&S is used in modern management, industrial engineering, computer science, and the military. As computer speeds and memory capacity increase, so DES-M&S tools become more powerful and more widely used in solving real-life problems. Based on over 20 years of evolution within a classroom environment, as well as on decades-long experience in developing simulation-based solutions for high-tech industries, Modeling and Simulation of Discrete-Event Systems is the only book on

  19. Minimum-complexity helicopter simulation math model

    Science.gov (United States)

    Heffley, Robert K.; Mnich, Marc A.

    1988-01-01

    An example of a minimal complexity simulation helicopter math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling qualities features which are apparent to the simulator pilot. The technical approach begins with specification of features which are to be modeled, followed by a build up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinement are discussed. Math model computer programs are defined and listed.

  20. Formulating Fine to Medium Sand Erosion for Suspended Sediment Transport Models

    Directory of Open Access Journals (Sweden)

    François Dufois

    2015-08-01

    The capacity of an advection/diffusion model to predict sand transport under varying wave and current conditions is evaluated. The horizontal sand transport rate is computed by vertical integration of the suspended sediment flux. A correction procedure for the near-bed concentration is proposed so that model results are independent of the vertical resolution. The method can thus be implemented in regional models with operational applications. Simulating equilibrium sand transport rates, when erosion and deposition are balanced, requires a new empirical erosion law that involves the non-dimensional excess shear stress and a parameter that depends on the size of the sand grain. Comparison with several datasets and sediment transport formulae demonstrated the model's capacity to simulate sand transport rates for a large range of current and wave conditions and sand diameters in the range 100–500 μm. Measured transport rates were predicted within a factor of two in 67% of cases with current only and in 35% of cases with both waves and current. In comparison with the results obtained by Camenen and Larroudé (2003), who provided the same indicators for several practical transport rate formulations (whose means are respectively 72% and 37%), the proposed approach gives reasonable results. Before fitting a new erosion law to our model, classical erosion rate formulations were tested but led to poor comparisons with expected sediment transport rates. We suggest that classical erosion laws should be used with care in advection/diffusion models similar to ours, and that at least a full validation procedure for transport rates involving a range of sand diameters and hydrodynamic conditions should be carried out.
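
    A generic excess-shear-stress erosion law of the kind referred to in this record can be written in a few lines; the coefficient and exponent below are placeholders, not the values fitted in the paper.

```python
# Generic excess-shear-stress erosion law of the form often used in
# advection/diffusion sand transport models; the coefficient and exponent
# are placeholders, not the values fitted in the paper.

def erosion_rate(tau, tau_crit, coeff, exponent=1.5):
    """Erosion flux (kg m-2 s-1) from bed shear stress tau and critical stress tau_crit (Pa)."""
    excess = max(tau / tau_crit - 1.0, 0.0)   # non-dimensional excess shear stress
    return coeff * excess ** exponent

# Example: 0.5 Pa bed shear stress over 200-micron sand (tau_crit ~ 0.18 Pa assumed).
print(erosion_rate(tau=0.5, tau_crit=0.18, coeff=1e-4))
```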

  1. Use of spatially distributed time-integrated sediment sampling networks and distributed fine sediment modelling to inform catchment management.

    Science.gov (United States)

    Perks, M T; Warburton, J; Bracken, L J; Reaney, S M; Emery, S B; Hirst, S

    2017-11-01

    Under the EU Water Framework Directive, suspended sediment is omitted from environmental quality standards and compliance targets. This omission is partly explained by difficulties in assessing the complex dose-response of ecological communities. But equally, it is hindered by a lack of spatially distributed estimates of suspended sediment variability across catchments. In this paper, we demonstrate the inability of traditional, discrete sampling campaigns for assessing exposure to fine sediment. Sampling frequencies based on Environmental Quality Standard protocols, whilst reflecting typical manual sampling constraints, are unable to determine the magnitude of sediment exposure with an acceptable level of precision. Deviations from actual concentrations range between -35 and +20% based on the interquartile range of simulations. As an alternative, we assess the value of low-cost, suspended sediment sampling networks for quantifying suspended sediment transfer (SST). In this study of the 362 km2 upland Esk catchment we observe that spatial patterns of sediment flux are consistent over the two year monitoring period across a network of 17 monitoring sites. This enables the key contributing sub-catchments of Butter Beck (SST: 1141 t km-2 yr-1) and Glaisdale Beck (SST: 841 t km-2 yr-1) to be identified. The time-integrated samplers offer a feasible alternative to traditional infrequent and discrete sampling approaches for assessing spatio-temporal changes in contamination. In conjunction with a spatially distributed diffuse pollution model (SCIMAP), time-integrated sediment sampling is an effective means of identifying critical sediment source areas in the catchment, which can better inform sediment management strategies for pollution prevention and control.

  2. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Computer Based ... universities, and later did system analysis, ... personal computers (PC) and low-cost software packages and tools. They can serve as a useful learning experience through student projects. Models are ... Let us consider a numerical example: to calculate the velocity of a trainer aircraft ...

  3. Thermal unit availability modeling in a regional simulation model

    International Nuclear Information System (INIS)

    Yamayee, Z.A.; Port, J.; Robinett, W.

    1983-01-01

    The System Analysis Model (SAM), developed under the umbrella of PNUCC's System Analysis Committee, is capable of simulating the operation of a given load/resource scenario. This model employs a Monte Carlo simulation to incorporate uncertainties. Among the uncertainties modeled is thermal unit availability, both for energy simulations (seasonal) and capacity simulations (hourly). This paper presents the availability modeling in the capacity and energy models. The use of regional and national data in deriving the two availability models, the interaction between the two, and modifications made to the capacity model in order to reflect regional practices are presented. A sample problem is presented to show the modification process. Results for modeling a nuclear unit using NERC-GADS data are presented.
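
    The hourly availability sampling that such a capacity model performs can be sketched as a two-state (available/forced-out) Markov chain; the forced outage rate and mean repair time below are illustrative assumptions, not NERC-GADS statistics.

```python
import random

# Sketch of hourly two-state availability sampling for a thermal unit,
# as used in capacity-type Monte Carlo simulations. The forced outage
# rate and mean repair time are illustrative, not NERC-GADS values.

def sample_hourly_availability(hours=8760, forced_outage_rate=0.08,
                               mean_repair_hours=48.0, seed=7):
    random.seed(seed)
    # choose the failure rate so the long-run unavailable fraction matches the FOR
    fail_per_hour = forced_outage_rate / ((1.0 - forced_outage_rate) * mean_repair_hours)
    available, series = True, []
    for _ in range(hours):
        if available and random.random() < fail_per_hour:
            available = False                              # forced outage begins
        elif not available and random.random() < 1.0 / mean_repair_hours:
            available = True                               # unit returns to service
        series.append(available)
    return series

series = sample_hourly_availability()
print(f"simulated availability: {sum(series) / len(series):.3f}")
```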

  4. Impact of vehicular emissions on the formation of fine particles in the Sao Paulo Metropolitan Area: a numerical study with the WRF-Chem model

    Directory of Open Access Journals (Sweden)

    A. Vara-Vela

    2016-01-01

    The objective of this work is to evaluate the impact of vehicular emissions on the formation of fine particles (PM2.5; ≤ 2.5 µm in diameter) in the Sao Paulo Metropolitan Area (SPMA) in Brazil, where ethanol is used intensively as a fuel in road vehicles. The Weather Research and Forecasting with Chemistry (WRF-Chem) model, which simulates feedbacks between meteorological variables and chemical species, is used as a photochemical modelling tool to describe the physico-chemical processes leading to the evolution of number and mass size distribution of particles through gas-to-particle conversion. A vehicular emission model based on statistical information of vehicular activity is applied to simulate vehicular emissions over the studied area. The simulation has been performed for a 1-month period (7 August–6 September 2012) to cover the availability of experimental data from the NUANCE-SPS (Narrowing the Uncertainties on Aerosol and Climate Changes in Sao Paulo State) project that aims to characterize emissions of atmospheric aerosols in the SPMA. The availability of experimental measurements of atmospheric aerosols and the application of the WRF-Chem model made it possible to represent some of the most important properties of fine particles in the SPMA such as the mass size distribution and chemical composition, besides allowing us to evaluate its formation potential through the gas-to-particle conversion processes. Results show that the emission of primary gases, mostly from vehicles, led to a production of secondary particles between 20 and 30 % in relation to the total mass concentration of PM2.5 in the downtown SPMA. Each of PM2.5 and primary natural aerosol (dust and sea salt) contributed with 40–50 % of the total PM10 (i.e. those ≤ 10 µm in diameter) concentration. Over 40 % of the formation of fine particles, by mass, was due to the emission of hydrocarbons, mainly aromatics. Furthermore, an increase in the...

  5. Impact of vehicular emissions on the formation of fine particles in the Sao Paulo Metropolitan Area: a numerical study with the WRF-Chem model

    Science.gov (United States)

    Vara-Vela, A.; Andrade, M. F.; Kumar, P.; Ynoue, R. Y.; Muñoz, A. G.

    2016-01-01

    The objective of this work is to evaluate the impact of vehicular emissions on the formation of fine particles (PM2.5; ≤ 2.5 µm in diameter) in the Sao Paulo Metropolitan Area (SPMA) in Brazil, where ethanol is used intensively as a fuel in road vehicles. The Weather Research and Forecasting with Chemistry (WRF-Chem) model, which simulates feedbacks between meteorological variables and chemical species, is used as a photochemical modelling tool to describe the physico-chemical processes leading to the evolution of number and mass size distribution of particles through gas-to-particle conversion. A vehicular emission model based on statistical information of vehicular activity is applied to simulate vehicular emissions over the studied area. The simulation has been performed for a 1-month period (7 August-6 September 2012) to cover the availability of experimental data from the NUANCE-SPS (Narrowing the Uncertainties on Aerosol and Climate Changes in Sao Paulo State) project that aims to characterize emissions of atmospheric aerosols in the SPMA. The availability of experimental measurements of atmospheric aerosols and the application of the WRF-Chem model made it possible to represent some of the most important properties of fine particles in the SPMA such as the mass size distribution and chemical composition, besides allowing us to evaluate its formation potential through the gas-to-particle conversion processes. Results show that the emission of primary gases, mostly from vehicles, led to a production of secondary particles between 20 and 30 % in relation to the total mass concentration of PM2.5 in the downtown SPMA. Each of PM2.5 and primary natural aerosol (dust and sea salt) contributed with 40-50 % of the total PM10 (i.e. those ≤ 10 µm in diameter) concentration. Over 40 % of the formation of fine particles, by mass, was due to the emission of hydrocarbons, mainly aromatics. Furthermore, an increase in the number of small particles impaired the

  6. Plasma disruption modeling and simulation

    International Nuclear Information System (INIS)

    Hassanein, A.

    1994-01-01

    Disruptions in tokamak reactors are considered a limiting factor to successful operation and reliable design. The behavior of plasma-facing components during a disruption is critical to the overall integrity of the reactor. Erosion of plasma facing-material (PFM) surfaces due to thermal energy dump during the disruption can severely limit the lifetime of these components and thus diminish the economic feasibility of the reactor. A comprehensive understanding of the interplay of various physical processes during a disruption is essential for determining component lifetime and potentially improving the performance of such components. There are three principal stages in modeling the behavior of PFM during a disruption. Initially, the incident plasma particles will deposit their energy directly on the PFM surface, heating it to a very high temperature where ablation occurs. Models for plasma-material interactions have been developed and used to predict material thermal evolution during the disruption. Within a few microseconds after the start of the disruption, enough material is vaporized to intercept most of the incoming plasma particles. Models for plasma-vapor interactions are necessary to predict vapor cloud expansion and hydrodynamics. Continuous heating of the vapor cloud above the material surface by the incident plasma particles will excite, ionize, and cause vapor atoms to emit thermal radiation. Accurate models for radiation transport in the vapor are essential for calculating the net radiated flux to the material surface which determines the final erosion thickness and consequently component lifetime. A comprehensive model that takes into account various stages of plasma-material interaction has been developed and used to predict erosion rates during reactor disruption, as well during induced disruption in laboratory experiments

  7. Two Model-Based Methods for Policy Analyses of Fine Particulate Matter Control in China: Source Apportionment and Source Sensitivity

    Science.gov (United States)

    Li, X.; Zhang, Y.; Zheng, B.; Zhang, Q.; He, K.

    2013-12-01

    Anthropogenic emissions have been controlled in recent years in China to mitigate fine particulate matter (PM2.5) pollution. Recent studies show that sulfur dioxide (SO2)-only control cannot reduce total PM2.5 levels efficiently. Other species such as nitrogen oxides, ammonia, black carbon, and organic carbon may be equally important during particular seasons. Furthermore, each species is emitted from several anthropogenic sectors (e.g., industry, power plants, transportation, residential and agriculture). On the other hand, the contribution of one emission sector to PM2.5 represents the contributions of all species in this sector. In this work, two model-based methods are used to identify the emission sectors and areas most influential for PM2.5. The first method is source apportionment (SA) based on the Particulate Source Apportionment Technology (PSAT) available in the Comprehensive Air Quality Model with extensions (CAMx) driven by meteorological predictions of the Weather Research and Forecast (WRF) model. The second method is source sensitivity (SS) based on an adjoint integration technique (AIT) available in the GEOS-Chem model. The SA method attributes simulated PM2.5 concentrations to each emission group, while the SS method calculates their sensitivity to each emission group, accounting for the non-linear relationship between PM2.5 and its precursors. Despite their differences, the complementary nature of the two methods enables a complete analysis of source-receptor relationships to support emission control policies. Our objectives are to quantify the contributions of each emission group/area to PM2.5 in the receptor areas and to intercompare results from the two methods to gain a comprehensive understanding of the role of emission sources in PM2.5 formation. The results will be compared in terms of the magnitudes and rankings of SS or SA of emitted species and emission groups/areas. GEOS-Chem with AIT is applied over East Asia at a horizontal grid...

  8. Modelling and simulating fire tube boiler performance

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    A model for a flue gas boiler covering the flue gas and the water/steam side has been formulated. The model has been formulated as a number of sub-models that are merged into an overall model for the complete boiler. Sub-models have been defined for the furnace, the convection zone (split in two: a zone submerged in water and a zone covered by steam), the material in the boiler (the steel), and, respectively, the water/steam zone (the boiling) and the steam. The dynamic model has been developed as a number of differential-algebraic equation (DAE) systems. Subsequently, MatLab/Simulink has been applied for carrying out the simulations. To verify the simulated results, experiments have been carried out on a full-scale boiler plant.

  9. Explaining the spatiotemporal variation of fine particle number concentrations over Beijing and surrounding areas in an air quality model with aerosol microphysics

    International Nuclear Information System (INIS)

    Chen, Xueshun; Wang, Zifa; Li, Jie; Chen, Huansheng; Hu, Min; Yang, Wenyi; Wang, Zhe; Ge, Baozhu; Wang, Dawei

    2017-01-01

    In this study, a three-dimensional air quality model with detailed aerosol microphysics (NAQPMS + APM) was applied to simulate the fine particle number size distribution and to explain the spatiotemporal variation of fine particle number concentrations in different size ranges over Beijing and surrounding areas in the haze season (Jan 15 to Feb 13 in 2006). Comparison between observations and the simulation indicates that the model is able to reproduce the main features of the particle number size distribution. The high number concentration of total particles, up to 26600 cm −3 in observations and 39800 cm −3 in the simulation, indicates the severity of pollution in Beijing. We find that primary particles with secondary species coating and secondary particles together control the particle number size distribution. Secondary particles dominate particle number concentration in the nucleation mode. Primary and secondary particles together determine the temporal evolution and spatial pattern of particle number concentration in the Aitken mode. Primary particles dominate particle number concentration in the accumulation mode. Over Beijing and surrounding areas, secondary particles contribute at least 80% of particle number concentration in the nucleation mode but only 10–20% in the accumulation mode. Nucleation mode particles and accumulation mode particles are anti-phased with each other. Nucleation or primary emissions alone could not explain the formation of the particle number size distribution in Beijing. Nucleation has larger effects on ultrafine particles while primary particles emissions are efficient in producing large particles in the accumulation mode. Reduction in primary particle emissions does not always lead to a decrease in the number concentration of ultrafine particles. Measures to reduce fine particle pollution in terms of particle number concentration may be different from those addressing particle mass concentration. - Highlights:

  10. A virtual laboratory notebook for simulation models.

    Science.gov (United States)

    Winfield, A J

    1998-01-01

    In this paper we describe how we have adopted the laboratory notebook as a metaphor for interacting with computer simulation models. This 'virtual' notebook stores the simulation output and meta-data (which is used to record the scientist's interactions with the simulation). The meta-data stored consists of annotations (equivalent to marginal notes in a laboratory notebook), a history tree and a log of user interactions. The history tree structure records when in 'simulation' time, and from what starting point in the tree changes are made to the parameters by the user. Typically these changes define a new run of the simulation model (which is represented as a new branch of the history tree). The tree shows the structure of the changes made to the simulation and the log is required to keep the order in which the changes occurred. Together they form a record which you would normally find in a laboratory notebook. The history tree is plotted in simulation parameter space. This shows the scientist's interactions with the simulation visually and allows direct manipulation of the parameter information presented, which in turn is used to control directly the state of the simulation. The interactions with the system are graphical and usually involve directly selecting or dragging data markers and other graphical control devices around in parameter space. If the graphical manipulators do not provide precise enough control then textual manipulation is still available which allows numerical values to be entered by hand. The Virtual Laboratory Notebook, by providing interesting interactions with the visual view of the history tree, provides a mechanism for giving the user complex and novel ways of interacting with biological computer simulation models.
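
    The history-tree idea described in this record can be sketched as a small data structure in which each node records the simulation time and parameter change from which a new run branches; the class and field names below are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of a history tree for simulation runs: each node records
# the simulation time and the parameter change from which a new branch
# (run) was started. Names and fields are illustrative assumptions.

class HistoryNode:
    def __init__(self, sim_time, params, annotation="", parent=None):
        self.sim_time = sim_time          # simulation time at the branch point
        self.params = dict(params)        # parameter values defining this run
        self.annotation = annotation      # marginal note, as in a lab notebook
        self.parent = parent
        self.children = []

    def branch(self, sim_time, changes, annotation=""):
        """Start a new run by changing some parameters at a given simulation time."""
        new_params = {**self.params, **changes}
        child = HistoryNode(sim_time, new_params, annotation, parent=self)
        self.children.append(child)
        return child

root = HistoryNode(0.0, {"growth_rate": 0.1, "mortality": 0.02}, "baseline run")
run2 = root.branch(50.0, {"growth_rate": 0.15}, "what if growth is faster?")
print(run2.params, "branched at t =", run2.sim_time)
```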

  11. Bridging experiments, models and simulations

    DEFF Research Database (Denmark)

    Carusi, Annamaria; Burrage, Kevin; Rodríguez, Blanca

    2012-01-01

    Computational models in physiology often integrate functional and structural information from a large range of spatiotemporal scales, from the ionic to the whole-organ level. Their sophistication raises both expectations and skepticism concerning how computational methods can improve our understanding of living organisms and also how they can reduce, replace, and refine animal experiments. A fundamental requirement to fulfill these expectations and achieve the full potential of computational physiology is a clear understanding of what models represent and how they can be validated. The present ... that contributes to defining the specific aspects of cardiac electrophysiology the MSE system targets, rather than being only an external test, and that this is driven by advances in experimental and computational methods and the combination of both.

  12. A Modeling Framework for Improved Characterization of Near-Road Exposure at Fine Scales

    Science.gov (United States)

    Traffic-related air pollutants could cause adverse health impact to communities near roadways. To estimate the population risk and locate "hotspots" in the near-road environment, quantifying the exposure at a fine spatial resolution is essential. A new state-of-the-art ...

  13. Modeling mussel bed influence on fine sediment dynamics on a Wadden Sea intertidal flat

    NARCIS (Netherlands)

    van Leeuwen, Bas; Augustijn, Dionysius C.M.; van Wesenbeeck, Bregje K.; Hulscher, Suzanne J.M.H.; de Vries, Mindert

    2008-01-01

    Mussel beds are coherent colonies of mussels and are widespread in the Dutch Wadden Sea and the Eastern Scheldt estuary. Mussel beds are known to be an important factor in biogeomorphological processes, primarily because of the influence on fine sediment dynamics. Ongoing research to explore the use

  14. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    ... and the total stress level (i.e. stresses introduced due to internal pressure plus stresses introduced due to temperature gradients) must always be kept below the allowable stress level. In this way, the increased water/steam space that should allow for better dynamic performance in the end causes limited freedom with respect to dynamic operation of the plant. By means of an objective function including both the price of the plant and a quantification of the value of dynamic operation of the plant, an optimization is carried out. The dynamic model of the boiler plant is applied to define parts...

  15. Health impact assessment of particulate pollution in Tallinn using fine spatial resolution and modeling techniques

    Directory of Open Access Journals (Sweden)

    Kimmel Veljo

    2009-03-01

    Background: Health impact assessments (HIA) use information on exposure, baseline mortality/morbidity and exposure-response functions from epidemiological studies in order to quantify the health impacts of existing situations and/or alternative scenarios. The aim of this study was to improve HIA methods for air pollution studies in situations where exposures can be estimated using GIS with high spatial resolution and dispersion modeling approaches. Methods: Tallinn was divided into 84 sections according to neighborhoods, with a total population of approx. 390 000 persons. Actual baseline rates for total mortality and hospitalization with cardiovascular and respiratory diagnosis were identified. The exposure to fine particles (PM2.5) from local emissions was defined as the modeled annual levels. The model validation and morbidity assessment were based on 2006 PM10 or PM2.5 levels at 3 monitoring stations. The exposure-response coefficients used were, for total mortality, 6.2% (95% CI 1.6–11%) per 10 μg/m3 increase of annual mean PM2.5 concentration and, for the assessment of respiratory and cardiovascular hospitalizations, 1.14% (95% CI 0.62–1.67%) and 0.73% (95% CI 0.47–0.93%) per 10 μg/m3 increase of PM10. The direct costs related to morbidity were calculated according to hospital treatment expenses in 2005 and the cost of premature deaths using the concept of Value of Life Year (VOLY). Results: The annual population-weighted modeled exposure to locally emitted PM2.5 in Tallinn was 11.6 μg/m3. Our analysis showed that it corresponds to 296 (95% CI 76–528) premature deaths resulting in 3859 (95% CI 1023–6636) Years of Life Lost (YLL) per year. The average decrease in life-expectancy at birth per resident of Tallinn was estimated to be 0.64 (95% CI 0.17–1.10) years. While in the polluted city centre this may reach 1.17 years, in the least polluted neighborhoods it remains between 0.1 and 0.3 years. When dividing the YLL by the number of...
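
    The core attributable-burden arithmetic behind such an assessment can be sketched in a few lines; the exposure-response coefficient is the 6.2% per 10 μg/m3 value quoted above, while the population and baseline-death figures are rounded illustrations rather than the study's inputs.

```python
# Core attributable-burden arithmetic of a PM2.5 health impact assessment.
# The exposure-response coefficient (6.2% per 10 ug/m3) is quoted in the
# record above; baseline deaths are a rounded illustration only.

def attributable_deaths(baseline_deaths, exposure_ugm3, pct_per_10ug=6.2):
    rr = (1.0 + pct_per_10ug / 100.0) ** (exposure_ugm3 / 10.0)   # relative risk at this exposure
    attributable_fraction = (rr - 1.0) / rr
    return baseline_deaths * attributable_fraction

# Illustrative: ~4000 annual deaths, population-weighted exposure 11.6 ug/m3.
print(round(attributable_deaths(baseline_deaths=4000, exposure_ugm3=11.6)))
```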

  16. Advanced training simulator models. Implementation and validation

    International Nuclear Information System (INIS)

    Borkowsky, Jeffrey; Judd, Jerry; Belblidia, Lotfi; O'farrell, David; Andersen, Peter

    2008-01-01

    Modern training simulators are required to replicate plant data for both thermal-hydraulic and neutronic response. Replication is required such that reactivity manipulation on the simulator properly trains the operator for reactivity manipulation at the plant. This paper discusses advanced models which perform this function in real-time using the coupled code system THOR/S3R. This code system models all fluid systems in detail using an advanced two-phase thermal-hydraulic model. The nuclear core is modeled using an advanced, three-dimensional nodal method and also by using cycle-specific nuclear data. These models are configured to run interactively from a graphical instructor station or hardware operation panels. The simulator models are theoretically rigorous and are expected to replicate the physics of the plant. However, to verify replication, the models must be independently assessed. Plant data is the preferred validation method, but plant data is often not available for many important training scenarios. In the absence of data, validation may be obtained by slower-than-real-time transient analysis. This analysis can be performed by coupling a safety analysis code and a core design code. Such a coupling exists between the codes RELAP5 and SIMULATE-3K (S3K). RELAP5/S3K is used to validate the real-time model for several postulated plant events. (author)

  17. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  18. Analytical system dynamics modeling and simulation

    CERN Document Server

    Fabien, Brian C

    2008-01-01

    This book, offering a modeling technique based on Lagrange's energy method, includes 125 worked examples. Using this technique enables one to model and simulate systems as diverse as a six-link, closed-loop mechanism or a transistor power amplifier.

  19. Hybrid simulation models of production networks

    CERN Document Server

    Kouikoglou, Vassilis S

    2001-01-01

    This book is concerned with a most important area of industrial production, that of analysis and optimization of production lines and networks using discrete-event models and simulation. The book introduces a novel approach that combines analytic models and discrete-event simulation. Unlike conventional piece-by-piece simulation, this method observes a reduced number of events between which the evolution of the system is tracked analytically. Using this hybrid approach, several models are developed for the analysis of production lines and networks. The hybrid approach combines speed and accuracy for exceptional analysis of most practical situations. A number of optimization problems, involving buffer design, workforce planning, and production control, are solved through the use of hybrid models.

  20. Dynamic modeling and simulation of wind turbines

    International Nuclear Information System (INIS)

    Ghafari Seadat, M.H.; Kheradmand Keysami, M.; Lari, H.R.

    2002-01-01

    Using wind energy to generate electricity in wind turbines is a good way of using renewable energies. It can also help to protect the environment. The main objective of this paper is computer-aided dynamic modeling, by the energy method, and simulation of a wind turbine. The equations of motion are derived for simulating the wind turbine system, and the behavior of the system is then obtained by solving these equations. For the simulation, the turbine is considered with a three-blade rotor facing the wind, an induction generator connected to the network, and constant rotational speed. Every main part of the wind turbine must be simulated; the main parts are the blades, gearbox, shafts and generator

  1. Regional model simulations of New Zealand climate

    Science.gov (United States)

    Renwick, James A.; Katzfey, Jack J.; Nguyen, Kim C.; McGregor, John L.

    1998-03-01

    Simulation of New Zealand climate is examined through the use of a regional climate model nested within the output of the Commonwealth Scientific and Industrial Research Organisation nine-level general circulation model (GCM). R21 resolution GCM output is used to drive a regional model run at 125 km grid spacing over the Australasian region. The 125 km run is used in turn to drive a simulation at 50 km resolution over New Zealand. Simulations with a full seasonal cycle are performed for 10 model years. The focus is on the quality of the simulation of present-day climate, but results of a doubled-CO2 run are discussed briefly. Spatial patterns of mean simulated precipitation and surface temperatures improve markedly as horizontal resolution is increased, through the better resolution of the country's orography. However, increased horizontal resolution leads to a positive bias in precipitation. At 50 km resolution, simulated frequency distributions of daily maximum/minimum temperatures are statistically similar to those of observations at many stations, while frequency distributions of daily precipitation appear to be statistically different to those of observations at most stations. Modeled daily precipitation variability at 125 km resolution is considerably less than observed, but is comparable to, or exceeds, observed variability at 50 km resolution. The sensitivity of the simulated climate to changes in the specification of the land surface is discussed briefly. Spatial patterns of the frequency of extreme temperatures and precipitation are generally well modeled. Under a doubling of CO2, the frequency of precipitation extremes changes only slightly at most locations, while air frosts become virtually unknown except at high-elevation sites.

  2. Landscape Modelling and Simulation Using Spatial Data

    Directory of Open Access Journals (Sweden)

    Amjed Naser Mohsin AL-Hameedawi

    2017-08-01

    Full Text Available In this paper, a procedure was developed for generating a spatial landscape model suited to realistic simulation. The procedure is based on combining spatial data and field measurements with computer graphics produced using Blender software. It is then possible to form a 3D simulation based on VIS ALL packages. The objective was to build a model using GIS, including inputs to the feature attribute data. These efforts concentrated on assembling an adequate spatial prototype, defining a facilitation scheme and outlining the intended framework; the eventual result was used in simulation form. The procedure covers not only data gathering, fieldwork and model preparation, but also supplies a new method for producing the corresponding 3D simulation mapping, which allows decision makers as well as investors to adopt an independent navigation system for geoscience applications.

  3. Modelling and numerical simulation of the General Dynamic Equation of aerosols; Modelisation et simulation des aerosols atmospheriques

    Energy Technology Data Exchange (ETDEWEB)

    Debry, E.

    2005-01-15

    Chemical-transport models are now able to describe the behavior of gaseous pollutants in the atmosphere in a realistic way. Nevertheless, atmospheric pollution also exists as fine suspended particles, called aerosols, which interact with the gaseous phase and solar radiation, and have their own dynamic behavior. The goal of this thesis is the modelling and numerical simulation of the General Dynamic Equation of aerosols (GDE). Part I deals with some theoretical aspects of aerosol modelling. Part II is dedicated to the building of a size-resolved aerosol model (SIREAM). In part III we perform the reduction of this model in order to use it in dispersion models such as POLAIR3D. Several modelling issues are still open: organic aerosol matter, externally mixed aerosols, coupling with turbulent mixing, and nano-particles. (author)

  4. Quantitative interface models for simulating microstructure evolution

    International Nuclear Information System (INIS)

    Zhu, J.Z.; Wang, T.; Zhou, S.H.; Liu, Z.K.; Chen, L.Q.

    2004-01-01

    To quantitatively simulate microstructural evolution in real systems, we investigated three different interface models: a sharp-interface model implemented by the software DICTRA and two diffuse-interface models which use either physical order parameters or artificial order parameters. A particular example is considered, the diffusion-controlled growth of a γ ' precipitate in a supersaturated γ matrix in Ni-Al binary alloys. All three models use the thermodynamic and kinetic parameters from the same databases. The temporal evolution profiles of composition from different models are shown to agree with each other. The focus is on examining the advantages and disadvantages of each model as applied to microstructure evolution in alloys

  5. A queuing model for road traffic simulation

    International Nuclear Information System (INIS)

    Guerrouahane, N.; Aissani, D.; Bouallouche-Medjkoune, L.; Farhi, N.

    2015-01-01

    We present in this article a stochastic queuing model for road traffic. The model is based on the M/G/c/c state-dependent queuing model, and is inspired by the deterministic Godunov scheme for road traffic simulation. We first propose a variant of the M/G/c/c state-dependent model that works with density-flow fundamental diagrams rather than density-speed relationships. We then extend this model in order to consider upstream traffic demand as well as downstream traffic supply. Finally, we show how to model a whole road by concatenating road sections as in the deterministic Godunov scheme
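
    To make the deterministic scheme referenced above concrete, here is a minimal, hedged sketch of a Godunov-type (cell transmission) road discretization in which the flow across each cell interface is the minimum of upstream demand and downstream supply under a triangular density-flow fundamental diagram. It is not the authors' stochastic M/G/c/c formulation, and all parameter values are illustrative.

```python
# Hedged sketch of a Godunov/cell-transmission road model: each cell exchanges
# flow with its neighbor as min(upstream demand, downstream supply) under a
# triangular fundamental diagram. Parameters are hypothetical.
import numpy as np

V_FREE, W_CONG, RHO_MAX = 25.0, 6.0, 0.2   # free speed (m/s), wave speed (m/s), jam density (veh/m)
RHO_CRIT = RHO_MAX * W_CONG / (V_FREE + W_CONG)
Q_MAX = V_FREE * RHO_CRIT                  # capacity (veh/s)

def demand(rho):                           # flow a cell can send downstream
    return np.minimum(V_FREE * rho, Q_MAX)

def supply(rho):                           # flow a cell can accept from upstream
    return np.minimum(W_CONG * (RHO_MAX - rho), Q_MAX)

def step(rho, dx, dt, inflow_demand, outflow_supply):
    """One Godunov update of the cell densities rho (veh/m)."""
    dem = np.concatenate(([inflow_demand], demand(rho)))
    sup = np.concatenate((supply(rho), [outflow_supply]))
    flux = np.minimum(dem, sup)            # flow across each cell interface
    return rho + dt / dx * (flux[:-1] - flux[1:])

rho = np.full(20, 0.05)                    # 20 cells of a hypothetical road
for _ in range(100):
    rho = step(rho, dx=100.0, dt=2.0, inflow_demand=0.3, outflow_supply=Q_MAX)
print(rho.round(3))
```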

  6. Clock error models for simulation and estimation

    International Nuclear Information System (INIS)

    Meditch, J.S.

    1981-10-01

    Mathematical models for the simulation and estimation of errors in precision oscillators used as time references in satellite navigation systems are developed. The results, based on all currently known oscillator error sources, are directly implementable on a digital computer. The simulation formulation is sufficiently flexible to allow for the inclusion or exclusion of individual error sources as desired. The estimation algorithms, following from Kalman filter theory, provide directly for the error analysis of clock errors in both filtering and prediction
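
    As an illustration of the kind of clock error model described above, the following hedged sketch simulates the common two-state formulation (a phase offset driven by an integrated frequency offset, both perturbed by white noise). It is a generic textbook model, not necessarily the exact one developed in the report, and the noise intensities are invented.

```python
# Hedged sketch of a generic two-state clock error model: the phase error x is
# the integral of the fractional frequency error y, and both are perturbed by
# white noise of intensity q_phase and q_freq. Values below are illustrative.
import numpy as np

def simulate_clock(n_steps, dt, q_phase, q_freq, seed=0):
    """Return the phase-error time series of a random-walk phase/frequency clock."""
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0                        # phase error (s), frequency error (s/s)
    phase = np.empty(n_steps)
    for k in range(n_steps):
        x += y * dt + rng.normal(0.0, np.sqrt(q_phase * dt))
        y += rng.normal(0.0, np.sqrt(q_freq * dt))
        phase[k] = x
    return phase

print(simulate_clock(5, 1.0, q_phase=1e-22, q_freq=1e-26))
```

    A Kalman filter built on the same two-state dynamics would then provide the filtering and prediction error analysis mentioned in the abstract.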

  7. Modeling and simulation goals and accomplishments

    International Nuclear Information System (INIS)

    Turinsky, P.

    2013-01-01

    The CASL (Consortium for Advanced Simulation of Light Water Reactors) mission is to develop and apply the Virtual Reactor simulator (VERA) to optimise nuclear power in terms of capital and operating costs, of nuclear waste production and of nuclear safety. An efficient and reliable virtual reactor simulator relies on 3-dimensional calculations, accurate physics models and code coupling. Advances in computer hardware, along with comparable advances in numerical solvers make the VERA project achievable. This series of slides details the VERA project and presents the specificities and performance of the codes involved in the project and ends by listing the computing needs

  8. Elastoplastic model for unsaturated, quasi-saturated and fully saturated fine soils

    Directory of Open Access Journals (Sweden)

    Lai Ba Tien

    2016-01-01

    Full Text Available In unsaturated soils, the gaseous phase is commonly assumed to be continuous. This assumption is no longer valid at high saturation ratios. In that case, air bubbles and pockets can be trapped in the porous network by the liquid phase and the gas phase becomes discontinuous. This trapped air reduces the apparent compressibility of the pore fluid and affects the mechanical behavior of the soil. Although it is trapped in the pores, its dissolution can take place. Dissolved air can migrate through the pore space, either by following the flow of the fluid or by diffusion. In this context, this paper presents a hydro-mechanical model that separately considers the kinematics and the mechanical behavior of each fluid species (e.g. liquid water, dissolved air, gaseous air) and of the solid matrix. This new model was implemented in a C++ code. Some numerical simulations are performed to demonstrate the ability of this model to reproduce a continuous transition from unsaturated to saturated states.

  9. Coupling of Large Eddy Simulations with Meteorological Models to simulate Methane Leaks from Natural Gas Storage Facilities

    Science.gov (United States)

    Prasad, K.

    2017-12-01

    Atmospheric transport is usually performed with weather models, e.g., the Weather Research and Forecasting (WRF) model that employs a parameterized turbulence model and does not resolve the fine scale dynamics generated by the flow around buildings and features comprising a large city. The NIST Fire Dynamics Simulator (FDS) is a computational fluid dynamics model that utilizes large eddy simulation methods to model flow around buildings at length scales much smaller than is practical with models like WRF. FDS has the potential to evaluate the impact of complex topography on near-field dispersion and mixing that is difficult to simulate with a mesoscale atmospheric model. A methodology has been developed to couple the FDS model with WRF mesoscale transport models. The coupling is based on nudging the FDS flow field towards that computed by WRF, and is currently limited to one-way coupling performed in an off-line mode. This approach allows the FDS model to operate as a sub-grid scale model within a WRF simulation. To test and validate the coupled FDS - WRF model, the methane leak from the Aliso Canyon underground storage facility was simulated. Large eddy simulations were performed over the complex topography of various natural gas storage facilities including Aliso Canyon, Honor Rancho and MacDonald Island at 10 m horizontal and vertical resolution. The goals of these simulations included improving and validating transport models as well as testing leak hypotheses. Forward simulation results were compared with aircraft and tower based in-situ measurements as well as methane plumes observed using the NASA Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) and the next generation instrument AVIRIS-NG. Comparison of simulation results with measurement data demonstrates the capability of the coupled FDS-WRF models to accurately simulate the transport and dispersion of methane plumes over urban domains. Simulated integrated methane enhancements will be presented and
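
    The nudging idea described above can be sketched in a few lines: the fine-scale (LES) field is relaxed toward the mesoscale field interpolated onto the LES grid with a chosen relaxation timescale. This is a hedged illustration of the general technique, not the actual FDS/WRF implementation, and the fields and timescale below are invented.

```python
# Hedged sketch of one-way "nudging" coupling: relax the LES velocity toward
# the mesoscale (e.g., WRF) value interpolated to the LES grid.
import numpy as np

def nudge(u_les, u_meso_on_les_grid, dt, tau):
    """Relax the LES velocity toward the mesoscale value with timescale tau."""
    return u_les + (dt / tau) * (u_meso_on_les_grid - u_les)

u_les = np.zeros((4, 4))            # toy LES horizontal velocity slice (m/s)
u_wrf = np.full((4, 4), 5.0)        # mesoscale wind interpolated to the same grid
for _ in range(10):                 # repeated nudging pulls the LES field toward WRF
    u_les = nudge(u_les, u_wrf, dt=1.0, tau=60.0)
print(u_les[0, 0])
```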

  10. Simulation Modeling of Software Development Processes

    Science.gov (United States)

    Calavaro, G. F.; Basili, V. R.; Iazeolla, G.

    1996-01-01

    A simulation modeling approach is proposed for the prediction of software process productivity indices, such as cost and time-to-market, and the sensitivity analysis of such indices to changes in the organization parameters and user requirements. The approach uses a timed Petri Net and Object Oriented top-down model specification. Results demonstrate the model's representativeness and its usefulness in verifying process conformance to expectations and in performing continuous process improvement and optimization.

  11. Validation of the simulator neutronics model

    International Nuclear Information System (INIS)

    Gregory, M.V.

    1984-01-01

    The neutronics model in the SRP reactor training simulator computes the variation with time of the neutron population in the reactor core. The power output of a reactor is directly proportional to the neutron population, thus in a very real sense the neutronics model determines the response of the simulator. The geometrical complexity of the reactor control system in SRP reactors requires the neutronics model to provide a detailed, 3D representation of the reactor core. Existing simulator technology does not allow such a detailed representation to run in real-time in a minicomputer environment, thus an entirely different approach to the problem was required. A prompt jump method has been developed in answer to this need
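
    The "prompt jump method" named above can be illustrated, in heavily simplified form, by the prompt jump approximation of point kinetics with a single delayed-neutron group: neglecting the prompt-neutron lifetime, a sub-prompt-critical reactivity step produces an immediate power jump by a factor beta/(beta - rho), followed by exponential growth on the delayed-neutron timescale. The sketch below shows only this textbook limit with invented parameters; the simulator's 3D implementation is far more detailed.

```python
# Hedged sketch of the prompt jump approximation (point kinetics, one delayed
# group). Setting the prompt-neutron lifetime term to zero gives
#   n(t) ~ beta / (beta - rho) * n0 * exp(lambda * rho * t / (beta - rho)).
# Parameters are illustrative, not taken from the SRP simulator.
import math

def prompt_jump_power(t, rho, n0=1.0, beta=0.0065, lam=0.08):
    """Relative power after a sub-prompt-critical reactivity step rho."""
    jump = beta / (beta - rho)                      # instantaneous prompt jump
    return n0 * jump * math.exp(lam * rho * t / (beta - rho))

# Example: a +0.1 dollar step (rho = 0.1 * beta), hypothetical constants.
print(prompt_jump_power(10.0, rho=0.1 * 0.0065))
```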

  12. Analyzing Strategic Business Rules through Simulation Modeling

    Science.gov (United States)

    Orta, Elena; Ruiz, Mercedes; Toro, Miguel

    Service Oriented Architecture (SOA) holds promise for business agility since it allows business processes to change to meet new customer demands or market needs without causing a cascade effect of changes in the underlying IT systems. Business rules are the instrument chosen to help business and IT to collaborate. In this paper, we propose the utilization of simulation models to model and simulate strategic business rules that are then disaggregated at different levels of an SOA architecture. Our proposal is aimed at helping to find a good configuration for strategic business objectives and IT parameters. The paper includes a case study where a simulation model is built to help business decision-making in a context where finding a good configuration for different business parameters and performance is too complex to analyze by trial and error.

  13. Transport of reservoir fines

    DEFF Research Database (Denmark)

    Yuan, Hao; Shapiro, Alexander; Stenby, Erling Halfdan

    Modeling transport of reservoir fines is of great importance for evaluating the damage of production wells and injectivity decline. The conventional methodology accounts for neither the formation heterogeneity around the wells nor the reservoir fines’ heterogeneity. We have developed an integral...... dispersion equation in modeling the transport and the deposition of reservoir fines. It successfully predicts the unsymmetrical concentration profiles and the hyperexponential deposition in experiments....
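
    The hyperexponential deposition mentioned above has a simple qualitative explanation: if the injected fines are a mixture of populations with different filtration coefficients, the suspended concentration versus depth is a sum of exponentials rather than a single exponential. The hedged sketch below illustrates only that mixture effect with invented parameters; it is not the integral dispersion model developed in the work.

```python
# Hedged sketch: a mixture of particle populations, each filtered with its own
# coefficient lambda_i, gives a suspended-concentration profile that is a sum
# of exponentials and decays slower than any single-exponential fit.
import numpy as np

def suspended_fraction(x, fractions, lambdas):
    """Normalized suspended concentration vs depth x for a particle mixture."""
    fractions, lambdas = np.asarray(fractions), np.asarray(lambdas)
    return np.sum(fractions * np.exp(-np.outer(x, lambdas)), axis=1)

x = np.linspace(0.0, 0.5, 6)                 # depth into the core (m), illustrative
print(suspended_fraction(x, [0.7, 0.3], [5.0, 50.0]).round(4))
```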

  14. New exploration on TMSR: modelling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Si, S.; Chen, Q.; Bei, H.; Zhao, J., E-mail: ssy@snerdi.com.cn [Shanghai Nuclear Engineering Research & Design Inst., Shanghai (China)

    2015-07-01

    A tightly coupled multi-physics model for MSR (Molten Salt Reactor) system involving the reactor core and the rest of the primary loop has been developed and employed in an in-house developed computer code TANG-MSR. In this paper, the computer code is used to simulate the behavior of steady state operation and transient for our redesigned TMSR. The presented simulation results demonstrate that the models employed in TANG-MSR can capture major physics phenomena in MSR and the redesigned TMSR has excellent performance of safety and sustainability. (author)

  15. Atomic quantum simulation of the lattice gauge-Higgs model: Higgs couplings and emergence of exact local gauge symmetry.

    Science.gov (United States)

    Kasamatsu, Kenichi; Ichinose, Ikuo; Matsui, Tetsuo

    2013-09-13

    Recently, the possibility of quantum simulation of dynamical gauge fields was pointed out by using a system of cold atoms trapped on each link in an optical lattice. However, to implement exact local gauge invariance, fine-tuning the interaction parameters among atoms is necessary. In the present Letter, we study the effect of violation of the U(1) local gauge invariance by relaxing the fine-tuning of the parameters and showing that a wide variety of cold atoms is still a faithful quantum simulator for a U(1) gauge-Higgs model containing a Higgs field sitting on sites. The clarification of the dynamics of this gauge-Higgs model sheds some light upon various unsolved problems, including the inflation process of the early Universe. We study the phase structure of this model by Monte Carlo simulation and also discuss the atomic characteristics of the Higgs phase in each simulator.

  16. Nuclear reactor core modelling in multifunctional simulators

    International Nuclear Information System (INIS)

    Puska, E.K.

    1999-01-01

    The thesis concentrates on the development of nuclear reactor core models for the APROS multifunctional simulation environment and the use of the core models in various kinds of applications. The work was started in 1986 as a part of the development of the entire APROS simulation system. The aim was to create core models that would serve in a reliable manner in an interactive, modular and multifunctional simulator/plant analyser environment. One-dimensional and three-dimensional core neutronics models have been developed. Both models have two energy groups and six delayed neutron groups. The three-dimensional finite difference type core model is able to describe both BWR- and PWR-type cores with quadratic fuel assemblies and VVER-type cores with hexagonal fuel assemblies. The one- and three-dimensional core neutronics models can be connected with the homogeneous, the five-equation or the six-equation thermal hydraulic models of APROS. The key feature of APROS is that the same physical models can be used in various applications. The nuclear reactor core models of APROS have been built in such a manner that the same models can be used in simulator and plant analyser applications, as well as in safety analysis. In the APROS environment the user can select the number of flow channels in the three-dimensional reactor core and either the homogeneous, the five- or the six-equation thermal hydraulic model for these channels. The thermal hydraulic model and the number of flow channels have a decisive effect on the calculation time of the three-dimensional core model and thus, at present, these particular selections make the major difference between a safety analysis core model and a training simulator core model. The emphasis on this thesis is on the three-dimensional core model and its capability to analyse symmetric and asymmetric events in the core. The factors affecting the calculation times of various three-dimensional BWR, PWR and WWER-type APROS core models have been

  17. Nuclear reactor core modelling in multifunctional simulators

    Energy Technology Data Exchange (ETDEWEB)

    Puska, E.K. [VTT Energy, Nuclear Energy, Espoo (Finland)

    1999-06-01

    The thesis concentrates on the development of nuclear reactor core models for the APROS multifunctional simulation environment and the use of the core models in various kinds of applications. The work was started in 1986 as a part of the development of the entire APROS simulation system. The aim was to create core models that would serve in a reliable manner in an interactive, modular and multifunctional simulator/plant analyser environment. One-dimensional and three-dimensional core neutronics models have been developed. Both models have two energy groups and six delayed neutron groups. The three-dimensional finite difference type core model is able to describe both BWR- and PWR-type cores with quadratic fuel assemblies and VVER-type cores with hexagonal fuel assemblies. The one- and three-dimensional core neutronics models can be connected with the homogeneous, the five-equation or the six-equation thermal hydraulic models of APROS. The key feature of APROS is that the same physical models can be used in various applications. The nuclear reactor core models of APROS have been built in such a manner that the same models can be used in simulator and plant analyser applications, as well as in safety analysis. In the APROS environment the user can select the number of flow channels in the three-dimensional reactor core and either the homogeneous, the five- or the six-equation thermal hydraulic model for these channels. The thermal hydraulic model and the number of flow channels have a decisive effect on the calculation time of the three-dimensional core model and thus, at present, these particular selections make the major difference between a safety analysis core model and a training simulator core model. The emphasis on this thesis is on the three-dimensional core model and its capability to analyse symmetric and asymmetric events in the core. The factors affecting the calculation times of various three-dimensional BWR, PWR and WWER-type APROS core models have been

  18. Simulating Hydrologic Flow and Reactive Transport with PFLOTRAN and PETSc on Emerging Fine-Grained Parallel Computer Architectures

    Science.gov (United States)

    Mills, R. T.; Rupp, K.; Smith, B. F.; Brown, J.; Knepley, M.; Zhang, H.; Adams, M.; Hammond, G. E.

    2017-12-01

    As the high-performance computing community pushes towards the exascale horizon, power and heat considerations have driven the increasing importance and prevalence of fine-grained parallelism in new computer architectures. High-performance computing centers have become increasingly reliant on GPGPU accelerators and "manycore" processors such as the Intel Xeon Phi line, and 512-bit SIMD registers have even been introduced in the latest generation of Intel's mainstream Xeon server processors. The high degree of fine-grained parallelism and more complicated memory hierarchy considerations of such "manycore" processors present several challenges to existing scientific software. Here, we consider how the massively parallel, open-source hydrologic flow and reactive transport code PFLOTRAN - and the underlying Portable, Extensible Toolkit for Scientific Computation (PETSc) library on which it is built - can best take advantage of such architectures. We will discuss some key features of these novel architectures and our code optimizations and algorithmic developments targeted at them, and present experiences drawn from working with a wide range of PFLOTRAN benchmark problems on these architectures.

  19. Kanban simulation model for production process optimization

    Directory of Open Access Journals (Sweden)

    Golchev Riste

    2015-01-01

    Full Text Available A long time has passed since the KANBAN system was established as an efficient method for coping with excessive inventory. Still, the possibilities for its improvement through integration with other approaches should be investigated further. The basic research challenge of this paper is to present the benefits of KANBAN implementation supported with Discrete Event Simulation (DES). In that direction, the basics of the KANBAN system are first presented, with emphasis on information and material flow, together with a methodology for implementation of the KANBAN system. An analysis of combining simulation with this methodology is presented. The paper concludes with a practical example which shows that, by understanding the philosophy of the KANBAN implementation methodology and the simulation methodology, a simulation model can be created which can serve as a basis for a variety of experiments that can be conducted within a short period of time, resulting in production process optimization.
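
    A minimal, hedged sketch of the kind of discrete-event experiment the paper advocates is given below: a two-stage pull line in which stage 1 may only work ahead while a kanban card (buffer slot) is free, so throughput can be compared for different numbers of cards. Service times and parameters are invented for illustration.

```python
# Hedged sketch of a two-stage kanban pull line with exponential service times:
# stage 1 is blocked when the kanban-capped buffer is full, stage 2 pulls from
# the buffer when idle. Compares throughput for different card counts.
import random

def simulate(kanban_cards, mean1=4.0, mean2=5.0, horizon=50_000.0, seed=1):
    """Return stage-2 throughput (items per time unit) for a given card count."""
    rng = random.Random(seed)
    buffer = 0                                  # finished stage-1 items waiting
    t, completed = 0.0, 0
    s1_done = rng.expovariate(1 / mean1)        # stage 1 starts busy
    s2_done = float("inf")                      # stage 2 idle until it pulls
    while t < horizon:
        if s1_done <= s2_done:                  # next event: stage 1 finishes
            t = s1_done
            buffer += 1
            s1_done = t + rng.expovariate(1 / mean1) if buffer < kanban_cards else float("inf")
        else:                                   # next event: stage 2 finishes
            t = s2_done
            completed += 1
            s2_done = float("inf")
        if s2_done == float("inf") and buffer > 0:   # idle stage 2 pulls an item
            buffer -= 1
            s2_done = t + rng.expovariate(1 / mean2)
            if s1_done == float("inf"):              # a freed card restarts stage 1
                s1_done = t + rng.expovariate(1 / mean1)
    return completed / t

for cards in (1, 2, 4):
    print(cards, round(simulate(cards), 3))
```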

  20. Vermont Yankee simulator BOP model upgrade

    International Nuclear Information System (INIS)

    Alejandro, R.; Udbinac, M.J.

    2006-01-01

    The Vermont Yankee simulator has undergone significant changes in the 20 years since the original order was placed. After the move from the original Unix to MS Windows environment, and upgrade to the latest version of SimPort, now called MASTER, the platform was set for an overhaul and replacement of major plant system models. Over a period of a few months, the VY simulator team, in partnership with WSC engineers, replaced outdated legacy models of the main steam, condenser, condensate, circulating water, feedwater and feedwater heaters, and main turbine and auxiliaries. The timing was ideal, as the plant was undergoing a power up-rate, so the opportunity was taken to replace the legacy models with industry-leading, true on-line object oriented graphical models. Due to the efficiency of design and ease of use of the MASTER tools, VY staff performed the majority of the modeling work themselves with great success, with only occasional assistance from WSC, in a relatively short time-period, despite having to maintain all of their 'regular' simulator maintenance responsibilities. This paper will provide a more detailed view of the VY simulator, including how it is used and how it has benefited from the enhancements and upgrades implemented during the project. (author)

  1. An efficient modeling of fine air-gaps in tokamak in-vessel components for electromagnetic analyses

    International Nuclear Information System (INIS)

    Oh, Dong Keun; Pak, Sunil; Jhang, Hogun

    2012-01-01

    Highlights: ► A simple and efficient modeling technique is introduced to avoid the undesirably massive air mesh that is usually encountered when modeling fine structures in tokamak in-vessel components. ► The method is based on decoupled nodes at the element boundaries mimicking the air gaps. ► We demonstrate its viability and efficacy by comparing it with brute-force modeling of the air gaps and with an effective-resistivity approximation in place of detailed modeling. ► Application of the method to the ITER machine was successfully carried out without sacrificing computational resources and speed. - Abstract: A simple and efficient modeling technique is presented for a proper analysis of complicated eddy current flows in conducting structures with fine air gaps. It is based on the idea of replacing a slit with the decoupled boundary of finite elements. The viability and efficacy of the technique are demonstrated in a simple problem. Application of the method to electromagnetic load analyses during plasma disruptions in ITER has been successfully carried out without sacrificing computational resources and speed. This shows the proposed method is applicable to a practical system with complicated geometrical structures.

  2. The Sun-Earth connect 2: Modelling patterns of a fractal Sun in time and space using the fine structure constant

    Science.gov (United States)

    Baker, Robert G. V.

    2017-02-01

    Self-similar matrices of the fine structure constant of solar electromagnetic force and its inverse, multiplied by the Carrington synodic rotation, have been previously shown to account for at least 98% of the top one hundred significant frequencies and periodicities observed in the ACRIM composite irradiance satellite measurement and the terrestrial 10.7 cm Penticton Adjusted Daily Flux data sets. This self-similarity allows for the development of a time-space differential equation (DE) where the solutions define a solar model for transmissions through the core, radiative, tachocline, convective and coronal zones with some encouraging empirical and theoretical results. The DE assumes a fundamental complex oscillation in the solar core and that time at the tachocline is smeared with real and imaginary constructs. The resulting solutions simulate, for tachocline transmission, the solar cycle, where time-line trajectories either 'loop' as Hermite polynomials for an active Sun or 'tail' as complementary error functions for a passive Sun. Further, a mechanism that allows for the stable energy transmission through the tachocline is explored and the model predicts the initial exponential coronal heating from nanoflare supercharging. The twisting of the field at the tachocline is then described as a quaternion within which neutrinos can oscillate. The resulting fractal bubbles are simulated as a Julia Set which can then aggregate from nanoflares into solar flares and prominences. Empirical examples demonstrate that time and space fractals are important constructs in understanding the behaviour of the Sun, from the impact on climate and biological histories on Earth, to the fractal influence on the spatial distributions of the solar system. The research suggests that there is a fractal clock underpinning solar frequencies in packages defined by the fine structure constant, where magnetic flipping and irradiance fluctuations at phase changes have periodically impacted on the

  3. Simulation modeling and analysis in safety. II

    International Nuclear Information System (INIS)

    Ayoub, M.A.

    1981-01-01

    The paper introduces and illustrates simulation modeling as a viable approach for dealing with complex issues and decisions in safety and health. The author details two studies: evaluation of employee exposure to airborne radioactive materials and effectiveness of the safety organization. The first study seeks to define a policy to manage a facility used in testing employees for radiation contamination. An acceptable policy is one that would permit the testing of all employees as defined under regulatory requirements, while not exceeding available resources. The second study evaluates the relationship between safety performance and the characteristics of the organization, its management, its policy, and communication patterns among various functions and levels. Both studies use models where decisions are reached based on the prevailing conditions and occurrence of key events within the simulation environment. Finally, several problem areas suitable for simulation studies are highlighted. (Auth.)

  4. Modeling salmonella Dublin into the dairy herd simulation model Simherd

    DEFF Research Database (Denmark)

    Kudahl, Anne Braad

    2010-01-01

    Infection with Salmonella Dublin in the dairy herd and effects of the infection and relevant control measures are currently being modeled into the dairy herd simulation model called Simherd. The aim is to compare the effects of different control strategies against Salmonella Dublin on both within-herd prevalence and economy by simulations. The project is a part of a larger national project "Salmonella 2007 - 2011" with the main objective to reduce the prevalence of Salmonella Dublin in Danish dairy herds. Results of the simulations will therefore be used for decision support in the national surveillance and eradication program against Salmonella Dublin. Basic structures of the model are programmed and will be presented at the workshop. The model is in a phase of face-validation by a group of Salmonella......

  5. Administration of Oxygen Ultra-Fine Bubbles Improves Nerve Dysfunction in a Rat Sciatic Nerve Crush Injury Model

    Directory of Open Access Journals (Sweden)

    Hozo Matsuoka

    2018-05-01

    Full Text Available Ultra-fine bubbles (<200 nm in diameter) have several unique properties and have been tested in various medical fields. The purpose of this study was to investigate the effects of oxygen ultra-fine bubbles (OUBs) on sciatic nerve crush injury (SNC) model rats. Rats were intraperitoneally injected with 1.5 mL saline, OUBs diluted in saline, or nitrogen ultra-fine bubbles (NUBs) diluted in saline three times per week for 4 weeks in four groups: (1) control (sham operation + saline); (2) SNC (crush + saline); (3) SNC+OUB (crush + OUB-saline); (4) SNC+NUB (crush + NUB-saline). The effects of the OUBs on dorsal root ganglion (DRG) neurons and Schwann cells (SCs) were examined by serial dilution of OUB medium in vitro. Sciatic functional index, paw withdrawal thresholds, nerve conduction velocity, and myelinated axons were significantly decreased in the SNC group compared to the control group; these parameters were significantly improved in the SNC+OUB group, although NUB treatment did not affect these parameters. In vitro, OUBs significantly promoted neurite outgrowth in DRG neurons by activating AKT signaling and SC proliferation by activating ERK1/2 and JNK/c-JUN signaling. OUBs may improve nerve dysfunction in SNC rats by promoting neurite outgrowth in DRG neurons and SC proliferation.

  6. One-loop analysis of the electroweak breaking in supersymmetric models and the fine-tuning problem

    CERN Document Server

    De Carlos, B

    1993-01-01

    We examine the electroweak breaking mechanism in the minimal supersymmetric standard model (MSSM) using the {\em complete} one-loop effective potential $V_1$. First, we study what is the region of the whole MSSM parameter space (i.e. $M_{1/2},m_o,\mu,...$) that leads to a successful $SU(2)\times U(1)$ breaking with an acceptable top quark mass. In doing this it is observed that all the one-loop corrections to $V_1$ (even the apparently small ones) must be taken into account in order to get reliable results. We find that the allowed region of parameters is considerably enhanced with respect to former "improved" tree level results. Next, we study the fine-tuning problem associated with the high sensitivity of $M_Z$ to $h_t$ (the top Yukawa coupling). Again, we find that this fine-tuning is appreciably smaller once the one-loop effects are considered than in previous tree level calculations. Finally, we explore the ambiguities and limitations of the ordinary criterion to estimate the degree of fine-tuning. As a r...

  7. A universal simulator for ecological models

    DEFF Research Database (Denmark)

    Holst, Niels

    2013-01-01

    Software design is an often neglected issue in ecological models, even though bad software design often becomes a hindrance for re-using, sharing and even grasping an ecological model. In this paper, the methodology of agile software design was applied to the domain of ecological models. Thus...... the principles for a universal design of ecological models were arrived at. To exemplify this design, the open-source software Universal Simulator was constructed using C++ and XML and is provided as a resource for inspiration....

  8. Biological transportation networks: Modeling and simulation

    KAUST Repository

    Albi, Giacomo

    2015-09-15

    We present a model for biological network formation originally introduced by Cai and Hu [Adaptation and optimization of biological transport networks, Phys. Rev. Lett. 111 (2013) 138701]. The modeling of fluid transportation (e.g., leaf venation and angiogenesis) and ion transportation networks (e.g., neural networks) is explained in detail and basic analytical features like the gradient flow structure of the fluid transportation network model and the impact of the model parameters on the geometry and topology of network formation are analyzed. We also present a numerical finite-element based discretization scheme and discuss sample cases of network formation simulations.

  9. Reproducibility in Computational Neuroscience Models and Simulations

    Science.gov (United States)

    McDougal, Robert A.; Bulanova, Anna S.; Lytton, William W.

    2016-01-01

    Objective Like all scientific research, computational neuroscience research must be reproducible. Big data science, including simulation research, cannot depend exclusively on journal articles as the method to provide the sharing and transparency required for reproducibility. Methods Ensuring model reproducibility requires the use of multiple standard software practices and tools, including version control, strong commenting and documentation, and code modularity. Results Building on these standard practices, model sharing sites and tools have been developed that fit into several categories: 1. standardized neural simulators, 2. shared computational resources, 3. declarative model descriptors, ontologies and standardized annotations; 4. model sharing repositories and sharing standards. Conclusion A number of complementary innovations have been proposed to enhance sharing, transparency and reproducibility. The individual user can be encouraged to make use of version control, commenting, documentation and modularity in development of models. The community can help by requiring model sharing as a condition of publication and funding. Significance Model management will become increasingly important as multiscale models become larger, more detailed and correspondingly more difficult to manage by any single investigator or single laboratory. Additional big data management complexity will come as the models become more useful in interpreting experiments, thus increasing the need to ensure clear alignment between modeling data, both parameters and results, and experiment. PMID:27046845

  10. A SIMULATION MODEL OF THE GAS COMPLEX

    Directory of Open Access Journals (Sweden)

    Sokolova G. E.

    2016-06-01

    Full Text Available The article considers the dynamics of gas production in Russia, the structure of sales in the different market segments, and the comparative dynamics of selling prices in these segments. It addresses the creation of a gas complex using a simulation model that allows the efficiency of the project to be estimated and the stability region of the obtained solutions to be determined. The presented model takes into account loan repayment, making it possible to determine from the first year of simulation whether the loan can be repaid. The modeled object is a group of gas fields, for which the minimum flow rate above which the project is cost-effective is determined. In determining the minimum flow rate, the discount rate is taken as a weighted average cost of debt and equity, taking into account risk premiums. It also serves as the lower bound on the internal rate of return, below which the project is rejected as ineffective. Analysis of the dynamics and expert evaluation methods make it possible to determine the intervals of variation of the simulated parameters, such as the gas price and the time for the gas complex to reach its projected capacity. For each random realization of the model, the simulated parameter values calculated by the Monte Carlo method yield a set of realization-specific minimum well flow rates, and also allow the stability region of the solution to be determined.
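
    The Monte Carlo step described above can be sketched as follows: for each random draw of an uncertain input (here only the gas price), find the smallest flow rate whose project NPV at the hurdle discount rate is non-negative, then summarize the distribution of these minimum rates. All economic figures in the sketch are invented; it only illustrates the structure of the calculation, not the article's model.

```python
# Hedged sketch: Monte Carlo over an uncertain gas price to find the minimum
# viable well flow rate (smallest rate with non-negative NPV at the hurdle rate).
# All costs, prices and horizons are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
YEARS, DISCOUNT = 15, 0.12                     # horizon and hurdle rate (WACC-like)
CAPEX, OPEX = 500.0, 20.0                      # hypothetical money units / per year

def npv(flow_rate, price):
    """NPV of a field producing flow_rate units/year sold at a fixed price."""
    cash = flow_rate * price - OPEX
    years = np.arange(1, YEARS + 1)
    return -CAPEX + np.sum(cash / (1.0 + DISCOUNT) ** years)

def min_viable_rate(price, rates=np.linspace(1.0, 100.0, 400)):
    """Smallest flow rate with non-negative NPV for a given price draw."""
    viable = rates[np.array([npv(q, price) >= 0.0 for q in rates])]
    return viable[0] if viable.size else np.nan

prices = rng.normal(3.0, 0.5, size=1000)       # random gas-price realizations
min_rates = np.array([min_viable_rate(p) for p in prices])
print(np.nanpercentile(min_rates, [5, 50, 95]).round(1))
```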

  11. Object Oriented Modelling and Dynamical Simulation

    DEFF Research Database (Denmark)

    Wagner, Falko Jens; Poulsen, Mikael Zebbelin

    1998-01-01

    This report with appendix describes the work done in a master project at DTU. The goal of the project was to develop a concept for simulation of dynamical systems based on object-oriented methods. The result was a library of C++ classes, for use both when building component-based models and when...

  12. Advanced feeder control using fast simulation models

    NARCIS (Netherlands)

    Verheijen, O.S.; Op den Camp, O.M.G.C.; Beerkens, R.G.C.; Backx, A.C.P.M.; Huisman, L.; Drummond, C.H.

    2005-01-01

    For the automatic control of glass quality in glass production, the relation between process variable and product or glass quality and process conditions/process input parameters must be known in detail. So far, detailed 3-D glass melting simulation models were used to predict the effect of process

  13. Modeling and Simulating Virtual Anatomical Humans

    NARCIS (Netherlands)

    Madehkhaksar, Forough; Luo, Zhiping; Pronost, Nicolas; Egges, Arjan

    2014-01-01

    This chapter presents human musculoskeletal modeling and simulation as a challenging field that lies between biomechanics and computer animation. One of the main goals of computer animation research is to develop algorithms and systems that produce plausible motion. On the other hand, the main

  14. Agent Based Modelling for Social Simulation

    NARCIS (Netherlands)

    Smit, S.K.; Ubink, E.M.; Vecht, B. van der; Langley, D.J.

    2013-01-01

    This document is the result of an exploratory project looking into the status of, and opportunities for Agent Based Modelling (ABM) at TNO. The project focussed on ABM applications containing social interactions and human factors, which we termed ABM for social simulation (ABM4SS). During the course

  15. Thermohydraulic modeling and simulation of breeder reactors

    International Nuclear Information System (INIS)

    Agrawal, A.K.; Khatib-Rahbar, M.; Curtis, R.T.; Hetrick, D.L.; Girijashankar, P.V.

    1982-01-01

    This paper deals with the modeling and simulation of system-wide transients in LMFBRs. Unprotected events (i.e., the presumption of failure of the plant protection system) leading to core-melt are not considered in this paper. The existing computational capabilities in the area of protected transients in the US are noted. Various physical and numerical approximations that are made in these codes are discussed. Finally, the future direction in the area of model verification and improvements is discussed

  16. Seasonal variation in coastal marine habitat use by the European shag: Insights from fine scale habitat selection modeling and diet

    Science.gov (United States)

    Michelot, Candice; Pinaud, David; Fortin, Matthieu; Maes, Philippe; Callard, Benjamin; Leicher, Marine; Barbraud, Christophe

    2017-07-01

    Studies of habitat selection by higher trophic level species are necessary for using top predator species as indicators of ecosystem functioning. However, contrary to terrestrial ecosystems, few habitat selection studies have been conducted at a fine scale for coastal marine top predator species, and fewer have coupled diet data with habitat selection modeling to highlight a link between prey selection and habitat use. The aim of this study was to characterize spatially and oceanographically, at a fine scale, the habitats used by the European Shag Phalacrocorax aristotelis in the Special Protection Area (SPA) of Houat-Hœdic in the Mor Braz Bay during its foraging activity. Habitat selection models were built using in situ observation data of foraging shags (transect sampling) and spatially explicit environmental data to characterize marine benthic habitats. Observations were first adjusted for detectability biases and shag abundance was subsequently spatialized. The influence of habitat variables on shag abundance was tested using Generalized Linear Models (GLMs). Diet data were finally confronted to habitat selection models. Results showed that European shags breeding in the Mor Braz Bay changed foraging habitats according to the season and to the different environmental and energetic constraints. The proportion of the main preys also varied seasonally. Rocky and coarse sand habitats were clearly preferred compared to fine or muddy sand habitats. Shags appeared to be more selective in their foraging habitats during the breeding period and the rearing of chicks, using essentially rocky areas close to the colony and consuming preferentially fish from the Labridae family and three other fish families in lower proportions. During the post-breeding period shags used a broader range of habitats and mainly consumed Gadidae. Thus, European shags seem to adjust their feeding strategy to minimize energetic costs, to avoid intra-specific competition and to maximize access
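
    As a hedged illustration of the GLM-based habitat selection modeling described above, the sketch below fits a Poisson GLM of bird counts against a few habitat covariates using statsmodels. The synthetic data and covariate names (rocky, coarse_sand, depth) are invented stand-ins and do not reproduce the study's variables or its detectability corrections.

```python
# Hedged sketch: Poisson GLM of foraging-bird abundance vs habitat covariates.
# Data are synthetic; only the model structure mirrors the abstract's approach.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
rocky = rng.integers(0, 2, n)                  # 1 if the cell is rocky habitat
coarse_sand = rng.integers(0, 2, n)            # 1 if coarse sand substrate
depth = rng.uniform(5, 40, n)                  # water depth (m)

# Synthetic "true" effects: more birds over rocky/coarse substrates, fewer deep.
lam = np.exp(0.2 + 1.0 * rocky + 0.5 * coarse_sand - 0.03 * depth)
counts = rng.poisson(lam)

X = sm.add_constant(np.column_stack([rocky, coarse_sand, depth]))
glm = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(glm.params.round(3))                     # recovered habitat coefficients
```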

  17. Modeling Supermassive Black Holes in Cosmological Simulations

    Science.gov (United States)

    Tremmel, Michael

    My thesis work has focused on improving the implementation of supermassive black hole (SMBH) physics in cosmological hydrodynamic simulations. SMBHs are ubiquitous in massive galaxies, as well as bulge-less galaxies and dwarfs, and are thought to be a critical component to massive galaxy evolution. Still, much is unknown about how SMBHs form, grow, and affect their host galaxies. Cosmological simulations are an invaluable tool for understanding the formation of galaxies, self-consistently tracking their evolution with realistic merger and gas accretion histories. SMBHs are often modeled in these simulations (generally as a necessity to produce realistic massive galaxies), but their implementations are commonly simplified in ways that can limit what can be learned. Current and future observations are opening new windows into the lifecycle of SMBHs and their host galaxies, but require more detailed, physically motivated simulations. Within the novel framework I have developed, SMBHs 1) are seeded at early times without a priori assumptions of galaxy occupation, 2) grow in a way that accounts for the angular momentum of gas, and 3) experience realistic orbital evolution. I show how this model, properly tuned with a novel parameter optimization technique, results in realistic galaxies and SMBHs. Utilizing the unique ability of these simulations to capture the dynamical evolution of SMBHs, I present the first self-consistent prediction for the formation timescales of close SMBH pairs, precursors to SMBH binaries and merger events potentially detected by future gravitational wave experiments.

  18. Advances in NLTE Modeling for Integrated Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Scott, H A; Hansen, S B

    2009-07-08

    The last few years have seen significant progress in constructing the atomic models required for non-local thermodynamic equilibrium (NLTE) simulations. Along with this has come an increased understanding of the requirements for accurately modeling the ionization balance, energy content and radiative properties of different elements for a wide range of densities and temperatures. Much of this progress is the result of a series of workshops dedicated to comparing the results from different codes and computational approaches applied to a series of test problems. The results of these workshops emphasized the importance of atomic model completeness, especially in doubly excited states and autoionization transitions, to calculating ionization balance, and the importance of accurate, detailed atomic data to producing reliable spectra. We describe a simple screened-hydrogenic model that calculates NLTE ionization balance with surprising accuracy, at a low enough computational cost for routine use in radiation-hydrodynamics codes. The model incorporates term splitting, Δn = 0 transitions, and approximate UTA widths for spectral calculations, with results comparable to those of much more detailed codes. Simulations done with this model have been increasingly successful at matching experimental data for laser-driven systems and hohlraums. Accurate and efficient atomic models are just one requirement for integrated NLTE simulations. Coupling the atomic kinetics to hydrodynamics and radiation transport constrains both discretizations and algorithms to retain energy conservation, accuracy and stability. In particular, the strong coupling between radiation and populations can require either very short timesteps or significantly modified radiation transport algorithms to account for NLTE material response. Considerations such as these continue to provide challenges for NLTE simulations.

  19. On the Fidelity of Semi-distributed Hydrologic Model Simulations for Large Scale Catchment Applications

    Science.gov (United States)

    Ajami, H.; Sharma, A.; Lakshmi, V.

    2017-12-01

    Application of semi-distributed hydrologic modeling frameworks is a viable alternative to fully distributed hyper-resolution hydrologic models due to computational efficiency and resolving fine-scale spatial structure of hydrologic fluxes and states. However, fidelity of semi-distributed model simulations is impacted by (1) formulation of hydrologic response units (HRUs), and (2) aggregation of catchment properties for formulating simulation elements. Here, we evaluate the performance of a recently developed Soil Moisture and Runoff simulation Toolkit (SMART) for large catchment scale simulations. In SMART, topologically connected HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and simulation elements are equivalent cross sections (ECS) representative of a hillslope in first order sub-basins. Earlier investigations have shown that formulation of ECSs at the scale of a first order sub-basin reduces computational time significantly without compromising simulation accuracy. However, the implementation of this approach has not been fully explored for catchment scale simulations. To assess SMART performance, we set up the model over the Little Washita watershed in Oklahoma. Model evaluations using in-situ soil moisture observations show satisfactory model performance. In addition, we evaluated the performance of a number of soil moisture disaggregation schemes recently developed to provide spatially explicit soil moisture outputs at fine scale resolution. Our results illustrate that the statistical disaggregation scheme performs significantly better than the methods based on topographic data. Future work is focused on assessing the performance of SMART using remotely sensed soil moisture observations using spatially based model evaluation metrics.

  20. Pelletization of fine coals. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Sastry, K.V.S.

    1995-12-31

    Coal is one of the most abundant energy resources in the US with nearly 800 million tons of it being mined annually. Process and environmental demands for low-ash, low-sulfur coals and economic constraints for high productivity are leading the coal industry to use such modern mining methods as longwall mining and such newer coal processing techniques as froth flotation, oil agglomeration, chemical cleaning and synthetic fuel production. All these processes are faced with one common problem area--fine coals. Dealing effectively with these fine coals during handling, storage, transportation, and/or processing continues to be a challenge facing the industry. Agglomeration by the unit operation of pelletization consists of tumbling moist fines in drums or discs. Past experimental work and limited commercial practice have shown that pelletization can alleviate the problems associated with fine coals. However, it was recognized that there exists a serious need for delineating the fundamental principles of fine coal pelletization. Accordingly, a research program has been carried out involving four specific topics: (i) experimental investigation of coal pelletization kinetics, (ii) understanding the surface principles of coal pelletization, (iii) modeling of coal pelletization processes, and (iv) simulation of fine coal pelletization circuits. This report summarizes the major findings and provides relevant details of the research effort.

  1. Simple Urban Simulation Atop Complicated Models: Multi-Scale Equation-Free Computing of Sprawl Using Geographic Automata

    Directory of Open Access Journals (Sweden)

    Yu Zou

    2013-07-01

    Full Text Available Reconciling competing desires to build urban models that can be simple and complicated is something of a grand challenge for urban simulation. It also prompts difficulties in many urban policy situations, such as urban sprawl, where simple, actionable ideas may need to be considered in the context of the messily complex and complicated urban processes and phenomena that work within cities. In this paper, we present a novel architecture for achieving both simple and complicated realizations of urban sprawl in simulation. Fine-scale simulations of sprawl geography are run using geographic automata to represent the geographical drivers of sprawl in intricate detail and over fine resolutions of space and time. We use Equation-Free computing to deploy population as a coarse observable of sprawl, which can be leveraged to run automata-based models as short-burst experiments within a meta-simulation framework.
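
    The Equation-Free idea referred to above can be sketched as coarse projective integration: lift a coarse observable (population) to a fine-scale state, run the fine model for a short burst, restrict back to the observable, estimate its rate of change, and project forward over a large step. The toy fine-scale model below is an invented stand-in for the paper's geographic automata.

```python
# Hedged sketch of coarse projective integration (lift - run - restrict - project).
# The "fine-scale model" here is a toy stochastic growth rule, not the paper's automata.
import numpy as np

rng = np.random.default_rng(3)

def fine_step(agents, growth=0.02):
    """One tick of a toy fine-scale model: stochastic per-cell growth."""
    return agents + rng.poisson(growth * agents)

def restrict(agents):
    return agents.sum()                        # coarse observable: total population

def lift(population, template):
    """Distribute a coarse population over cells proportional to a spatial template."""
    return np.maximum(1, (population * template / template.sum()).astype(int))

template = rng.integers(1, 50, size=100)       # rough spatial pattern of cells
P = float(restrict(template))
for _ in range(10):                            # coarse projective integration loop
    agents = lift(P, template)
    p0 = restrict(agents)
    for _ in range(5):                         # short fine-scale burst
        agents = fine_step(agents)
    p1 = restrict(agents)
    dPdt = (p1 - p0) / 5.0                     # estimated coarse time derivative
    P = p1 + dPdt * 45.0                       # project 45 more ticks forward
print(round(P))
```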

  2. Mesoscopic modelling and simulation of soft matter.

    Science.gov (United States)

    Schiller, Ulf D; Krüger, Timm; Henrich, Oliver

    2017-12-20

    The deformability of soft condensed matter often requires modelling of hydrodynamical aspects to gain quantitative understanding. This, however, requires specialised methods that can resolve the multiscale nature of soft matter systems. We review a number of the most popular simulation methods that have emerged, such as Langevin dynamics, dissipative particle dynamics, multi-particle collision dynamics, sometimes also referred to as stochastic rotation dynamics, and the lattice-Boltzmann method. We conclude this review with a short glance at current compute architectures for high-performance computing and community codes for soft matter simulation.

  3. Numerical model simulation of atmospheric coolant plumes

    International Nuclear Information System (INIS)

    Gaillard, P.

    1980-01-01

    The effect of humid atmospheric coolants on the atmosphere is simulated by means of a three-dimensional numerical model. The atmosphere is defined by its natural vertical profiles of horizontal velocity, temperature, pressure and relative humidity. Effluent discharge is characterised by its vertical velocity and the temperature of air saturated with water vapour. The subject of investigation is the area in the vicinity of the point of discharge, with due allowance for the wake effect of the tower and buildings and, where applicable, wind veer with altitude. The model equations express the conservation relationships for momentum, energy, total mass and water mass, for an incompressible fluid behaving in accordance with the Boussinesq assumptions. Condensation is represented by a simple thermodynamic model, and turbulent fluxes are simulated by introduction of turbulent viscosity and diffusivity data based on in-situ and experimental water model measurements. The three-dimensional problem expressed in terms of the primitive variables (u, v, w, p) is governed by an elliptic equation system which is solved numerically by application of an explicit time-marching algorithm in order to predict the steady-flow velocity distribution, temperature, water vapour concentration and the liquid-water concentration defining the visible plume. Still-air (no wind) conditions are simulated by a program processing the elliptic equations in an axisymmetrical revolution coordinate system. The calculated visible plumes are compared with plumes observed on site with a view to validating the models. [fr]
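    For orientation, the conservation laws referred to above can be written schematically in the following Boussinesq form (generic notation assumed here rather than taken from the report), with u the velocity, p the pressure, θ the temperature, q the total water content, ν_t and κ_t the turbulent viscosity and diffusivity, and S_θ, S_q the condensation source terms:

```latex
\begin{align}
  \nabla\cdot\mathbf{u} &= 0,\\
  \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
    &= -\frac{1}{\rho_0}\nabla p + \nabla\cdot\left(\nu_t\,\nabla\mathbf{u}\right)
       - g\,\frac{\rho-\rho_0}{\rho_0}\,\mathbf{e}_z,\\
  \frac{\partial \theta}{\partial t} + \mathbf{u}\cdot\nabla\theta
    &= \nabla\cdot\left(\kappa_t\,\nabla\theta\right) + S_\theta,\\
  \frac{\partial q}{\partial t} + \mathbf{u}\cdot\nabla q
    &= \nabla\cdot\left(\kappa_t\,\nabla q\right) + S_q.
\end{align}
```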

  4. Modeling, simulation and optimization of bipedal walking

    CERN Document Server

    Berns, Karsten

    2013-01-01

    The model-based investigation of motions of anthropomorphic systems is an important interdisciplinary research topic involving specialists from many fields such as Robotics, Biomechanics, Physiology, Orthopedics, Psychology, Neurosciences, Sports, Computer Graphics and Applied Mathematics. This book presents a study of basic locomotion forms such as walking and running, which are of particular interest due to the high demands on dynamic coordination, actuator efficiency and balance control. Mathematical models and numerical simulation and optimization techniques are explained, in combination with experimental data, which can help to better understand the basic underlying mechanisms of these motions and to improve them. Example topics treated in this book are: Modeling techniques for anthropomorphic bipedal walking systems; Optimized walking motions for different objective functions; Identification of objective functions from measurements; Simulation and optimization approaches for humanoid robots; Biologically inspired con...

  5. Multiphase reacting flows modelling and simulation

    CERN Document Server

    Marchisio, Daniele L

    2007-01-01

    The papers in this book describe the most widely applicable modeling approaches and are organized in six groups covering from fundamentals to relevant applications. In the first part, some fundamentals of multiphase turbulent reacting flows are covered. In particular the introduction focuses on basic notions of turbulence theory in single-phase and multi-phase systems as well as on the interaction between turbulence and chemistry. In the second part, models for the physical and chemical processes involved are discussed. Among other things, particular emphasis is given to turbulence modeling strategies for multiphase flows based on the kinetic theory for granular flows. Next, the different numerical methods based on Lagrangian and/or Eulerian schemes are presented. In particular the most popular numerical approaches of computational fluid dynamics codes are described (i.e., Direct Numerical Simulation, Large Eddy Simulation, and Reynolds-Averaged Navier-Stokes approach). The book will cover particle-based meth...

  6. Advancing Material Models for Automotive Forming Simulations

    International Nuclear Information System (INIS)

    Vegter, H.; An, Y.; Horn, C.H.L.J. ten; Atzema, E.H.; Roelofsen, M.E.

    2005-01-01

    Simulations in the automotive industry need more advanced material models to achieve highly reliable forming and springback predictions. Conventional material models implemented in the FEM-simulation models are not capable of describing the plastic material behaviour during monotonic strain paths with sufficient accuracy. Recently, ESI and Corus have co-operated on the implementation of an advanced material model in the FEM-code PAMSTAMP 2G. This applies to the strain hardening model, the influence of strain rate, and the description of the yield locus in these models. A subsequent challenge is the description of the material after a change of strain path. The use of advanced high strength steels in the automotive industry requires a description of the plastic material behaviour of multiphase steels. The simplest variant is dual phase steel consisting of a ferritic and a martensitic phase. Multiphase materials also contain a bainitic phase in addition to the ferritic and martensitic phase. More physical descriptions of strain hardening than simple fitted Ludwik/Nadai curves are necessary. Methods to predict plastic behaviour of single-phase materials use a simple dislocation interaction model based only on the formed cell structures. At Corus, a new method is proposed to predict the plastic behaviour of multiphase materials; such a method has to take into account the hard phases, which deform less easily. The resulting deformation gradients create geometrically necessary dislocations. Additional micro-structural information such as morphology and size of hard phase particles or grains is necessary to derive the strain hardening models for this type of material. Measurements available from the Numisheet benchmarks allow these models to be validated. At Corus, additional measured values are available from cross-die tests. This laboratory test can attain critical deformations by large variations in blank size and processing conditions. The tests are a powerful tool in optimising forming simulations prior

  7. Modelling and simulation of thermal power plants

    Energy Technology Data Exchange (ETDEWEB)

    Eborn, J.

    1998-02-01

    Mathematical modelling and simulation are important tools when dealing with engineering systems that today are becoming increasingly more complex. Integrated production and recycling of materials are trends that give rise to heterogeneous systems, which are difficult to handle within one area of expertise. Model libraries are an excellent way to package engineering knowledge of systems and units to be reused by those who are not experts in modelling. Many commercial packages provide good model libraries, but they are usually domain-specific and closed. Heterogeneous, multi-domain systems require open model libraries written in general purpose modelling languages. This thesis describes a model database for thermal power plants written in the object-oriented modelling language OMOLA. The models are based on first principles. Subunits describe volumes with pressure and enthalpy dynamics and flows of heat or different media. The subunits are used to build basic units such as pumps, valves and heat exchangers which can be used to build system models. Several applications are described: a heat recovery steam generator, equipment for juice blending, steam generation in a sulphuric acid plant and a condensing steam plate heat exchanger. Model libraries for industrial use must be validated against measured data. The thesis describes how parameter estimation methods can be used for model validation. Results from a case-study on parameter optimization of a non-linear drum boiler model show how the technique can be used. 32 refs, 21 figs
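    The parameter-estimation step mentioned above for model validation can be pictured with the following minimal Python sketch, which fits the gain and time constant of a toy first-order model to noisy synthetic measurements; the model and data are illustrative stand-ins, not the drum-boiler model of the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def step_response(t, k, tau):
    """First-order response to a unit step input."""
    return k * (1.0 - np.exp(-t / tau))

# Synthetic "measured" data standing in for plant measurements.
t = np.linspace(0.0, 50.0, 200)
rng = np.random.default_rng(1)
measured = step_response(t, k=2.0, tau=8.0) + 0.05 * rng.standard_normal(t.size)

# Least-squares parameter estimation against the measurements.
(k_hat, tau_hat), _ = curve_fit(step_response, t, measured, p0=[1.0, 1.0])
print(f"estimated gain k = {k_hat:.2f}, time constant tau = {tau_hat:.1f} s")
```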

  8. Validity of microgravity simulation models on earth

    DEFF Research Database (Denmark)

    Regnard, J; Heer, M; Drummer, C

    2001-01-01

    Many studies have used water immersion and head-down bed rest as experimental models to simulate responses to microgravity. However, some data collected during space missions are at variance or in contrast with observations collected from experimental models. These discrepancies could reflect...... incomplete knowledge of the characteristics inherent to each model. During water immersion, the hydrostatic pressure lowers the peripheral vascular capacity and causes increased thoracic blood volume and high vascular perfusion. In turn, these changes lead to high urinary flow, low vasomotor tone, and a high...

  9. Mathematical model of consolidation of fine concrete mixtures with different mobility, casted by vacuumizing and axial pressing in layers

    Directory of Open Access Journals (Sweden)

    Dedeneva Elena

    2017-01-01

    Full Text Available A mathematical model has been worked out that allows regularities to be established in the consolidation processes of fine-grained concrete mixtures with different mobility and compaction methods. The study is based on two-phase systems and the nonlinear character of their consolidation. It resolves the question of choosing optimal parameters for vacuumizing and layer-by-layer axial pressing for the molding of thin-walled products such as concrete roof tiles and concrete pipes. Finally, products can be obtained without heat treatment by means of material- and energy-saving technologies.

  10. Mathematical models and numerical simulation in electromagnetism

    CERN Document Server

    Bermúdez, Alfredo; Salgado, Pilar

    2014-01-01

    The book represents a basic support for a master course in electromagnetism oriented to numerical simulation. The main goal of the book is that the reader knows the boundary-value problems of partial differential equations that should be solved in order to perform computer simulation of electromagnetic processes. Moreover it includes a part devoted to electric circuit theory  based on ordinary differential equations. The book is mainly oriented to electric engineering applications, going from the general to the specific, namely, from the full Maxwell’s equations to the particular cases of electrostatics, direct current, magnetostatics and eddy currents models. Apart from standard exercises related to analytical calculus, the book includes some others oriented to real-life applications solved with MaxFEM free simulation software.

  11. Modeling and simulation of economic processes

    Directory of Open Access Journals (Sweden)

    Bogdan Brumar

    2010-12-01

    Full Text Available In general, any activity involves a longer course of action that is often characterized by a degree of uncertainty and insecurity with respect to the size of the objective pursued. Because of the complexity of real economic systems and the stochastic dependencies between the variables and parameters considered, not all systems can be adequately represented by a model that can be solved by analytical methods while covering all the issues relevant to managerial and economic decision analysis over a realistic horizon. Often in such cases, it is considered that the simulation technique is the only alternative available. Using simulation techniques to study real-world systems often requires laborious work. Carrying out a simulation experiment is a process that takes place in several stages.

  12. Simulation as a surgical teaching model.

    Science.gov (United States)

    Ruiz-Gómez, José Luis; Martín-Parra, José Ignacio; González-Noriega, Mónica; Redondo-Figuero, Carlos Godofredo; Manuel-Palazuelos, José Carlos

    2018-01-01

    Teaching of surgery has been affected by many factors over the last years, such as the reduction of working hours, the optimization of the use of the operating room or patient safety. Traditional teaching methodology fails to reduce the impact of these factors on the surgeon's training. Simulation as a teaching model minimizes such impact, and is more effective than traditional teaching methods for integrating knowledge and clinical-surgical skills. Simulation complements clinical assistance with training, creating a safe learning environment where patient safety is not affected, and ethical or legal conflicts are avoided. Simulation uses learning methodologies that allow teaching individualization, adapting it to the learning needs of each student. It also allows training of all kinds of technical, cognitive or behavioural skills. Copyright © 2017 AEC. Published by Elsevier España, S.L.U. All rights reserved.

  13. Modeling the migration of fallout radionuclides to quantify the contemporary transfer of fine particles in Luvisol profiles under different land uses and farming practices

    International Nuclear Information System (INIS)

    Jagercikova, M.; Balesdent, J.; Cornu, S.; Evrard, O.; Lefevre, I.

    2014-01-01

    Soil mixing and the downward movement of solid matter in soils are dynamic pedological processes that strongly affect the vertical distribution of all soil properties across the soil profile. These processes are affected by land use and the implementation of various farming practices, but their kinetics have rarely been quantified. Our objective was to investigate the vertical transfer of matter in Luvisols at long-term experimental sites under different land uses (cropland, grassland and forest) and different farming practices (conventional tillage, reduced tillage and no tillage). To investigate these processes, the vertical radionuclide distributions of 137 Cs and 210 Pb (xs) were analyzed in 9 soil profiles. The mass balance calculations showed that as much as 91± 9% of the 137 Cs was linked to the fine particles (2 mm). To assess the kinetics of radionuclide redistribution in soil, we modeled their depth profiles using a convection-diffusion equation. The diffusion coefficient represented the rate of bioturbation, and the convection velocity provided a proxy for fine particle leaching. Both parameters were modeled as either constant or variable with depth. The tillage was simulated using an empirical formula that considered the tillage depth and a variable mixing ratio depending on the type of tillage used. A loss of isotopes due to soil erosion was introduced into the model to account for the total radionuclide inventory. All of these parameters were optimized based on the 137 Cs data and were then subsequently applied to the 210 Pb (xs) data. Our results show that the 137 Cs isotopes migrate deeper under grasslands than under forests or croplands. Additionally, our results suggest that the diffusion coefficient decreased with depth and that it remained negligible below the tillage depth at the cropland sites, below 20 cm in the forest sites, and below 80 cm in the grassland sites. (authors)
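    The depth-profile model described above is a convection-diffusion equation; in generic notation (assumed here rather than copied from the paper), the concentration C(z, t) of a radionuclide bound to fine particles evolves as

```latex
\frac{\partial C}{\partial t}
  = \frac{\partial}{\partial z}\!\left( D(z)\,\frac{\partial C}{\partial z} \right)
  - \frac{\partial \left( v(z)\, C \right)}{\partial z}
  - \lambda\, C,
```

    where D(z) is the diffusion coefficient representing bioturbation, v(z) the convection velocity acting as a proxy for fine-particle leaching, and λ the radioactive decay constant; atmospheric fallout enters as a flux boundary condition at the soil surface, and erosion removes part of the surface inventory.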

  14. Modeling and simulation of photovoltaic solar panel

    International Nuclear Information System (INIS)

    Belarbi, M.; Haddouche, K.; Midoun, A.

    2006-01-01

    In this article, we present a new approach for estimating the model parameters of a photovoltaic solar panel according to the irradiance and temperature. The parameters of the one-diode model are obtained from the knowledge of three operating points: short-circuit, open-circuit, and maximum power. In the first step, the adopted approach concerns the resolution of the system of equations constituting the three operating points to write all the model parameters as functions of the series resistance. Secondly, we perform an iterative resolution at the optimal operating point by using the Newton-Raphson method to calculate the series resistance value as well as the model parameters. Once the panel model is identified, we consider other equations for taking into account the irradiance and temperature effect. The simulation results show the convergence speed of the model parameters and the possibility of visualizing the electrical behaviour of the panel according to the irradiance and temperature. Let us note that a sensitivity of the algorithm at the optimal operating point was observed, owing to the fact that a small variation of the optimal voltage value leads to a very great variation of the identified parameter values. With the identified model, we can develop algorithms of maximum power point tracking, and make simulations of a solar water pumping system. (Author)
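    For reference, the one-diode model mentioned above is commonly written in the following generic form (a variant with a shunt resistance; some formulations omit that term):

```latex
I = I_{ph} - I_0 \left[ \exp\!\left( \frac{V + I R_s}{n V_T} \right) - 1 \right]
    - \frac{V + I R_s}{R_{sh}},
```

    where I_ph is the photocurrent, I_0 the diode saturation current, n the ideality factor, V_T the thermal voltage, R_s the series resistance and R_sh the shunt resistance. Writing this equation at the short-circuit, open-circuit and maximum-power points gives the relations from which the parameters can be expressed in terms of R_s, with R_s itself then obtained iteratively (here by Newton-Raphson) at the maximum-power point.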

  15. Deep Drawing Simulations With Different Polycrystalline Models

    Science.gov (United States)

    Duchêne, Laurent; de Montleau, Pierre; Bouvier, Salima; Habraken, Anne Marie

    2004-06-01

    The goal of this research is to study the anisotropic material behavior during forming processes, represented by both complex yield loci and kinematic-isotropic hardening models. A first part of this paper describes the main concepts of the `Stress-strain interpolation' model that has been implemented in the non-linear finite element code Lagamine. This model consists of a local description of the yield locus based on the texture of the material through the full constraints Taylor's model. The texture evolution due to plastic deformations is computed throughout the FEM simulations. This `local yield locus' approach was initially linked to the classical isotropic Swift hardening law. Recently, a more complex hardening model was implemented: the physically-based microstructural model of Teodosiu. It takes into account intergranular heterogeneity due to the evolution of dislocation structures, that affects isotropic and kinematic hardening. The influence of the hardening model is compared to the influence of the texture evolution thanks to deep drawing simulations.
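    As a reminder of the baseline hardening law named above, the isotropic Swift law expresses the flow stress as a power function of the accumulated plastic strain (standard textbook form, not specific to this paper):

```latex
\sigma_y = K \left( \varepsilon_0 + \bar{\varepsilon}^{\,p} \right)^{n},
```

    with K the strength coefficient, ε_0 a pre-strain offset and n the hardening exponent; the Teodosiu model replaces this single curve with internal variables describing the dislocation structures, so that the hardening depends on the strain-path history.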

  16. Facebook's personal page modelling and simulation

    Science.gov (United States)

    Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.

    2015-02-01

    In this paper we will try to define the utility of Facebook's Personal Page marketing method. This tool that Facebook provides is modelled and simulated using iThink in the context of a Facebook marketing agency. The paper leverages the system dynamics paradigm to model Facebook marketing tools and methods, using the iThink™ system to implement them. It uses the design science research methodology for the proof of concept of the models and modelling processes. The model has been developed for a social media marketing agent/company, is oriented to the Facebook platform, and has been tested in real circumstances. The model was finalized through a number of revisions and iterations of the design, development, simulation, testing and evaluation processes. The validity and usefulness of this Facebook marketing model for day-to-day decision making are authenticated by the management of the company organization. Facebook's Personal Page method can be adjusted, depending on the situation, in order to maximize the total benefit to the company, which is to bring in new customers, keep the interest of existing customers and deliver traffic to its website.

  17. Anne Fine

    Directory of Open Access Journals (Sweden)

    Philip Gaydon

    2015-04-01

    Full Text Available An interview with Anne Fine with an introduction and aside on the role of children’s literature in our lives and development, and our adult perceptions of the suitability of childhood reading material. Since graduating from Warwick in 1968 with a BA in Politics and History, Anne Fine has written over fifty books for children and eight for adults, won the Carnegie Medal twice (for Goggle-Eyes in 1989 and Flour Babies in 1992), been a highly commended runner-up three times (for Bill’s New Frock in 1989, The Tulip Touch in 1996, and Up on Cloud Nine in 2002), been shortlisted for the Hans Christian Andersen Award (the highest recognition available to a writer or illustrator of children’s books, 1998), undertaken the position of Children’s Laureate (2001-2003), and been awarded an OBE for her services to literature (2003). Warwick presented Fine with an Honorary Doctorate in 2005. Philip Gaydon’s interview with Anne Fine was recorded as part of the ‘Voices of the University’ oral history project, co-ordinated by Warwick’s Institute of Advanced Study.

  18. A simulation model for material accounting systems

    International Nuclear Information System (INIS)

    Coulter, C.A.; Thomas, K.E.

    1987-01-01

    A general-purpose model that was developed to simulate the operation of a chemical processing facility for nuclear materials has been extended to describe material measurement and accounting procedures as well. The model now provides descriptors for material balance areas, a large class of measurement instrument types and their associated measurement errors for various classes of materials, the measurement instruments themselves with their individual calibration schedules, and material balance closures. Delayed receipt of measurement results (as for off-line analytical chemistry assay), with interim use of a provisional measurement value, can be accurately represented. The simulation model can be used to estimate inventory difference variances for processing areas that do not operate at steady state, to evaluate the timeliness of measurement information, to determine process impacts of measurement requirements, and to evaluate the effectiveness of diversion-detection algorithms. Such information is usually difficult to obtain by other means. Use of the measurement simulation model is illustrated by applying it to estimate inventory difference variances for two material balance area structures of a fictitious nuclear material processing line

  19. Theory, modeling and simulation: Annual report 1993

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, T.H. Jr.; Garrett, B.C.

    1994-07-01

    Developing the knowledge base needed to address the environmental restoration issues of the US Department of Energy requires a fundamental understanding of molecules and their interactions in isolation and in liquids, on surfaces, and at interfaces. To meet these needs, the PNL has established the Environmental and Molecular Sciences Laboratory (EMSL) and will soon begin construction of a new, collaborative research facility devoted to advancing the understanding of environmental molecular science. Research in the Theory, Modeling, and Simulation program (TMS), which is one of seven research directorates in the EMSL, will play a critical role in understanding molecular processes important in restoring DOE's research, development and production sites, including understanding the migration and reactions of contaminants in soils and groundwater, the development of separation processes for isolation of pollutants, the development of improved materials for waste storage, understanding the enzymatic reactions involved in the biodegradation of contaminants, and understanding the interaction of hazardous chemicals with living organisms. The research objectives of the TMS program are to apply available techniques to study fundamental molecular processes involved in natural and contaminated systems; to extend current techniques to treat molecular systems of future importance and to develop techniques for addressing problems that are computationally intractable at present; to apply molecular modeling techniques to simulate molecular processes occurring in the multispecies, multiphase systems characteristic of natural and polluted environments; and to extend current molecular modeling techniques to treat complex molecular systems and to improve the reliability and accuracy of such simulations. The program contains three research activities: Molecular Theory/Modeling, Solid State Theory, and Biomolecular Modeling/Simulation. Extended abstracts are presented for 89 studies.

  20. Theory, modeling and simulation: Annual report 1993

    International Nuclear Information System (INIS)

    Dunning, T.H. Jr.; Garrett, B.C.

    1994-07-01

    Developing the knowledge base needed to address the environmental restoration issues of the US Department of Energy requires a fundamental understanding of molecules and their interactions in isolation and in liquids, on surfaces, and at interfaces. To meet these needs, the PNL has established the Environmental and Molecular Sciences Laboratory (EMSL) and will soon begin construction of a new, collaborative research facility devoted to advancing the understanding of environmental molecular science. Research in the Theory, Modeling, and Simulation program (TMS), which is one of seven research directorates in the EMSL, will play a critical role in understanding molecular processes important in restoring DOE's research, development and production sites, including understanding the migration and reactions of contaminants in soils and groundwater, the development of separation processes for isolation of pollutants, the development of improved materials for waste storage, understanding the enzymatic reactions involved in the biodegradation of contaminants, and understanding the interaction of hazardous chemicals with living organisms. The research objectives of the TMS program are to apply available techniques to study fundamental molecular processes involved in natural and contaminated systems; to extend current techniques to treat molecular systems of future importance and to develop techniques for addressing problems that are computationally intractable at present; to apply molecular modeling techniques to simulate molecular processes occurring in the multispecies, multiphase systems characteristic of natural and polluted environments; and to extend current molecular modeling techniques to treat complex molecular systems and to improve the reliability and accuracy of such simulations. The program contains three research activities: Molecular Theory/Modeling, Solid State Theory, and Biomolecular Modeling/Simulation. Extended abstracts are presented for 89 studies.

  1. Large, high-intensity fire events in Southern California shrublands: Debunking the fine-grain age patch model

    Science.gov (United States)

    Keeley, J.E.; Zedler, P.H.

    2009-01-01

    We evaluate the fine-grain age patch model of fire regimes in southern California shrublands. Proponents contend that the historical condition was characterized by frequent small to moderate size, slow-moving smoldering fires, and that this regime has been disrupted by fire suppression activities that have caused unnatural fuel accumulation and anomalously large and catastrophic wildfires. A review of more than 100 19th-century newspaper reports reveals that large, high-intensity wildfires predate modern fire suppression policy, and extensive newspaper coverage plus first-hand accounts support the conclusion that the 1889 Santiago Canyon Fire was the largest fire in California history. Proponents of the fine-grain age patch model contend that even the very earliest 20th-century fires were the result of fire suppression disrupting natural fuel structure. We tested that hypothesis and found that, within the fire perimeters of two of the largest early fire events in 1919 and 1932, prior fire suppression activities were insufficient to have altered the natural fuel structure. Over the last 130 years there has been no significant change in the incidence of large fires greater than 10000 ha, consistent with the conclusion that fire suppression activities are not the cause of these fire events. Eight megafires (≥50 000 ha) are recorded for the region, and half have occurred in the last five years. These burned through a mosaic of age classes, which raises doubts that accumulation of old age classes explains these events. Extreme drought is a plausible explanation for this recent rash of such events, and it is hypothesized that these are due to droughts that led to increased dead fine fuels that promoted the incidence of firebrands and spot fires. A major shortcoming of the fine-grain age patch model is that it requires age-dependent flammability of shrubland fuels, but seral stage chaparral is dominated by short-lived species that create a dense surface layer of fine

  2. Inflation with improved D3-brane potential and the fine tunings associated with the model

    International Nuclear Information System (INIS)

    Ali, Amna; Sami, M.; Deshamukhya, Atri; Panda, Sudhakar

    2011-01-01

    We revisit our earlier investigations of the brane-antibrane inflation in a warped deformed conifold background, reported in Phys. Lett. B 674, 131 (2009), where now we include the contributions to the inflation potential arising from imaginary anti-self-dual (IASD) fluxes including the term with irrational scaling dimension discovered recently in arXiv:0912.4268 and arXiv:1001.5028. We observe that these corrections to the effective potential help in relaxing the severe fine tunings associated with the earlier analysis. Required number of e-folds, observational constraint on COBE normalization and low value of the tensor to scalar ratio are achieved which is consistent with WMAP seven years data. (orig.)

  3. NUMERICAL MODEL APPLICATION IN ROWING SIMULATOR DESIGN

    Directory of Open Access Journals (Sweden)

    Petr Chmátal

    2016-04-01

    Full Text Available The aim of the research was to carry out a hydraulic design of a rowing/sculling and paddling simulator. Nowadays there are two main approaches in simulator design. The first one uses static water with no artificial movement and counts on specially cut oars to provide the same resistance in the water. The second approach, on the other hand, uses pumps or similar devices to force the water to circulate, but both designs share many problems. Such problems affect already built facilities and can be summarized as an unrealistic feeling, unwanted turbulent flow and a bad velocity profile. Therefore, the goal was to design a new rowing simulator that would provide nature-like conditions for the racers and provide an unmatched experience. In order to accomplish this challenge, it was decided to use in-depth numerical modeling to solve the hydraulic problems. The general measures for the design were taken in accordance with the space availability of the simulator's housing. The entire research was coordinated with other stages of the construction using BIM. The detailed geometry was designed using a numerical model in Ansys Fluent and parametric auto-optimization tools, which led to minimum negative hydraulic phenomena and decreased investment and operational costs due to the decreased hydraulic losses in the system.

  4. eShopper modeling and simulation

    Science.gov (United States)

    Petrushin, Valery A.

    2001-03-01

    The advent of e-commerce gives an opportunity to shift the paradigm of customer communication into a highly interactive mode. The new generation of commercial Web servers, such as the Blue Martini's server, combines the collection of data on customer behavior with real-time processing and dynamic tailoring of a feedback page. New opportunities for direct product marketing and cross selling are arriving. The key problem is what kind of information do we need to achieve these goals, or in other words, how do we model the customer? The paper is devoted to customer modeling and simulation. The focus is on modeling an individual customer. The model is based on the customer's transaction data, click stream data, and demographics. The model includes the hierarchical profile of a customer's preferences for different types of products and brands; consumption models for the different types of products; the current focus, trends, and stochastic models for time intervals between purchases; product affinity models; and some generalized features, such as purchasing power, sensitivity to advertising, price sensitivity, etc. This type of model is used for predicting the date of the next visit, overall spending, and spending for different types of products and brands. For some types of stores (for example, a supermarket) and stable customers, it is possible to forecast the shopping lists rather accurately. The forecasting techniques are discussed. The forecasting results can be used for on-line direct marketing, customer retention, and inventory management. The customer model can also be used as a generative model for simulating the customer's purchasing behavior in different situations and for estimating customer's features.

  5. A collision model in plasma particle simulations

    International Nuclear Information System (INIS)

    Ma Yanyun; Chang Wenwei; Yin Yan; Yue Zongwu; Cao Lihua; Liu Daqing

    2000-01-01

    In order to offset the collisional effects reduced by using finite-size particles, β particle clouds are used in particle simulation codes (β is the ratio of charge or mass of modeling particles to real ones). The method of impulse approximation (straight-line orbit approximation) is used to analyze the scattering cross section of β particle cloud plasmas. The authors obtain the relation between the values of a and β and the scattering cross section (a is the radius of the β particle cloud). By using this relation the authors can determine the values of a and β so that the collisional effects of the modeling system correspond to those of the real one. The authors can also adjust the values of a and β so as to artificially enhance or reduce the collisional effects. The results of simulation are in good agreement with the theoretical ones

  6. Macro Level Simulation Model Of Space Shuttle Processing

    Science.gov (United States)

    2000-01-01

    The contents include: 1) Space Shuttle Processing Simulation Model; 2) Knowledge Acquisition; 3) Simulation Input Analysis; 4) Model Applications in Current Shuttle Environment; and 5) Model Applications for Future Reusable Launch Vehicles (RLV's). This paper is presented in viewgraph form.

  7. High-Fidelity Roadway Modeling and Simulation

    Science.gov (United States)

    Wang, Jie; Papelis, Yiannis; Shen, Yuzhong; Unal, Ozhan; Cetin, Mecit

    2010-01-01

    Roads are an essential feature in our daily lives. With the advances in computing technologies, 2D and 3D road models are employed in many applications, such as computer games and virtual environments. Traditional road models were generated by professional artists manually using modeling software tools such as Maya and 3ds Max. This approach requires both highly specialized and sophisticated skills and massive manual labor. Automatic road generation based on procedural modeling can create road models using specially designed computer algorithms or procedures, dramatically reducing the tedious manual editing needed for road modeling. But most existing procedural modeling methods for road generation put emphasis on the visual effects of the generated roads, not the geometrical and architectural fidelity. This limitation seriously restricts the applicability of the generated road models. To address this problem, this paper proposes a high-fidelity roadway generation method that takes into account road design principles practiced by civil engineering professionals, and as a result, the generated roads can support not only general applications such as games and simulations in which roads are used as 3D assets, but also demanding civil engineering applications, which require accurate geometrical models of roads. The inputs to the proposed method include road specifications, civil engineering road design rules, terrain information, and the surrounding environment. The proposed method then generates in real time 3D roads that have both high visual and geometrical fidelity. This paper discusses in detail the procedures that convert 2D roads specified in shape files into 3D roads, and the civil engineering road design principles involved. The proposed method can be used in many applications that have stringent requirements on high precision 3D models, such as driving simulations and road design prototyping. Preliminary results demonstrate the effectiveness of the proposed method.

  8. Flow simulation in piping system dead legs using second moment, closure and k-epsilon model

    International Nuclear Information System (INIS)

    Deutsch, E.; Mechitoua, N.; Mattei, J.D.

    1996-01-01

    This paper deals with an industrial application of a second moment closure turbulence model in the numerical simulation of 3D turbulent flows in piping system dead legs. Calculations performed with the 3D ESTET code are presented which contrast the performance of the k-epsilon eddy viscosity model and second moment closure turbulence models. Coarse (100 000), medium (400 000) and fine (1 500 000) meshes were used. The second moment closure performs significantly better than the eddy viscosity model and predicts the vortex penetration in dead legs in good agreement with the data, provided sufficiently refined meshes are used. The results point out the necessity of being able to perform calculations using fine meshes before introducing refined physical models such as a second moment closure turbulence model in a numerical code. This study illustrates the ability of the second moment closure turbulence model to simulate 3D turbulent industrial flows. Reynolds stress model computation does not require special care; the calculation is carried out as simply as the k-epsilon one. The CPU time needed is less than twice the CPU time needed using the k-epsilon model. (authors)

  9. Difficulties with True Interoperability in Modeling & Simulation

    Science.gov (United States)

    2011-12-01

    Standards in M&S cover multiple layers of technical abstraction. There are middleware specifications, such as the High Level Architecture (HLA), standardized as IEEE 1516-2010, IEEE Standard for Modeling and Simulation (M&S) High Level Architecture (HLA) – Framework and Rules (IEEE Xplore Digital Library, 2010), ... systems using different communication protocols being able to allow da...

  10. Agent Based Modelling for Social Simulation

    OpenAIRE

    Smit, S.K.; Ubink, E.M.; Vecht, B. van der; Langley, D.J.

    2013-01-01

    This document is the result of an exploratory project looking into the status of, and opportunities for Agent Based Modelling (ABM) at TNO. The project focussed on ABM applications containing social interactions and human factors, which we termed ABM for social simulation (ABM4SS). During the course of this project two workshops were organized. At these workshops, a wide range of experts, both ABM experts and domain experts, worked on several potential applications of ABM. The results and ins...

  11. Mathematical models for photovoltaic solar panel simulation

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Jose Airton A. dos; Gnoatto, Estor; Fischborn, Marcos; Kavanagh, Edward [Universidade Tecnologica Federal do Parana (UTFPR), Medianeira, PR (Brazil)], Emails: airton@utfpr.edu.br, gnoatto@utfpr.edu.br, fisch@utfpr.edu.br, kavanagh@utfpr.edu.br

    2008-07-01

    A photovoltaic generator is subject to variations of solar intensity, ambient temperature and load that change its operating point. Its behavior should therefore be analyzed under such variations in order to optimize its operation. The present work sought to simulate a polycrystalline silicon photovoltaic generator from the characteristics supplied by the manufacturer, and to compare the results of two mathematical models with values measured in the field, in the city of Cascavel, over a period of one year. (author)

  12. Simulation of Near-Edge X-ray Absorption Fine Structure with Time-Dependent Equation-of-Motion Coupled-Cluster Theory.

    Science.gov (United States)

    Nascimento, Daniel R; DePrince, A Eugene

    2017-07-06

    An explicitly time-dependent (TD) approach to equation-of-motion (EOM) coupled-cluster theory with single and double excitations (CCSD) is implemented for simulating near-edge X-ray absorption fine structure in molecular systems. The TD-EOM-CCSD absorption line shape function is given by the Fourier transform of the CCSD dipole autocorrelation function. We represent this transform by its Padé approximant, which provides converged spectra in much shorter simulation times than are required by the Fourier form. The result is a powerful framework for the blackbox simulation of broadband absorption spectra. K-edge X-ray absorption spectra for carbon, nitrogen, and oxygen in several small molecules are obtained from the real part of the absorption line shape function and are compared with experiment. The computed and experimentally obtained spectra are in good agreement; the mean unsigned error in the predicted peak positions is only 1.2 eV. We also explore the spectral signatures of protonation in these molecules.
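    Schematically, the absorption line shape referred to above is the Fourier transform of the time-dependent dipole autocorrelation function; in a generic notation (assumed here for illustration),

```latex
I(\omega) \;\propto\; \operatorname{Re} \int_0^{\infty}
  e^{\,i\omega t}\,
  \langle \Psi_0 \,|\, \hat{\mu}\, e^{-i(\hat{H}-E_0)t}\, \hat{\mu} \,|\, \Psi_0 \rangle
  \,\mathrm{d}t,
```

    where the correlation function is propagated explicitly in time at the EOM-CCSD level, and the discrete Fourier transform is replaced by its Padé approximant to accelerate convergence of the spectrum.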

  13. Modelling interplanetary CMEs using magnetohydrodynamic simulations

    Directory of Open Access Journals (Sweden)

    P. J. Cargill

    Full Text Available The dynamics of Interplanetary Coronal Mass Ejections (ICMEs are discussed from the viewpoint of numerical modelling. Hydrodynamic models are shown to give a good zero-order picture of the plasma properties of ICMEs, but they cannot model the important magnetic field effects. Results from MHD simulations are shown for a number of cases of interest. It is demonstrated that the strong interaction of the ICME with the solar wind leads to the ICME and solar wind velocities being close to each other at 1 AU, despite their having very different speeds near the Sun. It is also pointed out that this interaction leads to a distortion of the ICME geometry, making cylindrical symmetry a dubious assumption for the CME field at 1 AU. In the presence of a significant solar wind magnetic field, the magnetic fields of the ICME and solar wind can reconnect with each other, leading to an ICME that has solar wind-like field lines. This effect is especially important when an ICME with the right sense of rotation propagates down the heliospheric current sheet. It is also noted that a lack of knowledge of the coronal magnetic field makes such simulations of little use in space weather forecasts that require knowledge of the ICME magnetic field strength.

    Key words. Interplanetary physics (interplanetary magnetic fields); Solar physics, astrophysics, and astronomy (flares and mass ejections); Space plasma physics (numerical simulation studies)

  14. Interactive Modelling and Simulation of Human Motion

    DEFF Research Database (Denmark)

    Engell-Nørregård, Morten Pol

    ...human joints, which exhibit both non-convexity and several degrees of freedom • A general and versatile model for the activation of soft bodies. The model can be used as an animation tool, but is equally well suited to the simulation of human muscles, since it satisfies the fundamental physical principles... Danish summary: This Ph.D. thesis deals with the modelling and simulation of human motion. The topics of the thesis have at least two things in common. First, they deal with human motion: although the developed models can also be used for other purposes, the primary focus is on modelling the human body. Second, they all deal with simulation as a tool for synthesizing motion and thereby creating animations. This is an important point, since it means that we are not only creating tools that animators can use to make fun...

  15. MODELING AND SIMULATION OF A HYDROCRACKING UNIT

    Directory of Open Access Journals (Sweden)

    HASSAN A. FARAG

    2016-06-01

    Full Text Available Hydrocracking is used in the petroleum industry to convert low quality feed stocks into high valued transportation fuels such as gasoline, diesel, and jet fuel. The aim of the present work is to develop a rigorous steady state two-dimensional mathematical model, including conservation equations of mass and energy, for simulating the operation of a hydrocracking unit. Both the catalyst bed and the quench zone have been included in this integrated model. The model equations were numerically solved in both the axial and radial directions using Matlab software. The presented model was tested against real plant data from Egypt. The results indicated very good agreement between the model predictions and the industrial values for the temperature profiles, concentration profiles, and conversion in both the radial and axial directions of the hydrocracking unit. Simulation of the conversion and temperature profiles in the quench zone was also included and showed a low deviation from the actual values. For the concentration profiles, the percentage deviation was found to be 9.28% in the first reactor and 9.6% in the second reactor. The effect of several parameters such as: Pellet Heat Transfer Coefficient, Effective Radial Thermal Conductivity, Wall Heat Transfer Coefficient, Effective Radial Diffusivity, and Cooling medium (quench zone) has been included in this study. Varying the Wall Heat Transfer Coefficient and the Effective Radial Diffusivity for the near-wall region gave no remarkable changes in the temperature profiles. On the other hand, even small variations of the Effective Radial Thermal Conductivity affected the simulated temperature profiles significantly, and this effect could not be compensated by variations of the other parameters of the model.

  16. Simulation of groundwater flow in the glacial aquifer system of northeastern Wisconsin with variable model complexity

    Science.gov (United States)

    Juckem, Paul F.; Clark, Brian R.; Feinstein, Daniel T.

    2017-05-04

    The U.S. Geological Survey, National Water-Quality Assessment seeks to map estimated intrinsic susceptibility of the glacial aquifer system of the conterminous United States. Improved understanding of the hydrogeologic characteristics that explain spatial patterns of intrinsic susceptibility, commonly inferred from estimates of groundwater age distributions, is sought so that methods used for the estimation process are properly equipped. An important step beyond identifying relevant hydrogeologic datasets, such as glacial geology maps, is to evaluate how incorporation of these resources into process-based models using differing levels of detail could affect resulting simulations of groundwater age distributions and, thus, estimates of intrinsic susceptibility.This report describes the construction and calibration of three groundwater-flow models of northeastern Wisconsin that were developed with differing levels of complexity to provide a framework for subsequent evaluations of the effects of process-based model complexity on estimations of groundwater age distributions for withdrawal wells and streams. Preliminary assessments, which focused on the effects of model complexity on simulated water levels and base flows in the glacial aquifer system, illustrate that simulation of vertical gradients using multiple model layers improves simulated heads more in low-permeability units than in high-permeability units. Moreover, simulation of heterogeneous hydraulic conductivity fields in coarse-grained and some fine-grained glacial materials produced a larger improvement in simulated water levels in the glacial aquifer system compared with simulation of uniform hydraulic conductivity within zones. The relation between base flows and model complexity was less clear; however, the relation generally seemed to follow a similar pattern as water levels. Although increased model complexity resulted in improved calibrations, future application of the models using simulated particle

  17. On Improving 4-km Mesoscale Model Simulations

    Science.gov (United States)

    Deng, Aijun; Stauffer, David R.

    2006-03-01

    A previous study showed that use of analysis-nudging four-dimensional data assimilation (FDDA) and improved physics in the fifth-generation Pennsylvania State University National Center for Atmospheric Research Mesoscale Model (MM5) produced the best overall performance on a 12-km-domain simulation, based on the 18 19 September 1983 Cross-Appalachian Tracer Experiment (CAPTEX) case. However, reducing the simulated grid length to 4 km had detrimental effects. The primary cause was likely the explicit representation of convection accompanying a cold-frontal system. Because no convective parameterization scheme (CPS) was used, the convective updrafts were forced on coarser-than-realistic scales, and the rainfall and the atmospheric response to the convection were too strong. The evaporative cooling and downdrafts were too vigorous, causing widespread disruption of the low-level winds and spurious advection of the simulated tracer. In this study, a series of experiments was designed to address this general problem involving 4-km model precipitation and gridpoint storms and associated model sensitivities to the use of FDDA, planetary boundary layer (PBL) turbulence physics, grid-explicit microphysics, a CPS, and enhanced horizontal diffusion. Some of the conclusions include the following: 1) Enhanced parameterized vertical mixing in the turbulent kinetic energy (TKE) turbulence scheme has shown marked improvements in the simulated fields. 2) Use of a CPS on the 4-km grid improved the precipitation and low-level wind results. 3) Use of the Hong and Pan Medium-Range Forecast PBL scheme showed larger model errors within the PBL and a clear tendency to predict much deeper PBL heights than the TKE scheme. 4) Combining observation-nudging FDDA with a CPS produced the best overall simulations. 5) Finer horizontal resolution does not always produce better simulations, especially in convectively unstable environments, and a new CPS suitable for 4-km resolution is needed. 6

  18. Fine-particle pH for Beijing winter haze as inferred from different thermodynamic equilibrium models

    Directory of Open Access Journals (Sweden)

    S. Song

    2018-05-01

    Full Text Available pH is an important property of aerosol particles but is difficult to measure directly. Several studies have estimated the pH values for fine particles in northern China winter haze using thermodynamic models (i.e., E-AIM and ISORROPIA) and ambient measurements. The reported pH values differ widely, ranging from close to 0 (highly acidic) to as high as 7 (neutral). In order to understand the reason for this discrepancy, we calculated pH values using these models with different assumptions with regard to model inputs and particle phase states. We find that the large discrepancy is due primarily to differences in the model assumptions adopted in previous studies. Calculations using only aerosol-phase composition as inputs (i.e., reverse mode) are sensitive to the measurement errors of ionic species, and inferred pH values exhibit a bimodal distribution, with peaks between −2 and 2 and between 7 and 10, depending on whether anions or cations are in excess. Calculations using total (gas plus aerosol phase) measurements as inputs (i.e., forward mode) are affected much less by these measurement errors. In future studies, the reverse mode should be avoided whereas the forward mode should be used. Forward-mode calculations in this and previous studies collectively indicate a moderately acidic condition (pH from about 4 to about 5) for fine particles in northern China winter haze, indicating further that ammonia plays an important role in determining this property. The assumed particle phase state, either stable (solid plus liquid) or metastable (only liquid), does not significantly impact pH predictions. The unrealistic pH values of about 7 in a few previous studies (using the standard ISORROPIA model and stable state assumption) resulted from coding errors in the model, which have been identified and fixed in this study.
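    For context, the particle pH being estimated is defined from the hydrogen-ion activity in the aerosol liquid water; in a commonly used form (notation generic rather than taken from the paper),

```latex
\mathrm{pH} \;=\; -\log_{10} a_{\mathrm{H^+}}
          \;\approx\; -\log_{10}\!\left( \gamma_{\mathrm{H^+}}\, c_{\mathrm{H^+}} \right),
```

    where c_{H+} is the hydronium concentration in the particle liquid water and γ_{H+} its activity coefficient; the thermodynamic models predict the particle liquid water content and γ_{H+} from the gas- and aerosol-phase composition supplied as input.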

  19. Reactive transport models and simulation with ALLIANCES

    International Nuclear Information System (INIS)

    Leterrier, N.; Deville, E.; Bary, B.; Trotignon, L.; Hedde, T.; Cochepin, B.; Stora, E.

    2009-01-01

    Many chemical processes influence the evolution of nuclear waste storage. As a result, simulations based only upon transport and hydraulic processes fail to describe adequately some industrial scenarios. We need to take into account complex chemical models (mass action laws, kinetics...) which are highly non-linear. In order to simulate the coupling of these chemical reactions with transport, we use a classical Sequential Iterative Approach (SIA), with a fixed point algorithm, within the framework of the ALLIANCES platform. This approach allows us to use the various transport and chemical modules available in ALLIANCES, via an operator-splitting method based upon the structure of the chemical system. We present five different applications of reactive transport simulations in the context of nuclear waste storage: 1. A 2D simulation of the leaching by rain water of an underground polluted zone high in uranium oxide; 2. The degradation of the steel envelope of a package in contact with clay. Corrosion of the steel creates corrosion products and the altered package becomes a porous medium. We follow the degradation front through kinetic reactions and the coupling with transport; 3. The degradation of a cement-based material by the injection of an aqueous solution of zinc and sulphate ions. In addition to the reactive transport coupling, we take into account in this case the hydraulic feedback of the porosity variation on the Darcy velocity; 4. The decalcification of a concrete beam in an underground storage structure. In this case, in addition to the reactive transport simulation, we take into account the interaction between chemical degradation and the mechanical forces (cracks...), and the feedback of the structural changes on transport; 5. The degradation of the steel envelope of a package in contact with a clay material under a temperature gradient. In this case the reactive transport simulation is entirely directed by the temperature changes and
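    The Sequential Iterative Approach mentioned above can be pictured with the toy Python sketch below, which alternates a 1-D transport operator and a chemistry operator inside each time step until the coupling term stops changing; both operators are deliberately simplistic stand-ins, not the ALLIANCES modules.

```python
import numpy as np

def transport_step(c, dt, v=1e-2, dx=1.0):
    """Explicit upwind advection of concentrations along a 1-D column."""
    c_new = c.copy()
    c_new[1:] -= v * dt / dx * (c[1:] - c[:-1])
    return c_new

def chemistry_step(c, dt, k=5e-2):
    """Toy chemistry: first-order decay standing in for the chemical solver."""
    return c * np.exp(-k * dt)

def sia_time_step(c, dt, tol=1e-12, max_iter=50):
    """One SIA time step: fixed-point iteration coupling transport and chemistry
    through a reaction source term."""
    source = np.zeros_like(c)
    for _ in range(max_iter):
        c_transported = transport_step(c + source, dt)   # transport sees the chemistry
        c_reacted = chemistry_step(c_transported, dt)    # chemistry sees the transport
        new_source = c_reacted - c_transported           # updated coupling term
        if np.max(np.abs(new_source - source)) < tol:
            break
        source = new_source
    return c_reacted

c = np.zeros(100)
c[0] = 1.0                                # unit pulse at the column inlet
for _ in range(200):
    c = sia_time_step(c, dt=0.5)
print("total mass remaining in the column:", c.sum())
```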

  20. Consolidation modelling for thermoplastic composites forming simulation

    Science.gov (United States)

    Xiong, H.; Rusanov, A.; Hamila, N.; Boisse, P.

    2016-10-01

    Pre-impregnated thermoplastic composites are widely used in the aerospace industry for their excellent mechanical properties. Thermoforming of thermoplastic prepregs is a fast manufacturing process, and the automotive industry has shown increasing interest in this manufacturing process, in which reconsolidation is an essential stage. The model of intimate contact is investigated as the consolidation model; compression experiments have been carried out to identify the material parameters, and several numerical tests show the influence of the temperature and pressure applied during processing. Finally, a new solid-shell prismatic element is presented for the simulation of the consolidation step in the thermoplastic composite forming process.

  1. Quantification of uncertainties of modeling and simulation

    International Nuclear Information System (INIS)

    Ma Zhibo; Yin Jianwei

    2012-01-01

    The principles of Modeling and Simulation (M and S) are interpreted through a functional relation, from which the total uncertainties of M and S are identified and sorted into three parts considered to vary along with the conceptual model's parameters. According to the idea of verification and validation, the parameter space is partitioned into verified and applied domains; uncertainties in the verified domain are quantified by comparison between numerical and standard results, and those in the applied domain are quantified by a newly developed extrapolation method. Examples are presented to demonstrate and qualify the ideas, aimed at building a framework to quantify the uncertainties of M and S. (authors)

  2. Simulation models generator. Applications in scheduling

    Directory of Open Access Journals (Sweden)

    Omar Danilo Castrillón

    2013-08-01

    Rev. Mate. Teor. Aplic. (ISSN 1409-2433), Vol. 20(2): 231–241, July 2013. ... in order to have an approach to reality for evaluating decisions so that more assertive ones can be taken. To test the prototype, a production system with 9 machines and 5 jobs in a job shop configuration was used as the modeling example, with stochastic processing times and machine stops, measuring machine utilization rates and the average time of jobs in the system as measures of system performance. This test shows the goodness of the prototype, saving the user the building of the simulation model

  3. Modeling and simulation of reactive flows

    CERN Document Server

    Bortoli, De AL; Pereira, Felipe

    2015-01-01

    Modelling and Simulation of Reactive Flows presents information on modeling and how to numerically solve reactive flows. The book offers a distinctive approach that combines diffusion flames and geochemical flow problems, providing users with a comprehensive resource that bridges the gap for scientists, engineers, and the industry. Specifically, the book looks at the basic concepts related to reaction rates, chemical kinetics, and the development of reduced kinetic mechanisms. It considers the most common methods used in practical situations, along with equations for reactive flows, and va

  4. Nonlinear friction model for servo press simulation

    Science.gov (United States)

    Ma, Ninshu; Sugitomo, Nobuhiko; Kyuno, Takunori; Tamura, Shintaro; Naka, Tetsuo

    2013-12-01

    The friction coefficient was measured under an idealized condition for a pulse servo motion. The measured friction coefficient and its changes with both sliding distance and the pulse motion showed that the friction resistance can be reduced by re-lubrication during the unloading phase of the pulse servo motion. Based on the measured friction coefficient and its changes with sliding distance and oil re-lubrication, a nonlinear friction model was developed. Using the newly developed nonlinear friction model, a deep-draw simulation was performed and the formability was evaluated. The results were compared with experimental ones and the effectiveness of the model was verified.
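
    The abstract does not give the functional form of the model; purely as an illustration, the Python fragment below assumes a friction coefficient that rises with the sliding distance accumulated since the last re-lubrication and is reset whenever the pulse motion unloads. The exponential form and every parameter value are hypothetical.

      import math

      def friction_coefficient(d, mu_lub=0.05, mu_dry=0.15, d_ref=5.0):
          """Hypothetical rise of friction with sliding distance d since the last re-lubrication."""
          return mu_dry - (mu_dry - mu_lub) * math.exp(-d / d_ref)

      # Pulse servo motion: sliding segments separated by unloading events that re-lubricate.
      mu_history, d_since_relub = [], 0.0
      for segment_mm, unloaded_before in [(2.0, False), (2.0, True), (2.0, False), (2.0, True)]:
          if unloaded_before:
              d_since_relub = 0.0        # oil re-wets the contact during unloading
          d_since_relub += segment_mm
          mu_history.append(round(friction_coefficient(d_since_relub), 4))
      print(mu_history)                  # friction falls back after each re-lubrication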

  5. VAR IPP-IPC Model Simulation

    Directory of Open Access Journals (Sweden)

    Juan P. Pérez Monsalve

    2014-12-01

    Full Text Available This work analyzed the relationship between the two main price indicators in the Colombian economy, the IPP and the IPC. For this purpose, we identified the theory behind both indexes and then developed a vector autoregressive (VAR) model, which shows the reaction of each variable to shocks both in itself and in the other variable, whose impact continues to propagate in the long term. Additionally, the work presents a simulation of the VAR model through the Monte Carlo method, verifying the agreement of the probability distributions and volatility levels, as well as the existence of correlation over time.
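
    A minimal sketch in Python of how such a bivariate VAR(1) can be simulated by the Monte Carlo method to inspect the resulting distributions and volatilities; the coefficient and covariance matrices are arbitrary illustrative values, not the estimated IPP-IPC model.

      import numpy as np

      rng = np.random.default_rng(0)
      A = np.array([[0.5, 0.2],            # hypothetical VAR(1) coefficients (IPP, IPC)
                    [0.1, 0.6]])
      Sigma = np.array([[0.4, 0.1],        # hypothetical innovation covariance
                        [0.1, 0.3]])

      def simulate_var(T=120):
          y = np.zeros((T, 2))
          shocks = rng.multivariate_normal(np.zeros(2), Sigma, size=T)
          for t in range(1, T):
              y[t] = A @ y[t - 1] + shocks[t]
          return y

      # Monte Carlo: many paths, then compare the volatility of the two indexes.
      paths = np.stack([simulate_var() for _ in range(1000)])
      print("std IPP:", paths[:, :, 0].std(), "std IPC:", paths[:, :, 1].std())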

  6. TMS modeling toolbox for realistic simulation.

    Science.gov (United States)

    Cho, Young Sun; Suh, Hyun Sang; Lee, Won Hee; Kim, Tae-Seong

    2010-01-01

    Transcranial magnetic stimulation (TMS) is a technique for brain stimulation using rapidly changing magnetic fields generated by coils. It has been established as an effective stimulation technique to treat patients suffering from damaged brain functions. Although TMS is known to be painless and noninvasive, it can also be harmful to the brain through incorrect focusing and excessive stimulation, which might result in seizure. Therefore there is an ongoing research effort to elucidate and better understand the effect and mechanism of TMS. Lately, the boundary element method (BEM) and the finite element method (FEM) have been used to simulate the electromagnetic phenomena of TMS. However, there is a lack of general tools to generate the models of TMS due to some difficulties in realistic modeling of the human head and TMS coils. In this study, we have developed a toolbox through which one can generate high-resolution FE TMS models. The toolbox allows creating FE models of the head, with isotropic and anisotropic electrical conductivities in five different tissues of the head, and of the coils in 3D. The generated TMS model is importable to FE software packages such as ANSYS for further and efficient electromagnetic analysis. We present a set of demonstrative results of realistic simulation of TMS with our toolbox.

  7. Biomedical Simulation Models of Human Auditory Processes

    Science.gov (United States)

    Bicak, Mehmet M. A.

    2012-01-01

    Detailed acoustic engineering models explore noise propagation mechanisms associated with noise attenuation and transmission paths created when using hearing protectors such as earplugs and headsets in high-noise environments. Biomedical finite element (FE) models are developed based on volume Computed Tomography scan data which provide explicit external ear, ear canal, middle ear ossicular bone and cochlea geometry. Results from these studies have enabled a greater understanding of hearing-protector-to-flesh dynamics as well as the prioritization of noise propagation mechanisms. Prioritization of noise mechanisms can form an essential framework for the exploration of new design principles and methods in both earplug and earcup applications. These models are currently being used in the development of a novel hearing protection evaluation system that can provide experimentally correlated psychoacoustic noise attenuation. Moreover, these FE models can be used to simulate the effects of blast-related impulse noise on human auditory mechanisms and brain tissue.

  8. Crop Yield Simulations Using Multiple Regional Climate Models in the Southwestern United States

    Science.gov (United States)

    Stack, D.; Kafatos, M.; Kim, S.; Kim, J.; Walko, R. L.

    2013-12-01

    Agricultural productivity (described by crop yield) is strongly dependent on climate conditions determined by meteorological parameters (e.g., temperature, rainfall, and solar radiation). California is the largest producer of agricultural products in the United States, but crops in associated arid and semi-arid regions live near their physiological limits (e.g., in hot summer conditions with little precipitation). Thus, accurate climate data are essential in assessing the impact of climate variability on agricultural productivity in the Southwestern United States and other arid regions. To address this issue, we produced simulated climate datasets and used them as input for the crop production model. For climate data, we employed two different regional climate models (WRF and OLAM) using a fine-resolution (8km) grid. Performances of the two different models are evaluated in a fine-resolution regional climate hindcast experiment for 10 years from 2001 to 2010 by comparing them to the North American Regional Reanalysis (NARR) dataset. Based on this comparison, multi-model ensembles with variable weighting are used to alleviate model bias and improve the accuracy of crop model productivity over large geographic regions (county and state). Finally, by using a specific crop-yield simulation model (APSIM) in conjunction with meteorological forcings from the multi-regional climate model ensemble, we demonstrate the degree to which maize yields are sensitive to the regional climate in the Southwestern United States.
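
    A minimal sketch in Python of variable-weight multi-model ensemble averaging against a reference reanalysis; the gridded fields and the inverse-RMSE weighting rule are illustrative assumptions, not the scheme or data used in the study.

      import numpy as np

      def ensemble_weights(models, reference):
          """Weight each model by the inverse of its RMSE against the reference (assumed rule)."""
          rmse = np.array([np.sqrt(np.mean((m - reference) ** 2)) for m in models])
          inv = 1.0 / rmse
          return inv / inv.sum()

      # Hypothetical decadal-mean summer temperatures on a small grid (deg C).
      rng = np.random.default_rng(1)
      narr = 28 + rng.normal(0, 1, (20, 20))            # reference (NARR-like)
      wrf  = narr + rng.normal(0.5, 0.8, narr.shape)    # model with a warm bias
      olam = narr + rng.normal(-0.3, 1.2, narr.shape)   # model with more noise

      w = ensemble_weights([wrf, olam], narr)
      ensemble = w[0] * wrf + w[1] * olam
      print("weights:", w.round(2), "ensemble bias:", (ensemble - narr).mean().round(3))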

  9. A Pore Scale Flow Simulation of Reconstructed Model Based on the Micro Seepage Experiment

    Directory of Open Access Journals (Sweden)

    Jianjun Liu

    2017-01-01

    Full Text Available Research on microscopic seepage mechanisms and the fine description of reservoir pore structure plays an important role in the effective development of low and ultralow permeability reservoirs. In this paper, the typical micro pore structure model was established in two ways: by the conventional model reconstruction method and by the built-in graphics function method of Comsol®. A pore scale flow simulation was conducted on the models reconstructed in the two different ways, using the creeping flow interface and the Brinkman equation interface, respectively. The results showed that the simulations of the two models agreed well in the distribution of velocity, pressure, Reynolds number, and so on, and verified the feasibility of the direct reconstruction method from graphic file to geometric model, which provides a new way to diversify the numerical study of micro seepage mechanisms.

  10. Modeling and simulation of gamma camera

    International Nuclear Information System (INIS)

    Singh, B.; Kataria, S.K.; Samuel, A.M.

    2002-08-01

    Simulation techniques play a vital role in the design of sophisticated instruments and also in the training of operating and maintenance staff. Gamma camera systems have been used for functional imaging in nuclear medicine. Functional images are derived from the external counting of a gamma-emitting radioactive tracer that, after introduction into the body, mimics the behavior of a native biochemical compound. The position-sensitive detector yields the coordinates of the gamma ray interaction with the detector, which are used to estimate the point of gamma ray emission within the tracer distribution space. This advanced imaging device is thus dependent on the performance of algorithms for coordinate computing, estimation of the point of emission, generation of the image and display of the image data. Contemporary systems also have protocols for quality control and clinical evaluation of imaging studies. Simulation of this processing leads to an understanding of the basic camera design problems. This report describes a PC-based package for the design and simulation of a gamma camera along with options for simulating data acquisition and quality control of imaging studies. Image display and data processing, the other options implemented in SIMCAM, will be described in separate reports (under preparation). Gamma camera modeling and simulation in SIMCAM has preset configurations of the design parameters for various sizes of crystal detector, with the option to pack the PMTs on a hexagonal or square lattice. Different algorithms for the computation of coordinates and spatial distortion removal are allowed, in addition to the simulation of the energy correction circuit. The user can simulate different static, dynamic, MUGA and SPECT studies. The acquired/simulated data are processed for quality control and clinical evaluation of the imaging studies. Results show that the program can be used to assess these performances. Also the variations in performance parameters can be assessed due to the induced
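
    As one concrete illustration of the coordinate-computing step mentioned above, the sketch below estimates the interaction position as the signal-weighted centroid of the PMT outputs (classic Anger-type arithmetic). The PMT layout and signal values are hypothetical, and this is not the SIMCAM algorithm itself.

      import numpy as np

      # Hypothetical PMT centre positions (cm) on a small square lattice and their signals.
      pmt_xy = np.array([(x, y) for x in (-5, 0, 5) for y in (-5, 0, 5)], dtype=float)
      signals = np.array([2., 5., 2., 5., 40., 5., 2., 5., 2.])   # counts seen by each PMT

      def anger_position(pmt_xy, signals):
          """Signal-weighted centroid of the PMT outputs; the sum doubles as an energy estimate."""
          total = signals.sum()
          x = (signals * pmt_xy[:, 0]).sum() / total
          y = (signals * pmt_xy[:, 1]).sum() / total
          return x, y, total

      print(anger_position(pmt_xy, signals))   # -> (0.0, 0.0, 68.0) for this symmetric event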

  11. Desktop Modeling and Simulation: Parsimonious, yet Effective Discrete-Event Simulation Analysis

    Science.gov (United States)

    Bradley, James R.

    2012-01-01

    This paper evaluates how quickly students can be trained to construct useful discrete-event simulation models using Excel. The typical supply chain used by many large national retailers is described, and an Excel-based simulation model of it is constructed. The set of programming and simulation skills required for development of that model is then determined; we conclude that six hours of training are required to teach the skills to MBA students. The simulation presented here contains all the fundamental functionality of a simulation model, and so our result holds for any discrete-event simulation model. We argue therefore that industry workers with the same technical skill set as students having completed one year in an MBA program can be quickly trained to construct simulation models. This result gives credence to the efficacy of Desktop Modeling and Simulation, whereby simulation analyses can be quickly developed, run, and analyzed with widely available software, namely Excel.
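
    For readers without the spreadsheet at hand, the fragment below reproduces the same fundamental discrete-event logic (random arrivals, one processing resource, waiting-time statistics) in a few lines of Python; the arrival and service rates are arbitrary, and this is not the retail supply-chain model described in the paper.

      import random

      random.seed(42)
      t_arrive, t_free, waits = 0.0, 0.0, []
      for _ in range(1000):                          # 1000 orders through one processing stage
          t_arrive += random.expovariate(1.0)        # inter-arrival times ~ Exp(mean 1.0)
          start = max(t_arrive, t_free)              # wait if the resource is still busy
          waits.append(start - t_arrive)
          t_free = start + random.expovariate(1.25)  # service times ~ Exp(mean 0.8)
      print("mean wait:", round(sum(waits) / len(waits), 3))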

  12. HIGH RESOLUTION LANDCOVER MODELLING WITH PLÉIADES IMAGERY AND DEM DATA IN SUPPORT OF FINE SCALE LANDSCAPE THERMAL MODELLING

    Directory of Open Access Journals (Sweden)

    M. Thompson

    2017-11-01

    Full Text Available In the evaluation of air-borne thermal infrared imaging sensors, the use of simulated spectral infrared scenery is a cost-effective way to provide input to the sensor. The benefit of simulated scenes includes control over parameters governing the spectral and related thermal behaviour of the terrain as well as atmospheric conditions. Such scenes need to have a high degree of radiometric and geometric accuracy, as well as high resolution to account for small objects having different spectral and associated thermal properties. In support of this, innovative use of tri-stereo, ultra-high resolution Pléiades satellite imagery is being made to generate highly detailed, small-scale quantitative terrain surface data to complement comparable optical data, in order to produce detailed urban and rural landscape datasets representative of different landscape features, within which spectrally defined characteristics can be subsequently matched to thermal signatures. Pléiades tri-stereo mode, acquired from the same orbit during the same pass, is particularly favourable for reaching the required metric accuracy because images are radiometrically and geometrically very homogeneous, which allows a very good radiometric matching for relief computation. The tri-stereo approach reduces noise and allows significantly enhanced relief description in landscapes where simple stereo imaging cannot see features, such as in dense urban areas or valley bottoms in steep, mountainous areas. This paper describes the datasets that have been generated for DENEL over the Hartebeespoort Dam region, west of Pretoria, South Africa. The final terrain datasets are generated by integrated modelling of both height and spectral surface characteristics within an object-based modelling environment. This approach provides an operational framework for rapid and highly accurate mapping of building and vegetation structure of wide areas, as is required in support of the evaluation of thermal

  13. Systematic simulations of modified gravity: chameleon models

    International Nuclear Information System (INIS)

    Brax, Philippe; Davis, Anne-Christine; Li, Baojiu; Winther, Hans A.; Zhao, Gong-Bo

    2013-01-01

    In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 hMpc^−1, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as a guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future.

  14. Systematic simulations of modified gravity: chameleon models

    Energy Technology Data Exchange (ETDEWEB)

    Brax, Philippe [Institut de Physique Theorique, CEA, IPhT, CNRS, URA 2306, F-91191Gif/Yvette Cedex (France); Davis, Anne-Christine [DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Li, Baojiu [Institute for Computational Cosmology, Department of Physics, Durham University, Durham DH1 3LE (United Kingdom); Winther, Hans A. [Institute of Theoretical Astrophysics, University of Oslo, 0315 Oslo (Norway); Zhao, Gong-Bo, E-mail: philippe.brax@cea.fr, E-mail: a.c.davis@damtp.cam.ac.uk, E-mail: baojiu.li@durham.ac.uk, E-mail: h.a.winther@astro.uio.no, E-mail: gong-bo.zhao@port.ac.uk [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom)

    2013-04-01

    In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 hMpc^−1, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as a guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future.

  15. Artificial neural network model to distinguish follicular adenoma from follicular carcinoma on fine needle aspiration of thyroid.

    Science.gov (United States)

    Savala, Rajiv; Dey, Pranab; Gupta, Nalini

    2018-03-01

    To distinguish follicular adenoma (FA) from follicular carcinoma (FC) of the thyroid on fine needle aspiration cytology (FNAC) is a challenging problem. In this article, we attempted to build an artificial neural network (ANN) model from the cytological and morphometric features of FNAC smears of the thyroid to distinguish FA from FC. The cytological features and morphometric analysis were assessed on the FNAC smears of histology-proven cases of FA (26) and FC (31). The cytological features were analysed semi-quantitatively by two independent observers (RS and PD). These data were used to build an ANN model to differentiate FA versus FC on FNAC material. The performance of this ANN model was assessed by analysing the confusion matrix and the receiver operating characteristic curve. There were 39 cases in the training set and 9 cases each in the validation and test sets. In the test group, the ANN model successfully distinguished all cases (9/9) of FA and FC. The area under the receiver operating characteristic curve was 1. The present ANN model diagnosed follicular adenoma and carcinoma cases on cytology smears without any error. In the future, this ANN model may be able to diagnose follicular adenoma and carcinoma cases on thyroid aspirates. This study has immense potential: it is an open-ended ANN model, and more parameters and more cases can be included to make it much stronger. © 2017 Wiley Periodicals, Inc.
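
    A minimal sketch, in Python with scikit-learn, of the kind of small feed-forward network described above, trained on hypothetical semi-quantitative and morphometric feature vectors; the features, network architecture and data of the actual study are not reproduced here.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      # Hypothetical feature matrix: 57 cases x 6 cytological/morphometric features.
      X = rng.normal(size=(57, 6))
      y = np.array([0] * 26 + [1] * 31)          # 0 = follicular adenoma, 1 = carcinoma
      X[y == 1] += 0.8                           # make the classes separable, for illustration only

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=9, stratify=y, random_state=1)
      clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=1).fit(X_tr, y_tr)
      print("test accuracy:", clf.score(X_te, y_te))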

  16. The performance of fine-grained and coarse-grained elastic network models and its dependence on various factors.

    Science.gov (United States)

    Na, Hyuntae; Song, Guang

    2015-07-01

    In a recent work we developed a method for deriving accurate simplified models that capture the essentials of conventional all-atom NMA and identified the two best simplified models: ssNMA and eANM, both of which have a significantly higher correlation with NMA in mean square fluctuation calculations than existing elastic network models such as ANM and ANMr2, a variant of ANM that uses the inverse of the squared separation distances as spring constants. Here, we examine closely how the performance of these elastic network models depends on various factors, namely, the presence of hydrogen atoms in the model, the quality of input structures, and the effect of crystal packing. The study reveals the strengths and limitations of these models. Our results indicate that ssNMA and eANM are the best fine-grained elastic network models, but their performance is sensitive to the quality of input structures. When the quality of input structures is poor, ANMr2 is a good alternative for computing mean-square fluctuations, while the ANM model is a good alternative for obtaining normal modes. © 2015 Wiley Periodicals, Inc.
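
    For concreteness, the Python sketch below computes residue mean-square fluctuations from a generic anisotropic network model with uniform spring constants inside a cutoff, built on toy C-alpha coordinates; it is not ssNMA, eANM or ANMr2, whose spring-constant rules differ.

      import numpy as np

      def anm_msf(coords, cutoff=15.0, gamma=1.0):
          """Mean-square fluctuations from a uniform-spring ANM Hessian (generic sketch)."""
          n = len(coords)
          H = np.zeros((3 * n, 3 * n))
          for i in range(n):
              for j in range(i + 1, n):
                  d = coords[j] - coords[i]
                  r2 = d @ d
                  if r2 > cutoff ** 2:
                      continue
                  block = -gamma * np.outer(d, d) / r2          # off-diagonal super-element
                  H[3*i:3*i+3, 3*j:3*j+3] = block
                  H[3*j:3*j+3, 3*i:3*i+3] = block
                  H[3*i:3*i+3, 3*i:3*i+3] -= block              # diagonal = -sum of off-diagonals
                  H[3*j:3*j+3, 3*j:3*j+3] -= block
          Hinv = np.linalg.pinv(H)                              # pseudo-inverse drops the zero modes
          return np.array([np.trace(Hinv[3*i:3*i+3, 3*i:3*i+3]) for i in range(n)])

      coords = np.random.default_rng(0).normal(scale=6.0, size=(40, 3))   # toy C-alpha trace
      print(anm_msf(coords)[:5])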

  17. [Modeling and Simulation of Spectral Polarimetric BRDF].

    Science.gov (United States)

    Ling, Jin-jiang; Li, Gang; Zhang, Ren-bin; Tang, Qian; Ye, Qiu

    2016-01-01

    Under polarized-light conditions, the reflection from an object's surface is affected by many factors, such as the refractive index, the surface roughness, and the angle of incidence. Because a rough surface exhibits different polarized reflection characteristics at different wavelengths, a spectral polarimetric BRDF based on Kirchhoff theory is proposed. The spectral model of the complex refractive index combines spectral models of the refractive index and the extinction coefficient, which were obtained using known complex refractive index values at different wavelengths. A spectral model of surface roughness is then derived from the classical surface roughness measuring method combined with the Fresnel reflection function. Taking the spectral models of refractive index and roughness into the BRDF model yields the spectral polarimetric BRDF model. Comparing the simulation results for a refractive index varying with wavelength at constant roughness, for refractive index and roughness both varying with wavelength, and for the original model of other papers shows that the spectral polarimetric BRDF model can represent the polarization characteristics of the surface accurately, and can provide a reliable basis for polarization remote sensing and other applications such as the classification of substances.

  18. Tokamak Simulation Code modeling of NSTX

    International Nuclear Information System (INIS)

    Jardin, S.C.; Kaye, S.; Menard, J.; Kessel, C.; Glasser, A.H.

    2000-01-01

    The Tokamak Simulation Code [TSC] is widely used for the design of new axisymmetric toroidal experiments. In particular, TSC was used extensively in the design of the National Spherical Torus eXperiment [NSTX]. The authors have now benchmarked TSC with initial NSTX results and find excellent agreement for plasma and vessel currents and magnetic flux loops when the experimental coil currents are used in the simulations. TSC has also been coupled with a ballooning stability code and with DCON to provide stability predictions for NSTX operation. TSC has also been used to model initial CHI experiments where a large poloidal voltage is applied to the NSTX vacuum vessel, causing a force-free current to appear in the plasma. This is a phenomenon that is similar to the plasma halo current that sometimes develops during a plasma disruption

  19. Simulations, evaluations and models. Vol. 1

    International Nuclear Information System (INIS)

    Brehmer, B.; Leplat, J.

    1992-01-01

    Papers presented at the Fourth MOHAWC (Models of Human Activities in Work Context) workshop. The general theme was simulations, evaluations and models. The emphasis was on time in relation to the modelling of human activities in modern, high-tech work. Such work often requires people to control dynamic systems, and the behaviour and misbehaviour of these systems in time is a principal focus of work in, for example, a modern process plant. The papers report on microworlds and on their innovative uses, both in the form of experiments and in the form of a new kind of use, that of testing a program which performs diagnostic reasoning. They present new perspectives on the problem of time in process control, showing the importance of considering the time scales of dynamic tasks, both in individual decision making and in distributed decision making, and they provide new formalisms, both for the representation of time and for reasoning involving time in diagnosis. (AB)

  20. Process model simulations of the divergence effect

    Science.gov (United States)

    Anchukaitis, K. J.; Evans, M. N.; D'Arrigo, R. D.; Smerdon, J. E.; Hughes, M. K.; Kaplan, A.; Vaganov, E. A.

    2007-12-01

    We explore the extent to which the Vaganov-Shashkin (VS) model of conifer tree-ring formation can explain evidence for changing relationships between climate and tree growth over recent decades. The VS model is driven by daily environmental forcing (temperature, soil moisture, and solar radiation), and simulates tree-ring growth cell-by-cell as a function of the most limiting environmental control. This simplified representation of tree physiology allows us to examine using a selection of case studies whether instances of divergence may be explained in terms of changes in limiting environmental dependencies or transient climate change. Identification of model-data differences permits further exploration of the effects of tree-ring standardization, atmospheric composition, and additional non-climatic factors.

  1. Radiation Modeling with Direct Simulation Monte Carlo

    Science.gov (United States)

    Carlson, Ann B.; Hassan, H. A.

    1991-01-01

    Improvements in the modeling of radiation in low density shock waves with direct simulation Monte Carlo (DSMC) are the subject of this study. A new scheme to determine the relaxation collision numbers for excitation of electronic states is proposed. This scheme attempts to move the DSMC programs toward a more detailed modeling of the physics and more reliance on available rate data. The new method is compared with the current modeling technique and both techniques are compared with available experimental data. The differences in the results are evaluated. The test case is based on experimental measurements from the AVCO-Everett Research Laboratory electric arc-driven shock tube of a normal shock wave in air at 10 km/s and 0.1 Torr. The new method agrees with the available data as well as the results from the earlier scheme and is more easily extrapolated to different flow conditions.

  2. Traffic flow dynamics data, models and simulation

    CERN Document Server

    Treiber, Martin

    2013-01-01

    This textbook provides a comprehensive and instructive coverage of vehicular traffic flow dynamics and modeling. It makes this fascinating interdisciplinary topic, which to date was only documented in parts by specialized monographs, accessible to a broad readership. Numerous figures and problems with solutions help the reader to quickly understand and practice the presented concepts. This book is targeted at students of physics and traffic engineering and, more generally, also at students and professionals in computer science, mathematics, and interdisciplinary topics. It also offers material for project work in programming and simulation at college and university level. The main part, after presenting different categories of traffic data, is devoted to a mathematical description of the dynamics of traffic flow, covering macroscopic models which describe traffic in terms of density, as well as microscopic many-particle models in which each particle corresponds to a vehicle and its driver. Focus chapters on ...

  3. Biomechanics trends in modeling and simulation

    CERN Document Server

    Ogden, Ray

    2017-01-01

    The book presents a state-of-the-art overview of biomechanical and mechanobiological modeling and simulation of soft biological tissues. Seven well-known scientists working in that particular field discuss topics such as biomolecules, networks and cells as well as failure, multi-scale, agent-based, bio-chemo-mechanical and finite element models appropriate for computational analysis. Applications include arteries, the heart, vascular stents and valve implants as well as adipose, brain, collagenous and engineered tissues. The mechanics of the whole cell and sub-cellular components as well as the extracellular matrix structure and mechanotransduction are described. In particular, the formation and remodeling of stress fibers, cytoskeletal contractility, cell adhesion and the mechanical regulation of fibroblast migration in healing myocardial infarcts are discussed. The essential ingredients of continuum mechanics are provided. Constitutive models of fiber-reinforced materials with an emphasis on arterial walls ...

  4. Qualitative simulation in formal process modelling

    International Nuclear Information System (INIS)

    Sivertsen, Elin R.

    1999-01-01

    In relation to several different research activities at the OECD Halden Reactor Project, the usefulness of formal process models has been identified. Being represented in some appropriate representation language, the purpose of these models is to model process plants and plant automatics in a unified way to allow verification and computer aided design of control strategies. The present report discusses qualitative simulation and the tool QSIM as one approach to formal process models. In particular, the report aims at investigating how recent improvements of the tool facilitate the use of the approach in areas like process system analysis, procedure verification, and control software safety analysis. An important long term goal is to provide a basis for using qualitative reasoning in combination with other techniques to facilitate the treatment of embedded programmable systems in Probabilistic Safety Analysis (PSA). This is motivated from the potential of such a combination in safety analysis based on models comprising both software, hardware, and operator. It is anticipated that the research results from this activity will benefit V and V in a wide variety of applications where formal process models can be utilized. Examples are operator procedures, intelligent decision support systems, and common model repositories (author) (ml)

  5. Multiscale pore structure and constitutive models of fine-grained rocks

    Science.gov (United States)

    Heath, J. E.; Dewers, T. A.; Shields, E. A.; Yoon, H.; Milliken, K. L.

    2017-12-01

    A foundational concept of continuum poromechanics is the representative elementary volume or REV: an amount of material large enough that pore- or grain-scale fluctuations in relevant properties are dissipated to a definable mean, but smaller than length scales of heterogeneity. We determine 2D-equivalent representative elementary areas (REAs) of pore areal fraction of three major types of mudrocks by applying multi-beam scanning electron microscopy (mSEM) to obtain terapixel image mosaics. Image analysis obtains pore areal fraction and pore size and shape as a function of progressively larger measurement areas. Using backscattering imaging and mSEM data, pores are identified by the components within which they occur, such as in organics or the clastic matrix. We correlate pore areal fraction with nano-indentation, micropillar compression, and axisymmetric testing at multiple length scales on a terrigenous-argillaceous mudrock sample. The combined data set is used to: investigate representative elementary volumes (and areas for the 2D images); determine if scale separation occurs; and determine if transport and mechanical properties at a given length scale can be statistically defined. Clear scale separation occurs between REAs and observable heterogeneity in two of the samples. A highly-laminated sample exhibits fine-scale heterogeneity and an overlapping in scales, in which case typical continuum assumptions on statistical variability may break down. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.

  6. Traffic flow dynamics. Data, models and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Treiber, Martin [Technische Univ. Dresden (Germany). Inst. fuer Wirtschaft und Verkehr; Kesting, Arne [TomTom Development Germany GmbH, Berlin (Germany)

    2013-07-01

    First comprehensive textbook of this fascinating interdisciplinary topic which explains advances in a way that it is easily accessible to engineering, physics and math students. Presents practical applications of traffic theory such as driving behavior, stability analysis, stop-and-go waves, and travel time estimation. Presents the topic in a novel and systematic way by addressing both microscopic and macroscopic models with a focus on traffic instabilities. Revised and extended edition of the German textbook ''Verkehrsdynamik und -simulation''. This textbook provides a comprehensive and instructive coverage of vehicular traffic flow dynamics and modeling. It makes this fascinating interdisciplinary topic, which to date was only documented in parts by specialized monographs, accessible to a broad readership. Numerous figures and problems with solutions help the reader to quickly understand and practice the presented concepts. This book is targeted at students of physics and traffic engineering and, more generally, also at students and professionals in computer science, mathematics, and interdisciplinary topics. It also offers material for project work in programming and simulation at college and university level. The main part, after presenting different categories of traffic data, is devoted to a mathematical description of the dynamics of traffic flow, covering macroscopic models which describe traffic in terms of density, as well as microscopic many-particle models in which each particle corresponds to a vehicle and its driver. Focus chapters on traffic instabilities and model calibration/validation present these topics in a novel and systematic way. Finally, the theoretical framework is shown at work in selected applications such as traffic-state and travel-time estimation, intelligent transportation systems, traffic operations management, and a detailed physics-based model for fuel consumption and emissions.

  7. Simulation of water movement and isoproturon behaviour in a heavy clay soil using the MACRO model

    Directory of Open Access Journals (Sweden)

    T. J. Besien

    1997-01-01

    Full Text Available In this paper, the dual-porosity MACRO model has been used to investigate methods of reducing leaching of isoproturon from a structured heavy clay soil. The MACRO model was applied to a pesticide leaching data-set generated from a plot scale experiment on a heavy clay soil at the Oxford University Farm, Wytham, England. The field drain was found to be the most important outflow from the plot in terms of pesticide removal. Therefore, this modelling exercise concentrated on simulating field drain flow. With calibration of field-saturated and micropore saturated hydraulic conductivity, the drain flow hydrographs were simulated during extended periods of above average rainfall, with both the hydrograph shape and peak flows agreeing well. Over the whole field season, the observed drain flow water budget was well simulated. However, the first and second drain flow events after pesticide application were not simulated satisfactorily. This is believed to be due to a poor simulation of evapotranspiration during a period of low rainfall around the pesticide application day. Apart from an initial rapid drop in the observed isoproturon soil residue, the model simulated isoproturon residues during the 100 days after pesticide application reasonably well. Finally, the calibrated model was used to show that changes in agricultural practice (deep ploughing, creating fine consolidated seed beds and organic matter applications could potentially reduce pesticide leaching to surface waters by up to 60%.

  8. Investigation of modeling and simulation on a PWR power conversion system with RELAP5

    International Nuclear Information System (INIS)

    Rui Gao; Yang Yanhua; Lin Meng; Yuan Minghao; Xie Zhengrui

    2007-01-01

    Based on the power conversion system of the nuclear and conventional islands of the Dayabay nuclear power station, this paper models the thermal-hydraulic systems of the PWR using the best-estimate program RELAP5. To simulate the full-scope power conversion system, not only the reactor coolant system (RCP) of the nuclear island, but also the main steam system (VVP), turbine steam and drain system (GPV), bypass system (GCT), feedwater system (FW), condensate extraction system (CEX), moisture separator reheater system (GSS), turbine-driven feedwater pump (APP), and low-pressure and high-pressure feedwater heater systems (ABP and AHP) of the conventional island are considered and modeled. A comparison between the simulated results and the actual plant data at full power shows a fine match for Dayabay, and demonstrates the feasibility of simulating the full-scope power conversion system of a PWR with RELAP5. (author)

  9. Modeling fine-scale geological heterogeneity-examples of sand lenses in tills

    DEFF Research Database (Denmark)

    Kessler, Timo Christian; Comunian, Alessandro; Oriani, Fabio

    2013-01-01

    Sand lenses at various spatial scales are recognized to add heterogeneity to glacial sediments. They have high hydraulic conductivities relative to the surrounding till matrix and may affect the advective transport of water and contaminants in clayey till settings. Sand lenses were investigated on till outcrops, producing binary images of geological cross-sections capturing the size, shape and distribution of individual features. Sand lenses occur as elongated, anisotropic geobodies that vary in size and extent. Besides, sand lenses show strong non-stationary patterns on section images that hamper subsequent simulation. Transition probability (TP) and multiple-point statistics (MPS) were employed to simulate sand lens heterogeneity. We used one cross-section to parameterize the spatial correlation and a second, parallel section as a reference: it allowed testing the quality...

  10. Modeling and simulation of storm surge on Staten Island to understand inundation mitigation strategies

    Science.gov (United States)

    Kress, Michael E.; Benimoff, Alan I.; Fritz, William J.; Thatcher, Cindy A.; Blanton, Brian O.; Dzedzits, Eugene

    2016-01-01

    Hurricane Sandy made landfall on October 29, 2012, near Brigantine, New Jersey, and had a transformative impact on Staten Island and the New York Metropolitan area. Of the 43 New York City fatalities, 23 occurred on Staten Island. The borough, with a population of approximately 500,000, experienced some of the most devastating impacts of the storm. Since Hurricane Sandy, protective dunes have been constructed on the southeast shore of Staten Island. ADCIRC+SWAN model simulations run on The City University of New York's Cray XE6M, housed at the College of Staten Island, using updated topographic data show that the coast of Staten Island is still susceptible to tidal surge similar to those generated by Hurricane Sandy. Sandy hindcast simulations of storm surges focusing on Staten Island are in good agreement with observed storm tide measurements. Model results calculated from fine-scaled and coarse-scaled computational grids demonstrate that finer grids better resolve small differences in the topography of critical hydraulic control structures, which affect storm surge inundation levels. The storm surge simulations, based on post-storm topography obtained from high-resolution lidar, provide much-needed information to understand Staten Island's changing vulnerability to storm surge inundation. The results of fine-scale storm surge simulations can be used to inform efforts to improve resiliency to future storms. For example, protective barriers contain planned gaps in the dunes to provide for beach access that may inadvertently increase the vulnerability of the area.

  11. Simulation Model of Mobile Detection Systems

    International Nuclear Information System (INIS)

    Edmunds, T.; Faissol, D.; Yao, Y.

    2009-01-01

    In this paper, we consider a mobile source that we attempt to detect with man-portable, vehicle-mounted or boat-mounted radiation detectors. The source is assumed to transit an area populated with these mobile detectors, and the objective is to detect the source before it reaches a perimeter. We describe a simulation model developed to estimate the probability that one of the mobile detectors will come into close proximity of the moving source and detect it. We illustrate with a maritime simulation example. Our simulation takes place in a 10 km by 5 km rectangular bay patrolled by boats equipped with 2-inch x 4-inch x 16-inch NaI detectors. Boats to be inspected enter the bay and randomly proceed to one of seven harbors on the shore. A source-bearing boat enters the mouth of the bay and proceeds to a pier on the opposite side. We wish to determine the probability that the source is detected and its range from the target when detected. Patrol boats select the nearest in-bound boat for inspection and initiate an intercept course. Once within an operational range for the detection system, a detection algorithm is started. If the patrol boat confirms the source is not present, it selects the next nearest boat for inspection. Each run of the simulation ends either when a patrol successfully detects a source or when the source reaches its target. Several statistical detection algorithms have been implemented in the simulation model. First, a simple k-sigma algorithm, which alarms when the counts in a time window exceed the mean background plus k times the standard deviation of the background, is available to the user. The time window used is optimized with respect to the signal-to-background ratio for that range and relative speed. Second, a sequential probability ratio test [Wald 1947] is available, and configured in this simulation with a target false positive probability of 0.001 and false negative probability of 0.1. This test is utilized when the mobile detector maintains
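
    A minimal sketch in Python of the k-sigma alarm rule described above, with hypothetical background statistics and window counts.

      import math

      def k_sigma_alarm(window_counts, bkg_rate_cps, window_s, k=4.0):
          """Alarm if the counts in the window exceed the mean background plus k standard deviations.
          A Poisson background is assumed: mean = rate * window, sigma = sqrt(mean)."""
          mu = bkg_rate_cps * window_s
          return window_counts > mu + k * math.sqrt(mu)

      # Hypothetical 2-second windows against a 300 counts-per-second NaI background.
      for counts in (610, 640, 705):
          print(counts, k_sigma_alarm(counts, bkg_rate_cps=300.0, window_s=2.0))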

  12. Simulation Model of Mobile Detection Systems

    Energy Technology Data Exchange (ETDEWEB)

    Edmunds, T; Faissol, D; Yao, Y

    2009-01-27

    In this paper, we consider a mobile source that we attempt to detect with man-portable, vehicle-mounted or boat-mounted radiation detectors. The source is assumed to transit an area populated with these mobile detectors, and the objective is to detect the source before it reaches a perimeter. We describe a simulation model developed to estimate the probability that one of the mobile detectors will come into close proximity of the moving source and detect it. We illustrate with a maritime simulation example. Our simulation takes place in a 10 km by 5 km rectangular bay patrolled by boats equipped with 2-inch x 4-inch x 16-inch NaI detectors. Boats to be inspected enter the bay and randomly proceed to one of seven harbors on the shore. A source-bearing boat enters the mouth of the bay and proceeds to a pier on the opposite side. We wish to determine the probability that the source is detected and its range from the target when detected. Patrol boats select the nearest in-bound boat for inspection and initiate an intercept course. Once within an operational range for the detection system, a detection algorithm is started. If the patrol boat confirms the source is not present, it selects the next nearest boat for inspection. Each run of the simulation ends either when a patrol successfully detects a source or when the source reaches its target. Several statistical detection algorithms have been implemented in the simulation model. First, a simple k-sigma algorithm, which alarms when the counts in a time window exceed the mean background plus k times the standard deviation of the background, is available to the user. The time window used is optimized with respect to the signal-to-background ratio for that range and relative speed. Second, a sequential probability ratio test [Wald 1947] is available, and configured in this simulation with a target false positive probability of 0.001 and false negative probability of 0.1. This test is utilized when the mobile detector maintains

  13. Collisional radiative model for Ar-O2 mixture plasma with fully relativistic fine structure cross sections

    Science.gov (United States)

    Priti, Gangwar, Reetesh Kumar; Srivastava, Rajesh

    2018-04-01

    A collisional radiative (C-R) model has been developed to diagnose the rf generated Ar-O2 (0%-5%) mixture plasma at low temperatures. Since in such plasmas the most dominant process is an electron impact excitation process, we considered several electron impact fine structure transitions in an argon atom from its ground as well as excited states. The cross-sections for these transitions have been obtained using the reliable fully relativistic distorted wave theory. Processes which account for the coupling of argon with the oxygen molecules have been further added to the model. We couple our model to the optical spectroscopic measurements reported by Jogi et al. [J. Phys. D: Appl. Phys. 47, 335206 (2014)]. The plasma parameters, viz. the electron density (ne) and the electron temperature (Te) as a function of O2 concentration, have been obtained using thirteen intense emission lines out of the 3p^5 4p → 3p^5 4s transitions observed in their spectroscopic measurements. It is found that as the content of O2 in Ar increases from 0%-5%, Te increases in the range 0.85-1.7 eV, while the electron density decreases from 2.76 × 10^12 to 2.34 × 10^11 cm^-3. The Ar-3p^5 4s (1s_i) fine-structure level populations at our extracted plasma parameters are found to be in very good agreement with those obtained from the measurements. Furthermore, we have estimated the individual contributions coming from the ground state, 1s_i manifolds and cascade contributions to the population of the radiating Ar-3p^5 4p (2p_i) states as a function of a trace amount of O2. Such information is very useful to understand the importance of various processes occurring in the plasma.

  14. CASTOR detector. Model, objectives and simulated performance

    International Nuclear Information System (INIS)

    Angelis, A. L. S.; Mavromanolakis, G.; Panagiotou, A. D.; Aslanoglou, X.; Nicolis, N.; Lobanov, M.; Erine, S.; Kharlov, Y. V.; Bogolyubsky, M. Y.; Kurepin, A. B.; Chileev, K.; Wlodarczyk, Z.

    2001-01-01

    A phenomenological model is presented describing the formation and evolution of a Centauro fireball in the baryon-rich region in nucleus-nucleus interactions in the upper atmosphere and at the LHC. The small particle multiplicity and the imbalance of electromagnetic and hadronic content characterizing a Centauro event, and also the strongly penetrating particles (assumed to be strangelets) frequently accompanying them, can be naturally explained. The CASTOR calorimeter is described, a sub-detector of the ALICE experiment dedicated to the search for Centauros in the very forward, baryon-rich region of central Pb+Pb collisions at the LHC. The basic characteristics and simulated performance of the calorimeter are presented

  15. Modelling and simulation of railway cable systems

    Energy Technology Data Exchange (ETDEWEB)

    Teichelmann, G.; Schaub, M.; Simeon, B. [Technische Univ. Muenchen, Garching (Germany). Zentrum Mathematik M2

    2005-12-15

    Mathematical models and numerical methods for the computation of both static equilibria and dynamic oscillations of railroad catenaries are derived and analyzed. These cable systems form a complex network of string and beam elements and lead to coupled partial differential equations in space and time where constraints and corresponding Lagrange multipliers express the interaction between carrier, contact wire, and pantograph head. For computing static equilibria, three different algorithms are presented and compared, while the dynamic case is treated by a finite element method in space, combined with stabilized time integration of the resulting differential algebraic system. Simulation examples based on reference data from industry illustrate the potential of such computational tools. (orig.)

  16. An Agent-Based Monetary Production Simulation Model

    DEFF Research Database (Denmark)

    Bruun, Charlotte

    2006-01-01

    An Agent-Based Simulation Model programmed in Objective Borland Pascal. Program and source code are downloadable.

  17. Simulation model for port shunting yards

    Science.gov (United States)

    Rusca, A.; Popa, M.; Rosca, E.; Rosca, M.; Dragu, V.; Rusca, F.

    2016-08-01

    Sea ports are important nodes in the supply chain, joining two high-capacity transport modes: rail and maritime transport. The huge cargo flows transiting a port require high-capacity constructions and installations such as berths, large-capacity cranes and shunting yards. However, the specificity of port shunting yards raises several problems, such as: limited access, since these are terminus stations of the rail network; the input/output of large transit flows of cargo relative to the scarcity of ship departures and arrivals; as well as limited land availability for implementing solutions to serve these flows. It is necessary to identify technological solutions that lead to an answer to these problems. The paper proposes a simulation model, developed with the ARENA computer simulation software, suitable for shunting yards which serve sea ports with access to the rail network. It investigates the principal aspects of shunting yards and adequate measures to increase their transit capacity. The operation capacity of the shunting yard sub-system is assessed taking into consideration the required operating standards, and measures of performance (e.g. waiting time for freight wagons, number of railway lines in the station, storage area, etc.) of the railway station are computed. The conclusions and results drawn from the simulation help transport and logistics specialists to test proposals for improving port management.
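
    A minimal sketch in Python of how one such measure of performance (waiting time of wagon groups for a limited number of shunting lines) can be estimated by simulation; the arrival rate, handling times and number of lines are illustrative assumptions, not the ARENA model of the paper.

      import heapq, random

      random.seed(7)
      NUM_LINES = 4                                  # shunting lines available
      line_free = [0.0] * NUM_LINES                  # time at which each line becomes free
      heapq.heapify(line_free)

      t, waits = 0.0, []
      for _ in range(500):                           # 500 wagon groups arriving from the port
          t += random.expovariate(1 / 30.0)          # mean 30 min between arrivals
          free_at = heapq.heappop(line_free)         # earliest available line
          start = max(t, free_at)                    # wait if all lines are occupied
          waits.append(start - t)
          heapq.heappush(line_free, start + random.uniform(60.0, 120.0))   # shunting time
      print("mean wait [min]:", round(sum(waits) / len(waits), 1))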

  18. Traffic simulation based ship collision probability modeling

    Energy Technology Data Exchange (ETDEWEB)

    Goerlandt, Floris, E-mail: floris.goerlandt@tkk.f [Aalto University, School of Science and Technology, Department of Applied Mechanics, Marine Technology, P.O. Box 15300, FI-00076 AALTO, Espoo (Finland); Kujala, Pentti [Aalto University, School of Science and Technology, Department of Applied Mechanics, Marine Technology, P.O. Box 15300, FI-00076 AALTO, Espoo (Finland)

    2011-01-15

    Maritime traffic poses various risks in terms of human, environmental and economic loss. In a risk analysis of ship collisions, it is important to get a reasonable estimate of the probability of such accidents and the consequences they lead to. In this paper, a method is proposed to assess the probability of vessels colliding with each other. The method is capable of determining the expected number of accidents, the locations where and the times when they are most likely to occur, while providing input for models concerned with the expected consequences. At the basis of the collision detection algorithm lies an extensive time-domain micro-simulation of vessel traffic in the given area. The Monte Carlo simulation technique is applied to obtain a meaningful prediction of the relevant factors of the collision events. Data obtained through the Automatic Identification System are analyzed in detail to obtain realistic input data for the traffic simulation: traffic routes, the number of vessels on each route, the ship departure times, main dimensions and sailing speed. The results obtained by the proposed method for the studied case of the Gulf of Finland are presented, showing reasonable agreement with registered accident and near-miss data.
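
    A minimal sketch in Python of the core of such a time-domain Monte Carlo traffic simulation: vessels are sampled on two crossing routes, advanced in time, and pairs that come closer than a critical distance are counted as collision candidates. The routes, speeds and thresholds are illustrative, not the Gulf of Finland input data.

      import math, random

      def run_once(hours=12, dt_s=60, d_crit_m=100.0):
          """One simulated period of traffic; returns the number of vessel pairs that
          ever came within the critical distance of each other."""
          ships = []
          for _ in range(20):                                   # eastbound route (x grows)
              ships.append([-random.uniform(0, 40_000), random.gauss(0, 300),
                            random.uniform(4, 8), 0.0])         # x, y, vx, vy (m, m/s)
          for _ in range(20):                                   # northbound route (y grows)
              ships.append([random.gauss(20_000, 300), -random.uniform(0, 40_000),
                            0.0, random.uniform(4, 8)])
          close_pairs = set()
          for _ in range(int(hours * 3600 / dt_s)):
              for s in ships:
                  s[0] += s[2] * dt_s
                  s[1] += s[3] * dt_s
              for i in range(len(ships)):
                  for j in range(i + 1, len(ships)):
                      if math.hypot(ships[i][0] - ships[j][0],
                                    ships[i][1] - ships[j][1]) < d_crit_m:
                          close_pairs.add((i, j))
          return len(close_pairs)

      random.seed(3)
      runs = [run_once() for _ in range(10)]
      print("mean collision candidates per run:", sum(runs) / len(runs))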

  19. Modeling VOC transport in simulated waste drums

    International Nuclear Information System (INIS)

    Liekhus, K.J.; Gresham, G.L.; Peterson, E.S.; Rae, C.; Hotz, N.J.; Connolly, M.J.

    1993-06-01

    A volatile organic compound (VOC) transport model has been developed to describe unsteady-state VOC permeation and diffusion within a waste drum. Model equations account for three primary mechanisms for VOC transport from a void volume within the drum. These mechanisms are VOC permeation across a polymer boundary, VOC diffusion across an opening in a volume boundary, and VOC solubilization in a polymer boundary. A series of lab-scale experiments was performed in which the VOC concentration was measured in simulated waste drums under different conditions. A lab-scale simulated waste drum consisted of a sized-down 55-gal metal drum containing a modified rigid polyethylene drum liner. Four polyethylene bags were sealed inside a large polyethylene bag, supported by a wire cage, and placed inside the drum liner. The small bags were filled with a VOC-air gas mixture and the VOC concentration was measured throughout the drum over a period of time. Test variables included the type of VOC-air gas mixtures introduced into the small bags, the small bag closure type, and the presence or absence of a variable external heat source. Model results were calculated for those trials where the VOC permeability had been measured. Permeabilities for five VOCs [methylene chloride, 1,1,2-trichloro-1,2,2-trifluoroethane (Freon-113), 1,1,1-trichloroethane, carbon tetrachloride, and trichloroethylene] were measured across a polyethylene bag. Comparison of model and experimental results of VOC concentration as a function of time indicates that the model accurately accounts for significant VOC transport mechanisms in a lab-scale waste drum.
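
    A minimal sketch in Python of the kind of two-compartment balance implied by the first two mechanisms (permeation across the bag film and diffusion through the closure opening), lumped into a single exchange coefficient; the volumes and conductances are arbitrary values, not the measured permeabilities.

      # Hypothetical parameters
      V1, V2 = 0.01, 0.10             # inner-bag and liner headspace volumes [m^3]
      perm = 2e-7                     # film permeation conductance [m^3/s]
      diff = 5e-7                     # conductance of the closure opening [m^3/s]
      k = perm + diff                 # total volumetric exchange coefficient [m^3/s]

      def step(c1, c2, dt):
          """Explicit Euler update of the two VOC concentrations [g/m^3]."""
          flow = k * (c1 - c2)        # net VOC flow from bag to liner [g/s]
          return c1 - flow * dt / V1, c2 + flow * dt / V2

      c1, c2, dt = 100.0, 0.0, 60.0   # VOC starts only inside the small bag; 1-minute steps
      history = []
      for hour in range(24 * 14):                     # two weeks
          for _ in range(60):
              c1, c2 = step(c1, c2, dt)
          history.append((hour + 1, round(c1, 2), round(c2, 2)))
      print(history[::48][:5])                        # sample every two days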

  20. 3D Core Model for simulation of nuclear power plants: Simulation requirements, model features, and validation

    International Nuclear Information System (INIS)

    Zerbino, H.

    1999-01-01

    In 1994-1996, Thomson Training and Simulation (TT and S) carried out the D50 Project, which involved the design and construction of optimized replica simulators for one Dutch and three German Nuclear Power Plants. It was recognized early on that the faithful reproduction of the Siemens reactor control and protection systems would impose extremely stringent demands on the simulation models, particularly the Core physics and the RCS thermohydraulics. The quality of the models, and their thorough validation, were thus essential. The present paper describes the main features of the fully 3D Core model implemented by TT and S, and its extensive validation campaign, which was defined in extremely positive collaboration with the Customer and the Core Data suppliers. (author)

  1. A theory of fine structure image models with an application to detection and classification of dementia.

    Science.gov (United States)

    O'Neill, William; Penn, Richard; Werner, Michael; Thomas, Justin

    2015-06-01

    Estimation of stochastic process models from data is a common application of time series analysis methods. Such system identification processes are often cast as hypothesis testing exercises whose intent is to estimate model parameters and test them for statistical significance. Ordinary least squares (OLS) regression and the Levenberg-Marquardt algorithm (LMA) have proven invaluable computational tools for models being described by non-homogeneous, linear, stationary, ordinary differential equations. In this paper we extend stochastic model identification to linear, stationary, partial differential equations in two independent variables (2D) and show that OLS and LMA apply equally well to these systems. The method employs an original nonparametric statistic as a test for the significance of estimated parameters. We show gray scale and color images are special cases of 2D systems satisfying a particular autoregressive partial difference equation which estimates an analogous partial differential equation. Several applications to medical image modeling and classification illustrate the method by correctly classifying demented and normal OLS models of axial magnetic resonance brain scans according to subject Mini Mental State Exam (MMSE) scores. Comparison with 13 image classifiers from the literature indicates our classifier is at least 14 times faster than any of them and has a classification accuracy better than all but one. Our modeling method applies to any linear, stationary, partial differential equation and the method is readily extended to 3D whole-organ systems. Further, in addition to being a robust image classifier, estimated image models offer insights into which parameters carry the most diagnostic image information and thereby suggest finer divisions could be made within a class. Image models can be estimated in milliseconds which translate to whole-organ models in seconds; such runtimes could make real-time medicine and surgery modeling possible.
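
    A minimal sketch in Python of the kind of two-dimensional autoregressive estimation described above: each pixel is regressed by ordinary least squares on its west, north and north-west neighbours, and the coefficient vector serves as a compact image model. The three-neighbour stencil and the synthetic test image are illustrative choices, not the model order used in the paper.

      import numpy as np

      def fit_ar2d(img):
          """OLS fit of I[i,j] ~ a*I[i,j-1] + b*I[i-1,j] + c*I[i-1,j-1] + d."""
          y = img[1:, 1:].ravel()
          X = np.column_stack([img[1:, :-1].ravel(),     # west neighbour
                               img[:-1, 1:].ravel(),     # north neighbour
                               img[:-1, :-1].ravel(),    # north-west neighbour
                               np.ones(y.size)])
          coef, *_ = np.linalg.lstsq(X, y, rcond=None)
          return coef                                    # (a, b, c, d): the image "model"

      rng = np.random.default_rng(0)
      img = rng.normal(size=(64, 64))
      for _ in range(3):                                 # smooth to create spatial correlation
          img = 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0)
                        + np.roll(img, 1, 1) + np.roll(img, -1, 1))
      print(fit_ar2d(img).round(3))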

  2. Modeling and visual simulation of Microalgae photobioreactor

    Science.gov (United States)

    Zhao, Ming; Hou, Dapeng; Hu, Dawei

    Microalgae are nutritious, autotrophic plants with high photosynthetic efficiency, widely distributed on land and in the sea. They can be used extensively in medicine, food, aerospace, biotechnology, environmental protection and other fields. Photobioreactors are important equipment mainly used to cultivate microalgae massively and at high density. In this paper, based on a mathematical model of microalgae grown under different light intensities, a three-dimensional visualization model was built and implemented in 3ds max, Virtools and other three-dimensional software. Microalgae are photosynthetic organisms that can efficiently produce oxygen and absorb carbon dioxide. The goal of the visual simulation is to display their growth and its impact on oxygen and carbon dioxide intuitively. In this paper, different temperatures and light intensities were selected to control the photobioreactor, and the dynamic change of microalgal biomass, oxygen and carbon dioxide was observed, with the aim of providing visualization support for microalgal and photobioreactor research.
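
    Purely as an illustration of the kind of light-dependent growth model referred to above, the Python sketch below couples Monod-type light limitation with logistic biomass growth and ties oxygen production and carbon dioxide uptake to the biomass increment; the functional form and every coefficient are assumptions, not the model behind the visualization.

      def simulate_photobioreactor(I=200.0, days=10, dt=0.01,
                                   mu_max=1.2, K_I=80.0, X_max=5.0):
          """Light-limited logistic growth of microalgal biomass X [g/L] under irradiance I;
          O2 is produced and CO2 consumed in proportion to the biomass increment."""
          X, O2, CO2 = 0.1, 0.0, 10.0
          mu = mu_max * I / (K_I + I)                # Monod-type light limitation [1/day]
          for _ in range(int(days / dt)):
              dX = mu * X * (1.0 - X / X_max) * dt
              X += dX
              O2 += 1.3 * dX                         # assumed photosynthetic yield
              CO2 = max(0.0, CO2 - 1.8 * dX)         # assumed carbon uptake
          return round(X, 3), round(O2, 3), round(CO2, 3)

      for irradiance in (50.0, 200.0, 600.0):
          print(irradiance, simulate_photobioreactor(I=irradiance))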

  3. Efficient Turbulence Modeling for CFD Wake Simulations

    DEFF Research Database (Denmark)

    van der Laan, Paul

    Wind turbine wakes can cause 10-20% annual energy losses in wind farms, and wake turbulence can decrease the lifetime of wind turbine blades. One way of estimating these effects is the use of computational fluid dynamics (CFD) to simulate wind turbine wakes in the atmospheric boundary layer. Since...... this flow is in the high Reynolds number regime, it is mainly dictated by turbulence. As a result, the turbulence modeling in CFD dominates the wake characteristics, especially in Reynolds-averaged Navier-Stokes (RANS). The present work is dedicated to study and develop RANS-based turbulence models...... verified with a grid dependency study. With respect to the standard k-ε EVM, the k-ε-fp EVM compares better with measurements of the velocity deficit, especially in the near wake, which translates to improved power deficits of the first wind turbines in a row. When the CFD methodology is applied to a large...

  4. Molecular models and simulations of layered materials

    International Nuclear Information System (INIS)

    Kalinichev, Andrey G.; Cygan, Randall Timothy; Heinz, Hendrik; Greathouse, Jeffery A.

    2008-01-01

    The micro- to nano-sized nature of layered materials, particularly characteristic of naturally occurring clay minerals, limits our ability to fully interrogate their atomic dispositions and crystal structures. The low symmetry, multicomponent compositions, defects, and disorder phenomena of clays and related phases necessitate the use of molecular models and modern simulation methods. Computational chemistry tools based on classical force fields and quantum-chemical methods of electronic structure calculations provide a practical approach to evaluate structure and dynamics of the materials on an atomic scale. Combined with classical energy minimization, molecular dynamics, and Monte Carlo techniques, quantum methods provide accurate models of layered materials such as clay minerals, layered double hydroxides, and clay-polymer nanocomposites

  5. A non-linear σ-model related to the fine structure of strings

    International Nuclear Information System (INIS)

    Abdalla, E.; Abdalla, M.C.B.; Lima Santos, A.

    1986-07-01

    It is shown that a σ-model related to the strings via Polyakov's construction is classically (but not quantum mechanically) integrable. When fermions are suitably introduced the exact on shell solution is discussed. In the locally supersymmetric case the 1/D expansion is used to integrate out the σ-model fields leaving an effective action for graviton and gravitino. (author)

  6. At the biological modeling and simulation frontier.

    Science.gov (United States)

    Hunt, C Anthony; Ropella, Glen E P; Lam, Tai Ning; Tang, Jonathan; Kim, Sean H J; Engelberg, Jesse A; Sheikh-Bahaei, Shahab

    2009-11-01

    We provide a rationale for and describe examples of synthetic modeling and simulation (M&S) of biological systems. We explain how synthetic methods are distinct from familiar inductive methods. Synthetic M&S is a means to better understand the mechanisms that generate normal and disease-related phenomena observed in research, and how compounds of interest interact with them to alter phenomena. An objective is to build better, working hypotheses of plausible mechanisms. A synthetic model is an extant hypothesis: execution produces an observable mechanism and phenomena. Mobile objects representing compounds carry information enabling components to distinguish between them and react accordingly when different compounds are studied simultaneously. We argue that the familiar inductive approaches contribute to the general inefficiencies being experienced by pharmaceutical R&D, and that use of synthetic approaches accelerates and improves R&D decision-making and thus the drug development process. A reason is that synthetic models encourage and facilitate abductive scientific reasoning, a primary means of knowledge creation and creative cognition. When synthetic models are executed, we observe different aspects of knowledge in action from different perspectives. These models can be tuned to reflect differences in experimental conditions and individuals, making translational research more concrete while moving us closer to personalized medicine.

  7. Plasma simulation studies using multilevel physics models

    International Nuclear Information System (INIS)

    Park, W.; Belova, E.V.; Fu, G.Y.; Tang, X.Z.; Strauss, H.R.; Sugiyama, L.E.

    1999-01-01

    The question of how to proceed toward ever more realistic plasma simulation studies using ever increasing computing power is addressed. The answer presented here is the M3D (Multilevel 3D) project, which has developed a code package with a hierarchy of physics levels that resolve increasingly complete subsets of phase-spaces and are thus increasingly more realistic. The rationale for the multilevel physics models is given. Each physics level is described and examples of its application are given. The existing physics levels are fluid models (3D configuration space), namely magnetohydrodynamic (MHD) and two-fluids; and hybrid models, namely gyrokinetic-energetic-particle/MHD (5D energetic particle phase-space), gyrokinetic-particle-ion/fluid-electron (5D ion phase-space), and full-kinetic-particle-ion/fluid-electron level (6D ion phase-space). Resolving electron phase-space (5D or 6D) remains a future project. Phase-space-fluid models are not used in favor of δf particle models. A practical and accurate nonlinear fluid closure for noncollisional plasmas seems not likely in the near future. copyright 1999 American Institute of Physics

  8. Plasma simulation studies using multilevel physics models

    International Nuclear Information System (INIS)

    Park, W.; Belova, E.V.; Fu, G.Y.

    2000-01-01

    The question of how to proceed toward ever more realistic plasma simulation studies using ever increasing computing power is addressed. The answer presented here is the M3D (Multilevel 3D) project, which has developed a code package with a hierarchy of physics levels that resolve increasingly complete subsets of phase-spaces and are thus increasingly more realistic. The rationale for the multilevel physics models is given. Each physics level is described and examples of its application are given. The existing physics levels are fluid models (3D configuration space), namely magnetohydrodynamic (MHD) and two-fluids; and hybrid models, namely gyrokinetic-energetic-particle/MHD (5D energetic particle phase-space), gyrokinetic-particle-ion/fluid-electron (5D ion phase-space), and full-kinetic-particle-ion/fluid-electron level (6D ion phase-space). Resolving electron phase-space (5D or 6D) remains a future project. Phase-space-fluid models are not used in favor of delta f particle models. A practical and accurate nonlinear fluid closure for noncollisional plasmas seems not likely in the near future

  9. Stabilising the global greenhouse. A simulation model

    International Nuclear Information System (INIS)

    Michaelis, P.

    1993-01-01

    This paper investigates the economic implications of a comprehensive approach to greenhouse policies that strives to stabilise the atmospheric concentration of greenhouse gases at an ecologically determined threshold level. In a theoretical optimisation model, conditions for an efficient allocation of abatement effort among pollutants and over time are derived. The model is empirically specified and adapted to a dynamic GAMS algorithm. By various simulation runs for the period 1990 to 2110, the economics of greenhouse gas accumulation are explored. In particular, the long-run costs associated with the above stabilisation target are evaluated for three different policy scenarios: i) a comprehensive approach that covers all major greenhouse gases simultaneously, ii) a piecemeal approach that is limited to reducing CO2 emissions, and iii) a ten-year moratorium that postpones abatement effort until new scientific evidence on the greenhouse effect becomes available. Comparing the simulation results suggests that a piecemeal approach would considerably increase total cost, whereas a ten-year moratorium might be reasonable even if the probability of 'good news' is comparatively small. (orig.)

  10. Modeling lift operations with SAS® Simulation Studio

    Science.gov (United States)

    Kar, Leow Soo

    2016-10-01

    Lifts or elevators are an essential part of multistorey buildings, providing vertical transportation for their occupants. In large and high-rise apartment buildings the occupants are permanent, while in buildings like hospitals or office blocks the occupants are temporary visitors or users of the buildings. They come in to work or to visit, and thus the population of such buildings is much higher than that of residential apartments. It is common these days for large office blocks or hospitals to have at least 8 to 10 lifts serving their population. In order to optimize the level of service performance, different transportation schemes are devised to control the lift operations. For example, one lift may be assigned to solely service the even floors and another solely the odd floors, etc. In this paper, a basic lift system is modelled using SAS Simulation Studio to study the effect of factors such as the number of floors, capacity of the lift car, arrival rate and exit rate of passengers at each floor, and peak and off-peak periods on the system performance. The simulation is applied to a real lift operation in Sunway College's North Building to validate the model.

  11. Simulation as a vehicle for enhancing collaborative practice models.

    Science.gov (United States)

    Jeffries, Pamela R; McNelis, Angela M; Wheeler, Corinne A

    2008-12-01

    Clinical simulation used in a collaborative practice approach is a powerful tool to prepare health care providers for shared responsibility for patient care. Clinical simulations are being used increasingly in professional curricula to prepare providers for quality practice. Little is known, however, about how these simulations can be used to foster collaborative practice across disciplines. This article provides an overview of what simulation is, what collaborative practice models are, and how to set up a model using simulations. An example of a collaborative practice model is presented, and nursing implications of using a collaborative practice model in simulations are discussed.

  12. Modeling and numerical simulations of the influenced Sznajd model

    Science.gov (United States)

    Karan, Farshad Salimi Naneh; Srinivasan, Aravinda Ramakrishnan; Chakraborty, Subhadeep

    2017-08-01

    This paper investigates the effects of independent nonconformists, or influencers, on the behavioral dynamics of a population of agents interacting with each other based on the Sznajd model. The system is modeled on a complete graph using the master equation. The acquired equation has been solved numerically. The accuracy of the mathematical model and its corresponding assumptions has been validated by numerical simulations. Regions of initial magnetization have been found from which the system converges to one of two unique steady-state PDFs, depending on the distribution of influencers. The scaling property and entropy of the stationary system in the presence of varying levels of influence have been presented and discussed.
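
    The following minimal Python sketch illustrates the general idea of a Sznajd-type opinion model on a complete graph with a fixed fraction of influencers who never change opinion; the update rule, parameters and readout are illustrative assumptions rather than the paper's master-equation formulation.

        # Illustrative agent-based Monte Carlo of a Sznajd-type model on a complete
        # graph with a fixed fraction of "influencers" whose opinions never change.
        import numpy as np

        def sznajd_complete_graph(n=1000, frac_influencers=0.05, init_up=0.5,
                                  steps=200_000, seed=1):
            rng = np.random.default_rng(seed)
            spins = np.where(rng.random(n) < init_up, 1, -1)
            fixed = rng.random(n) < frac_influencers       # influencers keep their opinion
            for _ in range(steps):
                i, j, k = rng.choice(n, size=3, replace=False)
                if spins[i] == spins[j] and not fixed[k]:  # an agreeing pair convinces k
                    spins[k] = spins[i]
            return spins.mean()                            # final magnetization

        print(sznajd_complete_graph())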

  13. Hidden Fine Tuning In The Quark Sector Of Little Higgs Models

    CERN Document Server

    Grinstein, Benjamin; Uttayarat, Patipan

    2010-01-01

    In Little Higgs models a collective symmetry prevents the Higgs from acquiring a quadratically divergent mass at one loop. We have previously shown that the couplings in the Littlest Higgs model introduced to give the top quark a mass do not naturally respect the collective symmetry. We extend our previous work, showing that the problem is generic: it arises from the fact that the would-be collective symmetry of any one top quark mass term is broken by gauge interactions.

  14. Dispersion modeling by kinematic simulation: Cloud dispersion model

    International Nuclear Information System (INIS)

    Fung, J C H; Perkins, R J

    2008-01-01

    A new technique has been developed to compute mean and fluctuating concentrations in complex turbulent flows (tidal currents near a coast and the deep ocean). An initial distribution of material is discretized into many small clouds which are advected by a combination of the mean flow and large-scale turbulence. The turbulence can be simulated either by kinematic simulation (KS) or by direct numerical simulation. The clouds also diffuse relative to their centroids; the statistics for this are obtained from a separate calculation of the growth of individual clouds in small-scale turbulence, generated by KS. The ensemble of discrete clouds is periodically re-discretized, to limit the size of the small clouds and prevent overlapping. The model is illustrated with simulations of dispersion in uniform flow, and the results are compared with analytic, steady-state solutions. The aim of this study is to understand how pollutants disperse in a turbulent flow through a numerical simulation of fluid particle motion in a random flow field generated by Fourier modes. Although this homogeneous turbulence is a rather 'simple' flow, it represents a building block toward understanding pollutant dispersion in more complex flows. The results presented here are preliminary in nature, but we expect that similar qualitative results should be observed in a genuine turbulent flow.
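
    To make the kinematic simulation idea concrete, the sketch below builds a 2D incompressible velocity field from a small number of random Fourier modes and advects a cloud of tracer particles with it; the mode spectrum, amplitudes and Euler time stepping are simplifying assumptions, not the authors' configuration.

        # Sketch of 2D kinematic simulation (KS): a synthetic incompressible velocity
        # field built from random Fourier modes is used to advect tracer particles.
        import numpy as np

        rng = np.random.default_rng(2)
        n_modes = 32
        k = rng.normal(size=(n_modes, 2))
        k /= np.linalg.norm(k, axis=1, keepdims=True)
        k *= rng.uniform(1.0, 10.0, size=(n_modes, 1))          # random wavenumbers
        amp = rng.normal(size=n_modes) / np.sqrt(n_modes)
        phase = rng.uniform(0, 2 * np.pi, size=n_modes)
        perp = np.column_stack([-k[:, 1], k[:, 0]])              # direction normal to k
        perp /= np.linalg.norm(perp, axis=1, keepdims=True)      # guarantees zero divergence

        def velocity(x):
            """Incompressible velocity at positions x, shape (n_particles, 2)."""
            arg = x @ k.T + phase                                # (n_particles, n_modes)
            return (amp * np.cos(arg)) @ perp

        x = rng.uniform(0, 1, size=(100, 2))                     # initial particle cloud
        dt = 1e-3
        for _ in range(1000):                                    # simple Euler advection
            x = x + dt * velocity(x)
        print("cloud spread:", x.std(axis=0))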

  15. Comparison of performance of simulation models for floor heating

    DEFF Research Database (Denmark)

    Weitzmann, Peter; Svendsen, Svend

    2005-01-01

    This paper describes the comparison of performance of simulation models for floor heating with different level of detail in the modelling process. The models are compared in an otherwise identical simulation model containing room model, walls, windows, ceiling and ventilation system. By exchanging...

  16. Landscape capability models as a tool to predict fine-scale forest bird occupancy and abundance

    Science.gov (United States)

    Loman, Zachary G.; DeLuca, William; Harrison, Daniel J.; Loftin, Cynthia S.; Rolek, Brian W.; Wood, Petra B.

    2018-01-01

    Context: Species-specific models of landscape capability (LC) can inform landscape conservation design. Landscape capability is “the ability of the landscape to provide the environment […] and the local resources […] needed for survival and reproduction […] in sufficient quantity, quality and accessibility to meet the life history requirements of individuals and local populations.” Landscape capability incorporates species’ life histories, ecologies, and distributions to model habitat for current and future landscapes and climates as a proactive strategy for conservation planning. Objectives: We tested the ability of a set of LC models to explain variation in point occupancy and abundance for seven bird species representative of spruce-fir, mixed conifer-hardwood, and riparian and wooded wetland macrohabitats. Methods: We compiled point count data sets used for biological inventory, species monitoring, and field studies across the northeastern United States to create an independent validation data set. Our validation explicitly accounted for underestimation in validation data using joint distance and time removal sampling. Results: Blackpoll warbler (Setophaga striata), wood thrush (Hylocichla mustelina), and Louisiana (Parkesia motacilla) and northern waterthrush (P. noveboracensis) models were validated as predicting variation in abundance, although this varied from not biologically meaningful (1%) to strongly meaningful (59%). We verified all seven species models [including ovenbird (Seiurus aurocapilla), blackburnian (Setophaga fusca) and cerulean warbler (Setophaga cerulea)], as all were positively related to occupancy data. Conclusions: LC models represent a useful tool for conservation planning owing to their predictive ability over a regional extent. As improved remote-sensed data become available, LC layers are updated, which will improve predictions.

  17. Fine resolution atmospheric sulfate model driven by operational meteorological data: Comparison with observations

    International Nuclear Information System (INIS)

    Benkovitz, C.M.; Schwartz, S.E.; Berkowitz, C.M.; Easter, R.C.

    1993-09-01

    The hypothesis that anthropogenic sulfur aerosol influences clear-sky and cloud albedo and can thus influence climate has been advanced by several investigators; current global-average climate forcing is estimated to be of comparable magnitude, but opposite sign, to longwave forcing by anthropogenic greenhouse gases. The high space and time variability of sulfate concentrations and column aerosol burdens has been established by observational data; however, the geographic and time coverage provided by data from surface monitoring networks is very limited. Consistent regional and global estimates of sulfate aerosol loading, and the contributions to this loading from different sources, can be obtained only by modeling studies. Here we describe a sub-hemispheric to global-scale Eulerian transport and transformation model for atmospheric sulfate and its precursors, driven by operational meteorological data, and report results of calculations for October 1986 for the North Atlantic and adjacent continental regions. The model, which is based on the Global Chemistry Model, uses meteorological data from the 6-hour forecast model of the European Center for Medium-Range Weather Forecasts to calculate transport and transformation of sulfur emissions. Time- and location-dependent dry deposition velocities were estimated using the methodology of Wesely and colleagues. Chemical reactions include gaseous oxidation of SO2 and DMS by OH, and aqueous oxidation of SO2 by H2O2 and O3. Anthropogenic emissions were from the NAPAP and EMEP 1985 inventories, and biogenic emissions were based on Bates et al. Calculated sulfate concentrations and column burdens exhibit high variability on spatial scales of hundreds of km and temporal scales of days. Calculated daily average sulfate concentrations closely reproduce observed concentrations at locations widespread over the model domain

  18. Tecnomatix Plant Simulation modeling and programming by means of examples

    CERN Document Server

    Bangsow, Steffen

    2015-01-01

    This book systematically introduces the development of simulation models as well as the implementation and evaluation of simulation experiments with Tecnomatix Plant Simulation. It is aimed at all users of Plant Simulation who have to handle more complex tasks, while also offering an easy entry into the program. Particular attention has been paid to introducing the simulation flow language SimTalk and its use in various areas of the simulation. The author demonstrates with over 200 examples how to combine the blocks for simulation models and how to use SimTalk for complex control and analysis

  19. Nonlinear distortion in wireless systems modeling and simulation with Matlab

    CERN Document Server

    Gharaibeh, Khaled M

    2011-01-01

    This book covers the principles of modeling and simulation of nonlinear distortion in wireless communication systems with MATLAB simulations and techniques. In this book, the author describes the principles of modeling and simulation of nonlinear distortion in single and multichannel wireless communication systems using both deterministic and stochastic signals. Models and simulation methods of nonlinear amplifiers explain in detail how to analyze and evaluate the performance of data communication links under nonlinear amplification. The book addresses the analysis of nonlinear systems
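
    As a rough, hedged illustration of the kind of analysis such a book treats (and not an excerpt from it), the Python sketch below passes a two-tone signal through a memoryless third-order nonlinearity and reads off the third-order intermodulation products; the coefficients and frequencies are arbitrary.

        # Illustrative two-tone test of a memoryless third-order nonlinearity, the
        # simplest model of amplifier distortion: intermodulation products appear
        # at 2*f1 - f2 and 2*f2 - f1.
        import numpy as np

        fs = 10_000.0
        t = np.arange(0, 1.0, 1 / fs)
        f1, f2 = 900.0, 1000.0
        x = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

        a1, a3 = 1.0, -0.1
        y = a1 * x + a3 * x**3                      # memoryless polynomial nonlinearity

        spectrum = np.abs(np.fft.rfft(y)) / len(y)
        freqs = np.fft.rfftfreq(len(y), 1 / fs)
        for f in (800.0, 1100.0):                   # third-order intermod frequencies
            idx = np.argmin(np.abs(freqs - f))
            print(f"IM3 level near {f:.0f} Hz: {spectrum[idx]:.4f}")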

  20. Multiple Time Series Ising Model for Financial Market Simulations

    International Nuclear Information System (INIS)

    Takaishi, Tetsuya

    2015-01-01

    In this paper we propose an Ising model which simulates multiple financial time series. Our model introduces an interaction which couples to the spins of other systems. Simulations from our model show that the time series exhibit the volatility clustering that is often observed in real financial markets. Furthermore, we also find non-zero cross-correlations between the volatilities from our model. Thus our model can simulate stock markets where the volatilities of stocks are mutually correlated
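
    A minimal sketch of the flavour of such a model is given below: two mean-field Ising "markets" are updated with Metropolis dynamics, each coupled to the other's mean spin, and the magnetizations are read out as return-like series whose absolute values can be cross-correlated. All couplings, temperatures and the return mapping are illustrative assumptions, not the paper's exact construction.

        # Two coupled mean-field Ising systems updated by Metropolis dynamics; the
        # magnetization of each system is used as a proxy "return" series.
        import numpy as np

        def coupled_ising_returns(n=200, sweeps=1500, J=1.0, g=0.3, beta=0.7, seed=3):
            rng = np.random.default_rng(seed)
            s = rng.choice([-1, 1], size=(2, n))           # two markets, n spins each
            rets = np.zeros((sweeps, 2))
            for t in range(sweeps):
                for m in range(2):
                    other_mag = s[1 - m].mean()            # coupling to the other market
                    for _ in range(n):                     # one Metropolis sweep
                        i = rng.integers(n)
                        local = J * s[m].mean() + g * other_mag
                        dE = 2.0 * s[m, i] * local
                        if dE <= 0 or rng.random() < np.exp(-beta * dE):
                            s[m, i] *= -1
                    rets[t, m] = s[m].mean()
            return rets

        r = coupled_ising_returns()
        print(np.corrcoef(np.abs(r[:, 0]), np.abs(r[:, 1]))[0, 1])  # volatility cross-correlation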

  1. Cognitive Modeling for Agent-Based Simulation of Child Maltreatment

    Science.gov (United States)

    Hu, Xiaolin; Puddy, Richard

    This paper extends previous work to develop cognitive modeling for agent-based simulation of child maltreatment (CM). The developed model is inspired by parental efficacy, parenting stress, and the theory of planned behavior. It provides an explanatory, process-oriented model of CM and incorporates causality relationships and feedback loops from different factors in the social ecology in order to simulate the dynamics of CM. We describe the model and present simulation results to demonstrate the features of this model.

  2. Modeling and Simulation Techniques for Large-Scale Communications Modeling

    National Research Council Canada - National Science Library

    Webb, Steve

    1997-01-01

    .... Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number strings in simulations is easy to implement and can provide significant savings for making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.

  3. A multi-scale homogenization model for fine-grained porous viscoplastic polycrystals: II - Applications to FCC and HCP materials

    Science.gov (United States)

    Song, Dawei; Ponte Castañeda, P.

    2018-06-01

    In Part I of this work (Song and Ponte Castañeda, 2018a), a new homogenization model was developed for the macroscopic behavior of three-scale porous polycrystals consisting of random distributions of large pores in a fine-grained polycrystalline matrix. In this second part, the model is used to investigate both the instantaneous effective behavior and the finite-strain macroscopic response of porous FCC and HCP polycrystals for axisymmetric loading conditions. The stress triaxiality and Lode parameter are found to have significant effects on the evolution of the substructure, which in turn have important implications for the overall hardening/softening behavior of the porous polycrystal. The intrinsic effect of the texture evolution of the polycrystalline matrix is inferred by appropriate comparisons with corresponding results for porous isotropic materials, and found to be significant, especially at low triaxialities. In particular, the predictions of the model identify, for the first time, two disparate regimes for the macroscopic response of porous polycrystals: a porosity-controlled regime at high triaxialities, and a texture-controlled regime at low triaxialities. The transition between these two regimes is found to be quite sharp, taking place between triaxialities of 1 and 2.

  4. Monte Carlo simulation of Markov unreliability models

    International Nuclear Information System (INIS)

    Lewis, E.E.; Boehm, F.

    1984-01-01

    A Monte Carlo method is formulated for the evaluation of the unreliability of complex systems with known component failure and repair rates. The formulation is in terms of a Markov process, allowing dependences between components to be modeled and computational efficiencies to be achieved in the Monte Carlo simulation. Two variance reduction techniques, forced transition and failure biasing, are employed to increase the computational efficiency of the random walk procedure. For an example problem these result in improved computational efficiency by more than three orders of magnitude over analog Monte Carlo. The method is generalized to treat problems with distributed failure and repair rate data, and a batching technique is introduced and shown to result in substantial increases in computational efficiency for an example problem. A method for separating the variance due to the data uncertainty from that due to the finite number of random walks is presented. (orig.)
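
    The sketch below shows a plain analog Monte Carlo estimate of the unreliability of a small Markov system (a two-component parallel arrangement with assumed failure and repair rates); the forced-transition and failure-biasing variance reduction described in the record would add importance-sampling weights on top of this baseline.

        # Analog Monte Carlo unreliability estimate for a continuous-time Markov chain:
        # state = number of failed components, state 2 = system failure (absorbing).
        import numpy as np

        def unreliability(lam=1e-3, mu=1e-1, mission_time=1000.0, n_hist=50_000, seed=4):
            rng = np.random.default_rng(seed)
            failures = 0
            for _ in range(n_hist):
                t, state = 0.0, 0
                while True:
                    rates = {0: [(2 * lam, 1)],                  # both up: either can fail
                             1: [(lam, 2), (mu, 0)]}[state]      # one down: fail or repair
                    total = sum(r for r, _ in rates)
                    t += rng.exponential(1.0 / total)            # holding time in state
                    if t > mission_time:
                        break
                    u = rng.random() * total
                    acc = 0.0
                    for r, nxt in rates:                         # choose next state
                        acc += r
                        if u <= acc:
                            state = nxt
                            break
                    if state == 2:                               # system has failed
                        failures += 1
                        break
            return failures / n_hist

        print(unreliability())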

  5. Fine-Grained Linguistic Soft Constraints on Statistical Natural Language Processing Models

    Science.gov (United States)

    2009-01-01


  6. Finite element (F.E.) modelling of hydrogen migration and blister formation in PHWR coolant channels

    International Nuclear Information System (INIS)

    Prasad, P.S.; Dutta, B.K.; Sinha, R.K.; Kushwaha, H.S.; Mahajan, S.C.; Kakodkar, A.

    1995-01-01

    The formation of a cold spot in a pressure tube, due to its contact with the calandria tube of a PHWR coolant channel, results in the migration of hydrogen in the pressure tube towards the contact zone from the surrounding material. A 3-D finite element code, SPARSH, has been developed to model the hydrogen redistribution and consequent hydride blister formation due to thermal and hydrogen concentration gradients. In the present paper, the details and performance of this code are presented. (author). 6 refs., 2 figs

  7. Modeling the fine fragmentation following the triggering stage of a vapor explosion

    International Nuclear Information System (INIS)

    Darbord, I.

    1997-01-01

    In the context of PWR severe accidents, in which the core melts, this thesis studies one of the stages of an FCI (fuel-coolant interaction), or vapor explosion. An FCI is a rapid evaporation of a coolant when it comes into contact with a hot liquid. More precisely, the subject of this study is the triggering stage of the FCI, when a fuel drop with a diameter of around one centimeter breaks up into many fragments with diameters of around a hundred micrometers. The model describes the cyclic collapse and growth of a vapor bubble around the fuel droplet and its fragmentation. The main features of the model are: - the destabilization of the film or the vapor bubble due to the growth of Rayleigh-Taylor instabilities (these form coolant jets that contact the fuel surface); - the mechanisms of fragmentation following the contacts (in the case of entrapment of a certain amount of coolant in the fuel, the entrapped coolant evaporates violently after it has been heated to the homogeneous nucleation temperature); - the transient heat transfer from the fragments to the coolant and the elevated vapor production, which leads to an important expansion of the bubble (regarding this point, the cooling of the fragments has been described by a transient heat transfer coefficient linked to nucleate boiling). The results of the model show good agreement with experimental data. (Author)

  8. Theoretical modeling of fine-particle deposition in 3-dimensional bronchial bifurcations

    International Nuclear Information System (INIS)

    Shaw, D.T.; Rajendran, N.; Liao, N.S.

    1978-01-01

    A theoretical model is developed for the prediction of the peak-to-average particle deposition flux in the human bronchial airways. The model involves the determination of the peak flux by a round-nose 2-dimensional bifurcation channel and the average deposition flux by a curved-tube model. The ''hot-spot'' effect for all generations in the human respiratory system is estimated. Hot spots are usually associated with the sites of bronchoconstriction or even chronic bronchitis and lung cancer. Recent studies indicate that lung cancer in smokers may be caused by the deposition of radioactive particles produced by the burning of tobacco leaves. High local concentrations of Po-210 have been measured in epithelium from bronchial bifurcations of smokers. This Po-210 is the radioactive daughter of Pb-210, which is produced from a long chain of radioactive decay starting from uranium in fertilizer-enriched soil. It is found that the peak deposition flux is higher than the average deposition flux by a factor ranging between 5 and 30, depending on the generation number. The importance of this peak-to-average deposition flux ratio for environmental safety studies is discussed

  9. Fine-Resolution Modeling of the Santa Cruz and San Pedro River Basins for Climate Change and Riparian System Studies

    Science.gov (United States)

    Robles-Morua, A.; Vivoni, E. R.; Volo, T. J.; Rivera, E. R.; Dominguez, F.; Meixner, T.

    2011-12-01

    This project is part of a multidisciplinary effort aimed at understanding the impacts of climate variability and change on the ecological services provided by riparian ecosystems in semiarid watersheds of the southwestern United States. Valuing the environmental and recreational services provided by these ecosystems in the future requires a numerical simulation approach to estimate streamflow in ungauged tributaries as well as diffuse and direct recharge to groundwater basins. In this work, we utilize a distributed hydrologic model known as the TIN-based Real-time Integrated Basin Simulator (tRIBS) in the upper Santa Cruz and San Pedro basins with the goal of generating simulated hydrological fields that will be coupled to a riparian groundwater model. With the distributed model, we will evaluate a set of climate change and population scenarios to quantify future conditions in these two river systems and their impacts on flood peaks, recharge events and low flows. Here, we present a model confidence building exercise based on high performance computing (HPC) runs of the tRIBS model in both basins during the period of 1990-2000. Distributed model simulations utilize best-available data across the US-Mexico border on topography, land cover and soils obtained from analysis of remotely-sensed imagery and government databases. Meteorological forcing over the historical period is obtained from a combination of sparse ground networks and weather radar rainfall estimates. We then focus on a comparison between simulation runs using ground-based forcing to cases where the Weather Research Forecast (WRF) model is used to specify the historical conditions. Two spatial resolutions are considered from the WRF model fields - a coarse (35-km) and a downscaled (10- km) forcing. Comparisons will focus on the distribution of precipitation, soil moisture, runoff generation and recharge and assess the value of the WRF coarse and downscaled products. These results provide confidence in

  10. Identification of fine scale and landscape scale drivers of urban aboveground carbon stocks using high-resolution modeling and mapping.

    Science.gov (United States)

    Mitchell, Matthew G E; Johansen, Kasper; Maron, Martine; McAlpine, Clive A; Wu, Dan; Rhodes, Jonathan R

    2018-05-01

    Urban areas are sources of land use change and CO2 emissions that contribute to global climate change. Despite this, assessments of urban vegetation carbon stocks often fail to identify important landscape-scale drivers of variation in urban carbon, especially the potential effects of landscape structure variables at different spatial scales. We combined field measurements with Light Detection And Ranging (LiDAR) data to build high-resolution models of woody plant aboveground carbon across the urban portion of Brisbane, Australia, and then identified landscape-scale drivers of these carbon stocks. First, we used LiDAR data to quantify the extent and vertical structure of vegetation across the city at high resolution (5 × 5 m). Next, we paired this data with aboveground carbon measurements at 219 sites to create boosted regression tree models and map aboveground carbon across the city. We then used these maps to determine how spatial variation in land cover/land use and landscape structure affects these carbon stocks. Foliage densities above 5 m height, tree canopy height, and the presence of ground openings had the strongest relationships with aboveground carbon. Using these fine-scale relationships, we estimate that 2.2 ± 0.4 Tg C are stored aboveground in the urban portion of Brisbane, with mean densities of 32.6 ± 5.8 Mg C ha-1 calculated across the entire urban land area, and 110.9 ± 19.7 Mg C ha-1 calculated within treed areas. Predicted carbon densities within treed areas showed strong positive relationships with the proportion of surrounding tree cover and how clumped that tree cover was at both 1 km2 and 1 ha resolutions. Our models predict that even dense urban areas with low tree cover can have high carbon densities at fine scales. We conclude that actions and policies aimed at increasing urban carbon should focus on those areas where urban tree cover is most fragmented. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Linking Fine-Scale Observations and Model Output with Imagery at Multiple Scales

    Science.gov (United States)

    Sadler, J.; Walthall, C. L.

    2014-12-01

    The development and implementation of a system for seasonal worldwide agricultural yield estimates is underway with the international Group on Earth Observations GeoGLAM project. GeoGLAM includes a research component to continually improve and validate its algorithms. There is a history of field measurement campaigns going back decades to draw upon for ways of linking surface measurements and model results with satellite observations. Ground-based, in-situ measurements collected by interdisciplinary teams include yields, model inputs and factors affecting scene radiation. Data that is comparable across space and time with careful attention to calibration is essential for the development and validation of agricultural applications of remote sensing. Data management to ensure stewardship, availability and accessibility of the data are best accomplished when considered an integral part of the research. The expense and logistical challenges of field measurement campaigns can be cost-prohibitive and because of short funding cycles for research, access to consistent, stable study sites can be lost. The use of a dedicated staff for baseline data needed by multiple investigators, and conducting measurement campaigns using existing measurement networks such as the USDA Long Term Agroecosystem Research network can fulfill these needs and ensure long-term access to study sites.

  12. Developing Cognitive Models for Social Simulation from Survey Data

    Science.gov (United States)

    Alt, Jonathan K.; Lieberman, Stephen

    The representation of human behavior and cognition continues to challenge the modeling and simulation community. The use of survey and polling instruments to inform belief states, issue stances and action choice models provides a compelling means of developing models and simulations with empirical data. Using these types of data to populate social simulations can greatly enhance the feasibility of validation efforts, the reusability of social and behavioral modeling frameworks, and the testable reliability of simulations. We provide a case study demonstrating these effects, document the use of survey data to develop cognitive models, and suggest future paths forward for social and behavioral modeling.

  13. A Geostationary Earth Orbit Satellite Model Using Easy Java Simulation

    Science.gov (United States)

    Wee, Loo Kang; Goh, Giam Hwee

    2013-01-01

    We develop an Easy Java Simulation (EJS) model for students to visualize geostationary orbits near Earth, modelled using a Java 3D implementation of the EJS 3D library. The simplified physics model is described and simulated using a simple constant angular velocity equation. We discuss four computer model design ideas: (1) a simple and realistic…
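
    As a worked example of the constant-angular-velocity relation such a model visualizes, the snippet below computes the geostationary orbital radius from Earth's gravitational parameter and the sidereal day; the constants are standard values and the calculation is independent of the EJS implementation.

        # Geostationary orbit from constant angular velocity: GM/r^2 = omega^2 * r.
        import math

        GM = 3.986004418e14          # Earth's gravitational parameter, m^3/s^2
        T = 86164.1                  # sidereal day, s
        omega = 2 * math.pi / T      # constant angular velocity of the satellite
        r = (GM / omega**2) ** (1 / 3)
        print(f"geostationary radius  = {r/1e3:.0f} km")
        print(f"altitude above Earth  = {(r - 6378.137e3)/1e3:.0f} km")  # ~35786 km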

  14. A new simulation model for assessing aircraft emergency evacuation considering passenger physical characteristics

    International Nuclear Information System (INIS)

    Liu, Yu; Wang, Weijie; Huang, Hong-Zhong; Li, Yanfeng; Yang, Yuanjian

    2014-01-01

    Conducting a real aircraft evacuation trial is oftentimes unaffordable, as it is extremely expensive and may cause severe injury to participants. Simulation models as an alternative have been used to overcome the aforementioned issues in recent years. This paper proposes a new simulation model for emergency evacuation of civil aircraft. Its unique features and advantages over the existing models are twofold: (1) passengers' critical physical characteristics, e.g. waist size, gender, age, and disabilities, which impact the movement and egress time of individual evacuees from a statistical viewpoint, are taken into account in the new model. (2) Improvements are made to enhance the accuracy of the simulation model from three aspects. First, the staggered mesh discretization method together with the agent-based approach is utilized to simulate movements of individual passengers in an emergency evacuation process. Second, each node discretized to represent cabin space in the new model can contain more than one passenger if they are moving in the same direction. Finally, each individual passenger is able to change his/her evacuation route in a real-time manner based upon the distance from the current position to the target exit and the queue length. The effectiveness of the proposed simulation model is demonstrated on the Boeing 767-300 aircraft. - Highlights: • A new simulation model of aircraft emergency evacuation is developed. • Critical physical characteristics of passengers, e.g. waist size, gender, age, and disabilities, are taken into account in the new model. • An agent-based approach along with a multi-level fine network representation is used. • Passengers are able to change their evacuation routes in a real-time manner based upon distance and length of queue

  15. Four Models of In Situ Simulation

    DEFF Research Database (Denmark)

    Musaeus, Peter; Krogh, Kristian; Paltved, Charlotte

    2014-01-01

    Introduction: In situ simulation is characterized by being situated in the clinical environment as opposed to the simulation laboratory, but it bears a family resemblance to other types of on-the-job training. We explore a typology of in situ simulation and suggest...... that there are four fruitful approaches to in situ simulation: (1) In situ simulation informed by reported critical incidents and adverse events from emergency departments (ED) in which team training is about to be conducted to write scenarios. (2) In situ simulation through ethnographic studies at the ED. (3) Using...... the following processes: Transition processes, Action processes and Interpersonal processes. Design and purpose: This abstract suggests four approaches to in situ simulation. A pilot study will evaluate the different approaches in two emergency departments in the Central Region of Denmark. Methods: The typology...

  16. Resource selection models are useful in predicting fine-scale distributions of black-footed ferrets in prairie dog colonies

    Science.gov (United States)

    Eads, David A.; Jachowski, David S.; Biggins, Dean E.; Livieri, Travis M.; Matchett, Marc R.; Millspaugh, Joshua J.

    2012-01-01

    Wildlife-habitat relationships are often conceptualized as resource selection functions (RSFs)—models increasingly used to estimate species distributions and prioritize habitat conservation. We evaluated the predictive capabilities of 2 black-footed ferret (Mustela nigripes) RSFs developed on a 452-ha colony of black-tailed prairie dogs (Cynomys ludovicianus) in the Conata Basin, South Dakota. We used the RSFs to project the relative probability of occurrence of ferrets throughout an adjacent 227-ha colony. We evaluated performance of the RSFs using ferret space use data collected via postbreeding spotlight surveys June–October 2005–2006. In home ranges and core areas, ferrets selected the predicted "very high" and "high" occurrence categories of both RSFs. Count metrics also suggested selection of these categories; for each model in each year, approximately 81% of ferret locations occurred in areas of very high or high predicted occurrence. These results suggest usefulness of the RSFs in estimating the distribution of ferrets throughout a black-tailed prairie dog colony. The RSFs provide a fine-scale habitat assessment for ferrets that can be used to prioritize releases of ferrets and habitat restoration for prairie dogs and ferrets. A method to quickly inventory the distribution of prairie dog burrow openings would greatly facilitate application of the RSFs.

  17. Modeling and simulation of the SDC data collection chip

    International Nuclear Information System (INIS)

    Hughes, E.; Haney, M.; Golin, E.; Jones, L.; Knapp, D.; Tharakan, G.; Downing, R.

    1992-01-01

    This paper describes modeling and simulation of the Data Collection Chip (DCC) design for the Solenoidal Detector Collaboration (SDC). Models of the DCC written in Verilog and VHDL are described, and results are presented. The models have been simulated to study queue depth requirements and to compare control feedback alternatives. Insight into the management of models and simulation tools is given. Finally, techniques useful in the design process for data acquisition systems are discussed

  18. Molecular Simulation towards Efficient and Representative Subsurface Reservoirs Modeling

    KAUST Repository

    Kadoura, Ahmad Salim

    2016-01-01

    This dissertation focuses on the application of Monte Carlo (MC) molecular simulation and Molecular Dynamics (MD) in modeling thermodynamics and flow of subsurface reservoir fluids. At first, MC molecular simulation is proposed as a promising method

  19. Modelling toolkit for simulation of maglev devices

    Science.gov (United States)

    Peña-Roche, J.; Badía-Majós, A.

    2017-01-01

    A stand-alone App1 has been developed, focused on obtaining information about relevant engineering properties of magnetic levitation systems. Our modelling toolkit provides real time simulations of 2D magneto-mechanical quantities for superconductor (SC)/permanent magnet structures. The source code is open and may be customised for a variety of configurations. Ultimately, it relies on the variational statement of the critical state model for the superconducting component and has been verified against experimental data for YBaCuO/NdFeB assemblies. On a quantitative basis, the values of the arising forces, induced superconducting currents, as well as a plot of the magnetic field lines are displayed upon selection of an arbitrary trajectory of the magnet in the vicinity of the SC. The stability issues related to the cooling process, as well as the maximum attainable forces for a given material and geometry are immediately observed. Due to the complexity of the problem, a strategy based on cluster computing, database compression, and real-time post-processing on the device has been implemented.

  20. Simulation and Modeling Application in Agricultural Mechanization

    Directory of Open Access Journals (Sweden)

    R. M. Hudzari

    2012-01-01

    Full Text Available This experiment was conducted to determine equations relating the Hue digital values of the oil palm fruit surface to the maturity stage of the fruit in the plantation. The FFB images were zoomed and captured using a Nikon digital camera, and the Hue was calculated using the highest-frequency value of the R, G, and B color components obtained from histogram analysis software. A new procedure for monitoring the image pixel values of the oil palm fruit surface color during real-time growth to maturity was developed. The predicted harvesting day was calculated based on the developed model relating Hue values to mesocarp oil content. The simulation model is regressed and predicts the day of harvesting, or the number of days before harvest, of the FFB. The results of the mesocarp oil content experiments can be used for real-time oil content determination with the MPOB color meter. The graph used to determine the day of harvesting the FFB is presented in this research. The oil was found to start developing in the mesocarp at 65 days before the fruit reaches the ripe maturity stage of 75% oil to dry mesocarp.
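
    A hedged sketch of the colour-analysis step described above is given below: the most frequent R, G and B values of a fruit-surface image are taken from per-channel histograms and converted to a Hue angle. The use of NumPy and colorsys, and the synthetic test image, are assumptions for illustration only.

        # Dominant Hue from per-channel histogram modes of an RGB image.
        import colorsys
        import numpy as np

        def dominant_hue(image_rgb):
            """image_rgb: uint8 array of shape (H, W, 3). Returns Hue in degrees."""
            modes = []
            for c in range(3):                               # R, G, B channels
                hist = np.bincount(image_rgb[..., c].ravel(), minlength=256)
                modes.append(int(hist.argmax()))             # most frequent channel value
            r, g, b = (m / 255.0 for m in modes)
            h, _, _ = colorsys.rgb_to_hsv(r, g, b)
            return h * 360.0                                 # Hue in degrees

        rng = np.random.default_rng(5)
        fake_fruit = rng.integers(0, 256, size=(100, 100, 3), dtype=np.uint8)
        print(f"dominant Hue: {dominant_hue(fake_fruit):.1f} deg")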

  1. Fine-resolution Modeling of Urban-Energy Systems' Water Footprint in River Networks

    Science.gov (United States)

    McManamay, R.; Surendran Nair, S.; Morton, A.; DeRolph, C.; Stewart, R.

    2015-12-01

    Characterizing the interplay between urbanization, energy production, and water resources is essential for ensuring sustainable population growth. In order to balance limited water supplies, competing users must account for their realized and virtual water footprint, i.e. the total direct and indirect amount of water used, respectively. Unfortunately, publicly reported US water use estimates are spatially coarse, temporally static, and completely ignore returns of water to rivers after use. These estimates are insufficient to account for the high spatial and temporal heterogeneity of water budgets in urbanizing systems. Likewise, urbanizing areas are supported by competing sources of energy production, which also have heterogeneous water footprints. Hence, a fundamental challenge of planning for sustainable urban growth and decision-making across disparate policy sectors lies in characterizing inter-dependencies among urban systems, energy producers, and water resources. A modeling framework is presented that provides a novel approach to integrate urban-energy infrastructure into a spatial accounting network that accurately measures water footprints as changes in the quantity and quality of river flows. River networks (RNs), i.e. networks of branching tributaries nested within larger rivers, provide a spatial structure to measure water budgets by modeling hydrology and accounting for use and returns from urbanizing areas and energy producers. We quantify urban-energy water footprints for Atlanta, GA and Knoxville, TN (USA) based on changes in hydrology in RNs. Although water intakes providing supply to metropolitan areas were proximate to metropolitan areas, power plants contributing to energy demand in Knoxville and Atlanta, occurred 30 and 90km outside the metropolitan boundary, respectively. Direct water footprints from urban landcover primarily comprised smaller streams whereas indirect footprints from water supply reservoirs and energy producers included

  2. Modelization and simulation of capillary barriers

    International Nuclear Information System (INIS)

    Lisbona Cortes, F.; Aguilar Villa, G.; Clavero Gracia, C.; Gracia Lozano, J.L.

    1998-01-01

    Among the different underground transport phenomena, that due to water flows is of great relevance. Water flows in infiltration and percolation processes are responsible for the transport of hazardous wastes towards phreatic layers. From the industrial and geological standpoints, there is great interest in the design of natural devices to avoid the flows transporting polluting substances. This interest increases when such devices are used to isolate radioactive waste repositories, whose life is to be longer than several hundred years. The so-called natural devices are those based on the superimposition of materials with different hydraulic properties. In particular, the flow retention in this kind of stratified medium, under unsaturated conditions, is basically due to the capillary barrier effect, which results from placing a low-conductivity material over another with a high hydraulic conductivity. Covers designed from the above effect also have to allow drainage of the upper layer. The lower cost of these covers, with respect to other kinds of protection systems, and the stability in time of their components make them very attractive. However, a previous investigation to determine their effectiveness is required. In this report we present the computer code BCSIM, useful for easy simulations of unsaturated flows in a capillary barrier configuration with drainage, and which is intended to serve as a tool for designing efficient covers. The model, the numerical algorithm and several implementation aspects are described. Results obtained in several simulations, confirming the effectiveness of capillary barriers as a technique to build safety covers for hazardous waste repositories, are presented. (Author)

  3. Simulation Models for Socioeconomic Inequalities in Health: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Niko Speybroeck

    2013-11-01

    Full Text Available Background: The emergence and evolution of socioeconomic inequalities in health involves multiple factors interacting with each other at different levels. Simulation models are suitable for studying such complex and dynamic systems and have the ability to test the impact of policy interventions in silico. Objective: To explore how simulation models were used in the field of socioeconomic inequalities in health. Methods: An electronic search of studies assessing socioeconomic inequalities in health using a simulation model was conducted. Characteristics of the simulation models were extracted and distinct simulation approaches were identified. As an illustration, a simple agent-based model of the emergence of socioeconomic differences in alcohol abuse was developed. Results: We found 61 studies published between 1989 and 2013. Ten different simulation approaches were identified. The agent-based model illustration showed that multilevel, reciprocal and indirect effects of social determinants on health can be modeled flexibly. Discussion and Conclusions: Based on the review, we discuss the utility of using simulation models for studying health inequalities, and refer to good modeling practices for developing such models. The review and the simulation model example suggest that the use of simulation models may enhance the understanding and debate about existing and new socioeconomic inequalities of health frameworks.

  4. An electrical circuit model for simulation of indoor radon concentration.

    Science.gov (United States)

    Musavi Nasab, S M; Negarestani, A

    2013-01-01

    In this study, a new model based on electric circuit theory was introduced to simulate the behaviour of indoor radon concentration. In this model, a voltage source simulates radon generation in walls, conductivity simulates migration through walls and voltage across a capacitor simulates radon concentration in a room. This simulation considers migration of radon through walls by diffusion mechanism in one-dimensional geometry. Data reported in a typical Greek house were employed to examine the application of this technique of simulation to the behaviour of radon.
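
    The circuit analogy can be made concrete with the small sketch below, which treats the room as a capacitor charged through a resistance (the wall) from a voltage source (radon generation) and evaluates the resulting first-order build-up curve; all parameter values are placeholders, not data from the Greek house reported in the study.

        # Electric-circuit analogy for indoor radon: voltage source = generation in
        # the wall, resistance = wall conductivity, capacitor voltage = concentration.
        import numpy as np

        V_source = 200.0      # "voltage" ~ radon generation potential (assumed units)
        R, C = 2.0, 10.0      # "resistance" of the wall and "capacitance" of the room
        tau = R * C           # time constant of the room response

        t = np.linspace(0.0, 5 * tau, 200)
        V = V_source * (1.0 - np.exp(-t / tau))    # capacitor charging = indoor build-up
        print(f"steady-state level ~ {V[-1]:.1f}, time constant = {tau:.1f} (assumed units)")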

  5. Aircraft vulnerability analysis by modeling and simulation

    Science.gov (United States)

    Willers, Cornelius J.; Willers, Maria S.; de Waal, Alta

    2014-10-01

    guidance acceleration and seeker sensitivity. For the purpose of this investigation the aircraft is equipped with conventional pyrotechnic decoy flares and the missile has no counter-countermeasure means (security restrictions on open publication). This complete simulation is used to calculate the missile miss distance, when the missile is launched from different locations around the aircraft. The miss distance data is then graphically presented showing miss distance (aircraft vulnerability) as a function of launch direction and range. The aircraft vulnerability graph accounts for aircraft and missile characteristics, but does not account for missile deployment doctrine. A Bayesian network is constructed to fuse the doctrinal rules with the aircraft vulnerability data. The Bayesian network now provides the capability to evaluate the combined risk of missile launch and aircraft vulnerability. It is shown in this paper that it is indeed possible to predict the aircraft vulnerability to missile attack through a comprehensive modelling and holistic evaluation process. By using the appropriate real-world models, this approach is used to evaluate the effectiveness of specific countermeasure techniques against specific missile threats. The use of a Bayesian network provides the means to fuse simulated performance data with more abstract doctrinal rules to provide a realistic assessment of the aircraft vulnerability.

  6. Model calibration for building energy efficiency simulation

    International Nuclear Information System (INIS)

    Mustafaraj, Giorgio; Marini, Dashamir; Costa, Andrea; Keane, Marcus

    2014-01-01

    Highlights: • Developing a 3D model relating to building architecture, occupancy and HVAC operation. • Two calibration stages developed, with the final model providing accurate results. • Using an onsite weather station for generating the weather data file in EnergyPlus. • Predicting thermal behaviour of underfloor heating, heat pump and natural ventilation. • Monthly energy saving opportunities related to the heat pump of 20–27% were identified. - Abstract: This research work deals with an Environmental Research Institute (ERI) building where an underfloor heating system and natural ventilation are the main systems used to maintain comfort conditions throughout 80% of the building areas. Firstly, this work involved developing a 3D model relating to building architecture, occupancy and HVAC operation. Secondly, a calibration methodology consisting of two levels was applied in order to ensure accuracy and reduce the likelihood of errors. To further improve the accuracy of the calibration, a historical weather data file for the year 2011 was created from the on-site local weather station of the ERI building. After applying the second level of the calibration process, the values of the Mean Bias Error (MBE) and the Coefficient of Variation of the Root Mean Squared Error (CV(RMSE)) on an hourly basis for heat pump electricity consumption varied within the following ranges: MBE (hourly) from −5.6% to 7.5% and CV(RMSE) (hourly) from 7.3% to 25.1%. Finally, the building was simulated with EnergyPlus to identify further possibilities of energy savings supplied by a water-to-water heat pump to the underfloor heating system. It was found that electricity consumption savings from the heat pump can vary between 20% and 27% on a monthly basis
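
    For reference, the two calibration statistics quoted above can be computed as in the hedged sketch below, which follows the usual ASHRAE Guideline 14 style definitions (the exact normalization used in the paper is assumed); the numbers are hypothetical hourly heat-pump consumption values.

        # Mean Bias Error and CV(RMSE) between measured and simulated series, in percent.
        import numpy as np

        def mbe_percent(measured, simulated):
            measured, simulated = np.asarray(measured, float), np.asarray(simulated, float)
            return 100.0 * np.sum(measured - simulated) / np.sum(measured)

        def cv_rmse_percent(measured, simulated):
            measured, simulated = np.asarray(measured, float), np.asarray(simulated, float)
            rmse = np.sqrt(np.mean((measured - simulated) ** 2))
            return 100.0 * rmse / np.mean(measured)

        measured  = [12.1, 13.4, 11.8, 12.9, 14.2]   # hypothetical hourly heat-pump kWh
        simulated = [11.5, 13.9, 12.2, 12.4, 13.6]
        print(mbe_percent(measured, simulated), cv_rmse_percent(measured, simulated))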

  7. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  8. Guidelines for Reproducibly Building and Simulating Systems Biology Models.

    Science.gov (United States)

    Medley, J Kyle; Goldberg, Arthur P; Karr, Jonathan R

    2016-10-01

    Reproducibility is the cornerstone of the scientific method. However, currently, many systems biology models cannot easily be reproduced. This paper presents methods that address this problem. We analyzed the recent Mycoplasma genitalium whole-cell (WC) model to determine the requirements for reproducible modeling. We determined that reproducible modeling requires both repeatable model building and repeatable simulation. New standards and simulation software tools are needed to enhance and verify the reproducibility of modeling. New standards are needed to explicitly document every data source and assumption, and new deterministic parallel simulation tools are needed to quickly simulate large, complex models. We anticipate that these new standards and software will enable researchers to reproducibly build and simulate more complex models, including WC models.

  9. Re-evaluation of the Pressure Effect for Nucleation in Laminar Flow Diffusion Chamber Experiments with Fluent and the Fine Particle Model

    Czech Academy of Sciences Publication Activity Database

    Herrmann, E.; Hyvärinen, A.-P.; Brus, David; Lihavainen, H.; Kulmala, M.

    2009-01-01

    Vol. 113, No. 8 (2009), pp. 1434–1439. ISSN 1089-5639. Institutional research plan: CEZ:AV0Z40720504. Keywords: laminar flow diffusion chamber * experimental data * fine particle model. Subject RIV: CF - Physical; Theoretical Chemistry. Impact factor: 2.899, year: 2009

  10. Fine and Gross Motor Task Performance When Using Computer-Based Video Models by Students with Autism and Moderate Intellectual Disability

    Science.gov (United States)

    Mechling, Linda C.; Swindle, Catherine O.

    2013-01-01

    This investigation examined the effects of video modeling on the fine and gross motor task performance by three students with a diagnosis of moderate intellectual disability (Group 1) and by three students with a diagnosis of autism spectrum disorder (Group 2). Using a multiple probe design across three sets of tasks, the study examined the…

  11. Beyond Modeling: All-Atom Olfactory Receptor Model Simulations

    Directory of Open Access Journals (Sweden)

    Peter C Lai

    2012-05-01

    Full Text Available Olfactory receptors (ORs) are a type of GTP-binding protein-coupled receptor (GPCR). These receptors are responsible for mediating the sense of smell through their interaction with odor ligands. OR-odorant interactions mark the first step in the process that leads to olfaction. Computational studies on model OR structures can validate experimental functional studies as well as generate focused and novel hypotheses for further bench investigation by providing a view of these interactions at the molecular level. Here we have shown the specific advantages of simulating the dynamic environment that is associated with OR-odorant interactions. We present a rigorous methodology that ranges from the creation of a computationally-derived model of an olfactory receptor to simulating the interactions between an OR and an odorant molecule. Given the ubiquitous occurrence of GPCRs in the membranes of cells, we anticipate that our OR-developed methodology will serve as a model for the computational structural biology of all GPCRs.

  12. Photoionization Modeling of Infrared Fine-Structure Lines in Luminous Galaxies with Central Dust-Bounded Nebulae

    National Research Council Canada - National Science Library

    Fischer, Jacqueline; Allen, Robert; Dudley, C. C; Satyapal, Shobita; Luhman, Michael L; Wolfire, Mark G; Smith, Howard A

    2001-01-01

    Far-infrared spectroscopy of a small sample of IR-bright galaxies taken with the Infrared Space Observatory Long Wavelength Spectrometer has revealed a dramatic progression extending from strong fine...

  13. A multi-scale homogenization model for fine-grained porous viscoplastic polycrystals: I - Finite-strain theory

    Science.gov (United States)

    Song, Dawei; Ponte Castañeda, P.

    2018-06-01

    We make use of the recently developed iterated second-order homogenization method to obtain finite-strain constitutive models for the macroscopic response of porous polycrystals consisting of large pores randomly distributed in a fine-grained polycrystalline matrix. The porous polycrystal is modeled as a three-scale composite, where the grains are described by single-crystal viscoplasticity and the pores are assumed to be large compared to the grain size. The method makes use of a linear comparison composite (LCC) with the same substructure as the actual nonlinear composite, but whose local properties are chosen optimally via a suitably designed variational statement. In turn, the effective properties of the resulting three-scale LCC are determined by means of a sequential homogenization procedure, utilizing the self-consistent estimates for the effective behavior of the polycrystalline matrix, and the Willis estimates for the effective behavior of the porous composite. The iterated homogenization procedure allows for a more accurate characterization of the properties of the matrix by means of a finer "discretization" of the properties of the LCC to obtain improved estimates, especially at low porosities, high nonlinearities and high triaxialities. In addition, consistent homogenization estimates for the average strain rate and spin fields in the pores and grains are used to develop evolution laws for the substructural variables, including the porosity, pore shape and orientation, as well as the "crystallographic" and "morphological" textures of the underlying matrix. In Part II of this work, which has appeared in Song and Ponte Castañeda (2018b), the model is used to generate estimates for both the instantaneous effective response and the evolution of the microstructure for porous FCC and HCP polycrystals under various loading conditions.

  14. Fine mapping of the Bsr1 barley stripe mosaic virus resistance gene in the model grass Brachypodium distachyon.

    Directory of Open Access Journals (Sweden)

    Yu Cui

    Full Text Available The ND18 strain of Barley stripe mosaic virus (BSMV) infects several lines of Brachypodium distachyon, a recently developed model system for genomics research in cereals. Among the inbred lines tested, Bd3-1 is highly resistant at 20 to 25 °C, whereas Bd21 is susceptible and infection results in an intense mosaic phenotype accompanied by high levels of replicating virus. We generated an F(6:7) recombinant inbred line (RIL) population from a cross between Bd3-1 and Bd21 and used the RILs, and an F(2) population of a second Bd21 × Bd3-1 cross to evaluate the inheritance of resistance. The results indicate that resistance segregates as expected for a single dominant gene, which we have designated Barley stripe mosaic virus resistance 1 (Bsr1). We constructed a genetic linkage map of the RIL population using SNP markers to map this gene to within 705 Kb of the distal end of the top of chromosome 3. Additional CAPS and Indel markers were used to fine map Bsr1 to a 23 Kb interval containing five putative genes. Our study demonstrates the power of using RILs to rapidly map the genetic determinants of BSMV resistance in Brachypodium. Moreover, the RILs and their associated genetic map, when combined with the complete genomic sequence of Brachypodium, provide new resources for genetic analyses of many other traits.

  15. The COD Model: Simulating Workgroup Performance

    Science.gov (United States)

    Biggiero, Lucio; Sevi, Enrico

    Though the question of the determinants of workgroup performance is one of the most central in organization science, precise theoretical frameworks and formal demonstrations are still missing. In order to fill in this gap the COD agent-based simulation model is here presented and used to study the effects of task interdependence and bounded rationality on workgroup performance. The first relevant finding is an algorithmic demonstration of the ordering of interdependencies in terms of complexity, showing that the parallel mode is the simplest, followed by the sequential and then by the reciprocal. This result is far from being new in organization science, but what is remarkable is that now it has the strength of an algorithmic demonstration instead of being based on the authoritativeness of some scholar or on some episodic empirical finding. The second important result is that the progressive introduction of realistic limits to agents' rationality dramatically reduces workgroup performance and leads to a rather interesting result: when agents' rationality is severely bounded, simple norms work better than complex norms. The third main finding is that when the complexity of interdependence is high, then the appropriate coordination mechanism is agents' direct and active collaboration, which means teamwork.

  16. Diversity modelling for electrical power system simulation

    International Nuclear Information System (INIS)

    Sharip, R M; Abu Zarim, M A U A

    2013-01-01

    This paper considers diversity of generation and demand profiles against the different future energy scenarios and evaluates these on a technical basis. Compared to previous studies, this research applied a forecasting concept based on possible growth rates from publicly available electrical distribution scenarios concerning the UK. These scenarios were created by different bodies considering aspects such as environment, policy, regulation, economic and technical. In line with these scenarios, forecasting is on a long term timescale (up to every ten years from 2020 until 2050) in order to create a possible output of generation mix and demand profiles to be used as an appropriate boundary condition for the network simulation. The network considered is a segment of rural LV populated with a mixture of different housing types. The profiles for the 'future' energy and demand have been successfully modelled by applying a forecasting method. The network results under these profiles show for the cases studied that, even though the value of the power produced from each Micro-generation is often in line with the demand requirements of an individual dwelling, there will be no problems arising from high penetration of Micro-generation and demand side management for each dwelling considered. The results obtained highlight the technical issues/changes for energy delivery and management to rural customers under the future energy scenarios

  17. Diversity modelling for electrical power system simulation

    Science.gov (United States)

    Sharip, R. M.; Abu Zarim, M. A. U. A.

    2013-12-01

    This paper considers diversity of generation and demand profiles against the different future energy scenarios and evaluates these on a technical basis. Compared to previous studies, this research applied a forecasting concept based on possible growth rates from publicly available electrical distribution scenarios concerning the UK. These scenarios were created by different bodies considering aspects such as environment, policy, regulation, economic and technical. In line with these scenarios, forecasting is on a long term timescale (up to every ten years from 2020 until 2050) in order to create a possible output of generation mix and demand profiles to be used as an appropriate boundary condition for the network simulation. The network considered is a segment of rural LV populated with a mixture of different housing types. The profiles for the 'future' energy and demand have been successfully modelled by applying a forecasting method. The network results under these profiles show for the cases studied that, even though the value of the power produced from each Micro-generation is often in line with the demand requirements of an individual dwelling, there will be no problems arising from high penetration of Micro-generation and demand side management for each dwelling considered. The results obtained highlight the technical issues/changes for energy delivery and management to rural customers under the future energy scenarios.

  18. Optical Imaging and Radiometric Modeling and Simulation

    Science.gov (United States)

    Ha, Kong Q.; Fitzmaurice, Michael W.; Moiser, Gary E.; Howard, Joseph M.; Le, Chi M.

    2010-01-01

    OPTOOL software is a general-purpose optical systems analysis tool that was developed to offer a solution to problems associated with computational programs written for the James Webb Space Telescope optical system. It integrates existing routines into coherent processes, and provides a structure with reusable capabilities that allow additional processes to be quickly developed and integrated. It has an extensive graphical user interface, which makes the tool more intuitive and friendly. OPTOOL is implemented using MATLAB with a Fourier optics-based approach for point spread function (PSF) calculations. It features parametric and Monte Carlo simulation capabilities, and uses a direct integration calculation to permit high spatial sampling of the PSF. Exit pupil optical path difference (OPD) maps can be generated using combinations of Zernike polynomials or shaped power spectral densities. The graphical user interface allows rapid creation of arbitrary pupil geometries, and entry of all other modeling parameters to support basic imaging and radiometric analyses. OPTOOL provides the capability to generate wavefront-error (WFE) maps for arbitrary grid sizes. These maps are 2D arrays containing digital sampled versions of functions ranging from Zernike polynomials to combinations of sinusoidal wave functions in 2D, to functions generated from a spatial frequency power spectral distribution (PSD). It also can generate optical transfer functions (OTFs), which are incorporated into the PSF calculation. The user can specify radiometrics for the target and sky background, and key performance parameters for the instrument's focal plane array (FPA). This radiometric and detector model setup is fairly extensive, and includes parameters such as zodiacal background, thermal emission noise, read noise, and dark current. The setup also includes target spectral energy distribution as a function of wavelength for polychromatic sources, detector pixel size, and the FPA's charge
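
    For orientation, the Fourier-optics PSF calculation described above can be sketched as the squared magnitude of the Fourier transform of a complex pupil function built from an aperture mask and a wavefront-error (OPD) map. The Python/NumPy sketch below is illustrative only (OPTOOL itself is implemented in MATLAB); the grid size, aberration coefficients and normalisation are assumptions:

    import numpy as np

    def psf_from_pupil(aperture, wfe_waves):
        # Direct Fourier-optics PSF: squared magnitude of the FFT of the complex pupil
        pupil = aperture * np.exp(2j * np.pi * wfe_waves)
        field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
        psf = np.abs(field) ** 2
        return psf / psf.sum()

    n = 256
    y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
    r = np.hypot(x, y)
    aperture = (r <= 1.0).astype(float)        # unobscured circular pupil (toy geometry)

    # Toy wavefront-error map in waves: a little defocus plus astigmatism, illustrative only
    wfe = aperture * (0.05 * (2.0 * r**2 - 1.0) + 0.02 * (x**2 - y**2))

    psf_aberrated = psf_from_pupil(aperture, wfe)
    psf_perfect = psf_from_pupil(aperture, np.zeros_like(wfe))
    print("Approximate Strehl ratio:", psf_aberrated.max() / psf_perfect.max())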

  19. Modeling ground-based timber harvesting systems using computer simulation

    Science.gov (United States)

    Jingxin Wang; Chris B. LeDoux

    2001-01-01

    Modeling ground-based timber harvesting systems with an object-oriented methodology was investigated. Object-oriented modeling and design promote a better understanding of requirements, cleaner designs, and better maintainability of the harvesting simulation system. The model developed simulates chainsaw felling, drive-to-tree feller-buncher, swing-to-tree single-grip...

  20. Simulation Modeling of a Facility Layout in Operations Management Classes

    Science.gov (United States)

    Yazici, Hulya Julie

    2006-01-01

    Teaching quantitative courses can be challenging. Similarly, layout modeling and lean production concepts can be difficult to grasp in an introductory OM (operations management) class. This article describes a simulation model developed in PROMODEL to facilitate the learning of layout modeling and lean manufacturing. Simulation allows for the…

  1. Dynamic wind turbine models in power system simulation tool

    DEFF Research Database (Denmark)

    Hansen, A.; Jauch, Clemens; Soerensen, P.

    The present report describes the dynamic wind turbine models implemented in the power system simulation tool DIgSILENT. The developed models are a part of the results of a national research project, whose overall objective is to create a model database in different simulation tools. The report...

  2. A New Model for Simulating TSS Washoff in Urban Areas

    Directory of Open Access Journals (Sweden)

    E. Crobeddu

    2011-01-01

    Full Text Available This paper presents the formulation and validation of the conceptual Runoff Quality Simulation Model (RQSM) that was developed to simulate the erosion and transport of solid particles in urban areas. The RQSM assumes that solid particle accumulation on pervious and impervious areas is infinite. The RQSM simulates soil erosion using rainfall kinetic energy and solid particle transport with linear system theory. A sensitivity analysis was conducted on the RQSM to show the influence of each parameter on the simulated load. Total suspended solid (TSS) loads monitored at the outlet of the borough of Verdun in Canada and at three catchment outlets of the City of Champaign in the United States were used to validate the RQSM. TSS loads simulated by the RQSM were compared to measured loads and to loads simulated by the Rating Curve model and the Exponential model of the SWMM software. The simulation performance of the RQSM was comparable to the Exponential and Rating Curve models.
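
    For context on the comparison models named above, the SWMM-style exponential washoff formulation is often written as W = C1·q^C2·B, where B is the remaining surface buildup and q the runoff rate. The sketch below is a generic illustration with hypothetical coefficients, not the RQSM or the calibrated SWMM models of the study:

    import numpy as np

    def exponential_washoff(runoff_mm_per_hr, buildup0_kg, c1=0.2, c2=1.1, dt_hr=0.25):
        # Exponential washoff: rate W = C1 * q**C2 * B, integrated with an explicit time step
        buildup = buildup0_kg
        loads = []
        for q in runoff_mm_per_hr:
            rate = c1 * q**c2 * buildup          # washoff rate (kg/hr)
            rate = min(rate, buildup / dt_hr)    # cannot wash off more than remains
            buildup -= rate * dt_hr
            loads.append(rate * dt_hr)           # TSS load exported this time step (kg)
        return np.array(loads), buildup

    loads, remaining = exponential_washoff([0.0, 2.0, 6.0, 4.0, 1.0], buildup0_kg=50.0)
    print("Per-step TSS loads (kg):", np.round(loads, 2), "buildup left (kg):", round(remaining, 2))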

  3. Business Process Simulation: Requirements for Business and Resource Models

    Directory of Open Access Journals (Sweden)

    Audrius Rima

    2015-07-01

    Full Text Available The purpose of Business Process Model and Notation (BPMN) is to provide an easily understandable graphical representation of business processes. Thus BPMN is widely used and applied in various areas, one of them being business process simulation. This paper addresses some BPMN model based business process simulation problems. The paper formulates requirements for business process and resource models in enabling their use for business process simulation.

  4. Mathematical model and simulations of radiation fluxes from buried radionuclides

    International Nuclear Information System (INIS)

    Ahmad Saat

    1999-01-01

    A mathematical model and a simple Monte Carlo simulation were developed to predict radiation fluxes from buried radionuclides. The model and simulation were applied to measured (experimental) data. The results of the mathematical model showed acceptable order-of-magnitude agreement. A good agreement was also obtained between the simple simulation and the experimental results. Thus, knowing the radionuclide distribution profiles in soil from a core sample, one can apply them to the model or simulation to estimate the radiation fluxes emerging from the soil surface. (author)
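
    The record does not give the authors' equations, but a minimal Monte Carlo estimate of the relative photon flux escaping a soil column can be sketched as follows; the attenuation coefficient, depth profile and isotropic-emission assumption are all hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)

    def surface_flux_fraction(depth_profile_cm, activity_per_layer, mu_cm=0.1, n_samples=200_000):
        # Crude Monte Carlo estimate of the fraction of emitted photons reaching the surface.
        # Photons start at sampled depths (weighted by layer activity) with isotropic
        # directions; upward-going photons are attenuated as exp(-mu * slant path length).
        depths = np.asarray(depth_profile_cm, float)
        weights = np.asarray(activity_per_layer, float)
        weights = weights / weights.sum()

        z = rng.choice(depths, size=n_samples, p=weights)      # emission depth (cm)
        cos_theta = rng.uniform(-1.0, 1.0, size=n_samples)     # isotropic emission
        upward = cos_theta > 0.0
        path = z[upward] / cos_theta[upward]                   # slant path to the surface
        escaped = np.exp(-mu_cm * path)                        # survival probability
        return escaped.sum() / n_samples

    # Example: activity concentrated near the surface; linear attenuation coefficient assumed
    print(surface_flux_fraction([1, 3, 5, 10], [5, 3, 1, 0.5]))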

  5. Quantum Link Models and Quantum Simulation of Gauge Theories

    International Nuclear Information System (INIS)

    Wiese, U.J.

    2015-01-01

    This lecture is about Quantum Link Models and Quantum Simulation of Gauge Theories. The lecture consists of four parts. The first part gives a brief history of computing and introduces pioneers of quantum computing and quantum simulations of quantum spin systems. The second part is about High-Temperature Superconductors versus QCD, Wilson's Lattice QCD and Abelian Quantum Link Models. The third part deals with Quantum Simulators for Abelian Lattice Gauge Theories and Non-Abelian Quantum Link Models. The last part of the lecture discusses Quantum Simulators mimicking 'Nuclear' physics and the continuum limit of D-theory models. (nowak)

  6. Modeling and Simulation of U-tube Steam Generator

    Science.gov (United States)

    Zhang, Mingming; Fu, Zhongguang; Li, Jinyao; Wang, Mingfei

    2018-03-01

    This article focuses on the modeling and simulation of a U-tube natural circulation steam generator. The research is based on the simuworks system simulation software platform. By analyzing the structural characteristics and the operating principle of the U-tube steam generator, the model is divided into 14 control volumes, including the primary side, secondary side, down channel and steam plenum, etc. The model depends entirely on conservation laws, and it is used to perform several simulation tests. The results show that the model is capable of properly simulating the dynamic response of the U-tube steam generator.

  7. Functional Decomposition of Modeling and Simulation Terrain Database Generation Process

    National Research Council Canada - National Science Library

    Yakich, Valerie R; Lashlee, J. D

    2008-01-01

    .... This report documents the conceptual procedure as implemented by Lockheed Martin Simulation, Training, and Support and decomposes terrain database construction using the Integration Definition for Function Modeling (IDEF...

  8. Global Information Enterprise (GIE) Modeling and Simulation (GIESIM)

    National Research Council Canada - National Science Library

    Bell, Paul

    2005-01-01

    ... AND S) toolkits into the Global Information Enterprise (GIE) Modeling and Simulation (GIESim) framework to create effective user analysis of candidate communications architectures and technologies...

  9. Modeling, Simulation and Position Control of 3DOF Articulated Manipulator

    Directory of Open Access Journals (Sweden)

    Hossein Sadegh Lafmejani

    2014-08-01

    Full Text Available In this paper, the modeling, simulation and control of a 3-degree-of-freedom articulated robotic manipulator have been studied. First, we extracted the kinematic and dynamic equations of the manipulator by using the Lagrange method. In order to validate the analytical model of the manipulator, we compared the model simulated in the Matlab simulation environment with the model simulated with the SimMechanics toolbox. A sample path was designed for analyzing the trajectory tracking problem. The system was linearized with feedback linearization and then a PID controller was applied to track a reference trajectory. Finally, the control results were compared with those of a nonlinear PID controller.
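
    As an illustration of the control scheme described above (feedback linearization followed by PID tracking), each joint of the linearized system reduces to a double integrator driven by an outer-loop PID law. The sketch below uses hypothetical gains and a made-up reference trajectory; it is not the authors' Matlab/SimMechanics model:

    import numpy as np

    # After feedback linearization each joint behaves like a double integrator q_ddot = v,
    # with v = qdd_ref + Kd*(qd_ref - qd) + Kp*(q_ref - q) + Ki*integral(q_ref - q)
    kp, ki, kd = 100.0, 20.0, 25.0
    dt, t_end = 0.001, 2.0
    t = np.arange(0.0, t_end, dt)
    q_ref = 0.5 * np.sin(2.0 * np.pi * 0.5 * t)      # hypothetical reference joint angle (rad)
    qd_ref = np.gradient(q_ref, dt)
    qdd_ref = np.gradient(qd_ref, dt)

    q, qd, e_int = 0.0, 0.0, 0.0
    errors = []
    for k in range(len(t)):
        e = q_ref[k] - q
        e_int += e * dt
        v = qdd_ref[k] + kd * (qd_ref[k] - qd) + kp * e + ki * e_int   # outer-loop PID command
        qd += v * dt                                                   # integrate the double integrator
        q += qd * dt
        errors.append(e)

    print("max |tracking error| (rad):", max(abs(x) for x in errors))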

  10. Simulation of windblown dust transport from a mine tailings impoundment using a computational fluid dynamics model

    Science.gov (United States)

    Stovern, Michael; Felix, Omar; Csavina, Janae; Rine, Kyle P.; Russell, MacKenzie R.; Jones, Robert M.; King, Matt; Betterton, Eric A.; Sáez, A. Eduardo

    2014-01-01

    Mining operations are potential sources of airborne particulate metal and metalloid contaminants through both direct smelter emissions and wind erosion of mine tailings. The warmer, drier conditions predicted for the Southwestern US by climate models may make contaminated atmospheric dust and aerosols increasingly important, due to potential deleterious effects on human health and ecology. Dust emissions and dispersion of dust and aerosol from the Iron King Mine tailings in Dewey-Humboldt, Arizona, a Superfund site, are currently being investigated through in situ field measurements and computational fluid dynamics modeling. These tailings are heavily contaminated with lead and arsenic. Using a computational fluid dynamics model, we model dust transport from the mine tailings to the surrounding region. The model includes gaseous plume dispersion to simulate the transport of the fine aerosols, while individual particle transport is used to track the trajectories of larger particles and to monitor their deposition locations. In order to improve the accuracy of the dust transport simulations, both regional topographical features and local weather patterns have been incorporated into the model simulations. Results show that local topography and wind velocity profiles are the major factors that control deposition. PMID:25621085

  11. Simulation of windblown dust transport from a mine tailings impoundment using a computational fluid dynamics model.

    Science.gov (United States)

    Stovern, Michael; Felix, Omar; Csavina, Janae; Rine, Kyle P; Russell, MacKenzie R; Jones, Robert M; King, Matt; Betterton, Eric A; Sáez, A Eduardo

    2014-09-01

    Mining operations are potential sources of airborne particulate metal and metalloid contaminants through both direct smelter emissions and wind erosion of mine tailings. The warmer, drier conditions predicted for the Southwestern US by climate models may make contaminated atmospheric dust and aerosols increasingly important, due to potential deleterious effects on human health and ecology. Dust emissions and dispersion of dust and aerosol from the Iron King Mine tailings in Dewey-Humboldt, Arizona, a Superfund site, are currently being investigated through in situ field measurements and computational fluid dynamics modeling. These tailings are heavily contaminated with lead and arsenic. Using a computational fluid dynamics model, we model dust transport from the mine tailings to the surrounding region. The model includes gaseous plume dispersion to simulate the transport of the fine aerosols, while individual particle transport is used to track the trajectories of larger particles and to monitor their deposition locations. In order to improve the accuracy of the dust transport simulations, both regional topographical features and local weather patterns have been incorporated into the model simulations. Results show that local topography and wind velocity profiles are the major factors that control deposition.

  12. Design base transient analysis using the real-time nuclear reactor simulator model

    International Nuclear Information System (INIS)

    Tien, K.K.; Yakura, S.J.; Morin, J.P.; Gregory, M.V.

    1987-01-01

    A real-time simulation model has been developed to describe the dynamic response of all major systems in a nuclear process reactor. The model consists of a detailed representation of all hydraulic components in the external coolant circulating loops consisting of piping, valves, pumps and heat exchangers. The reactor core is described by a three-dimensional neutron kinetics model with detailed representation of assembly coolant and moderator thermal hydraulics. The models have been developed to support a real-time training simulator, therefore, they reproduce system parameters characteristic of steady state normal operation with high precision. The system responses for postulated severe transients such as large pipe breaks, loss of pumping power, piping leaks, malfunctions in control rod insertion, and emergency injection of neutron absorber are calculated to be in good agreement with reference safety analyses. Restrictions were imposed by the requirement that the resulting code be able to run in real-time with sufficient spare time to allow interfacing with secondary systems and simulator hardware. Due to hardware set-up and real plant instrumentation, simplifications due to symmetry were not allowed. The resulting code represents a coarse-node engineering model in which the level of detail has been tailored to the available computing power of a present generation super-minicomputer. Results for several significant transients, as calculated by the real-time model, are compared both to actual plant data and to results generated by fine-mesh analysis codes

  13. Design base transient analysis using the real-time nuclear reactor simulator model

    International Nuclear Information System (INIS)

    Tien, K.K.; Yakura, S.J.; Morin, J.P.; Gregory, M.V.

    1987-01-01

    A real-time simulation model has been developed to describe the dynamic response of all major systems in a nuclear process reactor. The model consists of a detailed representation of all hydraulic components in the external coolant circulating loops consisting of piping, valves, pumps and heat exchangers. The reactor core is described by a three-dimensional neutron kinetics model with detailed representation of assembly coolant and moderator thermal hydraulics. The models have been developed to support a real-time training simulator, therefore, they reproduce system parameters characteristic of steady state normal operation with high precision. The system responses for postulated severe transients such as large pipe breaks, loss of pumping power, piping leaks, malfunctions in control rod insertion, and emergency injection of neutron absorber are calculated to be in good agreement with reference safety analyses. Restrictions were imposed by the requirement that the resulting code be able to run in real-time with sufficient spare time to allow interfacing with secondary systems and simulator hardware. Due to hardware set-up and real plant instrumentation, simplifications due to symmetry were not allowed. The resulting code represents a coarse-node engineering model in which the level of detail has been tailored to the available computing power of a present generation super-minicomputer. Results for several significant transients, as calculated by the real-time model, are compared both to actual plant data and to results generated by fine-mesh analysis codes

  14. Development of a simulation model of semi-active suspension for monorail

    Science.gov (United States)

    Hasnan, K.; Didane, D. H.; Kamarudin, M. A.; Bakhsh, Qadir; Abdulmalik, R. E.

    2016-11-01

    The new Kuala Lumpur Monorail Fleet Expansion Project (KLMFEP) uses semi-active technology in its suspension system. It is recognized that the suspension system influences the ride quality. Thus, one way to further improve the ride quality is by fine-tuning the semi-active suspension system on the new KL Monorail. The semi-active suspension of the monorail could be exploited further, specifically in terms of improving ride quality. Hence a simulation model is required, which will act as a platform to test the design of a complete suspension system, particularly to investigate ride comfort performance. MSC Adams software was considered as the tool to develop the simulation platform, where all parameters and data are represented by mathematical equations, with the new KL Monorail being the reference model. In the simulation, the model was subjected to a step disturbance on the guideway for stability and ride comfort analysis. The model has shown positive results: the stability analysis indicates that the monorail remains in a stable condition, and the ride comfort analysis yields a Rating 1 classification under ISO 2631, which is very comfortable. The model is also adjustable, flexible and understandable by engineers within the field for the purpose of further development.

  15. MODEL OF HEAT SIMULATOR FOR DATA CENTERS

    Directory of Open Access Journals (Sweden)

    Jan Novotný

    2016-08-01

    Full Text Available The aim of this paper is to present the design and development of a heat simulator, which will be used for flow research in data centers. The designed heat simulator is conceptually based on a four-processor 1U Supermicro server. It enables control of the flow and heat output within the range of 10–100%. The paper also covers the results of testing measurements of mass flow rates and heat flow rates in the simulator. The flow field at the outlet of the server was measured by the stereo PIV method. The heat flow rate was determined from the measured temperature fields at the inlet and outlet of the simulator and the known mass flow rate.

  16. A moving subgrid model for simulation of reflood heat transfer

    International Nuclear Information System (INIS)

    Frepoli, Cesare; Mahaffy, John H.; Hochreiter, Lawrence E.

    2003-01-01

    In the quench front and froth region the thermal-hydraulic parameters experience a sharp axial variation. The heat transfer regime changes from single-phase liquid, to nucleate boiling, to transition boiling and finally to film boiling in a small axial distance. One of the major limitations of all the current best-estimate codes is that a relatively coarse mesh is used to solve the complex fluid flow and heat transfer problem in proximity of the quench front during reflood. The use of a fine axial mesh for the entire core becomes prohibitive because of the large computational costs involved. Moreover, as the mesh size decreases, the standard numerical methods based on a semi-implicit scheme, tend to become unstable. A subgrid model was developed to resolve the complex thermal-hydraulic problem at the quench front and froth region. This model is a Fine Hydraulic Moving Grid (FHMG) that overlies a coarse Eulerian mesh in the proximity of the quench front and froth region. The fine mesh moves in the core and follows the quench front as it advances in the core while the rods cool and quench. The FHMG software package was developed and implemented into the COBRA-TF computer code. This paper presents the model and discusses preliminary results obtained with the COBRA-TF/FHMG computer code

  17. Common modelling approaches for training simulators for nuclear power plants

    International Nuclear Information System (INIS)

    1990-02-01

    Training simulators for nuclear power plant operating staff have gained increasing importance over the last twenty years. One of the recommendations of the 1983 IAEA Specialists' Meeting on Nuclear Power Plant Training Simulators in Helsinki was to organize a Co-ordinated Research Programme (CRP) on some aspects of training simulators. The goal statement was: "To establish and maintain a common approach to modelling for nuclear training simulators based on defined training requirements". Before adopting this goal statement, the participants considered many alternatives for defining the common aspects of training simulator models, such as the programming language used, the nature of the simulator computer system, the size of the simulation computers, the scope of simulation. The participants agreed that it was the training requirements that defined the need for a simulator, the scope of models and hence the type of computer complex that was required, the criteria for fidelity and verification, and was therefore the most appropriate basis for the commonality of modelling approaches. It should be noted that the Co-ordinated Research Programme was restricted, for a variety of reasons, to consider only a few aspects of training simulators. This report reflects these limitations, and covers only the topics considered within the scope of the programme. The information in this document is intended as an aid for operating organizations to identify possible modelling approaches for training simulators for nuclear power plants. 33 refs

  18. Qualitative and Quantitative Integrated Modeling for Stochastic Simulation and Optimization

    Directory of Open Access Journals (Sweden)

    Xuefeng Yan

    2013-01-01

    Full Text Available The simulation and optimization of an actual physical system are usually constructed based on stochastic models, which have both qualitative and quantitative characteristics inherently. Most modeling specifications and frameworks find it difficult to describe the qualitative model directly. In order to deal with expert knowledge, uncertain reasoning, and other qualitative information, a combined qualitative and quantitative modeling specification was proposed based on a hierarchical model structure framework. The new modeling approach is based on a hierarchical model structure which includes the meta-meta model, the meta-model and the high-level model. A description logic system is defined for the formal definition and verification of the new modeling specification. A stochastic defense simulation was developed to illustrate how to model the system and optimize the result. The result shows that the proposed method can describe the complex system more comprehensively, and the survival probability of the target is higher when qualitative models are introduced into the quantitative simulation.

  19. Intelligent simulation of aquatic environment economic policy coupled ABM and SD models.

    Science.gov (United States)

    Wang, Huihui; Zhang, Jiarui; Zeng, Weihua

    2018-03-15

    Rapid urbanization and population growth have resulted in serious water shortages and pollution of the aquatic environment, which are important reasons for the increasingly complex environmental deterioration in the region. This study examines the environmental consequences and economic impacts of water resource shortages under variant economic policies; however, this requires complex models that jointly consider variant agents and sectors within a systems perspective. Thus, we propose a complex system model that couples multi-agent based models (ABM) and system dynamics (SD) models to simulate the impact of alternative economic policies on water use and pricing. Moreover, this model takes the constraint of the local water resources carrying capacity into consideration. Results show that to achieve the 13th Five Year Plan targets in Dianchi, water prices for local residents and industries should rise to 3.23 and 4.99 CNY/m³, respectively. The corresponding sewage treatment fees for residents and industries should rise to 1.50 and 2.25 CNY/m³, respectively, assuming comprehensive adjustment of industrial structure and policy. At the same time, the local government should exercise fine-scale economic policy combined with emission fees for discharges exceeding a standard, and collect fines imposed as punishment on enterprises that exceed emission standards. When fines reach 500,000 CNY, the proportion of enterprises in the basin that exceed emission standards can be kept within 1%. Moreover, it is suggested that the volume of water diversion in Dianchi should be appropriately reduced to 3.06×10⁸ m³. The expense saved on water diversion should provide funds for the construction of recycled water facilities. The local rate of recycled water use should then rise to 33%, and a recycled water price of 1.4 CNY/m³ could be set to ensure the sustainable utilization of local water resources.

  20. Modeling and Simulation of a 12 MW Wind Farm

    Directory of Open Access Journals (Sweden)

    GROZA, V.

    2010-05-01

    Full Text Available The installation of wind turbines in power systems has developed rapidly over the last 20 years. In this paper a complete simulation model of a wind farm comprising six 2 MW wind turbines is presented, using data from a wind farm installed in Denmark. A model of the wind turbine with a cage-rotor induction generator is presented in detail. A set of simulations is performed, and they show that it is possible to simulate a complete wind farm from the wind to the grid. The simulation tool can also be used to simulate bigger wind farms connected to the grid.

  1. Theory of compressive modeling and simulation

    Science.gov (United States)

    Szu, Harold; Cha, Jae; Espinola, Richard L.; Krapels, Keith

    2013-05-01

    Modeling and Simulation (M&S) has been evolving along two general directions: (i) the data-rich approach, suffering from the curse of dimensionality, and (ii) the equation-rich approach, suffering from limits on computing power and turnaround time. We suggest a third approach. We call it (iii) compressive M&S (CM&S), because the basic Minimum Free-Helmholtz Energy (MFE) facilitating CM&S can reproduce and generalize the Candes, Romberg, Tao & Donoho (CRT&D) Compressive Sensing (CS) paradigm as a linear Lagrange Constraint Neural Network (LCNN) algorithm. CM&S based on MFE can generalize LCNN to 2nd order as a nonlinear augmented LCNN. For example, during the sunset, we can avoid a reddish bias of sunlight illumination due to long-range Rayleigh scattering over the horizon. With CM&S we can take, instead of a day camera, a night vision camera. We decomposed the long wave infrared (LWIR) band with filters into 2 vector components (8~10μm and 10~12μm) and used LCNN to find pixel by pixel the map of Emissive-Equivalent Planck Radiation Sources (EPRS). Then, we up-shifted consistently, according to the de-mixed sources map, to the sub-micron RGB color image. Moreover, the night vision imaging can also be down-shifted to Passive Millimeter Wave (PMMW) imaging, suffering less blur owing to scattering by dusty smoke and enjoying apparent smoothness of the surface reflectivity of man-made objects under the Rayleigh resolution. One loses three orders of magnitude in the spatial Rayleigh resolution, but gains two orders of magnitude in the reflectivity, and gains another two orders in the propagation without obscuring smog. Since CM&S can generate missing data and hard-to-get dynamic transients, CM&S can reduce unnecessary measurements and their associated cost and computing, in the sense of super-saving CS: measuring one and getting one's neighborhood free.

  2. Architecture oriented modeling and simulation method for combat mission profile

    Directory of Open Access Journals (Sweden)

    CHEN Xia

    2017-05-01

    Full Text Available In order to effectively analyze the system behavior and system performance of a combat mission profile, an architecture-oriented modeling and simulation method is proposed. Starting from architecture modeling, this paper describes the mission profile based on the definition from the National Military Standard of China and the US Department of Defense Architecture Framework (DoDAF) model, and constructs the architecture model of the mission profile. Then the transformation relationship between the architecture model and the agent simulation model is proposed to form the mission profile executable model. Finally, taking the air-defense mission profile as an example, the agent simulation model is established based on the architecture model, and the input and output relations of the simulation model are analyzed. It provides method guidance for combat mission profile design.

  3. Modelling and simulation of containment on full scope simulator for Qinshan 300 MW Nuclear Power Unit

    International Nuclear Information System (INIS)

    Zou Tingyun

    1996-01-01

    A multi-node containment thermal-hydraulic model has been developed and adapted for the full-scope simulator of the Qinshan 300 MW Nuclear Power Unit, with good real-time simulation performance. The containment pressure for a LBLOCA calculated by the model agrees well with that of CONTEMPT-4/MOD3

  4. Impact of 2000–2050 climate change on fine particulate matter (PM2.5) air quality inferred from a multi-model analysis of meteorological modes

    Directory of Open Access Journals (Sweden)

    D. J. Jacob

    2012-12-01

    Full Text Available Studies of the effect of climate change on fine particulate matter (PM2.5) air quality using general circulation models (GCMs) show inconsistent results including in the sign of the effect. This reflects uncertainty in the GCM simulations of the regional meteorological variables affecting PM2.5. Here we use the CMIP3 archive of data from fifteen different IPCC AR4 GCMs to obtain improved statistics of 21st-century trends in the meteorological modes driving PM2.5 variability over the contiguous US. We analyze 1999–2010 observations to identify the dominant meteorological modes driving interannual PM2.5 variability and their synoptic periods T. We find robust correlations (r > 0.5) of annual mean PM2.5 with T, especially in the eastern US where the dominant modes represent frontal passages. The GCMs all have significant skill in reproducing present-day statistics for T and we show that this reflects their ability to simulate atmospheric baroclinicity. We then use the local PM2.5-to-period sensitivity (dPM2.5/dT) from the 1999–2010 observations to project PM2.5 changes from the 2000–2050 changes in T simulated by the 15 GCMs following the SRES A1B greenhouse warming scenario. By weighted-average statistics of GCM results we project a likely 2000–2050 increase of ~ 0.1 μg m−3 in annual mean PM2.5 in the eastern US arising from less frequent frontal ventilation, and a likely decrease albeit with greater inter-GCM variability in the Pacific Northwest due to more frequent maritime inflows. Potentially larger regional effects of 2000–2050 climate change on PM2.5 may arise from changes in temperature, biogenic emissions, wildfires, and vegetation, but are still unlikely to affect annual PM2.5 by more than 0.5 μg m−3.
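
    The projection step described above amounts to multiplying an observed sensitivity dPM2.5/dT by a GCM-projected change in the synoptic period T. A minimal sketch with made-up numbers (the observed series and the ΔT below are purely illustrative, not the study's data):

    import numpy as np

    # Invented annual values of synoptic period T (days) and annual-mean PM2.5 (µg m-3)
    t_obs = np.array([3.8, 4.1, 4.4, 3.9, 4.6, 4.2, 4.0, 4.5, 4.3, 4.1, 4.4, 3.9])
    pm25_obs = np.array([12.5, 12.9, 13.6, 12.4, 14.1, 13.2, 12.8, 13.9, 13.4, 13.0, 13.7, 12.6])

    slope, intercept = np.polyfit(t_obs, pm25_obs, 1)   # dPM2.5/dT from the observations
    delta_t_gcm = 0.05                                  # hypothetical multi-GCM mean 2000-2050 change in T (days)
    delta_pm25 = slope * delta_t_gcm
    print(f"dPM2.5/dT = {slope:.2f} ug m-3 per day; projected change = {delta_pm25:.2f} ug m-3")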

  5. Medical simulation: Overview, and application to wound modelling and management

    Directory of Open Access Journals (Sweden)

    Dinker R Pai

    2012-01-01

    Full Text Available Simulation in medical education is progressing in leaps and bounds. The need for simulation in medical education and training is increasing because of a) overall increase in the number of medical students vis-à-vis the availability of patients; b) increasing awareness among patients of their rights and consequent increase in litigations and c) tremendous improvement in simulation technology which makes simulation more and more realistic. Simulation in wound care can be divided into use of simulation in wound modelling (to test the effect of projectiles on the body) and simulation for training in wound management. Though this science is still in its infancy, more and more researchers are now devising both low-technology and high-technology (virtual reality) simulators in this field. It is believed that simulator training will eventually translate into better wound care in real patients, though this will be the subject of further research.

  6. Medical simulation: Overview, and application to wound modelling and management.

    Science.gov (United States)

    Pai, Dinker R; Singh, Simerjit

    2012-05-01

    Simulation in medical education is progressing in leaps and bounds. The need for simulation in medical education and training is increasing because of a) overall increase in the number of medical students vis-à-vis the availability of patients; b) increasing awareness among patients of their rights and consequent increase in litigations and c) tremendous improvement in simulation technology which makes simulation more and more realistic. Simulation in wound care can be divided into use of simulation in wound modelling (to test the effect of projectiles on the body) and simulation for training in wound management. Though this science is still in its infancy, more and more researchers are now devising both low-technology and high-technology (virtual reality) simulators in this field. It is believed that simulator training will eventually translate into better wound care in real patients, though this will be the subject of further research.

  7. Understanding Emergency Care Delivery Through Computer Simulation Modeling.

    Science.gov (United States)

    Laker, Lauren F; Torabi, Elham; France, Daniel J; Froehle, Craig M; Goldlust, Eric J; Hoot, Nathan R; Kasaie, Parastu; Lyons, Michael S; Barg-Walkow, Laura H; Ward, Michael J; Wears, Robert L

    2018-02-01

    In 2017, Academic Emergency Medicine convened a consensus conference entitled, "Catalyzing System Change through Health Care Simulation: Systems, Competency, and Outcomes." This article, a product of the breakout session on "understanding complex interactions through systems modeling," explores the role that computer simulation modeling can and should play in research and development of emergency care delivery systems. This article discusses areas central to the use of computer simulation modeling in emergency care research. The four central approaches to computer simulation modeling are described (Monte Carlo simulation, system dynamics modeling, discrete-event simulation, and agent-based simulation), along with problems amenable to their use and relevant examples to emergency care. Also discussed is an introduction to available software modeling platforms and how to explore their use for research, along with a research agenda for computer simulation modeling. Through this article, our goal is to enhance adoption of computer simulation, a set of methods that hold great promise in addressing emergency care organization and design challenges. © 2017 by the Society for Academic Emergency Medicine.
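
    As a small illustration of one of the four approaches named above (discrete-event simulation), the sketch below models a single-provider emergency department as an M/M/1 queue using the Lindley recursion; the arrival and service rates are hypothetical and not drawn from the article:

    import random

    random.seed(1)

    def simulate_ed_waits(n_patients=100_000, arrival_rate=4.0, service_rate=5.0):
        # Single-provider ED as an M/M/1 queue via the Lindley recursion:
        #   W_next = max(0, W_current + service - interarrival)
        # arrival_rate and service_rate are patients per hour; returns the mean wait in hours.
        wait, total_wait = 0.0, 0.0
        for _ in range(n_patients):
            interarrival = random.expovariate(arrival_rate)
            service = random.expovariate(service_rate)
            wait = max(0.0, wait + service - interarrival)
            total_wait += wait
        return total_wait / n_patients

    # M/M/1 theory for these rates gives mean wait = lambda / (mu * (mu - lambda)) = 0.8 h
    print(f"Simulated mean waiting time: {60 * simulate_ed_waits():.1f} minutes")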

  8. Vehicle Modeling for Future Generation Transportation Simulation

    Science.gov (United States)

    2009-05-10

    Recent developments of inter-vehicular wireless communication technologies have motivated many innovative applications aiming at significantly increasing traffic throughput and improving highway safety. Powerful traffic simulation is an indispensable ...

  9. A Neural Network Model for Dynamics Simulation

    African Journals Online (AJOL)

    Nafiisah

    ... situations, such as a dynamic environment (e.g., a molecular dynamics (MD) simulation whereby an atom constantly changes its local environment and number ... of systems including both small clusters and bulk structures.

  10. Induction generator models in dynamic simulation tools

    DEFF Research Database (Denmark)

    Knudsen, Hans; Akhmatov, Vladislav

    1999-01-01

    For AC networks with a large number of induction generators (windmills) the paper demonstrates a significant discrepancy in the simulated voltage recovery after faults in weak networks when comparing dynamic and transient stability descriptions, and the reasons for the discrepancies are explained...

  11. SIMULATION TOOLS FOR ELECTRICAL MACHINES MODELLING ...

    African Journals Online (AJOL)

    Dr Obe

    ABSTRACT. Simulation tools are used both for research and teaching to allow a good ... The solution provide an easy way of determining the dynamic .... incorporate an in-built numerical algorithm, ... to learn, versatile in application, enhanced.

  12. A numerical model for simulating electroosmotic micro- and nanochannel flows under non-Boltzmann equilibrium

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyoungjin; Kwak, Ho Sang [School of Mechanical Engineering, Kumoh National Institute of Technology, 1 Yangho, Gumi, Gyeongbuk 730-701 (Korea, Republic of); Song, Tae-Ho, E-mail: kimkj@kumoh.ac.kr, E-mail: hskwak@kumoh.ac.kr, E-mail: thsong@kaist.ac.kr [Department of Mechanical, Aerospace and Systems Engineering, Korea Advanced Institute of Science and Technology, 373-1 Guseong, Yuseong, Daejeon 305-701 (Korea, Republic of)

    2011-08-15

    This paper describes a numerical model for simulating electroosmotic flows (EOFs) under non-Boltzmann equilibrium in a micro- and nanochannel. The transport of ionic species is represented by employing the Nernst-Planck equation. Modeling issues related to numerical difficulties are discussed, which include the handling of boundary conditions based on surface charge density, the associated treatment of electric potential and the evasion of nonlinearity due to the electric body force. The EOF in the entrance region of a straight channel is examined. The numerical results show that the present model is useful for the prediction of the EOFs requiring a fine resolution of the electric double layer under either the Boltzmann equilibrium or non-equilibrium. Based on the numerical results, the correlation between the surface charge density and the zeta potential is investigated.
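
    For background on the electric double layer (EDL) resolution mentioned above: in the low-potential Debye–Hückel limit the wall-normal potential decays as ψ(y) = ζ·exp(−y/λD), with λD the Debye length. The sketch below is a generic illustration with assumed electrolyte properties, not the authors' Nernst–Planck model:

    import numpy as np

    e = 1.602e-19        # elementary charge (C)
    kB = 1.381e-23       # Boltzmann constant (J/K)
    eps0 = 8.854e-12     # vacuum permittivity (F/m)
    NA = 6.022e23        # Avogadro's number (1/mol)

    def debye_length(c_mol_per_L, T=298.0, eps_r=78.5, z=1):
        # Debye length of a symmetric z:z electrolyte at molar concentration c
        n0 = 1000.0 * c_mol_per_L * NA                      # ion number density (1/m^3)
        return np.sqrt(eps_r * eps0 * kB * T / (2.0 * n0 * (z * e) ** 2))

    zeta = -0.025                                            # zeta potential (V), small enough for Debye-Hückel
    lam = debye_length(1e-3)                                 # 1 mM electrolyte (hypothetical)
    y = np.linspace(0.0, 10 * lam, 200)                      # distance from the channel wall (m)
    psi = zeta * np.exp(-y / lam)                            # linearized EDL potential profile
    idx = np.searchsorted(y, 3 * lam)
    print(f"Debye length ~ {lam * 1e9:.1f} nm; potential at 3 Debye lengths ~ {psi[idx] * 1e3:.2f} mV")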

  13. Multiscale Modeling and Simulation of Material Processing

    Science.gov (United States)

    2006-07-01

    ... challenge is how to develop methods that permit simulation of a process with a smaller number of atoms (e.g. 10⁶ instead of 10¹⁴ atoms in a cube) ... In dynamic simulations, the mass and momentum ... involving rapidly varying stress, such as the stress field near a ... significant, as indicated by numerical examples that follow. We next summarize the coupling scheme with the aid of the flowchart in Fig. 8. The material ...

  14. Discrete event simulation modelling of patient service management with Arena

    Science.gov (United States)

    Guseva, Elena; Varfolomeyeva, Tatyana; Efimova, Irina; Movchan, Irina

    2018-05-01

    This paper describes a simulation modeling methodology aimed at aiding the solution of practical problems in researching and analysing complex systems. The paper gives a review of a simulation platform and an example of simulation model development with Arena 15.0 (Rockwell Automation). The provided example of the simulation model for patient service management helps to evaluate the workload of the clinic doctors, determine the number of general practitioners, surgeons, traumatologists and other specialized doctors required for patient service, and develop recommendations to ensure timely delivery of medical care and improve the efficiency of the clinic operation.

  15. Stochastic models to simulate paratuberculosis in dairy herds

    DEFF Research Database (Denmark)

    Nielsen, Søren Saxmose; Weber, M.F.; Kudahl, Anne Margrethe Braad

    2011-01-01

    Stochastic simulation models are widely accepted as a means of assessing the impact of changes in daily management and the control of different diseases, such as paratuberculosis, in dairy herds. This paper summarises and discusses the assumptions of four stochastic simulation models and their use...... the models are somewhat different in their underlying principles and do put slightly different values on the different strategies, their overall findings are similar. Therefore, simulation models may be useful in planning paratuberculosis strategies in dairy herds, although as with all models caution...

  16. A Simulation Model Articulation of the REA Ontology

    Science.gov (United States)

    Laurier, Wim; Poels, Geert

    This paper demonstrates how the REA enterprise ontology can be used to construct simulation models for business processes, value chains and collaboration spaces in supply chains. These models support various high-level and operational management simulation applications, e.g. the analysis of enterprise sustainability and day-to-day planning. First, the basic constructs of the REA ontology and the ExSpect modelling language for simulation are introduced. Second, collaboration space, value chain and business process models and their conceptual dependencies are shown, using the ExSpect language. Third, an exhibit demonstrates the use of value chain models in predicting the financial performance of an enterprise.

  17. Impact of reactive settler models on simulated WWTP performance

    DEFF Research Database (Denmark)

    Gernaey, Krist; Jeppsson, Ulf; Batstone, Damien J.

    2006-01-01

    for an ASM1 case study. Simulations with a whole plant model including the non-reactive Takacs settler model are used as a reference, and are compared to simulation results considering two reactive settler models. The first is a return sludge model block removing oxygen and a user-defined fraction of nitrate......, combined with a non-reactive Takacs settler. The second is a fully reactive ASM1 Takacs settler model. Simulations with the ASM1 reactive settler model predicted a 15.3% and 7.4% improvement of the simulated N removal performance, for constant (steady-state) and dynamic influent conditions respectively....... The oxygen/nitrate return sludge model block predicts a 10% improvement of N removal performance under dynamic conditions, and might be the better modelling option for ASM1 plants: it is computationally more efficient and it will not overrate the importance of decay processes in the settler....

  18. Integrated Biosphere Simulator Model (IBIS), Version 2.5

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: The Integrated Biosphere Simulator (or IBIS) is designed to be a comprehensive model of the terrestrial biosphere. Tthe model represents a wide range of...

  19. Integrated Biosphere Simulator Model (IBIS), Version 2.5

    Data.gov (United States)

    National Aeronautics and Space Administration — The Integrated Biosphere Simulator (or IBIS) is designed to be a comprehensive model of the terrestrial biosphere. Tthe model represents a wide range of processes,...

  20. User's Manual for the Simulating Waves Nearshore Model (SWAN)

    National Research Council Canada - National Science Library

    Allard, Richard

    2002-01-01

    The Simulating WAves Nearshore (SWAN) model is a numerical wave model used to obtain realistic estimates of wave parameters in coastal areas, lakes, and estuaries from given wind, bottom, and current conditions...

  1. Modeling and Simulation of Energy Recovery from a Photovoltaic ...

    African Journals Online (AJOL)

    Modeling and Simulation of Energy Recovery from a Photovoltaic Solar cell. ... Photovoltaic (PV) solar cell which converts solar energy directly into electrical energy is one of ... model of the solar panel which could represent the real systems.

  2. Reliability modelling and simulation of switched linear system ...

    African Journals Online (AJOL)

    Reliability modelling and simulation of switched linear system control using temporal databases. ... design of fault-tolerant real-time switching systems control and modelling embedded micro-schedulers for complex systems maintenance.

  3. Simulation modeling for quality and productivity in steel cord manufacturing

    OpenAIRE

    Türkseven, Can Hulusi; Ertek, Gürdal

    2003-01-01

    We describe the application of simulation modeling to estimate and improve quality and productivity performance of a steel cord manufacturing system. We describe the typical steel cord manufacturing plant, emphasize its distinguishing characteristics, identify various production settings and discuss applicability of simulation as a management decision support tool. Besides presenting the general structure of the developed simulation model, we focus on wire fractures, which can be an important...

  4. Are Hydrostatic Models Still Capable of Simulating Oceanic Fronts

    Science.gov (United States)

    2016-11-10

    Are Hydrostatic Models Still Capable of Simulating Oceanic Fronts? Yalin Fan, Zhitao Yu, and Fengyan Shi (Ocean Dynamics and Prediction Branch, Oceanography Division, Naval ...). ... mixed layer and thermocline simulations as well as large scale circulations. Numerical experiments are conducted using hydrostatic (HY) and

  5. A Fresh Cadaver Model for the Instruction of Ultrasound-Guided Fine-Needle Aspiration of Thyroid Nodules.

    Science.gov (United States)

    McCrary, Hilary C; Faucett, Erynne A; Hurbon, Audriana N; Milinic, Tijana; Cervantes, Jose A; Kent, Sean L; Adamas-Rappaport, William J

    2017-07-01

    Objective The aim of our study is to determine if a fresh cadaver model (FCM) for the instruction of ultrasound (US)-guided fine-needle aspiration (FNA) of thyroid nodules is a practical method for instruction. Study Design Pre- and postinstruction assessment of medical students' ability to perform US-guided FNA of artificially created thyroid nodules placed adjacent to the thyroid gland of a fresh cadaver. Setting University-based fresh cadaver laboratory. Subjects and Methods Study participants included a total of 17 first- and second-year medical students with minimal US training. Technical skills were assessed using a 10-item checklist. In addition, a cognitive assessment regarding the indications, contraindications, and complications of the procedure was completed. A postinstruction assessment was provided for participants 5 weeks after their initial assessment. Differences between pre- and postinstruction assessment scores of technical skills were analyzed using McNemar's test. The mean cognitive knowledge gain was analyzed using a paired 2-sample t test. Results Eight of 10 items on the skills checklist showed statistically significant differences between the pre- and postinstruction skills assessments (P < .05). There was a statistically significant change in cognitive knowledge gain regarding the contraindications of the procedure (P = .001), but not for indications or complications (P = .104 and P = .111, respectively). Conclusion US-guided FNA continues to be an important diagnostic procedure in the workup of thyroid nodules, making it an essential skill to integrate into a surgical skills lab. Our FCM for the instruction of US-guided FNA is the first of its kind, and this pilot study shows this is a viable method for instruction.
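    The statistical comparison described in the abstract (McNemar's test on paired pre/post checklist outcomes and a paired t test on cognitive scores) can be sketched as follows; the data values are invented for illustration and SciPy is assumed to be available.

```python
from scipy import stats

# Paired pre/post outcomes (1 = performed the checklist item correctly) for 17 students
pre  = [0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0]
post = [1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1]

# McNemar's test uses only the discordant pairs
b = sum(1 for x, y in zip(pre, post) if x == 1 and y == 0)   # pre pass, post fail
c = sum(1 for x, y in zip(pre, post) if x == 0 and y == 1)   # pre fail, post pass
p_mcnemar = stats.binomtest(b, b + c, 0.5).pvalue            # exact (binomial) McNemar test
print(f"discordant pairs b={b}, c={c}, exact McNemar p={p_mcnemar:.4f}")

# Paired two-sample t test on cognitive assessment scores (invented values)
pre_scores  = [4, 5, 3, 6, 4, 5, 4, 3, 5, 6, 4, 5, 3, 4, 5, 4, 6]
post_scores = [6, 7, 5, 7, 6, 7, 5, 5, 7, 8, 6, 6, 5, 6, 7, 6, 8]
t, p_paired = stats.ttest_rel(post_scores, pre_scores)
print(f"paired t = {t:.2f}, p = {p_paired:.4f}")
```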

  6. D-amphetamine improves cognitive deficits and physical therapy promotes fine motor rehabilitation in a rat embolic stroke model

    DEFF Research Database (Denmark)

    Rasmussen, Rune Skovgaard; Overgaard, K; Hildebrandt-Eriksen, E S

    2006-01-01

    BACKGROUND AND PURPOSE: The purpose of this study was to examine the effects of D-amphetamine (D-amph) and physical therapy separately or combined on fine motor performance, gross motor performance and cognition after middle cerebral artery thromboembolization in rats. METHODS: Seventy-four rats...... on days 21-28 after surgery, rats of the SHAM and THERAPY groups had better fine motor performance than those of the CONTROL (P < ...)... cognitive performance than CONTROL rats (P < ...)... regarding gross motor performance. CONCLUSIONS: After embolization, physical therapy improved fine motor performance and D-amph accelerated rehabilitation of cognitive performance as observed in the rats of the THERAPY and D-AMPH groups. As a result of the administration of a high dose of D-amph, the rats......

  7. A modeling study of the nonlinear response of fine particles to air pollutant emissions in the Beijing–Tianjin–Hebei region

    Directory of Open Access Journals (Sweden)

    B. Zhao

    2017-10-01

    Full Text Available The Beijing–Tianjin–Hebei (BTH) region has been suffering from the most severe fine-particle (PM2.5) pollution in China, which causes serious health damage and economic loss. Quantifying the source contributions to PM2.5 concentrations has been a challenging task because of the complicated nonlinear relationships between PM2.5 concentrations and emissions of multiple pollutants from multiple spatial regions and economic sectors. In this study, we use the extended response surface modeling (ERSM) technique to investigate the nonlinear response of PM2.5 concentrations to emissions of multiple pollutants from different regions and sectors over the BTH region, based on over 1000 simulations by a chemical transport model (CTM). The ERSM-predicted PM2.5 concentrations agree well with independent CTM simulations, with correlation coefficients larger than 0.99 and mean normalized errors less than 1 %. Using the ERSM technique, we find that, among all air pollutants, primary inorganic PM2.5 makes the largest contribution (24–36 %) to PM2.5 concentrations. The contribution of primary inorganic PM2.5 emissions is especially high in heavily polluted winter and is dominated by the industry as well as residential and commercial sectors, which should be prioritized in PM2.5 control strategies. The total contributions of all precursors (nitrogen oxides, NOx; sulfur dioxides, SO2; ammonia, NH3; non-methane volatile organic compounds, NMVOCs; intermediate-volatility organic compounds, IVOCs; primary organic aerosol, POA) to PM2.5 concentrations range between 31 and 48 %. Among these precursors, PM2.5 concentrations are primarily sensitive to the emissions of NH3, NMVOC + IVOC, and POA. The sensitivities increase substantially for NH3 and NOx and decrease slightly for POA and NMVOC + IVOC with the increase in the emission reduction ratio, which illustrates the nonlinear relationships between precursor emissions and PM2.5 concentrations.

  8. A modeling study of the nonlinear response of fine particles to air pollutant emissions in the Beijing-Tianjin-Hebei region

    Science.gov (United States)

    Zhao, Bin; Wu, Wenjing; Wang, Shuxiao; Xing, Jia; Chang, Xing; Liou, Kuo-Nan; Jiang, Jonathan H.; Gu, Yu; Jang, Carey; Fu, Joshua S.; Zhu, Yun; Wang, Jiandong; Lin, Yan; Hao, Jiming

    2017-10-01

    The Beijing-Tianjin-Hebei (BTH) region has been suffering from the most severe fine-particle (PM2.5) pollution in China, which causes serious health damage and economic loss. Quantifying the source contributions to PM2.5 concentrations has been a challenging task because of the complicated nonlinear relationships between PM2.5 concentrations and emissions of multiple pollutants from multiple spatial regions and economic sectors. In this study, we use the extended response surface modeling (ERSM) technique to investigate the nonlinear response of PM2.5 concentrations to emissions of multiple pollutants from different regions and sectors over the BTH region, based on over 1000 simulations by a chemical transport model (CTM). The ERSM-predicted PM2.5 concentrations agree well with independent CTM simulations, with correlation coefficients larger than 0.99 and mean normalized errors less than 1 %. Using the ERSM technique, we find that, among all air pollutants, primary inorganic PM2.5 makes the largest contribution (24-36 %) to PM2.5 concentrations. The contribution of primary inorganic PM2.5 emissions is especially high in heavily polluted winter and is dominated by the industry as well as residential and commercial sectors, which should be prioritized in PM2.5 control strategies. The total contributions of all precursors (nitrogen oxides, NOx; sulfur dioxides, SO2; ammonia, NH3; non-methane volatile organic compounds, NMVOCs; intermediate-volatility organic compounds, IVOCs; primary organic aerosol, POA) to PM2.5 concentrations range between 31 and 48 %. Among these precursors, PM2.5 concentrations are primarily sensitive to the emissions of NH3, NMVOC + IVOC, and POA. The sensitivities increase substantially for NH3 and NOx and decrease slightly for POA and NMVOC + IVOC with the increase in the emission reduction ratio, which illustrates the nonlinear relationships between precursor emissions and PM2.5 concentrations. The contributions of primary
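    A minimal illustration of the response-surface idea behind ERSM (not the ERSM code itself): a low-order polynomial in the emission scaling factors is fitted to a modest set of CTM runs and then predicts PM2.5 for arbitrary emission reductions almost instantly. The choice of two pollutants, the synthetic stand-in for the CTM and the polynomial order are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ctm_pm25(x_nh3, x_nox):
    """Stand-in for an expensive chemical transport model run; returns PM2.5 (ug/m3)
    as a nonlinear function of NH3 and NOx emission scaling factors (1 = baseline)."""
    return 40.0 + 25.0 * x_nh3 + 10.0 * x_nox - 8.0 * x_nh3 * x_nox + 5.0 * x_nh3**2

# "Training" runs: sampled emission scaling factors and the corresponding CTM output
x1, x2 = rng.uniform(0.2, 1.2, 60), rng.uniform(0.2, 1.2, 60)
y = ctm_pm25(x1, x2)

# Second-order polynomial response surface fitted by least squares
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict PM2.5 for a 40 % NH3 and 20 % NOx emission reduction
xq = np.array([1.0, 0.6, 0.8, 0.6 * 0.8, 0.6**2, 0.8**2])
print("surface prediction:", coef @ xq, " direct CTM:", ctm_pm25(0.6, 0.8))
```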

  9. Simulating individual-based models of epidemics in hierarchical networks

    NARCIS (Netherlands)

    Quax, R.; Bader, D.A.; Sloot, P.M.A.

    2009-01-01

    Current mathematical modeling methods for the spreading of infectious diseases are too simplified and do not scale well. We present the Simulator of Epidemic Evolution in Complex Networks (SEECN), an efficient simulator of detailed individual-based models by parameterizing separate dynamics
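    SEECN itself is not reproduced here, but the individual-based idea can be sketched generically: each node of a community-structured contact network carries its own state, and stochastic infection and recovery dynamics run along the edges. The network generator, the parameters and the simple SIR rules are illustrative assumptions.

```python
import random
import networkx as nx

random.seed(1)

# Community-structured contact network stand-in: 20 cliques of 10 nodes, weakly linked
G = nx.connected_caveman_graph(20, 10)

state = {n: "S" for n in G}              # S(usceptible), I(nfected), R(ecovered)
for n in random.sample(list(G), 5):      # seed a few infections
    state[n] = "I"

beta, gamma = 0.06, 0.1                  # per-contact infection and recovery probabilities

for step in range(200):
    new_state = dict(state)
    for n in G:
        if state[n] == "I":
            if random.random() < gamma:
                new_state[n] = "R"
            for m in G.neighbors(n):     # stochastic transmission along network edges
                if state[m] == "S" and random.random() < beta:
                    new_state[m] = "I"
    state = new_state
    if not any(s == "I" for s in state.values()):
        break

print("steps:", step, " final recovered fraction:",
      sum(s == "R" for s in state.values()) / G.number_of_nodes())
```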

  10. Simulation modelling in agriculture: General considerations. | R.I. ...

    African Journals Online (AJOL)

    A computer simulation model is a detailed working hypothesis about a given system. The computer does all the necessary arithmetic when the hypothesis is invoked to predict the future behaviour of the simulated system under given conditions.A general pragmatic approach to model building is discussed; techniques are ...

  11. Mesoscale meteorological model based on radioactive explosion cloud simulation

    International Nuclear Information System (INIS)

    Zheng Yi; Zhang Yan; Ying Chuntong

    2008-01-01

    In order to simulate the movement and concentration distribution of radioactive clouds from nuclear explosions and dirty bombs, the mesoscale meteorological model RAMS was used. Particle size, the activity-size distribution and gravitational fallout in the cloud were considered. The results show that the model can simulate the 'mushroom' cloud of an explosion. Three-dimensional flow fields and radioactive concentration fields were obtained. (authors)
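    The cited work uses RAMS; as a generic illustration of how a gravitational fallout velocity can be assigned to each particle-size bin, the Stokes settling law commonly applied to fine particles can be used. All material properties below are illustrative.

```python
import numpy as np

# Stokes terminal settling velocity for small spherical particles:
#   v_t = (rho_p - rho_a) * g * d**2 / (18 * mu)
rho_p = 2500.0      # particle density, kg/m3 (illustrative soil/debris value)
rho_a = 1.0         # air density, kg/m3
g     = 9.81        # gravitational acceleration, m/s2
mu    = 1.8e-5      # dynamic viscosity of air, Pa*s

diameters = np.array([1e-6, 5e-6, 1e-5, 5e-5, 1e-4])       # particle size bins, m
v_settle = (rho_p - rho_a) * g * diameters**2 / (18.0 * mu)

for d, v in zip(diameters, v_settle):
    print(f"d = {d*1e6:6.1f} um  ->  fallout velocity {v:.2e} m/s")
```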

  12. Exploiting Modelling and Simulation in Support of Cyber Defence

    NARCIS (Netherlands)

    Klaver, M.H.A.; Boltjes, B.; Croom-Jonson, S.; Jonat, F.; Çankaya, Y.

    2014-01-01

    The rapidly evolving environment of Cyber threats against the NATO Alliance has necessitated a renewed focus on the development of Cyber Defence policy and capabilities. The NATO Modelling and Simulation Group is looking for ways to leverage Modelling and Simulation experience in research, analysis

  13. Experimental Design for Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2001-01-01

    This introductory tutorial gives a survey on the use of statistical designs for what-if or sensitivity analysis in simulation. This analysis uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as
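    A minimal sketch of the approach surveyed in the tutorial: a 2^k factorial design is run through the simulation model, and a first-order regression metamodel (with one interaction term) approximates the input/output transformation. The toy simulation function and the chosen terms are illustrative assumptions.

```python
import itertools
import numpy as np

def simulation(x1, x2, x3):
    """Stand-in for the simulation model's input/output transformation."""
    return 10.0 + 3.0 * x1 - 2.0 * x2 + 0.5 * x1 * x2 + 0.1 * x3 + np.random.normal(0, 0.2)

np.random.seed(0)

# 2^3 full factorial design in coded units (-1 / +1)
design = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))
y = np.array([simulation(*row) for row in design])

# First-order regression metamodel with one two-factor interaction
X = np.column_stack([np.ones(len(design)), design, design[:, 0] * design[:, 1]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

names = ["intercept", "x1", "x2", "x3", "x1*x2"]
for name, b in zip(names, beta):
    print(f"{name:10s} effect estimate: {b:+.3f}")
```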

  14. Simulation-based modeling of building complexes construction management

    Science.gov (United States)

    Shepelev, Aleksandr; Severova, Galina; Potashova, Irina

    2018-03-01

    The study reported here examines the experience in the development and implementation of business simulation games based on network planning and management of high-rise construction. Appropriate network models of different types and levels of detail have been developed; a simulation model including 51 blocks (11 stages combined in 4 units) is proposed.

  15. New Simulation Models for Addressing Like X–Aircraft Responses ...

    African Journals Online (AJOL)

    New Simulation Models for Addressing Like X–Aircraft Responses. AS Mohammed, SO Abdulkareem. Abstract. The original Monte Carlo model was previously modified for use in simulating data that conform to certain resource flow constraints. Recent encounters in communication and controls render these data absolute ...

  16. Analyzing Interaction Patterns to Verify a Simulation/Game Model

    Science.gov (United States)

    Myers, Rodney Dean

    2012-01-01

    In order for simulations and games to be effective for learning, instructional designers must verify that the underlying computational models being used have an appropriate degree of fidelity to the conceptual models of their real-world counterparts. A simulation/game that provides incorrect feedback is likely to promote misunderstanding and…

  17. Mammogram synthesis using a 3D simulation. I. Breast tissue model and image acquisition simulation

    International Nuclear Information System (INIS)

    Bakic, Predrag R.; Albert, Michael; Brzakovic, Dragana; Maidment, Andrew D. A.

    2002-01-01

    A method is proposed for generating synthetic mammograms based upon simulations of breast tissue and the mammographic imaging process. A computer breast model has been designed with a realistic distribution of large and medium scale tissue structures. Parameters controlling the size and placement of simulated structures (adipose compartments and ducts) provide a method for consistently modeling images of the same simulated breast with modified position or acquisition parameters. The mammographic imaging process is simulated using a compression model and a model of the x-ray image acquisition process. The compression model estimates breast deformation using tissue elasticity parameters found in the literature and clinical force values. The synthetic mammograms were generated by a mammogram acquisition model using a monoenergetic parallel beam approximation applied to the synthetically compressed breast phantom
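    The x-ray acquisition step described above (a monoenergetic parallel beam applied to the compressed breast phantom) reduces to Beer-Lambert attenuation along each ray. The following toy phantom and attenuation values are illustrative and do not reproduce the authors' tissue model.

```python
import numpy as np

# Toy compressed-breast phantom: a 3-D grid of linear attenuation coefficients (1/cm).
# 0.05 ~ adipose background, 0.09 ~ a denser embedded structure (values illustrative).
nx, ny, nz = 64, 64, 40
mu = np.full((nx, ny, nz), 0.05)
mu[20:40, 20:40, 10:30] = 0.09

voxel_cm = 0.1          # voxel thickness along the beam, cm
I0 = 1.0                # incident monoenergetic intensity

# Parallel beam along z: line integral of mu per (x, y) ray, then Beer-Lambert attenuation
path_integral = mu.sum(axis=2) * voxel_cm
image = I0 * np.exp(-path_integral)          # simulated primary (scatter-free) mammogram

print("transmitted intensity range:", image.min(), image.max())
```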

  18. A Simulation and Modeling Framework for Space Situational Awareness

    International Nuclear Information System (INIS)

    Olivier, S.S.

    2008-01-01

    This paper describes the development and initial demonstration of a new, integrated modeling and simulation framework, encompassing the space situational awareness enterprise, for quantitatively assessing the benefit of specific sensor systems, technologies and data analysis techniques. The framework is based on a flexible, scalable architecture to enable efficient, physics-based simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel computer systems available, for example, at Lawrence Livermore National Laboratory. The details of the modeling and simulation framework are described, including hydrodynamic models of satellite intercept and debris generation, orbital propagation algorithms, radar cross section calculations, optical brightness calculations, generic radar system models, generic optical system models, specific Space Surveillance Network models, object detection algorithms, orbit determination algorithms, and visualization tools. The use of this integrated simulation and modeling framework on a specific scenario involving space debris is demonstrated

  19. MOVES (MOTOR VEHICLE EMISSION SIMULATOR) MODEL ...

    Science.gov (United States)

    A computer model, intended to eventually replace the MOBILE model and to incorporate the NONROAD model, that will provide the ability to estimate criteria and toxic air pollutant emission factors and emission inventories that are specific to the areas and time periods of interest, at scales ranging from local to national. Development of a new emission factor and inventory model for mobile source emissions. The model will be used by air pollution modelers within EPA, and at the State and local levels.

  20. Modeling and Simulation of Cyber Battlefield

    Directory of Open Access Journals (Sweden)

    AliJabar Rashidi

    2017-12-01

    Full Text Available In order to protect cyberspace against cyber-attacks, a cyber situation awareness framework is needed for the implementation of cyber maneuvers. This article enables the execution of cyber maneuvers with a dynamic cyber battlefield simulator. The cyber battlefield contains essential information for the detection of cyber events and can therefore be considered the most important and complicated factor in high-level fusion. By gathering detailed data on cyberspace elements, including a knowledge repository of vulnerabilities, the tangible and intangible elements of cyberspace and the relationships between them, the cyber battlefield can provide and execute cyber maneuvers, penetration testing, cyber-attack injection, attack tracking, visualization, cyber-attack impact assessment and risk assessment. The dynamics engine in the simulator is designed to update the knowledge base of vulnerabilities and to change the topology elements, access lists, services, hosts and users. The simulator was evaluated with a qualitative research method using a focus group.

  1. Modeling and simulation of pressurized water reactor power plant

    International Nuclear Information System (INIS)

    Wang, S.J.

    1983-01-01

    Two kinds of balance of plant (BOP) models of a pressurized water reactor (PWR) system are developed in this work - the detailed BOP model and the simple BOP model. The detailed model is used to simulate the normal operational performance of a whole BOP system. The simple model is used in combination with the NSSS model for a whole plant simulation. The trends of the steady state values of the detailed model are correct and the dynamic responses are reasonable. The simple BOP model approach starts the modelling work from the overall point of view. The responses of the normalized turbine power and of the feedwater inlet temperature to the steam generator in the simple model are compared with those of the detailed model. Both the steady state values and the dynamic responses are close to those of the detailed model. The simple BOP model is found adequate to represent the main performance of the BOP system. The simple balance of plant model was coupled with an NSSS model for a whole plant simulation. The NSSS model consists of the reactor core model, the steam generator model, and the coolant temperature control system. A closed loop whole plant simulation for an electric load perturbation was performed. The results are plausible. The coupling effect between the NSSS system and the BOP system was analyzed. The feedback of the BOP system has little effect on the steam generator performance, while the performance of the BOP system is strongly affected by the steam flow rate from the NSSS.

  2. Approximate deconvolution model for the simulation of turbulent gas-solid flows: An a priori analysis

    Science.gov (United States)

    Schneiderbauer, Simon; Saeedipour, Mahdi

    2018-02-01

    Highly resolved two-fluid model (TFM) simulations of gas-solid flows in vertical periodic channels have been performed to study closures for the filtered drag force and the Reynolds-stress-like contribution stemming from the convective terms. An approximate deconvolution model (ADM) for the large-eddy simulation of turbulent gas-solid suspensions is detailed and subsequently used to reconstruct those unresolved contributions in an a priori manner. With such an approach, an approximation of the unfiltered solution is obtained by repeated filtering allowing the determination of the unclosed terms of the filtered equations directly. A priori filtering shows that predictions of the ADM model yield fairly good agreement with the fine grid TFM simulations for various filter sizes and different particle sizes. In particular, strong positive correlation (ρ > 0.98) is observed at intermediate filter sizes for all sub-grid terms. Additionally, our study reveals that the ADM results moderately depend on the choice of the filters, such as box and Gaussian filter, as well as the deconvolution order. The a priori test finally reveals that ADM is superior compared to isotropic functional closures proposed recently [S. Schneiderbauer, "A spatially-averaged two-fluid model for dense large-scale gas-solid flows," AIChE J. 63, 3544-3562 (2017)].
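    The reconstruction step of an approximate deconvolution model can be illustrated in one dimension: an approximate inverse of the filter is built from repeated filtering (the van Cittert form of the ADM series) and applied to the filtered field. The box filter, the deconvolution order and the test signal below are illustrative; the cited work applies the idea to filtered two-fluid model fields.

```python
import numpy as np

def box_filter(u, width=5):
    """Simple top-hat (box) filter with periodic padding."""
    kernel = np.ones(width) / width
    padded = np.concatenate([u[-width:], u, u[:width]])
    return np.convolve(padded, kernel, mode="same")[width:-width]

def adm_deconvolve(u_filtered, n_order=5, filt=box_filter):
    """Approximate deconvolution: u* = sum_{i=0..N} (I - G)^i applied to the filtered
    field, written as the equivalent van Cittert iteration."""
    u_star = u_filtered.copy()
    for _ in range(n_order):
        u_star = u_star + (u_filtered - filt(u_star))
    return u_star

# Synthetic "fine grid" field, its filtered counterpart, and the ADM reconstruction
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
u = np.sin(x) + 0.4 * np.sin(8.0 * x) + 0.2 * np.sin(16.0 * x)
u_bar = box_filter(u)
u_star = adm_deconvolve(u_bar)

print("mean error, filtered vs true:", np.abs(u_bar - u).mean())
print("mean error, ADM      vs true:", np.abs(u_star - u).mean())
```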

  3. Simulating faults and plate boundaries with a transversely isotropic plasticity model

    Science.gov (United States)

    Sharples, W.; Moresi, L. N.; Velic, M.; Jadamec, M. A.; May, D. A.

    2016-03-01

    In mantle convection simulations, dynamically evolving plate boundaries have, for the most part, been represented using a visco-plastic flow law. These systems develop fine-scale, localized, weak shear band structures which are reminiscent of faults, but it is a significant challenge to resolve both the large-scale and the emergent, small-scale behavior. We address this issue of resolution by taking into account the observation that a rock element with embedded, planar, failure surfaces responds as a non-linear, transversely isotropic material with a weak orientation defined by the plane of the failure surface. This approach partly accounts for the large-scale behavior of fine-scale systems of shear bands which we are not in a position to resolve explicitly. We evaluate the capacity of this continuum approach to model plate boundaries, specifically in the context of subduction models where the plate boundary interface has often been represented as a planar discontinuity. We show that the inclusion of the transversely isotropic plasticity model for the plate boundary promotes asymmetric subduction from initiation. A realistic evolution of the plate boundary interface and associated stresses is crucial to understanding inter-plate coupling, convergent margin driven topography, and earthquakes.
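    The continuum idea can be sketched as follows: the stress tensor is resolved onto the embedded failure plane, and if the resolved shear stress reaches the (lower) strength of that plane the element yields along the weak orientation, otherwise the isotropic bulk strength applies. The simple yield switch and all numbers are illustrative, not the authors' constitutive law.

```python
import numpy as np

def weak_plane_yield(stress, normal, tau_weak, tau_bulk):
    """Return the effective yield stress of a rock element containing an embedded
    failure plane: if the shear stress resolved on the plane exceeds the (lower)
    plane strength, the element yields along that orientation."""
    n = normal / np.linalg.norm(normal)
    traction = stress @ n                       # traction vector on the plane
    sigma_n = traction @ n                      # normal component
    tau = np.sqrt(max(traction @ traction - sigma_n**2, 0.0))  # resolved shear stress
    if tau >= tau_weak:
        return tau_weak                         # slip on the weak orientation
    return tau_bulk                             # otherwise isotropic bulk strength applies

# Example: simple shear stress state (MPa) and a weak plane whose normal is along x,
# so the plane contains the shear direction (values illustrative)
stress = np.array([[0.0, 30.0, 0.0],
                   [30.0, 0.0, 0.0],
                   [0.0,  0.0, 0.0]])
normal = np.array([1.0, 0.0, 0.0])
print("effective yield stress:", weak_plane_yield(stress, normal, tau_weak=20.0, tau_bulk=100.0))
```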

  4. The invaluable benefits of modeling and simulation in our lives

    International Nuclear Information System (INIS)

    Lorencez, C.

    2015-01-01

    'Full text:' In general terms, we associate the words 'modeling and simulation' with semi-ideal mathematical models reproducing complex Engineering problems. However, the use of modeling and simulation is much more extensive than that: it is applied on a daily basis in almost every front of Science, from sociology and biology to climate change, medicine, robotics, war strategies, etc. It is also being applied by our frontal lobe when we make decisions. The results of these exercises on modeling and simulation have had invaluable benefits on our well being, and we are just at the beginning. (author)

  5. Optical modeling and simulation of thin-film photovoltaic devices

    CERN Document Server

    Krc, Janez

    2013-01-01

    In wafer-based and thin-film photovoltaic (PV) devices, the management of light is a crucial aspect of optimization since trapping sunlight in active parts of PV devices is essential for efficient energy conversions. Optical modeling and simulation enable efficient analysis and optimization of the optical situation in optoelectronic and PV devices. Optical Modeling and Simulation of Thin-Film Photovoltaic Devices provides readers with a thorough guide to performing optical modeling and simulations of thin-film solar cells and PV modules. It offers insight on examples of existing optical models

  6. The invaluable benefits of modeling and simulation in our lives

    Energy Technology Data Exchange (ETDEWEB)

    Lorencez, C., E-mail: carlos.lorencez@opg.com [Ontario Power Generation, Nuclear Safety Div., Pickering, Ontario (Canada)

    2015-07-01

    'Full text:' In general terms, we associate the words 'modeling and simulation' with semi-ideal mathematical models reproducing complex Engineering problems. However, the use of modeling and simulation is much more extensive than that: it is applied on a daily basis in almost every front of Science, from sociology and biology to climate change, medicine, robotics, war strategies, etc. It is also being applied by our frontal lobe when we make decisions. The results of these exercises on modeling and simulation have had invaluable benefits on our well being, and we are just at the beginning. (author)

  7. Relative importance of secondary settling tank models in WWTP simulations

    DEFF Research Database (Denmark)

    Ramin, Elham; Flores-Alsina, Xavier; Sin, Gürkan

    2012-01-01

    Results obtained in a study using the Benchmark Simulation Model No. 1 (BSM1) show that a one-dimensional secondary settling tank (1-D SST) model structure and its parameters are among the most significant sources of uncertainty in wastewater treatment plant (WWTP) simulations [Ramin et al., 2011......]. The sensitivity results consistently indicate that the prediction of sludge production is most sensitive to the variation of the settling parameters. In the present study, we use the Benchmark Simulation Model No. 2 (BSM2), a plant-wide benchmark, that combines the Activated Sludge Model No. 1 (ASM1...

  8. Simulation modeling on the growth of firm's safety management capability

    Institute of Scientific and Technical Information of China (English)

    LIU Tie-zhong; LI Zhi-xiang

    2008-01-01

    To address the deficiencies of safety management measures, a simulation model of a firm's safety management capability (FSMC) was established based on organizational learning theory. The system dynamics (SD) method was used, in which the level and rate system, the variable equations and the system structure flow diagram were derived. The simulation model was verified in two respects: first, the model's sensitivity to variables was tested with respect to the gross amount of safety investment and the proportion of safety investment; second, the dependency between variables was checked using the correlated variables of FSMC and organizational learning. The feasibility of the simulation model is verified through these processes.
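    A level-and-rate (stock-and-flow) model of this kind can be sketched with a simple Euler update: the FSMC level is increased by a learning rate driven by safety investment and eroded by a forgetting rate. The equations and parameter values below are illustrative, not those of the paper.

```python
# Illustrative system-dynamics sketch: the firm's safety management capability (FSMC)
# is a level (stock) driven by organizational learning from safety investment and
# eroded by forgetting.
dt = 0.25                       # time step, years
capability = 10.0               # initial FSMC level (arbitrary units)
learning_efficiency = 0.08      # capability gained per unit of safety investment
forgetting_rate = 0.05          # fractional decay of capability per year

history = []
for step in range(int(20 / dt)):                 # simulate 20 years
    t = step * dt
    safety_investment = 50.0 + 10.0 * (t > 10)   # investment is raised after year 10
    learning_rate = learning_efficiency * safety_investment
    decay_rate = forgetting_rate * capability
    capability += (learning_rate - decay_rate) * dt   # level-and-rate (Euler) update
    history.append((t + dt, capability))

print("FSMC after 10 years:", history[int(10 / dt) - 1][1])
print("FSMC after 20 years:", history[-1][1])
```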

  9. A study for production simulation model generation system based on data model at a shipyard

    Directory of Open Access Journals (Sweden)

    Myung-Gi Back

    2016-09-01

    Full Text Available Simulation technology is a type of shipbuilding product lifecycle management solution used to support production planning or decision-making. Normally, most shipbuilding processes consist of job shop production, and the modeling and simulation require professional skills and experience in shipbuilding. For these reasons, many shipbuilding companies have difficulties adopting simulation systems, regardless of the necessity for the technology. In this paper, the data model for shipyard production simulation model generation was defined by analyzing the iterative simulation modeling procedure. The shipyard production simulation data model defined in this study contains the information necessary for the conventional simulation modeling procedure and can serve as a basis for simulation model generation. The efficacy of the developed system was validated by applying it to the simulation model generation of the panel block production line. By implementing the initial simulation model generation process, which was performed in the past with a simulation modeler, the proposed system substantially reduced the modeling time. In addition, by reducing the difficulties posed by different modeler-dependent generation methods, the proposed system makes the standardization of the simulation model quality possible.

  10. Dynamic models of staged gasification processes. Documentation of gasification simulator; Dynamiske modeller a f trinopdelte forgasningsprocesser. Dokumentation til forgasser simulator

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-02-15

    In connection with the ERP project 'Dynamic modelling of staged gasification processes' a gasification simulator has been constructed. The simulator consists of: a mathematical model of the gasification process developed at Technical University of Denmark, a user interface programme, IGSS, and a communication interface between the two programmes. (BA)

  11. Model and simulation of Krause model in dynamic open network

    Science.gov (United States)

    Zhu, Meixia; Xie, Guangqiang

    2017-08-01

    Modelling the evolution of opinions is an effective way to reveal how group consensus forms. This study is based on the modeling paradigm of the HK (Hegselmann-Krause) model. This paper analyzes the evolution of multi-agent opinions in dynamic open networks with member mobility. The simulation results show that when the number of agents is constant, the interval of the initial opinion distribution affects the number of final opinions: the wider the spread of initial opinions, the more opinion clusters eventually form. The trust threshold has a decisive effect on the number of opinions, and there is a negative correlation between the trust threshold and the number of opinion clusters. The higher the connectivity of the initially active group, the more easily opinions converge rapidly during the evolution. A more open network is more conducive to consensus; increasing or reducing the number of agents does not affect the consistency of the group opinion, but it is not conducive to stability.
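    The underlying HK update rule is easy to state: at every step each agent replaces its opinion with the average opinion of all agents within its trust threshold. The sketch below implements the classic static version (the paper's member mobility and open-network dynamics are not reproduced); rerunning it with a smaller eps yields more clusters, consistent with the reported negative correlation. Parameters are illustrative.

```python
import numpy as np

def hk_step(opinions, eps):
    """One synchronous Hegselmann-Krause update: each agent moves to the average
    opinion of all agents within its confidence (trust) threshold eps."""
    new = np.empty_like(opinions)
    for i, x in enumerate(opinions):
        neighbors = opinions[np.abs(opinions - x) <= eps]
        new[i] = neighbors.mean()
    return new

rng = np.random.default_rng(3)
opinions = rng.uniform(0.0, 1.0, 200)       # initial opinions on [0, 1]
eps = 0.15                                  # trust threshold

for _ in range(50):
    updated = hk_step(opinions, eps)
    if np.allclose(updated, opinions):      # stop once the profile has converged
        break
    opinions = updated

clusters = np.unique(np.round(opinions, 3))
print("number of opinion clusters:", len(clusters), " cluster positions:", clusters)
```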

  12. Mechanical System Simulations for Seismic Signature Modeling

    National Research Council Canada - National Science Library

    Lacombe, J

    2001-01-01

    .... Results for an M1A1 and T72 are discussed. By analyzing the simulated seismic signature data in conjunction with the spectral features associated with the vibrations of specific vehicle sprung and un-sprung components we are able to make...

  13. Spiral Growth in Plants: Models and Simulations

    Science.gov (United States)

    Allen, Bradford D.

    2004-01-01

    The analysis and simulation of spiral growth in plants integrates algebra and trigonometry in a botanical setting. When the ideas presented here are used in a mathematics classroom/computer lab, students can better understand how basic assumptions about plant growth lead to the golden ratio and how the use of circular functions leads to accurate…

  14. Application of Hidden Markov Models in Biomolecular Simulations.

    Science.gov (United States)

    Shukla, Saurabh; Shamsi, Zahra; Moffett, Alexander S; Selvam, Balaji; Shukla, Diwakar

    2017-01-01

    Hidden Markov models (HMMs) provide a framework to analyze large trajectories of biomolecular simulation datasets. HMMs decompose the conformational space of a biological molecule into a finite number of states that interconvert among each other with certain rates. HMMs simplify long timescale trajectories for human comprehension, and allow comparison of simulations with experimental data. In this chapter, we provide an overview of building HMMs for analyzing biomolecular simulation datasets. We demonstrate the procedure for building a hidden Markov model for a Met-enkephalin peptide simulation dataset and compare the timescales of the process.
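    A minimal sketch of fitting an HMM to a synthetic one-dimensional trajectory observable; the hmmlearn package is assumed to be available and is not prescribed by the chapter. The two-well test signal and the number of hidden states are illustrative.

```python
import numpy as np
from hmmlearn import hmm   # assumes the hmmlearn package is installed

rng = np.random.default_rng(0)

# Stand-in for a 1-D collective variable extracted from an MD trajectory:
# the system hops between two metastable wells centered around -1 and +1.
segments, labels = [], rng.integers(0, 2, 50)
for lab in labels:
    center = -1.0 if lab == 0 else 1.0
    segments.append(center + 0.2 * rng.standard_normal(200))
traj = np.concatenate(segments).reshape(-1, 1)

# Fit a 2-state Gaussian HMM: the emissions model the observable, and the transition
# matrix encodes the interconversion rates between the hidden conformational states.
model = hmm.GaussianHMM(n_components=2, covariance_type="diag", n_iter=100)
model.fit(traj)

print("state means:", model.means_.ravel())
print("transition matrix:\n", model.transmat_)
print("decoded states (first 10 frames):", model.predict(traj)[:10])
```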

  15. A View on Future Building System Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wetter, Michael

    2011-04-01

    This chapter presents what a future environment for building system modeling and simulation may look like. As buildings continue to require increased performance and better comfort, their energy and control systems are becoming more integrated and complex. We therefore focus in this chapter on the modeling, simulation and analysis of building energy and control systems. Such systems can be classified as heterogeneous systems because they involve multiple domains, such as thermodynamics, fluid dynamics, heat and mass transfer, electrical systems, control systems and communication systems. Also, they typically involve multiple temporal and spatial scales, and their evolution can be described by coupled differential equations, discrete equations and events. Modeling and simulating such systems requires a higher level of abstraction and modularisation to manage the increased complexity compared to what is used in today's building simulation programs. Therefore, the trend towards more integrated building systems is likely to be a driving force for changing the status quo of today's building simulation programs. This chapter discusses evolving modeling requirements and outlines a path toward a future environment for modeling and simulation of heterogeneous building systems. A range of topics that would require many additional pages of discussion has been omitted. Examples include computational fluid dynamics for air and particle flow in and around buildings, people movement, daylight simulation, uncertainty propagation and optimisation methods for building design and controls. For different discussions and perspectives on the future of building modeling and simulation, we refer to Sahlin (2000), Augenbroe (2001) and Malkawi and Augenbroe (2004).

  16. Mars Exploration Rover Terminal Descent Mission Modeling and Simulation

    Science.gov (United States)

    Raiszadeh, Behzad; Queen, Eric M.

    2004-01-01

    Because of NASA's added reliance on simulation for successful interplanetary missions, the MER mission has developed a detailed EDL trajectory modeling and simulation. This paper summarizes how the MER EDL sequence of events is modeled, the verification of the methods used, and the inputs. This simulation is built upon a multibody parachute trajectory simulation tool, developed in POST II, that accurately simulates the trajectory of multiple vehicles in flight with interacting forces. In this model the parachute and the suspended bodies are treated as six-degree-of-freedom (6 DOF) bodies. The terminal descent phase of the mission consists of several Entry, Descent, Landing (EDL) events, such as parachute deployment, heatshield separation, deployment of the lander from the backshell, deployment of the airbags, RAD firings, TIRS firings, etc. For an accurate, reliable simulation these events need to be modeled seamlessly and robustly so that the simulations will remain numerically stable during Monte-Carlo simulations. This paper also summarizes how the events have been modeled, the numerical issues, and the modeling challenges.
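    The full multibody 6-DOF treatment is not reproduced here; a greatly simplified one-dimensional sketch of the parachute descent phase, integrated until an assumed retro-rocket (RAD) firing altitude, illustrates the kind of event-driven terminal-descent propagation involved. All physical values are illustrative.

```python
# Greatly simplified 1-D terminal-descent sketch: a single point mass descends on a
# parachute in the Martian atmosphere until an assumed RAD-firing altitude is reached.
g_mars = 3.71          # m/s2
rho = 0.015            # near-surface atmospheric density, kg/m3
mass = 800.0           # lander + backshell mass, kg
cd_a = 180.0           # parachute drag coefficient times reference area, m2
rad_fire_alt = 120.0   # altitude at which the retro rockets would fire, m

h, v, t, dt = 2000.0, 80.0, 0.0, 0.01     # altitude (m), descent rate (m/s), time, step
while h > rad_fire_alt:
    drag = 0.5 * rho * cd_a * v * abs(v) / mass   # deceleration from parachute drag
    v += (g_mars - drag) * dt                     # positive v = downward
    h -= v * dt
    t += dt

print(f"RAD-firing altitude reached at t = {t:.1f} s with descent rate {v:.1f} m/s")
```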

  17. Methodology of modeling and measuring computer architectures for plasma simulations

    Science.gov (United States)

    Wang, L. P. T.

    1977-01-01

    A brief introduction to plasma simulation using computers and the difficulties on currently available computers is given. Through the use of an analyzing and measuring methodology - SARA, the control flow and data flow of a particle simulation model REM2-1/2D are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential type simulation model, an array/pipeline type simulation model, and a fully parallel simulation model of a code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have implicitly parallel nature.

  18. Calibration of the simulation model of the VINCY cyclotron magnet

    Directory of Open Access Journals (Sweden)

    Ćirković Saša

    2002-01-01

    Full Text Available The MERMAID program will be used to isochronise the nominal magnetic field of the VINCY Cyclotron. This program simulates the response, i. e. calculates the magnetic field, of a previously defined model of a magnet. The accuracy of 3D field calculation depends on the density of the grid points in the simulation model grid. The size of the VINCY Cyclotron and the maximum number of grid points in the XY plane limited by MERMAID define the maximum obtainable accuracy of field calculations. Comparisons of the field simulated with the maximum obtainable accuracy with the magnetic field measured in the first phase of the VINCY Cyclotron magnetic field measurements campaign have shown that the difference between these two fields is not as small as required. A further decrease of the difference between these fields is obtained by the simulation model calibration, i. e. by adjusting the current through the main coils in the simulation model.
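    The calibration step can be illustrated with a one-parameter fit: assuming the simulated midplane field scales approximately linearly with the main-coil current, a least-squares scale factor maps the simulated field onto the measured one. All field values below are invented for illustration.

```python
import numpy as np

# Toy calibration: adjust the model's main-coil current (here, a single scale factor on
# the simulated field, assuming approximate linearity) to best match the measured field.
r = np.linspace(0.2, 1.8, 9)                       # radial probe positions, m
b_measured = np.array([1.02, 1.05, 1.08, 1.12, 1.17, 1.23, 1.30, 1.38, 1.47])   # T
b_simulated_at_i0 = b_measured * 0.97 + 0.005      # field computed at the nominal current

# Least-squares scale factor k such that k * B_sim best fits B_meas
k = np.dot(b_simulated_at_i0, b_measured) / np.dot(b_simulated_at_i0, b_simulated_at_i0)
print(f"adjust main-coil current in the model by factor {k:.4f}")
print("max residual after calibration (T):",
      np.max(np.abs(k * b_simulated_at_i0 - b_measured)))
```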

  19. Modeling and simulation of Indus-2 RF feedback control system

    International Nuclear Information System (INIS)

    Sharma, D.; Bagduwal, P.S.; Tiwari, N.; Lad, M.; Hannurkar, P.R.

    2012-01-01

    The Indus-2 synchrotron radiation source has four RF stations along with their feedback control systems. For higher beam energy and current operation, the amplitude and phase feedback control systems of Indus-2 are being upgraded. To understand the behaviour of the amplitude and phase control loops under different operating conditions, modelling and simulation of the RF feedback control system is done. An RF cavity baseband I/Q model has been created because of its close correspondence with the actual implementation and its better computational efficiency, which makes the simulation faster. Correspondence between the cavity baseband and RF models is confirmed by comparing their simulation results. The low level RF (LLRF) feedback control system simulation is done using the same cavity baseband I/Q model. Error signals are intentionally generated and the response of the closed-loop system is observed. The simulation will help in optimizing the parameters of the upgraded LLRF system for higher beam energy and current operation. (author)
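    A minimal baseband I/Q sketch (not the Indus-2 implementation): the cavity voltage is treated as a complex envelope obeying a first-order equation, and a PI controller acts on the I/Q error to hold the setpoint against a step beam-loading disturbance. The bandwidth, detuning, gains and disturbance are illustrative assumptions.

```python
import numpy as np

dt = 1e-6                      # integration step, s
w_half = 2 * np.pi * 5e3       # cavity half-bandwidth, rad/s
detune = 2 * np.pi * 1e3       # cavity detuning, rad/s
setpoint = 1.0 + 0.0j          # desired cavity voltage (normalized I + jQ)
kp, ki = 2.0, 2.0e4            # PI gains

v = 0.0 + 0.0j                 # cavity voltage envelope
integ = 0.0 + 0.0j             # integrator state of the PI controller

for n in range(20000):
    beam = -0.2j if n > 10000 else 0.0          # step beam-loading disturbance
    err = setpoint - v
    integ += err * dt
    drive = kp * err + ki * integ               # PI feedback on the I/Q error
    # first-order baseband cavity equation: dv/dt = -(w_half + j*detune)*v + w_half*(drive + beam)
    v += (-(w_half + 1j * detune) * v + w_half * (drive + beam)) * dt

print("final cavity voltage:", v, " amplitude error:", abs(setpoint) - abs(v))
```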

  20. Modelling of thermalhydraulics and reactor physics in simulators

    International Nuclear Information System (INIS)

    Miettinen, J.

    1994-01-01

    The evolution of thermalhydraulic analysis methods for analysis and simulator purposes has brought the thermalhydraulic models in the two application areas closer together. In large analysis codes like RELAP5, TRAC, CATHARE and ATHLET the accuracy of calculating complicated phenomena has been emphasized, but in spite of large development efforts many generic problems remain unsolved. For simulator purposes fast running codes have been developed, and these include only limited assessment efforts. But these codes have more simulator-friendly features than the large codes, like portability and a modular code structure. In this respect the simulator experiences with the SMABRE code are discussed. Both large analysis codes and special simulator codes have their advantages in simulator applications. The evolution of reactor physics calculation methods in simulator applications has started from simple point kinetics models. For analysis purposes accurate 1-D and 3-D codes have been developed that are capable of handling fast and complicated transients. For simulator purposes the capability for simulation of instruments has been emphasized, but the dynamic simulation capability has been less significant. The approaches to 3-dimensionality in simulators still require quite a lot of development before the accuracy of the analysis codes is reached. (orig.) (8 refs., 2 figs., 2 tabs.)