WorldWideScience

Sample records for models simulate fine

  1. Computer Models Simulate Fine Particle Dispersion

    Science.gov (United States)

    2010-01-01

    Through a NASA Seed Fund partnership with DEM Solutions Inc., of Lebanon, New Hampshire, scientists at Kennedy Space Center refined existing software to study the electrostatic phenomena of granular and bulk materials as they apply to planetary surfaces. The software, EDEM, allows users to import particles and obtain accurate representations of their shapes for modeling purposes, such as simulating bulk solids behavior, and was enhanced to be able to more accurately model fine, abrasive, cohesive particles. These new EDEM capabilities can be applied in many industries unrelated to space exploration and have been adopted by several prominent U.S. companies, including John Deere, Pfizer, and Procter & Gamble.

  2. Charge-Spot Model for Electrostatic Forces in Simulation of Fine Particulates

    Science.gov (United States)

    Walton, Otis R.; Johnson, Scott M.

    2010-01-01

The charge-spot technique for modeling the electrostatic forces acting between charged fine particles entails treating electric charges on individual particles as small sets of discrete point charges located near their surfaces. This is in contrast to existing models, which assume a single charge per particle. The charge-spot technique more accurately describes the forces, torques, and moments that act on triboelectrically charged particles, especially image-charge forces acting near conducting surfaces. The discrete element method (DEM) simulation uses a truncation range to limit the number of near-neighbor charge spots via a shifted, truncated Coulomb potential interaction. The model can be readily adapted to account for induced dipoles in uncharged particles (and thus dielectrophoretic forces) by allowing two charge spots of opposite sign to be created in response to an external electric field. To account for virtual overlap during contacts, the model can be set to automatically scale down the effective charge in proportion to the amount of virtual overlap of the charge spots. This can be accomplished by mimicking the behavior of two real overlapping spherical charge clouds, or with other approximate forms. The charge-spot method much more closely resembles the real, non-uniform surface charge distributions that result from tribocharging than simpler approaches, which assign a single total charge to a particle. With the charge-spot model, a single particle may have zero net charge but still have both positive and negative charge spots, which can produce substantial forces on the particle when it is close to other charges, in an external electric field, or near a conducting surface. Because the charge-spot model can contain any number of charges per particle, it can be used with only one or two charge spots per particle to simulate charging from solar-wind bombardment, or with several charge spots to simulate triboelectric charging.
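The pairwise evaluation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, charge values, and cutoff are assumptions, and the overlap-scaling and image-charge terms are omitted.

```python
import numpy as np

K = 8.9875e9  # Coulomb constant, N*m^2/C^2

def charge_spot_force(spots_i, q_i, spots_j, q_j, r_cut):
    """Net Coulomb force on particle i from particle j, summing over the
    discrete charge spots of each particle; spot pairs farther apart than
    r_cut are truncated to zero, as in a shifted-and-truncated potential."""
    f = np.zeros(3)
    for pi, qi in zip(spots_i, q_i):
        for pj, qj in zip(spots_j, q_j):
            d = pi - pj
            r = np.linalg.norm(d)
            if r < r_cut:
                f += K * qi * qj * d / r**3   # repulsive for like signs
    return f

# A particle with zero NET charge but separated +/- spots still feels a
# net force from a nearby external charge:
spots_i = np.array([[0.0, 0.0, 1e-6], [0.0, 0.0, -1e-6]])
q_i = [1e-15, -1e-15]                    # dipole, net charge zero
spots_j = np.array([[0.0, 0.0, 10e-6]])
q_j = [1e-14]
f = charge_spot_force(spots_i, q_i, spots_j, q_j, r_cut=1e-3)
```

The nonzero axial force on the net-neutral particle illustrates the behavior the abstract attributes to the model.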

  3. Vortex Filaments in Grids for Scalable, Fine Smoke Simulation.

    Science.gov (United States)

    Meng, Zhang; Weixin, Si; Yinling, Qian; Hanqiu, Sun; Jing, Qin; Heng, Pheng-Ann

    2015-01-01

Vortex modeling can produce attractive visual effects of dynamic fluids, which are widely applicable to dynamic media, computer games, special effects, and virtual reality systems. However, it is challenging to efficiently simulate intensive, finely detailed fluids such as smoke with rapidly increasing numbers of vortex filaments and smoke particles. The authors propose a novel vortex-filaments-in-grids scheme in which uniform grids dynamically bridge the vortex filaments and smoke particles for scalable, fine smoke simulation with macroscopic vortex structures. Using the vortex model, their approach supports a trade-off between simulation speed and scale of detail. After computing the whole velocity field, external control can easily be exerted on the embedded grid to guide the vortex-based smoke motion. The experimental results demonstrate the efficiency of the proposed scheme for visually plausible smoke simulation with macroscopic vortex structures.
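The filament side of such a scheme rests on the Biot-Savart law: each filament segment induces a velocity at the grid nodes. A minimal sketch of that evaluation, assuming a regularized kernel and an illustrative smoothing parameter (not the authors' implementation):

```python
import numpy as np

def biot_savart_velocity(points, filament, gamma=1.0, eps=1e-3):
    """Velocity induced at `points` by a closed vortex filament (polyline
    of nodes), using a regularized Biot-Savart kernel per segment."""
    vel = np.zeros_like(points)
    n = len(filament)
    for i in range(n):
        a = filament[i]
        b = filament[(i + 1) % n]            # closed loop
        dl = b - a                           # segment vector
        mid = 0.5 * (a + b)
        r = points - mid                     # from segment to eval points
        r2 = np.sum(r * r, axis=1) + eps**2  # regularization avoids singularity
        vel += gamma / (4 * np.pi) * np.cross(dl, r) / r2[:, None]**1.5
    return vel

# A unit vortex ring in the x-y plane induces axial flow through its center
# (analytic value gamma / (2R) = 0.5 for a unit ring):
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
ring = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
u = biot_savart_velocity(np.array([[0.0, 0.0, 0.0]]), ring)
```

In a filaments-in-grids approach this velocity would be sampled onto the uniform grid rather than evaluated per particle.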

  4. QUIESCENT PROMINENCES IN THE ERA OF ALMA: SIMULATED OBSERVATIONS USING THE 3D WHOLE-PROMINENCE FINE STRUCTURE MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Gunár, Stanislav; Heinzel, Petr [Astronomical Institute, The Czech Academy of Sciences, 25165 Ondřejov (Czech Republic); Mackay, Duncan H. [School of Mathematics and Statistics, University of St Andrews, North Haugh, St Andrews KY16 9SS (United Kingdom); Anzer, Ulrich [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, D-85740 Garching bei München (Germany)

    2016-12-20

We use the detailed 3D whole-prominence fine structure model to produce the first simulated high-resolution ALMA observations of a modeled quiescent solar prominence. The maps of synthetic brightness temperature and optical thickness shown in the present paper are produced using a visualization method for synthesis of the submillimeter/millimeter radio continua. We have obtained simulated observations of both the prominence at the limb and the filament on the disk at wavelengths covering a broad range that encompasses the full potential of ALMA. We demonstrate here the extent to which the small-scale and large-scale prominence and filament structures will be visible in ALMA observations spanning both the optically thin and optically thick regimes. We analyze the relationship between the brightness and kinetic temperatures of the prominence plasma. We also illustrate the opportunities ALMA will provide for studying the thermal structure of the prominence plasma, from the cores of the cool prominence fine structures to the prominence–corona transition region. In addition, we show that detailed 3D modeling of entire prominences with their numerous fine structures will be important for the correct interpretation of future ALMA observations of prominences.
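The brightness/kinetic temperature relationship analyzed here follows the standard radiative-transfer result for an isothermal slab in the Rayleigh-Jeans regime; a sketch with illustrative numbers (the model itself synthesizes full continua rather than using this single-slab formula):

```python
import numpy as np

def brightness_temperature(t_kin, tau):
    """Rayleigh-Jeans brightness temperature of an isothermal slab with no
    background: T_b = T_kin * (1 - exp(-tau)). Optically thick (tau >> 1)
    gives T_b -> T_kin; optically thin (tau << 1) gives T_b ~ T_kin * tau."""
    return t_kin * (1.0 - np.exp(-tau))

thick = brightness_temperature(8000.0, 10.0)   # thick: T_b traces kinetic temperature
thin = brightness_temperature(8000.0, 0.01)    # thin: T_b scales with optical thickness
```

This is why ALMA observations in the optically thick regime probe plasma temperature directly, while the thin regime mixes temperature and column density.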

  5. Numerical modelling of hydro-morphological processes dominated by fine suspended sediment in a stormwater pond

    Science.gov (United States)

    Guan, Mingfu; Ahilan, Sangaralingam; Yu, Dapeng; Peng, Yong; Wright, Nigel

    2018-01-01

Fine sediment plays crucial and multiple roles in the hydrological, ecological and geomorphological functioning of river systems. This study employs a two-dimensional (2D) numerical model to track the hydro-morphological processes dominated by fine suspended sediment, including the prediction of sediment concentration in the flow body, and the erosion and deposition caused by sediment transport. The model is governed by the 2D full shallow water equations, with which an advection-diffusion equation for fine sediment is coupled. Bed erosion and sedimentation are updated by a bed deformation model based on local sediment entrainment and settling fluxes in the flow body. The model is initially validated against three laboratory-scale experiments where suspended load plays a dominant role. Satisfactory simulation results confirm the model's capability to capture hydro-morphodynamic processes dominated by fine suspended sediment at the laboratory scale. Applications to sedimentation in a stormwater pond are conducted to develop a process-based understanding of fine sediment dynamics over a variety of flow conditions. Urban flows with 5-year, 30-year and 100-year return periods, as well as the extreme flood event of 2012, are simulated. The modelled results deliver a step change in understanding fine sediment dynamics in stormwater ponds. The model is capable of quantitatively simulating and qualitatively assessing the performance of a stormwater pond in managing urban water quantity and quality.
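The coupled advection-diffusion with entrainment/settling bed exchange can be illustrated in one dimension with an explicit upwind step. This is a sketch with illustrative parameters, not the study's 2-D shallow-water model:

```python
import numpy as np

def advance_sediment(c, u, D, E, w_s, dx, dt):
    """One explicit step of the 1-D advection-diffusion equation for depth-
    averaged suspended-sediment concentration c, with entrainment flux E and
    settling flux w_s*c as the bed-exchange source (first-order upwind
    advection, periodic domain)."""
    adv = -u * (c - np.roll(c, 1)) / dx                         # upwind, valid for u > 0
    diff = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    return c + dt * (adv + diff + E - w_s * c)

# entrainment balanced by settling relaxes c toward equilibrium E / w_s = 0.1:
c = np.zeros(100)
for _ in range(10000):
    c = advance_sediment(c, u=0.5, D=0.01, E=1e-4, w_s=1e-3, dx=1.0, dt=0.5)
```

The bed deformation step would then integrate the local net flux `w_s*c - E` to update bed elevation.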

  6. Dynamic Simulation of Random Packing of Polydispersive Fine Particles

    Science.gov (United States)

    Ferraz, Carlos Handrey Araujo; Marques, Samuel Apolinário

    2018-02-01

In this paper, we perform molecular dynamics (MD) simulations to study the two-dimensional packing process of both monosized and randomly sized particles with radii ranging from 1.0 to 7.0 μm. The initial positions as well as the radii of five thousand fine particles were defined inside a rectangular box by using a random number generator. Both the translational and rotational movements of each particle were considered in the simulations. In order to deal with interacting fine particles, we take into account both the contact forces and the long-range dispersive forces. We account for normal and static/sliding tangential friction forces between particles, and between particle and wall, by means of a linear model approach, while the long-range dispersive forces are computed using a Lennard-Jones-like potential. The packing processes were studied assuming different long-range interaction strengths. We carry out statistical calculations of quantities such as packing density, mean coordination number, kinetic energy, and radial distribution function as the system evolves over time. We find that the long-range dispersive forces can strongly influence the packing process dynamics, as they may form large particle clusters, depending on the intensity of the long-range interaction strength.
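A pairwise force kernel in the spirit described above, combining a linear contact model with a Lennard-Jones-like dispersive term, can be sketched as follows. All constants are illustrative, and the tangential friction terms are omitted:

```python
import numpy as np

def particle_force(xi, xj, ri, rj, kn=1e4, eps=1e-8):
    """2-D force on particle i from particle j: linear spring repulsion on
    overlap (contact model) plus a Lennard-Jones-like term whose length
    scale is the contact distance, giving long-range attraction."""
    d = xi - xj
    r = np.linalg.norm(d)
    n = d / r                          # unit vector from j toward i
    f = np.zeros(2)
    overlap = (ri + rj) - r
    if overlap > 0:
        f += kn * overlap * n          # elastic repulsion during contact
    sigma = ri + rj                    # LJ length set by contact distance
    sr6 = (sigma / r) ** 6
    f += 24 * eps / r * (2 * sr6**2 - sr6) * n   # attractive for r > 2**(1/6)*sigma
    return f

# separated particles attract (dispersive pull on i toward j at +x):
f_att = particle_force(np.array([0.0, 0.0]), np.array([3.0, 0.0]), 1.0, 1.0)
# overlapping particles repel:
f_rep = particle_force(np.array([0.0, 0.0]), np.array([1.8, 0.0]), 1.0, 1.0)
```

Varying `eps` plays the role of the long-range interaction strength studied in the paper.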

  7. A triple-scale crystal plasticity modeling and simulation on size effect due to fine-graining

    International Nuclear Information System (INIS)

    Kurosawa, Eisuke; Aoyagi, Yoshiteru; Tadano, Yuichi; Shizawa, Kazuyuki

    2010-01-01

In this paper, a triple-scale crystal plasticity model bridging three hierarchical material structures, i.e., dislocation structure, grain aggregate, and practical macroscopic structure, is developed. Geometrically necessary (GN) dislocation density and GN incompatibility are employed to describe isolated dislocations and dislocation pairs in a grain, respectively. The homogenization method is then introduced into the GN dislocation-crystal plasticity model to derive the governing equation of the macroscopic structure with mathematical and physical consistency. Using the present model, a triple-scale FE simulation bridging the above three hierarchical structures is carried out for f.c.c. polycrystals with different mean grain sizes. It is shown that the present model can qualitatively reproduce the size effects observed in ultrafine-grained macroscopic specimens, i.e., the increase of initial yield stress, the decrease of the hardening ratio after reaching tensile strength, and the reduction of tensile ductility with decreasing grain size. Moreover, the relationship between macroscopic yielding of the specimen and microscopic grain yielding is discussed, and the mechanism of the poor tensile ductility due to fine-graining is clarified. (author)
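The grain-size dependence of initial yield stress noted above is conventionally summarized by the empirical Hall-Petch relation; a sketch with illustrative constants (the paper derives the effect from GN dislocation densities rather than from this law):

```python
import math

def yield_stress(d, sigma0=50.0, k=0.2):
    """Hall-Petch scaling: sigma_y = sigma0 + k / sqrt(d), with mean grain
    size d in meters, sigma0 in MPa, k in MPa*m^0.5 (illustrative values)."""
    return sigma0 + k / math.sqrt(d)

# finer grains -> higher initial yield stress:
coarse = yield_stress(100e-6)   # 100 um grains
fine = yield_stress(0.5e-6)     # 0.5 um (ultrafine) grains
```

A physics-based model such as the one in this record aims to reproduce (and explain deviations from) this trend rather than assume it.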

  8. Multiscale modeling and nested simulations of three-dimensional ionospheric plasmas: Rayleigh–Taylor turbulence and nonequilibrium layer dynamics at fine scales

    International Nuclear Information System (INIS)

    Mahalov, Alex

    2014-01-01

    Multiscale modeling and high resolution three-dimensional simulations of nonequilibrium ionospheric dynamics are major frontiers in the field of space sciences. The latest developments in fast computational algorithms and novel numerical methods have advanced reliable forecasting of ionospheric environments at fine scales. These new capabilities include improved physics-based predictive modeling, nesting and implicit relaxation techniques that are designed to integrate models of disparate scales. A range of scales, from mesoscale to ionospheric microscale, are included in a 3D modeling framework. Analyses and simulations of primary and secondary Rayleigh–Taylor instabilities in the equatorial spread F (ESF), the response of the plasma density to the neutral turbulent dynamics, and wave breaking in the lower region of the ionosphere and nonequilibrium layer dynamics at fine scales are presented for coupled systems (ions, electrons and neutral winds), thus enabling studies of mesoscale/microscale dynamics for a range of altitudes that encompass the ionospheric E and F layers. We examine the organizing mixing patterns for plasma flows, which occur due to polarized gravity wave excitations in the neutral field, using Lagrangian coherent structures (LCS). LCS objectively depict the flow topology and the extracted scintillation-producing irregularities that indicate a generation of ionospheric density gradients, due to the accumulation of plasma. The scintillation effects in propagation, through strongly inhomogeneous ionospheric media, are induced by trapping electromagnetic (EM) waves in parabolic cavities, which are created by the refractive index gradients along the propagation paths. (paper)

  9. Simulating Fine-Scale Marine Pollution Plumes for Autonomous Robotic Environmental Monitoring

    Directory of Open Access Journals (Sweden)

    Muhammad Fahad

    2018-05-01

Marine plumes exhibit characteristics such as intermittency, sinuous structure, shape and flow field coherency, and a time-varying concentration profile. Due to the lack of experimental quantification of these characteristics for marine plumes, existing work often assumes that marine plumes behave like aerial plumes, which are commonly modeled by filament-based Lagrangian models. Our previous field experiments with Rhodamine dye plumes at Makai Research Pier at Oahu, Hawaii revealed that marine plumes show characteristics qualitatively similar to aerial plumes, but quantitatively the two are disparate. Based on the field data collected, this paper presents a calibrated Eulerian plume model that reproduces the qualitative and quantitative characteristics exhibited by experimentally generated marine plumes. We propose a modified model with an intermittent source and implement it in a Robot Operating System (ROS) based simulator. Concentration time series of stationary sampling points and of dynamic sampling points across cross-sections and plume fronts are collected and analyzed for statistical parameters of the simulated plume. These parameters are then compared with statistical parameters from experimentally generated plumes. The comparison validates that the simulated plumes exhibit fine-scale qualitative and quantitative characteristics similar to experimental plumes. The ROS plume simulator facilitates future evaluations of environmental monitoring strategies by marine robots and is made available for community use.

  10. A Modelling Approach on Fine Particle Spatial Distribution for Street Canyons in Asian Residential Community

    Science.gov (United States)

    Ling, Hong; Lung, Shih-Chun Candice; Uhrner, Ulrich

    2016-04-01

Rapidly increasing urban pollution poses severe health risks. Fine-particle pollution in particular is considered closely related to respiratory and cardiovascular disease. In this work, ambient fine particles are studied in the street canyons of a typical Asian residential community using a computational fluid dynamics (CFD) dispersion modelling approach. The community is characterised by an artery road with busy traffic of about 4000 light vehicles (mainly cars and motorcycles) per hour at rush hours, three streets with hundreds of light vehicles per hour at rush hours, and several small lanes with less traffic. The objective is to study the spatial distribution of ambient fine-particle concentrations within micro-environments, in order to assess the fine-particle exposure of people living in the community. The GRAL modelling system is used to simulate and assess the emission and dispersion of traffic-related fine particles within the community. Traffic emission factors and the traffic situation are assigned using both field observations and local emission inventory data. High-resolution digital elevation model (DEM) data and building height data are used to resolve the topographical features. Air quality monitoring and mobile monitoring within the community are used to validate the simulation results. Using this modelling approach, the dispersion of fine particles in street canyons is simulated; the impact of wind conditions and street orientation is investigated; the contributions of car and motorcycle emissions are quantified separately; and the residents' exposure to fine particles is assessed. The study is funded by "Taiwan Megacity Environmental Research (II)-chemistry and environmental impacts of boundary layer aerosols (Year 2-3) (103-2111-M-001-001-); Spatial variability and organic markers of aerosols (Year 3) (104-2111-M-001-005-)".

  11. 3D visualization of ultra-fine ICON climate simulation data

    Science.gov (United States)

    Röber, Niklas; Spickermann, Dela; Böttinger, Michael

    2016-04-01

Advances in high performance computing and model development allow the simulation of finer and more detailed climate experiments. The new ICON model is based on an unstructured triangular grid and can be used for a wide range of applications, ranging from global coupled climate simulations down to very detailed, high-resolution regional experiments. It consists of an atmospheric and an oceanic component and scales very well to high numbers of cores. This allows us to conduct very detailed climate experiments with ultra-fine resolutions. ICON is jointly developed in partnership with DKRZ by the Max Planck Institute for Meteorology and the German Weather Service. This presentation discusses our current workflow for analyzing and visualizing this high-resolution data. The ICON model has been used for eddy-resolving simulations, and we have developed specific plugins for the freely available visualization software ParaView and Vapor that allow us to read and handle data at this volume. Within ParaView, we can additionally compare prognostic variables with performance data side by side to investigate the performance and scalability of the model. With the simulation running in parallel on several hundred nodes, an even load balance is imperative. In our presentation we show visualizations of high-resolution ICON oceanographic and HDCP2 atmospheric simulations that were created using ParaView and Vapor. Furthermore, we discuss our current efforts to improve our visualization capabilities, exploring the potential of regular in-situ visualization as well as of in-situ compression / post visualization.

  12. Haptic rendering for simulation of fine manipulation

    CERN Document Server

    Wang, Dangxiao; Zhang, Yuru

    2014-01-01

    This book introduces the latest progress in six degrees of freedom (6-DoF) haptic rendering with the focus on a new approach for simulating force/torque feedback in performing tasks that require dexterous manipulation skills. One of the major challenges in 6-DoF haptic rendering is to resolve the conflict between high speed and high fidelity requirements, especially in simulating a tool interacting with both rigid and deformable objects in a narrow space and with fine features. The book presents a configuration-based optimization approach to tackle this challenge. Addressing a key issue in man

  13. Internal variability of fine-scale components of meteorological fields in extended-range limited-area model simulations with atmospheric and surface nudging

    Science.gov (United States)

    Separovic, Leo; Husain, Syed Zahid; Yu, Wei

    2015-09-01

Internal variability (IV) in dynamical downscaling with limited-area models (LAMs) represents a source of error inherent to the downscaled fields, which originates from the models' sensitive dependence on arbitrarily small modifications. If IV is large, it may impose the need for probabilistic verification of the downscaled information. Atmospheric spectral nudging (ASN) can reduce IV in LAMs, as it constrains the large-scale components of LAM fields in the interior of the computational domain and thus prevents any considerable penetration of sensitively dependent deviations into the range of large scales. Using initial-condition ensembles, the present study quantifies the impact of ASN on IV in LAM simulations in the range of fine scales that are not controlled by spectral nudging. Four simulation configurations that all include strong ASN but differ in the nudging settings are considered. In a fifth configuration, grid nudging of land surface variables toward high-resolution surface analyses is applied. The results show that IV at scales larger than 300 km can be suppressed by selecting an appropriate ASN setup. At scales between 300 and 30 km, however, the hourly near-surface temperature, humidity, and winds are only partly reproducible in all configurations. Nudging the land surface variables is found to have the potential to significantly reduce IV, particularly for fine-scale temperature and humidity. On the other hand, hourly precipitation accumulations at these scales are generally irreproducible in all configurations, and a probabilistic approach to downscaling is therefore recommended.
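The large-scale constraint that spectral nudging imposes can be sketched as relaxing only the low-wavenumber Fourier components of a field toward the driving data while leaving fine scales free. A minimal 2-D sketch; the wavenumber cutoff and relaxation coefficient are illustrative, not the study's settings:

```python
import numpy as np

def spectral_nudge(field, driver, k_max, alpha=0.1):
    """Relax only the large-scale (low-wavenumber) Fourier components of a
    limited-area field toward the driving field; fine scales evolve freely."""
    F = np.fft.fft2(field)
    G = np.fft.fft2(driver)
    ky = np.fft.fftfreq(field.shape[0]) * field.shape[0]
    kx = np.fft.fftfreq(field.shape[1]) * field.shape[1]
    KY, KX = np.meshgrid(ky, kx, indexing="ij")
    large = (np.abs(KX) <= k_max) & (np.abs(KY) <= k_max)
    F[large] += alpha * (G[large] - F[large])    # nudge large scales only
    return np.real(np.fft.ifft2(F))

# repeated nudging drives large scales to the driver, leaves fine scales intact:
rng = np.random.default_rng(0)
field = rng.random((32, 32))
nudged = field.copy()
for _ in range(200):
    nudged = spectral_nudge(nudged, np.zeros((32, 32)), k_max=3)
```

This separation is exactly why IV at scales finer than the nudged range, the subject of this study, is not directly controlled.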

  14. Sunspot Modeling: From Simplified Models to Radiative MHD Simulations

    Directory of Open Access Journals (Sweden)

    Rolf Schlichenmaier

    2011-09-01

We review our current understanding of sunspots, from the scales of their fine structure to their large-scale (global) structure, including the processes of their formation and decay. Recently, sunspot models have undergone a dramatic change. In the past, several aspects of sunspot structure were addressed by static MHD models with parametrized energy transport. Models of sunspot fine structure have relied heavily on strong assumptions about flow and field geometry (e.g., flux tubes, "gaps", convective rolls), motivated in part by the observed filamentary structure of penumbrae or by the necessity of explaining the substantial energy transport required to maintain penumbral brightness. However, none of these models could self-consistently explain all aspects of penumbral structure (energy transport, filamentation, Evershed flow). In recent years, 3D radiative MHD simulations have advanced dramatically, to the point at which models of complete sunspots with sufficient resolution to capture sunspot fine structure are feasible. Here, overturning convection is the central element responsible for energy transport, for the filamentation leading to fine structure, and for the driving of strong outflows. On larger scales, these models are also in the process of addressing the subsurface structure of sunspots as well as sunspot formation. With this shift in modeling capabilities and the recent advances in high-resolution observations, future research will be guided by comparing observation and theory.

  15. Fine-root mortality rates in a temperate forest: Estimates using radiocarbon data and numerical modeling

    Energy Technology Data Exchange (ETDEWEB)

    Riley, W.J.; Gaudinski, J.B.; Torn, M.S.; Joslin, J.D.; Hanson, P.J.

    2009-09-01

We used an inadvertent whole-ecosystem ¹⁴C label at a temperate forest in Oak Ridge, Tennessee, USA to develop a model (Radix1.0) of fine-root dynamics. Radix simulates two live-root pools, two dead-root pools, non-normally distributed root mortality turnover times, a stored carbon (C) pool, and seasonal growth and respiration patterns. We applied Radix to analyze measurements from two root size classes (< 0.5 and 0.5-2.0 mm diameter) and three soil-depth increments (O horizon, 0-15 cm and 30-60 cm). Predicted live-root turnover times were < 1 yr and 10 yr for short- and long-lived pools, respectively. Dead-root pools had decomposition turnover times of 2 yr and 10 yr. Realistic characterization of C flows through fine roots requires a model with two live fine-root populations, two dead fine-root pools, and root respiration. These are the first fine-root turnover time estimates that take into account respiration, storage, seasonal growth patterns, and non-normal turnover time distributions. The presence of a root population with decadal turnover times implies a lower amount of belowground net primary production used to grow fine-root tissue than is currently predicted by models with a single annual turnover pool.
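The two-live-pool, two-dead-pool structure can be sketched as a set of first-order turnover equations. A simplified sketch: the pool structure mirrors the description above, but the seasonality, storage pool, and respiration terms are omitted, and the parameter values are only loosely based on the quoted turnover times:

```python
def root_pools(years, input_c, f_short=0.7, tau_short=1.0, tau_long=10.0,
               tau_dead_fast=2.0, tau_dead_slow=10.0, dt=0.01):
    """Two live and two dead fine-root carbon pools with first-order
    turnover: live roots die into the dead pools, which then lose C to
    decomposition. Forward-Euler integration; times in years."""
    live_s = live_l = dead_f = dead_s = 0.0
    for _ in range(round(years / dt)):
        mort_s = live_s / tau_short          # mortality fluxes
        mort_l = live_l / tau_long
        live_s += dt * (f_short * input_c - mort_s)
        live_l += dt * ((1.0 - f_short) * input_c - mort_l)
        dead_f += dt * (mort_s - dead_f / tau_dead_fast)
        dead_s += dt * (mort_l - dead_s / tau_dead_slow)
    return live_s, live_l, dead_f, dead_s

# steady state: live stock = input * tau, dead stock = mortality * tau_decomp
live_s, live_l, dead_f, dead_s = root_pools(200.0, input_c=1.0)
```

The long-lived pool's large steady-state stock relative to its input is what drives the paper's conclusion about overestimated belowground production.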

  16. Modelling coupled sedimentation and consolidation of fine slurries

    Energy Technology Data Exchange (ETDEWEB)

    Masala, S. [Klohn Crippen Berger, Calgary, AB (Canada)

    2010-07-01

    This article presented a model to simulate and successfully predict the essential elements of sedimentation and consolidation as a coupled process, bringing together separately developed models from chemistry and geology/geotechnical engineering, respectively. The derived model is for a 1-dimensional simultaneous sedimentation and consolidation of a solid-liquid suspension that uses permeability as the unifying concept for the hydrodynamic interaction between solid and liquid in a suspension. The numerical solution relies on an explicit finite difference procedure in material coordinates, and an Euler forward-marching scheme was used for advancing the solution in time. The problem of internal discontinuities was solved by way of convenient numerical solutions and Lagrangian coordinates. Java-based SECO software with a user-friendly graphical user interface (GUI) was used to implement the model, allowing the solution process to be visualized and animated. The software functionality along with GUI and programming issues were discussed at length. A fine-grained suspension data set was used to validate the model. 10 refs., 12 figs.

  17. Rosé Wine Fining Using Polyvinylpolypyrrolidone: Colorimetry, Targeted Polyphenomics, and Molecular Dynamics Simulations.

    Science.gov (United States)

    Gil, Mélodie; Avila-Salas, Fabian; Santos, Leonardo S; Iturmendi, Nerea; Moine, Virginie; Cheynier, Véronique; Saucier, Cédric

    2017-12-06

Polyvinylpolypyrrolidone (PVPP) is a fining agent polymer used in winemaking to adjust rosé wine color and to prevent organoleptic degradations by reducing polyphenol content. The impact of this polymer on color parameters and polyphenols of rosé wines was investigated, and the binding specificity of polyphenols toward PVPP was determined. Color measured by colorimetry decreased after treatment, thus confirming the adsorption of anthocyanins and other pigments. Phenolic composition was determined before and after fining by targeted polyphenomics (Ultra-Performance Liquid Chromatography (UPLC)-Electrospray Ionization (ESI)-Mass Spectrometry (MS/MS)). MS analysis showed adsorption differences among polyphenol families. Flavonols (42%) and flavanols (64%) were the most affected. Anthocyanins were not strongly adsorbed on average (12%), but a specific adsorption of coumaroylated anthocyanins was observed (37%). Intermolecular interactions were also studied using molecular dynamics simulations. Relative adsorptions of flavanols were correlated with the calculated interaction energies. The specific affinity of coumaroylated anthocyanins toward PVPP was also well explained by the molecular modeling.

  18. Transforming an educational virtual reality simulation into a work of fine art.

    Science.gov (United States)

    Panaiotis; Addison, Laura; Vergara, Víctor M; Hakamata, Takeshi; Alverson, Dale C; Saiki, Stanley M; Caudell, Thomas Preston

    2008-01-01

    This paper outlines user interface and interaction issues, technical considerations, and problems encountered in transforming an educational VR simulation of a reified kidney nephron into an interactive artwork appropriate for a fine arts museum.

  19. Quantum Big Bang without fine-tuning in a toy-model

    International Nuclear Information System (INIS)

    Znojil, Miloslav

    2012-01-01

The question of possible physics before the Big Bang (or after the Big Crunch) is addressed via a schematic non-covariant simulation of the loss of observability of the Universe. Our model is drastically simplified by the reduction of its degrees of freedom to a mere finite number. The Hilbert space of states is then allowed to be time-dependent and singular at the critical time t = t_c. This option circumvents several traditional theoretical difficulties in a way illustrated via solvable examples. In particular, the unitary evolution of our toy-model quantum Universe is shown to be interruptible, without any fine-tuning, at the instant of its bang or collapse t = t_c.

  20. Quantum Big Bang without fine-tuning in a toy-model

    Science.gov (United States)

    Znojil, Miloslav

    2012-02-01

The question of possible physics before the Big Bang (or after the Big Crunch) is addressed via a schematic non-covariant simulation of the loss of observability of the Universe. Our model is drastically simplified by the reduction of its degrees of freedom to a mere finite number. The Hilbert space of states is then allowed to be time-dependent and singular at the critical time t = t_c. This option circumvents several traditional theoretical difficulties in a way illustrated via solvable examples. In particular, the unitary evolution of our toy-model quantum Universe is shown to be interruptible, without any fine-tuning, at the instant of its bang or collapse t = t_c.

  1. Findings and Challenges in Fine-Resolution Large-Scale Hydrological Modeling

    Science.gov (United States)

    Her, Y. G.

    2017-12-01

Fine-resolution large-scale (FL) modeling can provide an overall picture of the hydrological cycle and transport while taking unique local conditions into account in the simulation. It can also help develop water resources management plans that are consistent across spatial scales by describing the spatial consequences of decisions and hydrological events extensively. FL modeling is expected to become common in the near future as global-scale remotely sensed data emerge and computing resources advance rapidly. There are several spatially distributed models available for hydrological analyses. Some rely on numerical methods such as finite difference/element methods (FDM/FEM) which, to describe two-dimensional overland processes, require either excessive computing resources to manipulate large matrices (implicit schemes) or small simulation time intervals to maintain the stability of the solution (explicit schemes). Others make unrealistic assumptions, such as a constant overland flow velocity, to reduce the computational load of the simulation. Thus, simulation efficiency often comes at the expense of precision and reliability in FL modeling. Here, we introduce a new FL continuous hydrological model and its application to four watersheds in different landscapes, with sizes from 3.5 km² to 2,800 km², at a spatial resolution of 30 m on an hourly basis. The model provided acceptable accuracy statistics in reproducing hydrological observations made in the watersheds. The modeling outputs, including maps of simulated travel time, runoff depth, soil water content, and groundwater recharge, were animated, visualizing the dynamics of hydrological processes occurring in the watersheds during and between storm events. Findings and challenges are discussed in the context of modeling efficiency, accuracy, and reproducibility, which we found can be improved by employing advanced computing techniques and hydrological understanding, by using remotely sensed hydrological
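The small-time-step requirement of explicit schemes mentioned above comes from the CFL stability condition; for shallow-water-type overland flow the admissible step can be sketched as follows (a hypothetical helper with illustrative values; only the 30 m grid spacing comes from the record):

```python
import math

def stable_timestep(h_max, u_max, dx, g=9.81, courant=0.7):
    """Largest stable explicit time step from the CFL condition for
    shallow-water-type overland flow: dt <= C * dx / (|u| + sqrt(g*h))."""
    return courant * dx / (abs(u_max) + math.sqrt(g * h_max))

dt = stable_timestep(h_max=0.5, u_max=2.0, dx=30.0)  # 30 m cells, a few seconds per step
```

At 30 m resolution over thousands of square kilometers, steps of a few seconds are what make explicit FL simulations expensive.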

  2. Censored rainfall modelling for estimation of fine-scale extremes

    Science.gov (United States)

    Cross, David; Onof, Christian; Winter, Hugo; Bernardara, Pietro

    2018-01-01

Reliable estimation of rainfall extremes is essential for drainage system design, flood mitigation, and risk quantification. However, traditional techniques lack physical realism, and extrapolation can be highly uncertain. In this study, we improve the physical basis for short-duration extreme rainfall estimation by simulating the heavy portion of the rainfall record mechanistically using the Bartlett-Lewis rectangular pulse (BLRP) model. Mechanistic rainfall models have tended to underestimate rainfall extremes at fine temporal scales. Despite this, the simple process representation of rectangular pulse models is appealing in the context of extreme rainfall estimation because it emulates the known phenomenology of rainfall generation. A censored approach to Bartlett-Lewis model calibration is proposed and performed for single-site rainfall from two gauges in the UK and Germany. Extreme rainfall estimation is performed for each gauge at the 5, 15, and 60 min resolutions, and considerations for censor selection are discussed.
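The rectangular-pulse mechanism can be sketched as a clustered point process: storms arrive by a Poisson process, each storm spawns rain cells, and each cell deposits a rectangular intensity pulse. A simplified sketch with illustrative parameter values, not the calibrated model of this study:

```python
import numpy as np

rng = np.random.default_rng(42)

def blrp_rainfall(t_end, lam=0.1, beta=0.5, gamma=0.2, eta=2.0, mu_x=5.0, dt=1/12):
    """Simplified Bartlett-Lewis rectangular-pulse rainfall: storms arrive as
    a Poisson process (rate lam per hour); each storm generates cells at rate
    beta over an exponentially distributed activity window (rate gamma); each
    cell is a rectangular pulse with exponential duration (rate eta) and
    exponentially distributed intensity with mean mu_x (mm/h). Returns the
    intensity series at resolution dt (hours)."""
    n_bins = round(t_end / dt)
    rain = np.zeros(n_bins)
    t = rng.exponential(1 / lam)
    while t < t_end:
        active = rng.exponential(1 / gamma)      # storm activity window
        cells = [t]                              # one cell at the storm origin
        s = t + rng.exponential(1 / beta)
        while s < t + active:
            cells.append(s)
            s += rng.exponential(1 / beta)
        for c in cells:
            dur = rng.exponential(1 / eta)
            x = rng.exponential(mu_x)
            i0 = int(c / dt)
            if i0 >= n_bins:
                continue
            i1 = min(int((c + dur) / dt) + 1, n_bins)
            rain[i0:i1] += x                     # rectangular pulse
        t += rng.exponential(1 / lam)
    return rain

series = blrp_rainfall(24.0 * 365)  # one year at 5-min (1/12 h) resolution
```

A censored calibration would fit such a model's statistics only to the heavy portion of the observed record.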

  3. An innovative computer design for modeling forest landscape change in very large spatial extents with fine resolutions

    Science.gov (United States)

    Jian Yang; Hong S. He; Stephen R. Shifley; Frank R. Thompson; Yangjian. Zhang

    2011-01-01

    Although forest landscape models (FLMs) have benefited greatly from ongoing advances of computer technology and software engineering, computing capacity remains a bottleneck in the design and development of FLMs. Computer memory overhead and run time efficiency are primary limiting factors when applying forest landscape models to simulate large landscapes with fine...

  4. An Efficient Upscaling Process Based on a Unified Fine-scale Multi-Physics Model for Flow Simulation in Naturally Fractured Carbonate Karst Reservoirs

    KAUST Repository

    Bi, Linfeng

    2009-01-01

    The main challenges in modeling fluid flow through naturally-fractured carbonate karst reservoirs are how to address the various flow physics in complex geological architectures created by the presence of vugs and caves, which are connected via fracture networks at multiple scales. In this paper, we present a unified multi-physics model that adapts to the complex flow regimes of naturally-fractured carbonate karst reservoirs. This approach generalizes the Stokes-Brinkman model (Popov et al. 2007). The fracture networks provide the essential connection between the caves in carbonate karst reservoirs. It is thus very important to resolve the flow in the fracture network and the interaction between fractures and caves to better understand the complex flow behavior. The idea is to use the Stokes-Brinkman model to represent flow through the rock matrix, void caves, and intermediate flows in very high permeability regions, and to use an idea similar to the discrete fracture network model to represent flow in the fracture network. Consequently, various numerical solution strategies can be efficiently applied to greatly improve the computational efficiency of flow simulations. We have applied this unified multi-physics model as a fine-scale flow solver in scale-up computations. Both local and global scale-up are considered. It is found that global scale-up is much more accurate than local scale-up. Global scale-up requires the solution of global flow problems on the fine grid, which is generally computationally expensive. The proposed model has the ability to deal with a large number of fractures and caves, which facilitates the application of the Stokes-Brinkman model in global scale-up computation. The proposed model flexibly adapts to the different flow physics in naturally-fractured carbonate karst reservoirs in a simple and effective way. It certainly extends modeling and predicting capability for the efficient development of this important type of reservoir.

  5. Monte Carlo Simulations of Electron Energy-Loss Spectra with the Addition of Fine Structure from Density Functional Theory Calculations.

    Science.gov (United States)

    Attarian Shandiz, Mohammad; Guinel, Maxime J-F; Ahmadi, Majid; Gauvin, Raynald

    2016-02-01

    A new approach is presented to introduce the fine structure of core-loss excitations into the electron energy-loss spectra of ionization edges by Monte Carlo simulations based on an optical oscillator model. The optical oscillator strength is refined using the calculated electron energy-loss near-edge structure by density functional theory calculations. This approach can predict the effects of multiple scattering and thickness on the fine structure of ionization edges. In addition, effects of the fitting range for background removal and the integration range under the ionization edge on signal-to-noise ratio are investigated.
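
    The thickness effect on fine structure can be sketched with the standard plural-scattering construction (a Poisson-weighted sum of self-convolutions of the single-scattering distribution), shown here as a simple stand-in for the full Monte Carlo simulation; the energy grid and distribution are illustrative.

```python
import numpy as np
from math import exp, factorial

def plural_scattering(ssd, t_over_lambda, n_max=8):
    """Plural-scattering spectrum from a normalized single-scattering
    distribution (SSD) on a uniform energy-loss grid.

    The n-fold scattering term is the n-fold self-convolution of the SSD,
    weighted by the Poisson probability of n inelastic events for a sample
    of thickness t (in units of the inelastic mean free path lambda).
    """
    ssd = np.asarray(ssd, dtype=float)
    ssd = ssd / ssd.sum()
    spectrum = np.zeros_like(ssd)
    conv = np.zeros_like(ssd)
    conv[0] = 1.0  # n = 0 term: zero-loss delta (no inelastic scattering)
    for n in range(n_max + 1):
        weight = exp(-t_over_lambda) * t_over_lambda**n / factorial(n)
        spectrum += weight * conv
        conv = np.convolve(conv, ssd)[: len(ssd)]  # next self-convolution
    return spectrum
```

Thicker samples (larger t/λ) shift weight from the zero-loss channel into multiple-scattering replicas of the edge, blurring the fine structure.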

  6. Demonstrating the Uneven Importance of Fine-Scale Forest Structure on Snow Distributions using High Resolution Modeling

    Science.gov (United States)

    Broxton, P. D.; Harpold, A. A.; van Leeuwen, W.; Biederman, J. A.

    2016-12-01

    Quantifying the amount of snow in forested mountainous environments, as well as how it may change due to warming and forest disturbance, is critical given its importance for water supply and ecosystem health. Forest canopies affect snow accumulation and ablation in ways that are difficult to observe and model. Furthermore, fine-scale forest structure can accentuate or diminish the effects of forest-snow interactions. Despite decades of research demonstrating the importance of fine-scale forest structure (e.g. canopy edges and gaps) on snow, we still lack a comprehensive understanding of where and when forest structure has the largest impact on snowpack mass and energy budgets. Here, we use a hyper-resolution (1 meter spatial resolution) mass and energy balance snow model called the Snow Physics and Laser Mapping (SnowPALM) model along with LIDAR-derived forest structure to determine where spatial variability of fine-scale forest structure has the largest influence on large scale mass and energy budgets. SnowPALM was set up and calibrated at sites representing diverse climates in New Mexico, Arizona, and California. Then, we compared simulations at different model resolutions (i.e. 1, 10, and 100 m) to elucidate the effects of including versus not including information about fine scale canopy structure. These experiments were repeated for different prescribed topographies (i.e. flat, 30% slope north, and south-facing) at each site. Higher resolution simulations had more snow at lower canopy cover, with the opposite being true at high canopy cover. Furthermore, there is considerable scatter, indicating that different canopy arrangements can lead to different amounts of snow, even when the overall canopy coverage is the same. 
This modeling is contributing to the development of a high resolution machine learning algorithm called the Snow Water Artificial Network (SWANN) model to generate predictions of snow distributions over much larger domains, which has implications

  7. Mathematical modeling of atmospheric fine particle-associated primary organic compound concentrations

    Science.gov (United States)

    Rogge, Wolfgang F.; Hildemann, Lynn M.; Mazurek, Monica A.; Cass, Glen R.; Simoneit, Bernd R. T.

    1996-08-01

    An atmospheric transport model has been used to explore the relationship between source emissions and ambient air quality for individual particle phase organic compounds present in primary aerosol source emissions. An inventory of fine particulate organic compound emissions was assembled for the Los Angeles area in the year 1982. Sources characterized included noncatalyst- and catalyst-equipped autos, diesel trucks, paved road dust, tire wear, brake lining dust, meat cooking operations, industrial oil-fired boilers, roofing tar pots, natural gas combustion in residential homes, cigarette smoke, fireplaces burning oak and pine wood, and plant leaf abrasion products. These primary fine particle source emissions were supplied to a computer-based model that simulates atmospheric transport, dispersion, and dry deposition based on the time series of hourly wind observations and mixing depths. Monthly average fine particle organic compound concentrations that would prevail if the primary organic aerosol were transported without chemical reaction were computed for more than 100 organic compounds within an 80 km × 80 km modeling area centered over Los Angeles. The monthly average compound concentrations predicted by the transport model were compared to atmospheric measurements made at monitoring sites within the study area during 1982. The predicted seasonal variation and absolute values of the concentrations of the more stable compounds are found to be in reasonable agreement with the ambient observations. While model predictions for the higher molecular weight polycyclic aromatic hydrocarbons (PAH) are in agreement with ambient observations, lower molecular weight PAH show much higher predicted than measured atmospheric concentrations in the particle phase, indicating atmospheric decay by chemical reactions or evaporation from the particle phase. The atmospheric concentrations of dicarboxylic acids and aromatic polycarboxylic acids greatly exceed the contributions that
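
    The dependence of primary tracer concentrations on hourly winds and mixing depths can be caricatured with a well-mixed box model, a drastic simplification of the transport/dispersion model used in the study; the function and parameter values below are illustrative only.

```python
import numpy as np

def box_model_concentration(emission, wind, mix_depth, length=80e3):
    """Hourly concentration of an unreactive primary tracer in a
    well-mixed urban box.  Steady-state balance (emission flux in =
    ventilation out) gives C = E * L / (u * H).

    emission  : areal emission rate (ug m^-2 s^-1), scalar or hourly array
    wind      : hourly wind speed (m/s)
    mix_depth : hourly mixing depth (m)
    length    : along-wind box dimension (m); 80 km as in the modeling area
    """
    wind = np.asarray(wind, dtype=float)
    mix_depth = np.asarray(mix_depth, dtype=float)
    return emission * length / (wind * mix_depth)  # ug/m^3
```

Doubling the ventilation (wind speed times mixing depth) halves the concentration, which is the leading-order behavior a monthly average inherits from the hourly meteorology.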

  8. Consolidation and atmospheric drying of fine oil sand tailings : Comparison of blind simulations and field scale results

    NARCIS (Netherlands)

    Vardon, P.J.; Yao, Y.; van Paassen, L.A.; van Tol, A.F.; Sego, D.C.; Wilson, G.W.; Beier, N.A.

    2016-01-01

    This paper presents a comparison between blind predictions of field tests of atmospheric drying of mature fine tailings (MFT) presented in IOSTC 2014 and field results. The numerical simulation of the consolidation and atmospheric drying of selfweight consolidating fine material is challenging and

  9. Setup of a Parameterized FE Model for the Die Roll Prediction in Fine Blanking using Artificial Neural Networks

    Science.gov (United States)

    Stanke, J.; Trauth, D.; Feuerhack, A.; Klocke, F.

    2017-09-01

    Die roll is a morphological feature of fine blanked sheared edges. The die roll reduces the functional part of the sheared edge. To compensate for the die roll, thicker sheet metal strips and secondary machining must be used. To avoid this, the influence of various fine blanking process parameters on the die roll has been studied experimentally and numerically, but there is still a lack of knowledge on the effects of some factors, and especially factor interactions, on the die roll. Recent advances in the field of artificial intelligence motivate the hybrid use of the finite element method and artificial neural networks to account for these non-considered parameters. Therefore, a set of simulations using a validated finite element model of fine blanking is first used to train an artificial neural network, which is then trained with thousands of simulated trials. Thus, the objective of this contribution is to develop an artificial neural network that reliably predicts the die roll. To this end, the setup of a fully parameterized 2D FE model is presented that will be used for batch training of an artificial neural network. The FE model enables automatic variation of the edge radii of the blank punch and die plate, the counter and blank holder force, the sheet metal thickness and part diameter, the V-ring height and position, the cutting velocity, as well as material parameters covered by the Hensel-Spittel model for 16MnCr5 (1.7131, AISI/SAE 5115). The FE model is validated using experimental trials. The result of this contribution is an FE model suitable for performing 9,623 simulations and passing the simulated die roll width and height automatically to an artificial neural network.
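
    As a sketch of the surrogate idea, a small one-hidden-layer network can be trained to map (scaled) process parameters to a die-roll response. The training data below are synthetic, generated from a made-up smooth function, since the actual FE results are not available here; every name and value is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training pairs: scaled process parameters -> die roll height.
# In the paper these come from batch FE simulations; here a made-up function.
X = rng.uniform(-1.0, 1.0, (200, 3))   # e.g. edge radius, V-ring height, force
y = (0.5 * X[:, 0] - 0.3 * X[:, 1] * X[:, 2] + 0.1)[:, None]

# One-hidden-layer MLP trained by plain full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(4000):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    # backpropagation of the mean squared error
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    gh = (err @ W2.T) * (1.0 - h**2)
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

Once trained on FE outputs, such a surrogate answers parameter-interaction queries in microseconds instead of re-running the FE model.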

  10. Simulations of fine structures on the zero field steps of Josephson tunnel junctions

    DEFF Research Database (Denmark)

    Scheuermann, M.; Chi, C. C.; Pedersen, Niels Falsig

    1986-01-01

    Fine structures on the zero field steps of long Josephson tunnel junctions are simulated for junctions with the bias current injected at the edges. These structures are due to the coupling between self-generated plasma oscillations and the traveling fluxon. The plasma oscillations are generated by the interaction of the bias current with the fluxon at the junction edges. On the first zero field step, the voltages of successive fine structures are given by V_n = (ħ/2e)(2ω_p/n), where n is an even integer.
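
    The quoted relation is easy to evaluate; the sketch below computes the fine-structure voltages for an assumed plasma frequency (the numerical frequency in the usage example is illustrative, not from the paper).

```python
HBAR = 1.054571817e-34   # reduced Planck constant (J s)
E_CHG = 1.602176634e-19  # elementary charge (C)

def fine_structure_voltage(omega_p, n):
    """V_n = (hbar / 2e) * (2 * omega_p / n) on the first zero field step,
    valid for even integers n; omega_p is the plasma frequency (rad/s)."""
    if n % 2 != 0:
        raise ValueError("n must be an even integer")
    return (HBAR / (2.0 * E_CHG)) * (2.0 * omega_p / n)
```

Successive structures scale as 1/n, so V_2 is exactly twice V_4.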

  11. Mathematical and Computational Aspects Related to Soil Modeling and Simulation

    Science.gov (United States)

    2017-09-26

    …and simulation challenges at the interface of applied math (homogenization, handling of discontinuous behavior, discrete vs. continuum representations) … topics: a) visco-elasto-plastic continuum models of geo-surface materials; b) discrete models of geo-surface materials (rocks/gravel/sand); c) mixed continuum-discrete representations, coarse-graining and fine-graining mathematical formulations; d) multi-physics aspects related to the modeling of …

  12. Fast Multiscale Reservoir Simulations using POD-DEIM Model Reduction

    KAUST Repository

    Ghasemi, Mohammadreza

    2015-02-23

    In this paper, we present a global-local model reduction for fast multiscale reservoir simulations in highly heterogeneous porous media, with applications to optimization and history matching. Our proposed approach identifies a low-dimensional structure of the solution space. We introduce an auxiliary variable (the velocity field) in our model reduction that allows achieving a high degree of model reduction, because the velocity field is conservative for any low-order reduced model in our framework. A typical global model reduction based on POD is a Galerkin projection and thus cannot guarantee local mass conservation; this can be observed in numerical simulations that use finite volume based approaches. The Discrete Empirical Interpolation Method (DEIM) is used to approximate the nonlinear functions of fine-grid quantities in Newton iterations, which makes the computational cost independent of the fine grid dimension. POD snapshots are inexpensively computed using local model reduction techniques based on the Generalized Multiscale Finite Element Method (GMsFEM), which provides (1) a hierarchical approximation of snapshot vectors, (2) adaptive computations using coarse grids, and (3) inexpensive global POD operations in a low-dimensional space on a coarse grid. By balancing the errors of the global and local reduced-order models, our new methodology can provide an error bound for the simulations. Our numerical results for two-phase immiscible flow show a substantial speed-up, and we compare our results to the standard POD-DEIM in a finite volume setup.
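
    The two core ingredients, POD basis extraction from snapshots and DEIM interpolation-point selection, can be sketched in a few lines; this is the generic textbook algorithm, not the GMsFEM-accelerated version described in the record.

```python
import numpy as np

def pod_basis(snapshots, tol=1e-6):
    """POD basis from a snapshot matrix (columns are states) via thin SVD,
    truncated so the retained singular values capture 1 - tol of the energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol) + 1)
    return U[:, :r]

def deim_indices(U):
    """Greedy DEIM interpolation-point selection for a basis U:
    pick the location of the largest interpolation residual at each step."""
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        # residual of the j-th basis vector w.r.t. current interpolation
        c = np.linalg.solve(U[np.ix_(idx, range(j))], U[idx, j])
        r = U[:, j] - U[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)
```

In a reduced-order Newton iteration, the nonlinear term is then evaluated only at the selected indices, which is what decouples the cost from the fine-grid dimension.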

  13. GIS Modeling of Solar Neighborhood Potential at a Fine Spatiotemporal Resolution

    Directory of Open Access Journals (Sweden)

    Annie Chow

    2014-05-01

    This research presents a 3D geographic information systems (GIS) modeling approach at a fine spatiotemporal resolution to assess solar potential for the development of smart net-zero energy communities. It is important to be able to accurately identify the key areas on the facades and rooftops of buildings that receive maximum solar radiation, in order to prevent losses in solar gain due to obstructions from surrounding buildings and topographic features. A model was created in ArcGIS to efficiently compute and iterate the hourly solar modeling and mapping process over a simulated year. The methodology was tested on a case study area located in southern Ontario, where two different 3D models of the site plan were analyzed. The accuracy of the work depends on the resolution and sky size of the input model. Future work is needed to create an efficient iterative function to speed up the extraction of the pixelated solar radiation data.

  14. Implications for new physics from fine-tuning arguments: II. Little Higgs models

    International Nuclear Information System (INIS)

    Casas, J.A.; Espinosa, J.R.; Hidalgo, I.

    2005-01-01

    We examine the fine-tuning associated to electroweak breaking in Little Higgs scenarios and find it to be always substantial and, generically, much higher than suggested by the rough estimates usually made. This is due to implicit tunings between parameters that can be overlooked at first glance but show up in a more systematic analysis. Focusing on four popular and representative Little Higgs scenarios, we find that the fine-tuning is essentially comparable to that of the Little Hierarchy problem of the Standard Model (which these scenarios attempt to solve) and higher than in supersymmetric models. This does not demonstrate that all Little Higgs models are fine-tuned, but stresses the need of a careful analysis of this issue in model-building before claiming that a particular model is not fine-tuned. In this respect we identify the main sources of potential fine-tuning that should be watched out for, in order to construct a successful Little Higgs model, which seems to be a non-trivial goal. (author)

  15. The fine-tuning cost of the likelihood in SUSY models

    International Nuclear Information System (INIS)

    Ghilencea, D.M.; Ross, G.G.

    2013-01-01

    In SUSY models, the fine-tuning of the electroweak (EW) scale with respect to their parameters γ_i = {m_0, m_{1/2}, μ_0, A_0, B_0, …} and the maximal likelihood L to fit the experimental data are usually regarded as two different problems. We show that, if one regards the EW minimum conditions as constraints that fix the EW scale, this commonly held view is not correct and the likelihood contains all the information about fine-tuning. In this case we show that the corrected likelihood is equal to the ratio L/Δ of the usual likelihood L and the traditional fine-tuning measure Δ of the EW scale. A similar result is obtained for the likelihood integrated over the set {γ_i}, which can be written as a surface integral of the ratio L/Δ, with the surface in γ_i space determined by the EW minimum constraints. As a result, a large likelihood actually demands a large ratio L/Δ or, equivalently, a small χ²_new = χ²_old + 2 ln Δ. This shows the fine-tuning cost to the likelihood (χ²_new) of the EW scale stability enforced by SUSY, which is ignored in data fits. A good χ²_new/d.o.f. ≈ 1 thus demands that SUSY models have a fine-tuning amount Δ ≪ exp(d.o.f./2), which provides a model-independent criterion for acceptable fine-tuning. If this criterion is not met, one can rule out SUSY models without a further χ²/d.o.f. analysis. Numerical methods to fit the data can easily be adapted to account for this effect.
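
    The corrected statistic and the resulting acceptance criterion are simple to state in code (a minimal sketch; the strict inequality below stands in for the paper's "≪"):

```python
from math import exp, log

def corrected_chi2(chi2_old, delta):
    """chi^2_new = chi^2_old + 2 ln(Delta):
    the fine-tuning cost Delta added to the usual chi^2."""
    return chi2_old + 2.0 * log(delta)

def acceptable_fine_tuning(delta, dof):
    """Model-independent criterion from the abstract: Delta << exp(dof / 2).
    A plain '<' is used here as an illustrative proxy for '<<'."""
    return delta < exp(dof / 2.0)
```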

  16. [Evaluation of Cellular Effects Caused by Lunar Regolith Simulant Including Fine Particles].

    Science.gov (United States)

    Horie, Masanori; Miki, Takeo; Honma, Yoshiyuki; Aoki, Shigeru; Morimoto, Yasuo

    2015-06-01

    The National Aeronautics and Space Administration has announced a plan to establish a manned colony on the surface of the moon, and our country, Japan, has declared its participation. The surface of the moon is covered with soil called lunar regolith, which includes fine particles. It is possible that humans will inhale lunar regolith if it is brought into the spaceship. Therefore, an evaluation of the pulmonary effects caused by lunar regolith is important for exploration of the moon. In the present study, we examine the cellular effects of a lunar regolith simulant whose components are similar to those of lunar regolith, focusing in particular on the chemical components and particle size. The regolith simulant was fractionated by particle size, and cellular effects of the fractions, such as cell membrane damage, induction of oxidative stress, and proinflammatory effects, were evaluated.

  17. On the Fidelity of Semi-distributed Hydrologic Model Simulations for Large Scale Catchment Applications

    Science.gov (United States)

    Ajami, H.; Sharma, A.; Lakshmi, V.

    2017-12-01

    Application of semi-distributed hydrologic modeling frameworks is a viable alternative to fully distributed hyper-resolution hydrologic models, offering computational efficiency while still resolving the fine-scale spatial structure of hydrologic fluxes and states. However, the fidelity of semi-distributed model simulations is affected by (1) the formulation of hydrologic response units (HRUs), and (2) the aggregation of catchment properties when formulating simulation elements. Here, we evaluate the performance of a recently developed Soil Moisture and Runoff simulation Toolkit (SMART) for large catchment scale simulations. In SMART, topologically connected HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and simulation elements are equivalent cross sections (ECS) representative of a hillslope in first order sub-basins. Earlier investigations have shown that formulation of ECSs at the scale of a first order sub-basin reduces computational time significantly without compromising simulation accuracy. However, this approach has not been fully explored for catchment scale simulations. To assess SMART performance, we set up the model over the Little Washita watershed in Oklahoma. Model evaluations using in-situ soil moisture observations show satisfactory model performance. In addition, we evaluated the performance of a number of soil moisture disaggregation schemes recently developed to provide spatially explicit soil moisture outputs at fine-scale resolution. Our results illustrate that the statistical disaggregation scheme performs significantly better than the methods based on topographic data. Future work is focused on assessing the performance of SMART using remotely sensed soil moisture observations and spatially based model evaluation metrics.

  18. Projected future vegetation changes for the northwest United States and southwest Canada at a fine spatial resolution using a dynamic global vegetation model.

    Science.gov (United States)

    Shafer, Sarah; Bartlein, Patrick J.; Gray, Elizabeth M.; Pelltier, Richard T.

    2015-01-01

    Future climate change may significantly alter the distributions of many plant taxa. The effects of climate change may be particularly large in mountainous regions where climate can vary significantly with elevation. Understanding potential future vegetation changes in these regions requires methods that can resolve vegetation responses to climate change at fine spatial resolutions. We used LPJ, a dynamic global vegetation model, to assess potential future vegetation changes for a large topographically complex area of the northwest United States and southwest Canada (38.0–58.0°N latitude by 136.6–103.0°W longitude). LPJ is a process-based vegetation model that mechanistically simulates the effect of changing climate and atmospheric CO2 concentrations on vegetation. It was developed and has been mostly applied at spatial resolutions of 10-minutes or coarser. In this study, we used LPJ at a 30-second (~1-km) spatial resolution to simulate potential vegetation changes for 2070–2099. LPJ was run using downscaled future climate simulations from five coupled atmosphere-ocean general circulation models (CCSM3, CGCM3.1(T47), GISS-ER, MIROC3.2(medres), UKMO-HadCM3) produced using the A2 greenhouse gases emissions scenario. Under projected future climate and atmospheric CO2 concentrations, the simulated vegetation changes result in the contraction of alpine, shrub-steppe, and xeric shrub vegetation across the study area and the expansion of woodland and forest vegetation. Large areas of maritime cool forest and cold forest are simulated to persist under projected future conditions. The fine spatial-scale vegetation simulations resolve patterns of vegetation change that are not visible at coarser resolutions and these fine-scale patterns are particularly important for understanding potential future vegetation changes in topographically complex areas.

  19. Doubly stochastic Poisson process models for precipitation at fine time-scales

    Science.gov (United States)

    Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao

    2012-09-01

    This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, in the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely-known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site but an extension to multiple sites is illustrated which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.
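
    A minimal example of such a model is a two-state Markov-modulated Poisson process, in which the event intensity switches between a low "dry" rate and a high "wet" rate as a hidden Markov chain changes state. The sketch below simulates it by thinning and allows at most one state switch between candidate points (adequate when candidates are dense); all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def mmpp_arrivals(t_end=1000.0, rates=(0.01, 0.5), switch=(0.005, 0.05)):
    """Arrival times of a two-state Markov-modulated Poisson process,
    a simple doubly stochastic (Cox) process.

    rates  : Poisson intensity in state 0 ('dry') and state 1 ('wet')
    switch : switching rate out of state 0 and out of state 1
    """
    lam_max = max(rates)
    # candidate points from a homogeneous Poisson process at the max rate
    n_cand = rng.poisson(lam_max * t_end)
    cand = np.sort(rng.uniform(0.0, t_end, n_cand))
    arrivals = []
    state, t_prev = 0, 0.0
    for t in cand:
        dt = t - t_prev
        # probability the hidden chain flipped over (t_prev, t)
        if rng.random() < 1.0 - np.exp(-switch[state] * dt):
            state = 1 - state
        t_prev = t
        if rng.random() < rates[state] / lam_max:  # thinning step
            arrivals.append(t)
    return np.array(arrivals)
```

The resulting point pattern shows the clustered, bursty behavior that a homogeneous Poisson process cannot reproduce.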

  20. Cobra-IE Evaluation by Simulation of the NUPEC BWR Full-Size Fine-Mesh Bundle Test (BFBT)

    International Nuclear Information System (INIS)

    Burns, C. J.; Aumiler, D.L.

    2006-01-01

    The COBRA-IE computer code is a thermal-hydraulic subchannel analysis program capable of simulating phenomena present in both PWRs and BWRs. As part of ongoing COBRA-IE assessment efforts, the code has been evaluated against experimental data from the NUPEC BWR Full-Size Fine-Mesh Bundle Tests (BFBT). The BFBT experiments utilized an 8 × 8 rod bundle to simulate BWR operating conditions and power profiles, providing an excellent database for investigating the capabilities of the code. Benchmarks performed included steady-state and transient void distribution, single-phase and two-phase pressure drop, and steady-state and transient critical power measurements. COBRA-IE effectively captured the trends seen in the experimental data with acceptable prediction error. Future sensitivity studies are planned to investigate the effects of enabling and/or modifying optional code models dealing with void drift, turbulent mixing, rewetting, and CHF.

  1. Evaluation of global fine-resolution precipitation products and their uncertainty quantification in ensemble discharge simulations

    Science.gov (United States)

    Qi, W.; Zhang, C.; Fu, G.; Sweetapple, C.; Zhou, H.

    2016-02-01

    The applicability of six fine-resolution precipitation products, including precipitation radar, infrared, microwave, and gauge-based products using different precipitation computation recipes, is evaluated using statistical and hydrological methods in northeastern China. In addition, a framework quantifying the uncertainty contributions of precipitation products, hydrological models, and their interactions to uncertainties in ensemble discharges is proposed. The investigated precipitation products are the Tropical Rainfall Measuring Mission (TRMM) products (TRMM3B42 and TRMM3B42RT), Global Land Data Assimilation System (GLDAS)/Noah, Asian Precipitation - Highly-Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE), Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN), and a Global Satellite Mapping of Precipitation (GSMAP-MVK+) product. Two hydrological models of different complexities, i.e. a water and energy budget-based distributed hydrological model and a physically based semi-distributed hydrological model, are employed to investigate the influence of hydrological models on simulated discharges. Results show that APHRODITE has high accuracy at a monthly scale compared with the other products, and that GSMAP-MVK+ shows a clear advantage over TRMM3B42 in relative bias (RB), Nash-Sutcliffe coefficient of efficiency (NSE), root mean square error (RMSE), correlation coefficient (CC), false alarm ratio, and critical success index. These findings could be very useful for the validation, refinement, and future development of satellite-based products (e.g. NASA Global Precipitation Measurement). Although large uncertainty exists in heavy precipitation, hydrological models contribute most of the uncertainty in extreme discharges. Interactions between precipitation products and hydrological models can contribute a similar magnitude of discharge uncertainty as the hydrological models.
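
    The accuracy statistics named in the record are standard and can be computed directly; a minimal sketch, where sim and obs are paired discharge or precipitation series:

```python
import numpy as np

def relative_bias(sim, obs):
    """RB = (sum(sim) - sum(obs)) / sum(obs)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return (sim.sum() - obs.sum()) / obs.sum()

def nse(sim, obs):
    """Nash-Sutcliffe efficiency:
    1 - sum((sim - obs)^2) / sum((obs - mean(obs))^2)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(sim, obs):
    """Root mean square error."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))
```

A perfect simulation gives RB = 0, NSE = 1, and RMSE = 0; NSE below 0 means the simulation is worse than predicting the observed mean.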

  2. Simple Urban Simulation Atop Complicated Models: Multi-Scale Equation-Free Computing of Sprawl Using Geographic Automata

    Directory of Open Access Journals (Sweden)

    Yu Zou

    2013-07-01

    Reconciling competing desires to build urban models that can be both simple and complicated is something of a grand challenge for urban simulation. It also prompts difficulties in many urban policy situations, such as urban sprawl, where simple, actionable ideas may need to be considered in the context of the messily complex and complicated urban processes and phenomena at work within cities. In this paper, we present a novel architecture for achieving both simple and complicated realizations of urban sprawl in simulation. Fine-scale simulations of sprawl geography are run using geographic automata to represent the geographical drivers of sprawl in intricate detail and over fine resolutions of space and time. We use Equation-Free computing to deploy population as a coarse observable of sprawl, which can be leveraged to run automata-based models as short-burst experiments within a meta-simulation framework.

  3. Spatial Downscaling of TRMM Precipitation Using Geostatistics and Fine Scale Environmental Variables

    Directory of Open Access Journals (Sweden)

    No-Wook Park

    2013-01-01

    A geostatistical downscaling scheme is presented that can generate fine scale precipitation information from coarse scale Tropical Rainfall Measuring Mission (TRMM) data by incorporating auxiliary fine scale environmental variables. Within the geostatistical framework, the TRMM precipitation data are first decomposed into trend and residual components. Quantitative relationships between the coarse scale TRMM data and the environmental variables are then estimated via regression analysis and used to derive trend components at a fine scale. Next, the residual components, which are the differences between the trend components and the original TRMM data, are downscaled to the target fine scale via area-to-point kriging. The trend and residual components are finally added to generate fine scale precipitation estimates. Stochastic simulation is also applied to the residual components in order to generate multiple alternative realizations and to compute uncertainty measures. In an experiment using a digital elevation model (DEM) and the normalized difference vegetation index (NDVI), the geostatistical downscaling scheme generated downscaling results that reflected detailed characteristics with better predictive performance than downscaling without the environmental variables. Multiple realizations and uncertainty measures from the simulation also provided useful information for interpretation and further environmental modeling.
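
    The four-step structure of the scheme can be sketched compactly, assuming here a simple inverse-distance interpolation of the residuals as a stand-in for area-to-point kriging; all names and inputs are illustrative.

```python
import numpy as np

def downscale(coarse_vals, coarse_covar, fine_covar, coarse_xy, fine_xy, p=2.0):
    """Trend-plus-residual downscaling sketch:
    (1) regress coarse precipitation on a coarse-scale covariate
        (e.g. elevation or NDVI),
    (2) predict the trend at fine scale from the fine covariate,
    (3) interpolate the coarse residuals to fine locations
        (inverse-distance weighting in place of area-to-point kriging),
    (4) add trend and residual."""
    A = np.column_stack([np.ones(len(coarse_vals)), coarse_covar])
    beta, *_ = np.linalg.lstsq(A, coarse_vals, rcond=None)   # step 1
    resid = coarse_vals - A @ beta
    trend = beta[0] + beta[1] * fine_covar                   # step 2
    d = np.linalg.norm(fine_xy[:, None, :] - coarse_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** p                       # step 3
    resid_fine = (w * resid).sum(1) / w.sum(1)
    return trend + resid_fine                                # step 4
```

When the coarse data are an exact linear function of the covariate, the residuals vanish and the output reduces to the regression trend evaluated at fine scale.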

  4. Atomic quantum simulation of the lattice gauge-Higgs model: Higgs couplings and emergence of exact local gauge symmetry.

    Science.gov (United States)

    Kasamatsu, Kenichi; Ichinose, Ikuo; Matsui, Tetsuo

    2013-09-13

    Recently, the possibility of quantum simulation of dynamical gauge fields was pointed out using a system of cold atoms trapped on each link of an optical lattice. However, implementing exact local gauge invariance requires fine-tuning the interaction parameters among the atoms. In the present Letter, we study the effect of violation of the U(1) local gauge invariance by relaxing the fine-tuning of the parameters, showing that a wide variety of cold-atom systems still serves as a faithful quantum simulator for a U(1) gauge-Higgs model containing a Higgs field sitting on sites. Clarifying the dynamics of this gauge-Higgs model sheds some light on various unsolved problems, including the inflation process of the early Universe. We study the phase structure of this model by Monte Carlo simulation and also discuss the atomic characteristics of the Higgs phase in each simulator.
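
    The classical statistical mechanics of such a gauge-Higgs system can be probed with a textbook Metropolis simulation. The sketch below uses a 2-D compact U(1) gauge field coupled to a Higgs phase in the London limit (frozen modulus); it is an analogue of, not the specific model in, the Letter, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_gauge_higgs(L=8, beta=1.0, kappa=0.5, sweeps=50):
    """Metropolis sampling of a 2-D compact U(1) gauge-Higgs model
    (London limit).  Action:
      S = -beta  * sum_plaq  cos(theta_plaq)
          -kappa * sum_links cos(phi_{x+mu} - phi_x + theta_{x,mu})
    Returns the final action per site from a cold start.
    """
    theta = np.zeros((2, L, L))  # link angles: mu = 0 (x), 1 (y)
    phi = np.zeros((L, L))       # Higgs phases on sites

    def action():
        plaq = (theta[0] + np.roll(theta[1], -1, 1)
                - np.roll(theta[0], -1, 0) - theta[1])
        hop = (np.cos(np.roll(phi, -1, 1) - phi + theta[0])
               + np.cos(np.roll(phi, -1, 0) - phi + theta[1]))
        return -beta * np.cos(plaq).sum() - kappa * hop.sum()

    s = action()
    for _ in range(sweeps):
        for _ in range(3 * L * L):          # random single-variable updates
            i, j = int(rng.integers(L)), int(rng.integers(L))
            if rng.random() < 2.0 / 3.0:    # propose a link-angle update
                mu = int(rng.integers(2))
                old = theta[mu, i, j]
                theta[mu, i, j] = old + rng.uniform(-0.5, 0.5)
                s_new = action()
                if rng.random() < np.exp(-(s_new - s)):
                    s = s_new
                else:
                    theta[mu, i, j] = old
            else:                            # propose a Higgs-phase update
                old = phi[i, j]
                phi[i, j] = old + rng.uniform(-0.5, 0.5)
                s_new = action()
                if rng.random() < np.exp(-(s_new - s)):
                    s = s_new
                else:
                    phi[i, j] = old
    return s / (L * L)
```

Scanning beta and kappa and measuring observables such as the average plaquette is the usual route to the phase structure of such models.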

  5. Strained spiral vortex model for turbulent fine structure

    Science.gov (United States)

    Lundgren, T. S.

    1982-01-01

    A model for the intermittent fine structure of high Reynolds number turbulence is proposed. The model consists of slender axially strained spiral vortex solutions of the Navier-Stokes equation. The tightening of the spiral turns by the differential rotation of the induced swirling velocity produces a cascade of velocity fluctuations to smaller scale. The Kolmogorov energy spectrum is a result of this model.

  6. Pelletization of fine coals. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Sastry, K.V.S.

    1995-12-31

    Coal is one of the most abundant energy resources in the US, with nearly 800 million tons mined annually. Process and environmental demands for low-ash, low-sulfur coals and economic constraints for high productivity are leading the coal industry to use such modern mining methods as longwall mining and such newer coal processing techniques as froth flotation, oil agglomeration, chemical cleaning, and synthetic fuel production. All these processes are faced with one common problem area: fine coals. Dealing effectively with these fine coals during handling, storage, transportation, and/or processing continues to be a challenge facing the industry. Agglomeration by the unit operation of pelletization consists of tumbling moist fines in drums or discs. Past experimental work and limited commercial practice have shown that pelletization can alleviate the problems associated with fine coals. However, it was recognized that there exists a serious need for delineating the fundamental principles of fine coal pelletization. Accordingly, a research program was carried out involving four specific topics: (i) experimental investigation of coal pelletization kinetics, (ii) understanding the surface principles of coal pelletization, (iii) modeling of coal pelletization processes, and (iv) simulation of fine coal pelletization circuits. This report summarizes the major findings and provides relevant details of the research effort.

  7. Finite element analysis of the combined fine blanking and extrusion process

    Science.gov (United States)

    Zheng, Peng-Fei

    different slopes of the sloping wall extrusion were obtained, and comparisons between the results from different deformation conditions were made. Valuable phenomena were observed in the simulations and the deformation features were revealed in detail. Based on the simulated results, different deformation models were proposed. In order to verify the effectiveness of the simulated results, experiments were carried out. A method for performing the fine blanking experiment, as well as the combined fine blanking and extrusion experiment, on a conventional press was developed. Accordingly, special equipment and the corresponding die set were designed and manufactured for the experiment. With the mesh etching method, the distorted meshes on the meridian plane of the specimens after deformation were obtained from the experiment. In order to calculate the effective strain distribution on the meridian plane of the specimen, a large-strain analysis technique was adopted and improved. Hardness distributions on the specimens were measured. The fracture on the side of the extruded part was analyzed. Both the simulated results and the experimental results were discussed; the comparison showed that the simulated results agreed well with the experimental ones.

  8. Flow simulation in piping system dead legs using second moment closure and k-epsilon models

    International Nuclear Information System (INIS)

    Deutsch, E.; Mechitoua, N.; Mattei, J.D.

    1996-01-01

    This paper deals with an industrial application of a second-moment closure turbulence model to the numerical simulation of 3D turbulent flows in piping-system dead legs. Calculations performed with the 3D ESTET code are presented which contrast the performance of the k-epsilon eddy-viscosity model and second-moment closure turbulence models. Coarse (100 000 cells), medium (400 000 cells) and fine (1 500 000 cells) meshes were used. The second-moment closure performs significantly better than the eddy-viscosity model and predicts the vortex penetration into the dead legs with good agreement, provided sufficiently refined meshes are used. The results point out the necessity of being able to perform calculations on fine meshes before introducing refined physical models, such as a second-moment closure turbulence model, into a numerical code. This study illustrates the ability of second-moment closure turbulence models to simulate 3D turbulent industrial flows. The Reynolds-stress model computation does not require special care; the calculation is carried out as simply as the k-epsilon one, and the CPU time needed is less than twice that of the k-epsilon model. (authors)

  9. Hydrological modelling of fine sediments in the Odzi River, Zimbabwe

    African Journals Online (AJOL)

    Hydrological modelling of fine sediments in the Odzi River, Zimbabwe. ... An analysis of the model structure and a comparison with the rating curve function ... model validation through split sample and proxy basin comparison was performed.

  10. Deep learning-based fine-grained car make/model classification for visual surveillance

    Science.gov (United States)

    Gundogdu, Erhan; Parıldı, Enes Sinan; Solmaz, Berkan; Yücesoy, Veysel; Koç, Aykut

    2017-10-01

    Fine-grained object recognition is a challenging computer vision problem that has recently been addressed by utilizing deep Convolutional Neural Networks (CNNs). Nevertheless, the main disadvantage of classification methods relying on deep CNN models is the need for a considerably large amount of data. In addition, relatively little annotated data is available for a real-world application such as the recognition of car models in a traffic surveillance system. To this end, we concentrate on the classification of fine-grained car makes and/or models for visual scenarios with the help of two different domains. First, a large-scale dataset of approximately 900K images is constructed from a website that includes fine-grained car models, and a state-of-the-art CNN model is trained on it according to its labels. The second domain is the set of images collected from a camera integrated into a traffic surveillance system. These images, numbering over 260K, are gathered by a special license plate detection method on top of a motion detection algorithm. An appropriately sized image is cropped from the region of interest provided by the detected license plate location. These sets of images and their labels for more than 30 classes are employed to fine-tune the CNN model already trained on the large-scale dataset described above. To fine-tune the network, the last two fully-connected layers are randomly initialized and the remaining layers are fine-tuned on the second dataset. In this work, the transfer of a model learned on a large dataset to a smaller one has been performed by utilizing both the limited annotated data of the traffic field and a large-scale dataset with available annotations. Our experimental results on both the validation dataset and the real field show that the proposed methodology performs favorably against training the CNN model from scratch.
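
    The transfer-learning recipe above (freeze the pretrained layers, reinitialize the final classifier, train only the new head on the small target dataset) can be illustrated with a minimal numpy stand-in. This is not the paper's CNN pipeline: the frozen "backbone" here is just a random tanh feature extractor, and the "head" is a logistic-regression layer trained by gradient descent on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)

# "pretrained backbone": a frozen feature extractor standing in for conv layers
W_backbone = rng.normal(size=(20, 64))
def features(x):
    return np.tanh(x @ W_backbone)     # frozen: never updated below

# small labeled target-domain dataset (synthetic stand-in for surveillance crops)
X = rng.normal(size=(200, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# newly initialized head, trained by plain gradient descent on logistic loss
Phi = features(X)
w, b = np.zeros(64), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(Phi @ w + b)))
    g = p - y                          # dLoss/dlogit for logistic loss
    w -= 0.1 * Phi.T @ g / len(y)      # only the head parameters move
    b -= 0.1 * g.mean()

p = 1.0 / (1.0 + np.exp(-(Phi @ w + b)))
train_acc = ((p > 0.5) == (y > 0.5)).mean()
```

The same structure carries over to a deep framework: mark backbone parameters as non-trainable, replace and reinitialize the last layers, and optimize only those.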

  11. Mechanical Behavior Analysis of Y-Type S-SRC Column in a Large-Space Vertical Hybrid Structure Using Local Fine Numerical Simulation Method

    Directory of Open Access Journals (Sweden)

    Jianguang Yue

    2018-01-01

    In a large spatial structure, the important members are normally of a special type and are the safety key for the global structure. It is difficult for common test methods to realize the complex spatial loading state of a local member in order to study its mechanical behavior in detail. Therefore, a local-fine finite element model was proposed and a large-space vertical hybrid structure was numerically simulated. The seismic responses of the global structure and the Y-type S-SRC column were analyzed under El Centro seismic motions with peak accelerations of 35 gal and 220 gal. The numerical model was verified against the results of a seismic shaking table test of the structure model. The failure mechanism and stiffness damage evolution of the Y-type S-SRC column were analyzed. The calculated results agreed well with the test results, indicating that the local-fine FEM can reflect the mechanical details of local members in a large spatial structure.

  12. Modelling and numerical simulation of atmospheric aerosol dynamics

    International Nuclear Information System (INIS)

    Debry, Edouard

    2004-01-01

    Chemical-transport models are now able to describe the behavior of gaseous pollutants in the atmosphere in a realistic way. Nevertheless, atmospheric pollution also exists as fine suspended particles, called aerosols, which interact with the gaseous phase and solar radiation, and have their own dynamic behavior. The goal of this thesis is the modelling and numerical simulation of the General Dynamic Equation of aerosols (GDE). Part I deals with some theoretical aspects of aerosol modelling. Part II is dedicated to the building of a size-resolved aerosol model (SIREAM). In Part III we perform the reduction of this model in order to use it in dispersion models such as POLAIR3D. Several modelling issues remain open: organic aerosol matter, externally mixed aerosols, coupling with turbulent mixing, and nano-particles. (author) [fr

  13. Theory and Examples of Mathematical Modeling for Fine Weave Pierced Fabric

    Directory of Open Access Journals (Sweden)

    ZHOU Yu-bo

    2017-04-01

    A mathematical abstraction and three-dimensional modeling method for three-dimensional woven fabric structures was developed for fine weave pierced fabric, taking parametric-continuity splines as the track function of the tow. Based on the significant parameters of fine weave pierced fabric measured by MicroCT, eight three-dimensional digital models of the fabric structure were established from two kinds of tow sections and four kinds of tow-trajectory characteristic functions. There is good agreement between the three-dimensional digital models and the real fabric when comparing their structures and porosities. This mathematical abstraction and three-dimensional modeling method can be applied in micro models for sub-unit cells and in macro models for macroscopic-scale fabrics, with high adaptability.
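
    A parametric-continuity spline used as a tow track function can be sketched as follows. The paper does not give its exact spline family or parameters; a Catmull-Rom segment is shown here as one common C1-continuous parametric cubic that interpolates its interior control points, with hypothetical control points standing in for measured tow geometry.

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """C1 parametric cubic through p1 (at t=0) and p2 (at t=1)."""
    t = np.asarray(t)[:, None]
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

# hypothetical 3-D control points sketching one tow path through the weave
pts = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.2, 0.0],
                [2.0, -0.2, 0.1],
                [3.0, 0.0, 0.0]])
t = np.linspace(0.0, 1.0, 11)
curve = catmull_rom(*pts, t)   # segment from pts[1] to pts[2]
```

Chaining such segments over successive control-point quadruples gives a C1 track function along the whole tow, which is the role the splines play in the digital fabric models.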

  14. Coupling of Large Eddy Simulations with Meteorological Models to simulate Methane Leaks from Natural Gas Storage Facilities

    Science.gov (United States)

    Prasad, K.

    2017-12-01

    Atmospheric transport is usually performed with weather models, e.g., the Weather Research and Forecasting (WRF) model, which employs a parameterized turbulence model and does not resolve the fine-scale dynamics generated by the flow around the buildings and features comprising a large city. The NIST Fire Dynamics Simulator (FDS) is a computational fluid dynamics model that utilizes large eddy simulation methods to model flow around buildings at length scales much smaller than is practical with models like WRF. FDS has the potential to evaluate the impact of complex topography on near-field dispersion and mixing that is difficult to simulate with a mesoscale atmospheric model. A methodology has been developed to couple the FDS model with WRF mesoscale transport models. The coupling is based on nudging the FDS flow field towards that computed by WRF, and is currently limited to one-way coupling performed in an off-line mode. This approach allows the FDS model to operate as a sub-grid scale model within a WRF simulation. To test and validate the coupled FDS-WRF model, the methane leak from the Aliso Canyon underground storage facility was simulated. Large eddy simulations were performed over the complex topography of several natural gas storage facilities, including Aliso Canyon, Honor Rancho and MacDonald Island, at 10 m horizontal and vertical resolution. The goals of these simulations included improving and validating transport models as well as testing leak hypotheses. Forward simulation results were compared with aircraft- and tower-based in-situ measurements as well as methane plumes observed using the NASA Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) and the next-generation instrument AVIRIS-NG. Comparison of simulation results with measurement data demonstrates the capability of the coupled FDS-WRF models to accurately simulate the transport and dispersion of methane plumes over urban domains.
Simulated integrated methane enhancements will be presented and
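
    The one-way nudging described above amounts to Newtonian relaxation of the LES field toward the mesoscale field. A minimal sketch, assuming the common relaxation form u ← u + (u_WRF − u)·Δt/τ (the actual FDS-WRF coupling details, fields, and time scale τ are not specified here):

```python
import numpy as np

rng = np.random.default_rng(3)

# toy 1-D wind fields: fine-scale (LES-like) state and a coarse (WRF-like) driver
u_les = rng.normal(0.0, 2.0, size=64)     # initial LES wind component (m/s)
u_wrf = np.full(64, 5.0)                  # mesoscale driver field (m/s)

dt, tau = 1.0, 60.0                       # time step and nudging time scale (s)
for _ in range(600):                      # 10 minutes of one-way nudging
    u_les += (u_wrf - u_les) * dt / tau   # Newtonian relaxation toward WRF

# after ~10 relaxation times the LES field sits on the WRF value, while on
# shorter time scales the LES dynamics (omitted here) would remain free to
# develop building-scale structure
```

The choice of τ sets the compromise: small τ locks the LES onto the mesoscale state, large τ lets fine-scale turbulence evolve freely between updates.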

  15. Initial fate of fine ash and sulfur from large volcanic eruptions

    Directory of Open Access Journals (Sweden)

    S. Self

    2009-11-01

    Large volcanic eruptions emit huge amounts of sulfur and fine ash into the stratosphere. These products have an impact on radiative processes, temperature, and wind patterns. In simulations with a General Circulation Model including detailed aerosol microphysics, the relation between the impacts of sulfur and fine ash is determined for different eruption strengths and locations, one in the tropics and one at high northern latitudes. Fine ash with effective radii between 1 μm and 15 μm has a lifetime of several days only. Nevertheless, its strong absorption of shortwave and longwave radiation causes additional heating and cooling of ±20 K/day and affects the evolution of the volcanic cloud. Depending on the location of the volcanic eruption, the transport direction changes due to the presence of fine ash, vortices develop, and temperature anomalies at the ground increase. The results show a substantial impact on the local scale but only a minor impact on the evolution of sulfate in the stratosphere in the month after the simulated eruptions.
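
    The days-scale lifetime of fine ash can be rationalized with a back-of-envelope Stokes settling estimate. All values below are rough assumptions for illustration (slip correction and ambient circulation neglected), not parameters from the study:

```python
# Stokes terminal settling velocity: v = 2 r^2 g (rho_p - rho_a) / (9 mu)
g     = 9.81      # gravitational acceleration, m s^-2
rho_p = 2500.0    # ash particle density, kg m^-3 (assumed)
rho_a = 0.2       # stratospheric air density, kg m^-3 (rough)
mu    = 1.5e-5    # dynamic viscosity of air, Pa s (rough)

r = 10e-6         # 10 micron effective radius, within the 1-15 micron range
v = 2 * r**2 * g * (rho_p - rho_a) / (9 * mu)    # settling speed, m/s

fall_depth = 10e3                                # fall out of a ~10 km layer
lifetime_days = fall_depth / v / 86400.0
```

For a 10 μm particle this gives a settling speed of a few centimeters per second and a residence time of a few days, consistent with the "several days" lifetime quoted above; micron-sized ash settles roughly a hundred times more slowly.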

  16. Checking Fine and Gray subdistribution hazards model with cumulative sums of residuals

    DEFF Research Database (Denmark)

    Li, Jianing; Scheike, Thomas; Zhang, Mei Jie

    2015-01-01

    Recently, Fine and Gray (J Am Stat Assoc 94:496–509, 1999) proposed a semi-parametric proportional regression model for the subdistribution hazard function which has been used extensively for analyzing competing risks data. However, failure of model adequacy could lead to severe bias in parameter estimation, and only a limited contribution has been made to check the model assumptions. In this paper, we present a class of analytical methods and graphical approaches for checking the assumptions of Fine and Gray's model. The proposed goodness-of-fit test procedures are based on the cumulative sums...

  17. Aviation Model: A Fine-Scale Numerical Weather Prediction System for Aviation Applications at the Hong Kong International Airport

    Directory of Open Access Journals (Sweden)

    Wai-Kin Wong

    2013-01-01

    The Hong Kong Observatory (HKO) is planning to implement a fine-resolution Numerical Weather Prediction (NWP) model to support aviation weather applications at the Hong Kong International Airport (HKIA). This new NWP model system, called the Aviation Model (AVM), is configured at horizontal grid spacings of 600 m and 200 m. It is based on the WRF-ARW (Advanced Research WRF) model and has sufficient computational efficiency to produce hourly updated forecasts up to 9 hours ahead on a future high-performance computer system with a theoretical peak performance of around 10 TFLOPS. AVM will be nested inside the operational mesoscale NWP model of HKO, which has a horizontal resolution of 2 km. In this paper, initial numerical experiment results in forecasting windshear events due to sea breeze and terrain effects are discussed. The simulation of sea-breeze-related windshear is quite successful, and the headwind change observed from flight data could be reproduced in the model forecast. Some impacts of physical processes on generating the fine-scale wind circulation and the development of significant convection are illustrated. The paper also discusses the limitations of the current model setup and proposes methods for the future development of AVM.

  18. Crop Yield Simulations Using Multiple Regional Climate Models in the Southwestern United States

    Science.gov (United States)

    Stack, D.; Kafatos, M.; Kim, S.; Kim, J.; Walko, R. L.

    2013-12-01

    Agricultural productivity (described by crop yield) is strongly dependent on climate conditions determined by meteorological parameters (e.g., temperature, rainfall, and solar radiation). California is the largest producer of agricultural products in the United States, but crops in the associated arid and semi-arid regions live near their physiological limits (e.g., in hot summer conditions with little precipitation). Thus, accurate climate data are essential in assessing the impact of climate variability on agricultural productivity in the Southwestern United States and other arid regions. To address this issue, we produced simulated climate datasets and used them as input for the crop production model. For climate data, we employed two different regional climate models (WRF and OLAM) on a fine-resolution (8 km) grid. The performance of the two models is evaluated in a fine-resolution regional climate hindcast experiment for the 10 years from 2001 to 2010 by comparing them to the North American Regional Reanalysis (NARR) dataset. Based on this comparison, multi-model ensembles with variable weighting are used to alleviate model bias and improve the accuracy of crop-model productivity over large geographic regions (county and state). Finally, by using a specific crop-yield simulation model (APSIM) in conjunction with meteorological forcings from the multi-model regional climate ensemble, we demonstrate the degree to which maize yields are sensitive to the regional climate in the Southwestern United States.
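
    Variable-weight multi-model ensembling of the kind described above is often implemented as inverse-error weighting against a reference analysis. A toy sketch with synthetic series (the actual WRF/OLAM weighting scheme is not specified in the abstract; inverse-MSE weights against a NARR-like reference are an assumed, common choice):

```python
import numpy as np

rng = np.random.default_rng(7)
truth = np.sin(np.linspace(0, 6, 120)) * 5 + 20     # "NARR-like" reference series

# two regional-model hindcasts with different error characteristics
m1 = truth + rng.normal(1.5, 1.0, truth.size)       # warm-biased, low noise
m2 = truth + rng.normal(-0.5, 2.5, truth.size)      # small bias, noisier

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

# skill weights from the hindcast period: inverse mean-squared error
w1 = 1.0 / rmse(m1, truth) ** 2
w2 = 1.0 / rmse(m2, truth) ** 2
ens = (w1 * m1 + w2 * m2) / (w1 + w2)               # weighted ensemble mean
```

Because the members' errors are partly independent, the weighted mean typically beats either member; in practice a bias-correction step before weighting improves this further.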

  19. Comparative effects of simulated acid rain of different ratios of SO42- to NO3- on fine root in subtropical plantation of China.

    Science.gov (United States)

    Liu, Xin; Zhao, Wenrui; Meng, Miaojing; Fu, Zhiyuan; Xu, Linhao; Zha, Yan; Yue, Jianmin; Zhang, Shuifeng; Zhang, Jinchi

    2018-03-15

    The influence of acid rain on forest trees includes direct effects on foliage as well as indirect soil-mediated effects that cause a reduction in fine-root growth. In addition, the concentration of NO3- in acid rain is increasing with the rapid growth of nitrogen deposition. In this study, we investigated the impact of simulated acid rain with different SO42-/NO3- (S/N) ratios, namely 5:1 (S), 1:1 (SN) and 1:5 (N), on fine-root growth from March 2015 to February 2016. Results showed that fine roots were more sensitive to the effects of acid rain than soils in the short term. Both soil pH and fine-root biomass (FRB) decreased significantly as acid rain pH decreased, and also decreased as the percentage of NO3- in the acid rain increased. Acid rain pH significantly influenced soil total carbon and available potassium in summer. The higher acidity level (pH = 2.5), especially in the N treatment, had the strongest inhibitory impact on soil microbial activity after summer. The structural equation modelling results showed that acid rain S/N ratio and pH had stronger direct effects on FRB than indirect effects via changed soil and fine-root properties. Fine-root element contents and antioxidant enzyme activities were significantly affected by acid rain S/N ratio and pH during most seasons. Fine-root Al ion content, Ca/Al and Mg/Al ratios, and catalase activity served as better indicators than soil parameters for evaluating the effects of different acid rain S/N ratios and pH levels on forests. Our results suggest that the ratio of SO42- to NO3- in acid rain is an important factor which could affect fine-root growth in subtropical forests of China. Copyright © 2017. Published by Elsevier B.V.

  20. Fine-tuning problem in renormalized perturbation theory: Spontaneously-broken gauge models

    Energy Technology Data Exchange (ETDEWEB)

    Foda, O.E. (Purdue Univ., Lafayette, IN (USA). Dept. of Physics)

    1983-04-28

    We study the stability of tree-level gauge hierarchies at higher orders in renormalized perturbation theory, in a model with spontaneously-broken gauge symmetries. We confirm previous results indicating that if the model is renormalized using BPHZ, then the tree-level hierarchy is not upset by the radiative corrections. Consequently, no fine-tuning of the initial parameters is required to maintain it, in contrast to the result obtained using Dimensional Renormalization. This verifies the conclusion that the need for fine-tuning, when it arises, is an artifact of the application of a certain class of renormalization schemes.

  1. Modelling and numerical simulation of the General Dynamic Equation of aerosols; Modelisation et simulation des aerosols atmospheriques

    Energy Technology Data Exchange (ETDEWEB)

    Debry, E.

    2005-01-15

    Chemical-transport models are now able to describe the behavior of gaseous pollutants in the atmosphere in a realistic way. Nevertheless, atmospheric pollution also exists as fine suspended particles, called aerosols, which interact with the gaseous phase and solar radiation, and have their own dynamic behavior. The goal of this thesis is the modelling and numerical simulation of the General Dynamic Equation of aerosols (GDE). Part I deals with some theoretical aspects of aerosol modelling. Part II is dedicated to the building of a size-resolved aerosol model (SIREAM). In Part III we perform the reduction of this model in order to use it in dispersion models such as POLAIR3D. Several modelling issues remain open: organic aerosol matter, externally mixed aerosols, coupling with turbulent mixing, and nano-particles. (author)

  2. Simulation and Modeling of Flow in a Gas Compressor

    Directory of Open Access Journals (Sweden)

    Anna Avramenko

    2015-01-01

    The presented research demonstrates the results of a series of numerical simulations of gas flow through a single-stage centrifugal compressor with a vaneless diffuser. The numerical results were validated against experiments consisting of eight regimes with different mass flow rates. The steady-state and unsteady simulations were done in ANSYS FLUENT 13.0 and NUMECA FINE/TURBO 8.9.1 for a one-period geometry, owing to the periodicity of the problem. First-order discretization is insufficient due to strong dissipation effects. Results obtained with second-order discretization agree with the experiments for the steady-state case in the region of high mass flow rates. In the region of low mass flow rates, nonstationary effects significantly influence the flow, leading the stationary model to poor predictions; therefore, unsteady simulations were performed in this region and their results were compared with the experimental data. The numerical simulation method in this paper can be used to predict compressor performance.

  3. Constructing a consumption model of fine dining from the perspective of behavioral economics

    Science.gov (United States)

    Tsai, Sang-Bing

    2018-01-01

    Numerous factors affect how people choose a fine dining restaurant, including food quality, service quality, food safety, and hedonic value. A conceptual framework for evaluating restaurant selection behavior has not yet been developed. This study surveyed 150 individuals with fine dining experience and proposed the use of mental accounting and axiomatic design to construct a consumer economic behavior model. Linear and logistic regressions were employed to determine model correlations and the probability of each factor affecting behavior. The most crucial factor was food quality, followed by service and dining motivation, particularly regarding family dining. Safe ingredients, high cooking standards, and menu innovation all increased the likelihood of consumers choosing fine dining restaurants. PMID:29641554

  4. Constructing a consumption model of fine dining from the perspective of behavioral economics.

    Science.gov (United States)

    Hsu, Sheng-Hsun; Hsiao, Cheng-Fu; Tsai, Sang-Bing

    2018-01-01

    Numerous factors affect how people choose a fine dining restaurant, including food quality, service quality, food safety, and hedonic value. A conceptual framework for evaluating restaurant selection behavior has not yet been developed. This study surveyed 150 individuals with fine dining experience and proposed the use of mental accounting and axiomatic design to construct a consumer economic behavior model. Linear and logistic regressions were employed to determine model correlations and the probability of each factor affecting behavior. The most crucial factor was food quality, followed by service and dining motivation, particularly regarding family dining. Safe ingredients, high cooking standards, and menu innovation all increased the likelihood of consumers choosing fine dining restaurants.

  5. Numerical simulation of fine oil sand tailings drying in test cells

    NARCIS (Netherlands)

    Vardon, P.J.; Nijssen, T.; Yao, Y.; Van Tol, A.F.

    2014-01-01

    As a promising technology in disposal of mature fine tailings (MFT), atmospheric fines drying (AFD) is currently being implemented on a commercial scale at Shell Canada’s Muskeg River Mine near Fort McMurray, Alberta. AFD involves the use of a polymer flocculent to bind fine particles in MFT

  6. Upscaled Lattice Boltzmann Method for Simulations of Flows in Heterogeneous Porous Media

    Directory of Open Access Journals (Sweden)

    Jun Li

    2017-01-01

    An upscaled Lattice Boltzmann Method (LBM) for flow simulations in heterogeneous porous media at the Darcy scale is proposed in this paper. In the Darcy-scale simulations, the Shan-Chen force model is used to simplify the algorithm. The proposed upscaled LBM uses coarser grids to represent the average effects of the fine-grid simulations. In the upscaled LBM, each coarse grid cell represents a subdomain of the fine-grid discretization, and an effective permeability based on reduced-order models is proposed as the grid is coarsened. The effective permeability is computed using solutions of local problems (e.g., by performing local LBM simulations on the fine grids using the original permeability distribution) and used on the coarse grids in the upscaled simulations. The upscaled LBM, which can reduce the computational cost of the existing LBM and transfer information between different scales, is implemented. The results of coarse-grid, reduced-order simulations agree very well with averaged results obtained using a fine grid.

  7. Upscaled Lattice Boltzmann Method for Simulations of Flows in Heterogeneous Porous Media

    KAUST Repository

    Li, Jun

    2017-02-16

    An upscaled Lattice Boltzmann Method (LBM) for flow simulations in heterogeneous porous media at the Darcy scale is proposed in this paper. In the Darcy-scale simulations, the Shan-Chen force model is used to simplify the algorithm. The proposed upscaled LBM uses coarser grids to represent the average effects of the fine-grid simulations. In the upscaled LBM, each coarse grid represents a subdomain of the fine-grid discretization and the effective permeability with the reduced-order models is proposed as we coarsen the grid. The effective permeability is computed using solutions of local problems (e.g., by performing local LBM simulations on the fine grids using the original permeability distribution) and used on the coarse grids in the upscaled simulations. The upscaled LBM that can reduce the computational cost of existing LBM and transfer the information between different scales is implemented. The results of coarse-grid, reduced-order, simulations agree very well with averaged results obtained using a fine grid.
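
    The coarsening step common to both records above can be sketched as follows. In the papers, the effective block permeability comes from solving local fine-grid LBM flow problems; here a geometric-mean block average is used as a cheap, commonly used stand-in for that local solve (exact only for special permeability structures):

```python
import numpy as np

rng = np.random.default_rng(5)

# heterogeneous fine-grid permeability field (log-normal, a common assumption)
k_fine = np.exp(rng.normal(0.0, 1.0, size=(32, 32)))

def upscale(k, f):
    """Effective permeability per f x f coarse block via geometric mean
    (a stand-in for solving a local fine-grid flow problem per block)."""
    n0, n1 = k.shape[0] // f, k.shape[1] // f
    blocks = np.log(k).reshape(n0, f, n1, f)
    return np.exp(blocks.mean(axis=(1, 3)))

k_coarse = upscale(k_fine, 8)   # 32x32 fine grid -> 4x4 coarse grid
```

The coarse field then drives the Darcy-scale simulation; each effective value is bounded by the fine-grid extremes within its block, as any admissible upscaling must be.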

  8. Development of a Regional Climate Model: FIZR Simulation of January Conditions over the North American West Coast

    Science.gov (United States)

    Goyette, Stephane

    1995-11-01

    This thesis deals with regional climate numerical modelling. The main objective is to develop a regional climate model capable of simulating mesoscale phenomena. Our study area is the North American West Coast, chosen because of the complexity of its relief and the control that relief exerts on the climate. The motivations for this study are numerous: on the one hand, the coarse spatial resolution of general circulation models (GCMs) of the atmosphere cannot, in practice, be increased without excessive integration costs; on the other hand, environmental management demands ever more regional climate data at finer spatial resolution. Until now, GCMs have been the models most valued for their ability to simulate the climate as well as global climate change. However, fine-scale climate phenomena still elude GCMs because of their coarse spatial resolution. Moreover, the socio-economic repercussions of possible climate change are closely linked to phenomena imperceptible to current GCMs. In order to circumvent certain problems inherent in resolution, a practical approach takes a limited spatial domain of a GCM and nests within it another numerical model with a high-resolution mesh. This nesting process then implies a new numerical simulation. This "retro-simulation" is driven within the restricted domain by pieces of information supplied by the GCM and forced by mechanisms handled solely by the nested model.
    Thus, in order to refine the spatial precision of large-scale climate predictions, we develop here a numerical model called FIZR, which provides regional climate information valid at the fine spatial scale

  9. The fine-tuning cost of the likelihood in SUSY models

    CERN Document Server

    Ghilencea, D M

    2013-01-01

    In SUSY models, the fine tuning of the electroweak (EW) scale with respect to their parameters gamma_i={m_0, m_{1/2}, mu_0, A_0, B_0,...} and the maximal likelihood L to fit the experimental data are usually regarded as two different problems. We show that, if one regards the EW minimum conditions as constraints that fix the EW scale, this commonly held view is not correct and that the likelihood contains all the information about fine-tuning. In this case we show that the corrected likelihood is equal to the ratio L/Delta of the usual likelihood L and the traditional fine tuning measure Delta of the EW scale. A similar result is obtained for the integrated likelihood over the set {gamma_i}, that can be written as a surface integral of the ratio L/Delta, with the surface in gamma_i space determined by the EW minimum constraints. As a result, a large likelihood actually demands a large ratio L/Delta or equivalently, a small chi^2_{new}=chi^2_{old}+2*ln(Delta). This shows the fine-tuning cost to the likelihood ...

  10. Parameter and model uncertainty in a life-table model for fine particles (PM2.5): a statistical modeling study.

    Science.gov (United States)

    Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha

    2007-08-23

    The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and the parameter uncertainties were (iii) exposure-response coefficients for different mortality outcomes, and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation on the health effect uncertainties. The magnitude of the health effects costs depended mostly on discount rate, exposure-response coefficient, and plausibility of the cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in the fine particle impact assessment when compared with other uncertainties.
When estimating life-expectancy, the estimates used for cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful

  11. Parameter and model uncertainty in a life-table model for fine particles (PM2.5): a statistical modeling study

    Directory of Open Access Journals (Sweden)

    Jantunen Matti J

    2007-08-01

    Full Text Available Abstract Background The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Methods Life expectancy of the Helsinki metropolitan area population and the change in life expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and the parameter uncertainties were (iii) exposure-response coefficients for different mortality outcomes and (iv) exposure estimates for different age groups. The monetary value of the years of life lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation against the health effect uncertainties. Results The magnitude of the health effect costs depended mostly on the discount rate, the exposure-response coefficient, and the plausibility of cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental, and infant mortality) and lag had only a minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in fine particle impact assessment when compared with other uncertainties. Conclusion When estimating life-expectancy, the estimates used for cardiopulmonary exposure
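The rank-order sensitivity analysis described in this abstract can be sketched with a minimal Spearman-correlation screen; the input distributions and the toy impact output below are invented for illustration and are not the study's data:

```python
import random
import statistics

def spearman(x, y):
    """Spearman rank-order correlation between two equal-length samples (no ties)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    mx, my = statistics.mean(rx), statistics.mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

random.seed(1)
n = 2000
# Hypothetical uncertain inputs for a toy impact model:
coeff = [random.lognormvariate(0.0, 0.3) for _ in range(n)]   # exposure-response
exposure = [random.gauss(10.0, 2.0) for _ in range(n)]        # exposure level
# Toy output: impact proportional to coefficient times exposure, plus noise.
impact = [c * e + random.gauss(0.0, 1.0) for c, e in zip(coeff, exposure)]

rank_corr_coeff = spearman(coeff, impact)
rank_corr_exposure = spearman(exposure, impact)
# The input with the stronger rank correlation dominates output uncertainty.
```

Ranking inputs by these correlations is what identifies, in the study's terms, which uncertainties (exposure-response coefficient, discount rate, plausibility) matter most.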

  12. Incorporation of Fine-Grained Sediment Erodibility Measurements into Sediment Transport Modeling, Capitol Lake, Washington

    Science.gov (United States)

    Stevens, Andrew W.; Gelfenbaum, Guy; Elias, Edwin; Jones, Craig

    2008-01-01

    Capitol Lake was created in 1951 with the construction of a concrete dam and control gate that prevented salt-water intrusion into the newly formed lake and regulated the flow of the Deschutes River into southern Puget Sound. Physical processes associated with the former tidally dominated estuary were altered, and the dam structure itself likely caused an increase in the retention of sediment flowing into the lake from the Deschutes River. Several efforts to manage sediment accumulation in the lake, including dredging and the construction of sediment traps upriver, failed to stop the lake from filling with sediment. The Deschutes Estuary Feasibility Study (DEFS) was carried out to evaluate the possibility of removing the dam and restoring estuarine processes as an alternative to ongoing lake management. An important component of DEFS was the creation of a hydrodynamic and sediment transport model of the restored Deschutes Estuary. Results from model simulations indicated that estuarine processes would be restored under each of four restoration alternatives, and that over time, the restored estuary would have morphological features similar to the pre-dam estuary. The model also predicted that after dam removal, a large portion of the sediment eroded from the lake bottom would be deposited near the Port of Olympia and a marina located in lower Budd Inlet, seaward of the present dam. The volume of sediment transported downstream was a critical piece of information that managers needed to estimate the total cost of the proposed restoration project. However, the ability of the model to predict the magnitude of sediment transport in general and, in particular, the volume of sediment deposition in the port and marina was limited by a lack of information on the erodibility of fine-grained sediments in Capitol Lake. Cores at several sites throughout Capitol Lake were collected between October 31 and November 1, 2007.
The erodibility of sediments in the cores was later determined in the

  13. Simple Model with Time-Varying Fine-Structure ``Constant''

    Science.gov (United States)

    Berman, M. S.

    2009-10-01

    Extending the original version written in collaboration with L.A. Trevisan, we study the generalisation of Dirac's Large Numbers Hypothesis (LNH), so that time-variation of the fine-structure constant, due to varying electric permittivity and magnetic permeability, is included along with other variations (the cosmological and gravitational ``constants'', etc.). We consider the present Universe, and also an inflationary scenario. Rotation of the Universe is a possibility in this model.

  14. The fine-tuning problem in renormalized perturbation theory: Spontaneously-broken gauge models

    International Nuclear Information System (INIS)

    Foda, O.E.

    1983-01-01

    We study the stability of tree-level gauge hierarchies at higher orders in renormalized perturbation theory, in a model with spontaneously-broken gauge symmetries. We confirm previous results indicating that if the model is renormalized using BPHZ, then the tree-level hierarchy is not upset by the radiative corrections. Consequently, no fine-tuning of the initial parameters is required to maintain it, in contrast to the result obtained using Dimensional Renormalization. This verifies the conclusion that the need for fine-tuning, when it arises, is an artifact of the application of a certain class of renormalization schemes. (orig.)

  15. Rotational and fine structure of open-shell molecules in nearly degenerate electronic states

    Science.gov (United States)

    Liu, Jinjun

    2018-03-01

    An effective Hamiltonian without symmetry restriction has been developed to model the rotational and fine structure of two nearly degenerate electronic states of an open-shell molecule. In addition to the rotational Hamiltonian for an asymmetric top, this spectroscopic model includes the energy separation between the two states due to difference potential and zero-point energy difference, as well as the spin-orbit (SO), Coriolis, and electron spin-molecular rotation (SR) interactions. Hamiltonian matrices are computed using orbitally and fully symmetrized case (a) and case (b) basis sets. Intensity formulae and selection rules for rotational transitions between a pair of nearly degenerate states and a nondegenerate state have also been derived using all four basis sets. It is demonstrated using real examples of free radicals that the fine structure of a single electronic state can be simulated with either a SR tensor or a combination of SO and Coriolis constants. The related molecular constants can be determined precisely only when all interacting levels are simulated simultaneously. The present study suggests that analysis of rotational and fine structure can provide quantitative insights into vibronic interactions and related effects.
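The avoided-crossing structure at the heart of such a two-state effective Hamiltonian can be illustrated with a minimal 2×2 example; the matrix below is purely schematic (delta standing in for the separation between the nearly degenerate states, v for an off-diagonal SO/Coriolis-type coupling), not the paper's full Hamiltonian:

```python
import math

# Illustrative 2x2 avoided-crossing Hamiltonian [[0, v], [v, delta]]:
# delta plays the role of the energy separation between two nearly
# degenerate states, v an off-diagonal coupling (e.g. spin-orbit).
def two_state_levels(delta, v):
    mean = delta / 2.0
    half_split = math.sqrt(delta * delta + 4.0 * v * v) / 2.0
    return mean - half_split, mean + half_split

lo, hi = two_state_levels(delta=10.0, v=3.0)
# The coupling pushes the levels apart beyond the bare separation:
# hi - lo = sqrt(delta**2 + 4*v**2), which exceeds delta whenever v != 0.
```

The same mechanism is why, as the abstract notes, the interacting levels must be fitted simultaneously: each observed splitting mixes the separation and the coupling constants.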

  16. Development of a simulation model of semi-active suspension for monorail

    Science.gov (United States)

    Hasnan, K.; Didane, D. H.; Kamarudin, M. A.; Bakhsh, Qadir; Abdulmalik, R. E.

    2016-11-01

    The new Kuala Lumpur Monorail Fleet Expansion Project (KLMFEP) uses semi-active technology in its suspension system. It is recognized that the suspension system influences ride quality; thus, one way to further improve ride quality is by fine-tuning the semi-active suspension system on the new KL Monorail. The semi-active suspension for the monorail, specifically in terms of improving ride quality, could be exploited further. Hence a simulation model is required that will act as a platform to test the design of a complete suspension system, particularly to investigate ride comfort performance. MSC Adams software was chosen as the tool to develop the simulation platform, where all parameters and data are represented by mathematical equations, with the new KL Monorail as the reference model. In the simulation, the model went through a step disturbance on the guideway for stability and ride comfort analysis. The model has shown positive results: the monorail is in stable condition as the outcome of the stability analysis, and it scores a Rating 1 classification in ISO 2631 ride comfort performance (very comfortable) as the overall outcome of the ride comfort analysis. The model is also adjustable, flexible, and understandable by engineers within the field for the purpose of further development.
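A much-reduced version of what such a platform computes is a quarter-car model with an on/off "skyhook" semi-active damper driven over a step disturbance; all masses, stiffnesses, and damper coefficients below are invented for illustration and are not the KL Monorail's actual parameters:

```python
# Quarter-car model with a simple on/off "skyhook" semi-active damper,
# stepped over a 2 cm guideway disturbance, integrated with
# semi-implicit Euler. All parameter values are illustrative only.
def simulate(c_on=2000.0, c_off=300.0, dt=1e-3, steps=5000):
    m_s, m_u = 300.0, 40.0        # sprung / unsprung mass (kg)
    k_s, k_t = 20000.0, 180000.0  # suspension / tyre stiffness (N/m)
    road = 0.02                   # step disturbance height (m)
    z_s = z_u = v_s = v_u = 0.0
    peak = 0.0
    for _ in range(steps):
        # Skyhook logic: damp hard when damping opposes sprung-mass motion.
        c = c_on if v_s * (v_s - v_u) > 0.0 else c_off
        f_susp = k_s * (z_u - z_s) + c * (v_u - v_s)
        a_s = f_susp / m_s
        a_u = (-f_susp + k_t * (road - z_u)) / m_u
        v_s += a_s * dt
        v_u += a_u * dt
        z_s += v_s * dt
        z_u += v_u * dt
        peak = max(peak, abs(z_s))
    return z_s, peak

z_final, z_peak = simulate()
# The sprung mass settles onto the new guideway level (about 0.02 m).
```

In a full study, the body acceleration history from such a run would be frequency-weighted per ISO 2631 to produce the comfort rating the abstract refers to.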

  17. Modeling and simulation of storm surge on Staten Island to understand inundation mitigation strategies

    Science.gov (United States)

    Kress, Michael E.; Benimoff, Alan I.; Fritz, William J.; Thatcher, Cindy A.; Blanton, Brian O.; Dzedzits, Eugene

    2016-01-01

    Hurricane Sandy made landfall on October 29, 2012, near Brigantine, New Jersey, and had a transformative impact on Staten Island and the New York metropolitan area. Of the 43 New York City fatalities, 23 occurred on Staten Island. The borough, with a population of approximately 500,000, experienced some of the most devastating impacts of the storm. Since Hurricane Sandy, protective dunes have been constructed on the southeast shore of Staten Island. ADCIRC+SWAN model simulations run on The City University of New York's Cray XE6M, housed at the College of Staten Island, using updated topographic data show that the coast of Staten Island is still susceptible to tidal surges similar to those generated by Hurricane Sandy. Sandy hindcast simulations of storm surges focusing on Staten Island are in good agreement with observed storm tide measurements. Model results calculated from fine-scaled and coarse-scaled computational grids demonstrate that finer grids better resolve small differences in the topography of critical hydraulic control structures, which affect storm surge inundation levels. The storm surge simulations, based on post-storm topography obtained from high-resolution lidar, provide much-needed information to understand Staten Island's changing vulnerability to storm surge inundation. The results of fine-scale storm surge simulations can be used to inform efforts to improve resiliency to future storms. For example, protective barriers contain planned gaps in the dunes to provide for beach access that may inadvertently increase the vulnerability of the area.

  18. The Effect Of Fine Particle Migration On Void Ratio Of Gap Graded Soil

    Directory of Open Access Journals (Sweden)

    Mayssa Salem Flayh

    2017-12-01

    Full Text Available Soil is exposed to the migration of fine particles in some cases, for example during excavation when the groundwater level coincides with the soil level; the resulting seepage drives the migration of fine particles through the soil. This migration of fine particles changes the structure of the soil and its properties. In this study we examine the change in soil properties due to the migration of fine particles using four types of soil: the first contains no fine particles, while the second, third, and fourth contain 10, 20, and 30 percent fines, respectively. Tests carried out on these soils included Atterberg limits, sieve analysis, specific gravity, shear resistance, permeability, modified Proctor, and consolidation. A model was created to simulate the reality of soil exposed to excavations. Three levels were selected in the model to compare the results for each of the four soils under study, giving 24 models in total. Through laboratory work, the initial and final void ratios before and after the particle migration were obtained. These tests showed that the migration of particles clearly increases the void ratio.
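The void-ratio bookkeeping behind such results follows standard soil phase relations; a minimal sketch with invented numbers (not the paper's measurements), assuming the usual relation e = Gs·γw/γd − 1:

```python
# Standard phase relation: void ratio from specific gravity Gs,
# unit weight of water gamma_w, and dry unit weight gamma_d (kN/m3).
# The numbers below are illustrative, not the paper's data.
def void_ratio(Gs, gamma_d, gamma_w=9.81):
    return Gs * gamma_w / gamma_d - 1.0

# If fines migrate out of a gap-graded soil, the dry unit weight drops
# and the computed void ratio rises.
e_before = void_ratio(Gs=2.65, gamma_d=17.0)
e_after = void_ratio(Gs=2.65, gamma_d=15.5)
# e_after > e_before: particle loss increases the voids.
```

This is the quantity the study tracks before and after migration to show the loosening of the soil skeleton.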

  19. The sedimentation of fine particles in liquid foams

    OpenAIRE

    Rouyer, Florence; Fritz, Christelle; Pitois, Olivier

    2010-01-01

    International audience; We investigate the sedimentation of fine particles in liquid channels of foams. The study combines numerical simulations with experiments performed in foams and in isolated vertical foam channels. Results show that particulate motion is controlled by the confinement parameter (l) and the mobility of the channel surfaces modelled by interfacial shear viscosity. Interestingly, whereas the position of the particle within the channel cross-section is expected to be a relev...

  20. A Pore Scale Flow Simulation of Reconstructed Model Based on the Micro Seepage Experiment

    Directory of Open Access Journals (Sweden)

    Jianjun Liu

    2017-01-01

    Full Text Available Research on microscopic seepage mechanisms and fine description of reservoir pore structure plays an important role in the effective development of low- and ultralow-permeability reservoirs. In this paper, a typical micro pore structure model was established in two ways: the conventional model reconstruction method and the built-in graphics function method of Comsol®. A pore-scale flow simulation was conducted on the models reconstructed in these two ways, using the creeping flow interface and the Brinkman equation interface, respectively. The results showed that the simulations of the two models agreed well in the distributions of velocity, pressure, Reynolds number, and so on. This verified the feasibility of the direct reconstruction method from graphic file to geometric model, which provides a new way to diversify numerical studies of micro seepage mechanisms.

  1. Simulating faults and plate boundaries with a transversely isotropic plasticity model

    Science.gov (United States)

    Sharples, W.; Moresi, L. N.; Velic, M.; Jadamec, M. A.; May, D. A.

    2016-03-01

    In mantle convection simulations, dynamically evolving plate boundaries have, for the most part, been represented using a visco-plastic flow law. These systems develop fine-scale, localized, weak shear band structures which are reminiscent of faults, but it is a significant challenge to resolve both the large-scale and the emergent, small-scale behavior. We address this issue of resolution by taking into account the observation that a rock element with embedded, planar failure surfaces responds as a non-linear, transversely isotropic material with a weak orientation defined by the plane of the failure surface. This approach partly accounts for the large-scale behavior of fine-scale systems of shear bands which we are not in a position to resolve explicitly. We evaluate the capacity of this continuum approach to model plate boundaries, specifically in the context of subduction models where the plate boundary interface has often been represented as a planar discontinuity. We show that the inclusion of the transversely isotropic plasticity model for the plate boundary promotes asymmetric subduction from initiation. A realistic evolution of the plate boundary interface and associated stresses is crucial to understanding inter-plate coupling, convergent-margin-driven topography, and earthquakes.

  2. Photoionization modeling of the LWS fine-structure lines in IR bright galaxies

    Science.gov (United States)

    Satyapal, S.; Luhman, M. L.; Fischer, J.; Greenhouse, M. A.; Wolfire, M. G.

    1997-01-01

    The long wavelength spectrometer (LWS) fine structure line spectra from infrared luminous galaxies were modeled using stellar evolutionary synthesis models combined with photoionization and photodissociation region models. The calculations were carried out by using the computational code CLOUDY. Starburst and active galactic nuclei models are presented. The effects of dust in the ionized region are examined.

  3. The standard model and the fine structure constant at Planck distances in Bennet-Brene-Nielsen-Picek random dynamics

    International Nuclear Information System (INIS)

    Laperashvili, L.V.

    1994-01-01

    An overview of papers by Nielsen, Bennet, Brene, and Picek, forming the basis of the model called random dynamics, is given in the first part of this work. The fine structure constant is calculated in the second part by using the technique of path integration in the U(1) lattice gauge theory. It is shown that α^(-1)_(U(1),crit) ∼ 19.8. This value is in agreement with the prediction of random dynamics. The obtained results are compared with the results of Monte Carlo simulations. 20 refs., 3 figs., 1 tab

  4. Modeling of episodic particulate matter events using a 3-D air quality model with fine grid: Applications to a pair of cities in the US/Mexico border

    Science.gov (United States)

    Choi, Yu-Jin; Hyde, Peter; Fernando, H. J. S.

    High (episodic) particulate matter (PM) events over the sister cities of Douglas (AZ) and Agua Prieta (Sonora), located on the US-Mexico border, were simulated using the 3D Eulerian air quality model MODELS-3/CMAQ. The best available input information was used for the simulations, with the pollution inventory specified on a fine grid. In spite of inherent uncertainties associated with the emission inventory as well as the chemistry and meteorology of the air quality simulation tool, model evaluations showed acceptable PM predictions, while demonstrating the need to include the interaction between meteorology and emissions in an interactive mode in the model, a capability currently unavailable in MODELS-3/CMAQ when dealing with PM. Sensitivity studies on boundary influence indicate an insignificant regional (advection) contribution of PM to the study area. The contribution of secondary particles to the occurrence of high PM events was trivial. High PM episodes in the study area, therefore, are purely local events that largely depend on local meteorological conditions. The major PM emission sources were identified as vehicular activities on unpaved/paved roads and wind-blown dust. The results will be of immediate utility in devising PM mitigation strategies for the study area, which is one of the US EPA-designated non-attainment areas with respect to PM.

  5. Modelling Regional Surface Energy Exchange and Boundary Layer Development in Boreal Sweden — Comparison of Mesoscale Model (RAMS Simulations with Aircraft and Tower Observations

    Directory of Open Access Journals (Sweden)

    Meelis Mölder

    2012-10-01

    Full Text Available Simulations of atmospheric and surface processes with an atmospheric model (RAMS) during a period of ten days in August 2001 over a boreal area in Sweden were compared to tower measurements and aircraft measurements of vertical profiles, as well as surface fluxes from low-altitude flights. The shape of the vertical profiles was simulated reasonably well by the model, although there were significant biases in absolute values. Surface fluxes were less well simulated, and the model showed considerable sensitivity to initial soil moisture conditions. The simulations were performed using two different land cover databases: the original one supplied with the RAMS model and the more detailed CORINE database. The two different land cover databases resulted in relatively large fine-scale differences in the simulated values. The conclusion of this study is that RAMS has the potential to be used as a tool to estimate boundary layer conditions, surface fluxes, and meteorology over a boreal area, but also that further improvement is needed.

  6. Microbial and Organic Fine Particle Transport Dynamics in Streams - a Combined Experimental and Stochastic Modeling Approach

    Science.gov (United States)

    Drummond, Jen; Davies-Colley, Rob; Stott, Rebecca; Sukias, James; Nagels, John; Sharp, Alice; Packman, Aaron

    2014-05-01

    Transport dynamics of microbial cells and organic fine particles are important to stream ecology and biogeochemistry. Cells and particles continuously deposit and resuspend during downstream transport owing to a variety of processes including gravitational settling, interactions with in-stream structures or biofilms at the sediment-water interface, and hyporheic exchange and filtration within underlying sediments. Deposited cells and particles are also resuspended following increases in streamflow. Fine particle retention influences biogeochemical processing of substrates and nutrients (C, N, P), while remobilization of pathogenic microbes during flood events presents a hazard to downstream uses such as water supplies and recreation. We are conducting studies to gain insights into the dynamics of fine particles and microbes in streams, with a campaign of experiments and modeling. The results improve understanding of fine sediment transport, carbon cycling, nutrient spiraling, and microbial hazards in streams. We developed a stochastic model to describe the transport and retention of fine particles and microbes in rivers that accounts for hyporheic exchange and transport through porewaters, reversible filtration within the streambed, and microbial inactivation in the water column and subsurface. This model framework is an advance over previous work in that it incorporates detailed transport and retention processes that are amenable to measurement. Solute, particle, and microbial transport were observed both locally within sediment and at the whole-stream scale. A multi-tracer whole-stream injection experiment compared the transport and retention of a conservative solute, fluorescent fine particles, and the fecal indicator bacterium Escherichia coli. Retention occurred within both the underlying sediment bed and stands of submerged macrophytes. 
The results demonstrate that the combination of local measurements, whole-stream tracer experiments, and advanced modeling
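The mobile-immobile exchange at the core of such a stochastic transport model can be sketched with a minimal particle-tracking simulation; the velocity, deposition, and resuspension rates below are invented for illustration and are not fitted to the experiments described above:

```python
import random

# Minimal stochastic sketch of fine-particle transport with reversible
# retention: a particle alternates between a mobile phase (advecting at
# velocity u) and an immobile phase (deposited in the bed or biofilm).
# All rates are illustrative, not measured values.
def travel_time(L=100.0, u=0.2, k_dep=0.01, k_res=0.002, dt=1.0, rng=random):
    """Time (s) for one particle to traverse reach length L (m)."""
    x, t, mobile = 0.0, 0.0, True
    while x < L:
        if mobile:
            x += u * dt
            if rng.random() < k_dep * dt:   # deposit
                mobile = False
        elif rng.random() < k_res * dt:     # resuspend
            mobile = True
        t += dt
    return t

random.seed(0)
times = [travel_time() for _ in range(300)]
advective = 100.0 / 0.2  # 500 s for a conservative, non-retained tracer
mean_t = sum(times) / len(times)
# Retention delays particles well beyond the pure advection time,
# producing the long breakthrough tails seen for particles vs. solutes.
```

Extending the immobile-phase residence times to heavy-tailed distributions, and adding an inactivation rate, recovers the qualitative behaviour the abstract describes for microbes.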

  7. Investigation of modeling and simulation on a PWR power conversion system with RELAP5

    International Nuclear Information System (INIS)

    Rui Gao; Yang Yanhua; Lin Meng; Yuan Minghao; Xie Zhengrui

    2007-01-01

    Based on the power conversion system of the nuclear and conventional islands of the Dayabay nuclear power station, this paper models the thermal-hydraulic systems of a PWR using the best-estimate program RELAP5. To simulate the full-scope power conversion system, not only the reactor coolant system (RCP) of the nuclear island, but also the main steam system (VVP), turbine steam and drain system (GPV), bypass system (GCT), feedwater system (FW), condensate extraction system (CEX), moisture separator reheater system (GSS), turbine-driven feedwater pump (APP), and low-pressure and high-pressure feedwater heater systems (ABP and AHP) of the conventional island are considered and modeled. A comparison between the simulated results and the actual data of the reactor at full power demonstrates a fine match for Dayabay, and also demonstrates the feasibility of simulating the full-scope power conversion system of a PWR with RELAP5. (author)

  8. A novel approach to finely tuned supersymmetric standard models: The case of the non-universal Higgs mass model

    Science.gov (United States)

    Yamaguchi, Masahiro; Yin, Wen

    2018-02-01

    Discarding the prejudice about fine tuning, we propose a novel and efficient approach to identify relevant regions of the fundamental parameter space in supersymmetric models with some amount of fine tuning. The essential idea is the mapping of experimental constraints at a low-energy scale, rather than the parameter sets, onto the fundamental parameter space. Applying this method to the non-universal Higgs mass model, we identify a new interesting superparticle mass pattern where some of the first-two-generation squarks are light whilst the stops are kept as heavy as 6 TeV. Furthermore, as another application of this method, we show that the discrepancy in the muon anomalous magnetic dipole moment can be accounted for by a supersymmetric contribution within the 1σ level of the experimental and theoretical errors, which was overlooked by previous studies due to the extreme fine tuning required.

  9. An Improved Scale-Adaptive Simulation Model for Massively Separated Flows

    Directory of Open Access Journals (Sweden)

    Yue Liu

    2018-01-01

    Full Text Available A new hybrid modelling method termed improved scale-adaptive simulation (ISAS) is proposed by introducing the von Karman operator into the dissipation term of the turbulence scale equation; its derivation and the calibration of its constants are presented, and the typical circular cylinder flow at Re = 3900 is selected for validation. As expected, the proposed ISAS approach with the concept of scale adaptivity appears more efficient than the original SAS method in obtaining a convergent resolution, while remaining comparable with DES in visually capturing the fine-scale unsteadiness. Furthermore, the grid sensitivity issue of DES is encouragingly remedied thanks to the locally adjusted limiter. The ISAS simulation turns out to represent attractively the development of the shear layers and the flow profiles of the recirculation region; thus, the focused statistical quantities, such as the recirculation length and drag coefficient, are closer to the available measurements than the DES and SAS outputs. In general, the new modelling method, combining the features of the DES and SAS concepts, is capable of simulating turbulent structures down to the grid limit in a simple and effective way, which is practically valuable for engineering flows.
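The von Karman operator that SAS-type models introduce is a local length scale L_vK = κ·|U′|/|U″|, which acts as a sensor of resolved flow structure; a hedged sketch evaluating it by finite differences on a synthetic 1-D velocity profile (the profile itself is invented for illustration):

```python
import math

# Von Karman length scale L_vK = kappa * |U'| / |U''|, evaluated by
# central finite differences on a 1-D velocity profile. The sinusoidal
# profile is a toy example with a known analytic answer.
KAPPA = 0.41

def von_karman_length(u, dy):
    out = []
    for i in range(1, len(u) - 1):
        du = (u[i + 1] - u[i - 1]) / (2.0 * dy)
        d2u = (u[i + 1] - 2.0 * u[i] + u[i - 1]) / (dy * dy)
        out.append(KAPPA * abs(du) / max(abs(d2u), 1e-12))
    return out

dy = 0.01
y = [i * dy for i in range(50)]
u = [math.sin(4.0 * yi) for yi in y]  # toy profile: U' = 4cos(4y), U'' = -16sin(4y)
L = von_karman_length(u, dy)
# Analytically, L_vK = KAPPA * 4|cos(4y)| / (16|sin(4y)|) at each point.
```

Feeding this length into the dissipation term is what lets the model shrink its eddy viscosity where the grid resolves fine-scale unsteadiness.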

  10. Quantitative rainfall metrics for comparing volumetric rainfall retrievals to fine scale models

    Science.gov (United States)

    Collis, Scott; Tao, Wei-Kuo; Giangrande, Scott; Fridlind, Ann; Theisen, Adam; Jensen, Michael

    2013-04-01

    Precipitation processes play a significant role in the energy balance of convective systems, for example through latent heating and evaporative cooling. Heavy precipitation "cores" can also be a proxy for vigorous convection and vertical motions. However, comparisons between rainfall rate retrievals from volumetric remote sensors and forecast rain fields from high-resolution numerical weather prediction simulations are complicated by differences in the location and timing of storm morphological features. This presentation will outline a series of metrics for diagnosing the spatial variability and statistical properties of precipitation maps produced both from models and from retrievals. We include existing metrics such as Contoured Frequency by Altitude Diagrams (Yuter and Houze 1995) and Statistical Coverage Products (May and Lane 2009) and propose new metrics based on morphology and cell- and feature-based statistics. The work presented focuses on observations from the ARM Southern Great Plains radar network, consisting of three agile X-band radar systems with a very dense coverage pattern and a C-band system providing site-wide coverage. By combining multiple sensors, resolutions of 250 m² can be achieved, allowing improved characterization of fine-scale features. Analyses compare data collected during the Midlatitude Continental Convective Clouds Experiment (MC3E) with simulations of observed systems using the NASA Unified Weather Research and Forecasting model. May, P. T., and T. P. Lane, 2009: A method for using weather radar data to test cloud resolving models. Meteorological Applications, 16, 425-425, doi:10.1002/met.150. Yuter, S. E., and R. A. Houze, 1995: Three-Dimensional Kinematic and Microphysical Evolution of Florida Cumulonimbus. Part II: Frequency Distributions of Vertical Velocity, Reflectivity, and Differential Reflectivity. Mon. Wea. Rev., 123, 1941-1963, doi:10.1175/1520-0493(1995)1232.0.CO;2.
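The CFAD cited above (Yuter and Houze 1995) is essentially a value-by-altitude frequency table, normalised at each altitude; a hedged sketch using synthetic reflectivity data (all numbers invented for illustration):

```python
import random

# Contoured Frequency by Altitude Diagram (CFAD) sketch: at each
# altitude level, histogram the field values and normalise so the
# frequencies at that level sum to 1.
def cfad(values_by_alt, bins):
    diagram = []
    for values in values_by_alt:
        counts = [0] * (len(bins) - 1)
        for v in values:
            for b in range(len(bins) - 1):
                if bins[b] <= v < bins[b + 1]:
                    counts[b] += 1
                    break
        total = sum(counts) or 1
        diagram.append([c / total for c in counts])
    return diagram

random.seed(2)
# Synthetic reflectivity (dBZ) decreasing with altitude, 20 levels x 500 samples.
data = [[random.gauss(30.0 - z, 5.0) for _ in range(500)] for z in range(20)]
bins = list(range(-10, 61, 5))
d = cfad(data, bins)
# Each row of d is the normalised reflectivity distribution at one altitude.
```

Because each altitude is normalised independently, a CFAD compares the shape of model and retrieval distributions even when the two disagree on where and when individual cells occur.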

  11. Development of fine-resolution analyses and expanded large-scale forcing properties: 2. Scale awareness and application to single-column model experiments

    Science.gov (United States)

    Feng, Sha; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Vogelmann, Andrew M.; Endo, Satoshi

    2015-01-01

    Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multiscale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component of the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.

  12. Body Fineness Ratio as a Predictor of Maximum Prolonged-Swimming Speed in Coral Reef Fishes

    Science.gov (United States)

    Walker, Jeffrey A.; Alfaro, Michael E.; Noble, Mae M.; Fulton, Christopher J.

    2013-01-01

    The ability to sustain high swimming speeds is believed to be an important factor affecting resource acquisition in fishes. While we have gained insights into how fin morphology and motion influence swimming performance in coral reef fishes, the role of other traits, such as body shape, remains poorly understood. We explore the ability of two mechanistic models of the causal relationship between body fineness ratio and endurance swimming performance to predict maximum prolonged-swimming speed (Umax) among 84 fish species from the Great Barrier Reef, Australia. A drag model, based on semi-empirical data on the drag of rigid, submerged bodies of revolution, was applied to species that employ pectoral-fin propulsion with a rigid body at Umax. An alternative model, based on the results of computer simulations of optimal shape in self-propelled undulating bodies, was applied to the species that swim by body-caudal-fin propulsion at Umax. For pectoral-fin swimmers, Umax increased with fineness, and the rate of increase decreased with fineness, as predicted by the drag model. While the mechanistic and statistical models of the relationship between fineness and Umax were very similar, the mechanistic (and statistical) model explained only a small fraction of the variance in Umax. For body-caudal-fin swimmers, we found a non-linear relationship between fineness and Umax, which was largely negative over most of the range of fineness. This pattern fails to support either predictions from the computational models or standard functional interpretations of body shape variation in fishes. Our results suggest that the widespread hypothesis that a more optimal fineness increases endurance-swimming performance via reduced drag should be limited to fishes that swim with rigid bodies. PMID:24204575
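Semi-empirical drag models for rigid, streamlined bodies of revolution are commonly written as a flat-plate friction coefficient times a form factor in the fineness ratio; the sketch below uses the well-known Hoerner-type correlation purely for illustration, not the paper's fitted model:

```python
# Hoerner-type form factor for a streamlined body of revolution:
# drag multiplier relative to flat-plate skin friction, as a function
# of fineness ratio fr = length / diameter. Illustrative only.
def form_factor(fr):
    return 1.0 + 1.5 / fr ** 1.5 + 7.0 / fr ** 3

# A finer (more elongated) body pays a smaller pressure-drag penalty,
# and the benefit of extra fineness diminishes as fr grows:
penalties = {fr: round(form_factor(fr), 2) for fr in (2, 4, 6, 8)}
```

This shape reproduces the qualitative result reported for pectoral-fin swimmers: Umax rises with fineness, but with a decreasing rate of increase.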

  13. Modelling the fine and coarse fraction of heavy metals in Spain

    Science.gov (United States)

    García Vivanco, Marta; González, M. Angeles

    2014-05-01

    Heavy metals, such as cadmium, lead, nickel, arsenic, copper, chromium, zinc and selenium, are present in the air due to natural and anthropogenic emissions, normally bound to particles. These metals can affect living organisms via inhalation or ingestion, causing damage to human health and ecosystems. Small particles are inhaled and embedded in the lungs and alveoli more easily than coarse particles. The CHIMERE model is an Eulerian air quality model extensively used in air quality modelling. Metals have recently been included in a special version of this model developed by the CIEMAT (Madrid, Spain) modelling group. Vivanco et al. (2011) and González et al. (2012) showed the model performance for some metals in Spain and Europe. However, in these studies, metals were considered as fine particles. Some studies based on observed heavy-metal air concentrations indicate the presence of metals also in the coarse fraction, especially for Cu and Zn. For this reason, a new attempt at modelling metals considering a fine … - Arsenic, Lead, Cadmium and Nickel Ambient Air Concentrations in Spain, 2011. Proceedings of the 11th International Conference on Computational Science and Its Applications (ICCSA 11), 243-246. - González, M.A.; Vivanco, Marta; Palomino, Inmaculada; Garrido, Juan; Santiago, Manuel; Bessagnet, Bertrand: Modelling Some Heavy Metals Air Concentration in Europe. Water, Air & Soil Pollution, Sep. 2012, Vol. 223, Issue 8, p. 5227.

  14. Transport of reservoir fines

    DEFF Research Database (Denmark)

    Yuan, Hao; Shapiro, Alexander; Stenby, Erling Halfdan

    Modeling the transport of reservoir fines is of great importance for evaluating the damage of production wells and injectivity decline. The conventional methodology accounts for neither the formation heterogeneity around the wells nor the heterogeneity of the reservoir fines. We have developed an integral … dispersion equation for modeling the transport and deposition of reservoir fines. It successfully predicts the asymmetrical concentration profiles and the hyperexponential deposition observed in experiments.
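
    The hyperexponential deposition mentioned above can be illustrated with a simple mixture-of-exponentials sketch (hypothetical weights and filtration coefficients, not the authors' integral dispersion model): a population of fines with heterogeneous filtration coefficients deposits faster than any single exponential near the inlet.

```python
# Sketch: sub-populations of fines with different filtration coefficients
# lambda_i give a deposition profile that is a weighted sum of exponentials,
# i.e. "hyperexponential" decay near the inlet. Values are hypothetical.
import math

def deposition_profile(x: float, fractions, lambdas) -> float:
    """Deposited concentration (arbitrary units) at depth x (m) for a
    mixture of particle sub-populations."""
    return sum(f * lam * math.exp(-lam * x)
               for f, lam in zip(fractions, lambdas))

fractions = [0.7, 0.3]   # hypothetical sub-population weights
lambdas = [5.0, 0.5]     # hypothetical filtration coefficients (1/m)
for x in [0.0, 0.2, 0.5, 1.0]:
    print(f"x = {x:.1f} m: deposition = {deposition_profile(x, fractions, lambdas):.3f}")
```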

  15. Impact of vehicular emissions on the formation of fine particles in the Sao Paulo Metropolitan Area: a numerical study with the WRF-Chem model

    Directory of Open Access Journals (Sweden)

    A. Vara-Vela

    2016-01-01

    The objective of this work is to evaluate the impact of vehicular emissions on the formation of fine particles (PM2.5;  ≤  2.5 µm in diameter) in the Sao Paulo Metropolitan Area (SPMA) in Brazil, where ethanol is used intensively as a fuel in road vehicles. The Weather Research and Forecasting with Chemistry (WRF-Chem) model, which simulates feedbacks between meteorological variables and chemical species, is used as a photochemical modelling tool to describe the physico-chemical processes leading to the evolution of number and mass size distribution of particles through gas-to-particle conversion. A vehicular emission model based on statistical information of vehicular activity is applied to simulate vehicular emissions over the studied area. The simulation has been performed for a 1-month period (7 August–6 September 2012) to cover the availability of experimental data from the NUANCE-SPS (Narrowing the Uncertainties on Aerosol and Climate Changes in Sao Paulo State) project that aims to characterize emissions of atmospheric aerosols in the SPMA. The availability of experimental measurements of atmospheric aerosols and the application of the WRF-Chem model made it possible to represent some of the most important properties of fine particles in the SPMA, such as the mass size distribution and chemical composition, besides allowing us to evaluate their formation potential through the gas-to-particle conversion processes. Results show that the emission of primary gases, mostly from vehicles, led to a production of secondary particles between 20 and 30 % in relation to the total mass concentration of PM2.5 in the downtown SPMA. PM2.5 and primary natural aerosol (dust and sea salt) each contributed 40–50 % of the total PM10 (i.e. those  ≤  10 µm in diameter) concentration. Over 40 % of the formation of fine particles, by mass, was due to the emission of hydrocarbons, mainly aromatics. Furthermore, an increase in the

  16. Impact of vehicular emissions on the formation of fine particles in the Sao Paulo Metropolitan Area: a numerical study with the WRF-Chem model

    Science.gov (United States)

    Vara-Vela, A.; Andrade, M. F.; Kumar, P.; Ynoue, R. Y.; Muñoz, A. G.

    2016-01-01

    The objective of this work is to evaluate the impact of vehicular emissions on the formation of fine particles (PM2.5; ≤ 2.5 µm in diameter) in the Sao Paulo Metropolitan Area (SPMA) in Brazil, where ethanol is used intensively as a fuel in road vehicles. The Weather Research and Forecasting with Chemistry (WRF-Chem) model, which simulates feedbacks between meteorological variables and chemical species, is used as a photochemical modelling tool to describe the physico-chemical processes leading to the evolution of number and mass size distribution of particles through gas-to-particle conversion. A vehicular emission model based on statistical information of vehicular activity is applied to simulate vehicular emissions over the studied area. The simulation has been performed for a 1-month period (7 August-6 September 2012) to cover the availability of experimental data from the NUANCE-SPS (Narrowing the Uncertainties on Aerosol and Climate Changes in Sao Paulo State) project that aims to characterize emissions of atmospheric aerosols in the SPMA. The availability of experimental measurements of atmospheric aerosols and the application of the WRF-Chem model made it possible to represent some of the most important properties of fine particles in the SPMA, such as the mass size distribution and chemical composition, besides allowing us to evaluate their formation potential through the gas-to-particle conversion processes. Results show that the emission of primary gases, mostly from vehicles, led to a production of secondary particles between 20 and 30 % in relation to the total mass concentration of PM2.5 in the downtown SPMA. PM2.5 and primary natural aerosol (dust and sea salt) each contributed 40-50 % of the total PM10 (i.e. those ≤ 10 µm in diameter) concentration. Over 40 % of the formation of fine particles, by mass, was due to the emission of hydrocarbons, mainly aromatics. Furthermore, an increase in the number of small particles impaired the

  17. Fine-Grained Energy Modeling for the Source Code of a Mobile Application

    DEFF Research Database (Denmark)

    Li, Xueliang; Gallagher, John Patrick

    2016-01-01

    The goal of an energy model for source code is to lay a foundation for the application of energy-aware programming techniques. State of the art solutions are based on source-line energy information. In this paper, we present an approach to constructing a fine-grained energy model which is able...

  18. Acclimation of fine root respiration to soil warming involves starch deposition in very fine and fine roots: a case study in Fagus sylvatica saplings.

    Science.gov (United States)

    Di Iorio, Antonino; Giacomuzzi, Valentino; Chiatante, Donato

    2016-03-01

    Root activities in terms of respiration and non-structural carbohydrate (NSC) storage and mobilization have been suggested as major physiological factors in fine root lifespan. As more frequent heat waves and drought periods are expected within the next decades, to what extent does thermal acclimation in fine roots represent a mechanism to cope with such upcoming climatic conditions? In this study, possible changes in respiration rate and NSC [soluble sugar (SS) and starch] concentrations of very fine and fine roots were investigated in 2-year-old Fagus sylvatica saplings subjected to a simulated long-lasting heat wave event and to co-occurring soil drying. For both very fine and fine roots, soil temperature (ST) was inversely correlated with specific root length, respiration rate and SS concentration, but directly correlated with root mass, root tissue density and starch concentration. In particular, starch concentration increased under 28 °C ST and subsequently decreased under 21 °C ST. These findings show that thermal acclimation in very fine and fine roots due to 24 days of exposure to high ST (∼28 °C) induced starch accumulation. Such a 'carbon-saving strategy' should cover the maintenance costs associated with the recovery process if favorable environmental conditions are restored, such as at the end of a heat wave event. Drought appears to affect fine root vitality much more under moderate than under high temperature conditions, making temporary exposure to high ST less threatening to root vitality than expected. © 2015 Scandinavian Plant Physiology Society.

  19. Simulated root dynamics of a 160-year-old sugar maple (Acer saccharum Marsh.) tree with and without ozone exposure using the TREGRO model.

    Science.gov (United States)

    Retzlaff, W. A.; Weinstein, D. A.; Laurence, J. A.; Gollands, B.

    1996-01-01

    Because of difficulties in directly assessing root responses of mature forest trees exposed to atmospheric pollutants, we have used the model TREGRO to analyze the effects of a 3- and a 10-year exposure to ozone (O(3)) on root dynamics of a simulated 160-year-old sugar maple (Acer saccharum Marsh.) tree. We used existing phenological, allometric, and growth data to parameterize TREGRO to produce a simulated 160-year-old tree. Simulations were based on literature values for sugar maple fine root production and senescence and the photosynthetic responses of sugar maple seedlings exposed to O(3) in open-top chambers. In the simulated 3-year exposure to O(3), 2 x ambient atmospheric O(3) concentrations reduced net carbon (C) gain of the 160-year-old tree. This reduction occurred in the C storage pools (total nonstructural carbohydrate, TNC), with most of the reduction occurring in coarse (woody) roots. Total fine root production and senescence were unaffected by the simulated 3-year exposure to O(3). However, extending the simulated O(3) exposure period to 10 years depleted the TNC pools of the coarse roots and reduced total fine root production. Similar reductions in TNC pools have been observed in forest-grown sugar maple trees exhibiting symptoms of stress. We conclude that modeling can aid in evaluating the belowground response of mature forest trees to atmospheric pollution stress and could indicate the potential for gradual deterioration of tree health under conditions of long-term stress, a situation similar to that underlying the decline of sugar maple trees.
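
    The storage-pool depletion described above can be caricatured with a toy carbon budget (all numbers hypothetical, not TREGRO's parameterization): a constant annual net carbon balance either builds or drains a total nonstructural carbohydrate (TNC) pool over a multi-year run, with the ozone effect represented simply as a shift from surplus to deficit.

```python
# Toy carbon-budget sketch (not TREGRO): track a TNC storage pool under a
# constant annual net carbon balance; an O3-induced cut in net photosynthesis
# is represented as a negative annual balance. Numbers are hypothetical.
def tnc_trajectory(years, tnc0, annual_net_c):
    """Return the TNC pool (kg C) year by year, clamped at zero."""
    tnc, out = tnc0, [tnc0]
    for _ in range(years):
        tnc = max(0.0, tnc + annual_net_c)
        out.append(tnc)
    return out

control = tnc_trajectory(10, 10.0, +0.3)   # ambient O3: small annual surplus
ozone = tnc_trajectory(10, 10.0, -0.6)     # 2 x ambient O3: net annual deficit
print("control TNC after 10 yr:", round(control[-1], 2))
print("elevated-O3 TNC after 10 yr:", round(ozone[-1], 2))
```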

  20. An effective anisotropic poroelastic model for elastic wave propagation in finely layered media

    NARCIS (Netherlands)

    Kudarova, A.; van Dalen, K.N.; Drijkoningen, G.G.

    2016-01-01

    Mesoscopic-scale heterogeneities in porous media cause attenuation and dispersion at seismic frequencies. Effective models are often used to account for this. We have developed a new effective poroelastic model for finely layered media, and we evaluated its impact focusing on the angle-dependent

  1. Fine-Tuning Neural Patient Question Retrieval Model with Generative Adversarial Networks.

    Science.gov (United States)

    Tang, Guoyu; Ni, Yuan; Wang, Keqiang; Yong, Qin

    2018-01-01

    The online patient question and answering (Q&A) system attracts an increasing number of users in China. Patients post their questions and wait for a doctor's response. To avoid the lag time involved in waiting and to reduce the workload on doctors, a better method is to automatically retrieve a semantically equivalent question from the archive. We present a Generative Adversarial Network (GAN) based approach to automatically retrieve patient questions. We apply supervised deep-learning approaches to determine the similarity between patient questions. A GAN framework is then used to fine-tune the pre-trained deep learning models. The experimental results show that fine-tuning with a GAN can improve performance.
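
    As a minimal illustration of the retrieval step only (not the paper's neural model): archived questions are ranked against a new query by a similarity score. The deep similarity model that the GAN fine-tunes would replace the bag-of-words cosine used here; the example questions are invented.

```python
# Minimal retrieval sketch: rank archived questions by cosine similarity
# over bag-of-words vectors. A trained neural similarity model would
# replace the cosine() scoring function in the paper's setting.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

archive = [
    "what causes frequent headaches",
    "how to treat a mild fever at home",
    "is a persistent headache a sign of something serious",
]
query = "why do i keep getting headaches"
vecs = [Counter(q.split()) for q in archive]
qv = Counter(query.split())
best = max(range(len(archive)), key=lambda i: cosine(qv, vecs[i]))
print("Best match:", archive[best])
```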

  2. Evolution of the fine-structure constant in runaway dilaton models

    Energy Technology Data Exchange (ETDEWEB)

    Martins, C.J.A.P., E-mail: Carlos.Martins@astro.up.pt [Centro de Astrofísica, Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal); Instituto de Astrofísica e Ciências do Espaço, CAUP, Rua das Estrelas, 4150-762 Porto (Portugal); Vielzeuf, P.E., E-mail: pvielzeuf@ifae.es [Institut de Física d' Altes Energies, Universitat Autònoma de Barcelona, E-08193 Bellaterra (Barcelona) (Spain); Martinelli, M., E-mail: martinelli@thphys.uni-heidelberg.de [Institute for Theoretical Physics, University of Heidelberg, Philosophenweg 16, 69120, Heidelberg (Germany); Calabrese, E., E-mail: erminia.calabrese@astro.ox.ac.uk [Sub-department of Astrophysics, University of Oxford, Keble Road, Oxford OX1 3RH (United Kingdom); Pandolfi, S., E-mail: stefania@dark-cosmology.dk [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, DK-2100 Copenhagen (Denmark)

    2015-04-09

    We study the detailed evolution of the fine-structure constant α in the string-inspired runaway dilaton class of models of Damour, Piazza and Veneziano. We provide constraints on this scenario using the most recent α measurements and discuss ways to distinguish it from alternative models for varying α. For model parameters which saturate bounds from current observations, the redshift drift signal can differ considerably from that of the canonical ΛCDM paradigm at high redshifts. Measurements of this signal by the forthcoming European Extremely Large Telescope (E-ELT), together with more sensitive α measurements, will thus dramatically constrain these scenarios.
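
    For reference, the redshift drift signal against which the dilaton scenario is compared is, in a standard ΛCDM background, the canonical (Sandage-Loeb) expression:

```latex
\dot{z} = (1+z)\,H_0 - H(z),
\qquad
H(z) = H_0\sqrt{\Omega_m (1+z)^3 + \Omega_\Lambda},
```

    so that over an observing interval $\Delta t$ the measurable shift is $\Delta z \approx \dot{z}\,\Delta t$. In the runaway dilaton models the high-redshift behaviour of this signal can deviate appreciably from the ΛCDM curve, which is what the E-ELT measurements would probe.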

  3. Evolution of the fine-structure constant in runaway dilaton models

    International Nuclear Information System (INIS)

    Martins, C.J.A.P.; Vielzeuf, P.E.; Martinelli, M.; Calabrese, E.; Pandolfi, S.

    2015-01-01

    We study the detailed evolution of the fine-structure constant α in the string-inspired runaway dilaton class of models of Damour, Piazza and Veneziano. We provide constraints on this scenario using the most recent α measurements and discuss ways to distinguish it from alternative models for varying α. For model parameters which saturate bounds from current observations, the redshift drift signal can differ considerably from that of the canonical ΛCDM paradigm at high redshifts. Measurements of this signal by the forthcoming European Extremely Large Telescope (E-ELT), together with more sensitive α measurements, will thus dramatically constrain these scenarios

  4. Simulation of fine organic aerosols in the western Mediterranean area during the ChArMEx 2013 summer campaign

    Science.gov (United States)

    Cholakian, Arineh; Beekmann, Matthias; Colette, Augustin; Coll, Isabelle; Siour, Guillaume; Sciare, Jean; Marchand, Nicolas; Couvidat, Florian; Pey, Jorge; Gros, Valerie; Sauvage, Stéphane; Michoud, Vincent; Sellegri, Karine; Colomb, Aurélie; Sartelet, Karine; Langley DeWitt, Helen; Elser, Miriam; Prévot, André S. H.; Szidat, Sonke; Dulac, François

    2018-05-01

    The simulation of fine organic aerosols with CTMs (chemistry-transport models) in the western Mediterranean basin has not been studied until recently. The ChArMEx (the Chemistry-Aerosol Mediterranean Experiment) SOP 1b (Special Observation Period 1b) intensive field campaign in summer of 2013 gathered a large and comprehensive data set of observations, allowing the study of different aspects of the Mediterranean atmosphere including the formation of organic aerosols (OAs) in 3-D models. In this study, we used the CHIMERE CTM to perform simulations for the duration of the SAFMED (Secondary Aerosol Formation in the MEDiterranean) period (July to August 2013) of this campaign. In particular, we evaluated four schemes for the simulation of OA, including the CHIMERE standard scheme, the VBS (volatility basis set) standard scheme with two parameterizations including aging of biogenic secondary OA, and a modified version of the VBS scheme which includes fragmentation and formation of nonvolatile OA. The results from these four schemes are compared to observations at two stations in the western Mediterranean basin, located on Ersa, Cap Corse (Corsica, France), and at Cap Es Pinar (Mallorca, Spain). These observations include OA mass concentration, PMF (positive matrix factorization) results of different OA fractions, and 14C observations showing the fossil or nonfossil origins of carbonaceous particles. Because of the complex orography of the Ersa site, an original method for calculating an orographic representativeness error (ORE) has been developed. It is concluded that the modified VBS scheme is close to observations in all three aspects mentioned above; the standard VBS scheme without BSOA (biogenic secondary organic aerosol) aging also has a satisfactory performance in simulating the mass concentration of OA, but not for the source origin analysis comparisons. In addition, the OA sources over the western Mediterranean basin are explored. OA shows a major biogenic

  5. PROPERTIES AND MODELING OF UNRESOLVED FINE STRUCTURE LOOPS OBSERVED IN THE SOLAR TRANSITION REGION BY IRIS

    Energy Technology Data Exchange (ETDEWEB)

    Brooks, David H. [College of Science, George Mason University, 4400 University Drive, Fairfax, VA 22030 (United States); Reep, Jeffrey W.; Warren, Harry P. [Space Science Division, Naval Research Laboratory, Washington, DC 20375 (United States)

    2016-08-01

    Recent observations from the Interface Region Imaging Spectrograph ( IRIS ) have discovered a new class of numerous low-lying dynamic loop structures, and it has been argued that they are the long-postulated unresolved fine structures (UFSs) that dominate the emission of the solar transition region. In this letter, we combine IRIS measurements of the properties of a sample of 108 UFSs (intensities, lengths, widths, lifetimes) with one-dimensional non-equilibrium ionization simulations, using the HYDRAD hydrodynamic model to examine whether the UFSs are now truly spatially resolved in the sense of being individual structures rather than being composed of multiple magnetic threads. We find that a simulation of an impulsively heated single strand can reproduce most of the observed properties, suggesting that the UFSs may be resolved, and the distribution of UFS widths implies that they are structured on a spatial scale of 133 km on average. Spatial scales of a few hundred kilometers appear to be typical for a range of chromospheric and coronal structures, and we conjecture that this could be an important clue for understanding the coronal heating process.

  6. An open, object-based modeling approach for simulating subsurface heterogeneity

    Science.gov (United States)

    Bennett, J.; Ross, M.; Haslauer, C. P.; Cirpka, O. A.

    2017-12-01

    Characterization of subsurface heterogeneity with respect to hydraulic and geochemical properties is critical in hydrogeology as their spatial distribution controls groundwater flow and solute transport. Many approaches of characterizing subsurface heterogeneity do not account for well-established geological concepts about the deposition of the aquifer materials; those that do (i.e. process-based methods) often require forcing parameters that are difficult to derive from site observations. We have developed a new method for simulating subsurface heterogeneity that honors concepts of sequence stratigraphy, resolves fine-scale heterogeneity and anisotropy of distributed parameters, and resembles observed sedimentary deposits. The method implements a multi-scale hierarchical facies modeling framework based on architectural element analysis, with larger features composed of smaller sub-units. The Hydrogeological Virtual Reality simulator (HYVR) simulates distributed parameter models using an object-based approach. Input parameters are derived from observations of stratigraphic morphology in sequence type-sections. Simulation outputs can be used for generic simulations of groundwater flow and solute transport, and for the generation of three-dimensional training images needed in applications of multiple-point geostatistics. The HYVR algorithm is flexible and easy to customize. The algorithm was written in the open-source programming language Python, and is intended to form a code base for hydrogeological researchers, as well as a platform that can be further developed to suit investigators' individual needs. This presentation will encompass the conceptual background and computational methods of the HYVR algorithm, the derivation of input parameters from site characterization, and the results of groundwater flow and solute transport simulations in different depositional settings.
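
    A generic object-based sketch of the core idea (standard-library Python, not the HYVR API; grid sizes and facies codes are hypothetical): stamp parameterized geobodies into a background grid, with later objects eroding earlier ones, as in hierarchical architectural-element modeling.

```python
# Generic object-based facies sketch (not the HYVR API): stamp elliptical
# "geobodies" into a background grid; later objects overwrite (erode)
# earlier ones. All dimensions and facies codes are hypothetical.
import random

def stamp_ellipse(grid, cx, cz, rx, rz, facies):
    """Assign a facies code to every cell inside an ellipse."""
    for z in range(len(grid)):
        for x in range(len(grid[0])):
            if ((x - cx) / rx) ** 2 + ((z - cz) / rz) ** 2 <= 1.0:
                grid[z][x] = facies

nx, nz = 40, 12
grid = [[0] * nx for _ in range(nz)]   # facies 0 = background matrix
random.seed(1)
for facies in (1, 2, 3):               # e.g. scour fill, cross-bedded sand, gravel sheet
    stamp_ellipse(grid, random.randrange(nx), random.randrange(nz),
                  random.randint(4, 10), random.randint(1, 3), facies)

for row in grid:                       # crude cross-section printout
    print("".join(str(c) for c in row))
```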

  7. The influence of model spatial resolution on simulated ozone and fine particulate matter for Europe: implications for health impact assessments

    Science.gov (United States)

    Fenech, Sara; Doherty, Ruth M.; Heaviside, Clare; Vardoulakis, Sotiris; Macintyre, Helen L.; O'Connor, Fiona M.

    2018-04-01

    We examine the impact of model horizontal resolution on simulated concentrations of surface ozone (O3) and particulate matter less than 2.5 µm in diameter (PM2.5), and the associated health impacts over Europe, using the HadGEM3-UKCA chemistry-climate model to simulate pollutant concentrations at a coarse (~140 km) and a finer (~50 km) resolution. The attributable fraction (AF) of total mortality due to long-term exposure to warm-season daily maximum 8 h running mean (MDA8) O3 and annual-average PM2.5 concentrations is then calculated for each European country using pollutant concentrations simulated at each resolution. Our results highlight a seasonal variation in simulated O3 and PM2.5 differences between the two model resolutions in Europe. Compared to the finer resolution results, simulated European O3 concentrations at the coarse resolution are higher on average in winter and spring (~10 % and ~6 %, respectively), but lower in summer and autumn (~-1 % and ~-4 %, respectively). These differences may be partly explained by differences in nitrogen dioxide (NO2) concentrations simulated at the two resolutions. Compared to O3, we find the opposite seasonality in simulated PM2.5 differences between the two resolutions. In winter and spring, simulated PM2.5 concentrations are lower at the coarse than at the finer resolution (~-8 % and ~-6 %, respectively) but higher in summer and autumn (~29 % and ~8 %, respectively). Simulated PM2.5 differences are also mostly related to differences in convective rainfall between the two resolutions for all seasons. These differences between the two resolutions exhibit clear spatial patterns for both pollutants that vary by season, and exert a strong influence on country-to-country variations in estimated AF for the two resolutions. Warm-season MDA8 O3 levels are higher in most of southern Europe, but lower in areas of northern and eastern Europe when
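
    The attributable-fraction calculation can be sketched with the common log-linear concentration-response form; the coefficient and concentration increments below are hypothetical placeholders, not the study's values.

```python
# Health-impact sketch: attributable fraction (AF) of mortality from a
# log-linear concentration-response function. The beta coefficient and
# PM2.5 increments here are hypothetical, not the study's.
import math

def attributable_fraction(beta: float, delta_conc: float) -> float:
    """AF = (RR - 1)/RR with relative risk RR = exp(beta * delta_conc)."""
    rr = math.exp(beta * delta_conc)
    return (rr - 1.0) / rr

# Hypothetical beta per ug/m3 of annual-mean PM2.5 above a counterfactual level
for conc in (5.0, 10.0, 20.0):
    af = attributable_fraction(beta=0.006, delta_conc=conc)
    print(f"delta PM2.5 = {conc:4.1f} ug/m3 -> AF = {100 * af:.1f} %")
```

    Note that (RR - 1)/RR is algebraically identical to 1 - exp(-beta * delta_conc), the other form commonly quoted in health impact assessments.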

  8. A new simulation model for assessing aircraft emergency evacuation considering passenger physical characteristics

    International Nuclear Information System (INIS)

    Liu, Yu; Wang, Weijie; Huang, Hong-Zhong; Li, Yanfeng; Yang, Yuanjian

    2014-01-01

    Conducting a real aircraft evacuation trial is often unaffordable, as it is extremely expensive and may cause severe injury to participants. Simulation models have been used as an alternative in recent years to overcome these issues. This paper proposes a new simulation model for emergency evacuation of civil aircraft. Its unique features and advantages over existing models are twofold: (1) passengers' critical physical characteristics, e.g. waist size, gender, age, and disabilities, which impact the movement and egress time of individual evacuees from a statistical viewpoint, are taken into account in the new model; (2) improvements are made to enhance the accuracy of the simulation model in three respects. First, the staggered-mesh discretization method, together with the agent-based approach, is utilized to simulate movements of individual passengers in an emergency evacuation process. Second, each node discretized to represent cabin space in the new model can contain more than one passenger if they are moving in the same direction. Finally, each individual passenger is able to change his/her evacuation route in real time based upon the distance from the current position to the target exit and the queue length. The effectiveness of the proposed simulation model is demonstrated on the Boeing 767-300 aircraft. - Highlights: • A new simulation model of aircraft emergency evacuation is developed. • Critical physical characteristics of passengers, e.g. waist size, gender, age, and disabilities, are taken into account. • An agent-based approach along with a multi-level fine network representation is used. • Passengers are able to change their evacuation routes in real time based upon distance and queue length
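
    A drastically simplified egress sketch, far simpler than the paper's staggered-mesh agent model (one-dimensional aisle, single exit, uniform passengers), showing how agent-based queueing is typically coded: each passenger advances one cell per tick only if the next cell is free.

```python
# Toy 1-D egress model (illustrative only): passengers step toward the exit
# at cell 0, one cell per tick if the next cell is free, so queueing delays
# emerge. Rear-most passengers are decided first, so nobody jumps into a
# cell vacated in the same tick.
def evacuate(positions):
    """positions: occupied aisle cells (exit at cell 0). Returns ticks to clear."""
    occupied = set(positions)
    ticks = 0
    while occupied:
        ticks += 1
        for cell in sorted(occupied, reverse=True):
            occupied.discard(cell)
            if cell == 0:
                continue                  # passenger at the exit leaves
            if (cell - 1) in occupied:
                occupied.add(cell)        # next cell still taken: wait
            else:
                occupied.add(cell - 1)    # advance one cell toward the exit
    return ticks

print("Egress ticks:", evacuate([1, 2, 3, 5]))
```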

  9. Numerical modelling of the dehydration of waste concrete fines : An attempt to close the recycling loop

    NARCIS (Netherlands)

    Teklay, Abraham; Vahidi, A.; Lotfi, Somayeh; Di Maio, F.; Rem, P.C.; Di Maio, F.; Lotfi, S.; Bakker, M.; Hu, M.; Vahidi, A.

    2017-01-01

    The ever-increasing interest in sustainable raw materials has spurred the quest for recycled materials that can be used as a partial or total replacement of fine fractions in the production of concrete. This paper demonstrates a modelling study of recycled concrete waste fines and the possibility of

  10. Fine reservoir structure modeling based upon 3D visualized stratigraphic correlation between horizontal wells: methodology and its application

    Science.gov (United States)

    Chenghua, Ou; Chaochun, Li; Siyuan, Huang; Sheng, James J.; Yuan, Xu

    2017-12-01

    As the platform-based horizontal well production mode has been widely applied in the petroleum industry, building a reliable fine reservoir structure model using horizontal well stratigraphic correlation has become very important. Horizontal wells usually extend between the upper and bottom boundaries of the target formation, with limited penetration points. Using these limited penetration points to conduct well deviation correction means the formation depth information obtained is not accurate, which makes it hard to build a fine structure model. To solve this problem, a method of fine reservoir structure modeling based on 3D visualized stratigraphic correlation among horizontal wells is proposed. This method increases the accuracy of the estimated depths of the penetration points, and can also effectively predict the top and bottom interfaces in the horizontal penetrating section. Moreover, this method greatly increases not only the number of depth data points available, but also their accuracy, which achieves the goal of building a reliable fine reservoir structure model from stratigraphic correlation among horizontal wells. Using this method, four 3D fine structure layer models have been successfully built for a shale gas field with platform-based horizontal well production mode. The shale gas field is located to the east of the Sichuan Basin, China; the successful application of the method has proven its feasibility and reliability.
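
    The depth-densification idea can be sketched as simple interpolation of a formation-top surface between penetration points along a cross-section (coordinates are hypothetical; the paper's 3D visualized correlation is far more elaborate).

```python
# Sketch: given points where horizontal wells penetrate the formation top
# (distance along section, depth), linearly interpolate the top surface
# between them. All coordinates are hypothetical.
def interp_surface(points, x):
    """points: (distance m, depth m) control points; x: query distance."""
    pts = sorted(points)
    if x <= pts[0][0]:
        return pts[0][1]          # clamp before the first control point
    if x >= pts[-1][0]:
        return pts[-1][1]         # clamp after the last control point
    for (x0, z0), (x1, z1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return z0 + t * (z1 - z0)

tops = [(0.0, 2310.0), (600.0, 2322.0), (1500.0, 2318.0)]
for x in (300.0, 1000.0):
    print(f"x = {x:6.1f} m -> interpolated top depth = {interp_surface(tops, x):.1f} m")
```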

  11. Explaining the spatiotemporal variation of fine particle number concentrations over Beijing and surrounding areas in an air quality model with aerosol microphysics

    International Nuclear Information System (INIS)

    Chen, Xueshun; Wang, Zifa; Li, Jie; Chen, Huansheng; Hu, Min; Yang, Wenyi; Wang, Zhe; Ge, Baozhu; Wang, Dawei

    2017-01-01

    In this study, a three-dimensional air quality model with detailed aerosol microphysics (NAQPMS + APM) was applied to simulate the fine particle number size distribution and to explain the spatiotemporal variation of fine particle number concentrations in different size ranges over Beijing and surrounding areas in the haze season (Jan 15 to Feb 13 in 2006). Comparison between observations and the simulation indicates that the model is able to reproduce the main features of the particle number size distribution. The high number concentration of total particles, up to 26600 cm−3 in observations and 39800 cm−3 in the simulation, indicates the severity of pollution in Beijing. We find that primary particles coated with secondary species and secondary particles together control the particle number size distribution. Secondary particles dominate particle number concentration in the nucleation mode. Primary and secondary particles together determine the temporal evolution and spatial pattern of particle number concentration in the Aitken mode. Primary particles dominate particle number concentration in the accumulation mode. Over Beijing and surrounding areas, secondary particles contribute at least 80% of particle number concentration in the nucleation mode but only 10–20% in the accumulation mode. Nucleation mode particles and accumulation mode particles are anti-phased with each other. Nucleation or primary emissions alone could not explain the formation of the particle number size distribution in Beijing. Nucleation has larger effects on ultrafine particles, while primary particle emissions are efficient in producing large particles in the accumulation mode. Reduction in primary particle emissions does not always lead to a decrease in the number concentration of ultrafine particles. Measures to reduce fine particle pollution in terms of particle number concentration may therefore differ from those addressing particle mass concentration.
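
    The modal structure discussed above (nucleation, Aitken, accumulation) is conventionally described as a sum of lognormal modes, dN/dlnD. The sketch below uses illustrative mode parameters, not NAQPMS + APM output.

```python
# Sketch: a multi-modal particle number size distribution as a sum of
# lognormal modes. Mode parameters are illustrative placeholders.
import math

def dN_dlnD(D, modes):
    """modes: list of (N_total cm^-3, median diameter nm, geometric std dev)."""
    total = 0.0
    for n, dg, sigma in modes:
        ln_sig = math.log(sigma)
        total += (n / (math.sqrt(2 * math.pi) * ln_sig)
                  * math.exp(-(math.log(D / dg) ** 2) / (2 * ln_sig ** 2)))
    return total

modes = [(9000, 15, 1.6),    # nucleation mode (hypothetical)
         (15000, 60, 1.8),   # Aitken mode (hypothetical)
         (2500, 200, 1.7)]   # accumulation mode (hypothetical)
for D in (10, 30, 100, 300):
    print(f"D = {D:3d} nm: dN/dlnD = {dN_dlnD(D, modes):8.1f} cm^-3")
```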

  12. Fine modeling of energy exchanges between buildings and urban atmosphere

    International Nuclear Information System (INIS)

    Daviau-Pellegrin, Noelie

    2016-01-01

    This thesis work concerns the effect of buildings on the urban atmosphere, and more precisely the energy exchanges that take place between these two systems. In order to model more finely the thermal effects of buildings on atmospheric flows simulated with the CFD software Code-Saturne, we couple this tool with the building model BuildSysPro. This library runs under Dymola and can generate matrices describing a building's thermal properties that can be used outside this software. To carry out the coupling, we use these matrices in a code that allows the building thermal calculations and the CFD to exchange their results. After a review of the physical phenomena and existing models, we explain the interactions between the atmosphere and urban elements, especially buildings. The latter affect air flows dynamically, as they act as obstacles, and thermally, through their surface temperatures. First, we analyse the data obtained from the measurement campaign EM2PAU, which we use to validate the coupled model. EM2PAU was carried out in Nantes in 2011 and represents a canyon street with two rows of four containers. Its distinctive feature lies in the simultaneous measurement of air and wall temperatures as well as wind speeds, with anemometers located on a 10 m high mast for the reference wind and at six locations in the canyon. This aims at studying the thermal influence of buildings on air flows. Numerical simulations of the air flows in EM2PAU are then carried out with different methods for calculating or imposing the surface temperature of each of the container walls. The first method consists in imposing their temperatures from the measurements: for each wall, we set the temperature to the surface temperature that was measured during the EM2PAU campaign. The second method involves imposing the outdoor air temperature that was measured at a given time to all the

  13. Modeling of meteorology, chemistry and aerosol for the 2017 Utah Winter Fine Particle Study

    Science.gov (United States)

    McKeen, S. A.; Angevine, W. M.; McDonald, B.; Ahmadov, R.; Franchin, A.; Middlebrook, A. M.; Fibiger, D. L.; McDuffie, E. E.; Womack, C.; Brown, S. S.; Moravek, A.; Murphy, J. G.; Trainer, M.

    2017-12-01

    The Utah Winter Fine Particle Study (UWFPS-17) field project took place during January and February of 2017 within the populated region of the Great Salt Lake, Utah. The study focused on understanding the meteorology and chemistry associated with the high particulate matter (PM) levels often observed near Salt Lake City during stable wintertime conditions. Detailed composition and meteorological observations were taken from the NOAA Twin-Otter aircraft and several surface sites during the study period, and extremely high aerosol loadings were encountered during two cold-pool episodes in the last two weeks of January. A clear understanding of the photochemical and aerosol processes leading to these high-PM events is still lacking. Here we present high spatiotemporal resolution simulations of meteorology, PM and chemistry over Utah from January 13 to February 1, 2017 using the WRF/Chem photochemical model. Correctly characterizing the meteorology is difficult due to the complex terrain and shallow inversion layers. We discuss the approach and limitations of the simulated meteorology, and evaluate low-level pollutant mixing using vertical profiles from missed airport approaches performed routinely by the NOAA Twin-Otter during each flight. Full photochemical simulations are calculated using NOx, ammonia and VOC emissions from the U.S. EPA NEI-2011 emissions inventory. Comparisons of the observed vertical column amounts of NOx, ammonia, aerosol nitrate and ammonium with model results show that the inventory estimates for ammonia emissions are low by a factor of four and NOx emissions are low by nearly a factor of two. The partitioning of both nitrate and NH3 between gas and particle phase depends strongly on the NH3 and NOx emissions input to the model and on the calculated NOx-to-nitrate conversion rates. These rates are underestimated by gas-phase chemistry alone, even though surface snow albedo increases photolysis rates by nearly a factor of two. Several additional conversion
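
    The inventory-scaling step described above can be sketched numerically: a first-guess emission multiplier is the mean ratio of observed to modeled column amounts. The function and all numbers below are hypothetical illustrations, not UWFPS data.

```python
# Toy sketch (not the UWFPS analysis): infer emission scaling factors
# from ratios of observed to modeled vertical column amounts.
# All column values are hypothetical, in arbitrary units.

def emission_scale(observed, modeled):
    """Mean observed/modeled column ratio, used as a first-guess
    multiplier for the corresponding emission inventory."""
    ratios = [o / m for o, m in zip(observed, modeled)]
    return sum(ratios) / len(ratios)

# Hypothetical column amounts chosen to mirror the abstract's factors
nh3_obs, nh3_mod = [8.0, 7.6, 8.4], [2.0, 1.9, 2.1]
nox_obs, nox_mod = [4.2, 3.8, 4.0], [2.1, 1.9, 2.0]

nh3_factor = emission_scale(nh3_obs, nh3_mod)   # ammonia low by ~4x
nox_factor = emission_scale(nox_obs, nox_mod)   # NOx low by ~2x
```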

  14. Predictive Modelling to Identify Near-Shore, Fine-Scale Seabird Distributions during the Breeding Season.

    Science.gov (United States)

    Warwick-Evans, Victoria C; Atkinson, Philip W; Robinson, Leonie A; Green, Jonathan A

    2016-01-01

    During the breeding season seabirds are constrained to coastal areas and are restricted in their movements, spending much of their time in near-shore waters either loafing or foraging. However, in using these areas they may be threatened by anthropogenic activities such as fishing, watersports and coastal developments including marine renewable energy installations. Although many studies describe large-scale interactions between seabirds and the environment, the drivers behind near-shore, fine-scale distributions are not well understood. For example, Alderney is an important breeding ground for many species of seabird and has a diversity of human uses of the marine environment, thus providing an ideal location to investigate the near-shore fine-scale interactions between seabirds and the environment. We used vantage point observations of seabird distribution, collected during the 2013 breeding season, in order to identify and quantify some of the environmental variables affecting the near-shore, fine-scale distribution of seabirds in Alderney's coastal waters. We validate the models with observation data collected in 2014 and show that water depth, distance to the intertidal zone, and distance to the nearest seabird nest are key predictors of the distribution of Alderney's seabirds. AUC values for each species suggest that these models perform well, although the model for shags performed better than those for auks and gulls. While further unexplained underlying localised variation in the environmental conditions will undoubtedly affect the fine-scale distribution of seabirds in near-shore waters, we demonstrate the potential of this approach in marine planning and decision making.

  15. Predictive Modelling to Identify Near-Shore, Fine-Scale Seabird Distributions during the Breeding Season.

    Directory of Open Access Journals (Sweden)

    Victoria C Warwick-Evans

    Full Text Available During the breeding season seabirds are constrained to coastal areas and are restricted in their movements, spending much of their time in near-shore waters either loafing or foraging. However, in using these areas they may be threatened by anthropogenic activities such as fishing, watersports and coastal developments including marine renewable energy installations. Although many studies describe large-scale interactions between seabirds and the environment, the drivers behind near-shore, fine-scale distributions are not well understood. For example, Alderney is an important breeding ground for many species of seabird and has a diversity of human uses of the marine environment, thus providing an ideal location to investigate the near-shore fine-scale interactions between seabirds and the environment. We used vantage point observations of seabird distribution, collected during the 2013 breeding season, in order to identify and quantify some of the environmental variables affecting the near-shore, fine-scale distribution of seabirds in Alderney's coastal waters. We validate the models with observation data collected in 2014 and show that water depth, distance to the intertidal zone, and distance to the nearest seabird nest are key predictors of the distribution of Alderney's seabirds. AUC values for each species suggest that these models perform well, although the model for shags performed better than those for auks and gulls. While further unexplained underlying localised variation in the environmental conditions will undoubtedly affect the fine-scale distribution of seabirds in near-shore waters, we demonstrate the potential of this approach in marine planning and decision making.
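
    The validation step described in this record scores the models' predictions against independent observations with AUC, the probability that a randomly chosen presence site outranks a randomly chosen absence site. A minimal, dependency-free sketch of that computation, on invented data rather than the study's:

```python
# Sketch of AUC scoring for a habitat-suitability model.
# Labels and suitability scores below are invented for illustration.

def auc(labels, scores):
    """Area under the ROC curve: probability that a random presence
    (label 1) outranks a random absence (label 0), ties counting 1/2."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical suitability scores at surveyed locations (1 = bird seen)
labels = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.65, 0.7, 0.2]
score = auc(labels, scores)   # 15 of 16 presence/absence pairs ranked correctly
```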

  16. Upscaled Lattice Boltzmann Method for Simulations of Flows in Heterogeneous Porous Media

    KAUST Repository

    Li, Jun; Brown, Donald

    2017-01-01

    upscaled LBM uses coarser grids to represent the average effects of the fine-grid simulations. In the upscaled LBM, each coarse grid represents a subdomain of the fine-grid discretization and the effective permeability with the reduced-order models
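
    The effective-permeability idea in this record can be illustrated with the classical layered-medium bounds: arithmetic averaging applies for flow along layers, harmonic averaging across them. This is a sketch of the general upscaling concept only, not the paper's reduced-order models.

```python
# Classical permeability upscaling bounds for a layered fine-grid block.
# The fine-grid permeabilities below are hypothetical values in mD.

def arithmetic_mean(ks):
    """Effective k for flow parallel to layering."""
    return sum(ks) / len(ks)

def harmonic_mean(ks):
    """Effective k for flow perpendicular to layering."""
    return len(ks) / sum(1.0 / k for k in ks)

fine_block = [100.0, 1.0, 100.0, 1.0]   # alternating coarse/fine layers
k_along = arithmetic_mean(fine_block)    # dominated by permeable layers
k_across = harmonic_mean(fine_block)     # dominated by tight layers
```

The strong anisotropy of the two averages (here roughly 50 vs 2) is why a single coarse cell needs an effective, direction-aware permeability rather than a naive average.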

  17. Numerical modeling of fine particle fractal aggregates in turbulent flow

    Directory of Open Access Journals (Sweden)

    Cao Feifeng

    2015-01-01

    Full Text Available A method for predicting fine particle transport in a turbulent flow is proposed, the interaction between particles and fluid is studied numerically, and fractal aggregates of fine particles are analyzed using the Taylor-expansion moment method. The paper provides a better understanding of fine-particle dynamics in the evolved flows.

  18. Sintering of Fine Particles in Suspension Plasma Sprayed Coatings

    Directory of Open Access Journals (Sweden)

    Leszek Latka

    2010-07-01

    Full Text Available Suspension plasma spraying is a process that enables the production of finely grained nanometric or submicrometric coatings. The suspensions are formulated with the use of fine powder particles in water or alcohol with some additives. Subsequently, the suspension is injected into the plasma jet and the liquid additives evaporate. The remaining fine solids are molten and subsequently agglomerate or remain solid, depending on their trajectory in the plasma jet. The coating’s microstructure results from these two groups of particles arriving on a substrate or previously deposited coating. Previous experimental studies carried out for plasma sprayed titanium oxide and hydroxyapatite coatings enabled us to observe either a finely grained microstructure or, when a different suspension injection mode was used, to distinguish two zones in the microstructure. These two zones correspond to the dense zone formed from well molten particles, and the agglomerated zone formed from fine solid particles that arrive on the substrate in a solid state. The present paper focuses on the experimental and theoretical analysis of the formation process of the agglomerated zone. The experimental section establishes the heat flux supplied to the coating during deposition. In order to achieve this, calorimetric measurements were made by applying experimental conditions simulating the real coatings’ growth. The heat flux was measured to be in the range from 0.08 to 0.5 MW/m2, depending on the experimental conditions. The theoretical section analyzes the sintering during the coating’s growth, which concerns the fine particles arriving on the substrate in the solid state. The models of volume, grain boundary and surface diffusion were analyzed and adapted to the size and chemistry of the grains, temperature and time scales corresponding to the suspension plasma spraying conditions. The model of surface diffusion was found to best describe the sintering during suspension
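
    Surface-diffusion analyses of the kind mentioned above typically build on Kuczynski-type two-sphere neck-growth laws, (x/a)^n = B·t with n = 7 for surface diffusion. A sketch with an arbitrary illustrative rate constant, not a value from the paper:

```python
# Kuczynski-type neck-growth scaling for surface-diffusion sintering.
# B is a hypothetical rate constant; real values depend on material,
# grain size and temperature.

def neck_ratio(t, B, n=7):
    """Neck-to-particle-radius ratio x/a after sintering time t,
    from (x/a)**n = B * t."""
    return (B * t) ** (1.0 / n)

B = 1e-4                        # illustrative rate constant, 1/s
short = neck_ratio(0.001, B)    # millisecond-scale dwell
long_ = neck_ratio(1.0, B)      # one second of sintering
```

The 1/7 exponent is the point of the law: a thousandfold increase in time grows the neck by only about a factor of 1000^(1/7) ≈ 2.7.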

  19. Simulation of groundwater flow in the glacial aquifer system of northeastern Wisconsin with variable model complexity

    Science.gov (United States)

    Juckem, Paul F.; Clark, Brian R.; Feinstein, Daniel T.

    2017-05-04

    The U.S. Geological Survey, National Water-Quality Assessment seeks to map estimated intrinsic susceptibility of the glacial aquifer system of the conterminous United States. Improved understanding of the hydrogeologic characteristics that explain spatial patterns of intrinsic susceptibility, commonly inferred from estimates of groundwater age distributions, is sought so that methods used for the estimation process are properly equipped. An important step beyond identifying relevant hydrogeologic datasets, such as glacial geology maps, is to evaluate how incorporation of these resources into process-based models using differing levels of detail could affect resulting simulations of groundwater age distributions and, thus, estimates of intrinsic susceptibility. This report describes the construction and calibration of three groundwater-flow models of northeastern Wisconsin that were developed with differing levels of complexity to provide a framework for subsequent evaluations of the effects of process-based model complexity on estimates of groundwater age distributions for withdrawal wells and streams. Preliminary assessments, which focused on the effects of model complexity on simulated water levels and base flows in the glacial aquifer system, illustrate that simulation of vertical gradients using multiple model layers improves simulated heads more in low-permeability units than in high-permeability units. Moreover, simulation of heterogeneous hydraulic conductivity fields in coarse-grained and some fine-grained glacial materials produced a larger improvement in simulated water levels in the glacial aquifer system compared with simulation of uniform hydraulic conductivity within zones. The relation between base flows and model complexity was less clear; however, it generally seemed to follow a similar pattern to that for water levels.
Although increased model complexity resulted in improved calibrations, future application of the models using simulated particle
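
    The particle-based groundwater-age idea referenced above reduces, for a back-of-envelope check, to summing advective travel times through layers (thickness × porosity / Darcy flux). All values below are hypothetical, not from the Wisconsin models:

```python
# Back-of-envelope advective travel time through layered glacial
# materials. Thicknesses, porosities and flux are illustrative only.

def travel_time_years(layers, q):
    """Sum of layer travel times.
    layers: list of (thickness_m, porosity); q: Darcy flux (m/yr)."""
    return sum(b * n / q for b, n in layers)

profile = [(10.0, 0.30),   # coarse outwash
           (5.0, 0.45)]    # fine-grained till, higher porosity
age = travel_time_years(profile, q=0.5)   # years to traverse the profile
```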

  20. On Basu's proposal: Fines affect bribes

    OpenAIRE

    Popov, Sergey V.

    2017-01-01

    I model the connection between the equilibrium bribe amount and the fines imposed on both bribe-taker and bribe-payer. I show that Basu's (2011) proposal to lower the fines imposed on bribe-payers in order to induce more whistleblowing and increase the probability of penalizing corrupt government officials might instead increase bribe amounts. Higher expected fines on bribe-takers will make them charge larger bribes; at the same time, lowering fines for bribe-paying might increase bribe-payer...
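
    A toy version of the incentive logic in this record (not Popov's actual model): a bribe is feasible when it covers the taker's expected fine and leaves the payer positive surplus, so raising the taker's fine raises the minimum viable bribe, while lowering the payer's fine widens the range of bribes the payer would accept.

```python
# Toy participation-constraint model of bribery (illustrative only).
# value: payer's gain from the corrupt favor; p_detect: detection
# probability; fines are levied on detection.

def bribe_range(value, p_detect, fine_taker, fine_payer):
    """(min, max) bribe both sides would accept, or None if no deal."""
    b_min = p_detect * fine_taker            # taker's participation bound
    b_max = value - p_detect * fine_payer    # payer's participation bound
    return (b_min, b_max) if b_min <= b_max else None

base = bribe_range(value=100, p_detect=0.1, fine_taker=200, fine_payer=100)
harsher_on_taker = bribe_range(100, 0.1, 400, 100)   # minimum bribe rises
```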

  1. Design base transient analysis using the real-time nuclear reactor simulator model

    International Nuclear Information System (INIS)

    Tien, K.K.; Yakura, S.J.; Morin, J.P.; Gregory, M.V.

    1987-01-01

    A real-time simulation model has been developed to describe the dynamic response of all major systems in a nuclear process reactor. The model consists of a detailed representation of all hydraulic components in the external coolant circulating loops consisting of piping, valves, pumps and heat exchangers. The reactor core is described by a three-dimensional neutron kinetics model with detailed representation of assembly coolant and moderator thermal hydraulics. The models have been developed to support a real-time training simulator; therefore, they reproduce system parameters characteristic of steady state normal operation with high precision. The system responses for postulated severe transients such as large pipe breaks, loss of pumping power, piping leaks, malfunctions in control rod insertion, and emergency injection of neutron absorber are calculated to be in good agreement with reference safety analyses. Restrictions were imposed by the requirement that the resulting code be able to run in real-time with sufficient spare time to allow interfacing with secondary systems and simulator hardware. Due to hardware set-up and real plant instrumentation, simplifications due to symmetry were not allowed. The resulting code represents a coarse-node engineering model in which the level of detail has been tailored to the available computing power of a present generation super-minicomputer. Results for several significant transients, as calculated by the real-time model, are compared both to actual plant data and to results generated by fine-mesh analysis codes

  2. Design base transient analysis using the real-time nuclear reactor simulator model

    International Nuclear Information System (INIS)

    Tien, K.K.; Yakura, S.J.; Morin, J.P.; Gregory, M.V.

    1987-01-01

    A real-time simulation model has been developed to describe the dynamic response of all major systems in a nuclear process reactor. The model consists of a detailed representation of all hydraulic components in the external coolant circulating loops consisting of piping, valves, pumps and heat exchangers. The reactor core is described by a three-dimensional neutron kinetics model with detailed representation of assembly coolant and moderator thermal hydraulics. The models have been developed to support a real-time training simulator; therefore, they reproduce system parameters characteristic of steady state normal operation with high precision. The system responses for postulated severe transients such as large pipe breaks, loss of pumping power, piping leaks, malfunctions in control rod insertion, and emergency injection of neutron absorber are calculated to be in good agreement with reference safety analyses. Restrictions were imposed by the requirement that the resulting code be able to run in real-time with sufficient spare time to allow interfacing with secondary systems and simulator hardware. Due to hardware set-up and real plant instrumentation, simplifications due to symmetry were not allowed. The resulting code represents a coarse-node engineering model in which the level of detail has been tailored to the available computing power of a present generation super-minicomputer. Results for several significant transients, as calculated by the real-time model, are compared both to actual plant data and to results generated by fine-mesh analysis codes

  3. A Study on Bipedal and Mobile Robot Behavior Through Modeling and Simulation

    Directory of Open Access Journals (Sweden)

    Nirmala Nirmala

    2015-05-01

    Full Text Available The purpose of this work is to study and analyze mobile robot behavior. In doing so, a framework is adopted and developed for mobile and bipedal robots. The robots are designed, built, and run, proceeding from the development of the mechanical structure through electronics and control integration to the control software application. The behavior of these robots is difficult to observe and analyze qualitatively. To evaluate the design and behavior quality, modeling and simulation of the robot structure and its task capability are performed. The stepwise procedure for studying robot behavior is explained. Behavior case studies were carried out on the bipedal robots, transporter robot and Autonomous Guided Vehicle (AGV) developed at our institution. The experiments were conducted by adjusting the robots' dynamic properties and/or surrounding environment. Validation is performed by comparing the simulation results and the real robot execution. The simulation gives a more idealistic behavior execution than a realistic one. Adjustments were performed to fine-tune the simulation's parameters to provide a more realistic performance.

  4. A Nonlinear Transmission Line Model of the Cochlea With Temporal Integration Accounts for Duration Effects in Threshold Fine Structure

    DEFF Research Database (Denmark)

    Verhey, Jesko L.; Mauermann, Manfred; Epp, Bastian

    2017-01-01

    For normal-hearing listeners, auditory pure-tone thresholds in quiet often show quasi-periodic fluctuations when measured with a high frequency resolution, referred to as threshold fine structure. Threshold fine structure is dependent on the stimulus duration, with smaller fluctuations for short...... than for long signals. The present study demonstrates how this effect can be captured by a nonlinear and active model of the cochlea in combination with a temporal integration stage. Since this cochlear model also accounts for fine structure and connected level-dependent effects, it is superior...

  5. A general coarse and fine mesh solution scheme for fluid flow modeling in VHTRs

    International Nuclear Information System (INIS)

    Clifford, I; Ivanov, K; Avramova, M.

    2011-01-01

    Coarse mesh Computational Fluid Dynamics (CFD) methods offer several advantages over traditional coarse mesh methods for the safety analysis of helium-cooled graphite-moderated Very High Temperature Reactors (VHTRs). This relatively new approach opens up the possibility for system-wide calculations to be carried out using a consistent set of field equations throughout the calculation, and subsequently the possibility for hybrid coarse/fine mesh or hierarchical multi-scale CFD simulations. To date, a consistent methodology for hierarchical multi-scale CFD has not been developed. This paper describes work carried out in the initial development of a multi-scale CFD solver intended to be used for the safety analysis of VHTRs. The VHTR is considered on any scale to consist of a homogenized two-phase mixture of fluid and stationary solid material of varying void fraction. A consistent set of conservation equations was selected such that they reduce to the single-phase conservation equations for the case where void fraction is unity. The discretization of the conservation equations uses a new pressure interpolation scheme capable of capturing the discontinuity in pressure across relatively large changes in void fraction. Based on this, a test solver was developed which supports fully unstructured meshes for three-dimensional time-dependent compressible flow problems, including buoyancy effects. For typical VHTR flow phenomena the new solver shows promise as an effective candidate for predicting the flow behavior on multiple scales, as it is capable of modeling both fine mesh single phase flows as well as coarse mesh flows in homogenized regions containing both fluid and solid materials. (author)

  6. Simulation of windblown dust transport from a mine tailings impoundment using a computational fluid dynamics model

    Science.gov (United States)

    Stovern, Michael; Felix, Omar; Csavina, Janae; Rine, Kyle P.; Russell, MacKenzie R.; Jones, Robert M.; King, Matt; Betterton, Eric A.; Sáez, A. Eduardo

    2014-01-01

    Mining operations are potential sources of airborne particulate metal and metalloid contaminants through both direct smelter emissions and wind erosion of mine tailings. The warmer, drier conditions predicted for the Southwestern US by climate models may make contaminated atmospheric dust and aerosols increasingly important, due to potential deleterious effects on human health and ecology. Dust emissions and dispersion of dust and aerosol from the Iron King Mine tailings in Dewey-Humboldt, Arizona, a Superfund site, are currently being investigated through in situ field measurements and computational fluid dynamics modeling. These tailings are heavily contaminated with lead and arsenic. Using a computational fluid dynamics model, we model dust transport from the mine tailings to the surrounding region. The model includes gaseous plume dispersion to simulate the transport of the fine aerosols, while individual particle transport is used to track the trajectories of larger particles and to monitor their deposition locations. In order to improve the accuracy of the dust transport simulations, both regional topographical features and local weather patterns have been incorporated into the model simulations. Results show that local topography and wind velocity profiles are the major factors that control deposition. PMID:25621085

  7. Simulation of windblown dust transport from a mine tailings impoundment using a computational fluid dynamics model.

    Science.gov (United States)

    Stovern, Michael; Felix, Omar; Csavina, Janae; Rine, Kyle P; Russell, MacKenzie R; Jones, Robert M; King, Matt; Betterton, Eric A; Sáez, A Eduardo

    2014-09-01

    Mining operations are potential sources of airborne particulate metal and metalloid contaminants through both direct smelter emissions and wind erosion of mine tailings. The warmer, drier conditions predicted for the Southwestern US by climate models may make contaminated atmospheric dust and aerosols increasingly important, due to potential deleterious effects on human health and ecology. Dust emissions and dispersion of dust and aerosol from the Iron King Mine tailings in Dewey-Humboldt, Arizona, a Superfund site, are currently being investigated through in situ field measurements and computational fluid dynamics modeling. These tailings are heavily contaminated with lead and arsenic. Using a computational fluid dynamics model, we model dust transport from the mine tailings to the surrounding region. The model includes gaseous plume dispersion to simulate the transport of the fine aerosols, while individual particle transport is used to track the trajectories of larger particles and to monitor their deposition locations. In order to improve the accuracy of the dust transport simulations, both regional topographical features and local weather patterns have been incorporated into the model simulations. Results show that local topography and wind velocity profiles are the major factors that control deposition.
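
    The "individual particle transport" component described above amounts to integrating particle motion under gravity and drag. A minimal explicit-Euler sketch with Stokes drag in still air, using illustrative parameters rather than those of the Iron King simulations:

```python
# Explicit Euler integration of a falling dust grain with Stokes drag.
# Material properties are generic (quartz-like density, standard air);
# they are not values from the paper.

def settle(d, rho_p=2650.0, mu=1.8e-5, g=9.81, dt=1e-4, t_end=0.5):
    """Return fall speed (m/s) of a sphere of diameter d (m) after
    t_end seconds, starting from rest, in still air."""
    tau = rho_p * d * d / (18.0 * mu)   # Stokes relaxation time (s)
    v = 0.0
    t = 0.0
    while t < t_end:
        v += dt * (g - v / tau)         # gravity minus drag per unit mass
        t += dt
    return v

v50 = settle(50e-6)   # a 50-micron grain settles at ~0.2 m/s
v5 = settle(5e-6)     # a 5-micron aerosol settles ~100x slower
```

The v ∝ d² terminal-velocity scaling is why the model can treat fine aerosols as a gaseous plume while tracking only the larger particles individually.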

  8. Simulation of Near-Edge X-ray Absorption Fine Structure with Time-Dependent Equation-of-Motion Coupled-Cluster Theory.

    Science.gov (United States)

    Nascimento, Daniel R; DePrince, A Eugene

    2017-07-06

    An explicitly time-dependent (TD) approach to equation-of-motion (EOM) coupled-cluster theory with single and double excitations (CCSD) is implemented for simulating near-edge X-ray absorption fine structure in molecular systems. The TD-EOM-CCSD absorption line shape function is given by the Fourier transform of the CCSD dipole autocorrelation function. We represent this transform by its Padé approximant, which provides converged spectra in much shorter simulation times than are required by the Fourier form. The result is a powerful framework for the blackbox simulation of broadband absorption spectra. K-edge X-ray absorption spectra for carbon, nitrogen, and oxygen in several small molecules are obtained from the real part of the absorption line shape function and are compared with experiment. The computed and experimentally obtained spectra are in good agreement; the mean unsigned error in the predicted peak positions is only 1.2 eV. We also explore the spectral signatures of protonation in these molecules.
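
    The core relationship the paper exploits, that the absorption line shape is the Fourier transform of the dipole autocorrelation function, can be illustrated with a plain discrete sum: for an exponentially damped oscillation the transform is a Lorentzian peaked at the transition frequency. The frequencies and damping below are arbitrary model values, and the Padé acceleration itself is not shown.

```python
# Discrete Fourier transform of a damped model autocorrelation
# C(t) = exp(-gamma*t) * exp(-i*w0*t), giving a Lorentzian line shape.
import cmath

def lineshape(omega, w0=2.0, gamma=0.1, dt=0.05, n=4000):
    """Re of integral_0^inf e^{i*omega*t} C(t) dt, as a left Riemann sum."""
    acc = 0.0
    for k in range(n):
        t = k * dt
        acc += (cmath.exp(1j * omega * t) *
                cmath.exp(-gamma * t - 1j * w0 * t)).real * dt
    return acc

peak = lineshape(2.0)   # on resonance: height ~ 1/gamma
wing = lineshape(3.0)   # one unit off resonance: small tail
```

The slow 1/gamma convergence of this plain sum is exactly what motivates the Padé representation: the rational approximant reaches a converged spectrum from a much shorter propagation time.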

  9. Multiscale eddy simulation for moist atmospheric convection: Preliminary investigation

    Energy Technology Data Exchange (ETDEWEB)

    Stechmann, Samuel N., E-mail: stechmann@wisc.edu [Department of Mathematics, University of Wisconsin-Madison (United States); Department of Atmospheric and Oceanic Sciences, University of Wisconsin-Madison (United States)

    2014-08-15

    A multiscale computational framework is designed for simulating atmospheric convection and clouds. In this multiscale framework, large eddy simulation (LES) is used to model the coarse scales of 100 m and larger, and a stochastic, one-dimensional turbulence (ODT) model is used to represent the fine scales of 100 m and smaller. Coupled and evolving together, these two components provide a multiscale eddy simulation (MES). Through its fine-scale turbulence and moist thermodynamics, MES allows coarse grid cells to be partially cloudy and to encompass cloudy–clear air mixing on scales down to 1 m; in contrast, in typical LES such fine-scale processes are not represented or are parameterized using bulk deterministic closures. To illustrate MES and investigate its multiscale dynamics, a shallow cumulus cloud field is simulated. The fine-scale variability is seen to take a plausible form, with partially cloudy grid cells prominent near cloud edges and cloud top. From earlier theoretical work, this mixing of cloudy and clear air is believed to have an important impact on buoyancy. However, contrary to expectations based on earlier theoretical studies, the mean statistics of the bulk cloud field are essentially the same in MES and LES; possible reasons for this are discussed, including possible limitations in the present formulation of MES. One difference between LES and MES is seen in the coarse-scale turbulent kinetic energy, which appears to grow slowly in time due to incoherent stochastic fluctuations in the buoyancy. This and other considerations suggest the need for some type of spatial and/or temporal filtering to attenuate undersampling of the stochastic fine-scale processes.

  10. Multiscale eddy simulation for moist atmospheric convection: Preliminary investigation

    International Nuclear Information System (INIS)

    Stechmann, Samuel N.

    2014-01-01

    A multiscale computational framework is designed for simulating atmospheric convection and clouds. In this multiscale framework, large eddy simulation (LES) is used to model the coarse scales of 100 m and larger, and a stochastic, one-dimensional turbulence (ODT) model is used to represent the fine scales of 100 m and smaller. Coupled and evolving together, these two components provide a multiscale eddy simulation (MES). Through its fine-scale turbulence and moist thermodynamics, MES allows coarse grid cells to be partially cloudy and to encompass cloudy–clear air mixing on scales down to 1 m; in contrast, in typical LES such fine-scale processes are not represented or are parameterized using bulk deterministic closures. To illustrate MES and investigate its multiscale dynamics, a shallow cumulus cloud field is simulated. The fine-scale variability is seen to take a plausible form, with partially cloudy grid cells prominent near cloud edges and cloud top. From earlier theoretical work, this mixing of cloudy and clear air is believed to have an important impact on buoyancy. However, contrary to expectations based on earlier theoretical studies, the mean statistics of the bulk cloud field are essentially the same in MES and LES; possible reasons for this are discussed, including possible limitations in the present formulation of MES. One difference between LES and MES is seen in the coarse-scale turbulent kinetic energy, which appears to grow slowly in time due to incoherent stochastic fluctuations in the buoyancy. This and other considerations suggest the need for some type of spatial and/or temporal filtering to attenuate undersampling of the stochastic fine-scale processes

  11. Modeling the potential area of occupancy at fine resolution may reduce uncertainty in species range estimates

    DEFF Research Database (Denmark)

    Jiménez-Alfaro, Borja; Draper, David; Nogues, David Bravo

    2012-01-01

    and maximum entropy modeling to assess whether different sampling (expert versus systematic surveys) may affect AOO estimates based on habitat suitability maps, and the differences between such measurements and traditional coarse-grid methods. Fine-scale models performed robustly and were not influenced...... by survey protocols, providing similar habitat suitability outputs with high spatial agreement. Model-based estimates of potential AOO were significantly smaller than AOO measures obtained from coarse-scale grids, even if the first were obtained from conservative thresholds based on the Minimal Predicted...... permit comparable measures among species. We conclude that estimates of AOO based on fine-resolution distribution models are more robust tools for risk assessment than traditional systems, allowing a better understanding of species ranges at habitat level....

  12. Simulation of water movement and isoproturon behaviour in a heavy clay soil using the MACRO model

    Directory of Open Access Journals (Sweden)

    T. J. Besien

    1997-01-01

    Full Text Available In this paper, the dual-porosity MACRO model has been used to investigate methods of reducing leaching of isoproturon from a structured heavy clay soil. The MACRO model was applied to a pesticide leaching data-set generated from a plot scale experiment on a heavy clay soil at the Oxford University Farm, Wytham, England. The field drain was found to be the most important outflow from the plot in terms of pesticide removal. Therefore, this modelling exercise concentrated on simulating field drain flow. With calibration of field-saturated and micropore saturated hydraulic conductivity, the drain flow hydrographs were simulated during extended periods of above average rainfall, with both the hydrograph shape and peak flows agreeing well. Over the whole field season, the observed drain flow water budget was well simulated. However, the first and second drain flow events after pesticide application were not simulated satisfactorily. This is believed to be due to a poor simulation of evapotranspiration during a period of low rainfall around the pesticide application day. Apart from an initial rapid drop in the observed isoproturon soil residue, the model simulated isoproturon residues during the 100 days after pesticide application reasonably well. Finally, the calibrated model was used to show that changes in agricultural practice (deep ploughing, creating fine consolidated seed beds and organic matter applications could potentially reduce pesticide leaching to surface waters by up to 60%.
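
    The isoproturon residue decline discussed above is commonly approximated by first-order decay, C(t) = C0 · exp(−ln 2 · t / DT50). The DT50 below is a round illustrative number, not a value fitted in the paper:

```python
# First-order pesticide dissipation sketch (illustrative DT50 only).
import math

def residue(c0, t_days, dt50):
    """Soil residue after t_days under first-order kinetics,
    where dt50 is the half-life in days."""
    return c0 * math.exp(-math.log(2.0) * t_days / dt50)

c100 = residue(100.0, 100.0, dt50=30.0)   # ~10% of applied after 100 days
```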

  13. Simulation modeling and arena

    CERN Document Server

    Rossetti, Manuel D

    2015-01-01

    Emphasizes a hands-on approach to learning statistical analysis and model building through the use of comprehensive examples, problems sets, and software applications With a unique blend of theory and applications, Simulation Modeling and Arena®, Second Edition integrates coverage of statistical analysis and model building to emphasize the importance of both topics in simulation. Featuring introductory coverage on how simulation works and why it matters, the Second Edition expands coverage on static simulation and the applications of spreadsheets to perform simulation. The new edition als

  14. Model of the fine-grain component of martian soil based on Viking lander data

    International Nuclear Information System (INIS)

    Nussinov, M.D.; Chernyak, Y.B.; Ettinger, J.L.

    1978-01-01

    A model of the fine-grain component of the Martian soil is proposed. The model is based on well-known physical phenomena, and enables an explanation of the evolution of the gases released in the GEX (gas exchange experiments) and GCMS (gas chromatography-mass spectrometer experiments) of the Viking landers. (author)

  15. Constraining spatial variations of the fine-structure constant in symmetron models

    Directory of Open Access Journals (Sweden)

    A.M.M. Pinho

    2017-06-01

    We introduce a methodology to test models with spatial variations of the fine-structure constant α, based on the calculation of the angular power spectrum of these measurements. This methodology enables comparisons of observations and theoretical models through their predictions on the statistics of the α variation. Here we apply it to the case of symmetron models. We find no indications of deviations from the standard behavior, with current data providing an upper limit to the strength of the symmetron coupling to gravity (log β² < −0.9) when this is the only free parameter, but unable to constrain the model when the symmetry-breaking scale factor a_SSB is also free to vary.

  16. Revisiting fine-tuning in the MSSM

    Energy Technology Data Exchange (ETDEWEB)

    Ross, Graham G. [Oxford Univ. (United Kingdom). Rudolf Peierls Centre for Theoretical Physics; Schmidt-Hoberg, Kai [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Staub, Florian [Karlsruher Institut fuer Technologie (KIT), Karlsruhe (Germany). Inst. fuer Theoretische Physik; Karlsruher Institut fuer Technologie (KIT), Eggenstein-Leopoldshafen (Germany). Inst. fuer Experimentelle Kernphysik

    2017-03-15

    We evaluate the amount of fine-tuning in constrained versions of the minimal supersymmetric standard model (MSSM), with different boundary conditions at the GUT scale. Specifically we study the fully constrained version as well as the cases of non-universal Higgs and gaugino masses. We allow for the presence of additional non-holomorphic soft-terms which we show further relax the fine-tuning. Of particular importance is the possibility of a Higgsino mass term and we discuss possible origins for such a term in UV complete models. We point out that loop corrections typically lead to a reduction in the fine-tuning by a factor of about two compared to the estimate at tree-level, which has been overlooked in many recent works. Taking these loop corrections into account, we discuss the impact of current limits from SUSY searches and dark matter on the fine-tuning. Contrary to common lore, we find that the MSSM fine-tuning can be as small as 10 while remaining consistent with all experimental constraints. If, in addition, the dark matter abundance is fully explained by the neutralino LSP, the fine-tuning can still be as low as ∼20 in the presence of additional non-holomorphic soft-terms. We also discuss future prospects of these models and find that the MSSM will remain natural even in the case of a non-discovery in the foreseeable future.

  18. Intelligent simulation of aquatic environment economic policy coupled ABM and SD models.

    Science.gov (United States)

    Wang, Huihui; Zhang, Jiarui; Zeng, Weihua

    2018-03-15

    Rapid urbanization and population growth have resulted in serious water shortage and pollution of the aquatic environment, which are important reasons for the complex increase in environmental deterioration in the region. This study examines the environmental consequences and economic impacts of water resource shortages under variant economic policies; however, this requires complex models that jointly consider variant agents and sectors within a systems perspective. Thus, we propose a complex system model that couples multi-agent based models (ABM) and system dynamics (SD) models to simulate the impact of alternative economic policies on water use and pricing. Moreover, this model took the constraint of the local water resources carrying capacity into consideration. Results show that to achieve the 13th Five Year Plan targets in Dianchi, water prices for local residents and industries should rise to 3.23 and 4.99 CNY/m³, respectively. The corresponding sewage treatment fees for residents and industries should rise to 1.50 and 2.25 CNY/m³, respectively, assuming comprehensive adjustment of industrial structure and policy. At the same time, the local government should exercise fine-scale economic policy combined with emission fees assessed for those exceeding a standard, and collect fines imposed as punishment for enterprises that exceed emission standards. When fines reach 500,000 CNY, the total number of enterprises that exceed emission standards in the basin can be controlled within 1%. Moreover, it is suggested that the volume of water diversion in Dianchi should be appropriately reduced to 3.06×10⁸ m³. The reduced expense of water diversion should provide funds to use for the construction of recycled water facilities. Then the local rise in the rate of use of recycled water should reach 33%, and 1.4 CNY/m³ for the price of recycled water could be provided to ensure the sustainable utilization of local water resources.

  19. Stochastic model simulation using Kronecker product analysis and Zassenhaus formula approximation.

    Science.gov (United States)

    Caglar, Mehmet Umut; Pal, Ranadip

    2013-01-01

    Probabilistic Models are regularly applied in Genetic Regulatory Network modeling to capture the stochastic behavior observed in the generation of biological entities such as mRNA or proteins. Several approaches including Stochastic Master Equations and Probabilistic Boolean Networks have been proposed to model the stochastic behavior in genetic regulatory networks. It is generally accepted that Stochastic Master Equation is a fundamental model that can describe the system being investigated in fine detail, but the application of this model is computationally enormously expensive. On the other hand, Probabilistic Boolean Network captures only the coarse-scale stochastic properties of the system without modeling the detailed interactions. We propose a new approximation of the stochastic master equation model that is able to capture the finer details of the modeled system including bistabilities and oscillatory behavior, and yet has a significantly lower computational complexity. In this new method, we represent the system using tensors and derive an identity to exploit the sparse connectivity of regulatory targets for complexity reduction. The algorithm involves an approximation based on Zassenhaus formula to represent the exponential of a sum of matrices as product of matrices. We derive upper bounds on the expected error of the proposed model distribution as compared to the stochastic master equation model distribution. Simulation results of the application of the model to four different biological benchmark systems illustrate performance comparable to detailed stochastic master equation models but with considerably lower computational complexity. The results also demonstrate the reduced complexity of the new approach as compared to commonly used Stochastic Simulation Algorithm for equivalent accuracy.
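    The core approximation described above can be illustrated numerically: the Zassenhaus expansion writes exp(A + B) as a product of matrix exponentials, with the leading correction governed by the commutator [A, B]. The small random matrices below are illustrative stand-ins, not a genetic regulatory network generator.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = 0.05 * rng.normal(size=(4, 4))
B = 0.05 * rng.normal(size=(4, 4))

exact = expm(A + B)
order1 = expm(A) @ expm(B)                      # drops all commutator terms
comm = A @ B - B @ A
order2 = expm(A) @ expm(B) @ expm(-0.5 * comm)  # first Zassenhaus correction

err1 = np.linalg.norm(exact - order1)
err2 = np.linalg.norm(exact - order2)
print(err1, err2)  # the corrected product should be markedly closer
```

    Because each extra Zassenhaus factor cancels the next term of the error, keeping only a few factors trades a small, bounded error for a large reduction in cost when A and B are sparse, which is the complexity argument made in the abstract.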

  20. The Sun-Earth connect 2: Modelling patterns of a fractal Sun in time and space using the fine structure constant

    Science.gov (United States)

    Baker, Robert G. V.

    2017-02-01

    Self-similar matrices of the fine structure constant of solar electromagnetic force and its inverse, multiplied by the Carrington synodic rotation, have been previously shown to account for at least 98% of the top one hundred significant frequencies and periodicities observed in the ACRIM composite irradiance satellite measurement and the terrestrial 10.7 cm Penticton Adjusted Daily Flux data sets. This self-similarity allows for the development of a time-space differential equation (DE) where the solutions define a solar model for transmissions through the core, radiative, tachocline, convective and coronal zones with some encouraging empirical and theoretical results. The DE assumes a fundamental complex oscillation in the solar core and that time at the tachocline is smeared with real and imaginary constructs. The resulting solutions simulate, for tachocline transmission, the solar cycle where time-line trajectories either 'loop' as Hermite polynomials for an active Sun or 'tail' as complementary error functions for a passive Sun. Further, a mechanism that allows for the stable energy transmission through the tachocline is explored and the model predicts the initial exponential coronal heating from nanoflare supercharging. The twisting of the field at the tachocline is then described as a quaternion within which neutrinos can oscillate. The resulting fractal bubbles are simulated as a Julia Set which can then aggregate from nanoflares into solar flares and prominences. Empirical examples demonstrate that time and space fractals are important constructs in understanding the behaviour of the Sun, from the impact on climate and biological histories on Earth, to the fractal influence on the spatial distributions of the solar system. The research suggests that there is a fractal clock underpinning solar frequencies in packages defined by the fine structure constant, where magnetic flipping and irradiance fluctuations at phase changes have periodically impacted on the

  1. Screening wells by multi-scale grids for multi-stage Markov Chain Monte Carlo simulation

    DEFF Research Database (Denmark)

    Akbari, Hani; Engsig-Karup, Allan Peter

    2018-01-01

    /production wells, aiming at accurate breakthrough capturing as well as above mentioned efficiency goals. However this short time simulation needs fine-scale structure of the geological model around wells and running a fine-scale model is not as cheap as necessary for screening steps. On the other hand applying...... it on a coarse-scale model declines important data around wells and causes inaccurate results, particularly accurate breakthrough capturing which is important for prediction applications. Therefore we propose a multi-scale grid which preserves the fine-scale model around wells (as well as high permeable regions...... and fractures) and coarsens rest of the field and keeps efficiency and accuracy for the screening well stage and coarse-scale simulation, as well. A discrete wavelet transform is used as a powerful tool to generate the desired unstructured multi-scale grid efficiently. Finally an accepted proposal on coarse...
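    The wavelet-driven grid selection described above can be mimicked in a few lines: a one-level 2-D Haar transform flags blocks whose detail coefficients are large (sharp permeability contrasts, e.g. near wells or fractures), and only those blocks retain the fine grid. The toy field, threshold, and normalization below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def haar_detail_mask(perm, thresh):
    """Return a boolean mask of 2x2 fine-grid blocks to keep at fine scale."""
    a = perm[0::2, 0::2]; b = perm[0::2, 1::2]
    c = perm[1::2, 0::2]; d = perm[1::2, 1::2]
    # horizontal, vertical and diagonal Haar detail coefficients per block
    dh = (a + c - b - d) / 4.0
    dv = (a + b - c - d) / 4.0
    dd = (a + d - b - c) / 4.0
    detail = np.sqrt(dh**2 + dv**2 + dd**2)
    return detail > thresh

# toy log-permeability field: smooth background plus one sharp streak
field = np.zeros((8, 8))
field[:, 4] = 5.0                    # "fracture" column
keep_fine = haar_detail_mask(field, thresh=1.0)
print(keep_fine.astype(int))         # only blocks straddling the streak are flagged
```

    Blocks below the threshold would be merged into coarse cells, which is how the transform concentrates resolution around wells and high-permeability features while coarsening the rest of the field.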

  2. Future changes in the climatology of the Great Plains low-level jet derived from fine resolution multi-model simulations.

    Science.gov (United States)

    Tang, Ying; Winkler, Julie; Zhong, Shiyuan; Bian, Xindi; Doubler, Dana; Yu, Lejiang; Walters, Claudia

    2017-07-10

    The southerly Great Plains low-level jet (GPLLJ) is one of the most significant circulation features of the central U.S. linking large-scale atmospheric circulation with the regional climate. GPLLJs transport heat and moisture, contribute to thunderstorm and severe weather formation, provide a corridor for the springtime migration of birds and insects, enhance wind energy availability, and disperse air pollution. We assess future changes in GPLLJ frequency using an eight member ensemble of dynamically-downscaled climate simulations for the mid-21st century. Nocturnal GPLLJ frequency is projected to increase in the southern plains in spring and in the central plains in summer, whereas current climatological patterns persist into the future for daytime and cool season GPLLJs. The relationship between future GPLLJ frequency and the extent and strength of anticyclonic airflow over eastern North America varies with season. Most simulations project a westward shift of anticyclonic airflow in summer, but uncertainty is larger for spring with only half of the simulations suggesting a westward expansion. The choice of regional climate model and the driving lateral boundary conditions have a large influence on the projected future changes in GPLLJ frequency and highlight the importance of multi-model ensembles to estimate the uncertainty surrounding the future GPLLJ climatology.

  3. Aviation Safety Simulation Model

    Science.gov (United States)

    Houser, Scott; Yackovetsky, Robert (Technical Monitor)

    2001-01-01

    The Aviation Safety Simulation Model is a software tool that enables users to configure a terrain, a flight path, and an aircraft and simulate the aircraft's flight along the path. The simulation monitors the aircraft's proximity to terrain obstructions, and reports when the aircraft violates accepted minimum distances from an obstruction. This model design facilitates future enhancements to address other flight safety issues, particularly air and runway traffic scenarios. This report shows the user how to build a simulation scenario and run it. It also explains the model's output.
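    The clearance monitoring described above reduces to a simple check: sample terrain elevation and aircraft altitude along the flight path and flag the points where separation falls below the accepted minimum. The function and values below are an illustrative toy, not the model's actual interface.

```python
def clearance_violations(terrain, altitude, min_clearance):
    """Indices along the path where terrain clearance is violated."""
    return [i for i, (t, a) in enumerate(zip(terrain, altitude))
            if a - t < min_clearance]

terrain  = [100, 250, 400, 380, 200]   # ground elevation along the path (m)
altitude = [600, 600, 600, 700, 700]   # aircraft altitude (m)
print(clearance_violations(terrain, altitude, min_clearance=300))  # [2]
```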

  4. Cognitive models embedded in system simulation models

    International Nuclear Information System (INIS)

    Siegel, A.I.; Wolf, J.J.

    1982-01-01

    If we are to discuss and consider cognitive models, we must first come to grips with two questions: (1) What is cognition? (2) What is a model? Presumably, the answers to these questions can provide a basis for defining a cognitive model. Accordingly, this paper first places these two questions into perspective. Then, cognitive models are set within the context of computer simulation models and a number of computer simulations of cognitive processes are described. Finally, pervasive issues are discussed vis-a-vis cognitive modeling in the computer simulation context.

  5. CFD Simulations of Pb-Bi Two-Phase Flow

    International Nuclear Information System (INIS)

    Dostal, Vaclav; Zelezny, Vaclav; Zacha, Pavel

    2008-01-01

    In a Pb-Bi cooled direct contact steam generation fast reactor, water is injected directly above the core; the produced steam is separated at the top and is sent to the turbine. Neither the direct contact phenomenon nor the two-phase flow simulations in CFD have been thoroughly described yet. A first attempt at simulating such two-phase flow in 2D using the CFD code Fluent is presented in this paper. The volume of fluid explicit model was used. Other important simulation parameters were: pressure-velocity relation PISO, discretization scheme body force weighted for pressure, second order upwind for momentum and CICSAM for void fraction. Boundary conditions were mass flow inlet (Pb-Bi 0 kg/s and steam 0.07 kg/s) and pressure outlet. The effect of mesh size (0.5 mm and 0.2 mm cells) was investigated as well as the effect of the turbulence model. It was found that using a fine mesh is very important in order to achieve larger bubbles and that a turbulence model (k-ε realizable) is necessary to properly model the slug flow. The fine mesh and unsteady conditions resulted in a computationally intense problem. This may pose difficulties in 3D simulations of the real experiments. (authors)

  6. Simulation of wind-induced snow transport in alpine terrain using a fully coupled snowpack/atmosphere model

    Science.gov (United States)

    Vionnet, V.; Martin, E.; Masson, V.; Guyomarc'h, G.; Naaim-Bouvet, F.; Prokop, A.; Durand, Y.; Lac, C.

    2013-06-01

    In alpine regions, wind-induced snow transport strongly influences the spatio-temporal evolution of the snow cover throughout the winter season. To gain understanding on the complex processes that drive the redistribution of snow, a new numerical model is developed. It couples directly the detailed snowpack model Crocus with the atmospheric model Meso-NH. Meso-NH/Crocus simulates snow transport in saltation and in turbulent suspension and includes the sublimation of suspended snow particles. A detailed representation of the first meters of the atmosphere allows a fine reproduction of the erosion and deposition process. The coupled model is evaluated against data collected around the experimental site of Col du Lac Blanc (2720 m a.s.l., French Alps). For this purpose, a blowing snow event without concurrent snowfall has been selected and simulated. Results show that the model captures the main structures of atmospheric flow in alpine terrain, the vertical profile of wind speed and the snow particles fluxes near the surface. However, the horizontal resolution of 50 m is found to be insufficient to simulate the location of areas of snow erosion and deposition observed by terrestrial laser scanning. When activated, the sublimation of suspended snow particles causes a reduction in deposition of 5.3%. Total sublimation (surface + blowing snow) is three times higher than surface sublimation in a simulation neglecting blowing snow sublimation.

  7. Approximate deconvolution model for the simulation of turbulent gas-solid flows: An a priori analysis

    Science.gov (United States)

    Schneiderbauer, Simon; Saeedipour, Mahdi

    2018-02-01

    Highly resolved two-fluid model (TFM) simulations of gas-solid flows in vertical periodic channels have been performed to study closures for the filtered drag force and the Reynolds-stress-like contribution stemming from the convective terms. An approximate deconvolution model (ADM) for the large-eddy simulation of turbulent gas-solid suspensions is detailed and subsequently used to reconstruct those unresolved contributions in an a priori manner. With such an approach, an approximation of the unfiltered solution is obtained by repeated filtering allowing the determination of the unclosed terms of the filtered equations directly. A priori filtering shows that predictions of the ADM model yield fairly good agreement with the fine grid TFM simulations for various filter sizes and different particle sizes. In particular, strong positive correlation (ρ > 0.98) is observed at intermediate filter sizes for all sub-grid terms. Additionally, our study reveals that the ADM results moderately depend on the choice of the filters, such as box and Gaussian filter, as well as the deconvolution order. The a priori test finally reveals that ADM is superior compared to isotropic functional closures proposed recently [S. Schneiderbauer, "A spatially-averaged two-fluid model for dense large-scale gas-solid flows," AIChE J. 63, 3544-3562 (2017)].
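    The reconstruction step that ADM relies on, approximating the unfiltered field by repeated filtering, can be sketched in 1-D. Assumed here: a periodic box filter and the van Cittert form of the deconvolution series, not the paper's actual filters or two-fluid fields.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def box_filter(u, w=5):
    # periodic top-hat filter G
    return uniform_filter1d(u, size=w, mode="wrap")

def adm_reconstruct(u_bar, order=5, w=5):
    # u* = sum_{k=0..order} (I - G)^k u_bar  (van Cittert / ADM series)
    term = u_bar.copy()
    u_star = u_bar.copy()
    for _ in range(order):
        term = term - box_filter(term, w)
        u_star = u_star + term
    return u_star

x = 2 * np.pi * np.arange(256) / 256
u = np.sin(3 * x) + 0.3 * np.sin(11 * x)   # resolved plus finer-scale content
u_bar = box_filter(u)                      # what a coarse/filtered model sees
u_star = adm_reconstruct(u_bar)            # approximate deconvolution
print(np.linalg.norm(u - u_bar), np.linalg.norm(u - u_star))
```

    In an a priori test of the kind described above, the unclosed terms would then be evaluated directly from u* and compared against the fine-grid reference.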

  8. Classical Causal Models for Bell and Kochen-Specker Inequality Violations Require Fine-Tuning

    Directory of Open Access Journals (Sweden)

    Eric G. Cavalcanti

    2018-04-01

    Nonlocality and contextuality are at the root of conceptual puzzles in quantum mechanics, and they are key resources for quantum advantage in information-processing tasks. Bell nonlocality is best understood as the incompatibility between quantum correlations and the classical theory of causality, applied to relativistic causal structure. Contextuality, on the other hand, is on a more controversial foundation. In this work, I provide a common conceptual ground between nonlocality and contextuality as violations of classical causality. First, I show that Bell inequalities can be derived solely from the assumptions of no signaling and no fine-tuning of the causal model. This removes two extra assumptions from a recent result from Wood and Spekkens and, remarkably, does not require any assumption related to independence of measurement settings—unlike all other derivations of Bell inequalities. I then introduce a formalism to represent contextuality scenarios within causal models and show that all classical causal models for violations of a Kochen-Specker inequality require fine-tuning. Thus, the quantum violation of classical causality goes beyond the case of spacelike-separated systems and already manifests in scenarios involving single systems.
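    For concreteness, the quantum violation at stake can be checked with one-line arithmetic: for the singlet state the correlator is E(a, b) = −cos(a − b), and at the standard CHSH angles the Bell combination reaches 2√2, above the classical causal bound of 2.

```python
import math

def E(a, b):
    # singlet-state correlator for measurement angles a and b
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)  # 2.828... = 2*sqrt(2), exceeding the classical bound of 2
```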

  9. Modeling and simulation of complex systems a framework for efficient agent-based modeling and simulation

    CERN Document Server

    Siegfried, Robert

    2014-01-01

    Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing structure and dynamics of agent-based models in detail. Based on this evaluation the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore he presents parallel and distributed simulation approaches for execution of agent-based models - from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize underlying hard

  10. Simulations in Cyber-Security: A Review of Cognitive Modeling of Network Attackers, Defenders, and Users.

    Science.gov (United States)

    Veksler, Vladislav D; Buchler, Norbou; Hoffman, Blaine E; Cassenti, Daniel N; Sample, Char; Sugrim, Shridat

    2018-01-01

    Computational models of cognitive processes may be employed in cyber-security tools, experiments, and simulations to address human agency and effective decision-making in keeping computational networks secure. Cognitive modeling can address multi-disciplinary cyber-security challenges requiring cross-cutting approaches over the human and computational sciences such as the following: (a) adversarial reasoning and behavioral game theory to predict attacker subjective utilities and decision likelihood distributions, (b) human factors of cyber tools to address human system integration challenges, estimation of defender cognitive states, and opportunities for automation, (c) dynamic simulations involving attacker, defender, and user models to enhance studies of cyber epidemiology and cyber hygiene, and (d) training effectiveness research and training scenarios to address human cyber-security performance, maturation of cyber-security skill sets, and effective decision-making. Models may be initially constructed at the group-level based on mean tendencies of each subject's subgroup, based on known statistics such as specific skill proficiencies, demographic characteristics, and cultural factors. For more precise and accurate predictions, cognitive models may be fine-tuned to each individual attacker, defender, or user profile, and updated over time (based on recorded behavior) via techniques such as model tracing and dynamic parameter fitting.

  11. Efficient sampling techniques for uncertainty quantification in history matching using nonlinear error models and ensemble level upscaling techniques

    KAUST Repository

    Efendiev, Y.

    2009-11-01

    The Markov chain Monte Carlo (MCMC) is a rigorous sampling method to quantify uncertainty in subsurface characterization. However, the MCMC usually requires many flow and transport simulations in evaluating the posterior distribution and can be computationally expensive for fine-scale geological models. We propose a methodology that combines coarse- and fine-scale information to improve the efficiency of MCMC methods. The proposed method employs off-line computations for modeling the relation between coarse- and fine-scale error responses. This relation is modeled using nonlinear functions with prescribed error precisions which are used in efficient sampling within the MCMC framework. We propose a two-stage MCMC where inexpensive coarse-scale simulations are performed to determine whether or not to run the fine-scale (resolved) simulations. The latter is determined on the basis of a statistical model developed off line. The proposed method is an extension of the approaches considered earlier where linear relations are used for modeling the response between coarse-scale and fine-scale models. The approach considered here does not rely on the proximity of approximate and resolved models and can employ much coarser and more inexpensive models to guide the fine-scale simulations. Numerical results for three-phase flow and transport demonstrate the advantages, efficiency, and utility of the method for uncertainty assessment in the history matching. Copyright 2009 by the American Geophysical Union.
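    The screening idea is easy to sketch: a cheap coarse-scale likelihood filters proposals, and the expensive fine-scale likelihood is evaluated only for proposals that survive the first stage, with the second acceptance ratio correcting for the coarse screen (the delayed-acceptance form). The quadratic log-likelihoods below are toy stand-ins for the coarse and resolved flow simulations.

```python
import math
import random

random.seed(1)

def fine_loglike(m):     # expensive resolved model (toy stand-in)
    return -0.5 * (m - 1.0) ** 2 / 0.1

def coarse_loglike(m):   # cheap approximate model: biased but correlated
    return -0.5 * (m - 1.1) ** 2 / 0.12

def two_stage_mcmc(n=5000, step=0.5):
    m, lc, lf = 0.0, coarse_loglike(0.0), fine_loglike(0.0)
    chain, fine_evals = [], 0
    for _ in range(n):
        prop = m + random.gauss(0.0, step)
        lc_p = coarse_loglike(prop)
        # stage 1: accept/reject on the coarse model only
        if math.log(random.random()) < lc_p - lc:
            fine_evals += 1
            lf_p = fine_loglike(prop)
            # stage 2: correct with the fine model; using the coarse ratio
            # in the second test keeps the fine posterior as the target
            if math.log(random.random()) < (lf_p - lf) - (lc_p - lc):
                m, lc, lf = prop, lc_p, lf_p
        chain.append(m)
    return chain, fine_evals

chain, fine_evals = two_stage_mcmc()
mean = sum(chain[1000:]) / len(chain[1000:])
print(mean, fine_evals)  # mean near the fine posterior mode, far fewer fine runs
```

    The savings come entirely from stage 1: every proposal the coarse model rejects is a fine-scale simulation that never has to run.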

  12. Modelling Soil-Landscapes in Coastal California Hills Using Fine Scale Terrestrial Lidar

    Science.gov (United States)

    Prentice, S.; Bookhagen, B.; Kyriakidis, P. C.; Chadwick, O.

    2013-12-01

    Digital elevation models (DEMs) are the dominant input to spatially explicit digital soil mapping (DSM) efforts due to their increasing availability and the tight coupling between topography and soil variability. Accurate characterization of this coupling is dependent on DEM spatial resolution and soil sampling density, both of which may limit analyses. For example, DEM resolution may be too coarse to accurately reflect scale-dependent soil properties yet downscaling introduces artifactual uncertainty unrelated to deterministic or stochastic soil processes. We tackle these limitations through a DSM effort that couples moderately high density soil sampling with a very fine scale terrestrial lidar dataset (20 cm) implemented in a semiarid rolling hillslope domain where terrain variables change rapidly but smoothly over short distances. Our guiding hypothesis is that in this diffusion-dominated landscape, soil thickness is readily predicted by continuous terrain attributes coupled with catenary hillslope segmentation. We choose soil thickness as our keystone dependent variable for its geomorphic and hydrologic significance, and its tendency to be a primary input to synthetic ecosystem models. In defining catenary hillslope position we adapt a logical rule-set approach that parses common terrain derivatives of curvature and specific catchment area into discrete landform elements (LE). Variograms and curvature-area plots are used to distill domain-scale terrain thresholds from short range order noise characteristic of very fine-scale spatial data. The revealed spatial thresholds are used to condition LE rule-set inputs, rendering a catenary LE map that leverages the robustness of fine-scale terrain data to create a generalized interpretation of soil geomorphic domains. Preliminary regressions show that continuous terrain variables alone (curvature, specific catchment area) only partially explain soil thickness, and only in a subset of soils. For example, at spatial

  13. A numerical model for simulating electroosmotic micro- and nanochannel flows under non-Boltzmann equilibrium

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyoungjin; Kwak, Ho Sang [School of Mechanical Engineering, Kumoh National Institute of Technology, 1 Yangho, Gumi, Gyeongbuk 730-701 (Korea, Republic of); Song, Tae-Ho, E-mail: kimkj@kumoh.ac.kr, E-mail: hskwak@kumoh.ac.kr, E-mail: thsong@kaist.ac.kr [Department of Mechanical, Aerospace and Systems Engineering, Korea Advanced Institute of Science and Technology, 373-1 Guseong, Yuseong, Daejeon 305-701 (Korea, Republic of)

    2011-08-15

    This paper describes a numerical model for simulating electroosmotic flows (EOFs) under non-Boltzmann equilibrium in a micro- and nanochannel. The transport of ionic species is represented by employing the Nernst-Planck equation. Modeling issues related to numerical difficulties are discussed, which include the handling of boundary conditions based on surface charge density, the associated treatment of electric potential and the evasion of nonlinearity due to the electric body force. The EOF in the entrance region of a straight channel is examined. The numerical results show that the present model is useful for the prediction of the EOFs requiring a fine resolution of the electric double layer under either the Boltzmann equilibrium or non-equilibrium. Based on the numerical results, the correlation between the surface charge density and the zeta potential is investigated.
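    The resolution requirement mentioned above can be made concrete in the linearized (Debye–Hückel) limit, where the double-layer potential obeys φ″ = κ²φ and decays over the length 1/κ, so the grid spacing must be well below 1/κ. This sketch solves only that linearized problem, not the paper's full Nernst–Planck model.

```python
import numpy as np

kappa, zeta, L, n = 1.0, 1.0, 10.0, 800
x = np.linspace(0.0, L, n)
h = x[1] - x[0]                            # must satisfy h << 1/kappa

# finite-difference system for phi'' - kappa^2 phi = 0
A = np.zeros((n, n))
A[0, 0] = A[-1, -1] = 1.0                  # Dirichlet rows at both ends
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = 1.0 / h**2
    A[i, i] = -2.0 / h**2 - kappa**2
b = np.zeros(n)
b[0] = zeta                                # phi(0) = zeta, phi(L) = 0
phi = np.linalg.solve(A, b)

exact = zeta * np.exp(-kappa * x)          # analytic Debye-Huckel profile
print(np.max(np.abs(phi - exact)))         # small when h << 1/kappa
```

    Coarsening the grid so that h approaches 1/κ degrades this agreement, which is the sense in which EOF prediction "requires a fine resolution of the electric double layer".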

  14. Simulation in Complex Modelling

    DEFF Research Database (Denmark)

    Nicholas, Paul; Ramsgaard Thomsen, Mette; Tamke, Martin

    2017-01-01

    This paper will discuss the role of simulation in extended architectural design modelling. As a framing paper, the aim is to present and discuss the role of integrated design simulation and feedback between design and simulation in a series of projects under the Complex Modelling framework. Complex...... performance, engage with high degrees of interdependency and allow the emergence of design agency and feedback between the multiple scales of architectural construction. This paper presents examples for integrated design simulation from a series of projects including Lace Wall, A Bridge Too Far and Inflated...... Restraint developed for the research exhibition Complex Modelling, Meldahls Smedie Gallery, Copenhagen in 2016. Where the direct project aims and outcomes have been reported elsewhere, the aim for this paper is to discuss overarching strategies for working with design integrated simulation....

  15. An efficient modeling of fine air-gaps in tokamak in-vessel components for electromagnetic analyses

    International Nuclear Information System (INIS)

    Oh, Dong Keun; Pak, Sunil; Jhang, Hogun

    2012-01-01

    Highlights: ► A simple and efficient modeling technique is introduced to avoid the undesirably massive air mesh usually encountered when modeling fine structures in tokamak in-vessel components. ► The method is based on decoupled nodes at the element boundaries mimicking the air gaps. ► We demonstrate its viability and efficacy by comparing it with brute-force modeling of the air gaps and with an effective-resistivity approximation in place of detailed modeling. ► Application of the method to the ITER machine was successfully carried out without sacrificing computational resources and speed. - Abstract: A simple and efficient modeling technique is presented for a proper analysis of complicated eddy current flows in conducting structures with fine air gaps. It is based on the idea of replacing a slit with the decoupled boundary of finite elements. The viability and efficacy of the technique are demonstrated in a simple problem. Application of the method to electromagnetic load analyses during plasma disruptions in ITER has been successfully carried out without sacrificing computational resources and speed. This shows the proposed method is applicable to a practical system with complicated geometrical structures.

  16. Desktop Modeling and Simulation: Parsimonious, yet Effective Discrete-Event Simulation Analysis

    Science.gov (United States)

    Bradley, James R.

    2012-01-01

    This paper evaluates how quickly students can be trained to construct useful discrete-event simulation models using Excel. The typical supply chain used by many large national retailers is described, and an Excel-based simulation model of it is constructed. The set of programming and simulation skills required to develop that model is then determined; we conclude that six hours of training are required to teach these skills to MBA students. The simulation presented here contains all the fundamental functionality of a simulation model, so our result holds for any discrete-event simulation model. We argue, therefore, that industry workers with the same technical skill set as students who have completed one year in an MBA program can be quickly trained to construct simulation models. This result gives credence to the efficacy of Desktop Modeling and Simulation, whereby simulation analyses can be quickly developed, run, and analyzed with widely available software, namely Excel.
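The event-by-event logic such a spreadsheet model implements fits in a few lines of code. This sketch simulates waiting times in a single-server queue via the Lindley recurrence; the arrival and service rates are hypothetical stand-ins for the retail supply chain described above:

```python
import random

def mm1_wait(arrival_rate, service_rate, n_customers, seed=1):
    """Mean waiting time in a FIFO single-server queue with exponential
    interarrival and service times (an M/M/1 queue), simulated customer
    by customer via the Lindley recurrence."""
    random.seed(seed)
    t, server_free_at, waits = 0.0, 0.0, []
    for _ in range(n_customers):
        t += random.expovariate(arrival_rate)   # next customer arrives
        start = max(t, server_free_at)          # waits if the server is busy
        waits.append(start - t)
        server_free_at = start + random.expovariate(service_rate)
    return sum(waits) / len(waits)

w = mm1_wait(0.8, 1.0, 10000)
print(round(w, 2))  # analytic mean wait is lambda/(mu*(mu-lambda)) = 4.0
```

Each row of an Excel model corresponds to one loop iteration here, which is why the skill set transfers so directly.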

  17. Experimental research on accelerated consolidation using filter jackets in fine oil sands tailings

    Energy Technology Data Exchange (ETDEWEB)

    Tol van, F.; Yao, Y.; Paaseen van, L.; Everts, B. [Delft Univ. of Technology, Delft (Netherlands). Dept. of Geotechnology

    2010-07-01

    This PowerPoint presentation discussed prefabricated vertical drains (PVD) used to enhance the dewatering of fine oil sand tailings. Filtration tests conducted with thickened tailings on standard PVD jackets were presented. Potential clogging mechanisms included clogging of the filter jacket by particles, blinding of the jackets by filter cake, the decreased permeability of consolidated tailings around the drain, and the clogging of the filter jacket with bitumen. Polypropylene and polyester geotextiles were tested in a set-up that replicated conditions observed at 5 to 10 meters below mud level in an oil sand tailings pond. A finite strain consolidation model was used to interpret the results obtained in the experimental study. The relationship between the void ratio and hydraulic conductivity was investigated. Results of the study showed that neither the bitumen nor the fines in the sludge caused serious blinding of the filter jackets during the 40-day test period. The consolidation process was adequately simulated with the finite strain consolidation model. tabs., figs.

  18. Formation of fine sediment deposit from a flash flood river in the Mediterranean Sea

    Science.gov (United States)

    Grifoll, Manel; Gracia, Vicenç; Aretxabaleta, Alfredo L.; Guillén, Jorge; Espino, Manuel; Warner, John C.

    2014-01-01

    We identify the mechanisms controlling fine deposits on the inner-shelf in front of the Besòs River, in the northwestern Mediterranean Sea. This river is characterized by a flash flood regime discharging large amounts of water (more than 20 times the mean water discharge) and sediment in very short periods lasting from hours to few days. Numerical model output was compared with bottom sediment observations and used to characterize the multiple spatial and temporal scales involved in offshore sediment deposit formation. A high-resolution (50 m grid size) coupled hydrodynamic-wave-sediment transport model was applied to the initial stages of the sediment dispersal after a storm-related flood event. After the flood, sediment accumulation was predominantly confined to an area near the coastline as a result of preferential deposition during the final stage of the storm. Subsequent reworking occurred due to wave-induced bottom shear stress that resuspended fine materials, with seaward flow exporting them toward the midshelf. Wave characteristics, sediment availability, and shelf circulation determined the transport after the reworking and the final sediment deposition location. One year simulations of the regional area revealed a prevalent southwestward average flow with increased intensity downstream. The circulation pattern was consistent with the observed fine deposit depocenter being shifted southward from the river mouth. At the southern edge, bathymetry controlled the fine deposition by inducing near-bottom flow convergence enhancing bottom shear stress. According to the short-term and long-term analyses, a seasonal pattern in the fine deposit formation is expected.

  19. Measurement of the fine-structure constant as a test of the Standard Model

    Science.gov (United States)

    Parker, Richard H.; Yu, Chenghui; Zhong, Weicheng; Estey, Brian; Müller, Holger

    2018-04-01

    Measurements of the fine-structure constant α require methods from across subfields and are thus powerful tests of the consistency of theory and experiment in physics. Using the recoil frequency of cesium-133 atoms in a matter-wave interferometer, we recorded the most accurate measurement of the fine-structure constant to date: α = 1/137.035999046(27) at 2.0 × 10‑10 accuracy. Using multiphoton interactions (Bragg diffraction and Bloch oscillations), we demonstrate the largest phase (12 million radians) of any Ramsey-Bordé interferometer and control systematic effects at a level of 0.12 part per billion. Comparison with Penning trap measurements of the electron gyromagnetic anomaly ge ‑ 2 via the Standard Model of particle physics is now limited by the uncertainty in ge ‑ 2; a 2.5σ tension rejects dark photons as the reason for the unexplained part of the muon’s magnetic moment at a 99% confidence level. Implications for dark-sector candidates and electron substructure may be a sign of physics beyond the Standard Model that warrants further investigation.

  20. Two proofs of Fine's theorem

    International Nuclear Information System (INIS)

    Halliwell, J.J.

    2014-01-01

    Fine's theorem concerns the question of determining the conditions under which a certain set of probabilities for pairs of four bivalent quantities may be taken to be the marginals of an underlying probability distribution. The eight CHSH inequalities are well-known to be necessary conditions, but Fine's theorem is the striking result that they are also sufficient conditions. Here two transparent and self-contained proofs of Fine's theorem are presented. The first is a physically motivated proof using an explicit local hidden variables model. The second is an algebraic proof which uses a representation of the probabilities in terms of correlation functions. - Highlights: • A discussion of the various approaches to proving Fine's theorem. • A new physically-motivated proof using a local hidden variables model. • A new algebraic proof. • A new form of the CHSH inequalities
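The necessity direction — that the CHSH inequalities hold for any local hidden-variables model — can be checked by brute force: a deterministic hidden-variable state fixes all four bivalent outcomes at once, there are only 16 such states, and every mixture of them is a convex combination that cannot exceed the extreme points. Fine's nontrivial contribution is the converse (sufficiency), which this sketch does not attempt:

```python
from itertools import product

# A deterministic local hidden-variable state fixes the four bivalent
# quantities a1, a2 (one side) and b1, b2 (the other) to +/-1 simultaneously.
S_values = [a1 * b1 + a1 * b2 + a2 * b1 - a2 * b2
            for a1, a2, b1, b2 in product((-1, 1), repeat=4)]
print(max(abs(S) for S in S_values))  # -> 2: every deterministic state obeys CHSH
```

The bound is visible algebraically too: S = a1(b1 + b2) + a2(b1 - b2), and exactly one of the two brackets vanishes, so S = ±2 for every deterministic state.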

  1. Multiple-point statistical simulation for hydrogeological models: 3-D training image development and conditioning strategies

    Science.gov (United States)

    Høyer, Anne-Sophie; Vignoli, Giulio; Mejer Hansen, Thomas; Thanh Vu, Le; Keefer, Donald A.; Jørgensen, Flemming

    2017-12-01

    Most studies on the application of geostatistical simulations based on multiple-point statistics (MPS) to hydrogeological modelling focus on relatively fine-scale models and concentrate on the estimation of facies-level structural uncertainty. Much less attention is paid to the use of input data and optimal construction of training images. For instance, even though the training image should capture a set of spatial geological characteristics to guide the simulations, the majority of the research still relies on 2-D or quasi-3-D training images. In the present study, we demonstrate a novel strategy for 3-D MPS modelling characterized by (i) realistic 3-D training images and (ii) an effective workflow for incorporating a diverse group of geological and geophysical data sets. The study covers an area of 2810 km2 in the southern part of Denmark. MPS simulations are performed on a subset of the geological succession (the lower to middle Miocene sediments) which is characterized by relatively uniform structures and dominated by sand and clay. The simulated domain is large and each of the geostatistical realizations contains approximately 45 million voxels with size 100 m × 100 m × 5 m. Data used for the modelling include water well logs, high-resolution seismic data, and a previously published 3-D geological model. We apply a series of different strategies for the simulations based on data quality, and develop a novel method to effectively create observed spatial trends. The training image is constructed as a relatively small 3-D voxel model covering an area of 90 km2. We use an iterative training image development strategy and find that even slight modifications in the training image create significant changes in simulations. Thus, this study shows how to include both the geological environment and the type and quality of input information in order to achieve optimal results from MPS modelling. We present a practical workflow to build the training image and

  2. Micro-mechanical investigation of the effect of fine content on mechanical behavior of gap graded granular materials using DEM

    Directory of Open Access Journals (Sweden)

    Taha Habib

    2017-01-01

    Full Text Available In this paper, we present a micro-mechanical study of the effect of fine content on the behavior of gap-graded granular samples, using numerical simulations performed with the Discrete Element Method. Different samples with fine contents varying from 0% to 30% are simulated. The role of the fine content in reinforcing the granular skeleton and in supporting the external deviatoric stress is then brought to light.

  3. Study on fine geological modelling of the fluvial sandstone reservoir in Daqing oilfield

    Energy Technology Data Exchange (ETDEWEB)

    Zhoa Han-Qing [Daqing Research Institute, Helongjiang (China)

    1997-08-01

    This paper aims at developing a method for fine reservoir description in maturing oilfields using closely spaced well-logging data. The main productive reservoirs in the Daqing oilfield are a set of large fluvial-deltaic deposits in the Songliao Lake Basin, characterized by multiple layers and serious heterogeneity. Fluvial channel sandstone reservoirs account for a fairly important proportion of reserves. After a long period of water flooding, most of them have turned into high-water-cut layers, but they still contain considerable residual reserves, which are difficult to find and tap. Fine reservoir description and a sound geological model are essential for tapping residual oil and enhancing oil recovery. The principal reason for the relatively low precision of predictive models developed using geostatistics is incomplete recognition of the complex distribution of fluvial reservoirs and their internal architectures. Taking advantage of limited outcrop data from other regions (supposing no outcrop data are available in the oilfield) can only provide knowledge of subtle changes in reservoir parameters and internal architectures. The specific geometrical distribution and internal architecture of subsurface reservoirs (such as in produced regions) can be obtained only from the continuous infill well-logging data available in the studied areas. In developing a geological model, we think the first important step is to characterize sandbody geometries and their general architecture, which form the framework of the model, and then the slight interwell changes in parameters and internal architecture, which are the contents and cells of the model. An excellent model should possess both, but geometry is the key to the model, because it controls the distribution of the contents and cells within the model.

  4. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that, because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction needs to be made between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimentation with them). Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviating the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  5. Simulating Interface Growth and Defect Generation in CZT – Simulation State of the Art and Known Gaps

    Energy Technology Data Exchange (ETDEWEB)

    Henager, Charles H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gao, Fei [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hu, Shenyang Y. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lin, Guang [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bylaska, Eric J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zabaras, Nicholas [Cornell Univ., Ithaca, NY (United States)

    2012-11-01

    This one-year study topic project will survey and investigate the known state of the art of modeling and simulation methods suitable for performing fine-scale, fully 3-D modeling of the growth of CZT crystals at the melt-solid interface, and for correlating physical growth and post-growth conditions with the generation and incorporation of defects into the solid CZT crystal. In the course of this study, the project will also identify the critical gaps in our knowledge of modeling and simulation techniques, in terms of what would need to be developed in order to perform accurate physical simulations of defect generation in melt-grown CZT. The transformational nature of this study will be, for the first time, an investigation of modeling and simulation methods for describing microstructural evolution during crystal growth and the identification of the critical gaps in our knowledge of such methods, which is recognized as having tremendous scientific impact for future model developments in a wide variety of materials science areas.

  6. Simulations in Cyber-Security: A Review of Cognitive Modeling of Network Attackers, Defenders, and Users

    Directory of Open Access Journals (Sweden)

    Vladislav D. Veksler

    2018-05-01

    Full Text Available Computational models of cognitive processes may be employed in cyber-security tools, experiments, and simulations to address human agency and effective decision-making in keeping computational networks secure. Cognitive modeling can address multi-disciplinary cyber-security challenges requiring cross-cutting approaches over the human and computational sciences, such as the following: (a) adversarial reasoning and behavioral game theory to predict attacker subjective utilities and decision likelihood distributions; (b) human factors of cyber tools to address human-system integration challenges, estimation of defender cognitive states, and opportunities for automation; (c) dynamic simulations involving attacker, defender, and user models to enhance studies of cyber epidemiology and cyber hygiene; and (d) training effectiveness research and training scenarios to address human cyber-security performance, maturation of cyber-security skill sets, and effective decision-making. Models may be initially constructed at the group level based on mean tendencies of each subject's subgroup, using known statistics such as specific skill proficiencies, demographic characteristics, and cultural factors. For more precise and accurate predictions, cognitive models may be fine-tuned to each individual attacker, defender, or user profile, and updated over time (based on recorded behavior) via techniques such as model tracing and dynamic parameter fitting.

  7. Simulations in Cyber-Security: A Review of Cognitive Modeling of Network Attackers, Defenders, and Users

    Science.gov (United States)

    Veksler, Vladislav D.; Buchler, Norbou; Hoffman, Blaine E.; Cassenti, Daniel N.; Sample, Char; Sugrim, Shridat

    2018-01-01

    Computational models of cognitive processes may be employed in cyber-security tools, experiments, and simulations to address human agency and effective decision-making in keeping computational networks secure. Cognitive modeling can address multi-disciplinary cyber-security challenges requiring cross-cutting approaches over the human and computational sciences such as the following: (a) adversarial reasoning and behavioral game theory to predict attacker subjective utilities and decision likelihood distributions, (b) human factors of cyber tools to address human system integration challenges, estimation of defender cognitive states, and opportunities for automation, (c) dynamic simulations involving attacker, defender, and user models to enhance studies of cyber epidemiology and cyber hygiene, and (d) training effectiveness research and training scenarios to address human cyber-security performance, maturation of cyber-security skill sets, and effective decision-making. Models may be initially constructed at the group level based on mean tendencies of each subject's subgroup, using known statistics such as specific skill proficiencies, demographic characteristics, and cultural factors. For more precise and accurate predictions, cognitive models may be fine-tuned to each individual attacker, defender, or user profile, and updated over time (based on recorded behavior) via techniques such as model tracing and dynamic parameter fitting. PMID:29867661

  8. Advances in Intelligent Modelling and Simulation Simulation Tools and Applications

    CERN Document Server

    Oplatková, Zuzana; Carvalho, Marco; Kisiel-Dorohinicki, Marek

    2012-01-01

    The human capacity to abstract complex systems and phenomena into simplified models has played a critical role in the rapid evolution of our modern industrial processes and scientific research. As a science and an art, Modelling and Simulation have been one of the core enablers of this remarkable human trait, and have become a topic of great importance for researchers and practitioners. This book was created to compile some of the most recent concepts, advances, challenges and ideas associated with Intelligent Modelling and Simulation frameworks, tools and applications. The first chapter discusses the important aspects of human interaction and the correct interpretation of results during simulations. The second chapter gets to the heart of the analysis of entrepreneurship by means of agent-based modelling and simulations. The following three chapters bring together the central theme of simulation frameworks, first describing an agent-based simulation framework, then a simulator for electrical machines, and...

  9. Notes on modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Redondo, Antonio [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-10

    These notes present a high-level overview of how modeling and simulation are carried out by practitioners. The discussion is of a general nature; no specific techniques are examined but the activities associated with all modeling and simulation approaches are briefly addressed. There is also a discussion of validation and verification and, at the end, a section on why modeling and simulation are useful.

  10. Thermal unit availability modeling in a regional simulation model

    International Nuclear Information System (INIS)

    Yamayee, Z.A.; Port, J.; Robinett, W.

    1983-01-01

    The System Analysis Model (SAM) developed under the umbrella of PNUCC's System Analysis Committee is capable of simulating the operation of a given load/resource scenario. This model employs a Monte-Carlo simulation to incorporate uncertainties. Among uncertainties modeled is thermal unit availability both for energy simulation (seasonal) and capacity simulations (hourly). This paper presents the availability modeling in the capacity and energy models. The use of regional and national data in deriving the two availability models, the interaction between the two and modifications made to the capacity model in order to reflect regional practices is presented. A sample problem is presented to show the modification process. Results for modeling a nuclear unit using NERC-GADS is presented
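A minimal hourly Monte Carlo draw of thermal unit availability, in the spirit of the capacity model described above, treats the unit as a two-state (up/down) process with constant hourly failure and repair probabilities. The MTTF/MTTR values below are illustrative, not NERC-GADS statistics:

```python
import random

def simulated_availability(mttf, mttr, hours, seed=0):
    """Fraction of hours a unit is available, from an hourly two-state
    Monte Carlo with per-hour failure probability 1/MTTF and per-hour
    repair probability 1/MTTR."""
    random.seed(seed)
    p_fail, p_repair = 1.0 / mttf, 1.0 / mttr
    up, up_hours = True, 0
    for _ in range(hours):
        if up:
            up_hours += 1
            if random.random() < p_fail:
                up = False                     # forced outage begins
        elif random.random() < p_repair:
            up = True                          # unit returns to service
    return up_hours / hours

a = simulated_availability(mttf=500.0, mttr=50.0, hours=200_000)
print(round(a, 3))  # steady-state value is MTTF/(MTTF+MTTR) ~ 0.909
```

Seasonal energy simulations can use the long-run average directly, while hourly capacity simulations need the full up/down time series that this loop produces.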

  11. Simulating root carbon storage with a coupled carbon — Water cycle root model

    Science.gov (United States)

    Kleidon, A.; Heimann, M.

    1996-12-01

    Is it possible to estimate carbon allocation to fine roots from the water demands of the vegetation? We assess this question by applying a root model which is based on optimisation principles. The model uses a new formulation of water uptake by fine roots, which is necessary to explicitly take into account the highly dynamic and non-steady process of water uptake. Its carbon dynamics are driven by maximising the water uptake while keeping maintenance costs at a minimum. We apply the model to a site in northern Germany and check averaged vertical fine root biomass distribution against measured data. The model reproduces the observed values fairly well and the approach seems promising. However, more validation is necessary, especially on the predicted dynamics of the root biomass.

  12. Fine chemistry

    International Nuclear Information System (INIS)

    Laszlo, P.

    1988-01-01

    The 1988 progress report of the Fine Chemistry laboratory (Polytechnic School, France) is presented. The research programs are centered on renewing the most important reactions of organic chemistry and on inventing new, highly efficient and highly selective reactions using low-cost reagents and solvents. An important research domain concerns the study and fabrication of new catalysts, which are obtained by reactive sputtering of thin films of metals and metal oxides. The Monte Carlo simulations of the long-range electrostatic interactions in a clay and the preparation of acrylamides from the anhydride or acrylic ester are summarized. Moreover, the results obtained in the field of catalysis are given. The published papers and the congress communications are included [fr]

  13. The utility of satellite observations for constraining fine-scale and transient methane sources

    Science.gov (United States)

    Turner, A. J.; Jacob, D.; Benmergui, J. S.; Brandman, J.; White, L.; Randles, C. A.

    2017-12-01

    Resolving differences between top-down and bottom-up emissions of methane from the oil and gas industry is difficult due, in part, to their fine-scale and often transient nature. There is considerable interest in using atmospheric observations to detect these sources. Satellite-based instruments are an attractive tool for this purpose and, more generally, for quantifying methane emissions on fine scales. A number of instruments are planned for launch in the coming years from both low earth and geostationary orbit, but the extent to which they can provide fine-scale information on sources has yet to be explored. Here we present an observation system simulation experiment (OSSE) exploring the tradeoffs between pixel resolution, measurement frequency, and instrument precision on the fine-scale information content of a space-borne instrument measuring methane. We use the WRF-STILT Lagrangian transport model to generate more than 200,000 column footprints at 1.3×1.3 km2 spatial resolution and hourly temporal resolution over the Barnett Shale in Texas. We sub-sample these footprints to match the observing characteristics of the planned TROPOMI and GeoCARB instruments as well as different hypothetical observing configurations. The information content of the various observing systems is evaluated using the Fisher information matrix and its singular values. We draw conclusions on the capabilities of the planned satellite instruments and how these capabilities could be improved for fine-scale source detection.
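The information-content metric described — the Fisher information matrix and its singular values — can be sketched for a toy linear observing system. Here `H` is a random stand-in for the WRF-STILT column footprints, and the observation counts, noise level, and source count are hypothetical; the point is only that the singular-value spectrum of F = HᵀR⁻¹H quantifies how many source modes an observing configuration constrains:

```python
import numpy as np

rng = np.random.default_rng(0)
n_src = 10                                # toy number of surface source elements

def dofs(n_obs, sigma):
    """Degrees of freedom for signal, trace(F(F+I)^-1) with a unit prior,
    from the singular values of the Fisher matrix F = H^T H / sigma^2."""
    H = rng.random((n_obs, n_src))        # stand-in for column footprints
    F = H.T @ H / sigma**2
    s = np.linalg.svd(F, compute_uv=False)
    return float(np.sum(s / (s + 1.0)))

dense = dofs(n_obs=500, sigma=10.0)       # frequent sampling
sparse = dofs(n_obs=20, sigma=10.0)       # infrequent sampling, same precision
print(round(dense, 2), round(sparse, 2))  # denser sampling constrains more modes
```

The same calculation exposes the tradeoff in the OSSE: halving instrument precision (doubling sigma) can be compensated by more frequent or finer-pixel sampling, since both enter F together.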

  14. General introduction to simulation models

    DEFF Research Database (Denmark)

    Hisham Beshara Halasa, Tariq; Boklund, Anette

    2012-01-01

    Monte Carlo simulation can be defined as a representation of real-life systems to gain insight into their functions and to investigate the effects of alternative conditions or actions on the modeled system. Models are a simplification of a system. Most often, it is best to use experiments and field trials to investigate the effect of alternative conditions or actions on a specific system. Nonetheless, field trials are expensive and sometimes not possible to conduct, as in the case of foot-and-mouth disease (FMD). Instead, simulation models can be a good and cheap substitute for experiments and field trials. However, if simulation models are used, good-quality input data must be available. To model FMD, several disease spread models are available. For this project, we chose three simulation models: Davis Animal Disease Spread (DADS), which has been upgraded to DTU-DADS; InterSpread Plus (ISP); and ...
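As a concrete, deliberately oversimplified illustration of Monte Carlo disease-spread simulation, a Reed-Frost chain-binomial epidemic can be run many times to build a distribution of outcomes. The herd size and transmission probability below are invented for illustration and are unrelated to the DADS or ISP parameterizations:

```python
import random

def reed_frost(n, i0, p, seed):
    """Final size of one Reed-Frost epidemic chain: each susceptible escapes
    infection with probability (1 - p) per current infective, per generation."""
    random.seed(seed)
    s, i, total = n - i0, i0, i0
    while i > 0:
        p_inf = 1.0 - (1.0 - p) ** i                 # per-susceptible risk
        new = sum(1 for _ in range(s) if random.random() < p_inf)
        s, i, total = s - new, new, total + new
    return total

# Repeating the chain builds the outcome distribution a single field trial
# could never provide.
sizes = [reed_frost(n=100, i0=1, p=0.02, seed=k) for k in range(500)]
print(round(sum(sizes) / len(sizes), 1))             # mean final outbreak size
```

The value of the Monte Carlo approach lies in the whole distribution of `sizes` (including the many minor outbreaks and the few large ones), not just its mean.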

  15. Numerical simulations of capillary barrier field tests

    International Nuclear Information System (INIS)

    Morris, C.E.; Stormont, J.C.

    1997-01-01

    Numerical simulations of two capillary barrier systems tested in the field were conducted to determine if an unsaturated flow model could accurately represent the observed results. The field data was collected from two 7-m long, 1.2-m thick capillary barriers built on a 10% grade that were being tested to investigate their ability to laterally divert water downslope. One system had a homogeneous fine layer, while the fine soil of the second barrier was layered to increase its ability to laterally divert infiltrating moisture. The barriers were subjected first to constant infiltration while minimizing evaporative losses and then were exposed to ambient conditions. The continuous infiltration period of the field tests for the two barrier systems was modelled to determine the ability of an existing code to accurately represent capillary barrier behavior embodied in these two designs. Differences between the field test and the model data were found, but in general the simulations appeared to adequately reproduce the response of the test systems. Accounting for moisture retention hysteresis in the layered system will potentially lead to more accurate modelling results and is likely to be important when developing reasonable predictions of capillary barrier behavior

  16. Technical fine-tuning problem in renormalized perturbation theory

    International Nuclear Information System (INIS)

    Foda, O.E.

    1983-01-01

    The technical - as opposed to physical - fine tuning problem, i.e. the stability of tree-level gauge hierarchies at higher orders in renormalized perturbation theory, in a number of different models is studied. These include softly-broken supersymmetric models, and non-supersymmetric ones with a hierarchy of spontaneously-broken gauge symmetries. The models are renormalized using the BPHZ prescription, with momentum subtractions. Explicit calculations indicate that the tree-level hierarchy is not upset by the radiative corrections, and consequently no further fine-tuning is required to maintain it. Furthermore, this result is shown to run counter to that obtained via Dimensional Renormalization, (the only scheme used in previous literature on the subject). The discrepancy originates in the inherent local ambiguity in the finite parts of subtracted Feynman integrals. Within fully-renormalized perturbation theory the answer to the technical fine-tuning question (in the sense of whether the radiative corrections will ''readily'' respect the tree level gauge hierarchy or not) is contingent on the renormalization scheme used to define the model at the quantum level, rather than on the model itself. In other words, the need for fine-tuning, when it arises, is an artifact of the application of a certain class of renormalization schemes

  17. Technical fine-tuning problem in renormalized perturbation theory

    Energy Technology Data Exchange (ETDEWEB)

    Foda, O.E.

    1983-01-01

    The technical - as opposed to physical - fine tuning problem, i.e. the stability of tree-level gauge hierarchies at higher orders in renormalized perturbation theory, in a number of different models is studied. These include softly-broken supersymmetric models, and non-supersymmetric ones with a hierarchy of spontaneously-broken gauge symmetries. The models are renormalized using the BPHZ prescription, with momentum subtractions. Explicit calculations indicate that the tree-level hierarchy is not upset by the radiative corrections, and consequently no further fine-tuning is required to maintain it. Furthermore, this result is shown to run counter to that obtained via Dimensional Renormalization, (the only scheme used in previous literature on the subject). The discrepancy originates in the inherent local ambiguity in the finite parts of subtracted Feynman integrals. Within fully-renormalized perturbation theory the answer to the technical fine-tuning question (in the sense of whether the radiative corrections will ''readily'' respect the tree level gauge hierarchy or not) is contingent on the renormalization scheme used to define the model at the quantum level, rather than on the model itself. In other words, the need for fine-tuning, when it arises, is an artifact of the application of a certain class of renormalization schemes.

  18. ECONOMIC MODELING STOCKS CONTROL SYSTEM: SIMULATION MODEL

    OpenAIRE

    Климак, М.С.; Войтко, С.В.

    2016-01-01

    This paper considers theoretical and applied aspects of developing simulation models to predict the optimal development of production systems that create tangible products and services. It is argued that the process of inventory control requires economic and mathematical modeling in view of the complexity of theoretical studies. A simulation model of stock control is presented that supports management decisions in production logistics.

  19. Deep Learning versus Professional Healthcare Equipment: A Fine-Grained Breathing Rate Monitoring Model

    Directory of Open Access Journals (Sweden)

    Bang Liu

    2018-01-01

    Full Text Available In the mHealth field, accurate breathing-rate monitoring techniques have benefited a broad array of healthcare-related applications. Many approaches try to use a smartphone or wearable device with a fine-grained monitoring algorithm to accomplish tasks that previously could be performed only by professional medical equipment. However, such schemes usually deliver poor performance in comparison with professional medical equipment. In this paper, we propose DeepFilter, a deep learning-based fine-grained breathing rate monitoring algorithm that works on a smartphone and achieves professional-level accuracy. DeepFilter is a bidirectional recurrent neural network (RNN) stacked with convolutional layers and sped up by batch normalization. Moreover, we collect 248 hours (16.17 GB) of breathing sound recordings from 109 volunteers to train our model and from another 10 volunteers to test it. The results show a reasonably good accuracy of breathing rate monitoring.
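Batch normalization, the ingredient the paper uses to speed up its convolutional/recurrent stack, standardizes each feature over the mini-batch before a learned scale and shift. A minimal NumPy forward pass (training mode; the shapes and parameters are illustrative, not DeepFilter's):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Training-mode batch normalization over the batch axis (axis 0):
    standardize each feature, then apply the learned scale/shift."""
    mu, var = x.mean(axis=0), x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.default_rng(0).normal(5.0, 3.0, size=(64, 16))
y = batch_norm(x)
# Each feature now has (near-)zero mean and unit variance across the batch.
print(bool(np.allclose(y.mean(axis=0), 0.0, atol=1e-6)),
      bool(np.allclose(y.std(axis=0), 1.0, atol=1e-2)))  # prints: True True
```

Keeping layer inputs standardized this way lets deep stacks train with larger learning rates, which is the speed-up the abstract refers to.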

  20. Design and characterization of a cough simulator.

    Science.gov (United States)

    Zhang, Bo; Zhu, Chao; Ji, Zhiming; Lin, Chao-Hsin

    2017-02-23

    Expiratory droplets from human coughing have always been considered as potential carriers of pathogens, responsible for respiratory infectious disease transmission. To study the transmission of disease by human coughing, a transient repeatable cough simulator has been designed and built. Cough droplets are generated by different mechanisms, such as the breaking of mucus, condensation and high-speed atomization from different depths of the respiratory tract. These mechanisms in coughing produce droplets of different sizes, represented by a bimodal distribution of 'fine' and 'coarse' droplets. A cough simulator is hence designed to generate transient sprays with such bimodal characteristics. It consists of a pressurized gas tank, a nebulizer and an ejector, connected in series, which are controlled by computerized solenoid valves. The bimodal droplet size distribution is characterized for the coarse droplets and fine droplets, by fibrous collection and laser diffraction, respectively. The measured size distributions of coarse and fine droplets are reasonably represented by the Rosin-Rammler and log-normal distributions in probability density function, which leads to a bimodal distribution. To assess the hydrodynamic consequences of coughing including droplet vaporization and polydispersion, a Lagrangian model of droplet trajectories is established, with its ambient flow field predetermined from a computational fluid dynamics simulation.
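
The bimodal size distribution described above can be sketched as a mixture of the two named forms; the parameter values below are illustrative placeholders, not fitted cough data:

```python
import numpy as np

def rosin_rammler_pdf(d, d63, n):
    """Rosin-Rammler (Weibull) probability density for droplet diameter d."""
    return (n / d63) * (d / d63) ** (n - 1) * np.exp(-(d / d63) ** n)

def lognormal_pdf(d, mu, sigma):
    """Log-normal probability density for droplet diameter d."""
    return np.exp(-(np.log(d) - mu) ** 2 / (2 * sigma ** 2)) / (
        d * sigma * np.sqrt(2 * np.pi))

def bimodal_pdf(d, w_fine=0.7, mu=np.log(10.0), sigma=0.5, d63=150.0, n=2.5):
    """Mixture of a log-normal 'fine' mode and a Rosin-Rammler 'coarse' mode
    (diameters in micrometers; all numbers illustrative)."""
    return w_fine * lognormal_pdf(d, mu, sigma) + \
        (1 - w_fine) * rosin_rammler_pdf(d, d63, n)

# Sanity check: the mixture density integrates to ~1 over its support
d = np.linspace(0.1, 1000.0, 200_000)
total = float(np.sum(bimodal_pdf(d)) * (d[1] - d[0]))
```
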

  1. Whole-building Hygrothermal Simulation Model

    DEFF Research Database (Denmark)

    Rode, Carsten; Grau, Karl

    2003-01-01

    An existing integrated simulation tool for dynamic thermal simulation of building was extended with a transient model for moisture release and uptake in building materials. Validation of the new model was begun with comparison against measurements in an outdoor test cell furnished with single...... materials. Almost quasi-steady, cyclic experiments were used to compare the indoor humidity variation and the numerical results of the integrated simulation tool with the new moisture model. Except for the case with chipboard as furnishing, the predictions of indoor humidity with the detailed model were...

  2. Numerical modeling of drying and consolidation of fine sediments and tailings

    NARCIS (Netherlands)

    Van der Meulen, J.; Van Tol, A.F.; Van Paassen, L.A.; Heimovaara, T.J.

    2012-01-01

    The extraction and processing of many mineral ores result in the generation of large volumes of fine-grained residue or tailings. These fine sediments are deposited as a slurry with very high water contents and lose water after deposition due to self-weight consolidation. When the surface is exposed

  3. Antitrust Enforcement Under Endogenous Fines and Price-Dependent Detection Probabilities

    NARCIS (Netherlands)

    Houba, H.E.D.; Motchenkova, E.; Wen, Q.

    2010-01-01

    We analyze the effectiveness of antitrust regulation in a repeated oligopoly model in which both fines and detection probabilities depend on the cartel price. Such fines are closer to actual guidelines than the commonly assumed fixed fines. Under a constant detection probability, we confirm the

  4. Progress in modeling and simulation.

    Science.gov (United States)

    Kindler, E

    1998-01-01

    For the modeling of systems, computers are used more and more, while the other "media" (including the human intellect) that carry models are being abandoned. For the modeling of knowledge, i.e. of more or less general concepts (possibly used to model systems composed of instances of such concepts), object-oriented programming is nowadays widely used. For the modeling of processes existing and developing in time, computer simulation is used, the results of which are often presented by means of animation (graphical pictures moving and changing in time). Unfortunately, object-oriented programming tools are commonly not designed to be of great use for simulation, while programming tools for simulation do not enable their users to apply the advantages of object-oriented programming. Nevertheless, there are exceptions that enable general concepts represented on a computer to be used for constructing simulation models and for modifying them easily. They are described in the present paper, together with precise definitions of modeling, simulation and object-oriented programming (including cases that do not satisfy the definitions but risk introducing misunderstanding), and an outline of their applications and further development. Since computing systems are being introduced as control components into a large spectrum of (technological, social and biological) systems, attention is directed to models of systems that themselves contain modeling components.
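
The combination the paper argues for - general concepts as classes whose instances live inside a simulation - can be sketched with a minimal class-based discrete-event kernel (a generic illustration, not Kindler's tooling):

```python
import heapq

class Machine:
    """A general concept ('class') whose instances populate the model."""
    def __init__(self, name, cycle_time):
        self.name, self.cycle_time, self.parts = name, cycle_time, 0

class Simulation:
    """Minimal discrete-event kernel: a clock plus a time-ordered event queue."""
    def __init__(self):
        self.now, self._queue, self._n = 0.0, [], 0
    def schedule(self, delay, action):
        self._n += 1  # unique tie-breaker keeps heap entries comparable
        heapq.heappush(self._queue, (self.now + delay, self._n, action))
    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, action = heapq.heappop(self._queue)
            action()

sim = Simulation()
m = Machine("press", cycle_time=2.0)

def finish_part():
    m.parts += 1
    sim.schedule(m.cycle_time, finish_part)  # start the next cycle

sim.schedule(m.cycle_time, finish_part)
sim.run(until=10.0)   # completions at t = 2, 4, 6, 8, 10 -> m.parts == 5
```
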

  5. Simulation modelling of fynbos ecosystems: Systems analysis and conceptual models

    CSIR Research Space (South Africa)

    Kruger, FJ

    1985-03-01

    Full Text Available -animal interactions. An additional two models, which expand aspects of the FYNBOS model, are described: a model for simulating canopy processes; and a Fire Recovery Simulator. The canopy process model will simulate ecophysiological processes in more detail than FYNBOS...

  6. One-loop analysis of the electroweak breaking in supersymmetric models and the fine-tuning problem

    CERN Document Server

    De Carlos, B

    1993-01-01

    We examine the electroweak breaking mechanism in the minimal supersymmetric standard model (MSSM) using the {\em complete} one-loop effective potential $V_1$. First, we study which region of the whole MSSM parameter space (i.e. $M_{1/2},m_o,\mu,...$) leads to a successful $SU(2)\times U(1)$ breaking with an acceptable top quark mass. In doing this it is observed that all the one-loop corrections to $V_1$ (even the apparently small ones) must be taken into account in order to get reliable results. We find that the allowed region of parameters is considerably enhanced with respect to former "improved" tree level results. Next, we study the fine-tuning problem associated with the high sensitivity of $M_Z$ to $h_t$ (the top Yukawa coupling). Again, we find that this fine-tuning is appreciably smaller once the one-loop effects are considered than in previous tree level calculations. Finally, we explore the ambiguities and limitations of the ordinary criterion to estimate the degree of fine-tuning. As a r...
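
The "ordinary criterion" for the degree of fine-tuning is commonly quantified as the logarithmic sensitivity of $M_Z^2$ to a parameter; the following toy numerical sketch (not an MSSM computation - the quartic dependence is a made-up stand-in) shows how such a sensitivity is estimated:

```python
def sensitivity(m2_of_h, h, eps=1e-6):
    """Logarithmic sensitivity c = |(h / M^2) * d(M^2)/dh|, estimated by a
    central finite difference (Barbieri-Giudice-style measure)."""
    deriv = (m2_of_h(h * (1 + eps)) - m2_of_h(h * (1 - eps))) / (2 * h * eps)
    return abs(h * deriv / m2_of_h(h))

# Toy stand-in for an M_Z^2(h_t) dependence: a pure power law M^2 = 3 h^4
# has constant logarithmic sensitivity c = 4, independent of h.
c = sensitivity(lambda h: 3.0 * h ** 4, h=1.1)  # -> ~4.0
```
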

  7. Impact of biogenic emission uncertainties on the simulated response of ozone and fine particulate matter to anthropogenic emission reductions.

    Science.gov (United States)

    Hogrefe, Christian; Isukapalli, Sastry S; Tang, Xiaogang; Georgopoulos, Panos G; He, Shan; Zalewsky, Eric E; Hao, Winston; Ku, Jia-Yeong; Key, Tonalee; Sistla, Gopal

    2011-01-01

    The role of emissions of volatile organic compounds and nitric oxide from biogenic sources is becoming increasingly important in regulatory air quality modeling as levels of anthropogenic emissions continue to decrease and stricter health-based air quality standards are being adopted. However, considerable uncertainties still exist in the current estimation methodologies for biogenic emissions. The impact of these uncertainties on ozone and fine particulate matter (PM2.5) levels for the eastern United States was studied, focusing on biogenic emissions estimates from two commonly used biogenic emission models, the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and the Biogenic Emissions Inventory System (BEIS). Photochemical grid modeling simulations were performed for two scenarios: one reflecting present day conditions and the other reflecting a hypothetical future year with reductions in emissions of anthropogenic oxides of nitrogen (NOx). For ozone, the use of MEGAN emissions resulted in a higher ozone response to hypothetical anthropogenic NOx emission reductions compared with BEIS. Applying the current U.S. Environmental Protection Agency guidance on regulatory air quality modeling in conjunction with typical maximum ozone concentrations, the differences in estimated future year ozone design values (DVF) stemming from differences in biogenic emissions estimates were on the order of 4 parts per billion (ppb), corresponding to approximately 5% of the daily maximum 8-hr ozone National Ambient Air Quality Standard (NAAQS) of 75 ppb. For PM2.5, the differences were 0.1-0.25 microg/m3 in the summer total organic mass component of DVFs, corresponding to approximately 1-2% of the value of the annual PM2.5 NAAQS of 15 microg/m3. Spatial variations in the ozone and PM2.5 differences also reveal that the impacts of different biogenic emission estimates on ozone and PM2.5 levels are dependent on ambient levels of anthropogenic emissions.
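
The quoted percentages follow directly from the design-value differences and the NAAQS levels; a quick arithmetic check:

```python
# Differences in future-year design values attributed to the choice of
# biogenic emissions model, as fractions of the relevant NAAQS.
ozone_diff_ppb, ozone_naaqs_ppb = 4.0, 75.0
pm_diffs_ugm3, pm_naaqs_ugm3 = (0.10, 0.25), 15.0

ozone_pct = 100.0 * ozone_diff_ppb / ozone_naaqs_ppb              # ~5.3%
pm_pcts = tuple(100.0 * d / pm_naaqs_ugm3 for d in pm_diffs_ugm3)  # ~0.7-1.7%
```
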

  8. Mathematical Model of Transfer and Deposition of Finely Dispersed Particles in a Turbulent Flow of Emulsions and Suspensions

    Science.gov (United States)

    Laptev, A. G.; Basharov, M. M.

    2018-05-01

    The problem of modeling the turbulent transfer of finely dispersed particles in liquids is considered. An approach is used in which particle transport is represented as a form of diffusion with a coefficient of turbulent transfer toward the wall. Differential transfer equations are written for different cases, and a solution of the cell model is obtained for calculating the separation efficiency in a channel. Based on the theory of turbulent particle transfer and on the boundary layer model, an expression has been obtained for calculating the rate of turbulent deposition of finely dispersed particles. The application of this expression to determining the efficiency of physical coagulation of emulsions in different channels and on the surface of random (chaotic) packings is shown.
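
A minimal sketch of how a turbulent deposition velocity translates into separation efficiency in a channel, assuming simple first-order transfer to the wall (the standard plug-flow relation, not the paper's exact cell model; all numbers illustrative):

```python
import math

def separation_efficiency(u_dep, wall_area, flow_rate):
    """Fraction of finely dispersed particles deposited in a channel,
    assuming first-order transfer to the wall at deposition velocity u_dep:
    eta = 1 - exp(-u_dep * A / Q)  (plug-flow mass balance)."""
    return 1.0 - math.exp(-u_dep * wall_area / flow_rate)

# Illustrative numbers: u_dep in m/s, wall area in m^2, flow rate in m^3/s
eta = separation_efficiency(u_dep=1e-3, wall_area=0.5, flow_rate=2e-4)
```
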

  9. Simulation modeling and analysis with Arena

    CERN Document Server

    Altiok, Tayfur

    2007-01-01

    Simulation Modeling and Analysis with Arena is a highly readable textbook which treats the essentials of the Monte Carlo discrete-event simulation methodology, and does so in the context of the popular Arena simulation environment. It treats simulation modeling as an in-vitro laboratory that facilitates the understanding of complex systems and experimentation with what-if scenarios in order to estimate their performance metrics. The book contains chapters on the simulation modeling methodology and the underpinnings of discrete-event systems, as well as the relevant underlying probability, statistics, stochastic processes, input analysis, model validation and output analysis. All simulation-related concepts are illustrated in numerous Arena examples, encompassing production lines, manufacturing and inventory systems, transportation systems, and computer information systems in networked settings.· Introduces the concept of discrete event Monte Carlo simulation, the most commonly used methodology for modeli...
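
As a minimal example of the discrete-event Monte Carlo methodology the book covers, the following sketch estimates the mean waiting time in an M/M/1 queue via Lindley's recursion and compares it with the analytic value (an illustrative stand-alone program, not an Arena model):

```python
import random

def mm1_mean_wait(lam, mu, n_customers, seed=1):
    """Monte Carlo estimate of the mean waiting time in an M/M/1 queue using
    Lindley's recursion: W_{k+1} = max(0, W_k + S_k - A_{k+1})."""
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n_customers):
        total += w
        service = rng.expovariate(mu)        # service time of customer k
        interarrival = rng.expovariate(lam)  # gap to customer k+1
        w = max(0.0, w + service - interarrival)
    return total / n_customers

est = mm1_mean_wait(lam=0.5, mu=1.0, n_customers=200_000)
analytic = 0.5 / (1.0 * (1.0 - 0.5))   # Wq = lam / (mu * (mu - lam)) = 1.0
```

Output analysis in the book's sense would add confidence intervals; here a single long run suffices to show agreement with the closed form.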

  10. Coupling fine particle and bedload transport in gravel-bedded streams

    Science.gov (United States)

    Park, Jungsu; Hunt, James R.

    2017-09-01

    Fine particles in the silt- and clay-size range are important determinants of surface water quality. Since fine particle loading rates are not unique functions of stream discharge this limits the utility of the available models for water quality assessment. Data from 38 minimally developed watersheds within the United States Geological Survey stream gauging network in California, USA reveal three lines of evidence that fine particle release is coupled with bedload transport. First, there is a transition in fine particle loading rate as a function of discharge for gravel-bedded sediments that does not appear when the sediment bed is composed of sand, cobbles, boulders, or bedrock. Second, the discharge at the transition in the loading rate is correlated with the initiation of gravel mobilization. Third, high frequency particle concentration and discharge data are dominated by clockwise hysteresis where rising limb discharges generally have higher concentrations than falling limb discharges. These three observations across multiple watersheds lead to a conceptual model that fine particles accumulate within the sediment bed at discharges less than the transition and then the gravel bed fluidizes with fine particle release at discharges above the transition discharge. While these observations were individually recognized in the literature, this analysis provides a consistent conceptual model based on the coupling of fine particle dynamics with filtration at low discharges and gravel bed fluidization at higher discharges.
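
The conceptual model above implies a loading-rate curve with a break at the gravel-mobilizing discharge; a hedged sketch using piecewise power laws (the coefficients are made up - the source fits them per watershed):

```python
def fine_particle_load(q, q_transition, a_low, b_low, a_high, b_high):
    """Fine-particle loading rate as L = a * Q**b, with separate coefficients
    below and above the transition discharge at which the gravel bed
    mobilizes and stored fines are released."""
    if q < q_transition:
        return a_low * q ** b_low       # filtration/accumulation regime
    return a_high * q ** b_high         # bed-fluidized, release regime

# Illustrative coefficients only
low = fine_particle_load(5.0, 10.0, 1.0, 1.0, 0.05, 2.0)    # -> 5.0
high = fine_particle_load(20.0, 10.0, 1.0, 1.0, 0.05, 2.0)  # -> 20.0
```
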

  11. Large, high-intensity fire events in Southern California shrublands: Debunking the fine-grain age patch model

    Science.gov (United States)

    Keeley, J.E.; Zedler, P.H.

    2009-01-01

    We evaluate the fine-grain age patch model of fire regimes in southern California shrublands. Proponents contend that the historical condition was characterized by frequent small to moderate size, slow-moving smoldering fires, and that this regime has been disrupted by fire suppression activities that have caused unnatural fuel accumulation and anomalously large and catastrophic wildfires. A review of more than 100 19th-century newspaper reports reveals that large, high-intensity wildfires predate modern fire suppression policy, and extensive newspaper coverage plus first-hand accounts support the conclusion that the 1889 Santiago Canyon Fire was the largest fire in California history. Proponents of the fine-grain age patch model contend that even the very earliest 20th-century fires were the result of fire suppression disrupting natural fuel structure. We tested that hypothesis and found that, within the fire perimeters of two of the largest early fire events in 1919 and 1932, prior fire suppression activities were insufficient to have altered the natural fuel structure. Over the last 130 years there has been no significant change in the incidence of large fires greater than 10000 ha, consistent with the conclusion that fire suppression activities are not the cause of these fire events. Eight megafires (≥50 000 ha) are recorded for the region, and half have occurred in the last five years. These burned through a mosaic of age classes, which raises doubts that accumulation of old age classes explains these events. Extreme drought is a plausible explanation for this recent rash of such events, and it is hypothesized that these are due to droughts that led to increased dead fine fuels that promoted the incidence of firebrands and spot fires. A major shortcoming of the fine-grain age patch model is that it requires age-dependent flammability of shrubland fuels, but seral stage chaparral is dominated by short-lived species that create a dense surface layer of fine

  12. Development of the Transport Class Model (TCM) Aircraft Simulation From a Sub-Scale Generic Transport Model (GTM) Simulation

    Science.gov (United States)

    Hueschen, Richard M.

    2011-01-01

    A six degree-of-freedom, flat-earth dynamics, non-linear, and non-proprietary aircraft simulation was developed that is representative of a generic mid-sized twin-jet transport aircraft. The simulation was developed from a non-proprietary, publicly available, subscale twin-jet transport aircraft simulation using scaling relationships and a modified aerodynamic database. The simulation has an extended aerodynamics database with aero data outside the normal transport-operating envelope (large angle-of-attack and sideslip values). The simulation has representative transport aircraft surface actuator models with variable rate-limits and generally fixed position limits. The simulation contains a generic 40,000 lb sea level thrust engine model. The engine model is a first order dynamic model with a variable time constant that changes according to simulation conditions. The simulation provides a means for interfacing a flight control system to use the simulation sensor variables and to command the surface actuators and throttle position of the engine model.
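
The first-order engine dynamic with a condition-dependent time constant can be sketched as follows (the `tau_of_state` interface, receiving the current thrust, is an assumption for illustration; the simulation's actual scheduling of the time constant is not specified here):

```python
def simulate_engine(thrust_cmd, tau_of_state, t_end, dt=0.01, thrust0=0.0):
    """First-order engine lag  d(thrust)/dt = (cmd - thrust) / tau,
    integrated with forward Euler; tau may vary with the current state."""
    thrust = thrust0
    for _ in range(int(round(t_end / dt))):
        tau = tau_of_state(thrust)
        thrust += dt * (thrust_cmd - thrust) / tau
    return thrust

# Constant tau = 1 s: after 5 s the response reaches ~99.3% of the
# commanded 40,000 lb thrust
final = simulate_engine(thrust_cmd=40000.0, tau_of_state=lambda x: 1.0,
                        t_end=5.0)
```
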

  13. Multiple-point statistical simulation for hydrogeological models: 3-D training image development and conditioning strategies

    Directory of Open Access Journals (Sweden)

    A.-S. Høyer

    2017-12-01

    Full Text Available Most studies on the application of geostatistical simulations based on multiple-point statistics (MPS to hydrogeological modelling focus on relatively fine-scale models and concentrate on the estimation of facies-level structural uncertainty. Much less attention is paid to the use of input data and optimal construction of training images. For instance, even though the training image should capture a set of spatial geological characteristics to guide the simulations, the majority of the research still relies on 2-D or quasi-3-D training images. In the present study, we demonstrate a novel strategy for 3-D MPS modelling characterized by (i realistic 3-D training images and (ii an effective workflow for incorporating a diverse group of geological and geophysical data sets. The study covers an area of 2810 km2 in the southern part of Denmark. MPS simulations are performed on a subset of the geological succession (the lower to middle Miocene sediments which is characterized by relatively uniform structures and dominated by sand and clay. The simulated domain is large and each of the geostatistical realizations contains approximately 45 million voxels with size 100 m  ×  100 m  ×  5 m. Data used for the modelling include water well logs, high-resolution seismic data, and a previously published 3-D geological model. We apply a series of different strategies for the simulations based on data quality, and develop a novel method to effectively create observed spatial trends. The training image is constructed as a relatively small 3-D voxel model covering an area of 90 km2. We use an iterative training image development strategy and find that even slight modifications in the training image create significant changes in simulations. Thus, this study shows how to include both the geological environment and the type and quality of input information in order to achieve optimal results from MPS modelling. We present a practical

  14. Simulation of a Large Wildfire in a Coupled Fire-Atmosphere Model

    Directory of Open Access Journals (Sweden)

    Jean-Baptiste Filippi

    2018-06-01

    Full Text Available The Aullene fire devastated more than 3000 ha of Mediterranean maquis and pine forest in July 2009. The simulation of combustion processes, as well as atmospheric dynamics represents a challenge for such scenarios because of the various involved scales, from the scale of the individual flames to the larger regional scale. A coupled approach between the Meso-NH (Meso-scale Non-Hydrostatic atmospheric model running in LES (Large Eddy Simulation mode and the ForeFire fire spread model is proposed for predicting fine- to large-scale effects of this extreme wildfire, showing that such simulation is possible in a reasonable time using current supercomputers. The coupling involves the surface wind to drive the fire, while heat from combustion and water vapor fluxes are injected into the atmosphere at each atmospheric time step. To be representative of the phenomenon, a sub-meter resolution was used for the simulation of the fire front, while atmospheric simulations were performed with nested grids from 2400-m to 50-m resolution. Simulations were run with or without feedback from the fire to the atmospheric model, or without coupling from the atmosphere to the fire. In the two-way mode, the burnt area was reproduced with a good degree of realism at the local scale, where an acceleration in the valley wind and over sloping terrain pushed the fire line to locations in accordance with fire passing point observations. At the regional scale, the simulated fire plume compares well with the satellite image. The study explores the strong fire-atmosphere interactions leading to intense convective updrafts extending above the boundary layer, significant downdrafts behind the fire line in the upper plume, and horizontal wind speeds feeding strong inflow into the base of the convective updrafts. The fire-induced dynamics is induced by strong near-surface sensible heat fluxes reaching maximum values of 240 kW m − 2 . The dynamical production of turbulent kinetic

  15. Fine Guidance Sensing for Coronagraphic Observatories

    Science.gov (United States)

    Brugarolas, Paul; Alexander, James W.; Trauger, John T.; Moody, Dwight C.

    2011-01-01

    Three options have been developed for Fine Guidance Sensing (FGS) for coronagraphic observatories using a Fine Guidance Camera within a coronagraphic instrument. Coronagraphic observatories require very fine precision pointing in order to image faint objects at very small distances from a target star. The Fine Guidance Camera measures the direction to the target star. The first option, referred to as Spot, was to collect all of the light reflected from a coronagraph occulter onto a focal plane, producing an Airy-type point spread function (PSF). This would allow almost all of the starlight from the central star to be used for centroiding. The second approach, referred to as Punctured Disk, collects the light that bypasses a central obscuration, producing a PSF with a punctured central disk. The final approach, referred to as Lyot, collects light after passing through the occulter at the Lyot stop. The study includes generation of representative images for each option by the science team, followed by an engineering evaluation of a centroiding or a photometric algorithm for each option. After the alignment of the coronagraph to the fine guidance system, a "nulling" point on the FGS focal point is determined by calibration. This alignment is implemented by a fine alignment mechanism that is part of the fine guidance camera selection mirror. If the star images meet the modeling assumptions, and the star "centroid" can be driven to that nulling point, the contrast for the coronagraph will be maximized.
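
The centroiding step shared by the three options reduces to an intensity-weighted mean over the focal-plane image; a minimal sketch on a synthetic Gaussian "PSF" (not flight code):

```python
import numpy as np

def centroid(image):
    """Intensity-weighted centroid (row, col) of a focal-plane image --
    the basic operation behind photometric fine-guidance centroiding."""
    total = image.sum()
    rows, cols = np.indices(image.shape)
    return (rows * image).sum() / total, (cols * image).sum() / total

# Synthetic Gaussian 'PSF' centered at row 12.0, column 20.5
r, c = np.indices((32, 40))
psf = np.exp(-((r - 12.0) ** 2 + (c - 20.5) ** 2) / (2 * 2.0 ** 2))
cy, cx = centroid(psf)
```

In a fine-guidance loop, the offset between (cy, cx) and the calibrated nulling point would drive the pointing correction.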

  16. A reduced-order modeling approach to represent subgrid-scale hydrological dynamics for land-surface simulations: application in a polygonal tundra landscape

    Science.gov (United States)

    Pau, G. S. H.; Bisht, G.; Riley, W. J.

    2014-09-01

    Existing land surface models (LSMs) describe physical and biological processes that occur over a wide range of spatial and temporal scales. For example, biogeochemical and hydrological processes responsible for carbon (CO2, CH4) exchanges with the atmosphere range from the molecular scale (pore-scale O2 consumption) to tens of kilometers (vegetation distribution, river networks). Additionally, many processes within LSMs are nonlinearly coupled (e.g., methane production and soil moisture dynamics), and therefore simple linear upscaling techniques can result in large prediction error. In this paper we applied a reduced-order modeling (ROM) technique known as "proper orthogonal decomposition mapping method" that reconstructs temporally resolved fine-resolution solutions based on coarse-resolution solutions. We developed four different methods and applied them to four study sites in a polygonal tundra landscape near Barrow, Alaska. Coupled surface-subsurface isothermal simulations were performed for summer months (June-September) at fine (0.25 m) and coarse (8 m) horizontal resolutions. We used simulation results from three summer seasons (1998-2000) to build ROMs of the 4-D soil moisture field for the study sites individually (single-site) and aggregated (multi-site). The results indicate that the ROM produced a significant computational speedup (>10^3) with very small relative approximation error relative to the fine-resolution solutions used in training the ROM. We also demonstrate that our approach: (1) efficiently corrects for coarse-resolution model bias and (2) can be used for polygonal tundra sites not included in the training data set with relatively good accuracy (< 1.7% relative error), thereby allowing for the possibility of applying these ROMs across a much larger landscape. By coupling the ROMs constructed at different scales together hierarchically, this method has the potential to efficiently increase the resolution of land models for coupled climate simulations to spatial scales consistent with
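
The proper-orthogonal-decomposition idea behind the ROM can be sketched with an SVD of synthetic snapshots (a simplified stand-in for the paper's mapping method; the "soil moisture" fields here are made-up smooth modes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 'training snapshots': each column is one fine-resolution field
# built from three smooth spatial modes (stand-ins for soil moisture).
x = np.linspace(0.0, 1.0, 400)
modes = np.stack([np.sin(np.pi * k * x) for k in (1, 2, 3)])  # (3, 400)
snapshots = (rng.random((50, 3)) @ modes).T                    # (400, 50)

# POD: left singular vectors of the snapshot matrix form the reduced basis
u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = u[:, :3]                      # keep the 3 dominant modes

# Represent a new field in the reduced space and map back to fine resolution
new_field = np.array([0.3, -0.7, 1.1]) @ modes
coeffs = basis.T @ new_field          # project onto the POD basis
recon = basis @ coeffs                # reconstruct the fine-resolution field
rel_err = np.linalg.norm(recon - new_field) / np.linalg.norm(new_field)
```

Because the new field lies in the span of the training modes, the reconstruction error is essentially zero; the paper's contribution is mapping coarse-resolution solutions onto such a basis.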

  17. Soft sensor for real-time cement fineness estimation.

    Science.gov (United States)

    Stanišić, Darko; Jorgovanović, Nikola; Popov, Nikola; Čongradac, Velimir

    2015-03-01

    This paper describes the design and implementation of soft sensors to estimate cement fineness. Soft sensors are mathematical models that use available data to provide real-time information on process variables when the information, for whatever reason, is not available by direct measurement. In this application, soft sensors are used to provide information on process variables normally provided by off-line laboratory tests performed at large time intervals. Cement fineness is one of the crucial parameters that define the quality of produced cement. Providing real-time information on cement fineness using soft sensors can overcome limitations and problems that originate from a lack of information between two laboratory tests. The model inputs were selected from candidate process variables using an information theoretic approach. Models based on multi-layer perceptrons were developed, and their ability to estimate cement fineness of laboratory samples was analyzed. Models that had the best performance, and capacity to adapt to changes in the cement grinding circuit, were selected to implement soft sensors. Soft sensors were tested using data from a continuous cement production to demonstrate their use in real-time fineness estimation. Their performance was highly satisfactory, and the sensors proved to be capable of providing valuable information on cement grinding circuit performance. After successful off-line tests, soft sensors were implemented and installed in the control room of a cement factory. Results on the site confirm results obtained by tests conducted during soft sensor development. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
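
A soft sensor of this kind is, at its core, a small regression network mapping process variables to fineness; a self-contained numpy sketch on synthetic data (not the plant's model, inputs, or data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in data: three 'process variables' -> a 'fineness' target
X = rng.random((500, 3))
y = (2.0 * X[:, 0] - 1.5 * X[:, 1] ** 2 + 0.5 * X[:, 2]).reshape(-1, 1)

# One-hidden-layer perceptron trained by full-batch gradient descent
w1 = rng.normal(0.0, 0.5, (3, 16)); b1 = np.zeros((1, 16))
w2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros((1, 1))
lr = 0.1
for _ in range(4000):
    h = np.tanh(X @ w1 + b1)              # hidden layer
    err = (h @ w2 + b2) - y               # prediction error
    gw2 = h.T @ err / len(X); gb2 = err.mean(0, keepdims=True)
    gh = (err @ w2.T) * (1.0 - h ** 2)    # backprop through tanh
    gw1 = X.T @ gh / len(X); gb1 = gh.mean(0, keepdims=True)
    w1 -= lr * gw1; b1 -= lr * gb1; w2 -= lr * gw2; b2 -= lr * gb2

mse = float(np.mean(((np.tanh(X @ w1 + b1) @ w2 + b2) - y) ** 2))
```

The paper's input-selection step (an information-theoretic screen of candidate variables) would precede this; here the three inputs are simply given.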

  18. AEGIS geologic simulation model

    International Nuclear Information System (INIS)

    Foley, M.G.

    1982-01-01

    The Geologic Simulation Model (GSM) is used by the AEGIS (Assessment of Effectiveness of Geologic Isolation Systems) program at the Pacific Northwest Laboratory to simulate the dynamic geology and hydrology of a geologic nuclear waste repository site over a million-year period following repository closure. The GSM helps to organize geologic/hydrologic data; to focus attention on active natural processes by requiring their simulation; and, through interactive simulation and calibration, to reduce subjective evaluations of the geologic system. During each computer run, the GSM produces a million-year geologic history that is possible for the region and the repository site. In addition, the GSM records in permanent history files everything that occurred during that time span. Statistical analyses of data in the history files of several hundred simulations are used to classify typical evolutionary paths, to establish the probabilities associated with deviations from the typical paths, and to determine which types of perturbations of the geologic/hydrologic system, if any, are most likely to occur. These simulations will be evaluated by geologists familiar with the repository region to determine validity of the results. Perturbed systems that are determined to be the most realistic, within whatever probability limits are established, will be used for the analyses that involve radionuclide transport and dose models. The GSM is designed to be continuously refined and updated. Simulation models are site specific, and, although the submodels may have limited general applicability, the input data requirements necessitate detailed characterization of each site before application

  19. On the Accelerated Settling of Fine Particles in a Bidisperse Slurry

    Directory of Open Access Journals (Sweden)

    Leonid L. Minkov

    2015-01-01

    Full Text Available An estimate of the increase in the volume average sedimentation velocity of fine particles in a bidisperse suspension, due to their capture in the circulation zone formed in the laminar flow of an incompressible viscous fluid around a spherical coarse particle, is proposed. The estimate is important for explaining the nonmonotonic shape of the separation curve observed for hydrocyclones. The volume average sedimentation velocity is evaluated on the basis of a cellular model. The characteristic dimensions of the circulation zone are obtained from a numerical solution of the Navier-Stokes equations. Furthermore, these calculations are used for modelling the fast sedimentation of fine particles during their cosedimentation in a bidisperse suspension. It was found that the acceleration of sedimentation of fine particles is determined by the concentration of coarse particles in the bidisperse suspension, and that the sedimentation velocity of the fine fraction is proportional to the square of the coarse-to-fine particle diameter ratio. The limitations of the proposed model are ascertained.
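
The quoted proportionality to the squared diameter ratio echoes the Stokes-settling result; a quick numerical check with illustrative material properties (quartz-like particles in water):

```python
def stokes_settling_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Stokes terminal settling velocity of a small sphere:
    v = (rho_p - rho_f) * g * d**2 / (18 * mu)."""
    return (rho_p - rho_f) * g * d ** 2 / (18.0 * mu)

# The d**2 dependence means a coarse particle 5x the fine diameter settles
# 25x faster -- the velocity contrast that drives fines into the coarse
# particle's circulation zone. Properties are illustrative only.
v_fine = stokes_settling_velocity(10e-6, 2650.0, 1000.0, 1e-3)
v_coarse = stokes_settling_velocity(50e-6, 2650.0, 1000.0, 1e-3)
ratio = v_coarse / v_fine   # (50/10)**2 = 25
```
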

  20. Simulating Pacific Northwest Forest Response to Climate Change: How We Made Model Results Useful for Vulnerability Assessments

    Science.gov (United States)

    Kim, J. B.; Kerns, B. K.; Halofsky, J.

    2014-12-01

    GCM-based climate projections and downscaled climate data proliferate, and there are many climate-aware vegetation models in use by researchers. Yet application of fine-scale DGVM based simulation output in national forest vulnerability assessments is not common, because there are technical, administrative and social barriers for their use by managers and policy makers. As part of a science-management climate change adaptation partnership, we performed simulations of vegetation response to climate change for four national forests in the Blue Mountains of Oregon using the MC2 dynamic global vegetation model (DGVM) for use in vulnerability assessments. Our simulation results under business-as-usual scenarios suggest a starkly different future forest conditions for three out of the four national forests in the study area, making their adoption by forest managers a potential challenge. However, using DGVM output to structure discussion of potential vegetation changes provides a suitable framework to discuss the dynamic nature of vegetation change compared to using more commonly available model output (e.g. species distribution models). From the onset, we planned and coordinated our work with national forest managers to maximize the utility and the consideration of the simulation results in planning. Key lessons from this collaboration were: (1) structured and strategic selection of a small number climate change scenarios that capture the range of variability in future conditions simplified results; (2) collecting and integrating data from managers for use in simulations increased support and interest in applying output; (3) a structured, regionally focused, and hierarchical calibration of the DGVM produced well-validated results; (4) simple approaches to quantifying uncertainty in simulation results facilitated communication; and (5) interpretation of model results in a holistic context in relation to multiple lines of evidence produced balanced guidance. 

  1. Cosmological simulations of multicomponent cold dark matter.

    Science.gov (United States)

    Medvedev, Mikhail V

    2014-08-15

    The nature of dark matter is unknown. A number of dark matter candidates are quantum flavor-mixed particles but this property has never been accounted for in cosmology. Here we explore this possibility from the first principles via extensive N-body cosmological simulations and demonstrate that the two-component dark matter model agrees with observational data at all scales. Substantial reduction of substructure and flattening of density profiles in the centers of dark matter halos found in simulations can simultaneously resolve several outstanding puzzles of modern cosmology. The model shares the "why now?" fine-tuning caveat pertinent to all self-interacting models. Predictions for direct and indirect detection dark matter experiments are made.

  2. Improving the utility of the fine motor skills subscale of the comprehensive developmental inventory for infants and toddlers: a computerized adaptive test.

    Science.gov (United States)

    Huang, Chien-Yu; Tung, Li-Chen; Chou, Yeh-Tai; Chou, Willy; Chen, Kuan-Lin; Hsieh, Ching-Lin

    2017-07-27

    This study aimed at improving the utility of the fine motor subscale of the comprehensive developmental inventory for infants and toddlers (CDIIT) by developing a computerized adaptive test of fine motor skills. We built an item bank for the computerized adaptive test of fine motor skills using the fine motor subscale of the CDIIT items fitting the Rasch model. We also examined the psychometric properties and efficiency of the computerized adaptive test of fine motor skills with simulated computerized adaptive tests. Data from 1742 children with suspected developmental delays were retrieved. The mean scores of the fine motor subscale of the CDIIT increased along with age groups (mean scores = 1.36-36.97). The computerized adaptive test of fine motor skills contains 31 items meeting the Rasch model's assumptions (infit mean square = 0.57-1.21, outfit mean square = 0.11-1.17). For children of 6-71 months, the computerized adaptive test of fine motor skills had high Rasch person reliability (average reliability >0.90), high concurrent validity (rs = 0.67-0.99), adequate to excellent diagnostic accuracy (area under receiver operating characteristic = 0.71-1.00), and large responsiveness (effect size = 1.05-3.93). The computerized adaptive test of fine motor skills used 48-84% fewer items than the fine motor subscale of the CDIIT. The computerized adaptive test of fine motor skills used fewer items for assessment but was as reliable and valid as the fine motor subscale of the CDIIT. Implications for Rehabilitation We developed a computerized adaptive test based on the comprehensive developmental inventory for infants and toddlers (CDIIT) for assessing fine motor skills. The computerized adaptive test has been shown to be efficient because it uses fewer items than the original measure and automatically presents the results right after the test is completed. The computerized adaptive test is as reliable and valid as the CDIIT.
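The selection-and-update cycle behind such a simulated computerized adaptive test can be sketched with the Rasch model alone. The item difficulties, starting estimate, and single Newton-Raphson update per item below are illustrative assumptions, not the CDIIT calibration:

```python
import math
import random

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def run_cat(item_difficulties, true_theta, max_items=8, seed=0):
    """Simulate an adaptive test: administer the most informative remaining
    item, score a simulated response, and update the ability estimate with
    one Newton-Raphson step on the Rasch log-likelihood."""
    rng = random.Random(seed)
    remaining = list(range(len(item_difficulties)))
    answered = []   # (difficulty, 0/1 score) pairs
    theta = 0.0     # start the ability estimate at the scale centre
    for _ in range(min(max_items, len(item_difficulties))):
        # Rasch item information is P(1-P), maximal when b is close to theta.
        best = max(remaining,
                   key=lambda i: (p := rasch_p(theta, item_difficulties[i])) * (1 - p))
        remaining.remove(best)
        b = item_difficulties[best]
        answered.append((b, 1 if rng.random() < rasch_p(true_theta, b) else 0))
        # Newton-Raphson step: (score residual) / (test information).
        resid = sum(x - rasch_p(theta, bb) for bb, x in answered)
        info = sum((p := rasch_p(theta, bb)) * (1 - p) for bb, x in answered)
        theta += resid / info
    return theta

bank = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
estimate = run_cat(bank, true_theta=1.0)
```

Because the next item is always the most informative one at the current estimate, a short test concentrates items near the examinee's ability, which is why a CAT can match the full scale's precision with far fewer items.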

  3. THE MARK I BUSINESS SYSTEM SIMULATION MODEL

    Science.gov (United States)

    of a large-scale business simulation model as a vehicle for doing research in management controls. The major results of the program were the...development of the Mark I business simulation model and the Simulation Package (SIMPAC). SIMPAC is a method and set of programs facilitating the construction...of large simulation models. The object of this document is to describe the Mark I Corporation model, state why parts of the business were modeled as they were, and indicate the research applications of the model. (Author)

  4. Potential for added value in precipitation simulated by high-resolution nested Regional Climate Models and observations

    Energy Technology Data Exchange (ETDEWEB)

    Di Luca, Alejandro; Laprise, Rene [Universite du Quebec a Montreal (UQAM), Centre ESCER (Etude et Simulation du Climat a l' Echelle Regionale), Departement des Sciences de la Terre et de l' Atmosphere, PK-6530, Succ. Centre-ville, B.P. 8888, Montreal, QC (Canada); De Elia, Ramon [Universite du Quebec a Montreal, Ouranos Consortium, Centre ESCER (Etude et Simulation du Climat a l' Echelle Regionale), Montreal (Canada)

    2012-03-15

    Regional Climate Models (RCMs) are the most commonly used method for performing affordable high-resolution regional climate simulations. The key issue in the evaluation of nested regional models is to determine whether RCM simulations improve the representation of climatic statistics compared to the driving data, that is, whether RCMs add value. In this study we examine a necessary condition that climate statistics derived from the precipitation field must satisfy for the RCM technique to generate added value: we focus on whether the climate statistic of interest contains some fine spatial-scale variability that would be absent on a coarser grid. The presence and magnitude of the fine-scale precipitation variance required to adequately describe a given climate statistic is then used to quantify the potential added value (PAV) of RCMs. Our results show that the PAV of RCMs is much higher for short temporal scales (e.g., 3-hourly data) than for long temporal scales (16-day average data) due to the filtering resulting from the time-averaging process. PAV is higher in the warm season than in the cold season due to the higher proportion of precipitation falling from small-scale weather systems in the warm season. In regions of complex topography, the orographic forcing induces an extra component of PAV, regardless of the season or temporal scale considered. The PAV is also estimated using high-resolution datasets based on observations, allowing an evaluation of the sensitivity to changing resolution in the real climate system. The results show that RCMs tend to reproduce the PAV relatively well compared to observations, although they overestimate it in the warm season and in mountainous regions. (orig.)
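The necessary condition can be illustrated with a simple variance decomposition: block-average a precipitation field to a coarse grid and measure the share of variance left in the fine-scale residual. This is a sketch of the idea under assumed definitions, not the authors' exact diagnostic:

```python
import numpy as np

def potential_added_value(field, block):
    """Share of a field's spatial variance at scales finer than the coarse
    grid: block-average, upsample back, and compare residual variance to
    total variance (an assumed, simplified form of the PAV diagnostic)."""
    ny, nx = field.shape
    coarse = field.reshape(ny // block, block, nx // block, block).mean(axis=(1, 3))
    upsampled = np.kron(coarse, np.ones((block, block)))
    return (field - upsampled).var() / field.var()

rng = np.random.default_rng(1)
# A field with only large-scale structure: constant on 8x8 blocks.
smooth = np.repeat(np.repeat(rng.random((4, 4)), 8, axis=0), 8, axis=1)
# The same field plus grid-scale noise, i.e. genuine fine-scale variability.
noisy = smooth + 0.5 * rng.standard_normal((32, 32))
```

For the smooth field the ratio is zero (the coarse grid already captures everything, so downscaling cannot add value), while the noisy field keeps a large share of its variance at fine scales.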

  5. Nonparametric Fine Tuning of Mixtures: Application to Non-Life Insurance Claims Distribution Estimation

    Science.gov (United States)

    Sardet, Laure; Patilea, Valentin

    When pricing a specific insurance premium, the actuary needs to evaluate the claims cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, such as the lognormal, Weibull and Pareto laws. Mixtures of such distributions improve the flexibility of the parametric approach and are well suited to capturing the skewness, the long tails and the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture model, typically a two- or three-component mixture. Next, we use the mixture cumulative distribution function (CDF) to transform the data into the unit interval, where we apply a beta-kernel smoothing procedure. A bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in the estimation of quantiles with simulated nonnegative data and of the quantiles of the individual claims distribution in a non-life insurance application.
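A minimal sketch of the pipeline (pilot mixture CDF transform, Chen-type beta-kernel smoothing on the unit interval, back-transform) might look as follows. The two-component lognormal pilot, its parameters, and the fixed bandwidth are illustrative assumptions; the paper's bandwidth rule is not reproduced here:

```python
import numpy as np
from scipy import stats

def beta_kernel_density(u_eval, u_data, b):
    """Chen-type beta-kernel density estimate on [0,1]: at each evaluation
    point u, average the Beta(u/b + 1, (1-u)/b + 1) density over the data."""
    return np.array([stats.beta.pdf(u_data, u / b + 1.0, (1.0 - u) / b + 1.0).mean()
                     for u in u_eval])

# Parsimonious two-component lognormal pilot mixture (parameters fixed here
# for illustration; in practice they would come from an EM fit to the claims).
w, m1, s1, m2, s2 = 0.7, 0.0, 0.5, 2.0, 0.8

def pilot_cdf(x):
    return (w * stats.lognorm.cdf(x, s1, scale=np.exp(m1))
            + (1 - w) * stats.lognorm.cdf(x, s2, scale=np.exp(m2)))

def pilot_pdf(x):
    return (w * stats.lognorm.pdf(x, s1, scale=np.exp(m1))
            + (1 - w) * stats.lognorm.pdf(x, s2, scale=np.exp(m2)))

rng = np.random.default_rng(0)
n = 2000
from_first = rng.random(n) < w
claims = np.where(from_first, rng.lognormal(m1, s1, n), rng.lognormal(m2, s2, n))

x = np.linspace(0.05, 20.0, 200)
# Transform to the unit interval, smooth there, then back-transform:
# f_X(x) = f_U(F(x)) * f_mix(x).
density = beta_kernel_density(pilot_cdf(x), pilot_cdf(claims), b=0.02) * pilot_pdf(x)
```

When the pilot mixture already fits well, the transformed data are near-uniform and the beta-kernel factor stays close to one; otherwise it bends the pilot density toward the data, which is the "automatic fine-tuning" of the method.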

  6. Modelling the fine and coarse fraction of Pb, Cd, As and Ni air concentration in Spain

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, M. A.; Vivanco, M. G.

    2015-07-01

    Lead, cadmium, arsenic and nickel are present in the air due to natural and anthropogenic emissions, normally bound to particles. Human health and ecosystems can be damaged by high atmospheric levels of these metals, since they can be introduced into organisms via inhalation or ingestion. Small particles are inhaled and embedded in the lungs and alveoli more easily than coarse particles. The CHIMERE model is an Eulerian air quality model extensively used in air quality modelling. Metals have recently been included in this model in a special version developed by the CIEMAT modelling group (Madrid, Spain). Vivanco et al. (2011) and Gonzalez et al. (2012) presented an evaluation of the model performance for some metals in Spain and Europe. In these studies, metals were treated as fine particles. Nevertheless, there is some observational evidence of the presence of some metals in the coarse fraction as well. For this reason, a new attempt at modelling metals considering a fine (<2.5 μm) and coarse (2.5-10 μm) fraction has been made. Measurements of metal concentration in PM10, PM2.5 and PM1 recorded in Spain were used to obtain the new metal particle size distribution. In addition, natural emissions, not considered in the above-mentioned studies, were implemented in the model by considering metal emissions associated with dust resuspension. An evaluation of the new version is presented and discussed for two domains in Spain, centered on Barcelona and Huelva respectively. (Author)

  9. High Fidelity BWR Fuel Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Su Jong [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-08-01

    This report describes the Consortium for Advanced Simulation of Light Water Reactors (CASL) work conducted for completion of the Thermal Hydraulics Methods (THM) Level 3 milestone THM.CFD.P13.03: High Fidelity BWR Fuel Simulation. High-fidelity computational fluid dynamics (CFD) simulation of a Boiling Water Reactor (BWR) was conducted to investigate the applicability and robustness of BWR closures. As a preliminary study, a CFD model with simplified Ferrule spacer grid geometry from the NUPEC BWR Full-size Fine-mesh Bundle Test (BFBT) benchmark was implemented, and the performance of a multiphase segregated solver with baseline boiling closures was evaluated. Although the mean void fraction and exit quality of the CFD results for BFBT case 4101-61 agreed with experimental data, the local void distribution was not predicted accurately. Mesh quality was one of the critical factors in obtaining a converged result, and the stability and robustness of the simulation were mainly affected by the mesh quality and the combination of BWR closure models. In addition, CFD modelling of the fully detailed spacer grid geometry with mixing vanes is necessary to improve the accuracy of the CFD simulation.

  10. Stochastic models: theory and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2008-03-01

    Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
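As a concrete instance of the general recipe above, generating samples of a stochastic process, the following sketch factors a squared-exponential covariance matrix and draws correlated Gaussian sample paths. The covariance choice and jitter value are assumptions for illustration, not the report's algorithms:

```python
import numpy as np

def sample_gaussian_process(t, corr_length, n_samples, rng):
    """Draw sample paths of a zero-mean, unit-variance stationary Gaussian
    process with squared-exponential covariance by Cholesky factorization."""
    dist = t[:, None] - t[None, :]
    cov = np.exp(-0.5 * (dist / corr_length) ** 2)
    # A small diagonal jitter keeps the factorization numerically stable.
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(t)))
    return (L @ rng.standard_normal((len(t), n_samples))).T

rng = np.random.default_rng(42)
t = np.linspace(0.0, 1.0, 200)
paths = sample_gaussian_process(t, corr_length=0.1, n_samples=500, rng=rng)
```

Each row of `paths` is one realization; such samples can then serve as random inputs or boundary conditions to a deterministic simulation code, exactly as the report describes.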

  11. Simulation Model of a Transient

    DEFF Research Database (Denmark)

    Jauch, Clemens; Sørensen, Poul; Bak-Jensen, Birgitte

    2005-01-01

    This paper describes the simulation model of a controller that enables an active-stall wind turbine to ride through transient faults. The simulated wind turbine is connected to a simple model of a power system. Certain fault scenarios are specified and the turbine shall be able to sustain operati...

  12. Errors and uncertainties introduced by a regional climate model in climate impact assessments: example of crop yield simulations in West Africa

    International Nuclear Information System (INIS)

    Ramarohetra, Johanna; Pohl, Benjamin; Sultan, Benjamin

    2015-01-01

    The challenge of estimating the potential impacts of climate change has led to an increasing use of dynamical downscaling to produce fine spatial-scale climate projections for impact assessments. In this work, we analyze if, and to what extent, the bias in the simulated crop yield can be reduced by using the Weather Research and Forecasting (WRF) regional climate model to downscale ERA-Interim (European Centre for Medium-Range Weather Forecasts (ECMWF) Re-Analysis) rainfall and radiation data. Then, we evaluate the uncertainties resulting from both the choice of the physical parameterizations of the WRF model and its internal variability. Impact assessments were performed at two sites in Sub-Saharan Africa, using two crop models to simulate Niger pearl millet and Benin maize yields. We find that using the WRF model to downscale ERA-Interim climate data generally reduces the bias in the simulated crop yield, yet this reduction in bias strongly depends on the choices in the model setup. Among the physical parameterizations considered, we show that the choice of the land surface model (LSM) is of primary importance. When there is no coupling with an LSM, the simulated precipitation, and hence the simulated yield, is null; when the LSM is too simplistic, both are very low. Coupling with an LSM is therefore necessary. The convective scheme is the second most influential scheme for yield simulation, followed by the shortwave radiation scheme. The uncertainties related to the internal variability of the WRF model are also significant and reach up to 30% of the simulated yields. These results suggest that regional models need to be used more carefully in order to improve the reliability of impact assessments. (letter)

  13. Implementation of a generalized actuator line model for wind turbine parameterization in the Weather Research and Forecasting model

    Energy Technology Data Exchange (ETDEWEB)

    Marjanovic, Nikola [Department of Civil and Environmental Engineering, University of California, Berkeley, MC 1710, Berkeley, California 94720-1710, USA; Atmospheric, Earth and Energy Division, Lawrence Livermore National Laboratory, PO Box 808, L-103, Livermore, California 94551, USA; Mirocha, Jeffrey D. [Atmospheric, Earth and Energy Division, Lawrence Livermore National Laboratory, PO Box 808, L-103, Livermore, California 94551, USA; Kosović, Branko [Research Applications Laboratory, Weather Systems and Assessment Program, University Corporation for Atmospheric Research, PO Box 3000, Boulder, Colorado 80307, USA; Lundquist, Julie K. [Department of Atmospheric and Oceanic Sciences, University of Colorado, Boulder, Campus Box 311, Boulder, Colorado 80309, USA; National Renewable Energy Laboratory, 15013 Denver West Parkway, Golden, Colorado 80401, USA; Chow, Fotini Katopodes [Department of Civil and Environmental Engineering, University of California, Berkeley, MC 1710, Berkeley, California 94720-1710, USA

    2017-11-01

    A generalized actuator line (GAL) wind turbine parameterization is implemented within the Weather Research and Forecasting model to enable high-fidelity large-eddy simulations of wind turbine interactions with boundary layer flows under realistic atmospheric forcing conditions. Numerical simulations using the GAL parameterization are evaluated against both an already implemented generalized actuator disk (GAD) wind turbine parameterization and two field campaigns that measured the inflow and near-wake regions of a single turbine. The representation of wake wind speed, variance, and vorticity distributions is examined by comparing fine-resolution GAL and GAD simulations and GAD simulations at both fine and coarse-resolutions. The higher-resolution simulations show slightly larger and more persistent velocity deficits in the wake and substantially increased variance and vorticity when compared to the coarse-resolution GAD. The GAL generates distinct tip and root vortices that maintain coherence as helical tubes for approximately one rotor diameter downstream. Coarse-resolution simulations using the GAD produce similar aggregated wake characteristics to both fine-scale GAD and GAL simulations at a fraction of the computational cost. The GAL parameterization provides the capability to resolve near wake physics, including vorticity shedding and wake expansion.
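At the heart of any actuator line method is the projection of blade-element point forces onto the flow grid with a smoothing kernel. The sketch below uses the standard normalized Gaussian kernel; it is a generic illustration of the technique, not the WRF implementation:

```python
import numpy as np

def project_line_forces(grid_x, grid_y, grid_z, points, forces, eps):
    """Spread actuator-line point forces onto a 3D grid using the normalized
    Gaussian kernel eta(d) = exp(-(d/eps)^2) / (eps^3 pi^(3/2)), so that the
    body-force field integrates back to the sum of the point forces."""
    X, Y, Z = np.meshgrid(grid_x, grid_y, grid_z, indexing="ij")
    body_force = np.zeros_like(X)
    norm = eps ** 3 * np.pi ** 1.5
    for (px, py, pz), f in zip(points, forces):
        d2 = (X - px) ** 2 + (Y - py) ** 2 + (Z - pz) ** 2
        body_force += f * np.exp(-d2 / eps ** 2) / norm
    return body_force

g = np.linspace(-1.0, 1.0, 41)          # uniform grid, spacing h = 0.05
bf = project_line_forces(g, g, g, [(0.0, 0.0, 0.0)], [1.0], eps=0.15)
```

With the smoothing width `eps` a few cell widths wide, the discrete integral `bf.sum() * h**3` recovers the applied force to within a percent; too small an `eps` under-resolves the kernel, which is one reason fine grids are needed to capture tip and root vortices.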

  14. A VRLA battery simulation model

    International Nuclear Information System (INIS)

    Pascoe, Phillip E.; Anbuky, Adnan H.

    2004-01-01

    A valve regulated lead acid (VRLA) battery simulation model is an invaluable tool for the standby power system engineer. The obvious use for such a model is to allow the assessment of battery performance. This may involve determining the influence of cells suffering from state of health (SOH) degradation on the performance of the entire string, or the running of test scenarios to ascertain the most suitable battery size for the application. In addition, it enables the engineer to assess the performance of the overall power system. This includes, for example, running test scenarios to determine the benefits of various load shedding schemes. It also allows the assessment of other power system components, either for determining their requirements and/or vulnerabilities. Finally, a VRLA battery simulation model is vital as a stand-alone tool for educational purposes. Despite the fundamentals of the VRLA battery having been established for over 100 years, its operating behaviour is often poorly understood. An accurate simulation model enables the engineer to gain a better understanding of VRLA battery behaviour. A system-level multipurpose VRLA battery simulation model is presented. It allows an arbitrary battery (capacity, SOH, number of cells and number of strings) to be simulated under arbitrary operating conditions (discharge rate, ambient temperature, end voltage, charge rate and initial state of charge). The model accurately reflects the VRLA battery discharge and recharge behaviour, including the complex start-of-discharge region known as the coup de fouet.
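Much simpler than the system-level model described above, a first-order estimate of lead-acid discharge time can be sketched with Peukert's law. The exponent value here is a typical assumption, not a parameter of the paper's model:

```python
def peukert_runtime(capacity_ah, rated_hours, current_a, k=1.15):
    """Discharge time from Peukert's law, t = H * (C / (I * H))**k, with C
    the rated capacity over H hours.  k = 1.15 is a typical lead-acid value
    assumed here; it is not taken from the paper's model."""
    return rated_hours * (capacity_ah / (current_a * rated_hours)) ** k
```

A 100 Ah (C20) battery discharged at its 5 A rated current lasts exactly 20 h, while doubling the current to 10 A yields roughly 9 h rather than 10, the kind of rate dependence a full simulation model must reproduce along with temperature and SOH effects.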

  15. Simulation of Venus polar vortices with the non-hydrostatic general circulation model

    Science.gov (United States)

    Rodin, Alexander V.; Mingalev, Oleg; Orlov, Konstantin

    2012-07-01

    The dynamics of the Venus atmosphere in the polar regions presents a challenge for general circulation models. Numerous images and hyperspectral data from the Venus Express mission show that above 60 degrees latitude atmospheric motion is substantially different from that of the tropical and extratropical atmosphere. In particular, extended polar hoods composed presumably of fine haze particles, as well as polar vortices revealing mesoscale wave perturbations with variable zonal wavenumbers, imply the significance of vertical motion in these circulation elements. On these scales, however, the hydrostatic balance commonly used in general circulation models is no longer valid, and vertical forces have to be taken into account to obtain a correct wind field. We present the first non-hydrostatic general circulation model of the Venus atmosphere based on the full set of gas dynamics equations. The model uses a uniform grid with a resolution of 1.2 degrees in the horizontal and 200 m in the vertical direction. Thermal forcing is simulated by means of a relaxation approximation with a specified thermal profile and time scale. The model takes advantage of hybrid calculations on graphics processors using CUDA technology in order to increase performance. Simulations show that vorticity is concentrated at high latitudes within planetary-scale, off-axis vortices, precessing with a period of 30 to 40 days. The scale and position of these vortices coincide with the polar hoods observed in the UV images. The regions characterized by high vorticity are surrounded by series of small vortices which may be caused by shear instability of the zonal flow. The vertical velocity component implies that in the central part of high-vorticity areas the atmospheric flow is downwelling and perturbed by mesoscale waves with zonal wavenumbers 1-4, resembling observed wave structures in the polar vortices. Simulations also show the existence of areas with strong vertical flow, concentrated in spiral branches extending

  16. Reducing the fine-tuning of gauge-mediated SUSY breaking

    Energy Technology Data Exchange (ETDEWEB)

    Casas, J.A.; Moreno, Jesus M. [Universidad Autonoma de Madrid, Instituto de Fisica Teorica, IFT-UAM/CSIC, Madrid (Spain); Robles, Sandra [Universidad Autonoma de Madrid, Instituto de Fisica Teorica, IFT-UAM/CSIC, Madrid (Spain); Universidad Autonoma de Madrid, Departamento de Fisica Teorica, Madrid (Spain); Rolbiecki, Krzysztof [Universidad Autonoma de Madrid, Instituto de Fisica Teorica, IFT-UAM/CSIC, Madrid (Spain); University of Warsaw, Faculty of Physics, Warsaw (Poland)

    2016-08-15

    Despite their appealing features, models with gauge-mediated supersymmetry breaking (GMSB) typically present a high degree of fine-tuning, due to the initial absence of the top trilinear scalar couplings, A{sub t} = 0. In this paper, we carefully evaluate this tuning, showing that it is worse than one per mil in the minimal model. Then, we examine some existing proposals to generate an A{sub t} ≠ 0 term in this context. We find that, although the stops can be made lighter, usually the tuning does not improve (it may even be worse), with some exceptions, which involve the generation of A{sub t} at one loop or at tree level. We examine both possibilities and propose a conceptually simplified version of the latter, which is arguably the optimal GMSB setup (with minimal matter content) concerning the fine-tuning issue. The resulting fine-tuning is better than one per mil, still severe but similar to other minimal supersymmetric standard model constructions. We also explore the so-called "little A{sub t}{sup 2}/m{sup 2} problem", i.e. the fact that a large A{sub t}-term is normally accompanied by a similar or larger sfermion mass, which typically implies an increase in the fine-tuning. Finally, we find the version of GMSB for which this ratio is optimized, which, nevertheless, does not minimize the fine-tuning. (orig.)

  17. A Two-Scale Reduced Model for Darcy Flow in Fractured Porous Media

    KAUST Repository

    Chen, Huangxin

    2016-06-01

    In this paper, we develop a two-scale reduced model for simulating the Darcy flow in two-dimensional porous media with conductive fractures. We apply the approach motivated by the embedded fracture model (EFM) to simulate the flow on the coarse scale, and the effect of fractures on each coarse-scale grid cell intersecting with fractures is represented by the discrete fracture model (DFM) on the fine scale. In the DFM used on the fine scale, the matrix-fracture system is resolved on an unstructured grid which represents the fractures accurately, while in the EFM used on the coarse scale, the flux interaction between fractures and matrix is treated as a source term, and the matrix-fracture system can be resolved on a structured grid. The Raviart-Thomas mixed finite element methods are used for the solution of the coupled flows in the matrix and the fractures on both fine and coarse scales. Numerical results are presented to demonstrate the efficiency of the proposed model for simulation of flow in fractured porous media.

  18. Magnetosphere Modeling: From Cartoons to Simulations

    Science.gov (United States)

    Gombosi, T. I.

    2017-12-01

    Over the last half a century physics-based global computer simulations became a bridge between experiment and basic theory and now represent the "third pillar" of geospace research. Today, many of our scientific publications utilize large-scale simulations to interpret observations, test new ideas, plan campaigns, or design new instruments. Realistic simulations of the complex Sun-Earth system have been made possible by the dramatically increased power of both computing hardware and numerical algorithms. Early magnetosphere models were based on simple E&M concepts (like the Chapman-Ferraro cavity) and hydrodynamic analogies (bow shock). At the beginning of the space age, current system models were developed, culminating in the sophisticated Tsyganenko-type description of the magnetic configuration. The first 3D MHD simulations of the magnetosphere were published in the early 1980s. A decade later there were several competing global models that were able to reproduce many fundamental properties of the magnetosphere. The leading models included the impact of the ionosphere by using a height-integrated electric potential description. Dynamic coupling of global and regional models started in the early 2000s by integrating a ring current and a global magnetosphere model. It has been recognized for quite some time that plasma kinetic effects play an important role. Presently, global hybrid simulations of the dynamic magnetosphere are expected to be possible on exascale supercomputers, while fully kinetic simulations with realistic mass ratios are still decades away. In the 2010s several groups started to experiment with PIC simulations embedded in large-scale 3D MHD models. Presently this integrated MHD-PIC approach is at the forefront of magnetosphere simulations and this technique is expected to lead to some important advances in our understanding of magnetospheric physics.
This talk will review the evolution of magnetosphere modeling from cartoons to current systems

  19. Stochastic modeling analysis and simulation

    CERN Document Server

    Nelson, Barry L

    1995-01-01

    A coherent introduction to the techniques for modeling dynamic stochastic systems, this volume also offers a guide to the mathematical, numerical, and simulation tools of systems analysis. Suitable for advanced undergraduates and graduate-level industrial engineers and management science majors, it proposes modeling systems in terms of their simulation, regardless of whether simulation is employed for analysis. Beginning with a view of the conditions that permit a mathematical-numerical analysis, the text explores Poisson and renewal processes, Markov chains in discrete and continuous time, se

  20. Simulation of hydrogen mitigation in catalytic recombiner. Part-II: Formulation of a CFD model

    International Nuclear Information System (INIS)

    Prabhudharwadkar, Deoras M.; Iyer, Kannan N.

    2011-01-01

    Research highlights: → Hydrogen transport in containment with recombiners is a multi-scale problem. → A novel methodology worked out to lump the recombiner characteristics. → Results obtained using the commercial code FLUENT are cast in the form of correlations. → Hence, coarse grids can obtain an accurate distribution of H2 in the containment. → Satisfactory working of the methodology is clearly demonstrated. - Abstract: This paper aims at the formulation of a model, compatible with a CFD code, to simulate hydrogen distribution and mitigation using a passive catalytic recombiner in nuclear power plant containments. The catalytic recombiner is much smaller than the containment compartments. Fully resolving the recombination processes during containment simulations would require modelling the geometric details of the recombiner with a very fine mesh inside the recombiner channels. This component, when integrated with containment mixing calculations, would result in a large number of mesh elements and hence large computational times. This paper describes a method to resolve this simulation difficulty. In this exercise, the catalytic recombiner alone was first modelled in detail using the option best suited to describing the reaction rate. A detailed parametric study was conducted, from which correlations for the heat of reaction (hence the rate of reaction) and the heat transfer coefficient were obtained. These correlations were then used to model the recombiner channels as single computational cells providing the necessary volumetric sources/sinks to the energy and species transport equations. This avoids full resolution of these channels, thereby allowing a larger mesh size in the recombiners. The above-mentioned method was successfully validated using both steady-state and transient test problems, and the results indicate very satisfactory modelling of the component.
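The lumping idea, turning a resolved recombiner channel into volumetric source terms for a single coarse cell, can be sketched as below. The power-law rate correlation and its constants are hypothetical stand-ins for the FLUENT-derived correlations of the paper; only the heat of combustion (~120 MJ/kg of H2) is a physical constant:

```python
def lumped_recombiner_sources(y_h2, rho, cell_volume, k0=0.5, n=1.0, dh=1.2e8):
    """Volumetric source/sink terms for a recombiner channel lumped into a
    single CFD cell.  The rate law r = k0 * (rho * y_h2)**n  [kg/(m^3 s)] is
    a hypothetical power-law stand-in for the paper's fitted correlations;
    dh ~ 1.2e8 J/kg is the lower heating value of hydrogen."""
    rate = k0 * (rho * y_h2) ** n           # H2 consumed per unit volume and time
    h2_sink = -rate * cell_volume           # kg/s, sink for the species equation
    heat_source = rate * dh * cell_volume   # W, source for the energy equation
    return h2_sink, heat_source
```

In the coupled calculation these two terms are all the containment-scale solver needs from the recombiner cell, which is precisely why the channel geometry no longer has to be meshed.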

  1. Process metallurgical evaluation and application of very fine bubbling technology

    Energy Technology Data Exchange (ETDEWEB)

    Catana, C.; Gotsis, V.S.; Dourdounis, E.; Angelopoulos, G.N.; Papamantellos, D.C. [Lab. of Metallurgy, Univ. of Patras, Rio (Greece); Mavrommatis, K. [IEHK, RWTH Aachen, Aachen (Germany)

    2002-12-01

    The potential of VFB (Very Fine Bubbling) technology in steelmaking, developed for the production of super-clean steels, was investigated. Recent R&D work has proven that with very fine argon bubbling through a specially developed Special Porous Plug (SPP) at low flow rates, the total oxygen content of low-carbon steel grades can be lowered to 6 ppm under industrial vacuum conditions and to 10 ppm under an argon protective atmosphere. The prospective industrial application of VFB technology to a 56-t ladle furnace of Helliniki Halyvourgia S.A., Greece, in order to improve steel cleanliness, requires additional R&D effort. It is important to define the limits of VFB technology with respect to alloy dissolution, mixing time and homogenisation of the steel, and slag/metal reactions. In this work, a gas-driven bubble aqueous reactor model simulating a bottom gas-stirred ladle by means of gas injection through an SPP and a conventional porous plug was studied. Various operating conditions, as well as different positions of the porous plug with and without a top oil layer, were simulated. Tests concerning mixing time, solid-liquid mass transfer, critical gas flow rate and liquid/liquid mass transfer, using the SPP and a conventional porous plug, were performed. The evaluation of the experimental results delivered important information for the design and operation of steel ladles applying VFB technology. Experimental results with SPP bubble-agitated steel (1600 C) in laboratory- and technical-scale experiments in IF and VIF are presented and discussed. (orig.)

  2. SEIR model simulation for Hepatitis B

    Science.gov (United States)

    Side, Syafruddin; Irwan, Mulbar, Usman; Sanusi, Wahidah

    2017-09-01

    Mathematical modelling and simulation of Hepatitis B are discussed in this paper. The population is divided into four compartments, namely Susceptible, Exposed, Infected and Recovered (SEIR). Several factors affecting the population in this model are vaccination, immigration and emigration occurring in the population. The SEIR model yields a non-linear four-dimensional system of Ordinary Differential Equations (ODEs), which is then reduced to three dimensions. The SEIR model simulation was undertaken to predict the number of Hepatitis B cases. The results of the simulation indicate that the number of Hepatitis B cases will increase and then decrease over several months. The simulation using case numbers from Makassar also found a basic reproduction number less than one, which means that the city of Makassar is not an endemic area of Hepatitis B.

  3. Climate simulations for 1880-2003 with GISS modelE

    International Nuclear Information System (INIS)

    Hansen, J.; Lacis, A.; Miller, R.; Schmidt, G.A.; Russell, G.; Canuto, V.; Del Genio, A.; Hall, T.; Hansen, J.; Sato, M.; Kharecha, P.; Nazarenko, L.; Aleinov, I.; Bauer, S.; Chandler, M.; Faluvegi, G.; Jonas, J.; Ruedy, R.; Lo, K.; Cheng, Y.; Lacis, A.; Schmidt, G.A.; Del Genio, A.; Miller, R.; Cairns, B.; Hall, T.; Baum, E.; Cohen, A.; Fleming, E.; Jackman, C.; Friend, A.; Kelley, M.

    2007-01-01

    We carry out climate simulations for 1880-2003 with GISS modelE driven by ten measured or estimated climate forcings. An ensemble of climate model runs is carried out for each forcing acting individually and for all forcings acting together. We compare side-by-side the simulated climate change for each forcing, all forcings, observations, unforced variability among model ensemble members, and, if available, observed variability. Discrepancies between observations and simulations with all forcings are due to model deficiencies, inaccurate or incomplete forcings, and imperfect observations. Although there are notable discrepancies between model and observations, the fidelity is sufficient to encourage use of the model for simulations of future climate change. By using a fixed, well-documented model and accurately defining the 1880-2003 forcings, we aim to provide a benchmark against which the effect of improvements in the model, climate forcings, and observations can be tested. Principal model deficiencies include unrealistically weak tropical El Nino-like variability and a poor distribution of sea ice, with too much sea ice in the Northern Hemisphere and too little in the Southern Hemisphere. The greatest uncertainties in the forcings are the temporal and spatial variations of anthropogenic aerosols and their indirect effects on clouds. (authors)

  4. FASTBUS simulation models in VHDL

    International Nuclear Information System (INIS)

    Appelquist, G.

    1992-11-01

    Four hardware simulation models implementing the FASTBUS protocol are described. The models are written in the VHDL hardware description language to obtain portability, i.e. without dependence on any specific simulator. They include two complete FASTBUS devices, a full-duplex segment interconnect and ancillary logic for the segment. In addition, master and slave models using a high-level interface to describe FASTBUS operations are presented. With these models, different configurations of FASTBUS systems can be evaluated and the FASTBUS transactions of new devices can be verified. (au)

  5. Clay, Water, and Salt: Controls on the Permeability of Fine-Grained Sedimentary Rocks.

    Science.gov (United States)

    Bourg, Ian C; Ajo-Franklin, Jonathan B

    2017-09-19

    The ability to predict the permeability of fine-grained soils, sediments, and sedimentary rocks is a fundamental challenge in the geosciences with potentially transformative implications in subsurface hydrology. In particular, fine-grained sedimentary rocks (shale, mudstone) constitute about two-thirds of the sedimentary rock mass and play important roles in three energy technologies: petroleum geology, geologic carbon sequestration, and radioactive waste management. The problem is a challenging one that requires understanding the properties of complex natural porous media on several length scales. One inherent length scale, referred to hereafter as the mesoscale, is associated with the assemblages of large grains of quartz, feldspar, and carbonates over distances of tens of micrometers. Its importance is highlighted by the existence of a threshold in the core-scale mechanical properties and regional-scale energy uses of shale formations at a clay content X_clay ≈ 1/3, as predicted by an ideal packing model where a fine-grained clay matrix fills the gaps between the larger grains. A second important length scale, referred to hereafter as the nanoscale, is associated with the aggregation and swelling of clay particles (in particular, smectite clay minerals) over distances of tens of nanometers. Mesoscale phenomena that influence permeability are primarily mechanical and include, for example, the ability of contacts between large grains to prevent the compaction of the clay matrix. Nanoscale phenomena that influence permeability tend to be chemomechanical in nature, because they involve strong impacts of aqueous chemistry on clay swelling. The second length scale remains much less well characterized than the first, because of the inherent challenges associated with the study of strongly coupled nanoscale phenomena.
Advanced models of the nanoscale properties of fine-grained media rely predominantly on the Derjaguin-Landau-Verwey-Overbeek (DLVO) theory, a mean field
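The X_clay ≈ 1/3 threshold from the ideal packing model can be reproduced with a back-of-the-envelope calculation: the clay matrix becomes load-bearing once its bulk volume exceeds the intergranular porosity of the coarse-grain pack. The porosity values below are illustrative assumptions, not figures from the paper.

```python
# Ideal-packing sketch: solid-volume clay fraction at which clay exactly
# fills the pore space of a unit volume of the coarse-grain framework.
# Porosity values are illustrative assumptions.

def clay_threshold(sand_porosity=0.4, clay_porosity=0.1):
    clay_solid = sand_porosity * (1.0 - clay_porosity)  # clay solids in pores
    sand_solid = 1.0 - sand_porosity                    # coarse-grain solids
    return clay_solid / (clay_solid + sand_solid)

x_clay = clay_threshold()   # about 0.375, near the cited 1/3 threshold
```

For any reasonable sand-pack porosity (0.35-0.45) and clay porosity, this fraction lands in the vicinity of one-third, which is why the transition from grain-supported to clay-supported texture appears at that clay content.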

  6. Scientific Modeling and simulations

    CERN Document Server

    Diaz de la Rubia, Tomás

    2009-01-01

    Showcases the conceptual advantages of modeling which, coupled with unprecedented computing power through simulations, allows scientists to tackle the formidable problems of our society, such as the search for hydrocarbons, understanding the structure of a virus, or the intersection between simulations and real data in extreme environments

  7. Network Modeling and Simulation A Practical Perspective

    CERN Document Server

    Guizani, Mohsen; Khan, Bilal

    2010-01-01

    Network Modeling and Simulation is a practical guide to using modeling and simulation to solve real-life problems. The authors give a comprehensive exposition of the core concepts in modeling and simulation, and then systematically address the many practical considerations faced by developers in modeling complex large-scale systems. The authors provide examples from computer and telecommunication networks and use these to illustrate the process of mapping generic simulation concepts to domain-specific problems in different industries and disciplines. Key features: Provides the tools and strate

  8. Sahara Coal: the fine art of collecting fines for profit

    Energy Technology Data Exchange (ETDEWEB)

    Schreckengost, D.; Arnold, D.

    1984-09-01

    A considerable increase in the volume of fines in run-of-mine coal caused Sahara Coal in Illinois to redesign the fine coal system in their Harrisburg preparation plant. Details of the new design, and particularly of the fine refuse system which dewaters and dries 28 mesh x 0 clean coal, are given. Results have exceeded expectations in reducing product losses, operating costs and slurry pond cleaning costs.

  9. A comparison of three approaches for simulating fine-scale surface winds in support of wildland fire management: Part I. Model formulation and comparison against measurements

    Science.gov (United States)

    Jason M. Forthofer; Bret W. Butler; Natalie S. Wagenbrenner

    2014-01-01

    For this study three types of wind models have been defined for simulating surface wind flow in support of wildland fire management: (1) a uniform wind field (typically acquired from coarse-resolution (~4 km) weather service forecast models); (2) a newly developed mass-conserving model; and (3) a newly developed mass- and momentum-conserving model (referred to as the...

  10. High-Performance Modeling of Carbon Dioxide Sequestration by Coupling Reservoir Simulation and Molecular Dynamics

    KAUST Repository

    Bao, Kai

    2015-10-26

    The present work describes a parallel computational framework for carbon dioxide (CO2) sequestration simulation by coupling reservoir simulation and molecular dynamics (MD) on massively parallel high-performance-computing (HPC) systems. In this framework, a parallel reservoir simulator, reservoir-simulation toolbox (RST), solves the flow and transport equations that describe the subsurface flow behavior, whereas the MD simulations are performed to provide the required physical parameters. Technologies from several different fields are used to make this novel coupled system work efficiently. One of the major applications of the framework is the modeling of large-scale CO2 sequestration for long-term storage in subsurface geological formations, such as depleted oil and gas reservoirs and deep saline aquifers, which has been proposed as one of the few attractive and practical solutions to reduce CO2 emissions and address the global-warming threat. Fine grids and accurate prediction of the properties of fluid mixtures under geological conditions are essential for accurate simulations. In this work, CO2 sequestration is presented as a first example for coupling reservoir simulation and MD, although the framework can be extended naturally to the full multiphase multicomponent compositional flow simulation to handle more complicated physical processes in the future. Accuracy and scalability analysis are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our MD simulations compared with published data, and good scalability is observed with the massively parallel HPC systems. The performance and capacity of the proposed framework are well-demonstrated with several experiments with hundreds of millions to one billion cells. To the best of our knowledge, the present work represents the first attempt to couple reservoir simulation and molecular simulation for large-scale modeling. Because of the complexity of
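The coupling pattern described above, a molecular-scale property model feeding parameters to a continuum flow solver, can be illustrated with a toy loop. Both the viscosity correlation and the flow model below are stand-ins invented for illustration; they are not RST, real MD output, or the paper's actual interface.

```python
# Toy illustration of the reservoir/MD coupling pattern: a "molecular-scale"
# property model supplies fluid viscosity to a Darcy-type flow update.
# All correlations and coefficients are hypothetical.

def md_viscosity(pressure_mpa, temperature_k):
    """Stand-in for an MD-derived CO2 viscosity lookup [Pa*s]."""
    return 5e-5 + 1e-7 * pressure_mpa - 2e-8 * (temperature_k - 300.0)

def darcy_flux(k=1e-13, dp=1e6, length=10.0, mu=5e-5):
    """Darcy velocity q = (k / mu) * (dP / L) [m/s]."""
    return k / mu * dp / length

# The continuum solver queries the property model at local conditions,
# then uses the returned viscosity in its flux computation.
mu = md_viscosity(pressure_mpa=20.0, temperature_k=330.0)
q = darcy_flux(mu=mu)
```

In the real framework this exchange happens per grid block and per time step on HPC hardware; the sketch only shows the direction of the data flow, molecular scale to continuum scale.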

  11. Simulating the characteristics of tropical cyclones over the South West Indian Ocean using a Stretched-Grid Global Climate Model

    Science.gov (United States)

    Maoyi, Molulaqhooa L.; Abiodun, Babatunde J.; Prusa, Joseph M.; Veitch, Jennifer J.

    2018-03-01

    Tropical cyclones (TCs) are among the most devastating natural phenomena. This study examines the capability of a global climate model with grid stretching (CAM-EULAG, hereafter CEU) in simulating the characteristics of TCs over the South West Indian Ocean (SWIO). In the study, CEU is applied with a variable-increment global grid that has a fine horizontal resolution (0.5° × 0.5°) over the SWIO and coarser resolution (1° × 1° to 2° × 2.25°) over the rest of the globe. The simulation is performed for 11 years (1999-2010) and validated against the Joint Typhoon Warning Center (JTWC) best-track data, Global Precipitation Climatology Project (GPCP) satellite data, and ERA-Interim (ERAINT) reanalysis. CEU gives a realistic simulation of the SWIO climate and shows some skill in simulating the spatial distribution of TC genesis locations and tracks over the basin. However, there are some discrepancies between the observed and simulated climatic features over the Mozambique Channel (MC). Over the MC, CEU simulates a substantial cyclonic feature that produces a higher number of TCs than observed. The dynamical structure and intensities of the CEU TCs compare well with observations, though the model struggles to produce TCs with a pressure centre as deep as observed; the reanalysis has the same problem. The model captures the monthly variation of TC occurrence well but struggles to reproduce the interannual variation. The results of this study have application in improving and adapting CEU for seasonal forecasting over the SWIO.
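The variable-increment idea, fine grid spacing over a target region that relaxes to coarse spacing elsewhere, can be illustrated with a simple 1D stretched coordinate. This is a generic Gaussian-weighted mapping invented for illustration, not CAM-EULAG's actual stretching transformation.

```python
import math

# Illustrative 1D grid stretching: spacing is fine near `center` (the target
# region) and coarsens away from it. Generic mapping, not CAM-EULAG's.

def stretched_grid(n=64, center=0.5, fine=0.5, width=0.2):
    """Return n+1 coordinates in [0, 1]; `fine` < 1 shrinks the spacing
    near `center` within a zone of half-width `width`."""
    weights = [1.0 - (1.0 - fine) * math.exp(-((i / n - center) / width) ** 2)
               for i in range(n)]
    total = sum(weights)
    xs = [0.0]
    for w in weights:
        xs.append(xs[-1] + w / total)  # normalized so the grid spans [0, 1]
    return xs

xs = stretched_grid()
```

The cell count stays fixed, so the extra resolution over the target region is paid for by coarser cells elsewhere, which is the same trade-off a stretched-grid GCM makes globally.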

  12. Model reduction for circuit simulation

    CERN Document Server

    Hinze, Michael; Maten, E Jan W Ter

    2011-01-01

    Simulation based on mathematical models plays a major role in computer aided design of integrated circuits (ICs). Decreasing structure sizes, increasing packing densities and driving frequencies require the use of refined mathematical models, and to take into account secondary, parasitic effects. This leads to very high dimensional problems which nowadays require simulation times too large for the short time-to-market demands in industry. Modern Model Order Reduction (MOR) techniques present a way out of this dilemma in providing surrogate models which keep the main characteristics of the devi

  13. Grazing incidence x-ray diffraction at free-standing nanoscale islands: fine structure of diffuse scattering

    International Nuclear Information System (INIS)

    Grigoriev, D; Hanke, M; Schmidbauer, M; Schaefer, P; Konovalov, O; Koehler, R

    2003-01-01

    We have investigated the x-ray intensity distribution around the 220 reciprocal lattice point in the case of grazing-incidence diffraction from SiGe nanoscale free-standing islands grown on a Si(001) substrate by liquid-phase epitaxy (LPE). Experiments and computer simulations based on the distorted-wave Born approximation, utilizing the results of elasticity theory obtained by FEM modelling, have been carried out. The data reveal fine structure in the distribution of scattered radiation, with well-pronounced maxima and a complicated fringe pattern. An explanation of the observed diffraction phenomena in relation to the structure and morphology of the islands is given. An optimal island model, including shape, size and the spatial distribution of Ge, was elaborated.

  14. Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2014-01-01

    This book includes extended and revised versions of a set of selected papers from the 2012 International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2012) which was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and held in Rome, Italy. SIMULTECH 2012 was technically co-sponsored by the Society for Modeling & Simulation International (SCS), GDR I3, Lionphant Simulation, Simulation Team and IFIP and held in cooperation with AIS Special Interest Group of Modeling and Simulation (AIS SIGMAS) and the Movimento Italiano Modellazione e Simulazione (MIMOS).

  15. Understanding Emergency Care Delivery Through Computer Simulation Modeling.

    Science.gov (United States)

    Laker, Lauren F; Torabi, Elham; France, Daniel J; Froehle, Craig M; Goldlust, Eric J; Hoot, Nathan R; Kasaie, Parastu; Lyons, Michael S; Barg-Walkow, Laura H; Ward, Michael J; Wears, Robert L

    2018-02-01

    In 2017, Academic Emergency Medicine convened a consensus conference entitled, "Catalyzing System Change through Health Care Simulation: Systems, Competency, and Outcomes." This article, a product of the breakout session on "understanding complex interactions through systems modeling," explores the role that computer simulation modeling can and should play in research and development of emergency care delivery systems. This article discusses areas central to the use of computer simulation modeling in emergency care research. The four central approaches to computer simulation modeling are described (Monte Carlo simulation, system dynamics modeling, discrete-event simulation, and agent-based simulation), along with problems amenable to their use and relevant examples to emergency care. Also discussed is an introduction to available software modeling platforms and how to explore their use for research, along with a research agenda for computer simulation modeling. Through this article, our goal is to enhance adoption of computer simulation, a set of methods that hold great promise in addressing emergency care organization and design challenges. © 2017 by the Society for Academic Emergency Medicine.
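Of the four approaches named above, discrete-event simulation is the most common for emergency department queues. A minimal single-server queue (one triage desk, say) can be simulated with the Lindley recursion: each patient's wait equals the previous patient's wait plus the previous service time, minus the gap to the new arrival, floored at zero. The arrival and service rates below are illustrative assumptions.

```python
import random

# Minimal single-server FIFO queue via the Lindley recursion:
#   wait_i = max(0, wait_{i-1} + service_{i-1} - interarrival_i)
# Rates are illustrative; exponential arrivals and services (M/M/1).

def simulate_queue(n_patients=1000, arrival_rate=0.8, service_rate=1.0, seed=42):
    rng = random.Random(seed)
    wait = 0.0
    total_wait = 0.0
    prev_service = 0.0
    for _ in range(n_patients):
        gap = rng.expovariate(arrival_rate)        # time since last arrival
        wait = max(0.0, wait + prev_service - gap) # Lindley recursion
        total_wait += wait
        prev_service = rng.expovariate(service_rate)
    return total_wait / n_patients

avg_wait = simulate_queue()
```

Even this tiny model reproduces the qualitative behavior that matters for ED design: as the arrival rate approaches the service rate, average waits grow sharply rather than linearly.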

  16. Cosmologically safe QCD axion without fine-tuning

    International Nuclear Information System (INIS)

    Yamada, Masaki; Yanagida, Tsutomu T.; Yonekura, Kazuya

    2015-10-01

    Although QCD axion models are widely studied as solutions to the strong CP problem, they generically confront severe fine-tuning problems in guaranteeing the anomalous PQ symmetry. In this letter, we propose a simple QCD axion model without any fine-tuning. We introduce an extra dimension and a pair of extra quarks living separately on two branes, both also charged under a bulk Abelian gauge symmetry. We assume a monopole condensation on our brane at an intermediate scale, which implies that the extra quarks develop chiral symmetry breaking and the PQ symmetry is broken. In contrast to Kim's original model, our model explains the origin of the PQ symmetry thanks to the extra dimension and avoids the cosmological domain wall problem because of the chiral symmetry breaking in the Abelian gauge theory.

  17. WATSFAR: numerical simulation of soil WATer and Solute fluxes using a FAst and Robust method

    Science.gov (United States)

    Crevoisier, David; Voltz, Marc

    2013-04-01

    To simulate the evolution of hydro- and agro-systems, numerous spatialised models are based on a multi-local approach, and improvement of simulation accuracy through data-assimilation techniques is now common in many application fields. The latest acquisition techniques provide large amounts of experimental data, which increases the efficiency of parameter estimation and inverse modelling approaches. In turn, simulations are often run over large temporal and spatial domains, requiring a large number of model runs. Despite the regular increase in computing capacity, the development of fast and robust methods describing the evolution of saturated-unsaturated soil water and solute fluxes thus remains a challenge. Ross (2003, Agron J 95:1352-1361) proposed a method solving the 1D Richards and convection-diffusion equations that fulfils these requirements. The method is based on a non-iterative approach, which reduces the risk of numerical divergence and allows the use of coarser spatial and temporal discretisations while maintaining satisfactory accuracy. Crevoisier et al. (2009, Adv Wat Res 32:936-947) proposed some technical improvements and validated the method over a wider range of agro-pedo-climatic situations. In this poster, we present the simulation code WATSFAR, which generalises the Ross method to other mathematical representations of the soil water retention curve (i.e. the standard and modified van Genuchten models) and includes a dual-permeability context (preferential fluxes) for both water and solute transfers. The situations tested are those known to be least favourable for standard numerical methods: fine-textured and extremely dry soils, intense rainfall and solute fluxes, soils near saturation, ... The results of WATSFAR have been compared with the standard finite element model Hydrus. The analysis of these comparisons highlights two main advantages for WATSFAR, i) robustness: even on fine-textured soil or high water and solute
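The standard van Genuchten retention curve mentioned above relates water content to pressure head and is easy to state in code. The parameter values below are typical textbook values for a loam, used only as illustrative assumptions, not figures from the poster.

```python
# Standard van Genuchten soil-water retention curve theta(h).
# Parameters are typical loam values (illustrative assumptions):
#   theta_r, theta_s : residual and saturated water content [-]
#   alpha [1/m], n   : shape parameters, with m = 1 - 1/n

def van_genuchten_theta(h, theta_r=0.078, theta_s=0.43, alpha=3.6, n=1.56):
    """Water content for pressure head h [m]; h < 0 means unsaturated."""
    if h >= 0.0:
        return theta_s           # at or above saturation
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + abs(alpha * h) ** n) ** m
```

The strong nonlinearity of this curve near saturation and at high suction is precisely what makes fine-textured and extremely dry soils the hardest cases for standard iterative Richards solvers.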

  18. Feasibility Assessment of a Fine-Grained Access Control Model on Resource Constrained Sensors.

    Science.gov (United States)

    Uriarte Itzazelaia, Mikel; Astorga, Jasone; Jacob, Eduardo; Huarte, Maider; Romaña, Pedro

    2018-02-13

    Upcoming smart scenarios enabled by the Internet of Things (IoT) envision smart objects that provide services that can adapt to user behavior or be managed to achieve greater productivity. In such environments, smart things are inexpensive and, therefore, constrained devices. However, they are also critical components because of the importance of the information that they provide. Given this, strong security is a requirement, but not all security mechanisms in general and access control models in particular are feasible. In this paper, we present the feasibility assessment of an access control model that utilizes a hybrid architecture and a policy language that provides dynamic fine-grained policy enforcement in the sensors, which requires an efficient message exchange protocol called Hidra. This experimental performance assessment includes a prototype implementation, a performance evaluation model, the measurements and related discussions, which demonstrate the feasibility and adequacy of the analyzed access control model.
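The core of fine-grained policy enforcement is matching request attributes against an ordered list of rules. The sketch below illustrates that general idea only; it is not the Hidra protocol, its policy language, or its wire format, and all attribute names are hypothetical.

```python
# Generic sketch of fine-grained, attribute-based policy enforcement on a
# constrained device: first matching rule wins, deny by default.
# This is an illustration, not the actual Hidra policy language.

def evaluate(policy, request):
    """Return the effect of the first rule whose conditions all match."""
    for rule in policy:
        if all(request.get(attr) == value for attr, value in rule["if"].items()):
            return rule["effect"]
    return "deny"   # default-deny keeps the sensor safe on unmatched requests

policy = [
    {"if": {"role": "nurse", "action": "read"}, "effect": "permit"},
    {"if": {"role": "clinician"},               "effect": "permit"},
]
```

On a real constrained sensor the interesting engineering is in the compact encoding and the message exchange that installs such policies, which is what the paper's performance assessment measures; the evaluation loop itself stays this simple.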

  19. Steel-concrete bond model for the simulation of reinforced concrete structures

    International Nuclear Information System (INIS)

    Mang, Chetra

    2015-01-01

    The behavior of reinforced concrete structures can become extremely complex once the cracking threshold is exceeded. The composite character of reinforced concrete must then be represented finely, especially in the stress-transfer zone at the steel-concrete interface. For industrial structure computations, a perfect-bond hypothesis between steel and concrete is usually assumed, in which the complex phenomena at the two-material interface are not taken into account. This perfect-bond assumption, however, cannot predict significant disorders or the repartition and distribution of the cracks, which are directly linked to the steel. In the literature, several numerical methods have been proposed to study the concrete-steel bond behavior finely, but these methods are difficult to apply to complex 3D structures. Building on the results obtained in the thesis of Torre-Casanova (2012), a new concrete-steel bond model has been developed to improve the performance (iteration count and computational time) and the representation (cyclic behavior) of the initial one. The new model has been verified against the analytical solution of a steel-concrete tie and validated against experimental results. It has also been tested at the structural scale to compute the behavior of a shear wall from the French national project (CEOS.fr) under monotonic load. Because of the numerical difficulty of post-processing crack openings in complex crack patterns, a new crack-opening method was also developed. This method uses the discontinuity of the relative displacement, or the change of sign of the concrete-steel slip, to detect the crack position. The simulation-experiment comparison validates not only the new concrete-steel bond model but also the new crack post-processing method.
Finally, the cyclic behavior of the bond law with the non-reduced envelope is adopted and integrated into the new bond model in order to take

  20. A precision study of the fine tuning in the DiracNMSSM

    International Nuclear Information System (INIS)

    Kaminska, Anna; Ross, Graham G.; Staub, Florian; Bonn Univ.

    2014-01-01

    Recently the DiracNMSSM has been proposed as a possible solution to reduce the fine tuning in supersymmetry. We determine the degree of fine tuning needed in the DiracNMSSM with and without non-universal gaugino masses and compare it with the fine tuning in the GNMSSM. To apply reasonable cuts on the allowed parameter regions we perform a precise calculation of the Higgs mass. In addition, we include the limits from direct SUSY searches and dark matter abundance. We find that both models are comparable in terms of fine tuning, with the minimal fine tuning in the GNMSSM slightly smaller.

  1. A moving subgrid model for simulation of reflood heat transfer

    International Nuclear Information System (INIS)

    Frepoli, Cesare; Mahaffy, John H.; Hochreiter, Lawrence E.

    2003-01-01

    In the quench front and froth region the thermal-hydraulic parameters experience a sharp axial variation. The heat transfer regime changes from single-phase liquid, to nucleate boiling, to transition boiling and finally to film boiling over a small axial distance. One of the major limitations of all current best-estimate codes is that a relatively coarse mesh is used to solve the complex fluid flow and heat transfer problem in the proximity of the quench front during reflood. The use of a fine axial mesh for the entire core is prohibitive because of the large computational cost involved. Moreover, as the mesh size decreases, standard numerical methods based on a semi-implicit scheme tend to become unstable. A subgrid model was developed to resolve the complex thermal-hydraulic problem in the quench front and froth region. This model is a Fine Hydraulic Moving Grid (FHMG) that overlies a coarse Eulerian mesh in the proximity of the quench front and froth region. The fine mesh moves in the core and follows the quench front as it advances while the rods cool and quench. The FHMG software package was developed and implemented into the COBRA-TF computer code. This paper presents the model and discusses preliminary results obtained with the COBRA-TF/FHMG computer code

  2. An ensemble-based dynamic Bayesian averaging approach for discharge simulations using multiple global precipitation products and hydrological models

    Science.gov (United States)

    Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris

    2018-03-01

    Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, posterior probability is quantified as functions of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generate better deterministic discharges than traditional approaches (weighted average methods with equal and varying weights and maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in training and validation periods respectively, which are at least 0.06 and 0.13 higher than traditional approaches. In addition, with increased training data, assessment criteria values of e-Bay show smaller fluctuations than traditional approaches and its performance becomes outstanding. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic applications to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.
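The expectation step at the heart of model averaging can be sketched as a weighted sum over the ensemble members. The fixed-weight version below is a generic illustration only: the paper's e-Bay approach additionally makes the weights vary with discharge magnitude and timing, which is not reproduced here.

```python
# Generic (static) Bayesian model averaging for discharge: posterior model
# weights multiply each member's simulated series. Fixed weights are an
# illustrative simplification of the paper's dynamic e-Bay weights.

def bma_discharge(simulations, weights):
    """Expected discharge series: sum_k w_k * Q_k(t), weights normalized."""
    total = sum(weights)
    norm = [w / total for w in weights]
    n = len(simulations[0])
    return [sum(w * sim[t] for w, sim in zip(norm, simulations))
            for t in range(n)]

members = [[10.0, 12.0, 8.0],    # precipitation product/model combination 1
           [14.0, 10.0, 6.0]]    # precipitation product/model combination 2
expected = bma_discharge(members, weights=[0.75, 0.25])  # -> [11.0, 11.5, 7.5]
```

Replacing the fixed `weights` with functions of the current discharge magnitude and timing, learned from the training period, is what turns this static average into the dynamic scheme the paper evaluates.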

  3. Automated Simulation Model Generation

    NARCIS (Netherlands)

    Huang, Y.

    2013-01-01

    One of today's challenges in the field of modeling and simulation is to model increasingly larger and more complex systems. Complex models take long to develop and incur high costs. With the advances in data collection technologies and more popular use of computer-aided systems, more data has become

  4. High-performance modeling of CO2 sequestration by coupling reservoir simulation and molecular dynamics

    KAUST Repository

    Bao, Kai

    2013-01-01

    The present work describes a parallel computational framework for CO2 sequestration simulation by coupling reservoir simulation and molecular dynamics (MD) on massively parallel HPC systems. In this framework, a parallel reservoir simulator, the Reservoir Simulation Toolbox (RST), solves the flow and transport equations that describe the subsurface flow behavior, while the molecular dynamics simulations are performed to provide the required physical parameters. Numerous technologies from different fields are employed to make this novel coupled system work efficiently. One of the major applications of the framework is the modeling of large-scale CO2 sequestration for long-term storage in subsurface geological formations, such as depleted reservoirs and deep saline aquifers, which has been proposed as one of the most attractive and practical solutions to reduce CO2 emissions and address the global-warming threat. To solve such problems effectively, fine grids and accurate prediction of the properties of fluid mixtures are essential. In this work, CO2 sequestration is presented as our first example of coupling reservoir simulation and molecular dynamics, while the framework can be extended naturally to full multiphase multicomponent compositional flow simulation to handle more complicated physical processes in the future. Accuracy and scalability analyses are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our MD simulations compared with published data, and good scalability is observed with the massively parallel HPC systems. The performance and capacity of the proposed framework are well demonstrated with several experiments with hundreds of millions to a billion cells. To the best of our knowledge, this work represents the first attempt to couple reservoir simulation and molecular simulation for large-scale modeling. Due to the complexity of the subsurface systems

  5. An introduction to enterprise modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ostic, J.K.; Cannon, C.E. [Los Alamos National Lab., NM (United States). Technology Modeling and Analysis Group

    1996-09-01

    As part of an ongoing effort to continuously improve the productivity, quality, and efficiency of both industry and Department of Energy enterprises, Los Alamos National Laboratory is investigating various manufacturing and business enterprise simulation methods. A number of enterprise simulation software models are being developed to enable engineering analysis of enterprise activities. In this document the authors define the scope of enterprise modeling and simulation efforts, and review recent work in enterprise simulation at Los Alamos National Laboratory as well as at other industrial, academic, and research institutions. References for enterprise modeling and simulation methods and a glossary of enterprise-related terms are provided.

  6. Risk of pneumonia in obstructive lung disease: A real-life study comparing extra-fine and fine-particle inhaled corticosteroids.

    Science.gov (United States)

    Sonnappa, Samatha; Martin, Richard; Israel, Elliot; Postma, Dirkje; van Aalderen, Wim; Burden, Annie; Usmani, Omar S; Price, David B

    2017-01-01

    Regular use of inhaled corticosteroids (ICS) in patients with obstructive lung diseases has been associated with a higher risk of pneumonia, particularly in COPD. The risk of pneumonia has not previously been evaluated in relation to the ICS particle size and dose used. Historical cohort, UK database study of 23,013 patients with obstructive lung disease aged 12-80 years prescribed extra-fine or fine-particle ICS. The endpoints assessed during the outcome year were diagnosis of pneumonia, acute exacerbations and acute respiratory events in relation to ICS dose. To determine the association between ICS particle size, dose and risk of pneumonia in unmatched and matched treatment groups, logistic and conditional logistic regression models were used. 14,788 patients were stepped up to fine-particle ICS and 8225 to extra-fine ICS. On unmatched analysis, patients stepping up to extra-fine ICS were significantly less likely to be coded for pneumonia (adjusted odds ratio [aOR] 0.60; 95% CI 0.37, 0.97); to experience acute exacerbations (adjusted risk ratio [aRR] 0.91; 95% CI 0.85, 0.97); and to experience acute respiratory events (aRR 0.90; 95% CI 0.86, 0.94) compared with patients stepping up to fine-particle ICS. Patients prescribed daily ICS doses in excess of 700 mcg (fluticasone propionate equivalent) had a significantly higher risk of pneumonia (OR [95% CI] 2.38 [1.17, 4.83]) compared with patients prescribed lower doses, irrespective of particle size. These findings suggest that patients with obstructive lung disease on extra-fine particle ICS have a lower risk of pneumonia than those on fine-particle ICS, with those receiving higher ICS doses being at greater risk.
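The study's estimates come from adjusted (conditional) logistic regression, but the underlying odds-ratio arithmetic with a Wald confidence interval can be sketched in a few lines. The event counts below are invented for illustration and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Wald 95% CI on the log scale.
    a, b = events / non-events in the exposed group;
    c, d = events / non-events in the comparator group."""
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (NOT the study's data): pneumonia events among
# patients stepped up to extra-fine vs. fine-particle ICS.
or_, lo, hi = odds_ratio_ci(a=25, b=8200, c=75, d=14713)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```

This shows only the unadjusted calculation; the reported aORs additionally control for confounders and, in the matched analysis, for the matching structure.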

  7. Flow and transport simulation of Madeira River using three depth-averaged two-equation turbulence closure models

    Directory of Open Access Journals (Sweden)

    Li-ren Yu

    2012-03-01

    Full Text Available This paper describes a numerical simulation in the Amazon water system, aiming to develop a quasi-three-dimensional numerical tool for refined modeling of turbulent flow and passive transport of mass in natural waters. Three depth-averaged two-equation turbulence closure models, k˜−ε˜, k˜−w˜, and k˜−ω˜, were used to close the non-simplified quasi-three-dimensional hydrodynamic fundamental governing equations. The discretized equations were solved with an advanced multi-grid iterative method using non-orthogonal body-fitted coarse and fine grids with a collocated variable arrangement. In addition to the steady flow computation, the processes of contaminant inpouring and plume development at the beginning of discharge, caused by a side-discharge from a tributary, were also numerically investigated. The three depth-averaged two-equation closure models are all suitable for modeling strong mixing turbulence. The newly established turbulence models, such as the k˜−ω˜ model, with a higher order of magnitude of the turbulence parameter, provide a possibility for improving computational precision.

  8. Modeling and simulation of blood collection systems.

    Science.gov (United States)

    Alfonso, Edgar; Xie, Xiaolan; Augusto, Vincent; Garraud, Olivier

    2012-03-01

    This paper addresses the modeling and simulation of blood collection systems in France for both fixed-site and mobile blood collection with walk-in whole blood donors and scheduled plasma and platelet donors. Petri net models are first proposed to precisely describe the different blood collection processes, donor behaviors, their material/human resource requirements and relevant regulations. The Petri net models are then enriched with quantitative modeling of donor arrivals, donor behaviors, activity times and resource capacity. Relevant performance indicators are defined. The resulting simulation models can be straightforwardly implemented with any simulation language. Numerical experiments are performed to show how the simulation models can be used to select, for different walk-in donor arrival patterns, appropriate human resource planning and donor appointment strategies.
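The Petri-net models themselves are not reproduced in this record, but the kind of discrete-event logic they formalize — walk-in donors competing for a limited number of donation beds — can be sketched with a simple event-list simulation. All parameter values (arrival rate, donation time, bed count) are illustrative assumptions, not the paper's data:

```python
import heapq
import random

def simulate_collection(n_donors=500, arrival_rate=0.25, service_mean=12.0,
                        n_beds=4, seed=42):
    """Toy discrete-event model of a walk-in donation site: exponential
    inter-arrival times (donors/min), exponential donation times (min),
    a fixed number of beds served first-come-first-served.
    Returns the mean donor waiting time in minutes."""
    rng = random.Random(seed)
    t = 0.0
    bed_free = [0.0] * n_beds          # time at which each bed becomes free
    heapq.heapify(bed_free)
    total_wait = 0.0
    for _ in range(n_donors):
        t += rng.expovariate(arrival_rate)     # next walk-in arrival
        free_at = heapq.heappop(bed_free)      # earliest available bed
        start = max(t, free_at)
        total_wait += start - t
        heapq.heappush(bed_free, start + rng.expovariate(1.0 / service_mean))
    return total_wait / n_donors

print(f"mean wait with 4 beds: {simulate_collection():.1f} min")
print(f"mean wait with 6 beds: {simulate_collection(n_beds=6):.1f} min")
```

Rerunning such a model over different arrival patterns and resource levels is exactly the kind of experiment the paper uses to compare staffing and appointment strategies.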

  9. A critical assessment of flux and source term closures in shallow water models with porosity for urban flood simulations

    Science.gov (United States)

    Guinot, Vincent

    2017-11-01

    The validity of flux and source term formulae used in shallow water models with porosity for urban flood simulations is assessed by solving the two-dimensional shallow water equations over computational domains representing periodic building layouts. The models under assessment are the Single Porosity (SP), Integral Porosity (IP) and Dual Integral Porosity (DIP) models. Nine different geometries are considered. 18 two-dimensional initial value problems and 6 two-dimensional boundary value problems are defined. This results in a set of 96 fine grid simulations. Analysing the simulation results leads to the following conclusions: (i) the DIP flux and source term models outperform those of the SP and IP models when the Riemann problem is aligned with the main street directions; (ii) all models give erroneous flux closures when the Riemann problem is not aligned with one of the main street directions or when the main street directions are not orthogonal; (iii) the solution of the Riemann problem is self-similar in space-time when the street directions are orthogonal and the Riemann problem is aligned with one of them; (iv) a momentum balance confirms the existence of the transient momentum dissipation model presented in the DIP model; (v) none of the source term models presented so far in the literature allows all flow configurations to be accounted for; and (vi) future laboratory experiments aiming at the validation of flux and source term closures should focus on the high-resolution, two-dimensional monitoring of both water depth and flow velocity fields.

  10. Toward verifying fossil fuel CO2 emissions with the CMAQ model: motivation, model description and initial simulation.

    Science.gov (United States)

    Liu, Zhen; Bambha, Ray P; Pinto, Joseph P; Zeng, Tao; Boylan, Jim; Huang, Maoyi; Lei, Huimin; Zhao, Chun; Liu, Shishi; Mao, Jiafu; Schwalm, Christopher R; Shi, Xiaoying; Wei, Yaxing; Michelsen, Hope A

    2014-04-01

    Motivated by the question of whether and how a state-of-the-art regional chemical transport model (CTM) can facilitate characterization of CO2 spatiotemporal variability and verify CO2 fossil-fuel emissions, we for the first time applied the Community Multiscale Air Quality (CMAQ) model to simulate CO2. This paper presents methods, input data, and initial results for CO2 simulation using CMAQ over the contiguous United States in October 2007. Modeling experiments have been performed to understand the roles of fossil-fuel emissions, biosphere-atmosphere exchange, and meteorology in regulating the spatial distribution of CO2 near the surface over the contiguous United States. Three sets of net ecosystem exchange (NEE) fluxes were used as input to assess the impact of uncertainty of NEE on CO2 concentrations simulated by CMAQ. Observational data from six tall tower sites across the country were used to evaluate model performance. In particular, at the Boulder Atmospheric Observatory (BAO), a tall tower site that receives urban emissions from Denver, CO, the CMAQ model using hourly varying, high-resolution CO2 fossil-fuel emissions from the Vulcan inventory and CarbonTracker-optimized NEE reproduced the observed diurnal profile of CO2 reasonably well but with a low bias in the early morning. The spatial distribution of CO2 was found to correlate with NOx, SO2, and CO, because of their similar fossil-fuel emission sources and common transport processes. These initial results from CMAQ demonstrate the potential of using a regional CTM to help interpret CO2 observations and understand CO2 variability in space and time. The ability to simulate a full suite of air pollutants in CMAQ will also facilitate investigations of their use as tracers for CO2 source attribution. This work serves as a proof of concept and the foundation for more comprehensive examinations of CO2 spatiotemporal variability and various uncertainties in the future.

  11. Models and simulations

    International Nuclear Information System (INIS)

    Lee, M.J.; Sheppard, J.C.; Sullenberger, M.; Woodley, M.D.

    1983-09-01

    On-line mathematical models have been used successfully for computer controlled operation of SPEAR and PEP. The same model control concept is being implemented for the operation of the LINAC and for the Damping Ring, which will be part of the Stanford Linear Collider (SLC). The purpose of this paper is to describe the general relationships between models, simulations and the control system for any machine at SLAC. The work we have done on the development of the empirical model for the Damping Ring will be presented as an example

  12. Constitutive modelling of the undrained shear strength of fine grained soils containing gas

    Energy Technology Data Exchange (ETDEWEB)

    Grozic, J.L.H. [Calgary Univ., AB (Canada); Nadim, F.; Kvalstad, T.J. [Norwegian Geotechnical Inst., Oslo (Norway)

    2002-07-01

    The behaviour of fine grained gassy soils was studied in order to develop a technique to quantitatively evaluate geohazards. Gas can occur in seabeds either in solution in the pore water, undissolved in the form of gas-filled voids, or as gas hydrates. In offshore soils, the degree of saturation is generally greater than 90 per cent, resulting in a soil structure with a continuous water phase and a discontinuous gas phase. The presence of methane gas will impact the strength of the soil, which alters its resistance to submarine sliding. This paper presents a constitutive model for determining the undrained shear strength of fine-grained gassy soils to assess the stability of deep water marine slopes for offshore developments. Methane gas is shown to have a beneficial effect on the soil strength in compressive loading, but the peak strength is achieved at larger deformations. The increased strength is a result of compression and solution of the gas, which cause partial drainage and reduced pore pressures. The undrained shear strength of gassy soils was shown to increase with increasing initial consolidation stress, increasing volumetric coefficient of solubility, and increasing initial void ratio. 9 refs., 3 tabs., 6 figs.

  13. Simulation model of a PWR power plant

    International Nuclear Information System (INIS)

    Larsen, N.

    1987-03-01

    A simulation model of a hypothetical PWR power plant is described. A large number of disturbances and failures in plant function can be simulated. The model is written as seven modules to the modular simulation system for continuous processes DYSIM and serves also as a user example of this system. The model runs in Fortran 77 on the IBM-PC-AT. (author)

  14. Dispersibility of lactose fines as compared to API in dry powders for inhalation.

    Science.gov (United States)

    Thalberg, Kyrre; Åslund, Simon; Skogevall, Marcus; Andersson, Patrik

    2016-05-17

    This work investigates the dispersion performance of fine lactose particles as a function of processing time, and compares it to that of the API, using Beclomethasone Dipropionate (BDP) as the model API. The total load of fine particles is kept constant in the formulations while the proportions of API and lactose fines are varied. Fine particle assessment demonstrates that the lactose fines have higher dispersibility than the API. For standard formulations, processing time has a limited effect on the Fine Particle Fraction (FPF). For formulations containing magnesium stearate (MgSt), the FPF of BDP is heavily influenced by processing time, with an initial increase followed by a decrease at longer mixing times. An equation modeling the observed behavior is presented. Surprisingly, the dispersibility of the lactose fines present in the same formulation remains unaffected by mixing time. Magnesium analysis demonstrates that MgSt is transferred to the fine particles during the mixing process, thus lubricating both BDP and lactose fines, which leads to an increased FPF. Dry particle sizing of the formulations reveals a loss of fine particles at longer mixing times. Incorporation of fine particles into the carrier surfaces is believed to be behind this, and is hence a mechanism of importance as regards the dispersion performance of dry powders for inhalation. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Structured building model reduction toward parallel simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University

    2013-08-26

    Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any inter-processor communication.
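As a toy illustration of the clustering idea — not the paper's algorithm — a tightly coupled pair of thermal zones can be lumped into one node of an RC network with little steady-state error, reducing the number of dynamical equations from three to two. All network values below are invented:

```python
def simulate(C, G, q, G_amb, T_amb=20.0, dt=0.1, steps=20000):
    """Forward-Euler integration of a lumped RC thermal network to
    steady state. C: node capacitances, G: {(i, j): conductance}
    between nodes, q: heat inputs, G_amb: conductance to ambient."""
    n = len(C)
    T = [T_amb] * n
    for _ in range(steps):
        p = [q[i] + G_amb[i] * (T_amb - T[i]) for i in range(n)]
        for (i, j), g in G.items():
            f = g * (T[j] - T[i])
            p[i] += f
            p[j] -= f
        for i in range(n):
            T[i] += dt * p[i] / C[i]
    return T

# Full model: zones 0 and 1 are tightly coupled (one thermal cluster),
# zone 2 is weakly attached. Illustrative values only.
full = simulate(C=[10.0, 15.0, 8.0],
                G={(0, 1): 50.0, (1, 2): 0.5},
                q=[100.0, 80.0, 20.0],
                G_amb=[2.0, 2.0, 1.0])

# Reduced model: lump cluster {0, 1} into one node, summing
# capacitances, heat inputs, and boundary conductances.
red = simulate(C=[25.0, 8.0],
               G={(0, 1): 0.5},
               q=[180.0, 20.0],
               G_amb=[4.0, 1.0])

print(f"full:    {full[0]:.1f}, {full[1]:.1f}, {full[2]:.1f} C")
print(f"reduced: cluster {red[0]:.1f} C, zone 2 {red[1]:.1f} C")
```

The structured approaches surveyed in the paper choose such clusters systematically from the model's structure; this sketch only shows why lumping strongly coupled states is nearly lossless.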

  16. Modeling and Simulation for Safeguards

    International Nuclear Information System (INIS)

    Swinhoe, Martyn T.

    2012-01-01

    The purpose of this talk is to give an overview of the role of modeling and simulation in Safeguards R and D and to introduce (some of) the tools used. Some definitions are: (1) Modeling - the representation, often mathematical, of a process, concept, or operation of a system, often implemented by a computer program; (2) Simulation - the representation of the behavior or characteristics of one system through the use of another system, especially a computer program designed for the purpose; and (3) Safeguards - the timely detection of diversion of significant quantities of nuclear material. The roles of modeling and simulation are: (1) to calculate amounts of material (plant modeling); (2) to calculate signatures of nuclear material etc. (source terms); and (3) to model detector performance (radiation transport and detection). Plant modeling software (e.g. FACSIM) gives the flows and amounts of material stored at all parts of the process. In safeguards this allows us to calculate the expected uncertainty of the mass and evaluate the expected MUF. We can determine the measurement accuracy required to achieve a certain performance.

  17. A Framework for Parallel Numerical Simulations on Multi-Scale Geometries

    KAUST Repository

    Varduhn, Vasco

    2012-06-01

    In this paper, an approach to performing numerical multi-scale simulations on finely detailed geometries is presented. In particular, the focus lies on the generation of sufficiently fine mesh representations, where a resolution of dozens of millions of voxels is unavoidable in order to adequately represent the geometry. Furthermore, the propagation of boundary conditions is investigated by using simulation results on the coarser simulation scale as input boundary conditions on the next finer scale. Finally, the applicability of our approach is shown on a two-phase simulation for flooding scenarios in urban structures, running from a city-wide scale to a finely detailed indoor scale on feature-rich building geometries. © 2012 IEEE.

  18. Fine-grain reconfigurable platform: FPGA hardware design and software toolset development

    International Nuclear Information System (INIS)

    Pappas, I; Kalenteridis, V; Vassiliadis, N; Pournara, H; Siozios, K; Koutroumpezis, G; Tatas, K; Nikolaidis, S; Siskos, S; Soudris, D J; Thanailakis, A

    2005-01-01

    A complete system for the implementation of digital logic in a fine-grain reconfigurable platform is introduced. The system is composed of two parts: the fine-grain reconfigurable hardware platform (FPGA) on which the logic is implemented, and the set of CAD tools for mapping logic onto the FPGA platform. A novel energy-efficient FPGA architecture is presented (CLB, interconnect network, configuration hardware) and simulated in STM 0.18 μm CMOS technology. Concerning the tool flow, each tool can operate as a standalone program as well as part of a complete design framework, composed of existing and new tools

  19. Fine-grain reconfigurable platform: FPGA hardware design and software toolset development

    Energy Technology Data Exchange (ETDEWEB)

    Pappas, I [Electronics and Computers Div., Department of Physics, Aristotle University of Thessaloniki, 54006 Thessaloniki (Greece); Kalenteridis, V [Electronics and Computers Div., Department of Physics, Aristotle University of Thessaloniki, 54006 Thessaloniki (Greece); Vassiliadis, N [Electronics and Computers Div., Department of Physics, Aristotle University of Thessaloniki, 54006 Thessaloniki (Greece); Pournara, H [Electronics and Computers Div., Department of Physics, Aristotle University of Thessaloniki, 54006 Thessaloniki (Greece); Siozios, K [VLSI Design and Testing Center, Department of Electrical and Computer Engineering, Democritus University of Thrace, 67100 Xanthi (Greece); Koutroumpezis, G [VLSI Design and Testing Center, Department of Electrical and Computer Engineering, Democritus University of Thrace, 67100 Xanthi (Greece); Tatas, K [VLSI Design and Testing Center, Department of Electrical and Computer Engineering, Democritus University of Thrace, 67100 Xanthi (Greece); Nikolaidis, S [Electronics and Computers Div., Department of Physics, Aristotle University of Thessaloniki, 54006 Thessaloniki (Greece); Siskos, S [Electronics and Computers Div., Department of Physics, Aristotle University of Thessaloniki, 54006 Thessaloniki (Greece); Soudris, D J [VLSI Design and Testing Center, Department of Electrical and Computer Engineering, Democritus University of Thrace, 67100 Xanthi (Greece); Thanailakis, A [Electronics and Computers Div., Department of Physics, Aristotle University of Thessaloniki, 54006 Thessaloniki (Greece)

    2005-01-01

    A complete system for the implementation of digital logic in a fine-grain reconfigurable platform is introduced. The system is composed of two parts: the fine-grain reconfigurable hardware platform (FPGA) on which the logic is implemented, and the set of CAD tools for mapping logic onto the FPGA platform. A novel energy-efficient FPGA architecture is presented (CLB, interconnect network, configuration hardware) and simulated in STM 0.18 μm CMOS technology. Concerning the tool flow, each tool can operate as a standalone program as well as part of a complete design framework, composed of existing and new tools.

  20. Modeling Of In-Vehicle Human Exposure to Ambient Fine Particulate Matter

    Science.gov (United States)

    Liu, Xiaozhen; Frey, H. Christopher

    2012-01-01

    A method for estimating in-vehicle PM2.5 exposure as part of a scenario-based population simulation model is developed and assessed. In existing models, such as the Stochastic Human Exposure and Dose Simulation model for Particulate Matter (SHEDS-PM), in-vehicle exposure is estimated using linear regression based on the area-wide ambient PM2.5 concentration. An alternative modeling approach is explored based on estimation of the near-road PM2.5 concentration and an in-vehicle mass balance. The near-road PM2.5 concentration is estimated using a dispersion model and fixed site monitor (FSM) data. In-vehicle concentration is estimated based on air exchange rate and filter efficiency. In-vehicle concentration varies with road type, traffic flow, wind speed, stability class, and ventilation. Average in-vehicle exposure is estimated to contribute 10 to 20 percent of average daily exposure. The contribution of in-vehicle exposure to total daily exposure can be higher for some individuals. Recommendations are made for updating exposure models and implementing the alternative approach. PMID:23101000
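The mass-balance step of the alternative approach can be sketched as a steady-state, well-mixed cabin model driven by air exchange rate and filter efficiency. The function and all parameter values below (exchange rates, efficiencies, deposition rate) are illustrative assumptions, not those used in SHEDS-PM:

```python
def in_vehicle_pm25(c_out, ach, filt_eff, recirc_ach=0.0,
                    recirc_eff=0.0, dep_rate=0.5):
    """Steady-state in-cabin PM2.5 (ug/m3) from a well-mixed mass
    balance: outdoor-air exchange (ach, 1/h) through a filter of
    efficiency filt_eff, optional recirculation through a cabin
    filter, and first-order deposition (dep_rate, 1/h)."""
    source = ach * (1.0 - filt_eff) * c_out          # filtered intake
    sinks = ach + recirc_ach * recirc_eff + dep_rate  # removal rates
    return source / sinks

near_road = 35.0  # ug/m3, e.g. a dispersion-model near-road estimate

# Windows closed, fan drawing outside air through the filter:
print(in_vehicle_pm25(near_road, ach=20.0, filt_eff=0.3))
# Recirculation mode: low outdoor exchange, air looped through the filter:
print(in_vehicle_pm25(near_road, ach=3.0, filt_eff=0.3,
                      recirc_ach=30.0, recirc_eff=0.3))
```

The sketch reproduces the qualitative dependence on ventilation mode the record describes: recirculation sharply lowers the in-cabin concentration for the same near-road level.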

  1. Fine Coining of Bulk Metal Formed Parts in Digital Environment

    International Nuclear Information System (INIS)

    Pepelnjak, T.; Kuzman, K.; Krusic, V.

    2007-01-01

    At present the production of bulk metal formed parts in the automotive industry must increasingly fulfil demands for narrow tolerance fields. The final goal for production series of millions of parts is oriented towards zero-defect production. This is possible by achieving production tolerances which are even tighter than the prescribed ones. Different approaches are used to meet this demanding objective, which is affected by many process parameters. Fine coining as a final forming operation is one of the processes which enables good manufacturing tolerances and high process stability. The paper presents analyses of the production of an inner race and a digital evaluation of manufacturing tolerances caused by different material parameters of the workpiece. Digital optimisation of the fine coining with FEM simulations was performed in two phases. First, fine coining of the inner race in a digital environment was analysed and compared with experimental work in order to verify the accuracy and reliability of the digitally calculated data. Second, based on the geometrical data of the digitally fine-coined part, a tool redesign was proposed in order to tighten production tolerances and increase the process stability of the near-net-shaped cold formed part

  2. Three-Dimensional Numerical Simulation to Mud Turbine for LWD

    Science.gov (United States)

    Yao, Xiaojiang; Dong, Jingxin; Shang, Jie; Zhang, Guanqi

    Hydraulic performance analysis is discussed for a type of turbine-driven generator used for LWD. The simulation models were built with the CFD analysis software FINE/Turbo, and a full three-dimensional numerical simulation was carried out for the impeller group. Hydraulic parameters such as power, speed and pressure drop were calculated for two kinds of media: water and mud. An experiment was set up in a water environment. The error of the numerical simulation was less than 6%, as verified by the experiment. Based on this, recommendations are given for choosing appropriate impellers, and methods for their rational selection are explored.

  3. Hybrid simulation models of production networks

    CERN Document Server

    Kouikoglou, Vassilis S

    2001-01-01

    This book is concerned with a most important area of industrial production, that of analysis and optimization of production lines and networks using discrete-event models and simulation. The book introduces a novel approach that combines analytic models and discrete-event simulation. Unlike conventional piece-by-piece simulation, this method observes a reduced number of events between which the evolution of the system is tracked analytically. Using this hybrid approach, several models are developed for the analysis of production lines and networks. The hybrid approach combines speed and accuracy for exceptional analysis of most practical situations. A number of optimization problems, involving buffer design, workforce planning, and production control, are solved through the use of hybrid models.

  4. Source Term Model for Fine Particle Resuspension from Indoor Surfaces

    National Research Council Canada - National Science Library

    Kim, Yoojeong; Gidwani, Ashok; Sippola, Mark; Sohn, Chang W

    2008-01-01

    This Phase I effort developed a source term model for particle resuspension from indoor surfaces to be used as a source term boundary condition for CFD simulation of particle transport and dispersion in a building...

  5. Software-Engineering Process Simulation (SEPS) model

    Science.gov (United States)

    Lin, C. Y.; Abdel-Hamid, T.; Sherif, J. S.

    1992-01-01

    The Software Engineering Process Simulation (SEPS) model, developed at JPL, is described. SEPS is a dynamic simulation model of the software project development process. It uses the feedback principles of system dynamics to simulate the dynamic interactions among the various software life cycle development activities and management decision-making processes. The model is designed to be a planning tool to examine tradeoffs of cost, schedule, and functionality, and to test the implications of different managerial policies on a project's outcome. Furthermore, SEPS will enable software managers to gain a better understanding of the dynamics of software project development and to perform post-mortem assessments.

  6. Condensation front modelling in a moisture separator reheater by application of SICLE numerical model

    International Nuclear Information System (INIS)

    Grange, J.L.; Caremoli, C.; Eddi, M.

    1988-01-01

    This paper presents improvements made to the SICLE numerical model in order to analyse the condensation front that occurs in the moisture separator reheaters (MSR) of nuclear power plants. Modifications of the SICLE numerical model architecture and a fine modelling of the reheater have made it possible to correctly simulate the MSR thermohydraulic behaviour during a severe transient (plant islanding) [fr

  7. Validation of simulation models

    DEFF Research Database (Denmark)

    Rehman, Muniza; Pedersen, Stig Andur

    2012-01-01

    In the philosophy of science, interest in computational models and simulations has increased considerably during the past decades. Different positions regarding the validity of models have emerged, but these views have not succeeded in capturing the diversity of validation methods. The wide variety

  8. Modeling and simulation of satellite subsystems for end-to-end spacecraft modeling

    Science.gov (United States)

    Schum, William K.; Doolittle, Christina M.; Boyarko, George A.

    2006-05-01

    During the past ten years, the Air Force Research Laboratory (AFRL) has been simultaneously developing high-fidelity spacecraft payload models as well as a robust distributed simulation environment for modeling spacecraft subsystems. Much of this research has occurred in the Distributed Architecture Simulation Laboratory (DASL). AFRL developers working in the DASL have effectively combined satellite power, attitude pointing, and communication link analysis subsystem models with robust satellite sensor models to create a first-order end-to-end satellite simulation capability. The merging of these two simulation areas has advanced the field of spacecraft simulation, design, and analysis, and enabled more in-depth mission and satellite utility analyses. A core capability of the DASL is the support of a variety of modeling and analysis efforts, ranging from physics and engineering-level modeling to mission and campaign-level analysis. The flexibility and agility of this simulation architecture will be used to support space mission analysis, military utility analysis, and various integrated exercises with other military and space organizations via direct integration, or through DoD standards such as Distributed Interactive Simulation. This paper discusses the results and lessons learned in modeling satellite communication link analysis, power, and attitude control subsystems for an end-to-end satellite simulation. It also discusses how these spacecraft subsystem simulations feed into and support military utility and space mission analyses.

  9. Numerical simulation of Higgs models

    International Nuclear Information System (INIS)

    Jaster, A.

    1995-10-01

    The SU(2) Higgs model and the Schwinger model on the lattice were analysed. Numerical simulations of the SU(2) Higgs model were performed to study the finite-temperature electroweak phase transition. With the help of the multicanonical method, the distribution of an order parameter at the phase transition point was measured. This was used to obtain the order of the phase transition and the value of the interface tension with the histogram method. Numerical simulations were also performed at zero temperature to carry out renormalization. The measured values of the Wilson loops were used to determine the static potential and, from this, the renormalized gauge coupling. The Schwinger model was simulated at different gauge couplings to analyse the properties of Kaplan-Shamir fermions. The prediction that the mass parameter receives only multiplicative renormalization was tested and verified. (orig.)
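The histogram (reweighting) technique mentioned above can be sketched on a toy system: sample energies with Metropolis at one coupling, then reweight them to estimate an observable at another coupling. The two-level-spin system below is a stand-in chosen because its exact answer is known in closed form; it is not the SU(2) Higgs or Schwinger model, and all parameters are illustrative:

```python
import math
import random

def metropolis_energies(n_spins=20, beta=0.5, n_samples=20000,
                        n_warmup=100, seed=1):
    """Metropolis sampling of N independent two-level spins;
    total energy E = number of excited spins."""
    rng = random.Random(seed)
    spins = [0] * n_spins
    energies = []
    for sweep in range(n_warmup + n_samples):
        for _ in range(n_spins):
            i = rng.randrange(n_spins)
            dE = 1 - 2 * spins[i]          # +1 to excite, -1 to de-excite
            if dE < 0 or rng.random() < math.exp(-beta * dE):
                spins[i] ^= 1
        if sweep >= n_warmup:              # discard warm-up sweeps
            energies.append(sum(spins))
    return energies

def reweight_mean_E(energies, beta_from, beta_to):
    """Single-histogram reweighting: estimate <E> at beta_to from
    samples drawn at beta_from, with weights exp(-(b2 - b1) * E)."""
    w = [math.exp(-(beta_to - beta_from) * e) for e in energies]
    return sum(wi * e for wi, e in zip(w, energies)) / sum(w)

E = metropolis_energies(beta=0.5)
exact = 20 / (1.0 + math.exp(1.0))   # closed form for this toy system
print(f"reweighted <E> at beta=1.0: {reweight_mean_E(E, 0.5, 1.0):.2f}"
      f" (exact {exact:.2f})")
```

In the lattice studies above, the same reweighting of (multicanonical) histograms is what allows the order-parameter distribution and interface tension to be extracted at the transition point.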

  10. Facilitating Fine Grained Data Provenance using Temporal Data Model

    NARCIS (Netherlands)

    Huq, M.R.; Wombacher, Andreas; Apers, Peter M.G.

    2010-01-01

    E-science applications use fine-grained data provenance to maintain the reproducibility of scientific results, i.e., for each processed data tuple, the source data used to process the tuple as well as the approach used is documented. Since most of the e-science applications perform on-line

  11. Fine-particle pH for Beijing winter haze as inferred from different thermodynamic equilibrium models

    Directory of Open Access Journals (Sweden)

    S. Song

    2018-05-01

    Full Text Available pH is an important property of aerosol particles but is difficult to measure directly. Several studies have estimated the pH values for fine particles in northern China winter haze using thermodynamic models (i.e., E-AIM and ISORROPIA) and ambient measurements. The reported pH values differ widely, ranging from close to 0 (highly acidic) to as high as 7 (neutral). In order to understand the reason for this discrepancy, we calculated pH values using these models with different assumptions with regard to model inputs and particle phase states. We find that the large discrepancy is due primarily to differences in the model assumptions adopted in previous studies. Calculations using only aerosol-phase composition as inputs (i.e., reverse mode) are sensitive to the measurement errors of ionic species, and inferred pH values exhibit a bimodal distribution, with peaks between −2 and 2 and between 7 and 10, depending on whether anions or cations are in excess. Calculations using total (gas plus aerosol) phase measurements as inputs (i.e., forward mode) are affected much less by these measurement errors. In future studies, the reverse mode should be avoided, whereas the forward mode should be used. Forward-mode calculations in this and previous studies collectively indicate a moderately acidic condition (pH from about 4 to about 5) for fine particles in northern China winter haze, indicating further that ammonia plays an important role in determining this property. The assumed particle phase state, either stable (solid plus liquid) or metastable (only liquid), does not significantly impact pH predictions. The unrealistic pH values of about 7 in a few previous studies (using the standard ISORROPIA model and the stable-state assumption) resulted from coding errors in the model, which have been identified and fixed in this study.
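The sensitivity of reverse-mode calculations to ionic measurement errors can be demonstrated with a deliberately naive charge-balance estimate. The real models (ISORROPIA, E-AIM) solve the full gas-aerosol thermodynamic equilibrium; the function and numbers below are invented purely to show why a small anion/cation measurement error flips the inferred acidity between the two modes of the bimodal distribution:

```python
import math

def reverse_mode_ph(anions_ueq, cations_ueq, lwc_ug_m3):
    """Naive 'reverse mode' estimate: attribute the measured ion-charge
    imbalance (ueq per m3 of air) to H+ (anion excess) or OH- (cation
    excess) dissolved in the aerosol liquid water content (ug of water
    per m3 of air). Illustrative only."""
    imbalance_eq_m3 = (anions_ueq - cations_ueq) * 1e-6   # eq per m3 air
    water_l_m3 = lwc_ug_m3 * 1e-9                         # L water per m3 air
    conc = imbalance_eq_m3 / water_l_m3                   # eq per L water
    if conc > 0:                       # anion excess -> free H+
        return -math.log10(conc)
    if conc < 0:                       # cation excess -> free OH-
        return 14.0 + math.log10(-conc)
    return 7.0                         # exactly balanced

# Nominal measurement: ~1% anion excess, liquid water content 100 ug/m3:
print(reverse_mode_ph(0.520, 0.515, 100.0))   # strongly acidic (pH near 1)
# A ~2% error on the anion channel flips the excess -> apparently alkaline:
print(reverse_mode_ph(0.510, 0.515, 100.0))
```

Forward-mode inputs (total gas plus aerosol concentrations) avoid this instability because the model, not the noisy ion balance, determines the partitioning of H+.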

  12. Model improvements to simulate charging in SEM

    Science.gov (United States)

    Arat, K. T.; Klimpel, T.; Hagen, C. W.

    2018-03-01

    Charging of insulators is a complex phenomenon to simulate since the accuracy of the simulations is very sensitive to the interaction of electrons with matter and electric fields. In this study, we report model improvements for a previously developed Monte-Carlo simulator to more accurately simulate samples that charge. The improvements include both modelling of low energy electron scattering and charging of insulators. The new first-principle scattering models provide a more realistic charge distribution cloud in the material, and a better match between non-charging simulations and experimental results. Improvements on charging models mainly focus on redistribution of the charge carriers in the material with an induced conductivity (EBIC) and a breakdown model, leading to a smoother distribution of the charges. Combined with a more accurate tracing of low energy electrons in the electric field, we managed to reproduce the dynamically changing charging contrast due to an induced positive surface potential.

  13. Charge neutrality of fine particle (dusty) plasmas and fine particle cloud under gravity

    Energy Technology Data Exchange (ETDEWEB)

    Totsuji, Hiroo, E-mail: totsuji-09@t.okadai.jp

    2017-03-11

    The enhancement of the charge neutrality due to the existence of fine particles is shown to occur generally under microgravity and in one-dimensional structures under gravity. As an application of the latter, the size and position of fine particle clouds relative to surrounding plasmas are determined under gravity. - Highlights: • In fine particle (dusty) plasmas, the charge neutrality is much enhanced by the existence of fine particles. • The enhancement of charge neutrality generally occurs under microgravity and gravity. • Structure of fine particle clouds under gravity is determined by applying the enhanced charge neutrality.

  14. Modeling, simulation, and concept design for hybrid-electric medium-size military trucks

    Science.gov (United States)

    Rizzoni, Giorgio; Josephson, John R.; Soliman, Ahmed; Hubert, Christopher; Cantemir, Codrin-Gruie; Dembski, Nicholas; Pisu, Pierluigi; Mikesell, David; Serrao, Lorenzo; Russell, James; Carroll, Mark

    2005-05-01

A large scale design space exploration can provide valuable insight into vehicle design tradeoffs being considered for the U.S. Army's FMTV (Family of Medium Tactical Vehicles). Through a grant from TACOM (Tank-automotive and Armaments Command), researchers have generated detailed road, surface, and grade conditions representative of the performance criteria of this medium-sized truck and constructed a virtual powertrain simulator for both conventional and hybrid variants. The simulator incorporates the latest technology among vehicle design options, including scalable ultracapacitor and NiMH battery packs as well as a variety of generator and traction motor configurations. An energy management control strategy has also been developed to provide efficiency and performance. A design space exploration for the family of vehicles involves running a large number of simulations with systematically varied vehicle design parameters, where each variant is paced through several different mission profiles and multiple attributes of performance are measured. The resulting designs are filtered to remove dominated designs, exposing the multi-criteria surface of optimality (Pareto optimal designs) and revealing the design tradeoffs as they impact vehicle performance and economy. The results are not yet definitive because ride and drivability measures were not included, and work is not finished on fine-tuning the modeled dynamics of some powertrain components. However, the work completed so far demonstrates the effectiveness of the approach to design space exploration, and the results to date suggest the powertrain configuration best suited to the FMTV mission.
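The Pareto filtering step described above can be sketched in a few lines: each simulated variant is scored on several criteria, and any design that is beaten everywhere by another is dropped. The objective names and numbers below are illustrative, not the study's actual metrics.

```python
# Brute-force Pareto filtering of simulated design variants.
# Objectives are to be minimized, e.g. (fuel use, acceleration time);
# the four hypothetical variants below are made up for illustration.

def dominates(a, b):
    """True if design a is at least as good as b everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Keep only non-dominated designs (O(n^2), fine for modest design sets)."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other is not d)]

# (fuel L/100 km, 0-50 km/h time in s) for four hypothetical powertrains
designs = [(32.0, 18.0), (35.0, 15.0), (36.0, 19.0), (30.0, 21.0)]
front = pareto_front(designs)
print(front)
```

The variant (36.0, 19.0) is dominated by (32.0, 18.0) and is removed; the remaining three form the multi-criteria surface of optimality.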

  15. A physiological production model for cacao : results of model simulations

    NARCIS (Netherlands)

    Zuidema, P.A.; Leffelaar, P.A.

    2002-01-01

    CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.

  16. Modeling salmonella Dublin into the dairy herd simulation model Simherd

    DEFF Research Database (Denmark)

    Kudahl, Anne Braad

    2010-01-01

Infection with Salmonella Dublin in the dairy herd, the effects of the infection, and relevant control measures are currently being modeled in the dairy herd simulation model called Simherd. The aim is to compare, by simulation, the effects of different control strategies against Salmonella Dublin on both within-herd prevalence and economy. The project is part of a larger national project, "Salmonella 2007-2011", with the main objective of reducing the prevalence of Salmonella Dublin in Danish dairy herds. Results of the simulations will therefore be used for decision support in the national surveillance and eradication program against Salmonella Dublin. Basic structures of the model are programmed and will be presented at the workshop. The model is in a phase of face-validation by a group of Salmonella experts.

  17. Standard model and fine structure constant at Planck distances in the Bennett-Brene-Nielsen-Picek random dynamics

    International Nuclear Information System (INIS)

    Laperashvili, L.V.

    1994-01-01

    The first part of the present paper contains a review of papers by Nielsen, Bennett, Brene and Picek which underly the model called random dynamics. The second part of the paper is devoted to calculating the fine structure constant by means of the path integration in the U(1)-lattice gauge theory

  18. Computer simulations of auxetic foams in two dimensions

    International Nuclear Information System (INIS)

    Pozniak, A A; Smardzewski, J; Wojciechowski, K W

    2013-01-01

    Two simple models of two-dimensional auxetic (i.e. negative Poisson’s ratio) foams are studied by computer simulations. In the first one, further referred to as a Y-model, the ribs forming the cells of the foam are connected at points corresponding to sites of a disordered honeycomb lattice. In the second one, coined a Δ-model, the connections of the ribs are not point-like but spatial. For simplicity, they are represented by triangles centered at the honeycomb lattice points. Three kinds of joints are considered for each model, soft, normal and hard, respectively corresponding to materials with Young’s modulus ten times smaller than, equal to and ten times larger than that of the ribs. The initial lattices are uniformly compressed, which decreases their linear dimensions by about 15%. The resulting structures are then used as reference structures with no internal stress. The Poisson’s ratios of these reference structures are determined by stretching them, in either the x or the y direction. The results obtained for finite meshes and finite samples are extrapolated to infinitely fine mesh and to the thermodynamic limit, respectively. The extrapolations indicate that meshes with as few as 13 nodes across a rib and samples as small as containing 16 × 16 cells approximate the Poisson’s ratios of systems of infinite size and infinite mesh resolution within the statistical accuracy of the experiments, i.e. a few per cent. The simulations show that by applying harder joints one can reach lower Poisson’s ratios, i.e. foams with more auxetic properties. It also follows from the simulations performed that the Δ-model gives lower Poisson’s ratios than the Y-model. Finally, the simulations using fine meshes for the samples are compared with the ones in which the ribs are approximated by Timoshenko beams. Taking into account simplifications in the latter model, the agreement is surprisingly good. (paper)

  19. Get out of Fines Free: Recruiting Student Usability Testers via Fine Waivers

    Science.gov (United States)

    Hockenberry, Benjamin; Blackburn, Kourtney

    2016-01-01

    St. John Fisher College's Lavery Library's Access Services and Systems departments began a pilot project in which students with overdue fines tested usability of library Web sites in exchange for fine waivers. Circulation staff promoted the program and redeemed fine waiver vouchers at the Checkout Desk, while Systems staff administered testing and…

  20. MATH MODELING OF CAST FINE-GRAINED CONCRETE WITH INDUSTRIAL WASTES OF COPPER PRODUCTION

    Directory of Open Access Journals (Sweden)

    Tsybakin Sergey Valerievich

    2017-10-01

Full Text Available Subject: applying mineral microfillers on the basis of technogenic wastes of non-ferrous metallurgy in the technology of cast and self-compacting concrete. The results of experiments of scientists from Russia, Kazakhstan, Poland and India show that copper smelting granulated slag can be used when grinding construction cements as a mineral additive up to 30 % without significantly reducing activity of the cements. However, there are no results of a comprehensive study of the influence of the slag on plastic concrete mixtures. Research objectives: establishment of a mathematical relationship for the influence of copper slag on the compressive strength and density of concrete after 28 days of hardening in normal conditions using the method of mathematical design of experiments; statistical processing of the results and verification of adequacy of the developed model. Materials and methods: mathematical experimental design was carried out as a full 4-factor experiment using rotatable central composite design. The mathematical model is selected in the form of a polynomial of the second degree using four factors of the response function. Results: a 4-factor mathematical model of concrete strength and density after curing is created, and a regression equation is derived for the dependence of the 28-day strength and density on concentration of the cement stone, true water-cement ratio, dosage of fine copper slag and superplasticizer on the basis of ether polycarboxylates. Statistical processing of the results of mathematical design of experiments is carried out, and an estimate of adequacy of the constructed mathematical model is obtained. Conclusions: it is established that introduction of copper smelting slag in the range of 30…50 % by weight of cement positively affects the strength of concrete when used together with the superplasticizer. Increasing the dosage of superplasticizer in excess of 0.16 % of the dry component leads to a decrease in the strength of cast concrete.
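The response-surface approach above fits a second-degree polynomial to designed experimental points. A minimal sketch, using synthetic data generated from a known quadratic (not the paper's measurements) so the least-squares fit should recover the coefficients:

```python
# Fit y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2 by ordinary
# least squares via the normal equations; the data are synthetic.

def solve(A, b):
    """Gaussian elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def design_row(x1, x2):
    # Second-degree model terms: constant, linear, quadratic, interaction
    return [1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2]

true_b = [40.0, 3.0, -2.0, -1.5, 0.5, 1.0]        # "true" response surface
pts = [(x1, x2) for x1 in (-1, 0, 1, 2) for x2 in (-1, 0, 1)]
X = [design_row(x1, x2) for x1, x2 in pts]
y = [sum(b * f for b, f in zip(true_b, row)) for row in X]

# Normal equations: (X^T X) beta = X^T y
n = len(true_b)
XtX = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
       for i in range(n)]
Xty = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(n)]
beta = solve(XtX, Xty)
print([round(b, 6) for b in beta])
```

With noise-free synthetic data the recovered coefficients match `true_b` to floating-point precision; in the actual experiment the same machinery yields the regression equation reported in the abstract.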

  1. The effect of amblyopia on fine motor skills in children.

    Science.gov (United States)

    Webber, Ann L; Wood, Joanne M; Gole, Glen A; Brown, Brian

    2008-02-01

In an investigation of the functional impact of amblyopia in children, the fine motor skills of amblyopes and age-matched control subjects were compared. The influence of visual factors that might predict any decrement in fine motor skills was also explored. Vision and fine motor skills were tested in a group of children (n = 82; mean age, 8.2 +/- 1.7 [SD] years) with amblyopia of different causes (infantile esotropia, n = 17; acquired strabismus, n = 28; anisometropia, n = 15; mixed, n = 13; and deprivation, n = 9), and age-matched control children (n = 37; age 8.3 +/- 1.3 years). Visual motor control (VMC) and upper limb speed and dexterity (ULSD) items of the Bruininks-Oseretsky Test of Motor Proficiency were assessed, and logMAR visual acuity (VA) and Randot stereopsis were measured. Multiple regression models were used to identify the visual determinants of fine motor skills performance. Amblyopes performed significantly more poorly than control subjects on 9 of 16 fine motor skills subitems and on the overall age-standardized scores for both VMC and ULSD items (P < 0.05). In a multiple regression model that took into account the intercorrelation between visual characteristics, poorer fine motor skills performance was associated with strabismus (F(1,75) = 5.428; P = 0.022), but not with the level of binocular function, refractive error, or visual acuity in either eye. Fine motor skills were reduced in children with amblyopia, particularly those with strabismus, compared with control subjects. The deficits in motor performance were greatest on manual dexterity tasks requiring speed and accuracy.

  2. Bayesian network modelling on data from fine needle aspiration cytology examination for breast cancer diagnosis

    OpenAIRE

    Ding, Xuemei; Cao, Yi; Zhai, Jia; Maguire, Liam; Li, Yuhua; Yang, Hongqin; Wang, Yuhua; Zeng, Jinshu; Liu, Shuo

    2017-01-01

The paper employed Bayesian network (BN) modelling approach to discover causal dependencies among different data features of Breast Cancer Wisconsin Dataset (BCWD) derived from openly sourced UCI repository. K2 learning algorithm and k-fold cross validation were used to construct and optimize BN structure. Compared to Naïve Bayes (NB), the obtained BN presented better performance for breast cancer diagnosis based on fine needle aspiration cytology (FNAC) examination. It also showed that, amon...
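The k-fold cross-validation used to evaluate the learned network can be sketched as follows. This is a generic illustration: the index-splitting logic is standard, but the classifier here is a trivial majority-vote stand-in, not K2 or a Bayesian network, and the labels are toy data.

```python
# Generic k-fold cross-validation: every sample is held out exactly once.
# The "classifier" is a majority-vote stand-in for illustration only.

def kfold_indices(n, k):
    """Yield (train_idx, test_idx) pairs; fold sizes differ by at most one."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    idx = list(range(n))
    start = 0
    for size in fold_sizes:
        test = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        yield train, test
        start += size

labels = [0, 1, 1, 0, 1, 1, 0, 1, 1, 1]     # toy benign(0)/malignant(1) labels
accs = []
for train, test in kfold_indices(len(labels), 5):
    train_labels = [labels[i] for i in train]
    majority = max(set(train_labels), key=train_labels.count)
    accs.append(sum(labels[i] == majority for i in test) / len(test))
mean_acc = sum(accs) / len(accs)
print(mean_acc)
```

A real evaluation would replace the majority-vote line with structure learning (e.g. K2) on the training fold and inference on the held-out fold; the averaging over folds is identical.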

  3. Simulation - modeling - experiment; Simulation - modelisation - experience

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

After two workshops held in 2001 on the same topics, and in order to make a status of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop and dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advance status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); methods of disturbances and sensitivity analysis of nuclear data in reactor physics, application to VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Lecolley F.).

  4. Optimization of a centrifugal compressor impeller using CFD: the choice of simulation model parameters

    Science.gov (United States)

    Neverov, V. V.; Kozhukhov, Y. V.; Yablokov, A. M.; Lebedev, A. A.

    2017-08-01

Nowadays, optimization using computational fluid dynamics (CFD) plays an important role in the design process of turbomachines. However, for successful and productive optimization it is necessary to define a simulation model correctly and rationally. The article deals with the choice of grid and computational domain parameters for optimization of centrifugal compressor impellers using computational fluid dynamics. Searching for and applying optimal parameters of the grid model, the computational domain and solver settings allows engineers to carry out high-accuracy modelling and to use computational capability effectively. The presented research was conducted using the Numeca Fine/Turbo package with the Spalart-Allmaras and Shear Stress Transport turbulence models. Two radial impellers were investigated: a high-pressure one at ψT=0.71 and a low-pressure one at ψT=0.43. The following parameters of the computational model were considered: the location of inlet and outlet boundaries, type of mesh topology, size of mesh and mesh parameter y+. Results of the investigation demonstrate that the choice of optimal parameters leads to a significant reduction of the computational time. Optimal parameters, in comparison with non-optimal but visually similar parameters, can reduce the calculation time up to 4 times. Besides, it is established that some parameters have a major impact on the result of modelling.
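The y+ mesh parameter mentioned above fixes the height of the first cell off the wall. A rough pre-meshing estimate is commonly made from a flat-plate skin-friction correlation; the sketch below uses that generic correlation (it is not Numeca-specific, and the flow conditions are made-up examples).

```python
# Rough first-cell-height estimate for a target y+ using the standard
# flat-plate skin-friction correlation Cf ~ 0.026 * Re^(-1/7).
# Air properties and flow conditions below are illustrative assumptions.
import math

def first_cell_height(y_plus, U, L, rho=1.2, mu=1.8e-5):
    """First cell height [m] for target y+, freestream U [m/s], length L [m]."""
    Re = rho * U * L / mu
    cf = 0.026 * Re ** (-1.0 / 7.0)        # empirical skin-friction estimate
    tau_w = 0.5 * cf * rho * U * U         # wall shear stress
    u_tau = math.sqrt(tau_w / rho)         # friction velocity
    return y_plus * mu / (rho * u_tau)

# Example: y+ = 1 (wall-resolved SST) at 100 m/s over a 5 cm blade chord
h1 = first_cell_height(y_plus=1.0, U=100.0, L=0.05)
print(h1)
```

For these conditions the estimate is a few micrometers, which is why wall-resolved meshes (y+ ≈ 1) are far more expensive than wall-function meshes (y+ ≈ 30-100).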

  5. Effect of wettability on scale-up of multiphase flow from core-scale to reservoir fine-grid-scale

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Y.C.; Mani, V.; Mohanty, K.K. [Univ. of Houston, TX (United States)

    1997-08-01

Typical field simulation grid-blocks are internally heterogeneous. The objective of this work is to study how the wettability of the rock affects its scale-up of multiphase flow properties from core-scale to fine-grid reservoir simulation scale (≈10′ × 10′ × 5′). Reservoir models need another level of upscaling to coarse-grid simulation scale, which is not addressed here. Heterogeneity is modeled here as a correlated random field parameterized in terms of its variance and two-point variogram. Variogram models of both finite (spherical) and infinite (fractal) correlation length are included as special cases. Local core-scale porosity, permeability, capillary pressure function, relative permeability functions, and initial water saturation are assumed to be correlated. Water injection is simulated and effective flow properties and flow equations are calculated. For strongly water-wet media, capillarity has a stabilizing/homogenizing effect on multiphase flow. For small variance in permeability, and for small correlation length, effective relative permeability can be described by capillary equilibrium models. At higher variance and moderate correlation length, the average flow can be described by a dynamic relative permeability. As the oil wettability increases, the capillary stabilizing effect decreases and the deviation from this average flow increases. For fractal fields with large variance in permeability, effective relative permeability is not adequate in describing the flow.
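The spherical variogram mentioned above is the standard finite-correlation-length model: semivariance grows from zero at zero lag and plateaus at the sill (the field variance) once the lag reaches the range (the correlation length). A minimal sketch, with illustrative sill and range values:

```python
# Spherical variogram model: gamma(h) rises to the sill at h = range and
# stays flat beyond it. Sill and range values here are illustrative.

def spherical_variogram(h, sill=1.0, rng=10.0):
    """Semivariance at lag distance h for a spherical model."""
    if h >= rng:
        return sill                      # uncorrelated beyond the range
    r = h / rng
    return sill * (1.5 * r - 0.5 * r ** 3)

vals = [spherical_variogram(h) for h in (0.0, 5.0, 10.0, 20.0)]
print(vals)
```

A correlated random field with this variogram can then be generated (e.g. by sequential Gaussian simulation) and populated with the correlated porosity/permeability properties described in the abstract.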

  6. 3D Core Model for simulation of nuclear power plants: Simulation requirements, model features, and validation

    International Nuclear Information System (INIS)

    Zerbino, H.

    1999-01-01

In 1994-1996, Thomson Training and Simulation (TT and S) carried out the D50 Project, which involved the design and construction of optimized replica simulators for one Dutch and three German Nuclear Power Plants. It was recognized early on that the faithful reproduction of the Siemens reactor control and protection systems would impose extremely stringent demands on the simulation models, particularly the Core physics and the RCS thermohydraulics. The quality of the models, and their thorough validation, were thus essential. The present paper describes the main features of the fully 3D Core model implemented by TT and S, and its extensive validation campaign, which was defined in extremely positive collaboration with the Customer and the Core Data suppliers. (author)

  7. Deficits in fine motor skills in a genetic animal model of ADHD

    Directory of Open Access Journals (Sweden)

    Qian Yu

    2010-09-01

    Full Text Available Abstract Background In an attempt to model some behavioral aspects of Attention Deficit/Hyperactivity Disorder (ADHD, we examined whether an existing genetic animal model of ADHD is valid for investigating not only locomotor hyperactivity, but also more complex motor coordination problems displayed by the majority of children with ADHD. Methods We subjected young adolescent Spontaneously Hypertensive Rats (SHRs, the most commonly used genetic animal model of ADHD, to a battery of tests for motor activity, gross motor coordination, and skilled reaching. Wistar (WIS rats were used as controls. Results Similar to children with ADHD, young adolescent SHRs displayed locomotor hyperactivity in a familiar, but not in a novel environment. They also had lower performance scores in a complex skilled reaching task when compared to WIS rats, especially in the most sensitive measure of skilled performance (i.e., single attempt success. In contrast, their gross motor performance on a Rota-Rod test was similar to that of WIS rats. Conclusion The results support the notion that the SHR strain is a useful animal model system to investigate potential molecular mechanisms underlying fine motor skill problems in children with ADHD.

  8. A study for production simulation model generation system based on data model at a shipyard

    Directory of Open Access Journals (Sweden)

    Myung-Gi Back

    2016-09-01

Full Text Available Simulation technology is a type of shipbuilding product lifecycle management solution used to support production planning and decision-making. Most shipbuilding processes consist of job-shop production, and their modeling and simulation require professional skills and experience in shipbuilding. For these reasons, many shipbuilding companies have difficulty adopting simulation systems, regardless of the necessity of the technology. In this paper, the data model for shipyard production simulation model generation was defined by analyzing the iterative simulation modeling procedure. The shipyard production simulation data model defined in this study contains the information necessary for the conventional simulation modeling procedure and can serve as a basis for simulation model generation. The efficacy of the developed system was validated by applying it to the simulation model generation of a panel block production line. By implementing the initial simulation model generation process, which was previously performed by a simulation modeler, the proposed system substantially reduced the modeling time. In addition, by reducing the difficulties posed by different modeler-dependent generation methods, the proposed system makes standardization of simulation model quality possible.

  9. Validation process of simulation model

    International Nuclear Information System (INIS)

    San Isidro, M. J.

    1998-01-01

A methodology for the empirical validation of any detailed simulation model is presented. This kind of validation is always related to an experimental case. Empirical validation has a residual sense, because the conclusions are based on comparisons between simulated outputs and experimental measurements. The methodology guides the detection of failures in the simulation model and can also serve as a guide in the design of subsequent experiments. Three steps can be clearly differentiated: sensitivity analysis, which can be made with DSA (differential sensitivity analysis) and with MCSA (Monte-Carlo sensitivity analysis); the search for the optimal domains of the input parameters, for which a procedure based on Monte-Carlo methods and cluster techniques has been developed; and residual analysis, performed in the time domain and in the frequency domain using correlation analysis and spectral analysis. As an application of this methodology, the validation carried out on a thermal simulation model of buildings is presented, studying the behavior of building components in a Test Cell of LECE of CIEMAT. (Author) 17 refs
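The Monte-Carlo sensitivity analysis (MCSA) step above can be sketched generically: sample the input parameters, run the model on each sample, and rank inputs by the magnitude of their correlation with the output. The "model" below is a toy stand-in, not the thermal building model.

```python
# Generic MCSA sketch: rank inputs by |Pearson correlation| with the output.
# The linear toy model is an illustrative assumption; p1 dominates by
# construction and p3 has no effect at all.
import random

def model(p1, p2, p3):
    return 4.0 * p1 + 1.0 * p2 + 0.0 * p3

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
samples = [[random.uniform(0, 1) for _ in range(3)] for _ in range(500)]
outputs = [model(*s) for s in samples]
sens = [abs(pearson([s[i] for s in samples], outputs)) for i in range(3)]
print(sens)
```

The ranking correctly singles out the dominant input; in the real methodology this screening tells the experimenter which parameters must be measured or controlled most carefully.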

  10. A satellite simulator for TRMM PR applied to climate model simulations

    Science.gov (United States)

    Spangehl, T.; Schroeder, M.; Bodas-Salcedo, A.; Hollmann, R.; Riley Dellaripa, E. M.; Schumacher, C.

    2017-12-01

Climate model simulations have to be compared against observation-based datasets in order to assess their skill in representing precipitation characteristics. Here we use a satellite simulator for TRMM PR to evaluate simulations performed with MPI-ESM (the Earth system model of the Max Planck Institute for Meteorology in Hamburg, Germany) within the MiKlip project (https://www.fona-miklip.de/, funded by the Federal Ministry of Education and Research in Germany). While classical evaluation methods focus on geophysical parameters such as precipitation amounts, the application of the satellite simulator enables an evaluation in the instrument's parameter space, thereby reducing uncertainties on the reference side. The CFMIP Observation Simulator Package (COSP) provides a framework for the application of satellite simulators to climate model simulations. The approach requires the introduction of sub-grid cloud and precipitation variability. Radar reflectivities are obtained by applying Mie theory, with the microphysical assumptions chosen to match the atmosphere component of MPI-ESM (ECHAM6). The results are found to be sensitive to the methods used to distribute the convective precipitation over the sub-grid boxes. Simple parameterization methods are used to introduce sub-grid variability of convective clouds and precipitation. In order to constrain uncertainties, a comprehensive comparison with the sub-grid-scale convective precipitation variability deduced from TRMM PR observations is carried out.

  11. Focus point in gaugino mediation — Reconsideration of the fine-tuning problem

    Energy Technology Data Exchange (ETDEWEB)

    Yanagida, Tsutomu T.; Yokozaki, Norimi, E-mail: n.yokozaki@gmail.com

    2013-05-24

We reconsider the fine-tuning problem in SUSY models, motivated by the recent observation of the relatively heavy Higgs boson and the non-observation of SUSY particles at the LHC. With this motivation, we demonstrate a focus-point-like behavior in a gaugino mediation model, and show that the fine-tuning is indeed reduced to about the 2% level if the ratio of the gluino mass to the wino mass is about 0.4 at the GUT scale. We show that such a mass ratio may arise naturally in a product group unification model without the doublet-triplet splitting problem. This fact suggests that the fine-tuning problem crucially depends on the physics at the high energy scale.

  12. Use case driven approach to develop simulation model for PCS of APR1400 simulator

    International Nuclear Information System (INIS)

    Dong Wook, Kim; Hong Soo, Kim; Hyeon Tae, Kang; Byung Hwan, Bae

    2006-01-01

The full-scope simulator is being developed to evaluate specific design features and to support the iterative design and validation in the Man-Machine Interface System (MMIS) design of the Advanced Power Reactor (APR) 1400. The simulator consists of a process model, a control logic model, and the MMI for the APR1400 as well as the Power Control System (PCS). In this paper, a use case driven approach is proposed to develop a simulation model for the PCS. In this approach, a system is considered from the point of view of its users. The user's view of the system is based on interactions with the system and the resultant responses. In a use case driven approach, we initially consider the system as a black box and look at its interactions with the users. From these interactions, use cases of the system are identified. Then the system is modeled using these use cases as functions. Lower levels expand the functionalities of each of these use cases. Hence, starting from the topmost level view of the system, we proceeded down to the lowest level (the internal view of the system). The model of the system thus developed is use case driven. This paper introduces the functionality of the PCS simulation model, including a use-case-based requirements analysis and the validation results from the development of the PCS model. The use-case-based PCS simulation model will first be used during full-scope simulator development for a nuclear power plant and will be supplied to the Shin-Kori 3 and 4 plants. Use-case-based simulation model development can be useful for the design and implementation of simulation models. (authors)

  13. Simulating smoke transport from wildland fires with a regional-scale air quality model: sensitivity to spatiotemporal allocation of fire emissions.

    Science.gov (United States)

    Garcia-Menendez, Fernando; Hu, Yongtao; Odman, Mehmet T

    2014-09-15

    Air quality forecasts generated with chemical transport models can provide valuable information about the potential impacts of fires on pollutant levels. However, significant uncertainties are associated with fire-related emission estimates as well as their distribution on gridded modeling domains. In this study, we explore the sensitivity of fine particulate matter concentrations predicted by a regional-scale air quality model to the spatial and temporal allocation of fire emissions. The assessment was completed by simulating a fire-related smoke episode in which air quality throughout the Atlanta metropolitan area was affected on February 28, 2007. Sensitivity analyses were carried out to evaluate the significance of emission distribution among the model's vertical layers, along the horizontal plane, and into hourly inputs. Predicted PM2.5 concentrations were highly sensitive to emission injection altitude relative to planetary boundary layer height. Simulations were also responsive to the horizontal allocation of fire emissions and their distribution into single or multiple grid cells. Additionally, modeled concentrations were greatly sensitive to the temporal distribution of fire-related emissions. The analyses demonstrate that, in addition to adequate estimates of emitted mass, successfully modeling the impacts of fires on air quality depends on an accurate spatiotemporal allocation of emissions. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Plasma modelling and numerical simulation

    International Nuclear Information System (INIS)

    Van Dijk, J; Kroesen, G M W; Bogaerts, A

    2009-01-01

    Plasma modelling is an exciting subject in which virtually all physical disciplines are represented. Plasma models combine the electromagnetic, statistical and fluid dynamical theories that have their roots in the 19th century with the modern insights concerning the structure of matter that were developed throughout the 20th century. The present cluster issue consists of 20 invited contributions, which are representative of the state of the art in plasma modelling and numerical simulation. These contributions provide an in-depth discussion of the major theories and modelling and simulation strategies, and their applications to contemporary plasma-based technologies. In this editorial review, we introduce and complement those papers by providing a bird's eye perspective on plasma modelling and discussing the historical context in which it has surfaced. (editorial review)

  15. An Agent-Based Monetary Production Simulation Model

    DEFF Research Database (Denmark)

    Bruun, Charlotte

    2006-01-01

An agent-based monetary production simulation model programmed in Objective Borland Pascal. Program and source code are downloadable.

  16. An application of sedimentation simulation in Tahe oilfield

    Science.gov (United States)

    Tingting, He; Lei, Zhao; Xin, Tan; Dongxu, He

    2017-12-01

    The braided river delta developed in the Triassic low oil formation in block 9 of the Tahe oilfield, but its sedimentary evolution process is unclear. Using sedimentation simulation technology, the sedimentation process and distribution of the braided river delta are studied based on geological parameters including the sequence stratigraphic division, the initial sedimentation environment, relative lake-level and accommodation change, source supply, and the sedimentary transport pattern. The simulation shows that the error between simulated and actual strata thickness is small, and the single-well analysis of the simulation is highly consistent with the actual analysis, which indicates that the model is reliable. The study area records a retrogradational evolution of the braided river delta, which provides a favorable basis for fine reservoir description and prediction.

  17. Validation of the simulator neutronics model

    International Nuclear Information System (INIS)

    Gregory, M.V.

    1984-01-01

    The neutronics model in the SRP reactor training simulator computes the variation with time of the neutron population in the reactor core. The power output of a reactor is directly proportional to the neutron population, thus in a very real sense the neutronics model determines the response of the simulator. The geometrical complexity of the reactor control system in SRP reactors requires the neutronics model to provide a detailed, 3D representation of the reactor core. Existing simulator technology does not allow such a detailed representation to run in real-time in a minicomputer environment, thus an entirely different approach to the problem was required. A prompt jump method has been developed in answer to this need

  18. Anne Fine

    Directory of Open Access Journals (Sweden)

    Philip Gaydon

    2015-04-01

    Full Text Available An interview with Anne Fine, with an introduction and aside on the role of children’s literature in our lives and development, and our adult perceptions of the suitability of childhood reading material. Since graduating from Warwick in 1968 with a BA in Politics and History, Anne Fine has written over fifty books for children and eight for adults, won the Carnegie Medal twice (for Goggle-Eyes in 1989 and Flour Babies in 1992), been a highly commended runner-up three times (for Bill’s New Frock in 1989, The Tulip Touch in 1996, and Up on Cloud Nine in 2002), been shortlisted for the Hans Christian Andersen Award (the highest recognition available to a writer or illustrator of children’s books, 1998), undertaken the position of Children’s Laureate (2001-2003), and been awarded an OBE for her services to literature (2003). Warwick presented Fine with an Honorary Doctorate in 2005. Philip Gaydon’s interview with Anne Fine was recorded as part of the ‘Voices of the University’ oral history project, co-ordinated by Warwick’s Institute of Advanced Study.

  19. Modelling and simulation of a heat exchanger

    Science.gov (United States)

    Xia, Lei; Deabreu-Garcia, J. Alex; Hartley, Tom T.

    1991-01-01

    Two models for two different control systems are developed for a parallel heat exchanger. First, by spatially lumping a heat exchanger model, a good approximate model of high system order is produced. Model reduction techniques are then applied to obtain low-order models that are suitable for dynamic analysis and control design. The simulation method is discussed to ensure a valid simulation result.

  20. Modeling and Simulation of U-tube Steam Generator

    Science.gov (United States)

    Zhang, Mingming; Fu, Zhongguang; Li, Jinyao; Wang, Mingfei

    2018-03-01

    The U-tube natural circulation steam generator is modeled and simulated in this article, based on the simuworks system simulation software platform. By analyzing the structural characteristics and operating principle of the U-tube steam generator, a model with 14 control volumes is built, covering the primary side, the secondary side, the downcomer channel and the steam plenum, among others. The model depends entirely on conservation laws and is applied in several simulation tests. The results show that the model properly reproduces the dynamic response of the U-tube steam generator.

  1. Modeling the migration of fallout radionuclides to quantify the contemporary transfer of fine particles in Luvisol profiles under different land uses and farming practices

    International Nuclear Information System (INIS)

    Jagercikova, M.; Balesdent, J.; Cornu, S.; Evrard, O.; Lefevre, I.

    2014-01-01

    Soil mixing and the downward movement of solid matter in soils are dynamic pedological processes that strongly affect the vertical distribution of all soil properties across the soil profile. These processes are affected by land use and the implementation of various farming practices, but their kinetics have rarely been quantified. Our objective was to investigate the vertical transfer of matter in Luvisols at long-term experimental sites under different land uses (cropland, grassland and forest) and different farming practices (conventional tillage, reduced tillage and no tillage). To investigate these processes, the vertical distributions of the radionuclides 137Cs and 210Pbxs were analyzed in 9 soil profiles. The mass balance calculations showed that as much as 91 ± 9% of the 137Cs was linked to the fine particles (< 2 mm). To assess the kinetics of radionuclide redistribution in soil, we modeled their depth profiles using a convection-diffusion equation. The diffusion coefficient represented the rate of bioturbation, and the convection velocity provided a proxy for fine particle leaching. Both parameters were modeled as either constant or variable with depth. Tillage was simulated using an empirical formula that considered the tillage depth and a variable mixing ratio depending on the type of tillage used. A loss of isotopes due to soil erosion was introduced into the model to account for the total radionuclide inventory. All of these parameters were optimized based on the 137Cs data and were then applied to the 210Pbxs data. Our results show that 137Cs migrates deeper under grasslands than under forests or croplands. Additionally, our results suggest that the diffusion coefficient decreased with depth and that it remained negligible below the tillage depth at the cropland sites, below 20 cm at the forest sites, and below 80 cm at the grassland sites. (authors)
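    The convection-diffusion description used in this record can be sketched with a simple explicit, mass-conserving finite-difference scheme. The grid, coefficients, and surface deposition pulse below are illustrative assumptions, not the authors' fitted parameter values.

```python
import numpy as np

def migrate(c0, D, v, dz, dt, steps):
    """Explicit, mass-conserving step of dC/dt = D d2C/dz2 - v dC/dz.

    c0     : initial concentration profile (surface at index 0)
    D      : diffusion coefficient (bioturbation proxy), cm^2/yr
    v      : downward convection velocity (leaching proxy), cm/yr
    dz, dt : grid spacing (cm) and time step (yr)
    """
    c = c0.astype(float).copy()
    for _ in range(steps):
        # diffusive + upwind-advective flux through each interior cell face;
        # no flux through the surface, closed bottom (inventory is conserved)
        flux = -D * np.diff(c) / dz + v * c[:-1]
        c[:-1] -= dt * flux / dz   # mass leaving each cell downward...
        c[1:] += dt * flux / dz    # ...arrives in the cell below
    return c

# a surface deposition pulse (e.g. fallout 137Cs) redistributed over 50 years
profile = np.zeros(100)
profile[0] = 1.0
out = migrate(profile, D=0.5, v=0.2, dz=1.0, dt=0.01, steps=5000)
```

    In a calibration like the one described, D and v would be optimized so the simulated profile matches the measured 137Cs depth distribution.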

  2. Model for Simulation Atmospheric Turbulence

    DEFF Research Database (Denmark)

    Lundtang Petersen, Erik

    1976-01-01

    A method that produces realistic simulations of atmospheric turbulence is developed and analyzed. The procedure makes use of a generalized spectral analysis, often called a proper orthogonal decomposition or the Karhunen-Loève expansion. A set of criteria, emphasizing a realistic appearance...... eigenfunctions and estimates of the distributions of the corresponding expansion coefficients. The simulation method utilizes the eigenfunction expansion procedure to produce preliminary time histories of the three velocity components simultaneously. As a final step, a spectral shaping procedure is then applied....... The method is unique in modeling the three velocity components simultaneously, and it is found that important cross-statistical features are reasonably well-behaved. It is concluded that the model provides a practical, operational simulator of atmospheric turbulence....
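    The Karhunen-Loève machinery behind such a simulator can be sketched in a few lines: diagonalize a covariance matrix, then drive its eigenfunctions with independent random coefficients. The exponential covariance and grid size below are illustrative assumptions, not the atmospheric velocity statistics used in the study, and the final spectral-shaping step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# assumed exponential covariance on a 1-D grid (illustrative only)
n = 128
t = np.arange(n)
cov = np.exp(-np.abs(t[:, None] - t[None, :]) / 10.0)

# Karhunen-Loeve basis: eigenfunctions of the covariance matrix
eigval, eigvec = np.linalg.eigh(cov)
eigval = np.clip(eigval, 0.0, None)   # guard against tiny negative roundoff

# one realization: independent Gaussian coefficients with variance eigval
coeff = rng.standard_normal(n) * np.sqrt(eigval)
realization = eigvec @ coeff
```

    By construction, an ensemble of such realizations reproduces the target covariance; truncating the expansion to the leading eigenfunctions gives the usual reduced-order simulator.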

  3. System modeling and simulation at EBR-II

    International Nuclear Information System (INIS)

    Dean, E.M.; Lehto, W.K.; Larson, H.A.

    1986-01-01

    The codes being developed and verified using EBR-II data are NATDEMO, DSNP and CSYRED. NATDEMO is a variation of the Westinghouse DEMO code coupled to the NATCON code, previously used to simulate perturbations of reactor flow and inlet temperature and loss-of-flow transients leading to natural convection in EBR-II. CSYRED uses the Continuous System Modeling Program (CSMP) to simulate the EBR-II core, including power, temperature, control-rod movement reactivity effects and flow, and is used primarily to model reactivity-induced power transients. The Dynamic Simulator for Nuclear Power Plants (DSNP) allows a whole-plant, thermal-hydraulic simulation using specific component and system models called from libraries. It has been used to simulate flow-coastdown transients, reactivity insertion events and balance-of-plant perturbations.

  4. Generation of reservoir models on flexible meshes; Generation de modeles de reservoir sur maillage flexible

    Energy Technology Data Exchange (ETDEWEB)

    Ricard, L.

    2005-12-15

    The high-level geostatistical descriptions of the subsurface are often far too detailed for use in routine flow simulators. To make flow simulations tractable, the number of grid blocks has to be reduced: an approximation, still consistent with the flow description, is necessary. In this work, we place the emphasis on the scaling procedure from the fine-scale model to the multi-scale reservoir model. Two main problems appear: near wells, faults and channels, the volume of flexible cells may be smaller than that of the fine cells, so a down-scaling problem must be solved; far from these regions, the cells are larger than the fine ones, so an up-scaling problem must be solved. In this work, research has been done on each of three areas: down-scaling, up-scaling and fluid-flow simulation. For each of these subjects, a review, some new improvements and a comparative study are proposed. The proposed down-scaling method is built to be compatible with existing data-integration methods. The comparative study shows that empirical methods are not accurate enough to solve the problem. Concerning the up-scaling step, the proposed approach is based on an existing method: perturbed boundary conditions. An extension to unstructured meshes is developed for the inter-cell permeability tensor. The comparative study shows that numerical methods are not always as accurate as expected, and the empirical model can be sufficient in many cases. A new approach to single-phase fluid-flow simulation is developed. This approach can handle full-tensor permeability fields with source or sink terms. (author)

  5. Modelling and Simulation of Wave Loads

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    A simple model of the wave load on slender members of offshore structures is described. The wave elevation of the sea state is modelled by a stationary Gaussian process. A new procedure to simulate realizations of the wave loads is developed. The simulation method assumes that the wave particle velocity can be approximated by a Gaussian Markov process. Known approximate results for the first-passage density or, equivalently, the distribution of the extremes of wave loads are presented and compared with rather precise simulation results. It is demonstrated that the approximate results...

  6. Modelling and Simulation of Wave Loads

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    1985-01-01

    A simple model of the wave load on slender members of offshore structures is described. The wave elevation of the sea state is modelled by a stationary Gaussian process. A new procedure to simulate realizations of the wave loads is developed. The simulation method assumes that the wave particle velocity can be approximated by a Gaussian Markov process. Known approximate results for the first-passage density or, equivalently, the distribution of the extremes of wave loads are presented and compared with rather precise simulation results. It is demonstrated that the approximate results...

  7. Simulation modeling for the health care manager.

    Science.gov (United States)

    Kennedy, Michael H

    2009-01-01

    This article addresses the use of simulation software to solve administrative problems faced by health care managers. Spreadsheet add-ins, process simulation software, and discrete event simulation software are available at a range of costs and complexity. All use the Monte Carlo method to realistically integrate probability distributions into models of the health care environment. Problems typically addressed by health care simulation modeling are facility planning, resource allocation, staffing, patient flow and wait time, routing and transportation, supply chain management, and process improvement.
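    As a minimal illustration of the Monte Carlo approach the article describes, the sketch below estimates patient waiting time at a single-server clinic; the exponential arrival and service distributions and their means are illustrative assumptions, not figures from the article.

```python
import random

def simulate_waits(n_patients, mean_interarrival, mean_service, seed=1):
    """Monte Carlo waiting times at a single-server clinic (illustrative)."""
    rng = random.Random(seed)
    arrival = 0.0        # arrival time of the current patient
    server_free = 0.0    # time at which the server next becomes idle
    waits = []
    for _ in range(n_patients):
        arrival += rng.expovariate(1.0 / mean_interarrival)
        start = max(arrival, server_free)    # wait if the server is busy
        waits.append(start - arrival)
        server_free = start + rng.expovariate(1.0 / mean_service)
    return waits

# e.g. a patient every ~10 minutes on average, ~8-minute consultations
waits = simulate_waits(10_000, mean_interarrival=10.0, mean_service=8.0)
avg_wait = sum(waits) / len(waits)
```

    Commercial spreadsheet add-ins and discrete event packages wrap exactly this kind of sampling loop in richer routing, staffing, and reporting layers.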

  8. Bayesian integration of flux tower data into a process-based simulator for quantifying uncertainty in simulated output

    Science.gov (United States)

    Raj, Rahul; van der Tol, Christiaan; Hamm, Nicholas Alexander Samuel; Stein, Alfred

    2018-01-01

    Parameters of a process-based forest growth simulator are difficult or impossible to obtain from field observations. Reliable estimates can be obtained using calibration against observations of output and state variables. In this study, we present a Bayesian framework to calibrate the widely used process-based simulator Biome-BGC against estimates of gross primary production (GPP) data. We used GPP partitioned from flux tower measurements of a net ecosystem exchange over a 55-year-old Douglas fir stand as an example. The uncertainties of both the Biome-BGC parameters and the simulated GPP values were estimated. The calibrated parameters leaf and fine root turnover (LFRT), ratio of fine root carbon to leaf carbon (FRC : LC), ratio of carbon to nitrogen in leaf (C : Nleaf), canopy water interception coefficient (Wint), fraction of leaf nitrogen in RuBisCO (FLNR), and effective soil rooting depth (SD) characterize the photosynthesis and carbon and nitrogen allocation in the forest. The calibration improved the root mean square error and enhanced Nash-Sutcliffe efficiency between simulated and flux tower daily GPP compared to the uncalibrated Biome-BGC. Nevertheless, the seasonal cycle for flux tower GPP was not reproduced exactly and some overestimation in spring and underestimation in summer remained after calibration. We hypothesized that the phenology exhibited a seasonal cycle that was not accurately reproduced by the simulator. We investigated this by calibrating the Biome-BGC to each month's flux tower GPP separately. As expected, the simulated GPP improved, but the calibrated parameter values suggested that the seasonal cycle of state variables in the simulator could be improved. It was concluded that the Bayesian framework for calibration can reveal features of the modelled physical processes and identify aspects of the process simulator that are too rigid.
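    The calibration idea can be sketched with a toy one-parameter "simulator" and a Metropolis random walk; the linear model, prior bounds, noise level, and chain settings below are illustrative stand-ins for Biome-BGC and the flux tower data, not the study's actual setup.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulator(theta, drivers):
    """Toy stand-in for a process simulator: output = theta * drivers."""
    return theta * drivers

# synthetic 'flux tower' observations from a known true parameter (2.5)
drivers = np.linspace(1.0, 5.0, 50)
obs = simulator(2.5, drivers) + rng.normal(0.0, 0.5, drivers.size)

def log_posterior(theta):
    if not (0.0 < theta < 10.0):             # uniform prior bounds
        return -np.inf
    resid = obs - simulator(theta, drivers)
    return -0.5 * np.sum(resid**2) / 0.5**2  # Gaussian likelihood, sigma=0.5

# Metropolis random walk over the parameter
theta, lp = 1.0, log_posterior(1.0)
chain = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.1)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)

posterior_mean = float(np.mean(chain[1000:]))  # discard burn-in
```

    The post-burn-in chain approximates the parameter posterior, so it yields both a calibrated estimate and the uncertainty band that the study propagates into the simulated GPP.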

  9. Bayesian integration of flux tower data into a process-based simulator for quantifying uncertainty in simulated output

    Directory of Open Access Journals (Sweden)

    R. Raj

    2018-01-01

    Full Text Available Parameters of a process-based forest growth simulator are difficult or impossible to obtain from field observations. Reliable estimates can be obtained using calibration against observations of output and state variables. In this study, we present a Bayesian framework to calibrate the widely used process-based simulator Biome-BGC against estimates of gross primary production (GPP) data. We used GPP partitioned from flux tower measurements of a net ecosystem exchange over a 55-year-old Douglas fir stand as an example. The uncertainties of both the Biome-BGC parameters and the simulated GPP values were estimated. The calibrated parameters leaf and fine root turnover (LFRT), ratio of fine root carbon to leaf carbon (FRC : LC), ratio of carbon to nitrogen in leaf (C : Nleaf), canopy water interception coefficient (Wint), fraction of leaf nitrogen in RuBisCO (FLNR), and effective soil rooting depth (SD) characterize the photosynthesis and carbon and nitrogen allocation in the forest. The calibration improved the root mean square error and enhanced Nash–Sutcliffe efficiency between simulated and flux tower daily GPP compared to the uncalibrated Biome-BGC. Nevertheless, the seasonal cycle for flux tower GPP was not reproduced exactly and some overestimation in spring and underestimation in summer remained after calibration. We hypothesized that the phenology exhibited a seasonal cycle that was not accurately reproduced by the simulator. We investigated this by calibrating the Biome-BGC to each month's flux tower GPP separately. As expected, the simulated GPP improved, but the calibrated parameter values suggested that the seasonal cycle of state variables in the simulator could be improved. It was concluded that the Bayesian framework for calibration can reveal features of the modelled physical processes and identify aspects of the process simulator that are too rigid.

  10. Guidelines for Reproducibly Building and Simulating Systems Biology Models.

    Science.gov (United States)

    Medley, J Kyle; Goldberg, Arthur P; Karr, Jonathan R

    2016-10-01

    Reproducibility is the cornerstone of the scientific method. However, currently, many systems biology models cannot easily be reproduced. This paper presents methods that address this problem. We analyzed the recent Mycoplasma genitalium whole-cell (WC) model to determine the requirements for reproducible modeling. We determined that reproducible modeling requires both repeatable model building and repeatable simulation. New standards and simulation software tools are needed to enhance and verify the reproducibility of modeling. New standards are needed to explicitly document every data source and assumption, and new deterministic parallel simulation tools are needed to quickly simulate large, complex models. We anticipate that these new standards and software will enable researchers to reproducibly build and simulate more complex models, including WC models.

  11. First principles simulation of amorphous InSb

    Science.gov (United States)

    Los, Jan H.; Kühne, Thomas D.; Gabardi, Silvia; Bernasconi, Marco

    2013-05-01

    Ab initio molecular dynamics simulations based on density functional theory have been performed to generate a model of amorphous InSb by quenching from the melt. The resulting network is mostly tetrahedral with a minor fraction (10%) of atoms in a fivefold coordination. The structural properties are in good agreement with available x-ray diffraction and extended x-ray-absorption fine structure data and confirm the proposed presence of a sizable fraction of homopolar In-In and Sb-Sb bonds whose concentration in our model amounts to about 20% of the total number of bonds.

  12. Optimal Design of Experiments by Combining Coarse and Fine Measurements

    Science.gov (United States)

    Lee, Alpha A.; Brenner, Michael P.; Colwell, Lucy J.

    2017-11-01

    In many contexts, it is extremely costly to perform enough high-quality experimental measurements to accurately parametrize a predictive quantitative model. However, it is often much easier to carry out large numbers of experiments that indicate whether each sample is above or below a given threshold. Can many such categorical or "coarse" measurements be combined with a much smaller number of high-resolution or "fine" measurements to yield accurate models? Here, we demonstrate an intuitive strategy, inspired by statistical physics, wherein the coarse measurements are used to identify the salient features of the data, while the fine measurements determine the relative importance of these features. A linear model is inferred from the fine measurements, augmented by a quadratic term that captures the correlation structure of the coarse data. We illustrate our strategy by considering the problems of predicting the antimalarial potency and aqueous solubility of small organic molecules from their 2D molecular structure.
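    A loose sketch of the strategy, under simplifying assumptions (a synthetic linear response and a simple threshold screen): the covariance of the coarse "hits" supplies a quadratic feature that augments a linear fit to the few fine measurements. All names, sizes, and parameter choices below are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# synthetic data: 5 descriptors, true linear response (illustrative)
n, d = 500, 5
X = rng.standard_normal((n, d))
w_true = np.array([1.0, -0.5, 0.0, 0.3, 0.0])
y = X @ w_true + 0.1 * rng.standard_normal(n)

# coarse stage: cheap binary screen (is the sample above a threshold?)
coarse_hits = X[y > 0.5]

# correlation structure of the hit population flags the salient directions
C = np.cov(coarse_hits, rowvar=False)

# fine stage: only 40 expensive measurements; augment each sample with the
# quadratic score x^T C x capturing the coarse correlation structure
idx = rng.choice(n, size=40, replace=False)
quad = np.einsum('ij,jk,ik->i', X[idx], C, X[idx])
A = np.column_stack([X[idx], quad, np.ones(idx.size)])
coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)

# predict for every sample using the combined linear + quadratic model
pred = np.column_stack([X, np.einsum('ij,jk,ik->i', X, C, X), np.ones(n)]) @ coef
```

    The fine measurements fix the linear coefficients while the coarse-derived quadratic term carries the correlation information, which is the division of labor the paper advocates.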

  13. Improving dynamic global vegetation model (DGVM) simulation of western U.S. rangelands vegetation seasonal phenology and productivity

    Science.gov (United States)

    Kerns, B. K.; Kim, J. B.; Day, M. A.; Pitts, B.; Drapek, R. J.

    2017-12-01

    Ecosystem process models are increasingly being used in regional assessments to explore potential changes in future vegetation and NPP due to climate change. We use the dynamic global vegetation model MAPSS-Century 2 (MC2) as one line of evidence for regional climate change vulnerability assessments for the US Forest Service, fine-tuning the model calibration using observational sources related to forest vegetation. However, there is much interest in understanding projected changes for arid rangelands in the western US, such as grasslands, shrublands, and woodlands. Rangelands provide many ecosystem service benefits, sustain local rural communities, offer habitat for threatened and endangered species, and are threatened by annual grass invasion. Past work suggested that MC2 performance for arid rangeland plant functional types (PFTs) was poor, and the model has difficulty distinguishing annual from perennial grasslands. Our objectives are to increase model performance for rangeland simulations and explore the potential for splitting the grass plant functional type into annual and perennial types. We used the tri-state Blue Mountain Ecoregion as our study area, together with maps of potential vegetation from interpolated ground data, the National Land Cover Database, and ancillary NPP data derived from the MODIS satellite. MC2 historical simulations for the area overestimated woodland occurrence and underestimated the shrubland and grassland PFTs. The spatial locations of the rangeland PFTs also often did not align well with observational data. While some disagreement may be due to differences in the respective classification rules, the errors are largely linked to MC2's tree and grass biogeography and physiology algorithms. Presently, only grass and forest productivity measures and carbon stocks are used to distinguish PFTs. MC2 grass and tree productivity simulation is problematic, in particular grass seasonal phenology in relation to seasonal patterns

  14. Source contributions to atmospheric fine carbon particle concentrations

    Science.gov (United States)

    Andrew Gray, H.; Cass, Glen R.

    A Lagrangian particle-in-cell air quality model has been developed that facilitates the study of source contributions to atmospheric fine elemental carbon and fine primary total carbon particle concentrations. Model performance was tested using spatially and temporally resolved emissions and air quality data gathered for this purpose in the Los Angeles area for the year 1982. It was shown that black elemental carbon (EC) particle concentrations in that city were dominated by emissions from diesel engines including both on-highway and off-highway applications. Fine primary total carbon particle concentrations (TC=EC+organic carbon) resulted from the accumulation of small increments from a great variety of emission source types including both gasoline and diesel powered highway vehicles, stationary source fuel oil and gas combustion, industrial processes, paved road dust, fireplaces, cigarettes and food cooking (e.g. charbroilers). Strategies for black elemental carbon particle concentration control will of necessity need to focus on diesel engines, while controls directed at total carbon particle concentrations will have to be diversified over a great many source types.

  15. Evaluation and comparison of models and modelling tools simulating nitrogen processes in treatment wetlands

    DEFF Research Database (Denmark)

    Edelfeldt, Stina; Fritzson, Peter

    2008-01-01

    In this paper, two ecological models of nitrogen processes in treatment wetlands have been evaluated and compared. These models were implemented, simulated, and visualized using the Modelica modelling and simulation language [P. Fritzson, Principles of Object-Oriented Modelling and Simulation with Modelica 2.1 (Wiley-IEEE Press, USA, 2004)] and an associated tool. The differences and similarities between the MathModelica Model Editor and three other ecological modelling tools have also been evaluated. The results show that the models can be modelled and simulated well in the MathModelica Model Editor, and that nitrogen decrease in a constructed treatment wetland should be described and simulated using the Nitrification/Denitrification model, as this model has the highest overall quality score and provides a more variable environment.

  16. Simulation as a vehicle for enhancing collaborative practice models.

    Science.gov (United States)

    Jeffries, Pamela R; McNelis, Angela M; Wheeler, Corinne A

    2008-12-01

    Clinical simulation used in a collaborative practice approach is a powerful tool to prepare health care providers for shared responsibility for patient care. Clinical simulations are being used increasingly in professional curricula to prepare providers for quality practice. Little is known, however, about how these simulations can be used to foster collaborative practice across disciplines. This article provides an overview of what simulation is, what collaborative practice models are, and how to set up a model using simulations. An example of a collaborative practice model is presented, and nursing implications of using a collaborative practice model in simulations are discussed.

  17. Vermont Yankee simulator BOP model upgrade

    International Nuclear Information System (INIS)

    Alejandro, R.; Udbinac, M.J.

    2006-01-01

    The Vermont Yankee simulator has undergone significant changes in the 20 years since the original order was placed. After the move from the original Unix to MS Windows environment, and upgrade to the latest version of SimPort, now called MASTER, the platform was set for an overhaul and replacement of major plant system models. Over a period of a few months, the VY simulator team, in partnership with WSC engineers, replaced outdated legacy models of the main steam, condenser, condensate, circulating water, feedwater and feedwater heaters, and main turbine and auxiliaries. The timing was ideal, as the plant was undergoing a power up-rate, so the opportunity was taken to replace the legacy models with industry-leading, true on-line object oriented graphical models. Due to the efficiency of design and ease of use of the MASTER tools, VY staff performed the majority of the modeling work themselves with great success, with only occasional assistance from WSC, in a relatively short time-period, despite having to maintain all of their 'regular' simulator maintenance responsibilities. This paper will provide a more detailed view of the VY simulator, including how it is used and how it has benefited from the enhancements and upgrades implemented during the project. (author)

  18. Assessment of Psychophysiological Response and Specific Fine Motor Skills in Combat Units.

    Science.gov (United States)

    Sánchez-Molina, Joaquín; Robles-Pérez, José J; Clemente-Suárez, Vicente J

    2018-03-02

    Soldiers' training and experience can influence the outcome of missions, as well as their own physical integrity. The objective of this research was to analyze the psychophysiological response and specific motor skills in an urban combat simulation with two infantry units with different training and experience. Psychophysiological parameters (heart rate, blood oxygen saturation, glucose and blood lactate, cortical activation, anxiety and heart rate variability) as well as fine motor skills were analyzed in 31 male soldiers of the Spanish Army, 19 belonging to the Light Infantry Brigade and 12 to the Heavy Forces Infantry Brigade, before and after an urban combat simulation. A combat simulation provokes an alteration of the soldiers' psychophysiological basal state and a great imbalance in the sympathetic-vagal interaction. The specific training of the Light Infantry unit involves a lower metabolic, cardiovascular, and anxiogenic response than the Heavy Infantry unit's, not only before, but mainly after, a combat maneuver. No differences were found in fine motor skills, which improved in both cases after the maneuver. This fact should be taken into account to better prepare units for deployment in current theaters of operations.

  19. “Space, the Final Frontier”: How Good are Agent-Based Models at Simulating Individuals and Space in Cities?

    Directory of Open Access Journals (Sweden)

    Alison Heppenstall

    2016-01-01

    Full Text Available Cities are complex systems comprising many interacting parts. How we simulate and understand causality in urban systems is continually evolving. Over the last decade the agent-based modeling (ABM) paradigm has provided a new lens for understanding the effects of interactions of individuals and how, through such interactions, macro structures emerge, both in the social and physical environment of cities. However, such a paradigm has been hindered by limited computational power and a lack of large fine-scale datasets. Within the last few years we have witnessed a massive increase in computational processing power and storage, combined with the onset of Big Data. Today geographers find themselves in a data-rich era. We now have access to a variety of data sources (e.g., social media, mobile phone data) that tell us how, and when, individuals are using urban spaces. These data raise several questions: can we effectively use them to understand and model cities as complex entities? How well have ABM approaches lent themselves to simulating the dynamics of urban processes? What has been, or will be, the influence of Big Data on increasing our ability to understand and simulate cities? What is the appropriate level of spatial analysis and time frame to model urban phenomena? Within this paper we discuss these questions using several examples of ABM applied to urban geography to begin a dialogue about the utility of ABM for urban modeling. The arguments that the paper raises are applicable across the wider research environment where researchers are considering using this approach.

  20. Regional model simulations of New Zealand climate

    Science.gov (United States)

    Renwick, James A.; Katzfey, Jack J.; Nguyen, Kim C.; McGregor, John L.

    1998-03-01

    Simulation of New Zealand climate is examined through the use of a regional climate model nested within the output of the Commonwealth Scientific and Industrial Research Organisation nine-level general circulation model (GCM). R21 resolution GCM output is used to drive a regional model run at 125 km grid spacing over the Australasian region. The 125 km run is used in turn to drive a simulation at 50 km resolution over New Zealand. Simulations with a full seasonal cycle are performed for 10 model years. The focus is on the quality of the simulation of present-day climate, but results of a doubled-CO2 run are discussed briefly. Spatial patterns of mean simulated precipitation and surface temperatures improve markedly as horizontal resolution is increased, through the better resolution of the country's orography. However, increased horizontal resolution leads to a positive bias in precipitation. At 50 km resolution, simulated frequency distributions of daily maximum/minimum temperatures are statistically similar to those of observations at many stations, while frequency distributions of daily precipitation appear to be statistically different to those of observations at most stations. Modeled daily precipitation variability at 125 km resolution is considerably less than observed, but is comparable to, or exceeds, observed variability at 50 km resolution. The sensitivity of the simulated climate to changes in the specification of the land surface is discussed briefly. Spatial patterns of the frequency of extreme temperatures and precipitation are generally well modeled. Under a doubling of CO2, the frequency of precipitation extremes changes only slightly at most locations, while air frosts become virtually unknown except at high-elevation sites.

  1. Systematic modelling and simulation of refrigeration systems

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1998-01-01

    The task of developing a simulation model of a refrigeration system can be very difficult and time consuming. In order for this process to be effective, a systematic method for developing the system model is required. This method should aim at guiding the developer to clarify the purpose of the simulation, to select appropriate component models and to set up the equations in a well-arranged way. In this paper the outline of such a method is proposed and examples showing the use of this method for simulation of refrigeration systems are given.

  2. Effects of constraints on lattice re-orientation and strain in polycrystal plasticity simulations

    DEFF Research Database (Denmark)

    Haldrup, Martin Kristoffer; McGinty, R.D.; McDowell, D.L.

    2009-01-01

    -constraint simulations while fine scale element-resolved analysis shows large deviations from this prediction. Locally resolved analysis shows the existence of large domains dominated by slip on only a few slip systems. The modelling results are discussed in the light of recent experimental advances with respect to 2...

  3. Mars Exploration Rover Terminal Descent Mission Modeling and Simulation

    Science.gov (United States)

    Raiszadeh, Behzad; Queen, Eric M.

    2004-01-01

    Because of NASA's increased reliance on simulation for successful interplanetary missions, the MER mission developed a detailed EDL trajectory modeling and simulation capability. This paper summarizes how the MER EDL sequence of events is modeled, the verification of the methods used, and the inputs. The simulation is built upon a multibody parachute trajectory simulation tool, developed in POST II, that accurately simulates the trajectory of multiple vehicles in flight with interacting forces. In this model the parachute and the suspended bodies are treated as six-degree-of-freedom (6 DOF) bodies. The terminal descent phase of the mission consists of several entry, descent, and landing (EDL) events, such as parachute deployment, heatshield separation, deployment of the lander from the backshell, deployment of the airbags, RAD firings, and TIRS firings. For an accurate, reliable simulation these events need to be modeled seamlessly and robustly so that the simulations remain numerically stable during Monte Carlo runs. This paper also summarizes how these events have been modeled, the numerical issues, and the modeling challenges.

  4. Administration of Oxygen Ultra-Fine Bubbles Improves Nerve Dysfunction in a Rat Sciatic Nerve Crush Injury Model

    Directory of Open Access Journals (Sweden)

    Hozo Matsuoka

    2018-05-01

    Ultra-fine bubbles (<200 nm in diameter) have several unique properties and have been tested in various medical fields. The purpose of this study was to investigate the effects of oxygen ultra-fine bubbles (OUBs) on sciatic nerve crush injury (SNC) model rats. Rats were intraperitoneally injected with 1.5 mL saline, OUBs diluted in saline, or nitrogen ultra-fine bubbles (NUBs) diluted in saline three times per week for 4 weeks in four groups: (1) control (sham operation + saline); (2) SNC (crush + saline); (3) SNC+OUB (crush + OUB-saline); (4) SNC+NUB (crush + NUB-saline). The effects of the OUBs on dorsal root ganglion (DRG) neurons and Schwann cells (SCs) were examined by serial dilution of OUB medium in vitro. Sciatic functional index, paw withdrawal thresholds, nerve conduction velocity, and myelinated axons were significantly decreased in the SNC group compared to the control group; these parameters were significantly improved in the SNC+OUB group, although NUB treatment did not affect these parameters. In vitro, OUBs significantly promoted neurite outgrowth in DRG neurons by activating AKT signaling and SC proliferation by activating ERK1/2 and JNK/c-JUN signaling. OUBs may improve nerve dysfunction in SNC rats by promoting neurite outgrowth in DRG neurons and SC proliferation.

  5. Simulation - modeling - experiment

    International Nuclear Information System (INIS)

    2004-01-01

    After two workshops held in 2001 on the same topics, and in order to take stock of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state of the art of tools, methods and experiments in the domains of interest of the Gedepeon research group, and the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop, dealing with: deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); status of the MCNP/TRIO-U neutronics/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); perturbation methods and sensitivity analysis of nuclear data in reactor physics, applied to the VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high-energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); and neutron and light-particle production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F.).

  6. Simulation-Based Internal Models for Safer Robots

    Directory of Open Access Journals (Sweden)

    Christian Blum

    2018-01-01

    In this paper, we explore the potential of mobile robots with simulation-based internal models for safety in highly dynamic environments. We propose a robot with a simulation of itself, other dynamic actors and its environment, inside itself. Operating in real time, this simulation-based internal model is able to look ahead and predict the consequences of both the robot’s own actions and those of the other dynamic actors in its vicinity. Hence, the robot continuously modifies its own actions in order to actively maintain its own safety while also achieving its goal. Inspired by the problem of how mobile robots could move quickly and safely through crowds of moving humans, we present experimental results which compare the performance of our internal simulation-based controller with a purely reactive approach as a proof-of-concept study for the practical use of simulation-based internal models.

  7. Migration of cesium-137 through sandy soil layer effect of fine silt on migration

    International Nuclear Information System (INIS)

    Ohnuki, Toshihiko; Wadachi, Yoshiki

    1983-01-01

    The migration of 137Cs through a sandy soil layer was studied by the column method, with consideration of the accompanying migration of fine silt. It was found that a portion of the fine silt migrated through the soil layer along with 137Cs. A mathematical model of 137Cs migration that incorporates the migration of fine silt through such a soil layer is presented. The model gave good agreement between the calculated and observed concentration distribution curves in the sandy soil layer and effluent curves. It therefore appears to be an improved model for evaluating the migration of 137Cs in sandy soil layers containing silt. (author)

  8. Representation of fine scale atmospheric variability in a nudged limited area quasi-geostrophic model: application to regional climate modelling

    Science.gov (United States)

    Omrani, H.; Drobinski, P.; Dubos, T.

    2009-09-01

    In this work, we consider the effect of indiscriminate nudging time on the large and small scales of an idealized limited-area model simulation. The limited-area model (LAM) is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by its "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. Compared to a previous study by Salameh et al. (2009), who investigated the existence of an optimal nudging time minimizing the error on both large and small scales in a linear model, we here use a fully non-linear model which allows us to represent the chaotic nature of the atmosphere: given the perfect quasi-geostrophic model, errors in the initial conditions, concentrated mainly in the smaller scales of motion, amplify and cascade into the larger scales, eventually resulting in a prediction with low skill. To quantify the predictability of our quasi-geostrophic model, we measure the rate of divergence of the system trajectories in phase space (the Lyapunov exponent) from a set of simulations initiated with a perturbation of a reference initial state. Predictability of the "global", periodic model is mostly controlled by the beta effect. In the LAM, predictability decreases as the domain size increases. Then, the effect of large-scale nudging is studied using the "perfect model" approach. Two sets of experiments were performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic LAM, where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. In both sets of experiments, the best spatial correlation between the nudged simulation and the reference is observed with a nudging time close to the predictability time.
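
    The nudging described above is Newtonian relaxation: the model tendency is augmented by a term pulling the state toward the driving field at a rate set by the nudging time. A minimal sketch, using a toy scalar ODE in place of the quasi-geostrophic model (the decay-rate tendency and all constants are illustrative assumptions):

    ```python
    # Newtonian relaxation ("nudging") toward a driving field, sketched on a
    # toy scalar ODE. The -0.1*x tendency stands in for the model's own
    # dynamics; tau is the nudging time.

    def step(x, x_driver, dt, tau):
        """One explicit Euler step: own tendency plus relaxation toward driver."""
        tendency = -0.1 * x            # stand-in for the model's own dynamics
        nudge = (x_driver - x) / tau   # relaxation term, strength set by tau
        return x + dt * (tendency + nudge)

    def integrate(x0, x_driver, dt, tau, nsteps):
        x = x0
        for _ in range(nsteps):
            x = step(x, x_driver, dt, tau)
        return x

    # A short nudging time pins the solution to the driver; a long one leaves
    # the model's own dynamics nearly untouched.
    strong = integrate(x0=1.0, x_driver=0.0, dt=0.01, tau=0.05, nsteps=1000)
    weak = integrate(x0=1.0, x_driver=0.0, dt=0.01, tau=50.0, nsteps=1000)
    ```

    The trade-off studied in the abstract lives between these extremes: too short a nudging time suppresses the fine scales the LAM is supposed to add, too long a time lets trajectories diverge within the predictability time.
    
    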

  9. Assessment of the AquaCrop model in the simulation of durum wheat (Triticum aestivum L.) growth and yield under different water regimes in Tadla, Morocco

    Directory of Open Access Journals (Sweden)

    Bassou BOUAZZAM

    2017-09-01

    Simulation models that clarify the effects of water on crop yield are useful tools for improving farm-level water management and optimizing water use efficiency. In this study, AquaCrop was evaluated for the Karim genotype, the main durum winter wheat (Triticum aestivum L.) grown in Tadla. AquaCrop is based on a water-driven growth module, in which transpiration is converted into biomass through a water productivity parameter. The model was calibrated on data from a full irrigation treatment in 2014/15 and validated on other stressed and unstressed treatments, including rain-fed conditions, in 2014/15 and 2015/16. Results showed that the model provided excellent simulations of canopy cover, biomass and grain yield. Overall, the relationship between observed and modeled wheat grain yield for all treatments combined produced an R² of 0.79, a mean squared error of 1.01 t ha⁻¹ and an efficiency coefficient of 0.68. The model satisfactorily predicted the trend of the soil water reserve. Consequently, AquaCrop can be a valuable tool for simulating wheat grain yield in the Tadla plain, particularly considering that the model requires a relatively small number of input data. However, the performance of the model has to be fine-tuned under a wider range of conditions.
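
    The three goodness-of-fit statistics quoted above (R², mean squared error, efficiency coefficient, the latter commonly computed as Nash-Sutcliffe efficiency) can be sketched as follows; the observed/simulated yield values are made up for illustration and are not the Tadla data:

    ```python
    import numpy as np

    # Goodness-of-fit statistics of the kind reported for AquaCrop validation.
    # Data values below are hypothetical.

    def fit_stats(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        r = np.corrcoef(obs, sim)[0, 1]                     # Pearson correlation
        mse = np.mean((obs - sim) ** 2)                     # mean squared error
        # Nash-Sutcliffe efficiency: 1 = perfect, 0 = no better than the mean
        nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
        return r ** 2, mse, nse

    obs = [4.2, 3.1, 5.0, 2.4, 4.8]   # observed grain yield, t/ha (hypothetical)
    sim = [4.0, 3.4, 4.7, 2.9, 4.5]   # simulated grain yield, t/ha (hypothetical)
    r2, mse, nse = fit_stats(obs, sim)
    ```
    
    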

  10. Modeling and Simulation of Low Voltage Arcs

    NARCIS (Netherlands)

    Ghezzi, L.; Balestrero, A.

    2010-01-01

    Modeling and Simulation of Low Voltage Arcs is an attempt to improve the physical understanding, mathematical modeling and numerical simulation of the electric arcs that are found during current interruptions in low voltage circuit breakers. An empirical description is gained by refined electrical

  11. Repository simulation model: Final report

    International Nuclear Information System (INIS)

    1988-03-01

    This report documents the application of computer simulation to the design analysis of the nuclear waste repository's waste handling and packaging operations. The Salt Repository Simulation Model was used to evaluate design alternatives during the conceptual design phase of the Salt Repository Project. Code development and verification were performed by the Office of Nuclear Waste Isolation (ONWI). The focus of this report is to relate the experience gained during the development and application of the Salt Repository Simulation Model to future repository design phases. Design of the repository's waste handling and packaging systems will require sophisticated analysis tools to evaluate complex operational and logistical design alternatives. Selection of these design alternatives in the Advanced Conceptual Design (ACD) and License Application Design (LAD) phases must be supported by analysis to demonstrate that the repository design will cost-effectively meet DOE's mandated emplacement schedule and that uncertainties in the performance of the repository's systems have been objectively evaluated. Computer simulation of repository operations will provide future repository designers with data and insights that no other form of analysis can provide. 6 refs., 10 figs.

  12. Fining of Red Wine Monitored by Multiple Light Scattering.

    Science.gov (United States)

    Ferrentino, Giovanna; Ramezani, Mohsen; Morozova, Ksenia; Hafner, Daniela; Pedri, Ulrich; Pixner, Konrad; Scampicchio, Matteo

    2017-07-12

    This work describes a new approach based on multiple light scattering to study red wine clarification processes. The whole spectral signal (1933 backscattering points along the length of each sample vial) was fitted by a multivariate kinetic model built on a three-step mechanism, comprising (1) adsorption of wine colloids to fining agents, (2) aggregation into larger particles, and (3) sedimentation. Each step is characterized by a reaction rate constant. On the basis of the first reaction, the results showed that gelatin was the most efficient fining agent with respect to the main objective, the clarification of the wine and the consequent increase in its limpidity. This trend was also discussed in relation to the results obtained by nephelometry, total phenols, ζ-potential, color, sensory, and electronic nose analyses. Higher concentrations of the fining agent (from 5 to 30 g/100 L) or higher temperatures (from 10 to 20 °C) also sped up the process. Finally, the advantage of using the whole spectral signal over classical univariate approaches was demonstrated by comparing the uncertainty associated with the rate constants of the proposed kinetic model. Overall, the multiple light scattering technique showed great potential for studying fining processes compared with classical univariate approaches.
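
    The three-step mechanism above can be sketched as a chain of first-order reactions, each governed by one rate constant, integrated with explicit Euler steps. The rate constants and time scale here are invented for illustration and are not the fitted wine values:

    ```python
    # Toy integration of a three-step fining mechanism: adsorption (k1),
    # aggregation (k2), sedimentation (k3), treated as sequential first-order
    # reactions. All constants are hypothetical.

    def fining_kinetics(k1, k2, k3, dt=0.01, t_end=50.0):
        colloid, bound, aggregate, settled = 1.0, 0.0, 0.0, 0.0
        for _ in range(int(t_end / dt)):
            r1 = k1 * colloid      # colloids adsorb onto the fining agent
            r2 = k2 * bound        # bound material aggregates into larger particles
            r3 = k3 * aggregate    # aggregates sediment out of suspension
            colloid -= dt * r1
            bound += dt * (r1 - r2)
            aggregate += dt * (r2 - r3)
            settled += dt * r3
        return colloid, bound, aggregate, settled

    # Fraction in each state after the run; mass is conserved by construction.
    state = fining_kinetics(k1=0.5, k2=0.3, k3=0.2)
    ```

    In the paper's multivariate approach, the analogous rate constants are fitted jointly against the full backscattering profile rather than a single turbidity reading, which is what tightens their uncertainty.
    
    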

  13. Stochastic models to simulate paratuberculosis in dairy herds

    DEFF Research Database (Denmark)

    Nielsen, Søren Saxmose; Weber, M.F.; Kudahl, Anne Margrethe Braad

    2011-01-01

    Stochastic simulation models are widely accepted as a means of assessing the impact of changes in daily management and the control of different diseases, such as paratuberculosis, in dairy herds. This paper summarises and discusses the assumptions of four stochastic simulation models and their use. Although the models are somewhat different in their underlying principles and place slightly different values on the different strategies, their overall findings are similar. Therefore, simulation models may be useful in planning paratuberculosis control strategies in dairy herds, although, as with all models, caution is warranted.

  14. An efficient modeling method for thermal stratification simulation in a BWR suppression pool

    Energy Technology Data Exchange (ETDEWEB)

    Haihua Zhao; Ling Zou; Hongbin Zhang; Hua Li; Walter Villanueva; Pavel Kudinov

    2012-09-01

    The suppression pool in a BWR plant is not only the major heat sink within the containment system, but also the main source of emergency cooling water for the reactor core. In several accident scenarios, such as a LOCA or an extended station blackout, thermal stratification tends to form in the pool after the initial rapid venting stage. Accurately predicting pool stratification is important because it affects the peak containment pressure; the pool temperature distribution also affects the NPSHa (available net positive suction head) and therefore the performance of the pump which draws cooling water back to the core. Current safety analysis codes use 0-D lumped-parameter methods to calculate the energy and mass balance in the pool and therefore have large uncertainty in predicting scenarios in which stratification and mixing are important. While 3-D CFD methods can be used to analyze realistic 3-D configurations, they normally require very fine grid resolution to resolve thin substructures such as jets and wall boundaries, and therefore long simulation times. For mixing in stably stratified large enclosures, the BMIX++ code has been developed to implement a highly efficient analysis method for stratification, in which the ambient fluid volume is represented by 1-D transient partial differential equations and substructures such as free or wall jets are modeled with 1-D integral models. This allows very large reductions in computational effort compared to 3-D CFD modeling. The POOLEX experiments in Finland, which were designed to study phenomena relevant to the Nordic-design BWR suppression pool, including thermal stratification and mixing, are used for validation. GOTHIC lumped-parameter models are used to obtain boundary conditions for the BMIX++ code and the CFD simulations. Comparisons of the BMIX++, GOTHIC, and CFD calculations against the POOLEX experimental data are discussed in detail.
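
    The 1-D ambient-fluid idea above reduces the stably stratified pool to a vertical temperature profile advanced by a 1-D transient equation, which is why it is so much cheaper than 3-D CFD. A minimal sketch of that reduction, using pure vertical diffusion of a hot surface layer; the grid, diffusivity, and initial condition are illustrative assumptions, not BMIX++ code:

    ```python
    import numpy as np

    # 1-D vertical diffusion of a temperature profile, the simplest instance
    # of representing an ambient fluid volume by a 1-D transient PDE.

    def diffuse_profile(temp, kappa, dz, dt, nsteps):
        t = np.asarray(temp, float).copy()
        for _ in range(nsteps):
            lap = np.zeros_like(t)
            lap[1:-1] = (t[2:] - 2 * t[1:-1] + t[:-2]) / dz**2
            t += dt * kappa * lap  # end values held fixed (Dirichlet boundaries)
        return t

    # Hot layer near the top of a 1 m column, initially a sharp step.
    z = np.linspace(0.0, 1.0, 51)
    temp0 = np.where(z > 0.8, 80.0, 30.0)
    temp = diffuse_profile(temp0, kappa=1e-4, dz=z[1] - z[0], dt=0.5, nsteps=2000)
    ```

    A real stratification solver would add source terms for the jets and wall boundary layers from the 1-D integral models; the point here is only that the state is a single column, not a 3-D field.
    
    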

  15. Modelling and simulation of superalloys. Book of abstracts

    Energy Technology Data Exchange (ETDEWEB)

    Rogal, Jutta; Hammerschmidt, Thomas; Drautz, Ralf (eds.)

    2014-07-01

    Superalloys are multi-component materials with complex microstructures that offer unique properties for high-temperature applications. The complexity of the superalloy materials makes it particularly challenging to obtain fundamental insight into their behaviour from the atomic structure to turbine blades. Recent advances in modelling and simulation of superalloys contribute to a better understanding and prediction of materials properties and therefore offer guidance for the development of new alloys. This workshop will give an overview of recent progress in modelling and simulation of materials for superalloys, with a focus on single crystal Ni-base and Co-base alloys. Topics will include electronic structure methods, atomistic simulations, microstructure modelling and modelling of microstructural evolution, solidification and process simulation as well as the modelling of phase stability and thermodynamics.

  16. Analyses of fine paste ceramics

    International Nuclear Information System (INIS)

    Sabloff, J.A.

    1980-01-01

    Four chapters are included: history of Brookhaven fine paste ceramics project, chemical and mathematical procedures employed in Mayan fine paste ceramics project, and compositional and archaeological perspectives on the Mayan fine paste ceramics

  17. Minimum-complexity helicopter simulation math model

    Science.gov (United States)

    Heffley, Robert K.; Mnich, Marc A.

    1988-01-01

    An example of a minimal-complexity helicopter simulation math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling-qualities features which are apparent to the simulator pilot. The technical approach begins with specification of the features which are to be modeled, followed by a build-up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinements are discussed. Math model computer programs are defined and listed.

  18. Analyses of fine paste ceramics

    Energy Technology Data Exchange (ETDEWEB)

    Sabloff, J A [ed.

    1980-01-01

    Four chapters are included: history of Brookhaven fine paste ceramics project, chemical and mathematical procedures employed in Mayan fine paste ceramics project, and compositional and archaeological perspectives on the Mayan fine paste ceramics. (DLC)

  19. Simulation Models for Socioeconomic Inequalities in Health: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Niko Speybroeck

    2013-11-01

    Background: The emergence and evolution of socioeconomic inequalities in health involve multiple factors interacting with each other at different levels. Simulation models are suitable for studying such complex and dynamic systems and have the ability to test the impact of policy interventions in silico. Objective: To explore how simulation models have been used in the field of socioeconomic inequalities in health. Methods: An electronic search of studies assessing socioeconomic inequalities in health using a simulation model was conducted. Characteristics of the simulation models were extracted and distinct simulation approaches were identified. As an illustration, a simple agent-based model of the emergence of socioeconomic differences in alcohol abuse was developed. Results: We found 61 studies published between 1989 and 2013. Ten different simulation approaches were identified. The agent-based model illustration showed that multilevel, reciprocal and indirect effects of social determinants on health can be modeled flexibly. Discussion and Conclusions: Based on the review, we discuss the utility of simulation models for studying health inequalities and refer to good modeling practices for developing such models. The review and the simulation model example suggest that the use of simulation models may enhance the understanding of, and debate about, existing and new frameworks for socioeconomic inequalities in health.
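
    An agent-based illustration of the kind described above can be very small: agents with a socioeconomic status (SES) and a health-related state, where SES modulates a transition probability, so a group-level inequality emerges from individual stochastic events. Everything below (parameters, mechanism) is invented for illustration and is not the authors' model:

    ```python
    import random

    # Toy agent-based model: low-SES agents face a higher incidence of alcohol
    # abuse; recovery is SES-independent. All rates are hypothetical.

    def run_abm(n_agents=1000, steps=50, seed=42):
        rng = random.Random(seed)
        agents = [{"low_ses": rng.random() < 0.5, "abuse": False}
                  for _ in range(n_agents)]
        for _ in range(steps):
            for a in agents:
                risk = 0.02 if a["low_ses"] else 0.005  # SES-dependent incidence
                recovery = 0.05                          # SES-independent recovery
                if not a["abuse"] and rng.random() < risk:
                    a["abuse"] = True
                elif a["abuse"] and rng.random() < recovery:
                    a["abuse"] = False

        def prevalence(group):
            return sum(a["abuse"] for a in group) / max(len(group), 1)

        low = [a for a in agents if a["low_ses"]]
        high = [a for a in agents if not a["low_ses"]]
        return prevalence(low), prevalence(high)

    low_prev, high_prev = run_abm()
    ```

    Even this stripped-down version reproduces the qualitative point of the illustration: a population-level health gradient emerges without any agent "knowing" about inequality.
    
    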

  20. Automatic Learning of Fine Operating Rules for Online Power System Security Control.

    Science.gov (United States)

    Sun, Hongbin; Zhao, Feng; Wang, Hao; Wang, Kang; Jiang, Weiyong; Guo, Qinglai; Zhang, Boming; Wehenkel, Louis

    2016-08-01

    Fine operating rules for security control, and an automatic system for their online discovery, were developed to adapt to the development of smart grids. The automatic system uses the real-time system state to determine critical flowgates, and a continuation power flow-based security analysis is then used to compute the initial transfer capability of the critical flowgates. Next, the system applies Monte Carlo simulations of expected short-term operating-condition changes, feature selection, and a linear least-squares fitting of the fine operating rules. The proposed system was validated both on an academic test system and on a provincial power system in China. The results indicated that the derived rules are accurate, offer good interpretability, and are suitable for real-time power system security control. The use of high-performance computing enables these fine operating rules to be refreshed online every 15 min.
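
    The final fitting stage described above can be sketched as an ordinary linear least-squares regression of Monte Carlo samples of operating-condition features against transfer capability. The feature model, coefficients, and noise level below are invented; the real system obtains the samples from power-flow security analysis:

    ```python
    import numpy as np

    # Fit a linear "fine operating rule": capability ~ features @ w + b,
    # from Monte Carlo samples. All numbers are synthetic.

    rng = np.random.default_rng(0)
    n_samples, n_features = 500, 3
    features = rng.normal(size=(n_samples, n_features))   # e.g. loads, flows
    true_coef = np.array([1.5, -0.8, 0.3])                # hypothetical ground truth
    capability = (features @ true_coef + 10.0
                  + rng.normal(scale=0.01, size=n_samples))

    # Least squares with an intercept column appended.
    X = np.hstack([features, np.ones((n_samples, 1))])
    w, *_ = np.linalg.lstsq(X, capability, rcond=None)
    ```

    The interpretability claimed in the abstract comes precisely from this linear form: each coefficient directly states how a monitored feature moves the flowgate's transfer capability.
    
    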

  1. Research on the fundamental process of thermal-hydraulic behaviors in severe accident. Behavior of fine droplet flow. JAERI's nuclear research promotion program, H10-027-7. Contract research

    International Nuclear Information System (INIS)

    Kataoka, Isao; Yoshida, Kenji; Matsuura, Keizo

    2002-03-01

    Analytical and experimental research was carried out on the behavior of fine droplet flow in relation to fundamental thermohydraulic phenomena in severe accidents. A simulation program for fine droplet behavior in turbulent gas flow was developed based on the eddy interaction model, with improvements to Graham's stochastic model of eddy lifetime and eddy size. The program can also simulate droplet behavior in annular dispersed flow, based on models of droplet entrainment from the liquid film and of turbulence modification of the gas phase by the liquid film. The program was validated against various experimental data on droplet diffusion and deposition, and was further applied to three-dimensional droplet flow with satisfactory agreement with experimental data. This means the developed program can be used as a simulation tool for severe accident analysis. Experimental research was carried out on the effect of the liquid film on the turbulence field of the gas flow in annular and annular dispersed flow. Averaged and turbulent velocities of the gas phase were measured under various gas and liquid film flow rates. The turbulent velocity of the gas phase in annular flow increased compared with single-phase gas flow, owing to turbulence generation by waves in the liquid film. Corresponding to this turbulence modification, the distribution of the averaged gas-phase velocity became flattened compared with single-phase gas flow. (author)
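
    The core of an eddy interaction model is simple to sketch: a tracked droplet encounters a sequence of turbulent eddies, each contributing a random velocity fluctuation for one eddy lifetime, so an ensemble of droplets disperses diffusively. The constants (fluctuation RMS, fixed eddy lifetime) are illustrative assumptions; the improved Graham-type model in the abstract draws eddy lifetime and size stochastically as well:

    ```python
    import random

    # Bare-bones eddy interaction sketch: each droplet follows a sequence of
    # eddies with Gaussian velocity fluctuations and a fixed lifetime.

    def track_droplets(n_droplets=2000, t_end=1.0, u_rms=0.5, eddy_life=0.05,
                       seed=1):
        rng = random.Random(seed)
        positions = []
        for _ in range(n_droplets):
            x, t = 0.0, 0.0
            while t < t_end:
                u = rng.gauss(0.0, u_rms)        # eddy velocity fluctuation
                dt = min(eddy_life, t_end - t)   # interact for one eddy lifetime
                x += u * dt                      # droplet follows the eddy
                t += dt
            positions.append(x)
        return positions

    pos = track_droplets()
    mean_x = sum(pos) / len(pos)
    var_x = sum((x - mean_x) ** 2 for x in pos) / len(pos)
    ```

    A full model would add droplet inertia (so heavy droplets only partially follow the eddy) and the near-wall deposition and entrainment terms mentioned in the abstract.
    
    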

  2. The potential of space exploration for the fine arts

    Science.gov (United States)

    Mclaughlin, William I.

    1993-01-01

    Art provides an integrating function between the 'upper' and 'lower' centers of the human psyche. The nature of this function can be made more specific through the triune model of the brain. The evolution of the fine arts - painting, drawing, architecture, sculpture, literature, music, dance, and drama, plus cinema and mathematics-as-a-fine-art - are examined in the context of their probable stimulations by space exploration: near term and long term.

  3. Bridging the scales in atmospheric composition simulations using a nudging technique

    Science.gov (United States)

    D'Isidoro, Massimo; Maurizi, Alberto; Russo, Felicita; Tampieri, Francesco

    2010-05-01

    Studying the interaction between climate and anthropogenic activities, specifically those concentrated in megacities/hot spots, requires the description of processes over a very wide range of scales, from the local scale, where anthropogenic emissions are concentrated, to the global scale, where we are interested in studying the impact of these sources. Describing all processes at all scales within the same numerical implementation is not feasible because of limited computer resources. Therefore, different phenomena are studied by means of different numerical models that cover different ranges of scales. The exchange of information from small to large scales is highly non-trivial, though of high interest. In fact, uncertainties in large-scale simulations are expected to receive a large contribution from the most polluted areas, where the highly inhomogeneous distribution of sources, combined with the intrinsic non-linearity of the processes involved, can generate non-negligible departures between coarse- and fine-scale simulations. In this work a new method is proposed and investigated in a case study (August 2009) using the BOLCHEM model. Monthly simulations at coarse (0.5°, European domain; run A) and fine (0.1°, central Mediterranean domain; run B) horizontal resolution are performed, using the coarse resolution as the boundary condition for the fine one. Another coarse-resolution run (run C) is then performed, in which the high-resolution fields, remapped onto the coarse grid, are used to nudge the concentrations over the Po Valley area. The nudging is applied to all gas and aerosol species of BOLCHEM. Averaged concentrations and variances over the Po Valley and other selected areas are computed for O3 and PM. It is observed that although the variance of run B is markedly larger than that of run A, the variance of run C is smaller, because the remapping procedure removes a large portion of the variance from the run B fields. Mean concentrations show some differences depending on species: in general, mean
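
    The remapping step discussed above, and why it removes variance, can be shown with a block average: fine-grid cells are averaged onto the coarse grid, which preserves the mean but filters out sub-grid variability before the coarse run is nudged toward it. The synthetic field and grid sizes below are illustrative, not BOLCHEM data:

    ```python
    import numpy as np

    # Block-average a fine 2-D field onto a coarse grid by an integer factor,
    # the conservative remap used schematically here (0.1 deg -> 0.5 deg).

    def remap_to_coarse(field, factor):
        ny, nx = field.shape
        return field.reshape(ny // factor, factor,
                             nx // factor, factor).mean(axis=(1, 3))

    rng = np.random.default_rng(3)
    fine = rng.normal(loc=50.0, scale=10.0, size=(100, 100))  # synthetic tracer
    coarse = remap_to_coarse(fine, factor=5)
    ```

    This is the mechanism behind the run C result: the nudging target inherits the fine run's mean state but only the part of its variance resolvable on the coarse grid.
    
    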

  4. A global Fine-Root Ecology Database to address below-ground challenges in plant ecology.

    Science.gov (United States)

    Iversen, Colleen M; McCormack, M Luke; Powell, A Shafer; Blackwood, Christopher B; Freschet, Grégoire T; Kattge, Jens; Roumet, Catherine; Stover, Daniel B; Soudzilovskaia, Nadejda A; Valverde-Barrantes, Oscar J; van Bodegom, Peter M; Violle, Cyrille

    2017-07-01

    Variation and tradeoffs within and among plant traits are increasingly being harnessed by empiricists and modelers to understand and predict ecosystem processes under changing environmental conditions. While fine roots play an important role in ecosystem functioning, fine-root traits are underrepresented in global trait databases. This has hindered efforts to analyze fine-root trait variation and link it with plant function and environmental conditions at a global scale. This Viewpoint addresses the need for a centralized fine-root trait database, and introduces the Fine-Root Ecology Database (FRED, http://roots.ornl.gov) which so far includes > 70 000 observations encompassing a broad range of root traits and also includes associated environmental data. FRED represents a critical step toward improving our understanding of below-ground plant ecology. For example, FRED facilitates the quantification of variation in fine-root traits across root orders, species, biomes, and environmental gradients while also providing a platform for assessments of covariation among root, leaf, and wood traits, the role of fine roots in ecosystem functioning, and the representation of fine roots in terrestrial biosphere models. Continued input of observations into FRED to fill gaps in trait coverage will improve our understanding of changes in fine-root traits across space and time. © 2017 UT-Battelle LLC. New Phytologist © 2017 New Phytologist Trust.

  5. Developing Cognitive Models for Social Simulation from Survey Data

    Science.gov (United States)

    Alt, Jonathan K.; Lieberman, Stephen

    The representation of human behavior and cognition continues to challenge the modeling and simulation community. The use of survey and polling instruments to inform belief states, issue stances and action-choice models provides a compelling means of developing models and simulations with empirical data. Using these types of data to populate social simulations can greatly enhance the feasibility of validation efforts, the reusability of social and behavioral modeling frameworks, and the testable reliability of simulations. We provide a case study demonstrating these effects, document the use of survey data to develop cognitive models, and suggest future paths forward for social and behavioral modeling.

  6. Modeling and simulation with operator scaling

    OpenAIRE

    Cohen, Serge; Meerschaert, Mark M.; Rosiński, Jan

    2010-01-01

    Self-similar processes are useful in modeling diverse phenomena that exhibit scaling properties. Operator scaling allows a different scale factor in each coordinate. This paper develops practical methods for modeling and simulating stochastic processes with operator scaling. A simulation method for operator stable Levy processes is developed, based on a series representation, along with a Gaussian approximation of the small jumps. Several examples are given to illustrate practical application...

  7. Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2009-01-01

    This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial
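The classic designs mentioned above can be illustrated with a minimal sketch (an assumed example, not Kleijnen's own code): a 2^(3-1) resolution-III fractional-factorial design estimates the gradient of a first-order polynomial metamodel with only four simulation runs.

```python
# 2^(3-1) resolution-III design: the third column is generated as C = A*B,
# so only 4 of the 8 full-factorial runs are needed.
design = [(-1, -1, +1),
          (+1, -1, -1),
          (-1, +1, -1),
          (+1, +1, +1)]

def simulate(a, b, c):
    """Stand-in for the expensive simulation model (hypothetical response)."""
    return 10 + 2.0 * a - 1.5 * b + 0.5 * c

ys = [simulate(*run) for run in design]

# The design columns are orthogonal, so each first-order polynomial
# coefficient (a gradient component estimate) is a simple contrast.
def coefficient(col):
    return sum(x * y for (x, y) in zip(col, ys)) / len(ys)

cols = list(zip(*design))
gradient = [coefficient(c) for c in cols]  # recovers (2.0, -1.5, 0.5)
```

With a noisy simulator the same contrasts would be least-squares estimates rather than exact recoveries.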

  8. A large-scale forest landscape model incorporating multi-scale processes and utilizing forest inventory data

    Science.gov (United States)

    Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian. Yang

    2013-01-01

    Two challenges confronting forest landscape models (FLMs) are how to simulate fine, standscale processes while making large-scale (i.e., .107 ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...

  9. Indirect estimation of absorption properties for fine aerosol particles using AATSR observations: a case study of wildfires in Russia in 2010

    Science.gov (United States)

    Rodriguez, E.; Kolmonen, P.; Virtanen, T. H.; Sogacheva, L.; Sundstrom, A.-M.; de Leeuw, G.

    2015-08-01

The Advanced Along-Track Scanning Radiometer (AATSR) on board the ENVISAT satellite is used to study aerosol properties. The retrieval of aerosol properties from satellite data is based on the optimized fit of simulated and measured reflectances at the top of the atmosphere (TOA). The simulations are made using a radiative transfer model with a variety of representative aerosol properties. The retrieval process utilizes a combination of four aerosol components, each of which is defined by its (lognormal) size distribution and a complex refractive index: a weakly and a strongly absorbing fine-mode component, coarse-mode sea salt aerosol, and coarse-mode desert dust aerosol. These components are externally mixed to provide the aerosol model, which in turn is used to calculate the aerosol optical depth (AOD). In the AATSR aerosol retrieval algorithm, the mixing of these components is decided by minimizing the error function given by the sum of the differences between measured and calculated path radiances at 3-4 wavelengths, where the path radiances are varied by varying the aerosol component mixing ratios. The continuous variation of the fine-mode components allows for the continuous variation of the fine-mode aerosol absorption. Assuming that the correct aerosol model (i.e. the correct mixing fractions of the four components) is selected during the retrieval process, other aerosol properties can also be computed, such as the single scattering albedo (SSA). Implications of this assumption regarding the ratio of the weakly/strongly absorbing fine-mode fraction are investigated in this paper by evaluating the validity of the SSA thus obtained. The SSA is indirectly estimated for aerosol plumes with moderate-to-high AOD resulting from wildfires in Russia in the summer of 2010. Together with the AOD, the SSA provides the aerosol absorbing optical depth (AAOD). The results are compared with AERONET data, i.e. AOD level 2.0 and SSA and AAOD inversion products. 
The RMSE

  10. Protein Simulation Data in the Relational Model.

    Science.gov (United States)

    Simms, Andrew M; Daggett, Valerie

    2012-10-01

High performance computing is leading to unprecedented volumes of data. Relational databases offer a robust and scalable model for storing and analyzing scientific data. However, these features do not come without a cost: significant design effort is required to build a functional and efficient repository. Modeling protein simulation data in a relational database presents several challenges: the data captured from individual simulations are large, multi-dimensional, and must integrate with both simulation software and external data sites. Here we present the dimensional design and relational implementation of a comprehensive data warehouse for storing and analyzing molecular dynamics simulations using SQL Server.
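A toy version of such a dimensional (star-schema) design can be sketched with SQLite. The table and column names below are illustrative assumptions, not the authors' actual SQL Server warehouse schema: one dimension table describes each simulation, and a fact table holds per-frame measurements.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_simulation (
    sim_id      INTEGER PRIMARY KEY,
    protein     TEXT NOT NULL,
    temperature REAL NOT NULL
);
CREATE TABLE fact_frame (
    sim_id  INTEGER NOT NULL REFERENCES dim_simulation(sim_id),
    frame   INTEGER NOT NULL,
    rmsd    REAL NOT NULL,
    PRIMARY KEY (sim_id, frame)
);
""")
conn.execute("INSERT INTO dim_simulation VALUES (1, '1UBQ', 298.0)")
conn.executemany("INSERT INTO fact_frame VALUES (1, ?, ?)",
                 [(i, 0.1 * i) for i in range(5)])

# A typical analysis query: average RMSD per simulation.
row = conn.execute("""
    SELECT s.protein, AVG(f.rmsd)
    FROM fact_frame f JOIN dim_simulation s USING (sim_id)
    GROUP BY s.sim_id
""").fetchone()
```

The fact/dimension split is what makes aggregate queries over millions of frames cheap: the wide descriptive attributes live once per simulation, not once per frame.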

  11. GPU-accelerated depth map generation for X-ray simulations of complex CAD geometries

    Science.gov (United States)

    Grandin, Robert J.; Young, Gavin; Holland, Stephen D.; Krishnamurthy, Adarsh

    2018-04-01

Interactive x-ray simulations of complex computer-aided design (CAD) models can provide valuable insights for better interpretation of defect signatures, such as porosity, in x-ray CT images. Generating the depth map along a particular direction for a given CAD geometry is the most compute-intensive step in x-ray simulations. We have developed a GPU-accelerated method for real-time generation of depth maps of complex CAD geometries. We preprocess complex components designed using commercial CAD systems with a custom CAD module and convert them into a fine user-defined surface tessellation. Our CAD module can be used by different simulators and can handle complex geometries, including those that arise from complex castings and composite structures. We then make use of a parallel algorithm that runs on a graphics processing unit (GPU) to convert the finely tessellated CAD model to a voxelized representation. The voxelized representation enables heterogeneous modeling of the volume enclosed by the CAD model by assigning heterogeneous material properties to specific regions. The depth maps are generated from this voxelized representation with the help of a GPU-accelerated ray-casting algorithm. The GPU-accelerated ray-casting method enables interactive (> 60 frames-per-second) generation of the depth maps of complex CAD geometries. This enables arbitrary rotation and slicing of the CAD model, leading to better interpretation of the x-ray images by the user. In addition, the depth maps can be used to aid directly in CT reconstruction algorithms.
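The core depth-map step can be illustrated with a CPU-only toy (a hypothetical 8x8x8 grid, far simpler than the paper's GPU ray caster): for each (x, y) pixel, march a ray along +z through the voxelized part and record the depth index of the first occupied voxel.

```python
NX, NY, NZ = 8, 8, 8
voxels = [[[False] * NZ for _ in range(NY)] for _ in range(NX)]

# Occupy a small block to stand in for a voxelized CAD component.
for x in range(2, 6):
    for y in range(2, 6):
        for z in range(3, 7):
            voxels[x][y][z] = True

def depth_map(vox):
    """Depth of the first occupied voxel along +z per pixel (None = miss)."""
    dm = [[None] * NY for _ in range(NX)]
    for x in range(NX):
        for y in range(NY):
            for z in range(NZ):          # ray marching along +z
                if vox[x][y][z]:
                    dm[x][y] = z
                    break
    return dm

dm = depth_map(voxels)
```

On a GPU the outer two loops become one thread per pixel, which is what makes the >60 fps interactive rates reported above plausible for much larger grids.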

  12. A Companion Model Approach to Modelling and Simulation of Industrial Processes

    International Nuclear Information System (INIS)

    Juslin, K.

    2005-09-01

Modelling and simulation provides for huge possibilities if broadly taken up by engineers as a working method. However, when considering the launching of modelling and simulation tools in an engineering design project, they shall be easy to learn and use. Then, there is no time to write equations, to consult suppliers' experts, or to manually transfer data from one tool to another. The answer seems to be in the integration of easy-to-use and dependable simulation software with engineering tools. Accordingly, the modelling and simulation software shall accept as input such structured design information on industrial unit processes and their connections as is provided by e.g. CAD software and product databases. The software technology, including the required specification and communication standards, is already available. Internet-based service repositories make it possible for equipment manufacturers to supply 'extended products', including such design data as is needed by engineers engaged in process and automation integration. There is a market niche evolving for simulation service centres, operating in co-operation with project consultants, equipment manufacturers, process integrators, automation designers, plant operating personnel, and maintenance centres. The companion model approach for specification and solution of process simulation models, as presented herein, is developed from the above premises. The focus is on how to tackle real-world processes, which from the modelling point of view are heterogeneous, dynamic, very stiff, very nonlinear and only piecewise continuous, without extensive manual interventions by human experts. An additional challenge, to solve the arising equations quickly and reliably, is dealt with as well. (orig.)

  13. Macro Level Simulation Model Of Space Shuttle Processing

    Science.gov (United States)

    2000-01-01

    The contents include: 1) Space Shuttle Processing Simulation Model; 2) Knowledge Acquisition; 3) Simulation Input Analysis; 4) Model Applications in Current Shuttle Environment; and 5) Model Applications for Future Reusable Launch Vehicles (RLV's). This paper is presented in viewgraph form.

  14. Nuclear reactor core modelling in multifunctional simulators

    International Nuclear Information System (INIS)

    Puska, E.K.

    1999-01-01

The thesis concentrates on the development of nuclear reactor core models for the APROS multifunctional simulation environment and the use of the core models in various kinds of applications. The work was started in 1986 as a part of the development of the entire APROS simulation system. The aim was to create core models that would serve in a reliable manner in an interactive, modular and multifunctional simulator/plant analyser environment. One-dimensional and three-dimensional core neutronics models have been developed. Both models have two energy groups and six delayed neutron groups. The three-dimensional finite difference type core model is able to describe both BWR- and PWR-type cores with quadratic fuel assemblies and VVER-type cores with hexagonal fuel assemblies. The one- and three-dimensional core neutronics models can be connected with the homogeneous, the five-equation or the six-equation thermal hydraulic models of APROS. The key feature of APROS is that the same physical models can be used in various applications. The nuclear reactor core models of APROS have been built in such a manner that the same models can be used in simulator and plant analyser applications, as well as in safety analysis. In the APROS environment the user can select the number of flow channels in the three-dimensional reactor core and either the homogeneous, the five- or the six-equation thermal hydraulic model for these channels. The thermal hydraulic model and the number of flow channels have a decisive effect on the calculation time of the three-dimensional core model and thus, at present, these particular selections make the major difference between a safety analysis core model and a training simulator core model. The emphasis of this thesis is on the three-dimensional core model and its capability to analyse symmetric and asymmetric events in the core. 
The factors affecting the calculation times of various three-dimensional BWR, PWR and WWER-type APROS core models have been
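For readers unfamiliar with two-group neutronics, the simplest quantity the two-group balance yields is the infinite-medium multiplication factor. The cross-section values below are illustrative placeholders, not APROS data: fast neutrons are removed by absorption or down-scatter, the thermal flux is fed by the scattered neutrons, and the multiplication factor sums the fission productions of both groups per unit source.

```python
nu_sigma_f = (0.008, 0.135)   # nu * fission cross-section, fast / thermal (1/cm)
sigma_a    = (0.010, 0.080)   # absorption cross-sections (1/cm)
sigma_s12  = 0.020            # fast -> thermal down-scatter cross-section (1/cm)

def k_infinity(nu_sigma_f, sigma_a, sigma_s12):
    """Two-group infinite-medium multiplication factor."""
    removal_fast = sigma_a[0] + sigma_s12      # fast-group removal
    phi_fast = 1.0 / removal_fast              # fast flux per unit fission source
    phi_thermal = sigma_s12 * phi_fast / sigma_a[1]  # thermal flux from down-scatter
    return nu_sigma_f[0] * phi_fast + nu_sigma_f[1] * phi_thermal

k = k_infinity(nu_sigma_f, sigma_a, sigma_s12)
```

The full three-dimensional finite-difference model adds leakage between nodes and coupling to the thermal hydraulics, which is where the calculation-time trade-offs discussed above arise.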

  15. Nuclear reactor core modelling in multifunctional simulators

    Energy Technology Data Exchange (ETDEWEB)

    Puska, E.K. [VTT Energy, Nuclear Energy, Espoo (Finland)

    1999-06-01

The thesis concentrates on the development of nuclear reactor core models for the APROS multifunctional simulation environment and the use of the core models in various kinds of applications. The work was started in 1986 as a part of the development of the entire APROS simulation system. The aim was to create core models that would serve in a reliable manner in an interactive, modular and multifunctional simulator/plant analyser environment. One-dimensional and three-dimensional core neutronics models have been developed. Both models have two energy groups and six delayed neutron groups. The three-dimensional finite difference type core model is able to describe both BWR- and PWR-type cores with quadratic fuel assemblies and VVER-type cores with hexagonal fuel assemblies. The one- and three-dimensional core neutronics models can be connected with the homogeneous, the five-equation or the six-equation thermal hydraulic models of APROS. The key feature of APROS is that the same physical models can be used in various applications. The nuclear reactor core models of APROS have been built in such a manner that the same models can be used in simulator and plant analyser applications, as well as in safety analysis. In the APROS environment the user can select the number of flow channels in the three-dimensional reactor core and either the homogeneous, the five- or the six-equation thermal hydraulic model for these channels. The thermal hydraulic model and the number of flow channels have a decisive effect on the calculation time of the three-dimensional core model and thus, at present, these particular selections make the major difference between a safety analysis core model and a training simulator core model. The emphasis of this thesis is on the three-dimensional core model and its capability to analyse symmetric and asymmetric events in the core. 
The factors affecting the calculation times of various three-dimensional BWR, PWR and WWER-type APROS core models have been

  16. Multiscale Lattice Boltzmann method for flow simulations in highly heterogenous porous media

    KAUST Repository

    Li, Jun

    2013-01-01

A lattice Boltzmann method (LBM) for flow simulations in highly heterogeneous porous media at both pore and Darcy scales is proposed in the paper. In the pore-scale simulations, flow of two phases (e.g., oil and gas) or of two immiscible fluids (e.g., water and oil) is modeled using cohesive or repulsive forces, respectively. The relative permeability can be computed using pore-scale simulations and seamlessly applied for intermediate- and Darcy-scale simulations. A multiscale LBM that can reduce the computational complexity of existing LBM and transfer information between different scales is implemented. The results of coarse-grid, reduced-order simulations agree very well with the averaged results obtained using the fine grid.
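The scale-transfer step can be illustrated with a simple permeability upscaling sketch (hypothetical values; the paper's actual transfer uses LBM-computed relative permeabilities): fine-grid permeabilities are averaged into coarse-grid blocks so that coarse simulations reproduce the averaged fine-grid behavior. For flow through layers in series the harmonic mean is the appropriate average.

```python
fine_perm = [100.0, 100.0, 1.0, 1.0, 100.0, 100.0, 1.0, 1.0]  # mD, hypothetical

def upscale_harmonic(perms, block):
    """Harmonic average of each `block`-cell window (layers in series)."""
    coarse = []
    for i in range(0, len(perms), block):
        window = perms[i:i + block]
        coarse.append(len(window) / sum(1.0 / k for k in window))
    return coarse

coarse_perm = upscale_harmonic(fine_perm, 4)
```

Note how the low-permeability cells dominate the harmonic mean: each coarse block comes out near 2 mD despite half its fine cells being 100 mD, which is exactly why naive arithmetic averaging fails in series flow.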

  17. Scalar sector extensions and the Higgs mass fine-tuning problem

    International Nuclear Information System (INIS)

    Chakraborty, Indrani

    2014-01-01

One of the ways to address the fine-tuning problem in the Standard Model is to assume the existence of some symmetry which keeps the quantum corrections to the Higgs mass at a manageable level. This condition, named after Veltman, who first propounded it, is unfortunately not satisfied in the SM, given that we know all the masses. We discuss how one can recover the Veltman condition if one or more gauge-singlet scalars are introduced in the model. We show that the most favored solution is the case where the singlet scalar does not mix with the SM doublet, and thus can act as a viable cold dark matter candidate. Furthermore, the fine-tuning problem of the new scalars necessitates the introduction of vector-like fermions. Thus, singlet scalar(s) and vector-like fermions are minimal enhancements over the Standard Model that alleviate the fine-tuning problem. We also show that the model predicts Landau poles for all the scalar couplings, whose positions depend only on the number of such singlets. Thus, the introduction of some new physics at that scale becomes inevitable. We also discuss how the model confronts the LHC constraints and the latest XENON100 data. Some more such extensions, with higher scalar multiplets, are also discussed. (author)
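For reference, the quadratically divergent one-loop correction and the resulting Veltman condition take the standard form (keeping only the dominant top-quark Yukawa among the fermions; $v$ is the Higgs vacuum expectation value and $\Lambda$ the cutoff):

```latex
\delta m_H^2 \;\simeq\; \frac{3\Lambda^2}{8\pi^2 v^2}
  \left( m_H^2 + 2 m_W^2 + m_Z^2 - 4 m_t^2 \right),
\qquad
\text{Veltman condition:}\quad m_H^2 + 2 m_W^2 + m_Z^2 - 4 m_t^2 = 0 .
```

With the measured masses the bracket is strongly negative (the top-quark term dominates), which is why the condition fails in the SM and why the abstract invokes singlet scalars and vector-like fermions to restore it.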

  18. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

Computer Based Modelling and Simulation - Modelling Deterministic Systems. N K Srinivasan. General Article, Resonance – Journal of Science Education, Volume 6, Issue 3, March 2001, pp 46-54.

  19. Efficiency of Micro-fine Cement Grouting in Liquefiable Sand

    International Nuclear Information System (INIS)

    Mirjalili, Mojtaba; Mirdamadi, Alireza; Ahmadi, Alireza

    2008-01-01

In the presence of strong ground motion, liquefaction hazards are likely to occur in saturated cohesion-less soils. The risk of liquefaction and subsequent deformation can be reduced by various ground improvement methods, including the cement grouting technique. The grouting method was proposed for non-disruptive mitigation of liquefaction risk at developed sites susceptible to liquefaction. In this research, a large-scale experiment was developed for assessment of the effect of micro-fine cement grouting on the strength behavior and liquefaction potential of loose sand. Loose sand samples treated with micro-fine grout in a multidirectional experimental model were tested under cyclic and monotonic triaxial loading to investigate the influence of micro-fine grout on the deformation properties and pore pressure response. The behavior of pure sand was compared with the behavior of sand grouted with a micro-fine cement grout. The test results showed that cement grouting at low concentrations significantly decreased the liquefaction potential of loose sand and the related ground deformation

  20. Coherent fine scale eddies in turbulence transition of spatially-developing mixing layer

    International Nuclear Information System (INIS)

    Wang, Y.; Tanahashi, M.; Miyauchi, T.

    2007-01-01

To investigate the relationship between characteristics of the coherent fine scale eddy and the laminar-turbulent transition, a direct numerical simulation (DNS) of a spatially-developing turbulent mixing layer with Re_ω,0 = 700 was conducted. At the onset of the transition, strong coherent fine scale eddies appear in the mixing layer. The most expected value of the maximum azimuthal velocity of the eddy is 2.0 times the Kolmogorov velocity (u_k), and decreases through the transition to 1.2 u_k, which is the asymptotic value in the fully-developed state. The energy dissipation rate around the eddy is twice as high as that in the fully-developed state. However, the most expected diameter and eigenvalue ratio of the strain rate acting on the coherent fine scale eddy are maintained at 8 times the Kolmogorov length (η) and α:β:γ = -5:1:4 throughout the transition process. In addition to the Kelvin-Helmholtz rollers, rib structures do not disappear in the transition process and are composed of many coherent fine scale eddies in the fully-developed state, instead of the single eddy observed in the early stage of the transition or in laminar flow
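The Kolmogorov scales used above as normalizations are defined from the kinematic viscosity $\nu$ and the mean energy dissipation rate $\varepsilon$:

```latex
\eta = \left( \frac{\nu^{3}}{\varepsilon} \right)^{1/4},
\qquad
u_k = \left( \nu \, \varepsilon \right)^{1/4} .
```

The reported eddy diameter of $8\eta$ and azimuthal velocities of $1.2$-$2.0\,u_k$ are therefore dimensionless statements about the universal fine-scale structure, independent of the particular mixing layer.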

  1. Evolution of Fine-Grained Channel Margin Deposits behind Large Woody Debris in an Experimental Gravel-Bed Flume

    Science.gov (United States)

O'Neill, B.; Marks, S.; Skalak, K.; Puleo, J. A.; Wilcock, P. R.; Pizzuto, J. E.

    2014-12-01

    Fine grained channel margin (FGCM) deposits of the South River, Virginia sequester a substantial volume of fine-grained sediment behind large woody debris (LWD). FGCM deposits were created in a laboratory setting meant to simulate the South River environment using a recirculating flume (15m long by 0.6m wide) with a fixed gravel bed and adjustable slope (set to 0.0067) to determine how fine sediment is transported and deposited behind LWD. Two model LWD structures were placed 3.7 m apart on opposite sides of the flume. A wire mesh screen with attached wooden dowels simulated LWD with an upstream facing rootwad. Six experiments with three different discharge rates, each with low and high sediment concentrations, were run. Suspended sediment was very fine grained (median grain size of 3 phi) and well sorted (0.45 phi) sand. Upstream of the wood, water depths averaged about 0.08m, velocities averaged about 0.3 m/s, and Froude numbers averaged around 0.3. Downstream of the first LWD structure, velocities were reduced tenfold. Small amounts of sediment passed through the rootwad and fell out of suspension in the area of reduced flow behind LWD, but most of the sediment was carried around the LWD by the main flow and then behind the LWD by a recirculating eddy current. Upstream migrating dunes formed behind LWD due to recirculating flow, similar to reattachment bars documented in bedrock canyon rivers partially obstructed by debouching debris fans. These upstream migrating dunes began at the reattachment point and merged with deposits formed from sediment transported through the rootwad. Downstream migrating dunes formed along the channel margin behind the LWD, downstream of the reattachment point. FGCM deposits were about 3 m long, with average widths of about 0.8 m. Greater sediment concentration created thicker FGCM deposits, and higher flows eroded the sides of the deposits, reducing their widths.

  2. A 2D Micromodel Study of Fines Migration and Clogging Behavior in Porous Media: Implications of Fines on Methane Extraction from Hydrate-Bearing Sediments

    Science.gov (United States)

    Cao, S. C.; Jang, J.; Waite, W. F.; Jafari, M.; Jung, J.

    2017-12-01

    Fine-grained sediment, or "fines," exist nearly ubiquitously in natural sediment, even in the predominantly coarse-grained sediments that host gas hydrates. Fines within these sandy sediments can play a crucial role during gas hydrate production activities. During methane extraction, several processes can alter the mobility and clogging potential of fines: 1) fluid flow as the formation is depressurized to release methane from hydrate; 2) pore-fluid chemistry shifts as pore-fluid brine freshens due to pure water released from dissociating hydrate; 3) the presence of a moving gas/water interface as gas evolves from dissociating hydrate and moves through the reservoir toward the production well. To evaluate fines migration and clogging behavior changes resulting from methane gas production and pore-water freshening during hydrate dissociation, 2D micromodel experiments have been conducted on a selection of pure fines, pore-fluids, and micromodel pore-throat sizes. Additionally, tests have been run with and without an invading gas phase (CO2) to test the significance of a moving meniscus on fines mobility and clogging. The endmember fine particles chosen for this research include silica silt, mica, calcium carbonate, diatoms, kaolinite, illite, and bentonite (primarily made of montmorillonite). The pore fluids include deionized water, sodium chloride brine (2M concentration), and kerosene. The microfluidic pore models, used as porous media analogs, were fabricated with pore-throat widths of 40, 60, and 100 µm. Results from this research show that in addition to the expected dependence of clogging on the ratio of particle-to-pore-throat size, pore-fluid chemistry is also a significant factor because the interaction between a particular type of fine and pore fluid influences that fine's capacity to cluster, clump together and effectively increase its particle "size" relative to the pore-throat width. The presence of a moving gas/fluid meniscus increases the clogging

  3. Sahara Coal: the fine art of collecting fines for profit

    Energy Technology Data Exchange (ETDEWEB)

    Schreckengost, D.; Arnold, D.

    1984-09-01

    Because of a change in underground mining methods that caused a considerable increase in the amount of fine sizes in the raw coal, Sahara Coal Co. designed and constructed a unique and simple fine coal system at their Harrisburg, IL prep plant. Before the new system was built, the overload of the fine coal circuit created a cost crunch due to loss of salable coal to slurry ponds, slurry pond cleaning costs, and operating and maintenance costs--each and every one excessive. Motivated by these problems, Sahara designed a prototype system to dewater the minus 28 mesh refuse. The success of the idea permitted fine refuse to be loaded onto the coarse refuse belt. Sahara also realized a large reduction in pond cleaning costs. After a period of testing, an expanded version of the refuse system was installed to dewater and dry the 28 mesh X 0 clean coal. Clean coal output increased about 30 tph. Cost savings justified the expenditures for the refuse and clean coal systems. These benefits, combined with increased coal sales revenue, paid back the project costs in less than a year.

  4. Advanced training simulator models. Implementation and validation

    International Nuclear Information System (INIS)

    Borkowsky, Jeffrey; Judd, Jerry; Belblidia, Lotfi; O'farrell, David; Andersen, Peter

    2008-01-01

Modern training simulators are required to replicate plant data for both thermal-hydraulic and neutronic response. Replication is required such that reactivity manipulation on the simulator properly trains the operator for reactivity manipulation at the plant. This paper discusses advanced models which perform this function in real time using the coupled code system THOR/S3R. This code system models all fluid systems in detail using an advanced two-phase thermal-hydraulic model. The nuclear core is modeled using an advanced three-dimensional nodal method together with cycle-specific nuclear data. These models are configured to run interactively from a graphical instructor station or hardware operating panels. The simulator models are theoretically rigorous and are expected to replicate the physics of the plant. However, to verify replication, the models must be independently assessed. Plant data is the preferred validation method, but plant data is often not available for many important training scenarios. In the absence of data, validation may be obtained by slower-than-real-time transient analysis. This analysis can be performed by coupling a safety analysis code and a core design code. Such a coupling exists between the codes RELAP5 and SIMULATE-3K (S3K). RELAP5/S3K is used to validate the real-time model for several postulated plant events. (author)

  5. FY 1999 project on the development of new industry support type international standards. Standardization of a testing/evaluation method of biological use fine ceramics; 1999 nendo shinki sangyo shiengata kokusai hyojun kaihatsu jigyo seika hokokusho. Seitaiyo fine ceramics no shiken hyoka hoho no hyojunka

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

For the purpose of standardization, including international standardization, of an evaluation method for the characteristics required of biological-use fine ceramics, the FY 1999 results were summed up. In the study of the characteristics of biological-use fine ceramic materials, it was confirmed that zirconia ceramics are superior to alumina ceramics in static strength and in repeated-load fatigue properties in atmospheric air at room temperature. In the study of the evaluation method of biological affinity, standardization of the simulated body fluid preparation process was studied, and the simulated body fluid was prepared. To evaluate the bioactivity of biological-use fine ceramics without animal experiments, a simulated body fluid whose ion concentrations were made exactly equal to those of the human body was prepared, using 2-hydroxyethyl-1-piperazinyl ethane sulfonic acid as a buffer. No changes in ion concentration were seen for up to four weeks as long as this fluid was kept in an airtight container at a temperature of 36.5 degrees C or below. The present situation of the standardization of bioceramics was also surveyed. (NEDO)

  6. Deriving simulators for hybrid Chi models

    NARCIS (Netherlands)

    Beek, van D.A.; Man, K.L.; Reniers, M.A.; Rooda, J.E.; Schiffelers, R.R.H.

    2006-01-01

The hybrid Chi language is a formalism for modeling, simulation and verification of hybrid systems. The formal semantics of hybrid Chi allows the definition of provably correct implementations for simulation, verification and real-time control. This paper discusses the principles of deriving an

  7. Surgical ergonomics. Analysis of technical skills, simulation models and assessment methods.

    Science.gov (United States)

    Papaspyros, Sotiris C; Kar, Ashok; O'Regan, David

    2015-06-01

    Over the past two centuries the surgical profession has undergone a profound evolution in terms of efficiency and outcomes. Societal concerns in relation to quality assurance, patient safety and cost reduction have highlighted the issue of training expert surgeons. The core elements of a training model build on the basic foundations of gross and fine motor skills. In this paper we provide an analysis of the ergonomic principles involved and propose relevant training techniques. We have endeavored to provide both the trainer and trainee perspectives. This paper is structured into four sections: 1) Pre-operative preparation issues, 2) technical skills and instrument handling, 3) low fidelity simulation models and 4) discussion of current concepts in crew resource management, deliberate practice and assessment. Rehearsal, warm-up and motivation-enhancing techniques aid concentration and focus. Appropriate posture, comprehension of ergonomic principles in relation to surgical instruments and utilisation of the non-dominant hand are essential skills to master. Low fidelity models can be used to achieve significant progress through the early stages of the learning curve. Deliberate practice and innate ability are complementary to each other and may be considered useful adjuncts to surgical skills development. Safe medical care requires that complex patient interventions be performed by highly skilled operators supported by reliable teams. Surgical ergonomics lie at the heart of any training model that aims to produce professionals able to function as leaders of a patient safety oriented culture. Copyright © 2015 IJS Publishing Group Limited. Published by Elsevier Ltd. All rights reserved.

  8. Simulation of trace metals and PAH atmospheric pollution over Greater Paris: Concentrations and deposition on urban surfaces

    Science.gov (United States)

    Thouron, L.; Seigneur, C.; Kim, Y.; Legorgeu, C.; Roustan, Y.; Bruge, B.

    2017-10-01

    Urban areas can be subject not only to poor air quality, but also to contamination of other environmental media by air pollutants. Here, we address the potential transfer of selected air pollutants (two metals and three PAH) to urban surfaces. To that end, we simulate meteorology and air pollution from Europe to a Paris suburban neighborhood, using a four-level one-way nesting approach. The meteorological and air quality simulations use urban canopy sub-models in order to better represent the effect of the urban morphology on the air flow, atmospheric dispersion, and deposition of air pollutants to urban surfaces. This modeling approach allows us to distinguish air pollutant deposition among various urban surfaces (roofs, roads, and walls). Meteorological model performance is satisfactory, showing improved results compared to earlier simulations, although precipitation amounts are underestimated. Concentration simulation results are also satisfactory for both metals, with a fractional bias Paris region. The model simulation results suggest that both wet and dry deposition processes need to be considered when estimating the transfer of air pollutants to other environmental media. Dry deposition fluxes to various urban surfaces are mostly uniform for PAH, which are entirely present in fine particles. However, there is significantly less wall deposition compared to deposition to roofs and roads for trace metals, due to their coarse fraction. Meteorology, particle size distribution, and urban morphology are all important factors affecting air pollutant deposition. Future work should focus on the collection of data suitable to evaluate the performance of atmospheric models for both wet and dry deposition with fine spatial resolution.

  9. Proceedings of the 17. IASTED international conference on modelling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wamkeue, R. (comp.) [Quebec Univ., Abitibi-Temiscaminque, PQ (Canada)

    2006-07-01

    The International Association of Science and Technology for Development (IASTED) hosted this conference to provide a forum for international researchers and practitioners interested in all areas of modelling and simulation. The conference featured 12 sessions entitled: (1) automation, control and robotics, (2) hydraulic and hydrologic modelling, (3) applications in processes and design optimization, (4) environmental systems, (5) biomedicine and biomechanics, (6) communications, computers and informatics 1, (7) economics, management and operations research 1, (8) modelling and simulation methodologies 1, (9) economics, management and operations research 2, (10) modelling, optimization, identification and simulation, (11) communications, computers and informatics 2, and, (12) modelling and simulation methodologies 2. Participants took the opportunity to present the latest research, results, and ideas in mathematical modelling; physically-based modelling; agent-based modelling; dynamic modelling; 3-dimensional modelling; computational geometry; time series analysis; finite element methods; discrete event simulation; web-based simulation; Monte Carlo simulation; simulation optimization; simulation uncertainty; fuzzy systems; data modelling; computer aided design; and, visualization. Case studies in engineering design were also presented along with simulation tools and languages. The conference also highlighted topical issues in environmental systems modelling such as air modelling and simulation, atmospheric modelling, hazardous materials, mobile source emissions, ecosystem modelling, hydrological modelling, aquatic ecosystems, terrestrial ecosystems, biological systems, agricultural modelling, terrain analysis, meteorological modelling, earth system modelling, climatic modelling, and natural resource management. The conference featured 110 presentations, of which 3 have been catalogued separately for inclusion in this database. refs., tabs., figs.

  10. Fine-grained vehicle type recognition based on deep convolution neural networks

    Directory of Open Access Journals (Sweden)

    Hongcai CHEN

    2017-12-01

    Public security and traffic departments put forward higher requirements for the real-time performance and accuracy of vehicle type recognition in complex traffic scenes. Aiming at the problems of heavy police-force occupation, low retrieval efficiency, and the lack of intelligent means for dealing with false-license and fake-plate vehicles and vehicles without plates, this paper proposes a fine-grained vehicle type recognition method based on the GoogleNet deep convolutional neural network. The filter sizes and numbers of the convolutional layers are designed, the activation function and vehicle type classifier are optimally selected, and a new network framework is constructed for fine-grained vehicle type recognition. The experimental results show that the proposed method achieves 97% accuracy for fine-grained vehicle type recognition, a clear improvement over the original GoogleNet model. Moreover, the new model effectively reduces the number of training parameters and saves computer memory. Fine-grained vehicle type recognition can be used in intelligent traffic management, and has important theoretical research value and practical significance.

  11. Modeling and Simulation of Nanoindentation

    Science.gov (United States)

    Huang, Sixie; Zhou, Caizhi

    2017-11-01

    Nanoindentation is a hardness test method applied to small volumes of material which can provide some unique effects and spark many related research activities. To fully understand the phenomena observed during nanoindentation tests, modeling and simulation methods have been developed to predict the mechanical response of materials during nanoindentation. However, challenges remain with those computational approaches, because of their length scale, predictive capability, and accuracy. This article reviews recent progress and challenges for modeling and simulation of nanoindentation, including an overview of molecular dynamics, the quasicontinuum method, discrete dislocation dynamics, and the crystal plasticity finite element method, and discusses how to integrate multiscale modeling approaches seamlessly with experimental studies to understand the length-scale effects and microstructure evolution during nanoindentation tests, creating a unique opportunity to establish new calibration procedures for the nanoindentation technique.

  12. A 3D SPM model for biogeochemical modelling, with application to the northwest European continental shelf

    NARCIS (Netherlands)

    van der Molen, J.; Ruardij, P.; Greenwood, N.

    2017-01-01

    An SPM resuspension method was developed for use in 3D coupled hydrodynamics-biogeochemistry models to feed into simulations of the under-water light climate and primary production. The method uses a single mineral fine SPM component for computational efficiency, with a concentration-dependent

  13. Mammogram synthesis using a 3D simulation. I. Breast tissue model and image acquisition simulation

    International Nuclear Information System (INIS)

    Bakic, Predrag R.; Albert, Michael; Brzakovic, Dragana; Maidment, Andrew D. A.

    2002-01-01

    A method is proposed for generating synthetic mammograms based upon simulations of breast tissue and the mammographic imaging process. A computer breast model has been designed with a realistic distribution of large and medium scale tissue structures. Parameters controlling the size and placement of simulated structures (adipose compartments and ducts) provide a method for consistently modeling images of the same simulated breast with modified position or acquisition parameters. The mammographic imaging process is simulated using a compression model and a model of the x-ray image acquisition process. The compression model estimates breast deformation using tissue elasticity parameters found in the literature and clinical force values. The synthetic mammograms were generated by a mammogram acquisition model using a monoenergetic parallel beam approximation applied to the synthetically compressed breast phantom
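
    The monoenergetic parallel-beam acquisition step described above amounts to Beer-Lambert line integrals through a voxelized attenuation map. A minimal sketch in Python (the phantom geometry and attenuation values are invented for illustration, not taken from the paper):

```python
import numpy as np

# Toy compressed-breast phantom: a 2D map of linear attenuation
# coefficients mu [1/cm] (values are illustrative, not clinical).
phantom = np.full((64, 64), 0.05)          # adipose background
phantom[20:44, 20:44] = 0.08               # denser fibroglandular block
pixel_cm = 0.1                             # pixel size along each ray

# Monoenergetic parallel beam along axis 0: Beer-Lambert line integrals.
I0 = 1.0
line_integrals = phantom.sum(axis=0) * pixel_cm
detector = I0 * np.exp(-line_integrals)    # transmitted intensity per column

# A synthetic "mammogram" is conventionally displayed as log-attenuation.
image = -np.log(detector / I0)
```

    Columns passing through the denser block attenuate more, so they appear brighter in the log-attenuation image, which is the basic contrast mechanism the acquisition model simulates.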

  14. Alpine Ecohydrology Across Scales: Propagating Fine-scale Heterogeneity to the Catchment and Beyond

    Science.gov (United States)

    Mastrotheodoros, T.; Pappas, C.; Molnar, P.; Burlando, P.; Hadjidoukas, P.; Fatichi, S.

    2017-12-01

    In mountainous ecosystems, complex topography and landscape heterogeneity govern ecohydrological states and fluxes. Here, we investigate topographic controls on water, energy and carbon fluxes across different climatic regimes and vegetation types representative of the European Alps. We use an ecohydrological model to perform fine-scale numerical experiments on a synthetic domain that comprises a symmetric mountain with eight catchments draining along the cardinal and intercardinal directions. Distributed meteorological model input variables are generated using observations from Switzerland. The model computes the incoming solar radiation based on the local topography. We implement a multivariate statistical framework to disentangle the impact of landscape heterogeneity (i.e., elevation, aspect, flow contributing area, vegetation type) on the simulated water, carbon, and energy dynamics. This allows us to identify the sensitivities of several ecohydrological variables (including leaf area index, evapotranspiration, snow cover and net primary productivity) to topographic and meteorological inputs at different spatial and temporal scales. We also use an alpine catchment as a real case study to investigate how the natural variability of soil and land cover affects the idealized relationships that arise from the synthetic domain. In accordance with previous studies, our analysis shows a complex pattern of vegetation response to radiation. We also find different patterns of ecosystem sensitivity to topography-driven heterogeneity depending on the hydrological regime (i.e., wet vs. dry conditions). Our results suggest that topography-driven variability in ecohydrological variables (e.g. transpiration) at the fine spatial scale can exceed 50%, but is substantially reduced (to about 5%) when integrated at the catchment scale.
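
    The topographic control on incoming solar radiation mentioned above is typically computed from the angle between the sun and the local surface normal. A minimal sketch using the standard slope-aspect incidence formula (the irradiance value and geometry are illustrative; this is not the model's actual radiation scheme):

```python
import numpy as np

def cos_incidence(slope, aspect, zenith, azimuth):
    """Cosine of the solar incidence angle on a tilted surface.

    slope, aspect: terrain inclination and downslope direction [rad]
    zenith, azimuth: solar position [rad]
    Standard direct-beam incidence formula for a slope.
    """
    c = (np.cos(slope) * np.cos(zenith)
         + np.sin(slope) * np.sin(zenith) * np.cos(azimuth - aspect))
    return np.maximum(c, 0.0)  # self-shaded surfaces receive no direct beam

# South-facing 30-degree slope vs. flat ground, sun at 45-degree zenith in the south.
S0 = 800.0  # direct-beam irradiance on a sun-normal surface [W/m2], illustrative
flat = S0 * cos_incidence(0.0, 0.0, np.radians(45), np.radians(180))
south = S0 * cos_incidence(np.radians(30), np.radians(180),
                           np.radians(45), np.radians(180))
```

    With the sun in the south, the south-facing slope sees the beam at only 15 degrees off-normal and so receives more direct radiation than flat ground, which is the aspect effect the synthetic-domain experiments isolate.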

  15. COMPARISON OF RF CAVITY TRANSPORT MODELS FOR BBU SIMULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Ilkyoung Shin, Byung Yunn, Todd Satogata, Shahid Ahmed

    2011-03-01

    The transverse focusing effect in RF cavities plays a considerable role in beam dynamics for low-energy beamline sections and can contribute to beam breakup (BBU) instability. The purpose of this analysis is to examine the RF cavity models in simulation codes which will be used for BBU experiments at Jefferson Lab, and to improve BBU simulation results. We review two RF cavity models in the simulation codes elegant and TDBBU (a BBU simulation code developed at Jefferson Lab). elegant can include the Rosenzweig-Serafini (R-S) model for the RF focusing effect, whereas TDBBU uses a model from the code TRANSPORT which accounts for the adiabatic damping effect but not the RF focusing effect. Quantitative comparisons are discussed for the CEBAF beamline. We also compare the R-S model with the results from numerical simulations for a CEBAF-type 5-cell superconducting cavity to validate the use of the R-S model as an improved low-energy RF cavity transport model in TDBBU. We have implemented the R-S model in TDBBU; this will bring BBU simulation results into closer agreement with analytic calculations and experimental results.

  16. Comparison Of RF Cavity Transport Models For BBU Simulations

    International Nuclear Information System (INIS)

    Shin, Ilkyoung; Yunn, Byung; Satogata, Todd; Ahmed, Shahid

    2011-01-01

    The transverse focusing effect in RF cavities plays a considerable role in beam dynamics for low-energy beamline sections and can contribute to beam breakup (BBU) instability. The purpose of this analysis is to examine the RF cavity models in simulation codes which will be used for BBU experiments at Jefferson Lab, and to improve BBU simulation results. We review two RF cavity models in the simulation codes elegant and TDBBU (a BBU simulation code developed at Jefferson Lab). elegant can include the Rosenzweig-Serafini (R-S) model for the RF focusing effect, whereas TDBBU uses a model from the code TRANSPORT which accounts for the adiabatic damping effect but not the RF focusing effect. Quantitative comparisons are discussed for the CEBAF beamline. We also compare the R-S model with the results from numerical simulations for a CEBAF-type 5-cell superconducting cavity to validate the use of the R-S model as an improved low-energy RF cavity transport model in TDBBU. We have implemented the R-S model in TDBBU; this will bring BBU simulation results into closer agreement with analytic calculations and experimental results.

  17. Multiple Time Series Ising Model for Financial Market Simulations

    International Nuclear Information System (INIS)

    Takaishi, Tetsuya

    2015-01-01

    In this paper we propose an Ising model which simulates multiple financial time series. Our model introduces an interaction which couples the spins of one system to those of other systems. Simulations from our model show that the time series exhibit the volatility clustering that is often observed in real financial markets. Furthermore, we find non-zero cross-correlations between the volatilities from our model. Thus our model can simulate stock markets where the volatilities of stocks are mutually correlated.
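
    The coupled-spin construction can be sketched as two Ising chains in which each spin also feels the magnetization of the other system. The couplings, temperature and volatility proxy below are illustrative choices, not the parameters of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps = 64, 2000
J, g, beta = 1.0, 0.5, 0.6          # intra-chain coupling, inter-system coupling,
                                     # inverse temperature (all illustrative)
spins = rng.choice([-1, 1], size=(2, N))
mags = np.zeros((2, steps))

for t in range(steps):
    for s in range(2):              # update each system in turn
        m_other = spins[1 - s].mean()   # coupling to the *other* system's spins
        for _ in range(N):          # one Metropolis sweep
            i = int(rng.integers(N))
            nb = spins[s, (i - 1) % N] + spins[s, (i + 1) % N]
            dE = 2 * spins[s, i] * (J * nb + g * m_other)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[s, i] *= -1
        mags[s, t] = spins[s].mean()

# Volatility proxy: absolute magnetization change; then cross-correlate the
# two systems' volatilities, the quantity the abstract reports as non-zero.
vol = np.abs(np.diff(mags, axis=1))
xcorr = float(np.corrcoef(vol[0], vol[1])[0, 1])
```

    Mapping magnetization changes to returns is one common convention in spin-market models; the essential ingredient here is the `g * m_other` term, which makes the two volatility series statistically dependent.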

  18. Assessment of the Weather Research and Forecasting (WRF) model for simulation of extreme rainfall events in the upper Ganga Basin

    Science.gov (United States)

    Chawla, Ila; Osuri, Krishna K.; Mujumdar, Pradeep P.; Niyogi, Dev

    2018-02-01

    Reliable estimates of extreme rainfall events are necessary for an accurate prediction of floods. Most global rainfall products are available at a coarse resolution, rendering them less desirable for extreme rainfall analysis. Therefore, regional mesoscale models such as the advanced research version of the Weather Research and Forecasting (WRF) model are often used to provide rainfall estimates at fine grid spacing. Modelling heavy rainfall events is an enduring challenge, as such events depend on multi-scale interactions and on model configuration, such as grid spacing, physical parameterization and initialization. With this background, the WRF model is implemented in this study to investigate the impact of different processes on extreme rainfall simulation, by considering a representative event that occurred during 15-18 June 2013 over the Ganga Basin in India, which is located at the foothills of the Himalayas. This event is simulated with ensembles involving four different microphysics (MP) schemes, two cumulus (CU) parameterizations, two planetary boundary layer (PBL) schemes and two land surface physics options, as well as different resolutions (grid spacing) within the WRF model. The simulated rainfall is evaluated against observations from 18 rain gauges and the Tropical Rainfall Measuring Mission Multi-Satellite Precipitation Analysis (TMPA) 3B42RT version 7 data. The analysis shows that the choice of MP scheme influences the spatial pattern of rainfall, while the choice of PBL and CU parameterizations influences the magnitude of rainfall in the model simulations. Further, the WRF run with the Goddard MP, Mellor-Yamada-Janjic PBL and Betts-Miller-Janjic CU schemes is found to perform best in simulating this heavy rain event. The selected configuration is evaluated for several heavy to extremely heavy rainfall events that occurred across different months of the monsoon season in the region.
The model performance improved through incorporation
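
    The ensemble evaluation described above reduces to scoring each physics configuration against gauge observations. A schematic sketch with synthetic numbers (the scheme names follow the abstract; the rainfall values and error statistics are invented):

```python
import numpy as np

rng = np.random.default_rng(42)
gauges = rng.gamma(2.0, 20.0, size=18)        # synthetic event totals at 18 gauges [mm]

# Synthetic event-total rainfall from a few hypothetical MP/PBL/CU combinations;
# the first is built to be nearly unbiased, the others biased and noisier.
ensemble = {
    "Goddard+MYJ+BMJ": gauges + rng.normal(0.0, 5.0, 18),
    "WSM6+YSU+KF":     gauges * 0.7 + rng.normal(0.0, 10.0, 18),
    "Thompson+MYJ+KF": gauges + rng.normal(15.0, 10.0, 18),
}

def rmse(sim, obs):
    """Root-mean-square error of simulated vs. observed rainfall."""
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

scores = {name: rmse(sim, gauges) for name, sim in ensemble.items()}
best = min(scores, key=scores.get)
```

    In practice such an evaluation would also include spatial-pattern metrics (the abstract notes MP schemes control pattern while PBL/CU control magnitude), but a per-gauge error score is the core of configuration selection.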

  19. A Simulation Model Articulation of the REA Ontology

    Science.gov (United States)

    Laurier, Wim; Poels, Geert

    This paper demonstrates how the REA enterprise ontology can be used to construct simulation models for business processes, value chains and collaboration spaces in supply chains. These models support various high-level and operational management simulation applications, e.g. the analysis of enterprise sustainability and day-to-day planning. First, the basic constructs of the REA ontology and the ExSpect modelling language for simulation are introduced. Second, collaboration space, value chain and business process models and their conceptual dependencies are shown, using the ExSpect language. Third, an exhibit demonstrates the use of value chain models in predicting the financial performance of an enterprise.

  20. An adaptive nonlinear solution scheme for reservoir simulation

    Energy Technology Data Exchange (ETDEWEB)

    Lett, G.S. [Scientific Software - Intercomp, Inc., Denver, CO (United States)

    1996-12-31

    Numerical reservoir simulation involves solving large, nonlinear systems of PDEs with strongly discontinuous coefficients. Because of the large demands on computer memory and CPU, most users must perform simulations on very coarse grids. The average properties of the fluids and rocks must be estimated on these grids. These coarse grid "effective" properties are costly to determine, and risky to use, since their optimal values depend on the fluid flow being simulated. Thus, they must be found by trial-and-error techniques, and the coarser the grid, the poorer the results. This paper describes a numerical reservoir simulator which accepts fine scale properties and automatically generates multiple levels of coarse grid rock and fluid properties. The fine grid properties and the coarse grid simulation results are used to estimate discretization errors with multilevel error expansions. These expansions are local, and identify areas requiring local grid refinement. These refinements are added adaptively by the simulator, and the resulting composite grid equations are solved by a nonlinear Fast Adaptive Composite (FAC) Grid method, with a damped Newton algorithm being used on each local grid. The nonsymmetric linear system of equations resulting from Newton's method is in turn solved by a preconditioned Conjugate Gradients-like algorithm. The scheme is demonstrated by performing fine and coarse grid simulations of several multiphase reservoirs from around the world.
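
    The coarse/fine error-estimation step can be illustrated with a Richardson-style comparison on a 1D grid: restrict a fine-grid solution to the coarse grid, take the difference as a local discretization-error estimate, and flag cells for refinement. This is a toy surrogate, not the simulator's multilevel error expansion:

```python
import numpy as np

def solve(n):
    """Toy 'simulation': midpoint-sampled solution u(x) = sin(pi x) on n cells."""
    x = (np.arange(n) + 0.5) / n
    return x, np.sin(np.pi * x)

# Coarse solution and a uniformly refined solution of the surrogate problem.
xc, uc = solve(16)
xf, uf = solve(32)

# Local discretization-error estimate: compare each coarse value with the
# restriction (cell-pair average) of the fine solution.
uf_restricted = 0.5 * (uf[0::2] + uf[1::2])
err = np.abs(uc - uf_restricted)

# Flag cells whose estimated error exceeds a tolerance for local refinement.
tol = 5e-4
refine = err > tol
```

    Because the estimate is local, only part of the domain is flagged (here, cells where the solution curvature is largest), which is exactly what lets an adaptive composite-grid method concentrate work where it pays off.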

  1. Modeling and simulation of chillers with Dymola/Modelica; Modellierung und Simulation von Kaeltemaschinen mit Dymola/Modelica

    Energy Technology Data Exchange (ETDEWEB)

    Rettich, Daniel [Hochschule Biberach (Germany). Inst. fuer Gebaeude- und Energiesysteme (IGE)

    2012-07-01

    In the contribution under consideration, a chiller was modeled and simulated with the program package Dymola/Modelica using the TIL Toolbox. An existing refrigeration technology test bench at the University of Biberach (Federal Republic of Germany) serves as a reference for the chiller represented in the simulation. The aim of the simulation is the future use of the models in a hardware-in-the-loop (HIL) test bench in order to test different controllers with respect to their function and logic under identical boundary conditions. Furthermore, the determination of energy efficiency according to the guideline VDMA 24247 is a focus both at the test bench and within the simulation. Following the final completion of the test bench, the models will be validated against it, and the model of the chiller will be connected to a detailed room model. Individual models were taken from the TIL Toolbox, adapted for the application and parameterized with the design values of the laboratory chiller. Modifications to the TIL models were necessary in order to reproduce the dynamic effects of the chiller in detail; for this purpose, investigations of indicators of the various dynamic components were employed. Following the modeling, each model was tested on the basis of design values and manufacturer documents. First simulation studies showed that the simulation in Dymola with the developed models provides plausible results. In the course of the modeling and parameterization of these modified models, a component library was developed from which different models for future simulation studies can be extracted.

  2. A virtual laboratory notebook for simulation models.

    Science.gov (United States)

    Winfield, A J

    1998-01-01

    In this paper we describe how we have adopted the laboratory notebook as a metaphor for interacting with computer simulation models. This 'virtual' notebook stores the simulation output and meta-data (which is used to record the scientist's interactions with the simulation). The meta-data stored consists of annotations (equivalent to marginal notes in a laboratory notebook), a history tree and a log of user interactions. The history tree structure records when in 'simulation' time, and from what starting point in the tree, changes are made to the parameters by the user. Typically these changes define a new run of the simulation model (which is represented as a new branch of the history tree). The tree shows the structure of the changes made to the simulation, and the log is required to keep the order in which the changes occurred. Together they form the record you would normally find in a laboratory notebook. The history tree is plotted in simulation parameter space. This shows the scientist's interactions with the simulation visually and allows direct manipulation of the parameter information presented, which in turn is used to control directly the state of the simulation. The interactions with the system are graphical and usually involve directly selecting or dragging data markers and other graphical control devices around in parameter space. If the graphical manipulators do not provide precise enough control, textual manipulation is still available, allowing numerical values to be entered by hand. The Virtual Laboratory Notebook, by supporting rich interactions with the visual view of the history tree, gives the user complex and novel ways of interacting with biological computer simulation models.
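
    The history tree and interaction log described above can be sketched as a small data structure in which each parameter change branches a new run node (a hypothetical illustration, not the paper's implementation; the parameter names are invented):

```python
from dataclasses import dataclass, field

@dataclass
class RunNode:
    """One simulation run: the parameters used and where it branched from."""
    params: dict
    parent: "RunNode | None" = None
    children: list = field(default_factory=list)
    annotations: list = field(default_factory=list)   # marginal notes

class Notebook:
    def __init__(self, initial_params):
        self.root = RunNode(dict(initial_params))
        self.current = self.root
        self.log = []                                  # ordered interaction log

    def branch(self, **changes):
        """Change parameters: defines a new run, i.e. a new branch of the tree."""
        child = RunNode({**self.current.params, **changes}, parent=self.current)
        self.current.children.append(child)
        self.log.append(("branch", changes))
        self.current = child
        return child

    def annotate(self, note):
        self.current.annotations.append(note)
        self.log.append(("annotate", note))

# Hypothetical usage: start a run, branch with a changed parameter, annotate it.
nb = Notebook({"growth_rate": 0.1, "carrying_capacity": 100})
nb.branch(growth_rate=0.2)
nb.annotate("doubled growth rate to test stability")
```

    The tree captures the *structure* of the changes while the log preserves their *order*, which is exactly the division of labour the paper describes.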

  3. Turbulence modeling for Francis turbine water passages simulation

    International Nuclear Information System (INIS)

    Maruzewski, P; Munch, C; Mombelli, H P; Avellan, F; Hayashi, H; Yamaishi, K; Hashii, T; Sugow, Y

    2010-01-01

    The application of Computational Fluid Dynamics (CFD) to hydraulic machines requires the ability to handle turbulent flows and to take into account the effects of turbulence on the mean flow. Nowadays, Direct Numerical Simulation (DNS) is still not a good candidate for hydraulic machine simulations because of its prohibitive computational cost. Large Eddy Simulation (LES), although in the same category as DNS, could be an alternative whereby only the small-scale turbulent fluctuations are modeled and the larger-scale fluctuations are computed directly. Nevertheless, Reynolds-Averaged Navier-Stokes (RANS) models have become the widespread standard basis for numerous hydraulic machine design procedures. However, for many applications involving wall-bounded flows and attached boundary layers, various hybrid combinations of LES and RANS are being considered, such as Detached Eddy Simulation (DES), whereby the RANS approximation is kept in the regions where the boundary layers are attached to the solid walls. Furthermore, the accuracy of CFD simulations is highly dependent on grid quality, in terms of grid uniformity in complex configurations. Moreover, any successful structured or unstructured CFD code has to offer a wide range of models, from classic RANS to complex hybrid models. The aim of this study is to compare the behavior of turbulence simulations on structured and unstructured grid topologies with two different CFD codes applied to the same Francis turbine. Hence, the study is intended to outline the discrepancies encountered in predicting the wake of the turbine blades using either the standard k-ε model or the SST shear stress model in a steady CFD simulation. Finally, comparisons are made with experimental data from the EPFL Laboratory for Hydraulic Machines reduced-scale model measurements.

  4. Turbulence modeling for Francis turbine water passages simulation

    Energy Technology Data Exchange (ETDEWEB)

    Maruzewski, P; Munch, C; Mombelli, H P; Avellan, F [Ecole polytechnique federale de Lausanne, Laboratory of Hydraulic Machines Avenue de Cour 33 bis, CH-1007 Lausanne (Switzerland); Hayashi, H; Yamaishi, K; Hashii, T; Sugow, Y, E-mail: pierre.maruzewski@epfl.c [Nippon KOEI Power Systems, 1-22 Doukyu, Aza, Morijyuku, Sukagawa, Fukushima Pref. 962-8508 (Japan)

    2010-08-15

    The application of Computational Fluid Dynamics (CFD) to hydraulic machines requires the ability to handle turbulent flows and to take into account the effects of turbulence on the mean flow. Nowadays, Direct Numerical Simulation (DNS) is still not a good candidate for hydraulic machine simulations because of its prohibitive computational cost. Large Eddy Simulation (LES), although in the same category as DNS, could be an alternative whereby only the small-scale turbulent fluctuations are modeled and the larger-scale fluctuations are computed directly. Nevertheless, Reynolds-Averaged Navier-Stokes (RANS) models have become the widespread standard basis for numerous hydraulic machine design procedures. However, for many applications involving wall-bounded flows and attached boundary layers, various hybrid combinations of LES and RANS are being considered, such as Detached Eddy Simulation (DES), whereby the RANS approximation is kept in the regions where the boundary layers are attached to the solid walls. Furthermore, the accuracy of CFD simulations is highly dependent on grid quality, in terms of grid uniformity in complex configurations. Moreover, any successful structured or unstructured CFD code has to offer a wide range of models, from classic RANS to complex hybrid models. The aim of this study is to compare the behavior of turbulence simulations on structured and unstructured grid topologies with two different CFD codes applied to the same Francis turbine. Hence, the study is intended to outline the discrepancies encountered in predicting the wake of the turbine blades using either the standard k-ε model or the SST shear stress model in a steady CFD simulation. Finally, comparisons are made with experimental data from the EPFL Laboratory for Hydraulic Machines reduced-scale model measurements.

  5. Fracture network modeling and GoldSim simulation support

    International Nuclear Information System (INIS)

    Sugita, Kenichirou; Dershowitz, W.

    2005-01-01

    During Heisei-16, Golder Associates provided support for JNC Tokai through discrete fracture network data analysis and simulation of the Mizunami Underground Research Laboratory (MIU), participation in Task 6 of the Äspö Task Force on Modeling of Groundwater Flow and Transport, and development of methodologies for analysis of repository site characterization strategies and safety assessment. MIU support during H-16 involved updating the H-15 FracMan discrete fracture network (DFN) models for the MIU shaft region, and developing improved simulation procedures. Updates to the conceptual model included incorporation of 'Step2' (2004) versions of the deterministic structures, and revision of background fractures to be consistent with conductive structure data from the DH-2 borehole. Golder developed improved simulation procedures for these models through the use of hybrid discrete fracture network (DFN), equivalent porous medium (EPM), and nested DFN/EPM approaches. For each of these models, procedures were documented for the entire modeling process, including model implementation, MMP simulation, and shaft grouting simulation. Golder supported JNC participation in Tasks 6AB, 6D and 6E of the Äspö Task Force on Modeling of Groundwater Flow and Transport during H-16. For Task 6AB, Golder developed a new technique to evaluate the role of grout in performance assessment time-scale transport. For Task 6D, Golder submitted a report of H-15 simulations to SKB. For Task 6E, Golder carried out safety assessment time-scale simulations at the block scale, using the Laplace Transform Galerkin method. During H-16, Golder supported JNC's Total System Performance Assessment (TSPA) strategy by developing technologies for the analysis of the use of site characterization data in safety assessment. This approach will aid in understanding how site characterization can progressively reduce site characterization uncertainty. (author)

  6. Turbulence modeling for Francis turbine water passages simulation

    Science.gov (United States)

    Maruzewski, P.; Hayashi, H.; Munch, C.; Yamaishi, K.; Hashii, T.; Mombelli, H. P.; Sugow, Y.; Avellan, F.

    2010-08-01

    The application of Computational Fluid Dynamics (CFD) to hydraulic machines requires the ability to handle turbulent flows and to take into account the effects of turbulence on the mean flow. Nowadays, Direct Numerical Simulation (DNS) is still not a good candidate for hydraulic machine simulations because of its prohibitive computational cost. Large Eddy Simulation (LES), although in the same category as DNS, could be an alternative whereby only the small-scale turbulent fluctuations are modeled and the larger-scale fluctuations are computed directly. Nevertheless, Reynolds-Averaged Navier-Stokes (RANS) models have become the widespread standard basis for numerous hydraulic machine design procedures. However, for many applications involving wall-bounded flows and attached boundary layers, various hybrid combinations of LES and RANS are being considered, such as Detached Eddy Simulation (DES), whereby the RANS approximation is kept in the regions where the boundary layers are attached to the solid walls. Furthermore, the accuracy of CFD simulations is highly dependent on grid quality, in terms of grid uniformity in complex configurations. Moreover, any successful structured or unstructured CFD code has to offer a wide range of models, from classic RANS to complex hybrid models. The aim of this study is to compare the behavior of turbulence simulations on structured and unstructured grid topologies with two different CFD codes applied to the same Francis turbine. Hence, the study is intended to outline the discrepancies encountered in predicting the wake of the turbine blades using either the standard k-ε model or the SST shear stress model in a steady CFD simulation. Finally, comparisons are made with experimental data from the EPFL Laboratory for Hydraulic Machines reduced-scale model measurements.

  7. Genetic and evolutionary correlates of fine-scale recombination rate variation in Drosophila persimilis.

    Science.gov (United States)

    Stevison, Laurie S; Noor, Mohamed A F

    2010-12-01

    Recombination is fundamental to meiosis in many species and generates variation on which natural selection can act, yet fine-scale linkage maps are cumbersome to construct. We generated a fine-scale map of recombination rates across two major chromosomes in Drosophila persimilis using 181 SNP markers spanning two of five major chromosome arms. Using this map, we report significant fine-scale heterogeneity of local recombination rates. However, we also observed "recombinational neighborhoods," where adjacent intervals had similar recombination rates after excluding regions near the centromere and telomere. We further found significant positive associations of fine-scale recombination rate with repetitive element abundance and a 13-bp sequence motif known to associate with human recombination rates. We noted strong crossover interference extending 5-7 Mb from the initial crossover event. Further, we observed that fine-scale recombination rates in D. persimilis are strongly correlated with those obtained from a comparable study of its sister species, D. pseudoobscura. We documented a significant relationship between recombination rates and intron nucleotide sequence diversity within species, but no relationship between recombination rate and intron divergence between species. These results are consistent with selection models (hitchhiking and background selection) rather than mutagenic recombination models for explaining the relationship of recombination with nucleotide diversity within species. Finally, we found significant correlations between recombination rate and GC content, supporting both GC-biased gene conversion (BGC) models and selection-driven codon bias models. Overall, this genome-enabled map of fine-scale recombination rates allowed us to confirm findings of broader-scale studies and identify multiple novel features that merit further investigation.

  8. Towards a standard model for research in agent-based modeling and simulation

    Directory of Open Access Journals (Sweden)

    Nuno Fachada

    2015-11-01

    Agent-based modeling (ABM) is a bottom-up modeling approach, where each entity of the system being modeled is uniquely represented as an independent decision-making agent. ABMs are very sensitive to implementation details, and thus it is very easy to inadvertently introduce changes which modify model dynamics. Such problems usually arise due to the lack of transparency in model descriptions, which constrains how models are assessed, implemented and replicated. In this paper, we present PPHPC, a model which aims to serve as a standard in agent-based modeling research, namely, but not limited to, conceptual model specification, statistical analysis of simulation output, model comparison and parallelization studies. This paper focuses on the first two aspects (conceptual model specification and statistical analysis of simulation output), also providing a canonical implementation of PPHPC. The paper serves as a complete reference to the presented model, and can be used as a tutorial for simulation practitioners who wish to improve the way they communicate their ABMs.

  9. SU-D-BRC-01: An Automatic Beam Model Commissioning Method for Monte Carlo Simulations in Pencil-Beam Scanning Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Qin, N; Shen, C; Tian, Z; Jiang, S; Jia, X [UT Southwestern Medical Ctr, Dallas, TX (United States)

    2016-06-15

    Purpose: Monte Carlo (MC) simulation is typically regarded as the most accurate dose calculation method for proton therapy. Yet for real clinical cases, the overall accuracy also depends on that of the MC beam model. Commissioning a beam model to faithfully represent a real beam requires finely tuning a set of model parameters, which can be tedious given the large number of pencil beams to commission. This abstract reports an automatic beam-model commissioning method for pencil-beam scanning proton therapy via an optimization approach. Methods: We modeled a real pencil beam with energy and spatial spread following Gaussian distributions. The mean energy and the energy and spatial spreads are model parameters. To commission against a real beam, we first performed MC simulations to calculate dose distributions for a set of ideal (monoenergetic, zero-size) pencil beams. The dose distribution for a real pencil beam is hence a linear superposition of the doses for those ideal pencil beams, with weights in Gaussian form. We formulated the commissioning task as an optimization problem, such that the calculated central-axis depth dose and lateral profiles at several depths match the corresponding measurements. An iterative algorithm combining a conjugate gradient method and parameter fitting was employed to solve the optimization problem. We validated our method in simulation studies. Results: We calculated dose distributions for three real pencil beams with nominal energies 83, 147 and 199 MeV using realistic beam parameters. These data were regarded as measurements and used for commissioning. After commissioning, the average differences in energy and beam spread between the determined values and the ground truth were 4.6% and 0.2%. With the commissioned model, we recomputed dose. Mean dose differences from measurements were 0.64%, 0.20% and 0.25%. Conclusion: The developed automatic MC beam-model commissioning method for pencil-beam scanning proton therapy can determine beam model parameters with
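
    The superposition-and-fit idea can be sketched as follows: build a library of ideal-beam depth-dose curves, express the real beam as a Gaussian-weighted sum over energies, and search for the mean energy and energy spread that best match a measured curve. The depth-dose shapes and range-energy rule below are crude stand-ins for MC data, and a grid search replaces the abstract's conjugate-gradient scheme:

```python
import numpy as np

depths = np.linspace(0, 30, 300)                     # depth in water [cm]
energies = np.arange(80, 121)                        # ideal-beam energies [MeV]

def ideal_dd(E):
    """Toy depth-dose for a monoenergetic beam: a narrow peak near its range.
    The range-energy rule is a rough power law, not a clinical parameterization."""
    r = 0.0022 * E ** 1.77
    return np.exp(-0.5 * ((depths - r) / 0.4) ** 2)

library = np.array([ideal_dd(E) for E in energies])  # one row per ideal energy

def beam_dd(mu, sigma):
    """Real beam = Gaussian-weighted superposition of ideal-beam doses."""
    w = np.exp(-0.5 * ((energies - mu) / sigma) ** 2)
    return (w / w.sum()) @ library

# 'Measurement' generated from ground-truth parameters; a grid search then
# recovers mean energy and energy spread by least-squares matching.
measured = beam_dd(100.0, 1.5)
grid_mu = np.linspace(95, 105, 101)
grid_sig = np.linspace(0.5, 3.0, 26)
best = min(((np.sum((beam_dd(m, s) - measured) ** 2), m, s)
            for m in grid_mu for s in grid_sig))
_, mu_fit, sig_fit = best
```

    Because dose is linear in the ideal-beam weights, only the weight parameters need to be searched; the expensive MC library is computed once, which is what makes automatic commissioning tractable.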

  10. Dissection of a Complex Disease Susceptibility Region Using a Bayesian Stochastic Search Approach to Fine Mapping.

    Directory of Open Access Journals (Sweden)

    Chris Wallace

    2015-06-01

    Full Text Available Identification of candidate causal variants in regions associated with risk of common diseases is complicated by linkage disequilibrium (LD) and multiple association signals. Nonetheless, accurate maps of these variants are needed, both to fully exploit detailed cell-specific chromatin annotation data to highlight disease-causal mechanisms and cells, and to design the functional studies that will ultimately be required to confirm causal mechanisms. We adapted a Bayesian evolutionary stochastic search algorithm to the fine-mapping problem, and demonstrated its improved performance over conventional stepwise and regularised regression through simulation studies. We then applied it to fine map the established multiple sclerosis (MS) and type 1 diabetes (T1D) associations in the IL-2RA (CD25) gene region. For T1D, both the stepwise and stochastic search approaches identified four T1D association signals, with the major effect tagged by the single nucleotide polymorphism rs12722496. In contrast, for MS, the stochastic search found two distinct competing models: a single candidate causal variant, tagged by rs2104286 and reported previously using stepwise analysis; and a more complex model with two association signals, one tagged by the major T1D-associated rs12722496 and the other by rs56382813. There is low to moderate LD between rs2104286 and both rs12722496 and rs56382813 (r2 ≈ 0.3) and our two-SNP model could not be recovered through a forward stepwise search after conditioning on rs2104286. Both signals in the two-variant model for MS affect CD25 expression on distinct subpopulations of CD4+ T cells, which are key cells in the autoimmune process. The results support a shared causal variant for T1D and MS. Our study illustrates the benefit of using a purposely designed model search strategy for fine mapping and the advantage of combining disease and protein expression data.
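    A minimal flavor of multi-SNP model search can be sketched as below, with an exhaustive subset search scored by BIC standing in for the paper's Bayesian stochastic search and posterior model probabilities; the simulated markers, LD structure, and effect sizes are all invented:

```python
import numpy as np
from itertools import combinations

def bic(X, y):
    # BIC of an ordinary least-squares fit (Gaussian working model)
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + k * np.log(n)

# Simulate 8 markers in LD (all share a common component Z0);
# markers 1 and 5 are the truly "causal" ones
rng = np.random.default_rng(2)
Z = rng.normal(size=(500, 8))
G = Z + 0.8 * Z[:, [0]]
y = 0.7 * G[:, 1] + 0.7 * G[:, 5] + rng.normal(size=500)

# Score all 1- to 3-SNP models and report the one with the lowest BIC
models = (s for r in range(1, 4) for s in combinations(range(8), r))
best = min(models, key=lambda s: bic(G[:, list(s)], y))
print(best)  # the causal pair {1, 5} should appear in the winning model
```

    A stochastic search replaces the exhaustive enumeration with a guided walk over model space, which is what makes larger regions tractable.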

  11. Cognitive Modeling for Agent-Based Simulation of Child Maltreatment

    Science.gov (United States)

    Hu, Xiaolin; Puddy, Richard

    This paper extends previous work to develop cognitive modeling for agent-based simulation of child maltreatment (CM). The developed model is inspired by parental efficacy, parenting stress, and the theory of planned behavior. It provides an explanatory, process-oriented model of CM and incorporates causal relationships and feedback loops among different factors in the social ecology in order to simulate the dynamics of CM. We describe the model and present simulation results to demonstrate the features of this model.

  12. Fine-Grained Turbidites: Facies, Attributes and Process Implications

    Science.gov (United States)

    Stow, Dorrik; Omoniyi, Bayonle

    2016-04-01

    Within turbidite systems, fine-grained sediments remain the poor relation, with several contrasting facies models linked to depositional process. These are volumetrically the dominant facies in deepwater and, from a resource perspective, they form important marginal and tight reservoirs, have great potential for unconventional shale gas, source rocks and seals, and are significant hosts of metals and rare earth elements. Based on a large number of studies of modern, ancient and subsurface systems, including thousands of metres of section logging, we define the principal genetic elements of fine-grained deepwater facies and present a new synthesis of facies models and their sedimentary attributes. The principal architectural elements include: non-channelised slope-aprons, channel-fill, channel levee and overbank, turbidite lobes, mass-transport deposits, contourite drifts, basin sheets and drapes. These comprise a variable intercalation of fine-grained facies - thin-bedded and very thin-bedded turbidites, contourites, hemipelagites and pelagites - and associated coarse-grained facies. Characteristic attributes used to discriminate between these different elements are: facies and facies associations; sand-shale ratio, sand and shale geometry and dimensions, and sand connectivity; sediment texture and small-scale sedimentary structures; sediment fabric and microfabric; and small-scale vertical sequences of bed thickness. To some extent, we can relate facies and attribute characteristics to different depositional environments. We identify five distinct facies models: (a) silt-laminated mud turbidites, (b) siliciclastic mud turbidites, (c) carbonate mud turbidites, (d) disorganized silty-mud turbidites, and (e) hemiturbidites. Within the grainsize-velocity matrix turbidite plot, these all fall within the region of mean size < 0.063 mm, maximum grainsize (one percentile) < 0.2 mm, and depositional velocity 0.1-0.5 m/s. Silt-laminated turbidites and many mud

  13. Seasonal variation in coastal marine habitat use by the European shag: Insights from fine scale habitat selection modeling and diet

    Science.gov (United States)

    Michelot, Candice; Pinaud, David; Fortin, Matthieu; Maes, Philippe; Callard, Benjamin; Leicher, Marine; Barbraud, Christophe

    2017-07-01

    Studies of habitat selection by higher trophic level species are necessary for using top predator species as indicators of ecosystem functioning. However, contrary to terrestrial ecosystems, few habitat selection studies have been conducted at a fine scale for coastal marine top predator species, and fewer still have coupled diet data with habitat selection modeling to highlight a link between prey selection and habitat use. The aim of this study was to characterize spatially and oceanographically, at a fine scale, the habitats used by the European Shag Phalacrocorax aristotelis in the Special Protection Area (SPA) of Houat-Hœdic in the Mor Braz Bay during its foraging activity. Habitat selection models were built using in situ observation data of foraging shags (transect sampling) and spatially explicit environmental data to characterize marine benthic habitats. Observations were first adjusted for detectability biases and shag abundance was subsequently spatialized. The influence of habitat variables on shag abundance was tested using Generalized Linear Models (GLMs). Diet data were finally compared with the habitat selection models. Results showed that European shags breeding in the Mor Braz Bay changed foraging habitats according to the season and to the different environmental and energetic constraints. The proportions of the main prey species also varied seasonally. Rocky and coarse sand habitats were clearly preferred over fine or muddy sand habitats. Shags appeared to be more selective in their foraging habitats during the breeding period and the rearing of chicks, using essentially rocky areas close to the colony and preferentially consuming fish of the family Labridae, with three other fish families taken in lower proportions. During the post-breeding period shags used a broader range of habitats and mainly consumed Gadidae. Thus, European shags seem to adjust their feeding strategy to minimize energetic costs, to avoid intra-specific competition and to maximize access
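    The GLM step can be sketched with a hand-rolled Poisson regression (log link) fitted by iteratively reweighted least squares, the standard fitting scheme behind GLM routines; the "rocky habitat" covariate, sample size, and effect size below are invented for illustration:

```python
import numpy as np

def poisson_glm(X, y, n_iter=25):
    # Iteratively reweighted least squares for a Poisson GLM with log link
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)            # fitted means
        z = X @ beta + (y - mu) / mu     # working response
        W = mu                           # working weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Toy survey: shag counts are higher over a rocky-habitat indicator (assumed)
rng = np.random.default_rng(0)
rocky = rng.integers(0, 2, size=400).astype(float)
counts = rng.poisson(np.exp(0.2 + 1.0 * rocky))
X = np.column_stack([np.ones(400), rocky])
beta = poisson_glm(X, counts)
print(beta.round(2))  # close to the true coefficients [0.2, 1.0]
```

    Exponentiating the habitat coefficient gives the multiplicative change in expected abundance for that habitat class.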

  14. Comparison of performance of simulation models for floor heating

    DEFF Research Database (Denmark)

    Weitzmann, Peter; Svendsen, Svend

    2005-01-01

    This paper describes the comparison of performance of simulation models for floor heating with different levels of detail in the modelling process. The models are compared in an otherwise identical simulation model containing room model, walls, windows, ceiling and ventilation system. By exchanging...

  15. Evolution Model and Simulation of Profit Model of Agricultural Products Logistics Financing

    Science.gov (United States)

    Yang, Bo; Wu, Yan

    2018-03-01

    The agricultural products logistics financial warehousing business mainly involves three parties: agricultural production and processing enterprises, third-party logistics enterprises, and financial institutions. To enable the three parties to achieve a win-win situation, the article first gives the replication dynamics and evolutionary stability strategies among the three parties in business participation. Then, using the NetLogo simulation platform and an overall Multi-Agent modeling and simulation method, it establishes an evolutionary game simulation model and runs the model under different revenue parameters. Finally, it analyzes the simulation results. The aim is to achieve a mutually beneficial win-win situation among the three participants in the agricultural products logistics financing warehouse business, thus promoting the smooth flow of agricultural products logistics business.
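    The replication (replicator) dynamics underlying such an evolutionary game can be sketched for a single population with two strategies; the 2x2 payoff matrix below is an arbitrary illustrative choice, not the paper's revenue parameters:

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    # Euler step of replicator dynamics: dx_i/dt = x_i * (f_i - f_bar)
    f = A @ x                 # fitness of each strategy
    return x + dt * x * (f - x @ f)

# Illustrative payoff matrix: "participate" vs "defect" for one party
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])
x = np.array([0.5, 0.5])      # initial strategy shares
for _ in range(5000):
    x = replicator_step(x, A)
print(x.round(3))  # the dominant strategy takes over the population
```

    With three interacting populations, as in the paper, each party's payoff matrix depends on the current strategy mix of the other two, and the simulation iterates the same update for all three simultaneously.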

  16. Off-gas adsorption model and simulation - OSPREY

    Energy Technology Data Exchange (ETDEWEB)

    Rutledge, V.J. [Idaho National Laboratory, P. O. Box 1625, Idaho Falls, ID (United States)

    2013-07-01

    A capability of accurately simulating the dynamic behavior of advanced fuel cycle separation processes is expected to provide substantial cost savings and many technical benefits. To support this capability, a modeling effort focused on the off-gas treatment system of a used nuclear fuel recycling facility is in progress. The off-gas separation consists of a series of scrubbers and adsorption beds that capture constituents of interest. Dynamic models are being developed for each unit operation involved, so that each can be used as a stand-alone model or in series with others. Currently, an adsorption model has been developed within the Multiphysics Object-Oriented Simulation Environment (MOOSE) developed at the Idaho National Laboratory (INL). Off-gas Separation and Recovery (OSPREY) models the adsorption of off-gas constituents for dispersed plug flow in a packed bed under non-isothermal and non-isobaric conditions. Inputs to the model include gas composition, sorbent and column properties, equilibrium and kinetic data, and inlet conditions. The simulation outputs component concentrations along the column length as a function of time, from which breakthrough data can be obtained. The breakthrough data can be used to determine bed capacity, which in turn can be used to size columns. In addition to concentration data, the model predicts temperature and pressure drop along the column length as a function of time. A description of the OSPREY model, results from krypton adsorption modeling, and plans for modeling the behavior of iodine, xenon, and tritium will be discussed. (author)
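    The kind of breakthrough behavior such a model predicts can be illustrated with a bare-bones upwind finite-difference sketch of plug flow plus linear-driving-force adsorption; the simplified governing equations and every parameter value are assumptions for illustration, not OSPREY's:

```python
import numpy as np

# Toy model:  dc/dt = -v dc/dz - rb dq/dt ;   dq/dt = k (K c - q)
nz, dz, dt = 200, 0.005, 1e-4          # 1 m column, fine time step
v, k, K, rb = 0.1, 5.0, 2.0, 1.0       # velocity, kinetics, equilibrium, capacity
c, q = np.zeros(nz), np.zeros(nz)
outlet = []
for step in range(400000):              # 40 s of simulated time
    dq = k * (K * c - q)
    adv = np.empty(nz)                  # upwind advection term
    adv[0] = (c[0] - 1.0) / dz          # inlet held at feed concentration 1
    adv[1:] = (c[1:] - c[:-1]) / dz
    c -= dt * (v * adv + rb * dq)
    q += dt * dq
    if step % 4000 == 0:
        outlet.append(c[-1])            # breakthrough curve at the bed exit
print(round(outlet[0], 3), round(c[-1], 3))  # outlet starts clean, ends near feed
```

    The time at which the outlet concentration rises toward the feed value is the breakthrough time; the adsorption terms retard the front relative to pure advection, which is exactly the information used to size a column.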

  17. Ultra-Fine Scale Spatially-Integrated Mapping of Habitat and Occupancy Using Structure-From-Motion.

    Directory of Open Access Journals (Sweden)

    Philip McDowall

    Full Text Available Organisms respond to and often simultaneously modify their environment. While these interactions are apparent at the landscape extent, the driving mechanisms often occur at very fine spatial scales. Structure-from-Motion (SfM), a computer vision technique, allows the simultaneous mapping of organisms and fine scale habitat, and will greatly improve our understanding of habitat suitability, ecophysiology, and the bi-directional relationship between geomorphology and habitat use. SfM can be used to create high-resolution (centimeter-scale) three-dimensional (3D) habitat models at low cost. These models can capture the abiotic conditions formed by terrain and simultaneously record the position of individual organisms within that terrain. While coloniality is common in seabird species, we have a poor understanding of the extent to which dense breeding aggregations are driven by fine-scale active aggregation or limited suitable habitat. We demonstrate the use of SfM for fine-scale habitat suitability by reconstructing the locations of nests in a gentoo penguin colony and fitting models that explicitly account for conspecific attraction. The resulting digital elevation models (DEMs) are used as covariates in an inhomogeneous hybrid point process model. We find that gentoo penguin nest site selection is a function of the topography of the landscape, but that nests are far more aggregated than would be expected based on terrain alone, suggesting a strong role of behavioral aggregation in driving coloniality in this species. This integrated mapping of organisms and fine scale habitat will greatly improve our understanding of fine-scale habitat suitability, ecophysiology, and the complex bi-directional relationship between geomorphology and habitat use.

  18. PSH Transient Simulation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Muljadi, Eduard [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-12-21

    PSH Transient Simulation Modeling presentation from the WPTO FY14 - FY16 Peer Review. Transient effects are an important consideration when designing a PSH system, yet numerical techniques for hydraulic transient analysis still need improvements for adjustable-speed (AS) reversible pump-turbine applications.

  19. LAGRANGIAN MODELING OF A SUSPENDED-SEDIMENT PULSE.

    Science.gov (United States)

    Schoellhamer, David H.

    1987-01-01

    The one-dimensional Lagrangian Transport Model (LTM) has been applied in a quasi two-dimensional manner to simulate the transport of a slug injection of microbeads in steady experimental flows. A stationary bed segment was positioned below each parcel location to simulate temporary storage of beads on the bottom of the flume. Only one degree of freedom was available for all three bead simulations. The results show the versatility of the LTM and its ability to accurately simulate the transport of fine suspended sediment.

  20. Research on the fundamental process of thermal-hydraulic behaviors in severe accident. Behavior of fine droplet flow. JAERI's nuclear research promotion program, H10-027-7. Contract research

    Energy Technology Data Exchange (ETDEWEB)

    Kataoka, Isao; Yoshida, Kenji [Osaka Univ., Graduate School of Engineering, Osaka (Japan); Matsuura, Keizo [Nuclear Fuel Industry, Co., Ltd., Tokyo (Japan)

    2002-03-01

    Analytical and experimental research was carried out on the behavior of fine droplet flow in relation to the fundamental thermohydraulic phenomena in severe accidents. A simulation program for fine droplet behavior in turbulent gas flow was developed based on the eddy interaction model, with improvements to Graham's stochastic model for eddy lifetime and eddy size. The program is also capable of simulating droplet behavior in annular dispersed flow, based on models of droplet entrainment from the liquid film and turbulence modification of the gas phase by the liquid film. It was validated against various experimental data on droplet diffusion and deposition, and was further applied to three-dimensional droplet flow with satisfactory agreement with experimental data, indicating that it can be used as a simulation program for severe accident analysis. Experimental research was carried out on the effect of the liquid film on the turbulence field of the gas flow in annular and annular dispersed flow. Averaged and turbulent velocities of the gas phase were measured under various gas and liquid film flow rates. The turbulent velocity of the gas phase in annular flow increased compared with single-phase gas flow, due to turbulence generation by waves in the liquid film. Corresponding to this turbulence modification, the distribution of the averaged gas-phase velocity became flattened compared with single-phase gas flow. (author)
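    The core of an eddy-interaction tracker is easy to sketch: each droplet sees a randomly drawn eddy velocity that is held for an eddy lifetime and then redrawn. The one-dimensional, inertia-free version below, with invented turbulence parameters, reproduces the expected turbulent-dispersion scaling:

```python
import math, random

def track_droplet(n_steps=5000, dt=1e-3, u_rms=0.5, t_eddy=0.01, seed=0):
    # 1-D eddy-interaction model: piecewise-constant random eddy velocity
    rng = random.Random(seed)
    y, v_eddy, t_left = 0.0, 0.0, 0.0
    for _ in range(n_steps):
        if t_left <= 0.0:                  # eddy expired: sample a new one
            v_eddy = rng.gauss(0.0, u_rms)
            t_left = t_eddy
        y += v_eddy * dt                   # droplet follows the eddy (no inertia)
        t_left -= dt
    return y

# Ensemble of droplets: variance grows like u_rms^2 * t_eddy * T
disp = [track_droplet(seed=s) for s in range(200)]
mean = sum(disp) / len(disp)
rms = math.sqrt(sum(d * d for d in disp) / len(disp))
print(round(mean, 3), round(rms, 3))
```

    The improvements described in the abstract (droplet inertia, entrainment from the liquid film, turbulence modification by the film) add an equation of motion for the droplet and modify how u_rms and t_eddy are chosen locally.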

  1. Modeling and simulation of discrete event systems

    CERN Document Server

    Choi, Byoung Kyu

    2013-01-01

    Computer modeling and simulation (M&S) allows engineers to study and analyze complex systems. Discrete-event system (DES) M&S is used in modern management, industrial engineering, computer science, and the military. As computer speeds and memory capacities increase, DES-M&S tools become more powerful and more widely used in solving real-life problems. Based on over 20 years of evolution within a classroom environment, as well as on decades-long experience in developing simulation-based solutions for high-tech industries, Modeling and Simulation of Discrete-Event Systems is the only book on

  2. Modeling and simulation of large HVDC systems

    Energy Technology Data Exchange (ETDEWEB)

    Jin, H.; Sood, V.K.

    1993-01-01

    This paper addresses the complexity and the amount of work involved in preparing simulation data and implementing various converter control schemes, as well as the excessive simulation time involved in modelling and simulating large HVDC systems. The Power Electronic Circuit Analysis program (PECAN) is used to address these problems, and a large HVDC system with two dc links is simulated using PECAN. A benchmark HVDC system is studied to compare the simulation results with those from other packages. The simulation times and results are provided in the paper.

  3. Simulation platform to model, optimize and design wind turbines

    Energy Technology Data Exchange (ETDEWEB)

    Iov, F.; Hansen, A.D.; Soerensen, P.; Blaabjerg, F.

    2004-03-01

    This report is a general overview of the results obtained in the project 'Electrical Design and Control. Simulation Platform to Model, Optimize and Design Wind Turbines'. The motivation for this research project is the ever-increasing wind energy penetration into the power network. The project therefore has the main goal of creating a model database in different simulation tools for system optimization of wind turbine systems. Using this model database, a simultaneous optimization of the aerodynamic, mechanical, electrical and control systems over the whole range of wind speeds and grid characteristics can be achieved. The report is structured in six chapters. First, the background of this project, its main goals and the structure of the simulation platform are given. The main wind turbine topologies taken into account during the project are briefly presented. Then, the simulation tools used in this platform, namely HAWC, DIgSILENT, Saber and Matlab/Simulink, are described, with a focus on modelling and simulation time-scale aspects. The abilities of these tools are complementary, and together they can cover all the modelling aspects of wind turbines, e.g. mechanical loads, power quality, switching, control and grid faults. However, other simulation packages, e.g. PSCAD/EMTDC, can easily be added to the simulation platform. New models and new control algorithms for wind turbine systems have been developed and tested in these tools. All these models are collected in dedicated libraries in Matlab/Simulink as well as in Saber. Some simulation results from the considered tools are presented for MW wind turbines. These simulation results focus on fixed-speed and variable-speed/pitch wind turbines. A good agreement with the real behaviour of these systems is obtained for each simulation tool. These models can easily be extended to model different kinds of wind turbines or large wind

  4. Subthreshold SPICE Model Optimization

    Science.gov (United States)

    Lum, Gregory; Au, Henry; Neff, Joseph; Bozeman, Eric; Kamin, Nick; Shimabukuro, Randy

    2011-04-01

    The first step in integrated circuit design is the simulation of said design in software to verify proper functionality and design requirements. Properties of the process are provided by fabrication foundries in the form of SPICE models. These SPICE models contain the electrical data and physical properties of the basic circuit elements. A limitation of these models is that the data collected by the foundry only accurately model the saturation region. This is fine for most users, but for devices operating in the subthreshold region the models are inadequate for accurate simulation results. This is why optimizing the current SPICE models to characterize the subthreshold region is so important. In order to accurately simulate this region of operation, MOSFETs of varying widths and lengths are fabricated and the electrical test data are collected. From the data collected, the parameters of the model files are optimized through parameter extraction rather than curve fitting. With the completed optimized models, the circuit designer is able to accurately simulate circuit designs in the subthreshold region.

  5. Modeling and Simulation of Matrix Converter

    DEFF Research Database (Denmark)

    Liu, Fu-rong; Klumpner, Christian; Blaabjerg, Frede

    2005-01-01

    This paper discusses the modeling and simulation of the matrix converter. Two models of the matrix converter are presented: one based on indirect space vector modulation and the other based on the power balance equation. The basis of these two models is given and the modeling process is introduced...

  6. Modification of Core Model for KNTC 2 Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Y.K.; Lee, J.G.; Park, J.E.; Bae, S.N.; Chin, H.C. [Korea Electric Power Research Institute, Taejeon (Korea, Republic of)

    1997-12-31

    The KNTC 2 simulator was developed in 1986 with YGN 1 as its reference plant. Since YGN 1 changed its fuel management to a long-term cycle (cycle 9), simulator data such as rod worth, boron worth and moderator temperature coefficient diverged from those of YGN 1. To incorporate these changes and bring the simulator closer to the reference plant, a core model upgrade became necessary. During this research, core data for the simulator were newly generated using the APA of WH. A PC-based tool was also developed to ease tuning and verification of the key characteristics of the reactor model. To facilitate later core model upgrades, two procedures - 'the Procedures for core characteristic generation' and 'the Procedures for core characteristic modification' - were also developed. (author). 16 refs., 22 figs., 1 tab.

  7. Modeling and simulation of the SDC data collection chip

    International Nuclear Information System (INIS)

    Hughes, E.; Haney, M.; Golin, E.; Jones, L.; Knapp, D.; Tharakan, G.; Downing, R.

    1992-01-01

    This paper describes modeling and simulation of the Data Collection Chip (DCC) design for the Solenoidal Detector Collaboration (SDC). Models of the DCC written in Verilog and VHDL are described, and results are presented. The models have been simulated to study queue depth requirements and to compare control feedback alternatives. Insight into the management of models and simulation tools is given. Finally, techniques useful in the design process for data acquisition systems are discussed

  8. HVDC System Characteristics and Simulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Moon, S.I.; Han, B.M.; Jang, G.S. [Electric Enginnering and Science Research Institute, Seoul (Korea)

    2001-07-01

    This report deals with the AC-DC power system simulation method by PSS/E and EUROSTAG for the development of a strategy for the reliable operation of the Cheju-Haenam interconnected system. The simulation using both programs is performed to analyze HVDC simulation models. In addition, the control characteristics of the Cheju-Haenam HVDC system as well as Cheju AC system characteristics are described in this work. (author). 104 figs., 8 tabs.

  9. Turbine modelling for real time simulators

    International Nuclear Information System (INIS)

    Oliveira Barroso, A.C. de; Araujo Filho, F. de

    1992-01-01

    A model of steam turbines and their peripherals has been developed for real-time simulators. All the important variables have been included, and emphasis has been placed on computational efficiency to obtain a model able to simulate all the modeled equipment. (A.C.A.S.)

  10. Converting biomolecular modelling data based on an XML representation.

    Science.gov (United States)

    Sun, Yudong; McKeever, Steve

    2008-08-25

    Biomolecular modelling has provided computational simulation based methods for investigating biological processes from quantum chemical to cellular levels. Modelling such microscopic processes requires an atomic description of the biological system and proceeds in fine timesteps, so the simulations are extremely computationally demanding. To tackle this limitation, different biomolecular models have to be integrated in order to achieve high-performance simulations. Integrating diverse biomolecular models requires converting molecular data between the different data representations of the different models. This data conversion is often non-trivial, requires extensive human input and is inevitably error prone. In this paper we present an automated data conversion method for biomolecular simulations between molecular dynamics and quantum mechanics/molecular mechanics models. Our approach is developed around an XML data representation called BioSimML (Biomolecular Simulation Markup Language). BioSimML provides a domain-specific data representation for biomolecular modelling which can efficiently support data interoperability between different biomolecular simulation models and data formats.
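    The flavor of such a conversion can be sketched with Python's standard XML tooling; the element names and the QM/MM partitioning rule below are invented for illustration and are not the actual BioSimML schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical MD-style description of a water molecule
md_xml = """<molecule name="water">
  <atom id="1" element="O" x="0.000" y="0.000" z="0.000"/>
  <atom id="2" element="H" x="0.957" y="0.000" z="0.000"/>
  <atom id="3" element="H" x="-0.240" y="0.927" z="0.000"/>
</molecule>"""

def md_to_qmmm(xml_text, qm_elements=frozenset({"O"})):
    # Re-express each MD atom as a QM/MM "site", tagging its region
    mol = ET.fromstring(xml_text)
    out = ET.Element("system", name=mol.get("name"))
    for atom in mol.iter("atom"):
        ET.SubElement(out, "site",
                      id=atom.get("id"),
                      element=atom.get("element"),
                      region="QM" if atom.get("element") in qm_elements else "MM",
                      xyz="{} {} {}".format(atom.get("x"), atom.get("y"), atom.get("z")))
    return ET.tostring(out, encoding="unicode")

converted = md_to_qmmm(md_xml)
print(converted)
```

    The point of a shared intermediate representation such as BioSimML is that each simulation format then needs only one converter to and from the intermediate form, rather than one per pair of formats.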

  11. Converting Biomolecular Modelling Data Based on an XML Representation

    Directory of Open Access Journals (Sweden)

    Sun Yudong

    2008-06-01

    Full Text Available Biomolecular modelling has provided computational simulation based methods for investigating biological processes from quantum chemical to cellular levels. Modelling such microscopic processes requires an atomic description of the biological system and proceeds in fine timesteps, so the simulations are extremely computationally demanding. To tackle this limitation, different biomolecular models have to be integrated in order to achieve high-performance simulations. Integrating diverse biomolecular models requires converting molecular data between the different data representations of the different models. This data conversion is often non-trivial, requires extensive human input and is inevitably error prone. In this paper we present an automated data conversion method for biomolecular simulations between molecular dynamics and quantum mechanics/molecular mechanics models. Our approach is developed around an XML data representation called BioSimML (Biomolecular Simulation Markup Language). BioSimML provides a domain-specific data representation for biomolecular modelling which can efficiently support data interoperability between different biomolecular simulation models and data formats.

  12. A Framework for the Optimization of Discrete-Event Simulation Models

    Science.gov (United States)

    Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.

    1996-01-01

    With the growing use of computer modeling and simulation in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed when optimizing via stochastic simulation models: the optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general-purpose framework for the optimization of terminating discrete-event simulation models. The methodology combines a chance-constraint approach for problem formulation with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle through a simulation model.
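    The chance-constraint idea can be sketched with a toy terminating simulation: choose the smallest resource level whose estimated probability of meeting a completion-time limit reaches a target. The queueing model, the time limit, and the probability target below are all invented, standing in for the launch-vehicle model:

```python
import random

def makespan(n_servers, n_jobs=60, seed=0):
    # Terminating simulation: time to finish n_jobs on n_servers,
    # assigning each job to the earliest-free server (Exp(1) durations)
    rng = random.Random(seed)
    finish = [0.0] * n_servers
    for _ in range(n_jobs):
        i = finish.index(min(finish))
        finish[i] += rng.expovariate(1.0)
    return max(finish)

def min_servers(limit=15.0, target_prob=0.9, n_rep=200):
    # Chance constraint: smallest n with estimated P(makespan <= limit) >= target
    for n in range(1, 30):
        hits = sum(makespan(n, seed=r) <= limit for r in range(n_rep))
        if hits / n_rep >= target_prob:
            return n
    return None

print(min_servers())
```

    Because the constraint is estimated from replications, a full treatment also attaches a confidence statement to the estimated probability, which is the statistical-analysis part of the framework.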

  13. How fine is fine enough when doing CFD terrain simulations

    DEFF Research Database (Denmark)

    Sørensen, Niels N.; Bechmann, Andreas; Réthoré, Pierre-Elouan

    2012-01-01

    The present work addresses the problem of establishing the grid resolution necessary to obtain a given level of numerical accuracy using a CFD model for prediction of flow over terrain. It is illustrated that a very high resolution may be needed if the numerical difference between consecutive...

  14. A reanalysis of MODIS fine mode fraction over ocean using OMI and daily GOCART simulations

    Directory of Open Access Journals (Sweden)

    T. A. Jones

    2011-06-01

    Full Text Available Using daily Goddard Chemistry Aerosol Radiation and Transport (GOCART) model simulations and columnar retrievals of 0.55 μm aerosol optical thickness (AOT) and fine mode fraction (FMF) from the Moderate Resolution Imaging Spectroradiometer (MODIS), we estimate the satellite-derived aerosol properties over the global oceans between June 2006 and May 2007 due to black carbon (BC), organic carbon (OC), dust (DU), sea-salt (SS), and sulfate (SU) components. Using Aqua-MODIS aerosol properties embedded in the CERES-SSF product, we find that the mean MODIS FMF values for each aerosol type are SS: 0.31 ± 0.09, DU: 0.49 ± 0.13, SU: 0.77 ± 0.16, and (BC + OC): 0.80 ± 0.16. We further combine information from the ultraviolet spectrum using the Ozone Monitoring Instrument (OMI) onboard the Aura satellite to improve the classification process, since dust and carbonate aerosols have positive Aerosol Index (AI) values >0.5 while other aerosol types have near-zero values. By combining the MODIS and OMI datasets, we were able to identify and remove data in the SU, OC, and BC regions that were not associated with those aerosol types.

    The same methods used to estimate aerosol size characteristics from MODIS data within the CERES-SSF product were applied to Level 2 (L2) MODIS aerosol data from both the Terra and Aqua satellites for the same time period. As expected, FMF estimates from L2 Aqua data agreed well with the CERES-SSF dataset from Aqua. However, the FMF estimate for DU from Terra data was significantly lower (0.37 vs. 0.49), indicating that sensor calibration, sampling differences, and/or diurnal changes in DU aerosol size characteristics were occurring. Differences for other aerosol types were generally smaller. Sensitivity studies show that a difference of 0.1 in the estimate of the anthropogenic component of FMF produces a corresponding change of 0.2 in the anthropogenic component of AOT (assuming a unit value of AOT). This uncertainty would then be passed
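    The combined MODIS + OMI screening amounts to a simple decision rule; the FMF cut points below are loosely motivated by the per-type means quoted above but are illustrative, not the paper's actual thresholds:

```python
def classify_aerosol(fmf, ai):
    # OMI Aerosol Index > 0.5 flags UV-absorbing types (dust, carbon);
    # MODIS fine-mode fraction then separates coarse from fine modes
    if ai > 0.5:
        return "dust" if fmf < 0.65 else "carbonaceous"
    return "sea-salt" if fmf < 0.55 else "sulfate"

# The per-type mean FMFs quoted above land in the expected classes
print(classify_aerosol(0.49, 1.2),   # dust
      classify_aerosol(0.80, 1.2),   # BC + OC
      classify_aerosol(0.31, 0.1),   # sea salt
      classify_aerosol(0.77, 0.1))   # sulfate
```

    In practice each retrieval carries uncertainty in both FMF and AI, which is why the paper removes ambiguous points rather than forcing every pixel into a class.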

  15. A model for fine mapping in family based association studies.

    Science.gov (United States)

    Boehringer, Stefan; Pfeiffer, Ruth M

    2009-01-01

    Genome-wide association studies for complex diseases are typically followed by more focused characterization of the identified genetic region. We propose a latent class model to evaluate a candidate region with several measured markers using observations on families. The main goal is to estimate linkage disequilibrium (LD) between the observed markers and the putative true but unobserved disease locus in the region. Based on this model, we estimate the joint distribution of alleles at the observed markers and the unobserved true disease locus, and a penetrance parameter measuring the impact of the disease allele on disease risk. A family-specific random effect allows for varying baseline disease prevalences across families. We present a likelihood framework for our model and assess its properties in simulations. We apply the model to an Alzheimer data set and confirm previous findings in the ApoE region.

  16. MASADA: A MODELING AND SIMULATION AUTOMATED DATA ANALYSIS FRAMEWORK FOR CONTINUOUS DATA-INTENSIVE VALIDATION OF SIMULATION MODELS

    CERN Document Server

    Foguelman, Daniel Jacob; The ATLAS collaboration

    2016-01-01

    Complex networked computer systems are usually subjected to upgrades and enhancements on a continuous basis. Modeling and simulation of such systems helps with guiding their engineering processes, in particular when testing candidate design alternatives directly on the real system is not an option. Models are built and simulation exercises are run guided by specific research and/or design questions. A vast amount of operational conditions for the real system need to be assumed in order to focus on the relevant questions at hand. A typical boundary condition for computer systems is the exogenously imposed workload. Meanwhile, in typical projects huge amounts of monitoring information are logged and stored with the purpose of studying the system's performance in search for improvements. Also, research questions change as the system's operational conditions vary throughout its lifetime. This context poses many challenges to determine the validity of simulation models. As the behavioral empirical base of the sys...

  18. A general CFD framework for fault-resilient simulations based on multi-resolution information fusion

    Science.gov (United States)

    Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em

    2017-10-01

    We develop a general CFD framework for multi-resolution simulations, targeting both multiscale problems and resilience in exascale simulations, where faulty processors may lead to gappy (in space-time) simulated fields. We combine approximation theory and domain decomposition together with statistical learning techniques, e.g. coKriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation (a) on a small number of spatial "patches" distributed across the domain, simulated by finite differences at fine resolution, and (b) on the entire domain simulated at very low resolution, thus fusing multi-resolution models to obtain the final answer. Second, we simulate the flow in a lid-driven cavity in an analogous fashion, by fusing finite difference solutions obtained at fine and low resolution assuming gappy data sets. We investigate the influence of various parameters of this framework, including the correlation kernel, the size of a buffer employed in estimating boundary conditions, the coarseness of the resolution of auxiliary data, and the communication frequency across different patches in fusing the information at different resolution levels. In addition to its robustness and resilience, the new framework can be employed to generalize previous multiscale approaches involving heterogeneous discretizations or even fundamentally different flow descriptions, e.g. in continuum-atomistic simulations.
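
    The coarse, whole-domain piece of the first benchmark can be sketched as a plain explicit finite-difference heat solver in Python. This is a toy version for illustration only; the patch-based fine-resolution runs and the coKriging fusion step of the actual framework are not reproduced, and the grid size and boundary values are arbitrary assumptions.

```python
# Toy explicit finite-difference solver for the 1-D heat equation
# u_t = alpha * u_xx on [0, 1] with fixed boundary values.
# Sketches only the "entire domain at low resolution" piece of the
# benchmark; the fine patches and statistical fusion are omitted.

def heat_step(u, alpha, dx, dt):
    """One explicit Euler step; interior points only, boundaries held fixed."""
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = u[i] + alpha * dt / dx**2 * (u[i-1] - 2*u[i] + u[i+1])
    return new

def solve_heat(n=11, alpha=1.0, t_end=0.01):
    dx = 1.0 / (n - 1)
    dt = 0.4 * dx**2 / alpha     # respects the stability limit dt <= dx^2 / (2*alpha)
    u = [0.0] * n
    u[0] = 1.0                   # hot left boundary, cold right boundary
    t = 0.0
    while t < t_end:
        u = heat_step(u, alpha, dx, dt)
        t += dt
    return u
```

    The time step is chosen below the explicit-scheme stability limit, so the monotone temperature profile between the two boundaries is preserved.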

  19. Impact of reactive settler models on simulated WWTP performance

    DEFF Research Database (Denmark)

    Gernaey, Krist; Jeppsson, Ulf; Batstone, Damien J.

    2006-01-01

    for an ASM1 case study. Simulations with a whole plant model including the non-reactive Takacs settler model are used as a reference, and are compared to simulation results considering two reactive settler models. The first is a return sludge model block removing oxygen and a user-defined fraction of nitrate, combined with a non-reactive Takacs settler. The second is a fully reactive ASM1 Takacs settler model. Simulations with the ASM1 reactive settler model predicted a 15.3% and 7.4% improvement of the simulated N removal performance, for constant (steady-state) and dynamic influent conditions respectively. The oxygen/nitrate return sludge model block predicts a 10% improvement of N removal performance under dynamic conditions, and might be the better modelling option for ASM1 plants: it is computationally more efficient and it will not overrate the importance of decay processes in the settler.

  20. MRL and SuperFine+MRL: new supertree methods

    Science.gov (United States)

    2012-01-01

    Background Supertree methods combine trees on subsets of the full taxon set together to produce a tree on the entire set of taxa. Of the many supertree methods, the most popular is MRP (Matrix Representation with Parsimony), a method that operates by first encoding the input set of source trees by a large matrix (the "MRP matrix") over {0,1, ?}, and then running maximum parsimony heuristics on the MRP matrix. Experimental studies evaluating MRP in comparison to other supertree methods have established that for large datasets, MRP generally produces trees of equal or greater accuracy than other methods, and can run on larger datasets. A recent development in supertree methods is SuperFine+MRP, a method that combines MRP with a divide-and-conquer approach, and produces more accurate trees in less time than MRP. In this paper we consider a new approach for supertree estimation, called MRL (Matrix Representation with Likelihood). MRL begins with the same MRP matrix, but then analyzes the MRP matrix using heuristics (such as RAxML) for 2-state Maximum Likelihood. Results We compared MRP and SuperFine+MRP with MRL and SuperFine+MRL on simulated and biological datasets. We examined the MRP and MRL scores of each method on a wide range of datasets, as well as the resulting topological accuracy of the trees. Our experimental results show that MRL, coupled with a very good ML heuristic such as RAxML, produced more accurate trees than MRP, and MRL scores were more strongly correlated with topological accuracy than MRP scores. Conclusions SuperFine+MRP, when based upon a good MP heuristic, such as TNT, produces among the best scores for both MRP and MRL, and is generally faster and more topologically accurate than other supertree methods we tested. PMID:22280525

  1. Application of Hidden Markov Models in Biomolecular Simulations.

    Science.gov (United States)

    Shukla, Saurabh; Shamsi, Zahra; Moffett, Alexander S; Selvam, Balaji; Shukla, Diwakar

    2017-01-01

    Hidden Markov models (HMMs) provide a framework to analyze large trajectories from biomolecular simulation datasets. HMMs decompose the conformational space of a biological molecule into a finite number of states that interconvert with certain rates. HMMs simplify long-timescale trajectories for human comprehension and allow comparison of simulations with experimental data. In this chapter, we provide an overview of building HMMs for analyzing biomolecular simulation datasets. We demonstrate the procedure for building a hidden Markov model for a Met-enkephalin peptide simulation dataset and compare the timescales of the process.
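
    As a minimal illustration of the underlying idea (not the chapter's actual workflow), the transition probabilities of a discrete-state model can be estimated from an observed state sequence by simple counting; a full HMM would additionally infer hidden states, e.g. via Baum-Welch. A sketch in Python:

```python
from collections import Counter

def transition_matrix(traj, n_states):
    """Maximum-likelihood transition probabilities from a discrete state trajectory.
    This is the count-based estimator used when states are observed directly;
    a full HMM additionally infers hidden states (e.g. with Baum-Welch)."""
    counts = Counter(zip(traj[:-1], traj[1:]))  # count observed i -> j transitions
    T = [[0.0] * n_states for _ in range(n_states)]
    for i in range(n_states):
        row_total = sum(counts[(i, j)] for j in range(n_states))
        if row_total:
            for j in range(n_states):
                T[i][j] = counts[(i, j)] / row_total
    return T
```

    For a clustered molecular dynamics trajectory, `traj` would hold the state label of each frame, and each row of `T` gives the conditional probabilities of moving to each state in one lag time.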

  2. Re-evaluation of the Pressure Effect for Nucleation in Laminar Flow Diffusion Chamber Experiments with Fluent and the Fine Particle Model

    Czech Academy of Sciences Publication Activity Database

    Herrmann, E.; Hyvärinen, A.-P.; Brus, David; Lihavainen, H.; Kulmala, M.

    2009-01-01

    Roč. 113, č. 8 (2009), s. 1434-1439 ISSN 1089-5639 Institutional research plan: CEZ:AV0Z40720504 Keywords : laminar flow diffusion chamber * experimental data * fine particle model Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 2.899, year: 2009

  3. Automobile simulation model and its identification. Behavior measuring by image processing; Jidosha simulation model to dotei jikken. Gazo kaiseki ni yoru undo no keisoku

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, H; Morita, S; Matsuura, Y [Osaka Sangyo University, Osaka (Japan)

    1997-10-01

    Model simulation technology is important for automobile development. In particular, for investigations concerning ABS, TRC, VDC, and similar systems, the model should simulate not only the overall behavior of the automobile but also such internal quantities as the torque, acceleration, and velocity of each drive shaft. From this point of view, a four-wheel simulation model capable of simulating more than 50 such items was developed. A 3-D image processing technique using two video cameras was adopted to identify the model. Good agreement was observed between the simulated and measured values. 3 refs., 7 figs., 2 tabs.

  4. Calibration of a transient transport model to tritium data in streams and simulation of groundwater ages in the western Lake Taupo catchment, New Zealand

    Directory of Open Access Journals (Sweden)

    M. A. Gusyev

    2013-03-01

    Full Text Available Here we present a general approach of calibrating transient transport models to tritium concentrations in river waters developed for the MT3DMS/MODFLOW model of the western Lake Taupo catchment, New Zealand. Tritium has a known pulse-shaped input to groundwater systems due to the bomb tritium in the early 1960s and, with its radioactive half-life of 12.32 yr, allows for the determination of the groundwater age. In the transport model, the tritium input (measured in rainfall passes through the groundwater system, and the simulated tritium concentrations are matched to the measured tritium concentrations in the river and stream outlets for the Waihaha, Whanganui, Whareroa, Kuratau and Omori catchments from 2000–2007. For the Kuratau River, tritium was also measured between 1960 and 1970, which allowed us to fine-tune the transport model for the simulated bomb-peak tritium concentrations. In order to incorporate small surface water features in detail, an 80 m uniform grid cell size was selected in the steady-state MODFLOW model for the model area of 1072 km2. The groundwater flow model was first calibrated to groundwater levels and stream baseflow observations. Then, the transient tritium transport MT3DMS model was matched to the measured tritium concentrations in streams and rivers, which are the natural discharge of the groundwater system. The tritium concentrations in the rivers and streams correspond to the residence time of the water in the groundwater system (groundwater age and mixing of water with different age. The transport model output showed a good agreement with the measured tritium values. Finally, the tritium-calibrated MT3DMS model is applied to simulate groundwater ages, which are used to obtain groundwater age distributions with mean residence times (MRTs in streams and rivers for the five catchments. 
The effect of regional and local hydrogeology on the simulated groundwater ages is investigated by demonstrating groundwater ages
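
    The radioactive ingredient of the method is simple: with a half-life of 12.32 yr, the fraction of the tritium input remaining after t years in the groundwater system is 2^(-t/12.32). A short Python helper (an illustrative sketch, not part of the MT3DMS/MODFLOW setup):

```python
HALF_LIFE_TRITIUM = 12.32  # years, as given in the abstract

def decay_fraction(t_years, half_life=HALF_LIFE_TRITIUM):
    """Fraction of an initial tritium concentration remaining after t_years.
    N(t)/N0 = 2^(-t / half_life)."""
    return 2.0 ** (-t_years / half_life)
```

    For example, water recharged at the 1963 bomb peak retains about a quarter of its tritium after two half-lives (roughly 25 years), which is what makes the bomb pulse a usable age tracer over decadal residence times.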

  5. Quantum Link Models and Quantum Simulation of Gauge Theories

    International Nuclear Information System (INIS)

    Wiese, U.J.

    2015-01-01

    This lecture is about quantum link models and quantum simulation of gauge theories. It consists of four parts. The first part gives a brief history of computing, introduces pioneers of quantum computing, and discusses quantum simulations of quantum spin systems. The second part covers high-temperature superconductors versus QCD, Wilson's lattice QCD, and Abelian quantum link models. The third part deals with quantum simulators for Abelian lattice gauge theories and non-Abelian quantum link models. The last part discusses quantum simulators mimicking 'nuclear' physics and the continuum limit of D-theory models. (nowak)

  6. Exact simulation of conditioned Wright-Fisher models.

    Science.gov (United States)

    Zhao, Lei; Lascoux, Martin; Waxman, David

    2014-12-21

    Forward and backward simulations play an increasing role in population genetics, in particular when inferring the relative importance of evolutionary forces. It is therefore important to develop fast and accurate simulation methods for general population genetics models. Here we present an exact simulation method that generates trajectories of an allele's frequency in a finite population, as described by a general Wright-Fisher model. The method generates conditioned trajectories that start from a known frequency at a known time, and which achieve a specific final frequency at a known final time. The simulation method applies irrespective of the smallness of the probability of the transition between the initial and final states, because it is not based on rejection of trajectories. We illustrate the method on several different populations where a Wright-Fisher model (or related) applies, namely (i) a locus with 2 alleles, that is subject to selection and mutation; (ii) a locus with 3 alleles, that is subject to selection; (iii) a locus in a metapopulation consisting of two subpopulations of finite size, that are subject to selection and migration. The simulation method allows the generation of conditioned trajectories that can be used for the purposes of visualisation, the estimation of summary statistics, and the development/testing of new inferential methods. The simulated trajectories provide a very simple approach to estimating quantities that cannot easily be expressed in terms of the transition matrix, and can be applied to finite Markov chains other than the Wright-Fisher model. Copyright © 2014 Elsevier Ltd. All rights reserved.
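
    For context, the plain (unconditioned) Wright-Fisher model is straightforward to simulate forward in time: each generation the allele count is binomially resampled at the current frequency. The Python sketch below shows only this baseline; the paper's contribution is an exact method for trajectories additionally conditioned on a known final frequency, which is not implemented here.

```python
import random

def wright_fisher_trajectory(N, p0, generations, seed=None):
    """Forward simulation of allele frequency in the neutral Wright-Fisher model
    for a population of N haploid individuals. Unconditioned: trajectories may
    drift to loss (0) or fixation (1) and stay there."""
    rng = random.Random(seed)
    count = round(p0 * N)
    freqs = [count / N]
    for _ in range(generations):
        p = count / N
        # Binomial(N, p) draw via N Bernoulli trials (fine for modest N)
        count = sum(rng.random() < p for _ in range(N))
        freqs.append(count / N)
    return freqs
```

    Conditioning on a final frequency by rejection (rerunning until a trajectory happens to end at the target) becomes hopeless when that endpoint is rare, which is precisely the situation the paper's rejection-free method addresses.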

  7. Fine and gross motor skills: The effects on skill-focused dual-tasks.

    Science.gov (United States)

    Raisbeck, Louisa D; Diekfuss, Jed A

    2015-10-01

    Dual-task methodology often directs participants' attention towards a gross motor skill involved in the execution of a skill, but researchers have not investigated the comparative effects of attention on fine motor skill tasks. Furthermore, there is limited information about participants' subjective perception of workload with respect to task performance. To examine this, the current study administered the NASA-Task Load Index following a simulated shooting dual-task. The task required participants to stand 15 feet from a projector screen which depicted virtual targets and fire a modified Glock 17 handgun equipped with an infrared laser. Participants performed the primary shooting task alone (control), or were also instructed to focus their attention on a gross motor skill relevant to task execution (gross skill-focused) and a fine motor skill relevant to task execution (fine skill-focused). Results revealed that workload was significantly greater during the fine skill-focused task for both skill levels, but performance was only affected for the lesser-skilled participants. Shooting performance for the lesser-skilled participants was greater during the gross skill-focused condition compared to the fine skill-focused condition. Correlational analyses also demonstrated a significant negative relationship between shooting performance and workload during the gross skill-focused task for the higher-skilled participants. A discussion of the relationship between skill type, workload, skill level, and performance in dual-task paradigms is presented. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. An introduction to network modeling and simulation for the practicing engineer

    CERN Document Server

    Burbank, Jack; Ward, Jon

    2011-01-01

    This book provides the practicing engineer with a concise listing of commercial and open-source modeling and simulation tools currently available, including examples of applying those tools to specific modeling and simulation problems. Instead of focusing on the underlying theory of modeling and simulation and the fundamental building blocks of custom simulations, the book compares platforms used in practice and gives rules that enable the practicing engineer to utilize available modeling and simulation tools. It also contains insights regarding common pitfalls in network modeling and simulation, and practical methods for working engineers.

  9. Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware.

    Science.gov (United States)

    Rast, Alexander; Galluppi, Francesco; Davies, Sergio; Plana, Luis; Patterson, Cameron; Sharp, Thomas; Lester, David; Furber, Steve

    2011-11-01

    Dedicated hardware is becoming increasingly essential to simulate emerging very-large-scale neural models. Equally, however, it needs to be able to support multiple models of the neural dynamics, possibly operating simultaneously within the same system. This may be necessary either to simulate large models with heterogeneous neural types, or to simplify simulation and analysis of detailed, complex models in a large simulation by isolating the new model to a small subpopulation of a larger overall network. The SpiNNaker neuromimetic chip is a dedicated neural processor able to support such heterogeneous simulations. Implementing these models on-chip uses an integrated library-based tool chain incorporating the emerging PyNN interface that allows a modeller to input a high-level description and use an automated process to generate an on-chip simulation. Simulations using both LIF and Izhikevich models demonstrate the ability of the SpiNNaker system to generate and simulate heterogeneous networks on-chip, while illustrating, through the network-scale effects of wavefront synchronisation and burst gating, methods that can provide effective behavioural abstractions for large-scale hardware modelling. SpiNNaker's asynchronous virtual architecture permits greater scope for model exploration, with scalable levels of functional and temporal abstraction, than conventional (or neuromorphic) computing platforms. The complete system illustrates a potential path to understanding the neural model of computation, by building (and breaking) neural models at various scales, connecting the blocks, then comparing them against the biology: computational cognitive neuroscience. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Simulation Modelling in Healthcare: An Umbrella Review of Systematic Literature Reviews.

    Science.gov (United States)

    Salleh, Syed; Thokala, Praveen; Brennan, Alan; Hughes, Ruby; Booth, Andrew

    2017-09-01

    Numerous studies examine simulation modelling in healthcare. These studies present a bewildering array of simulation techniques and applications, making it challenging to characterise the literature. The aim of this paper is to provide an overview of the level of activity of simulation modelling in healthcare and the key themes. We performed an umbrella review of systematic literature reviews of simulation modelling in healthcare. Searches were conducted of academic databases (JSTOR, Scopus, PubMed, IEEE, SAGE, ACM, Wiley Online Library, ScienceDirect) and grey literature sources, enhanced by citation searches. The articles were included if they performed a systematic review of simulation modelling techniques in healthcare. After quality assessment of all included articles, data were extracted on numbers of studies included in each review, types of applications, techniques used for simulation modelling, data sources and simulation software. The search strategy yielded a total of 117 potential articles. Following sifting, 37 heterogeneous reviews were included. Most reviews achieved moderate quality rating on a modified AMSTAR (A Measurement Tool used to Assess systematic Reviews) checklist. All the review articles described the types of applications used for simulation modelling; 15 reviews described techniques used for simulation modelling; three reviews described data sources used for simulation modelling; and six reviews described software used for simulation modelling. The remaining reviews either did not report or did not provide enough detail for the data to be extracted. Simulation modelling techniques have been used for a wide range of applications in healthcare, with a variety of software tools and data sources. The number of reviews published in recent years suggest an increased interest in simulation modelling in healthcare.

  11. Modeling and simulating industrial land-use evolution in Shanghai, China

    Science.gov (United States)

    Qiu, Rongxu; Xu, Wei; Zhang, John; Staenz, Karl

    2018-01-01

    This study proposes a cellular automata-based Industrial and Residential Land Use Competition Model to simulate the dynamic spatial transformation of industrial land use in Shanghai, China. In the proposed model, land development activities in a city are delineated as competitions among different land-use types. The Hedonic Land Pricing Model is adopted to implement the competition framework. To improve simulation results, the Land Price Agglomeration Model was devised to simulate and adjust classic land price theory. A new evolutionary algorithm-based parameter estimation method was devised in place of traditional methods. Simulation results show that the proposed model closely resembles actual land transformation patterns and the model can not only simulate land development, but also redevelopment processes in metropolitan areas.
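
    The competition idea can be caricatured in a few lines of Python: each cell is assigned to whichever use bids more for it. The sketch below is a hypothetical toy, with bid grids supplied as plain numbers rather than produced by the calibrated hedonic pricing and price-agglomeration models of the study.

```python
def allocate_land_use(industrial_bid, residential_bid):
    """One competition step of a toy cellular automaton: each cell goes to the
    land use with the higher bid ("I" industrial, "R" residential; ties go to
    residential). In the full framework the bids would come from hedonic land
    price models evaluated per cell; here they are just grids of numbers."""
    return [["I" if ib > rb else "R" for ib, rb in zip(irow, rrow)]
            for irow, rrow in zip(industrial_bid, residential_bid)]
```

    Iterating such a step, with bids updated from the evolving neighborhood (e.g. price agglomeration around existing industrial clusters), is what turns the local rule into a dynamic land-use transformation model.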

  12. Analysis of ERA40-driven CLM simulations for Europe

    Energy Technology Data Exchange (ETDEWEB)

    Jaeger, E.B.; Luethi, D.; Schaer, C.; Seneviratne, S.I. [Inst. for Atmospheric and Climate Science, ETH Zurich (Switzerland); Anders, I.; Rockel, B. [Inst. for Coastal Research, GKSS Research Center, Geesthacht (Germany)

    2008-08-15

    The Climate Local Model (CLM) is a community Regional Climate Model (RCM) based on the COSMO weather forecast model. We present a validation of long-term ERA40-driven CLM simulations performed with different model versions. In particular, we analyse three simulations with differences in boundary nudging and horizontal resolution performed for the EU project ENSEMBLES with model version 2.4.6, and one with the latest version, 4.0. Moreover, we include for comparison a long-term simulation with the RCM CHRM previously used at ETH Zurich. We provide a thorough validation of temperature, precipitation, net radiation, cloud cover, circulation, evaporation and terrestrial water storage for winter and summer. For temperature and precipitation the interannual variability is additionally assessed. While simulations with CLM version 2.4.6 are generally too warm and dry in summer but still within the typical error of PRUDENCE simulations, version 4.0 has an anomalous cold and wet bias. This is partly due to a strong underestimation of the net radiation associated with cloud cover overestimation. Two similar CLM 2.4.6 simulations with different spatial resolutions (0.44° and 0.22°) reveal for the analysed fields no clear benefit of the higher resolution except for better-resolved fine-scale structures. While the large-scale circulation is represented more realistically with spectral nudging, temperature and precipitation are not. Overall, CLM performs comparably to other state-of-the-art RCMs over Europe. (orig.)

  13. Dynamic models of staged gasification processes. Documentation of gasification simulator; Dynamiske modeller af trinopdelte forgasningsprocesser. Dokumentation til forgasser simulator

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-02-15

    In connection with the ERP project 'Dynamic modelling of staged gasification processes', a gasification simulator has been constructed. The simulator consists of: a mathematical model of the gasification process developed at the Technical University of Denmark; a user interface programme, IGSS; and a communication interface between the two programmes. (BA)

  14. Surrogate model approach for improving the performance of reactive transport simulations

    Science.gov (United States)

    Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris

    2016-04-01

    Reactive transport models serve a large number of important geoscientific applications involving underground resources in industry and scientific research. A reactive transport simulation commonly consists of at least two coupled simulation models. The first is a hydrodynamics simulator responsible for simulating groundwater flow and solute transport. Hydrodynamics simulators are well-established technology and can be very efficient: when run without coupled geochemistry, their spatial geometries can span millions of elements even on desktop workstations. The second is a geochemical simulation model coupled to the hydrodynamics simulator. Geochemical simulation models are much more computationally costly, which makes reactive transport simulations spanning millions of spatial elements very difficult to achieve. To address this problem we propose to replace the coupled geochemical simulation model with a surrogate model: a statistical model created to include only the necessary subset of simulator complexity for a particular scenario. To demonstrate the viability of this approach we tested it on a published reactive transport benchmark problem involving 1D calcite transport (Kolditz, 2012). We trained a number of statistical models, available through the caret and DiceEval packages for R, as surrogate models on a randomly sampled subset of the input-output data from the geochemical simulation model used in the original reactive transport simulation. For validation, we use the surrogate model to predict the simulator output on the part of the sampled input data that was not used for training.
For this scenario we find that the multivariate adaptive regression splines

  15. Selecting a dynamic simulation modeling method for health care delivery research-part 2: report of the ISPOR Dynamic Simulation Modeling Emerging Good Practices Task Force.

    Science.gov (United States)

    Marshall, Deborah A; Burgos-Liz, Lina; IJzerman, Maarten J; Crown, William; Padula, William V; Wong, Peter K; Pasupathy, Kalyan S; Higashi, Mitchell K; Osgood, Nathaniel D

    2015-03-01

    In a previous report, the ISPOR Task Force on Dynamic Simulation Modeling Applications in Health Care Delivery Research Emerging Good Practices introduced the fundamentals of dynamic simulation modeling and identified the types of health care delivery problems for which dynamic simulation modeling can be used more effectively than other modeling methods. The hierarchical relationship between the health care delivery system, providers, patients, and other stakeholders exhibits a level of complexity that ought to be captured using dynamic simulation modeling methods. As a tool to help researchers decide whether dynamic simulation modeling is an appropriate method for modeling the effects of an intervention on a health care system, we presented the System, Interactions, Multilevel, Understanding, Loops, Agents, Time, Emergence (SIMULATE) checklist consisting of eight elements. This report builds on the previous work, systematically comparing each of the three most commonly used dynamic simulation modeling methods-system dynamics, discrete-event simulation, and agent-based modeling. We review criteria for selecting the most suitable method depending on 1) the purpose-type of problem and research questions being investigated, 2) the object-scope of the model, and 3) the method to model the object to achieve the purpose. Finally, we provide guidance for emerging good practices for dynamic simulation modeling in the health sector, covering all aspects, from the engagement of decision makers in the model design through model maintenance and upkeep. We conclude by providing some recommendations about the application of these methods to add value to informed decision making, with an emphasis on stakeholder engagement, starting with the problem definition. Finally, we identify areas in which further methodological development will likely occur given the growing "volume, velocity and variety" and availability of "big data" to provide empirical evidence and techniques

  16. Distributed simulation a model driven engineering approach

    CERN Document Server

    Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent

    2016-01-01

    Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.

  17. A Grey Box Neural Network Model of Basal Ganglia for Gait Signal of Patients with Huntington Disease

    Directory of Open Access Journals (Sweden)

    Abbas Pourhedayat

    2016-04-01

    Conclusion: The close similarity between the presented model and the physiological structure of the basal ganglia, together with its high ability to simulate HD disorders, makes this model a powerful tool for analyzing HD behavior.

  18. Modeling and Simulation of Claus Unit Reaction Furnace

    Directory of Open Access Journals (Sweden)

    Maryam Pahlavan

    2016-01-01

    Full Text Available The reaction furnace is the most important part of the Claus sulfur recovery unit, and its performance has a significant impact on process efficiency. Many reactions take place in the furnace, and their kinetics and mechanisms are not completely understood; modeling the reaction furnace is therefore difficult, and several works have been carried out in this regard so far. Equilibrium models are commonly used to simulate the furnace, but the literature indicates that the furnace outlet is not at equilibrium and that the furnace reactions are controlled by kinetics; therefore, in this study, the reaction furnace is simulated with a kinetic model. The outlet temperature and concentrations predicted by this model are compared with experimental data published in the literature and with data obtained using the PROMAX V2.0 simulator. The results show that the accuracy of the proposed kinetic model and of the PROMAX simulator is similar, but the kinetic model used in this paper has two important capabilities. First, it is a distributed model and can be used to obtain the temperature and concentration profiles along the furnace. Second, it is a dynamic model and can be used for analyzing transient behavior and designing the control system.
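
    The "distributed model" capability, i.e. concentration profiles along the furnace, can be illustrated with a much simpler system than the real multi-reaction kinetics: a single first-order reaction in an ideal plug-flow reactor, integrated with explicit Euler. This is a hypothetical Python sketch, not the paper's model; the rate constant, velocity, and length are arbitrary.

```python
import math

def plug_flow_profile(c_in, k, velocity, length, n=100):
    """Concentration profile along an ideal plug-flow reactor with a single
    first-order reaction, a much-simplified stand-in for a multi-reaction
    kinetic furnace model: dC/dx = -(k / u) * C, integrated with explicit
    Euler over n segments of the reactor length."""
    dx = length / n
    c = c_in
    profile = [c]
    for _ in range(n):
        c += -k / velocity * c * dx
        profile.append(c)
    return profile
```

    An equilibrium model, by contrast, yields only a single outlet state; it cannot resolve how composition evolves along the furnace axis, which is the distinction the abstract emphasizes.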

  19. Modelling, simulation and validation of the industrial robot

    Directory of Open Access Journals (Sweden)

    Aleksandrov Slobodan Č.

    2014-01-01

    Full Text Available In this paper, a DH model of an industrial robot with an anthropomorphic configuration and five degrees of freedom, the Mitsubishi RV2AJ, is developed. The model is verified on the Mitsubishi RV2AJ robot. The paper presents the complete mathematical model of the robot and its programming parameters in detail. On the basis of this model, robot motion is simulated both point to point and along a continuous pre-defined path. Programs identical to the simulated ones were then run on the industrial robot, and a comparative analysis of the real and simulated experiments is presented. The final section gives a detailed analysis of the robot's motion.
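
    The core of any DH model is the per-link homogeneous transform and its composition into forward kinematics. The Python sketch below uses the standard Denavit-Hartenberg convention with plain nested lists; the actual Mitsubishi RV2AJ link parameters are not reproduced here, so the values in the usage note are arbitrary assumptions.

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform as a 4x4 row-major list."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ ct, -st * ca,  st * sa, a * ct],
            [ st,  ct * ca, -ct * sa, a * st],
            [0.0,       sa,       ca,      d],
            [0.0,      0.0,      0.0,    1.0]]

def mat_mul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_params):
    """Compose link transforms; dh_params is a list of (theta, d, a, alpha)
    tuples, one per joint, base to end effector."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]  # identity
    for params in dh_params:
        T = mat_mul(T, dh_matrix(*params))
    return T
```

    For a toy planar two-link arm with unit link lengths and both joints at zero, `forward_kinematics([(0, 0, 1, 0), (0, 0, 1, 0)])` places the end effector at x = 2 in the base frame, as expected.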

  20. Constraints on eQTL Fine Mapping in the Presence of Multisite Local Regulation of Gene Expression

    Directory of Open Access Journals (Sweden)

    Biao Zeng

    2017-08-01

    Full Text Available Expression quantitative trait locus (eQTL) detection has emerged as an important tool for unraveling the relationship between genetic risk factors and disease or clinical phenotypes. Most studies use single-marker linear regression to discover primary signals, followed by sequential conditional modeling to detect secondary genetic variants affecting gene expression. However, this approach assumes that functional variants are sparsely distributed and that close linkage between them has little impact on estimation of their precise location and the magnitude of their effects. We describe a series of simulation studies designed to evaluate the impact of linkage disequilibrium (LD) on the fine mapping of causal variants with typical eQTL effect sizes. In the presence of multisite regulation, even though between 80 and 90% of modeled eSNPs associate with normally distributed traits, up to 10% of all secondary signals could be statistical artifacts, and at least 5%, but up to one-quarter, of credible intervals of SNPs within r2 > 0.8 of the peak may not even include a causal site. The Bayesian methods eCAVIAR and DAP (Deterministic Approximation of Posteriors) provide only modest improvement in resolution. Given the strong empirical evidence that gene expression is commonly regulated by more than one variant, we conclude that fine mapping of causal variants needs to be adjusted for multisite influences, as conditional estimates can be highly biased by interference among linked sites; ultimately, experimental verification of individual effects is needed. Presumably similar conclusions apply not just to eQTL mapping, but to multisite influences on fine mapping of most types of quantitative trait.
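
    The LD statistic at the center of these simulations is r2 between two loci. A minimal Python sketch computing r2 from phased haplotypes coded 0/1 (illustrative only, not the study's pipeline):

```python
def ld_r_squared(haplotypes):
    """LD r^2 between two biallelic loci from a list of phased haplotypes,
    each a pair (allele_at_locus_1, allele_at_locus_2) coded 0/1.
    r^2 = D^2 / (pA * (1-pA) * pB * (1-pB)), where D = pAB - pA * pB."""
    n = len(haplotypes)
    pA = sum(h[0] for h in haplotypes) / n
    pB = sum(h[1] for h in haplotypes) / n
    pAB = sum(1 for h in haplotypes if h == (1, 1)) / n
    D = pAB - pA * pB
    denom = pA * (1 - pA) * pB * (1 - pB)
    return D * D / denom if denom else 0.0
```

    SNPs with r2 > 0.8 to the peak marker, the threshold cited in the abstract, are the ones whose credible intervals can still miss the causal site under multisite regulation.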

  1. A high resolution hydrodynamic 3-D model simulation of the malta shelf area

    Directory of Open Access Journals (Sweden)

    A. F. Drago

    2003-01-01

    Full Text Available The seasonal variability of the water masses and transport in the Malta Channel and proximity of the Maltese Islands have been simulated by a high resolution (1.6 km horizontal grid on average, 15 vertical sigma layers) eddy resolving primitive equation shelf model (ROSARIO-I). The numerical simulation was run with climatological forcing and includes thermohaline dynamics with a turbulence scheme for the vertical mixing coefficients on the basis of the Princeton Ocean Model (POM). The model has been coupled by one-way nesting along three lateral boundaries (east, south and west) to an intermediate coarser resolution model (5 km) implemented over the Sicilian Channel area. The fields at the open boundaries and the atmospheric forcing at the air-sea interface were applied on a repeating "perpetual" year climatological cycle. The ability of the model to reproduce a realistic circulation of the Sicilian-Maltese shelf area has been demonstrated. The skill of the nesting procedure was tested by model-model comparisons showing that the major features of the coarse model flow field can be reproduced by the fine model with additional eddy space scale components. The numerical results included upwelling, mainly in summer and early autumn, along the southern coasts of Sicily and Malta; a strong eastward shelf surface flow along shore to Sicily, forming part of the Atlantic Ionian Stream, with a presence throughout the year and with significant seasonal modulation; and a westward winter intensified flow of LIW centered at a depth of around 280 m under the shelf break to the south of Malta. The seasonal variability in the thermohaline structure of the domain and the associated large-scale flow structures can be related to the current knowledge on the observed hydrography of the area. The level of mesoscale resolution achieved by the model allowed the spatial and temporal evolution of the changing flow patterns, triggered by internal dynamics, to be followed in

  3. Business Process Simulation: Requirements for Business and Resource Models

    Directory of Open Access Journals (Sweden)

    Audrius Rima

    2015-07-01

    Full Text Available The purpose of Business Process Model and Notation (BPMN) is to provide an easily understandable graphical representation of business processes. BPMN is thus widely used and applied in various areas, one of them being business process simulation. This paper addresses some problems of BPMN-model-based business process simulation and formulates requirements for business process and resource models that enable their use for business process simulation.

  4. A model analysis of climate and CO2 controls on tree growth in a semi-arid woodland

    Science.gov (United States)

    Li, G.; Harrison, S. P.; Prentice, I. C.

    2015-03-01

    We used a light-use efficiency model of photosynthesis coupled with a dynamic carbon allocation and tree-growth model to simulate annual growth of the gymnosperm Callitris columellaris in the semi-arid Great Western Woodlands, Western Australia, over the past 100 years. Parameter values were derived from independent observations except for sapwood specific respiration rate, fine-root turnover time, fine-root specific respiration rate and the ratio of fine-root mass to foliage area, which were estimated by Bayesian optimization. The model reproduced the general pattern of interannual variability in radial growth (tree-ring width), including the response to the shift in precipitation regimes that occurred in the 1960s. Simulated and observed responses to climate were consistent. Both showed a significant positive response of tree-ring width to total photosynthetically active radiation received and to the ratio of modeled actual to equilibrium evapotranspiration, and a significant negative response to vapour pressure deficit. However, the simulations showed an enhancement of radial growth in response to increasing atmospheric CO2 concentration ([CO2], in ppm) during recent decades that is not present in the observations. The discrepancy disappeared when the model was recalibrated on successive 30-year windows: the ratio of fine-root mass to foliage area then increased by 14% (from 0.127 to 0.144 kg C m-2) as [CO2] increased, while the other three estimated parameters remained constant. The absence of a signal of increasing [CO2] has been noted in many tree-ring records, despite the enhancement of photosynthetic rates and water-use efficiency resulting from increasing [CO2]. Our simulations suggest that this behaviour could be explained as a consequence of a shift towards below-ground carbon allocation.

  5. Modelling of thermalhydraulics and reactor physics in simulators

    International Nuclear Information System (INIS)

    Miettinen, J.

    1994-01-01

    The evolution of thermalhydraulic analysis methods for analysis and simulator purposes has brought the thermohydraulic models in the two application areas closer together. In large analysis codes like RELAP5, TRAC, CATHARE and ATHLET, accuracy in calculating complicated phenomena has been emphasized, but in spite of large development efforts many generic problems remain unsolved. For simulator purposes, fast-running codes have been developed, but these have undergone only limited assessment. These codes nevertheless have more simulator-friendly features than the large codes, such as portability and a modular code structure. In this respect the simulator experiences with the SMABRE code are discussed. Both large analysis codes and special simulator codes have their advantages in simulator applications. The evolution of reactor physics calculation methods in simulator applications started from simple point-kinetics models. For analysis purposes, accurate 1-D and 3-D codes have been developed that are capable of handling fast and complicated transients. For simulator purposes, the capability to simulate instruments has been emphasized, while dynamic simulation capability has been less significant. The approaches to 3-dimensionality in simulators still require considerable development before the accuracy of the analysis codes is reached. (orig.) (8 refs., 2 figs., 2 tabs.)

  6. Analyzing Strategic Business Rules through Simulation Modeling

    Science.gov (United States)

    Orta, Elena; Ruiz, Mercedes; Toro, Miguel

    Service Oriented Architecture (SOA) holds promise for business agility since it allows business processes to change to meet new customer demands or market needs without causing a cascade of changes in the underlying IT systems. Business rules are the instrument chosen to help business and IT collaborate. In this paper, we propose the use of simulation models to model and simulate strategic business rules that are then disaggregated at different levels of an SOA architecture. Our proposal is aimed at helping to find a good configuration of strategic business objectives and IT parameters. The paper includes a case study in which a simulation model is built to support business decision-making in a context where finding a good configuration of the different business parameters and performance is too complex to analyze by trial and error.

  7. Genome-Wide Fine-Scale Recombination Rate Variation in Drosophila melanogaster

    Science.gov (United States)

    Song, Yun S.

    2012-01-01

    Estimating fine-scale recombination maps of Drosophila from population genomic data is a challenging problem, in particular because of the high background recombination rate. In this paper, a new computational method is developed to address this challenge. Through an extensive simulation study, it is demonstrated that the method allows more accurate inference, and exhibits greater robustness to the effects of natural selection and noise, compared to a widely used previous method developed for studying fine-scale recombination rate variation in the human genome. As an application, a genome-wide analysis of genetic variation data is performed for two Drosophila melanogaster populations, one from North America (Raleigh, USA) and the other from Africa (Gikongoro, Rwanda). It is shown that fine-scale recombination rate variation is widespread throughout the D. melanogaster genome, across all chromosomes and in both populations. At the fine scale, a conservative, systematic search for evidence of recombination hotspots suggests the existence of a handful of putative hotspots, each with at least a tenfold increase in intensity over the background rate. A wavelet analysis is carried out to compare the estimated recombination maps in the two populations and to quantify the extent to which recombination rates are conserved. In general, similarity is observed at very broad scales, but substantial differences are seen at fine scales. The average recombination rate of the X chromosome appears to be higher than that of the autosomes in both populations, and this pattern is much more pronounced in the African population than the North American population. The correlation between various genomic features—including recombination rates, diversity, divergence, GC content, gene content, and sequence quality—is examined using the wavelet analysis, and it is shown that the most notable difference between D. melanogaster and humans is in the correlation between recombination and

  8. Selecting a Dynamic Simulation Modeling Method for Health Care Delivery Research—Part 2: Report of the ISPOR Dynamic Simulation Modeling Emerging Good Practices Task Force

    NARCIS (Netherlands)

    Marshall, Deborah A.; Burgos-Liz, Lina; IJzerman, Maarten Joost; Crown, William; Padula, William V.; Wong, Peter K.; Pasupathy, Kalyan S.; Higashi, Mitchell K.; Osgood, Nathaniel D.

    2015-01-01

    In a previous report, the ISPOR Task Force on Dynamic Simulation Modeling Applications in Health Care Delivery Research Emerging Good Practices introduced the fundamentals of dynamic simulation modeling and identified the types of health care delivery problems for which dynamic simulation modeling

  9. Beyond Modeling: All-Atom Olfactory Receptor Model Simulations

    Directory of Open Access Journals (Sweden)

    Peter C Lai

    2012-05-01

    Full Text Available Olfactory receptors (ORs) are a type of GTP-binding protein-coupled receptor (GPCR). These receptors are responsible for mediating the sense of smell through their interaction with odor ligands. OR-odorant interactions mark the first step in the process that leads to olfaction. Computational studies on model OR structures can validate experimental functional studies as well as generate focused and novel hypotheses for further bench investigation by providing a view of these interactions at the molecular level. Here we have shown the specific advantages of simulating the dynamic environment that is associated with OR-odorant interactions. We present a rigorous methodology that ranges from the creation of a computationally derived model of an olfactory receptor to simulating the interactions between an OR and an odorant molecule. Given the ubiquitous occurrence of GPCRs in the membranes of cells, we anticipate that our OR-developed methodology will serve as a model for the computational structural biology of all GPCRs.

  10. Wake modeling and simulation

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Madsen Aagaard, Helge; Larsen, Torben J.

    We present a consistent, physically based theory for the wake meandering phenomenon, which we consider of crucial importance for the overall description of wind turbine loadings in wind farms. In its present version the model is confined to single wake situations. The model philosophy does, howev...... methodology has been implemented in the aeroelastic code HAWC2, and example simulations of wake situations, from the small Tjæreborg wind farm, have been performed showing satisfactory agreement between predictions and measurements...

  11. Modeling of magnetic particle suspensions for simulations

    CERN Document Server

    Satoh, Akira

    2017-01-01

    The main objective of the book is to highlight the modeling of magnetic particles with different shapes and magnetic properties, to provide graduate students and young researchers information on the theoretical aspects and actual techniques for the treatment of magnetic particles in particle-based simulations. In simulation, we focus on the Monte Carlo, molecular dynamics, Brownian dynamics, lattice Boltzmann and stochastic rotation dynamics (multi-particle collision dynamics) methods. The latter two simulation methods can simulate both the particle motion and the ambient flow field simultaneously. In general, specialized knowledge can only be obtained in an effective manner under the supervision of an expert. The present book is written to play such a role for readers who wish to develop the skill of modeling magnetic particles and develop a computer simulation program using their own ability. This book is therefore a self-learning book for graduate students and young researchers. Armed with this knowledge,...

  12. Coupling population dynamics with earth system models: the POPEM model.

    Science.gov (United States)

    Navarro, Andrés; Moreno, Raúl; Jiménez-Alcázar, Alfonso; Tapiador, Francisco J

    2017-09-16

    Precise modeling of CO2 emissions is important for environmental research. This paper presents a new model of human population dynamics that can be embedded into ESMs (Earth System Models) to improve climate modeling. Through a system dynamics approach, we develop a cohort-component model that successfully simulates historical population dynamics with fine spatial resolution (about 1°×1°). The population projections are used to improve the estimates of CO2 emissions, thus transcending the bulk approach of existing models and allowing more realistic non-linear effects to feature in the simulations. The module, dubbed POPEM (from Population Parameterization for Earth Models), is compared with current emission inventories and validated against UN aggregated data. Finally, it is shown that the module can be used to advance toward fully coupling the social and natural components of the Earth system, an emerging research path for environmental science and pollution research.
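
    The cohort-component bookkeeping at the heart of such a population module can be sketched in a few lines. This is an illustrative toy, not the POPEM code; all survival and fertility rates below are hypothetical.

```python
# Minimal cohort-component projection: each cohort is one age group,
# one call to step() advances the population one period.
survival = [0.99, 0.995, 0.99, 0.97]        # P(cohort i survives into i+1)
fertility = [0.00, 0.10, 0.25, 0.10, 0.00]  # births per person per period
pop = [100.0, 95.0, 90.0, 80.0, 60.0]       # population per cohort (thousands)

def step(pop):
    births = sum(f * n for f, n in zip(fertility, pop))
    # Newborns enter cohort 0; everyone else ages one cohort
    # (the oldest cohort simply exits in this toy version).
    return [births] + [s * n for s, n in zip(survival, pop[:-1])]

for _ in range(3):
    pop = step(pop)
print([round(n, 1) for n in pop])
```

    A gridded model applies this same update independently in each spatial cell, which is what allows population-driven emissions to vary at the roughly 1°×1° resolution described above.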

  13. A View on Future Building System Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wetter, Michael

    2011-04-01

    This chapter presents what a future environment for building system modeling and simulation may look like. As buildings continue to require increased performance and better comfort, their energy and control systems are becoming more integrated and complex. We therefore focus in this chapter on the modeling, simulation and analysis of building energy and control systems. Such systems can be classified as heterogeneous systems because they involve multiple domains, such as thermodynamics, fluid dynamics, heat and mass transfer, electrical systems, control systems and communication systems. Also, they typically involve multiple temporal and spatial scales, and their evolution can be described by coupled differential equations, discrete equations and events. Modeling and simulating such systems requires a higher level of abstraction and modularisation to manage the increased complexity compared to what is used in today's building simulation programs. Therefore, the trend towards more integrated building systems is likely to be a driving force for changing the status quo of today's building simulation programs. This chapter discusses evolving modeling requirements and outlines a path toward a future environment for modeling and simulation of heterogeneous building systems. A range of topics that would require many additional pages of discussion has been omitted. Examples include computational fluid dynamics for air and particle flow in and around buildings, people movement, daylight simulation, uncertainty propagation and optimisation methods for building design and controls. For different discussions and perspectives on the future of building modeling and simulation, we refer to Sahlin (2000), Augenbroe (2001) and Malkawi and Augenbroe (2004).

  14. Integrating functional data to prioritize causal variants in statistical fine-mapping studies.

    Directory of Open Access Journals (Sweden)

    Gleb Kichaev

    2014-10-01

    Full Text Available Standard statistical approaches for prioritization of variants for functional testing in fine-mapping studies either use marginal association statistics or estimate posterior probabilities for variants to be causal under simplifying assumptions. Here, we present a probabilistic framework that integrates association strength with functional genomic annotation data to improve accuracy in selecting plausible causal variants for functional validation. A key feature of our approach is that it empirically estimates the contribution of each functional annotation to the trait of interest directly from summary association statistics while allowing for multiple causal variants at any risk locus. We devise efficient algorithms that estimate the parameters of our model across all risk loci to further increase performance. Using simulations starting from the 1000 Genomes data, we find that our framework consistently outperforms the current state-of-the-art fine-mapping methods, reducing the number of variants that need to be selected to capture 90% of the causal variants from an average of 13.3 to 10.4 SNPs per locus (as compared to the next-best performing strategy). Furthermore, we introduce a cost-to-benefit optimization framework for determining the number of variants to be followed up in functional assays and assess its performance using real and simulation data. We validate our findings using a large scale meta-analysis of four blood lipids traits and find that the relative probability for causality is increased for variants in exons and transcription start sites and decreased in repressed genomic regions at the risk loci of these traits. Using these highly predictive, trait-specific functional annotations, we estimate causality probabilities across all traits and variants, reducing the size of the 90% confidence set from an average of 17.5 to 13.5 variants per locus in this data.
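
    The core computation can be sketched for a single locus. This toy is far simpler than the framework above: it assumes exactly one causal variant per locus (the paper allows several), uses a Wakefield-style approximate Bayes factor, and every z-score, annotation label and prior coefficient is made up for illustration.

```python
import math

# Hypothetical summary statistics and a single binary annotation.
z = {"rs1": 5.2, "rs2": 4.8, "rs3": 1.1, "rs4": 0.4}
in_exon = {"rs1": 1, "rs2": 0, "rs3": 1, "rs4": 0}
gamma0 = math.log(0.01 / 0.99)   # baseline prior log-odds of causality
gamma1 = 1.5                     # log-odds boost for exonic variants

def log_abf(z_score, w=1.0):
    """Wakefield-style approximate log Bayes factor (prior effect variance w)."""
    r = w / (w + 1.0)
    return 0.5 * (math.log(1.0 - r) + r * z_score * z_score)

# Posterior over the single causal variant: likelihood (ABF) times
# annotation-informed prior, normalized over the locus.
score = {s: log_abf(zv) + gamma0 + gamma1 * in_exon[s] for s, zv in z.items()}
log_norm = math.log(sum(math.exp(v) for v in score.values()))
post = {s: math.exp(v - log_norm) for s, v in score.items()}

# Smallest set of variants whose posteriors sum to at least 0.9.
cred, total = [], 0.0
for s in sorted(post, key=post.get, reverse=True):
    cred.append(s)
    total += post[s]
    if total >= 0.9:
        break
print(cred, {s: round(p, 3) for s, p in post.items()})
```

    With these invented numbers the exon annotation concentrates the posterior on one variant, shrinking the 90% credible set, which is the qualitative effect the abstract reports.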

  15. Coupling Solute and Fine Particle Transport with Sand Bed Morphodynamics within a Field Experiment

    Science.gov (United States)

    Phillips, C. B.; Ortiz, C. P.; Schumer, R.; Jerolmack, D. J.; Packman, A. I.

    2017-12-01

    Fine suspended particles are typically considered to pass through streams and rivers as wash load without interacting with the bed; however, experiments have demonstrated that hyporheic flow causes advective exchange of fine particles with the stream bed, yielding accumulation of fine particle deposits within the bed. Ultimately, understanding river morphodynamics and ecosystem dynamics requires coupling both fine particle and solute transport with bed morphodynamics. To better understand the coupling between these processes, we analyze a novel dataset from a controlled field experiment conducted on Clear Run, a 2nd order sand bed stream located within the North Carolina coastal plain. Data include concentrations of continuously injected conservative solutes and fine particulate tracers measured at various depths within the stream bed, overhead time lapse images of bed forms, stream discharge, and geomorphological surveys of the stream. We use image analysis of bed morphodynamics to assess exchange, retention, and remobilization of solutes and fine particles during constant discharge and a short duration experimental flood. From the images, we extract a time series of bedform elevations and scour depths for the duration of the experiment. The high-resolution timeseries of bed elevation enables us to assess coupling of bed morphodynamics with both the solute and fine particle flux during steady state mobile bedforms prior to the flood and to changing bedforms during the flood. These data allow the application of a stochastic modeling framework relating bed elevation fluctuations to fine particle residence times. This combined experimental and modeling approach ultimately informs our ability to predict not only the fate of fine particulate matter but also associated nutrient and carbon dynamics within streams and rivers.
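
    One crude version of such a stochastic framework treats a particle deposited at the bed surface as remobilized the first time scour drops the bed below its burial elevation. Modeling bed elevation as an unbounded random walk is a deliberately strong simplification (not the experiment's calibrated model), but it shows why residence times from bed fluctuations come out heavy-tailed.

```python
import random

random.seed(0)

def residence_time(max_steps=10000):
    """First-passage time until the bed drops below the burial elevation."""
    elevation = 0.0   # bed surface at the moment of deposition
    for t in range(1, max_steps + 1):
        elevation += random.gauss(0.0, 1.0)   # per-step bed fluctuation
        if elevation < 0.0:
            return t                          # particle re-exposed
    return max_steps                          # still buried (censored)

times = sorted(residence_time() for _ in range(2000))
median = times[len(times) // 2]
p95 = times[int(0.95 * len(times))]
print(f"median residence: {median} steps; 95th percentile: {p95} steps")
```

    The large gap between the median and the upper percentiles is the heavy tail: most particles are re-entrained almost immediately, while a few deeply buried ones persist for very long times.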

  16. Atmospheric fate and transport of fine volcanic ash: Does particle shape matter?

    Science.gov (United States)

    White, C. M.; Allard, M. P.; Klewicki, J.; Proussevitch, A. A.; Mulukutla, G.; Genareau, K.; Sahagian, D. L.

    2013-12-01

    Volcanic ash presents hazards to infrastructure, agriculture, and human and animal health. In particular, given the economic importance of intercontinental aviation, understanding how long ash is suspended in the atmosphere, and how far it is transported, has taken on greater importance. Airborne ash abrades the exteriors of aircraft, enters modern jet engines and melts while coating interior engine parts, causing damage and potential failure. The time fine ash stays in the atmosphere depends on its terminal velocity. Existing models of ash terminal velocities are based on smooth, quasi-spherical particles characterized by Stokes velocity. Ash particles, however, violate the various assumptions upon which Stokes flow and associated models are based. Ash particles are non-spherical and can have complex surface and internal structure. This suggests that particle shape may be one reason that models fail to accurately predict removal rates of fine particles from volcanic ash clouds. The present research seeks to better parameterize predictive models for ash particle terminal velocities, diffusivity, and dispersion in the atmospheric boundary layer. The fundamental hypothesis being tested is that particle shape irreducibly impacts the fate and transport properties of fine volcanic ash. Pilot studies, incorporating modeling and experiments, are being conducted to test this hypothesis. Specifically, a statistical model has been developed that can account for actual volcanic ash size distributions, complex ash particle geometry, and geometry variability. Experimental results are used to systematically validate and improve the model. The experiments are being conducted at the Flow Physics Facility (FPF) at UNH. Terminal velocities and dispersion properties of fine ash are characterized using still air drop experiments in an unconstrained open space using a homogenized mix of source particles. Dispersion and sedimentation dynamics are quantified using particle image
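
    The baseline the study questions is the Stokes settling speed; a common way to fold in non-sphericity is a dynamic shape factor. In this sketch the ash density and shape-factor values are illustrative, and Stokes' law itself assumes a small particle Reynolds number, which is part of what the abstract argues real ash particles can violate.

```python
# Stokes terminal velocity with a simple dynamic shape factor chi.
g = 9.81          # m/s^2
mu_air = 1.8e-5   # Pa s, dynamic viscosity of air
rho_air = 1.2     # kg/m^3
rho_ash = 2500.0  # kg/m^3, a typical dense-rock ash value (illustrative)

def terminal_velocity(d, chi=1.0):
    """Settling speed [m/s] for volume-equivalent diameter d [m];
    chi > 1 slows an irregular particle relative to a sphere."""
    return g * d * d * (rho_ash - rho_air) / (18.0 * mu_air * chi)

for d_um in (10, 50):
    v_sphere = terminal_velocity(d_um * 1e-6)
    v_irregular = terminal_velocity(d_um * 1e-6, chi=1.5)
    print(f"d = {d_um} um: sphere {100 * v_sphere:.2f} cm/s, "
          f"irregular {100 * v_irregular:.2f} cm/s")
```

    Even a modest shape factor changes the settling speed, and hence the atmospheric residence time, by tens of percent, which is why shape matters for fine-ash removal rates.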

  17. Experimental Design for Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2001-01-01

    This introductory tutorial gives a survey on the use of statistical designs for what-if or sensitivity analysis in simulation. This analysis uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as

  18. APROS 3-D core models for simulators and plant analyzers

    International Nuclear Information System (INIS)

    Puska, E.K.

    1999-01-01

    The 3-D core models of the APROS simulation environment can be used in simulator and plant analyzer applications, as well as in safety analysis. The key feature of the APROS models is that the same physical models can be used in all applications. For three-dimensional reactor cores, the APROS models cover both the quadratic BWR and PWR cores and the hexagonal-lattice VVER-type cores. In the APROS environment the user can select the number of flow channels in the core and either a five- or six-equation thermal hydraulic model for these channels. The thermal hydraulic model and the channel description have a decisive effect on the calculation time of the 3-D core model, and at present these selections make the major difference between a safety analysis model and a training simulator model. The paper presents examples of various types of 3-D LWR-type core descriptions for simulator and plant analyzer use and discusses the differences in calculation speed and physical results between a typical safety analysis model description and a real-time simulator model description in transients. (author)

  19. Vehicle dynamics modeling and simulation

    CERN Document Server

    Schramm, Dieter; Bardini, Roberto

    2014-01-01

    The authors examine in detail the fundamentals and mathematical descriptions of the dynamics of automobiles. In this context different levels of complexity will be presented, starting with basic single-track models up to complex three-dimensional multi-body models. A particular focus is on the process of establishing mathematical models on the basis of real cars and the validation of simulation results. The methods presented are explained in detail by means of selected application scenarios.

  20. Modeling and simulation of the bioprocess with recirculation

    Directory of Open Access Journals (Sweden)

    Žerajić Stanko

    2007-01-01

    Full Text Available The bioprocess models with recirculation present an integration of the model of the continuous bioreaction system and the model of the separation system. The reaction bioprocess is integrated with separation of the biomass, the formed product, unconsumed substrate or inhibitory substances. In this paper a simulation model of the recirculation bioprocess was developed, which may be applied to increase biomass productivity and product biosynthesis by increasing the substrate-to-product conversion, the mixing efficiency and the secondary CO2 separation. The goal of the work is an optimal bioprocess configuration, which is determined by simulation optimization. The optimal chemostat state was used as a reference. A step-by-step simulation method is necessary because the initial bioprocess state changes with recirculation in each step. The simulation experiment confirms that at a recirculation ratio α = 0.275 and a concentration factor C = 4 the maximum conversion of glucose to ethanol is obtained at a dilution rate ten times larger.
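
    The benefit of recycle follows from a standard steady-state biomass balance: with recycle ratio α and concentration factor C, growth must satisfy μ = D(1 + α − αC), so washout occurs at a dilution rate 1/(1 + α − αC) times higher than without recycle. A sketch with Monod kinetics; the kinetic constants below are hypothetical, while α and C are the simulated optimum reported above.

```python
# Steady-state chemostat with cell recycle (illustrative sketch).
mu_max, Ks = 0.45, 0.5          # Monod constants: 1/h, g/L (hypothetical)
alpha, C = 0.275, 4.0           # recycle ratio and concentration factor

beta = 1.0 + alpha - alpha * C  # biomass balance gives mu = D * beta
D_washout_plain = mu_max        # washout dilution rate without recycle
D_washout_recycle = mu_max / beta

def residual_substrate(D):
    """Residual substrate [g/L] at dilution rate D, from mu = mu_max*S/(Ks+S)."""
    mu = D * beta
    return Ks * mu / (mu_max - mu)

print(f"washout D: {D_washout_plain:.2f} -> {D_washout_recycle:.2f} 1/h")
print(f"residual S at D = 1.0 1/h: {residual_substrate(1.0):.3f} g/L")
```

    With these made-up kinetics the sustainable dilution rate rises by a factor of 1/β ≈ 5.7; the exact factor reported for the simulated bioprocess depends on its full model, not just this balance.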

  1. Hierarchical material models for fragmentation modeling in NIF-ALE-AMR

    International Nuclear Information System (INIS)

    Fisher, A C; Masters, N D; Koniges, A E; Anderson, R W; Gunney, B T N; Wang, P; Becker, R; Dixit, P; Benson, D J

    2008-01-01

    Fragmentation is a fundamental process that naturally spans micro to macroscopic scales. Recent advances in algorithms, computer simulations, and hardware enable us to connect the continuum to microstructural regimes in a real simulation through a heterogeneous multiscale mathematical model. We apply this model to the problem of predicting how targets in the NIF chamber dismantle, so that optics and diagnostics can be protected from damage. The mechanics of the initial material fracture depend on the microscopic grain structure. In order to effectively simulate the fragmentation, this process must be modeled at the subgrain level with computationally expensive crystal plasticity models. However, there are not enough computational resources to model the entire NIF target at this microscopic scale. In order to accomplish these calculations, a hierarchical material model (HMM) is being developed. The HMM will allow fine-scale modeling of the initial fragmentation using computationally expensive crystal plasticity, while the elements at the mesoscale can use polycrystal models, and the macroscopic elements use analytical flow stress models. The HMM framework is built upon an adaptive mesh refinement (AMR) capability. We present progress in implementing the HMM in the NIF-ALE-AMR code. Additionally, we present test simulations relevant to NIF targets.

  2. Hierarchical material models for fragmentation modeling in NIF-ALE-AMR

    Energy Technology Data Exchange (ETDEWEB)

    Fisher, A C; Masters, N D; Koniges, A E; Anderson, R W; Gunney, B T N; Wang, P; Becker, R [Lawrence Livermore National Laboratory, PO Box 808, Livermore, CA 94551 (United States); Dixit, P; Benson, D J [University of California San Diego, 9500 Gilman Dr., La Jolla. CA 92093 (United States)], E-mail: fisher47@llnl.gov

    2008-05-15

    Fragmentation is a fundamental process that naturally spans micro to macroscopic scales. Recent advances in algorithms, computer simulations, and hardware enable us to connect the continuum to microstructural regimes in a real simulation through a heterogeneous multiscale mathematical model. We apply this model to the problem of predicting how targets in the NIF chamber dismantle, so that optics and diagnostics can be protected from damage. The mechanics of the initial material fracture depend on the microscopic grain structure. In order to effectively simulate the fragmentation, this process must be modeled at the subgrain level with computationally expensive crystal plasticity models. However, there are not enough computational resources to model the entire NIF target at this microscopic scale. In order to accomplish these calculations, a hierarchical material model (HMM) is being developed. The HMM will allow fine-scale modeling of the initial fragmentation using computationally expensive crystal plasticity, while the elements at the mesoscale can use polycrystal models, and the macroscopic elements use analytical flow stress models. The HMM framework is built upon an adaptive mesh refinement (AMR) capability. We present progress in implementing the HMM in the NIF-ALE-AMR code. Additionally, we present test simulations relevant to NIF targets.

  3. A Simulation and Modeling Framework for Space Situational Awareness

    International Nuclear Information System (INIS)

    Olivier, S.S.

    2008-01-01

    This paper describes the development and initial demonstration of a new, integrated modeling and simulation framework, encompassing the space situational awareness enterprise, for quantitatively assessing the benefit of specific sensor systems, technologies and data analysis techniques. The framework is based on a flexible, scalable architecture to enable efficient, physics-based simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel computer systems available, for example, at Lawrence Livermore National Laboratory. The details of the modeling and simulation framework are described, including hydrodynamic models of satellite intercept and debris generation, orbital propagation algorithms, radar cross section calculations, optical brightness calculations, generic radar system models, generic optical system models, specific Space Surveillance Network models, object detection algorithms, orbit determination algorithms, and visualization tools. The use of this integrated simulation and modeling framework on a specific scenario involving space debris is demonstrated

  4. Beyond Fine Tuning: Adding capacity to leverage few labels

    Energy Technology Data Exchange (ETDEWEB)

    Hodas, Nathan O.; Shaffer, Kyle J.; Yankov, Artem; Corley, Courtney D.; Anderson, Aryk L.

    2017-12-09

    In this paper we present a technique to train neural network models on small amounts of data. Current methods for training neural networks on small amounts of rich data typically rely on strategies such as fine-tuning a pre-trained neural network or the use of domain-specific hand-engineered features. Here we take the approach of treating network layers, or entire networks, as modules, and combine pre-trained modules with untrained modules to learn the shift in distributions between data sets. The central impact of using a modular approach comes from adding new representations to a network, as opposed to replacing representations via fine-tuning. Using this technique, we are able to surpass results using standard fine-tuning transfer learning approaches, and we are also able to significantly increase performance over such approaches when using smaller amounts of data.
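
    The contrast between fine-tuning and adding capacity can be sketched in a few lines of NumPy: a "pre-trained" module is kept frozen and only a newly added module is trained on the small data set. The data, layer sizes, and training loop below are illustrative stand-ins, not the paper's architecture.

```python
import numpy as np

# Keep a pre-trained module frozen and train only an added module,
# instead of fine-tuning the original weights (all sizes illustrative).
rng = np.random.default_rng(0)

W_frozen = rng.normal(size=(4, 8))   # "pre-trained" module, never updated
X = rng.normal(size=(64, 4))         # small labelled data set (toy)
y = X @ rng.normal(size=4)           # toy regression target

def frozen_features(x):
    # Representations produced by the frozen module.
    return np.tanh(x @ W_frozen)

W_new = np.zeros(8)                  # added capacity: a new linear module
lr = 0.1
for _ in range(500):                 # gradient descent on the new module only
    H = frozen_features(X)
    err = H @ W_new - y
    W_new -= lr * H.T @ err / len(X)

mse = float(np.mean((frozen_features(X) @ W_new - y) ** 2))
```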

  5. Fine Tuning Mission to reach those influenced by Darwinism

    Directory of Open Access Journals (Sweden)

    Roger Tucker

    2014-01-01

    Full Text Available The scientifically aware section of the South African population is increasing. Many are being exposed to the concept of Darwinian evolution. Exposure has generated a religious sub "people group" who have problems with Christianity because they have been influenced by the naturalistic element in Darwinian philosophy. Christian antagonism towards evolution has often prejudiced them unfavourably towards the gospel. Recent discoveries concerning the fine-tuning of the universe have now presented a window of opportunity for overcoming this. It may enable the church to "fine-tune" its missionary approach to present them with the gospel in a more acceptable manner. It is suggested that Paul's Areopagus speech provides a model for such cross-cultural evangelism. A section is included at the end, describing some objections that have been raised against the cosmological fine-tuning apologetic.

  6. Evaluation of articulation simulation system using artificial maxillectomy models.

    Science.gov (United States)

    Elbashti, M E; Hattori, M; Sumita, Y I; Taniguchi, H

    2015-09-01

    Acoustic evaluation is valuable for guiding the treatment of maxillofacial defects and determining the effectiveness of rehabilitation with an obturator prosthesis. Model simulations are important in terms of pre-surgical planning and pre- and post-operative speech function. This study aimed to evaluate the acoustic characteristics of voice generated by an articulation simulation system using a vocal tract model with or without artificial maxillectomy defects. More specifically, we aimed to establish a speech simulation system for maxillectomy defect models that both surgeons and maxillofacial prosthodontists can use in guiding treatment planning. Artificially simulated maxillectomy defects were prepared according to Aramany's classification (Classes I-VI) in a three-dimensional vocal tract plaster model of a subject uttering the vowel /a/. Formant and nasalance acoustic data were analysed using Computerized Speech Lab and the Nasometer, respectively. Formants and nasalance of simulated /a/ sounds were successfully detected and analysed. Values of Formants 1 and 2 for the non-defect model were 675.43 and 976.64 Hz, respectively. Median values of Formants 1 and 2 for the defect models were 634.36 and 1026.84 Hz, respectively. Nasalance was 11% in the non-defect model, whereas median nasalance was 28% in the defect models. The results suggest that an articulation simulation system can be used to help surgeons and maxillofacial prosthodontists to plan post-surgical defects that will facilitate maxillofacial rehabilitation. © 2015 John Wiley & Sons Ltd.

  7. Mathematical model and simulations of radiation fluxes from buried radionuclides

    International Nuclear Information System (INIS)

    Ahmad Saat

    1999-01-01

    A mathematical model and simple Monte Carlo simulations were developed to predict radiation fluxes from buried radionuclides. The model and simulations were applied to measured (experimental) data. The results of the mathematical model showed acceptable order-of-magnitude agreement. A good agreement was also obtained between the simple simulations and the experimental results. Thus, knowing the radionuclide distribution profile in soil from a core sample, one can apply the model or simulations to estimate the radiation fluxes emerging from the soil surface. (author)
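
    The flux estimate described above can be illustrated with a minimal Monte Carlo sketch: sample isotropic emission directions for a source buried at a given depth and attenuate each photon exponentially along its slant path to the surface. The attenuation coefficient and depths are illustrative, not the paper's values.

```python
import math
import random

def surface_fraction(depth_cm, mu_cm, n=50_000, seed=1):
    """Fraction of emitted photons surviving to the surface (toy model)."""
    rng = random.Random(seed)
    escaped = 0.0
    for _ in range(n):
        cos_theta = rng.uniform(-1.0, 1.0)   # isotropic emission direction
        if cos_theta <= 0.0:                 # emitted away from the surface
            continue
        path = depth_cm / cos_theta          # slant path through the soil
        escaped += math.exp(-mu_cm * path)   # survival probability
    return escaped / n

shallow = surface_fraction(depth_cm=1.0, mu_cm=0.15)
deep = surface_fraction(depth_cm=10.0, mu_cm=0.15)
# Deeper sources contribute less to the surface flux.
```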

  8. Facebook's personal page modelling and simulation

    Science.gov (United States)

    Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.

    2015-02-01

    In this paper we will try to define the utility of Facebook's Personal Page marketing method. This tool that Facebook provides is modelled and simulated using iThink in the context of a Facebook marketing agency. The paper leverages the system dynamics paradigm to model Facebook marketing tools and methods, using the iThink™ system to implement them. It uses the design science research methodology for the proof of concept of the models and modelling processes. The model has been developed for a social media marketing agent/company, oriented to the Facebook platform and tested in real circumstances. The model was finalized through a number of revisions and iterations of the design, development, simulation, testing and evaluation processes. The validity and usefulness of this Facebook marketing model for day-to-day decision making are authenticated by the management of the company organization. Facebook's Personal Page method can be adjusted, depending on the situation, in order to maximize the total profit of the company, which is to bring in new customers, keep the interest of existing customers and deliver traffic to its website.

  9. Mathematical simulation of a waste rock heap

    International Nuclear Information System (INIS)

    Scharer, J.M.; Pettit, C.M.; Chambers, D.B.; Kwong, E.C.

    1994-01-01

    A computer model has been developed to simulate the generation of acidic drainage in waste rock piles. The model considers the kinetic rates of biological and chemical oxidation of sulfide minerals (pyrite, pyrrhotite) present as fines and rock particles, as well as chemical processes such as dissolution (kinetic or equilibrium controlled), complexation (from equilibrium and stoichiometry of several complexes), and precipitation (formation of complexes and secondary minerals). Through mass balance equations and solubility constraints (e.g., pH, phase equilibria) the model keeps track of the movement of chemical species through the waste pile and provides estimates of the quality of seepage (pH, sulfate, iron, acidity, etc.) leaving the heap. The model has been expanded to include the dissolution (thermodynamic and sorption equilibrium), adsorption and coprecipitation of uranium and radium. The model was applied to simulate waste rock heaps in British Columbia, Canada and in Thueringia, Germany. To improve the accuracy and confidence of long-term predictions of seepage quality, the entire history of the heaps was simulated. Cumulative acidity loads and water treatment considerations were used as a basis for evaluation of various decommissioning alternatives. Simulation of the technical leaching history of a heap in Germany showed it will generate contaminated leachate requiring treatment for acidity and radioactivity for several hundred years; cover installation was shown to provide a significant reduction of potential burdens, although chemical treatment would still be required beyond 100 years
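
    A toy version of the kinetic core, a first-order sulfide oxidation rate driving cumulative acidity, can be integrated with forward Euler in a few lines. The rate constant, stoichiometry factor, and time span below are invented for illustration; the real model couples biological/chemical kinetics with equilibria and transport.

```python
def simulate_heap(pyrite0=1.0, k=0.02, years=200.0, dt=0.5):
    """Return (time, pyrite, cumulative acidity) series for a toy heap."""
    pyrite, acidity, t = pyrite0, 0.0, 0.0
    series = []
    while t <= years:
        series.append((t, pyrite, acidity))
        rate = k * pyrite           # first-order oxidation rate
        pyrite -= rate * dt
        acidity += 2.0 * rate * dt  # ~2 mol acid per mol sulfide (stand-in)
        t += dt
    return series

t_end, pyrite_end, acidity_end = simulate_heap()[-1]
# After two centuries most of the sulfide is consumed.
```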

  10. Fine-granularity inference and estimations to network traffic for SDN.

    Directory of Open Access Journals (Sweden)

    Dingde Jiang

    Full Text Available An end-to-end network traffic matrix is significantly helpful for network management and for Software Defined Networks (SDN). However, inferring and estimating the end-to-end network traffic matrix is a challenging problem. Moreover, attaining the traffic matrix in high-speed networks for SDN is a prohibitive challenge. This paper investigates how to estimate and recover the end-to-end network traffic matrix in fine time granularity from sampled traffic traces, which is a hard inverse problem. Different from previous methods, fractal interpolation is used to reconstruct the finer-granularity network traffic. Then, the cubic spline interpolation method is used to obtain smooth reconstruction values. To attain an accurate end-to-end network traffic matrix in fine time granularity, we perform a weighted-geometric-average process on the two interpolation results obtained. The simulation results show that our approaches are feasible and effective.
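
    The final combining step can be shown in isolation: given two fine-granularity reconstructions of the same series (the paper obtains them by fractal and cubic-spline interpolation), fuse them element-wise with a weighted geometric average. The input series and the weight below are illustrative.

```python
def weighted_geometric_average(a, b, w=0.5):
    """Element-wise a^w * b^(1-w); traffic volumes assumed positive."""
    return [x ** w * y ** (1.0 - w) for x, y in zip(a, b)]

fractal_like = [10.0, 12.5, 9.8, 11.2]  # stand-in: fractal reconstruction
spline_like = [10.4, 12.1, 10.0, 11.0]  # stand-in: spline reconstruction
fused = weighted_geometric_average(fractal_like, spline_like, w=0.6)
# Each fused value lies between the two reconstructions.
```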

  11. Fine-granularity inference and estimations to network traffic for SDN.

    Science.gov (United States)

    Jiang, Dingde; Huo, Liuwei; Li, Ya

    2018-01-01

    An end-to-end network traffic matrix is significantly helpful for network management and for Software Defined Networks (SDN). However, inferring and estimating the end-to-end network traffic matrix is a challenging problem. Moreover, attaining the traffic matrix in high-speed networks for SDN is a prohibitive challenge. This paper investigates how to estimate and recover the end-to-end network traffic matrix in fine time granularity from sampled traffic traces, which is a hard inverse problem. Different from previous methods, fractal interpolation is used to reconstruct the finer-granularity network traffic. Then, the cubic spline interpolation method is used to obtain smooth reconstruction values. To attain an accurate end-to-end network traffic matrix in fine time granularity, we perform a weighted-geometric-average process on the two interpolation results obtained. The simulation results show that our approaches are feasible and effective.

  12. Optical modeling and simulation of thin-film photovoltaic devices

    CERN Document Server

    Krc, Janez

    2013-01-01

    In wafer-based and thin-film photovoltaic (PV) devices, the management of light is a crucial aspect of optimization since trapping sunlight in active parts of PV devices is essential for efficient energy conversions. Optical modeling and simulation enable efficient analysis and optimization of the optical situation in optoelectronic and PV devices. Optical Modeling and Simulation of Thin-Film Photovoltaic Devices provides readers with a thorough guide to performing optical modeling and simulations of thin-film solar cells and PV modules. It offers insight on examples of existing optical models

  13. Simulation modelling in agriculture: General considerations. | R.I. ...

    African Journals Online (AJOL)

    A computer simulation model is a detailed working hypothesis about a given system. The computer does all the necessary arithmetic when the hypothesis is invoked to predict the future behaviour of the simulated system under given conditions. A general pragmatic approach to model building is discussed; techniques are ...

  14. Architecture oriented modeling and simulation method for combat mission profile

    Directory of Open Access Journals (Sweden)

    CHEN Xia

    2017-05-01

    Full Text Available In order to effectively analyze the system behavior and system performance of a combat mission profile, an architecture-oriented modeling and simulation method is proposed. Starting from architecture modeling, this paper describes the mission profile based on the definitions from the National Military Standard of China and the US Department of Defense Architecture Framework (DoDAF) model, and constructs the architecture model of the mission profile. Then the transformation relationship between the architecture model and the agent simulation model is proposed to form the mission profile executable model. At last, taking the air-defense mission profile as an example, the agent simulation model is established based on the architecture model, and the input and output relations of the simulation model are analyzed. This provides method guidance for combat mission profile design.

  15. Fine-scale topography in sensory systems: insights from Drosophila and vertebrates.

    Science.gov (United States)

    Kaneko, Takuya; Ye, Bing

    2015-09-01

    To encode the positions of sensory stimuli, sensory circuits form topographic maps in the central nervous system through specific point-to-point connections between pre- and postsynaptic neurons. In vertebrate visual systems, the establishment of topographic maps involves the formation of a coarse topography followed by that of fine-scale topography that distinguishes the axon terminals of neighboring neurons. It is known that intrinsic differences in the form of broad gradients of guidance molecules instruct coarse topography while neuronal activity is required for fine-scale topography. On the other hand, studies in the Drosophila visual system have shown that intrinsic differences in cell adhesion among the axon terminals of neighboring neurons instruct the fine-scale topography. Recent studies on activity-dependent topography in the Drosophila somatosensory system have revealed a role of neuronal activity in creating molecular differences among sensory neurons for establishing fine-scale topography, implicating a conserved principle. Here we review the findings in both Drosophila and vertebrates and propose an integrated model for fine-scale topography.

  16. Analytical system dynamics modeling and simulation

    CERN Document Server

    Fabien, Brian C

    2008-01-01

    This book, offering a modeling technique based on Lagrange's energy method, includes 125 worked examples. Using this technique enables one to model and simulate systems as diverse as a six-link, closed-loop mechanism or a transistor power amplifier.

  17. Modelling and simulating fire tube boiler performance

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    A model for a flue gas boiler covering the flue gas and the water-/steam side has been formulated. The model has been formulated as a number of sub models that are merged into an overall model for the complete boiler. Sub models have been defined for the furnace, the convection zone (split in two: a zone submerged in water and a zone covered by steam), a model for the material in the boiler (the steel) and two models for the water/steam zone (the boiling) and the steam, respectively. The dynamic model has been developed as a number of Differential-Algebraic-Equation (DAE) systems. Subsequently MatLab/Simulink has been applied for carrying out the simulations. To be able to verify the simulated results, experiments have been carried out on a full-scale boiler plant.

  18. Chemical composition of Martian fines

    Science.gov (United States)

    Clark, B. C.; Baird, A. K.; Weldon, R. J.; Tsusaki, D. M.; Schnabel, L.; Candelaria, M. P.

    1982-01-01

    Of the 21 samples acquired for the Viking X-ray fluorescence spectrometer, 17 were analyzed to high precision. Compared to typical terrestrial continental soils and lunar mare fines, the Martian fines are lower in Al, higher in Fe, and much higher in S and Cl concentrations. Protected fines at the two lander sites are almost indistinguishable, but concentration of the element S is somewhat higher at Utopia. Duricrust fragments, successfully acquired only at the Chryse site, invariably contained about 50% higher S than fines. No elements correlate positively with S, except Cl and possibly Mg. A sympathetic variation is found among the triad Si, Al, Ca; positive correlation occurs between Ti and Fe. Sample variabilities are as great within a few meters as between lander locations (4500 km apart), implying the existence of a universal Martian regolith component of constant average composition. The nature of the source materials for the regolith fines must be mafic to ultramafic.

  19. Viscosity of bound water and model of proton relaxation in fine-dispersed substances at the presence of adsorbed paramagnetic ions

    International Nuclear Information System (INIS)

    Fedodeev, V.I.

    1975-01-01

    A microviscosity model of proton relaxation in pure liquids and in solutions of paramagnetic ions is examined. It is shown that the influence of adsorbed paramagnetic centers on proton relaxation in finely dispersed substances is significantly weaker than in solutions. A 'two-phase' relaxation model is used in determining the parameters of the bound liquid (water) using nuclear magnetic resonance data. The relations obtained with the model are used to compute the viscosity of water in clay. The value is of the same order of magnitude as that obtained by other methods

  20. Viscosity of bound water and model of proton relaxation in fine-dispersed substances at the presence of adsorbed paramagnetic ions

    Energy Technology Data Exchange (ETDEWEB)

    Fedodeev, V I

    1975-09-01

    A microviscosity model of proton relaxation in pure liquids and in solutions of paramagnetic ions is examined. It is shown that the influence of adsorbed paramagnetic centers on proton relaxation in finely dispersed substances is significantly weaker than in solutions. A 'two-phase' relaxation model is used in determining the parameters of the bound liquid (water) using nuclear magnetic resonance data. The relations obtained with the model are used to compute the viscosity of water in clay. The value is of the same order of magnitude as that obtained by other methods.

  1. Sources of mutagenic activity in urban fine particles

    International Nuclear Information System (INIS)

    Stevens, R.K.; Lewis, C.W.; Dzubay, T.G.; Cupitt, L.T.; Lewtas, J.

    1990-01-01

    Samples were collected during the winter of 1984-1985 in the cities of Albuquerque, NM, and Raleigh, NC, as part of a US Environmental Protection Agency study to evaluate methods to determine the emission sources contributing to the mutagenic properties of extractable organic matter (EOM) present in fine particles. Data derived from the analysis of the composition of these fine particles served as input to a multi-linear regression (MLR) model used to calculate the relative contributions of wood burning and motor vehicle sources to the mutagenic activity observed in the extractable organic matter. At both sites the mutagenic potency of EOM was found to be greater (3-5 times) for mobile sources when compared to wood smoke extractable organics. Carbon-14 measurements, which give a direct determination of the amount of EOM that originated from wood burning, were in close agreement with the source apportionment results derived from the MLR model.

  2. Fine tuning and MOND in a metamaterial "multiverse".

    Science.gov (United States)

    Smolyaninov, Igor I; Smolyaninova, Vera N

    2017-08-14

    We consider the recently suggested model of a multiverse based on a ferrofluid. When the ferrofluid is subjected to a modest external magnetic field, the nanoparticles inside the ferrofluid form small hyperbolic metamaterial domains, which from the electromagnetic standpoint behave as individual "Minkowski universes" exhibiting different "laws of physics", such as different strength of effective gravity, different versions of modified Newtonian dynamics (MOND) and different radiation lifetimes. When the ferrofluid "multiverse" is populated with atomic or molecular species, and these species are excited using an external laser source, the radiation lifetimes of atoms and molecules in these "universes" depend strongly on the individual physical properties of each "universe" via the Purcell effect. Some "universes" are better fine-tuned than others to sustain the excited states of these species. Thus, the ferrofluid-based metamaterial "multiverse" may be used to study models of MOND and to illustrate the fine-tuning mechanism in cosmology.

  3. Can a virtual reality assessment of fine motor skill predict successful central line insertion?

    Science.gov (United States)

    Mohamadipanah, Hossein; Parthiban, Chembian; Nathwani, Jay; Rutherford, Drew; DiMarco, Shannon; Pugh, Carla

    2016-10-01

    Due to the increased use of peripherally inserted central catheter lines, central lines are not performed as frequently. The aim of this study is to evaluate whether a virtual reality (VR)-based assessment of fine motor skills can be used as a valid and objective assessment of central line skills. Surgical residents (N = 43) from 7 general surgery programs performed a subclavian central line in a simulated setting. Then, they participated in a force discrimination task in a VR environment. Hand movements from the subclavian central line simulation were tracked by electromagnetic sensors. Gross movements as monitored by the electromagnetic sensors were compared with the fine motor metrics calculated from the force discrimination tasks in the VR environment. Long periods of inactivity (idle time) during needle insertion and lack of smooth movements, as detected by the electromagnetic sensors, showed a significant correlation with poor force discrimination in the VR environment. Also, long periods of needle insertion time correlated to the poor performance in force discrimination in the VR environment. This study shows that force discrimination in a defined VR environment correlates to needle insertion time, idle time, and hand smoothness when performing subclavian central line placement. Fine motor force discrimination may serve as a valid and objective assessment of the skills required for successful needle insertion when placing central lines. Copyright © 2016 Elsevier Inc. All rights reserved.
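
    One of the movement metrics above, idle time, is easy to state concretely: sum the sampling intervals in which the tracked sensor's displacement stays below a small threshold. The threshold, sampling interval, and synthetic trace below are illustrative, not the study's definitions.

```python
import math

def idle_time(positions, dt=0.01, thresh=1e-3):
    """Seconds during which frame-to-frame displacement is below thresh."""
    idle = 0.0
    for p0, p1 in zip(positions, positions[1:]):
        if math.dist(p0, p1) < thresh:
            idle += dt
    return idle

# Synthetic trace: the sensor is motionless for 50 samples, then moves.
positions = [(0.0, 0.0, 0.0)] * 50 + [(0.01 * i, 0.0, 0.0) for i in range(1, 51)]
idle = idle_time(positions)  # 49 motionless intervals of 10 ms each
```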

  4. Fine and Gross Motor Task Performance When Using Computer-Based Video Models by Students with Autism and Moderate Intellectual Disability

    Science.gov (United States)

    Mechling, Linda C.; Swindle, Catherine O.

    2013-01-01

    This investigation examined the effects of video modeling on the fine and gross motor task performance by three students with a diagnosis of moderate intellectual disability (Group 1) and by three students with a diagnosis of autism spectrum disorder (Group 2). Using a multiple probe design across three sets of tasks, the study examined the…

  5. Softly broken supersymmetry and the fine-tuning problem

    Energy Technology Data Exchange (ETDEWEB)

    Foda, O.E.

    1984-02-20

    The supersymmetry of the simple Wess-Zumino model is broken, in the tree approximation, by adding all possible parity-even (mass-)dimension 2 and 3 terms. The model is then renormalized using BPHZ and the normal product algorithm, such that supersymmetry is only softly broken (in the original sense of Schroer and Symanzik). We show that, within the above renormalization scheme, none of the added breaking terms gives rise to technical fine-tuning problems (defined in the sense of Gildener) in larger models with scalar multiplets and a hierarchy of mass scales, which is in contrast to what we obtain via analytic schemes such as dimensional renormalization, or supersymmetric extensions thereof. The discrepancy (which can be shown to persist in more general models) originates in the inherent local ambiguity in the finite parts of subtracted Feynman integrals. Emphasizing that the issue is purely technical (as opposed to physical) in origin, and that all physical properties are scheme-independent (as they should be), we conclude that the technical fine-tuning problem, in the specific sense used in this paper, being scheme-dependent, is not a well-defined issue within the context of renormalized perturbation theory. 30 references.

  6. Softly broken supersymmetry and the fine-tuning problem

    International Nuclear Information System (INIS)

    Foda, O.E.

    1984-01-01

    The supersymmetry of the simple Wess-Zumino model is broken, in the tree approximation, by adding all possible parity-even (mass-)dimension 2 and 3 terms. The model is then renormalized using BPHZ and the normal product algorithm, such that supersymmetry is only softly broken (in the original sense of Schroer and Symanzik). We show that, within the above renormalization scheme, none of the added breaking terms gives rise to technical fine-tuning problems (defined in the sense of Gildener) in larger models with scalar multiplets and a hierarchy of mass scales, which is in contrast to what we obtain via analytic schemes such as dimensional renormalization, or supersymmetric extensions thereof. The discrepancy (which can be shown to persist in more general models) originates in the inherent local ambiguity in the finite parts of subtracted Feynman integrals. Emphasizing that the issue is purely technical (as opposed to physical) in origin, and that all physical properties are scheme-independent (as they should be), we conclude that the technical fine-tuning problem, in the specific sense used in this paper, being scheme-dependent, is not a well-defined issue within the context of renormalized perturbation theory. (orig.)

  7. Simulating spin models on GPU

    Science.gov (United States)

    Weigel, Martin

    2011-09-01

    Over the last couple of years it has been realized that the vast computational power of graphics processing units (GPUs) could be harvested for purposes other than the video game industry. This power, which at least nominally exceeds that of current CPUs by large factors, results from the relative simplicity of the GPU architectures as compared to CPUs, combined with a large number of parallel processing units on a single chip. To benefit from this setup for general computing purposes, the problems at hand need to be prepared in a way to profit from the inherent parallelism and hierarchical structure of memory accesses. In this contribution I discuss the performance potential for simulating spin models, such as the Ising model, on GPU as compared to conventional simulations on CPU.
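
    The standard trick for mapping Metropolis updates of the Ising model onto GPU parallelism is a checkerboard decomposition: sites of one parity do not interact with each other, so an entire sublattice can be updated simultaneously. The vectorized NumPy sketch below stands in for the GPU threads; lattice size and temperature are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
L, beta = 32, 0.3                        # above T_c: disordered phase
spins = rng.choice([-1, 1], size=(L, L))
parity = np.add.outer(np.arange(L), np.arange(L)) % 2

def sweep(spins):
    """One checkerboard Metropolis sweep (periodic boundaries)."""
    for p in (0, 1):                     # update the two sublattices in turn
        nb = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0)
              + np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nb            # energy cost of flipping each site
        accept = rng.random((L, L)) < np.exp(-beta * dE)
        spins = np.where(accept & (parity == p), -spins, spins)
    return spins

for _ in range(100):
    spins = sweep(spins)
magnetisation = abs(float(spins.mean()))  # small in the disordered phase
```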

  8. Standard for Models and Simulations

    Science.gov (United States)

    Steele, Martin J.

    2016-01-01

    This NASA Technical Standard establishes uniform practices in modeling and simulation to ensure essential requirements are applied to the design, development, and use of models and simulations (M&S), while ensuring acceptance criteria are defined by the program/project and approved by the responsible Technical Authority. It also provides an approved set of requirements, recommendations, and criteria with which M&S may be developed, accepted, and used in support of NASA activities. As the M&S disciplines employed and application areas involved are broad, the common aspects of M&S across all NASA activities are addressed. The discipline-specific details of a given M&S should be obtained from relevant recommended practices. The primary purpose is to reduce the risks associated with M&S-influenced decisions by ensuring the complete communication of the credibility of M&S results.

  9. An Individual-based Probabilistic Model for Fish Stock Simulation

    Directory of Open Access Journals (Sweden)

    Federico Buti

    2010-08-01

    Full Text Available We define an individual-based probabilistic model of sole (Solea solea) behaviour. The individual model is given in terms of an Extended Probabilistic Discrete Timed Automaton (EPDTA), a new formalism that is introduced in the paper and that is shown to be interpretable as a Markov decision process. A given EPDTA model can be probabilistically model-checked by giving a suitable translation into syntax accepted by existing model-checkers. In order to simulate the dynamics of a given population of soles in different environmental scenarios, an agent-based simulation environment is defined in which each agent implements the behaviour of the given EPDTA model. By varying the probabilities and the characteristic functions embedded in the EPDTA model it is possible to represent different scenarios and to tune the model itself by comparing the results of the simulations with real data about the sole stock in the North Adriatic Sea, available from the recent project SoleMon. The simulator is presented and made available for its adaptation to other species.
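
    The agent's core loop can be caricatured as repeatedly firing probabilistic transitions of a timed automaton. The states, probabilities, and monthly clock below are invented for illustration and are not the sole model's actual parameters.

```python
import random

TRANSITIONS = {
    "feeding":   [("feeding", 0.80), ("migrating", 0.15), ("dead", 0.05)],
    "migrating": [("feeding", 0.60), ("migrating", 0.35), ("dead", 0.05)],
    "dead":      [("dead", 1.00)],       # absorbing state
}

def step(state, rng):
    """Fire one probabilistic transition from the current state."""
    r, acc = rng.random(), 0.0
    for nxt, p in TRANSITIONS[state]:
        acc += p
        if r < acc:
            return nxt
    return state

rng = random.Random(7)
state, clock = "feeding", 0
while state != "dead" and clock < 120:   # at most 120 monthly ticks
    state = step(state, rng)
    clock += 1
```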

  10. Simulating WTP Values from Random-Coefficient Models

    OpenAIRE

    Maurus Rischatsch

    2009-01-01

    Discrete Choice Experiments (DCEs) designed to estimate willingness-to-pay (WTP) values are very popular in health economics. With increased computation power and advanced simulation techniques, random-coefficient models have gained an increasing importance in applied work as they allow for taste heterogeneity. This paper discusses the parametrical derivation of WTP values from estimated random-coefficient models and shows how these values can be simulated in cases where they do not have a kn...
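
    The parametric derivation the abstract refers to is a ratio of coefficients: for a linear utility, WTP for an attribute is -beta_attribute / beta_price, and with random coefficients its distribution is simulated by drawing from the estimated coefficient distributions. The means and standard deviations below are invented for illustration.

```python
import random
import statistics

def simulate_wtp(n=10_000, seed=3):
    """Draw WTP values from hypothetical random-coefficient distributions."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        beta_attr = rng.gauss(0.8, 0.2)          # random taste coefficient
        beta_price = -abs(rng.gauss(0.4, 0.05))  # price coefficient, negative
        draws.append(-beta_attr / beta_price)
    return draws

wtp = simulate_wtp()
mean_wtp = statistics.mean(wtp)  # roughly E[beta_attr] / E[|beta_price|]
```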

  11. NMC and the Fine-Tuning Problem on the Brane

    Directory of Open Access Journals (Sweden)

    A. Safsafi

    2014-01-01

    Full Text Available We propose a new solution to the fine-tuning problem related to the coupling constant λ of the potential. We study a quartic potential of the form λϕ⁴ in the framework of the Randall-Sundrum type II braneworld model in the presence of a Higgs field which interacts nonminimally with gravity via a possible interaction term of the form -(ξ/2)ϕ²R. Using conformal transformation techniques, the slow-roll parameters in the high energy limit are reformulated in the case of a nonminimally coupled scalar field. We show that, for some values of the coupling parameter ξ and brane tension T, we can eliminate the fine-tuning problem. Finally, we present graphically the solutions for several values of the free parameters of the model.
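
    For reference, the standard (four-dimensional) slow-roll parameters for a potential V(ϕ) are shown below; the braneworld high-energy limit and the nonminimal coupling discussed above modify these through the brane tension T and the parameter ξ, so this is only the baseline form.

```latex
% Baseline slow-roll parameters; RS-II brane corrections rescale these
% by factors of V/T in the high-energy limit.
\epsilon = \frac{M_p^2}{2}\left(\frac{V'}{V}\right)^2 , \qquad
\eta = M_p^2 \, \frac{V''}{V} , \qquad
V(\phi) = \lambda \, \phi^4 .
```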

  12. Extended behavioural device modelling and circuit simulation with Qucs-S

    Science.gov (United States)

    Brinson, M. E.; Kuznetsov, V.

    2018-03-01

    Current trends in circuit simulation suggest a growing interest in open source software that allows access to more than one simulation engine while simultaneously supporting schematic drawing tools, behavioural Verilog-A and XSPICE component modelling, and output data post-processing. This article introduces a number of new features recently implemented in the 'Quite universal circuit simulator - SPICE variant' (Qucs-S), including structure and fundamental schematic capture algorithms, at the same time highlighting their use in behavioural semiconductor device modelling. Particular importance is placed on the interaction between Qucs-S schematics, equation-defined devices, SPICE B behavioural sources and hardware description language (HDL) scripts. The multi-simulator version of Qucs is a freely available tool that offers extended modelling and simulation features compared to those provided by legacy circuit simulators. The performance of a number of Qucs-S modelling extensions is demonstrated with a GaN HEMT compact device model and data obtained from tests using the Qucs-S/Ngspice/Xyce/SPICE OPUS multi-engine circuit simulator.

  13. Dynamic wind turbine models in power system simulation tool

    DEFF Research Database (Denmark)

    Hansen, A.; Jauch, Clemens; Soerensen, P.

    The present report describes the dynamic wind turbine models implemented in the power system simulation tool DIgSILENT. The developed models are a part of the results of a national research project, whose overall objective is to create a model database in different simulation tools. The report...

  14. A Network Contention Model for the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2015-01-01

    The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim that eliminates the shortcomings of the existing network modeling capabilities. The approach takes a different path to implementing network contention and bandwidth capacity modeling, using a less synchronous, yet sufficiently accurate, model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overhead.
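The essence of a contention-plus-bandwidth model of this kind is that a message pays a fixed latency and then shares the link capacity with every other flow active at the same time. A sketch under that assumption (the function and parameter names are hypothetical, not xSim's actual interface):

```python
def transfer_time(size_bytes, latency_s, bandwidth_bytes_per_s, concurrent_flows):
    """Hypothetical contention model: a message pays a fixed latency, and the
    link bandwidth is divided equally among all simultaneously active flows.
    Not xSim's actual implementation, which tracks contention less
    synchronously across simulated time."""
    effective_bw = bandwidth_bytes_per_s / max(1, concurrent_flows)
    return latency_s + size_bytes / effective_bw
```

With a 1 GB/s link and 1 µs latency, a 1 MB message takes ~1.001 ms alone but ~4.001 ms when four flows contend, which is the basic effect a contention model must reproduce.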

  15. Analog quantum simulation of generalized Dicke models in trapped ions

    Science.gov (United States)

    Aedo, Ibai; Lamata, Lucas

    2018-04-01

    We propose the analog quantum simulation of generalized Dicke models in trapped ions. By combining bichromatic laser interactions on multiple ions we can generate all regimes of light-matter coupling in these models, where the light mode is mimicked by a motional mode. We present numerical simulations of the three-qubit Dicke model both in the weak-field (WF) regime, where Jaynes-Cummings behavior arises, and in the ultrastrong coupling (USC) regime, where a rotating-wave approximation cannot be applied. We also simulate the two-qubit biased Dicke model in the WF and USC regimes, and the two-qubit anisotropic Dicke model in the USC and deep-strong coupling regimes. The agreement between the mathematical models and the ion system convinces us that these quantum simulations can be implemented in the laboratory with current or near-future technology. This formalism establishes an avenue for the quantum simulation of many-spin Dicke models in trapped ions.
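The Dicke Hamiltonian underlying these simulations, H = ω a†a + ω₀ J_z + (g/√N)(a + a†)(J₊ + J₋), can be built numerically in the symmetric spin-j sector with a truncated Fock space. A minimal NumPy sketch of that standard construction (an illustration of the model, not the authors' trapped-ion implementation):

```python
import numpy as np

def dicke_hamiltonian(n_qubits, n_fock, omega, omega0, g):
    """Dicke Hamiltonian H = w*a'a + w0*Jz + (g/sqrt(N))*(a + a')*(J+ + J-)
    in the symmetric spin-j sector (j = N/2), with the boson mode truncated
    to n_fock Fock states."""
    j = n_qubits / 2.0
    m = np.arange(j, -j - 1.0, -1.0)                   # Jz eigenvalues j..-j
    jz = np.diag(m)
    jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1)  # raising
    jm = jp.T                                          # lowering
    a = np.diag(np.sqrt(np.arange(1, n_fock)), k=1)    # truncated annihilation
    ad = a.T
    i_f, i_s = np.eye(n_fock), np.eye(len(m))
    return (omega * np.kron(ad @ a, i_s)
            + omega0 * np.kron(i_f, jz)
            + (g / np.sqrt(n_qubits)) * np.kron(a + ad, jp + jm))
```

At g = 0 the spectrum is simply ωn + ω₀m, so the ground energy of the three-qubit model is -3ω₀/2; diagonalizing at increasing g traces the crossover from the WF regime toward USC, where the counter-rotating (a J₊ + a†J₋) terms retained here become essential.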

  16. Equivalence of two models in single-phase multicomponent flow simulations

    KAUST Repository

    Wu, Yuanqing

    2016-02-28

    In this work, two models to simulate the single-phase multicomponent flow in reservoirs are introduced: single-phase multicomponent flow model and two-phase compositional flow model. Because the single-phase multicomponent flow is a special case of the two-phase compositional flow, the two-phase compositional flow model can also simulate the case. We compare and analyze the two models when simulating the single-phase multicomponent flow, and then demonstrate the equivalence of the two models mathematically. An experiment is also carried out to verify the equivalence of the two models.

  17. Equivalence of two models in single-phase multicomponent flow simulations

    KAUST Repository

    Wu, Yuanqing; Sun, Shuyu

    2016-01-01

    In this work, two models to simulate the single-phase multicomponent flow in reservoirs are introduced: single-phase multicomponent flow model and two-phase compositional flow model. Because the single-phase multicomponent flow is a special case of the two-phase compositional flow, the two-phase compositional flow model can also simulate the case. We compare and analyze the two models when simulating the single-phase multicomponent flow, and then demonstrate the equivalence of the two models mathematically. An experiment is also carried out to verify the equivalence of the two models.

  18. Common modelling approaches for training simulators for nuclear power plants

    International Nuclear Information System (INIS)

    1990-02-01

    Training simulators for nuclear power plant operating staff have gained increasing importance over the last twenty years. One of the recommendations of the 1983 IAEA Specialists' Meeting on Nuclear Power Plant Training Simulators in Helsinki was to organize a Co-ordinated Research Programme (CRP) on some aspects of training simulators. The goal statement was: ''To establish and maintain a common approach to modelling for nuclear training simulators based on defined training requirements''. Before adopting this goal statement, the participants considered many alternatives for defining the common aspects of training simulator models, such as the programming language used, the nature of the simulator computer system, the size of the simulation computers, and the scope of simulation. The participants agreed that it was the training requirements that defined the need for a simulator, the scope of models and hence the type of computer complex that was required, and the criteria for fidelity and verification, and that the training requirements were therefore the most appropriate basis for the commonality of modelling approaches. It should be noted that the Co-ordinated Research Programme was restricted, for a variety of reasons, to consider only a few aspects of training simulators. This report reflects these limitations, and covers only the topics considered within the scope of the programme. The information in this document is intended as an aid for operating organizations to identify possible modelling approaches for training simulators for nuclear power plants. 33 refs

  19. New modelling strategy for IRIS dynamic response simulation

    International Nuclear Information System (INIS)

    Cammi, A.; Ricotti, M. E.; Casella, F.; Schiavo, F.

    2004-01-01

    The pressurized light water cooled, medium power (1000 MWt) IRIS (International Reactor Innovative and Secure) has been under development for four years by an international consortium of over 21 organizations from ten countries. The plant conceptual design was completed in 2001 and the preliminary design is nearing completion. The pre-application licensing process with NRC started in October 2002, and IRIS is one of the designs considered by US utilities as part of the ESP (Early Site Permit) process. In this paper the development of an adequate modelling and simulation tool for Dynamics and Control tasks is presented. The key features of the developed simulator are: a) Modularity: the system model is built by connecting the models of its components, which are written independently of their boundary conditions; b) Openness: the code of each component model is clearly readable, close to the original equations, and easily customised by the experienced user; c) Efficiency: the simulation code is fast; d) Tool support: the simulation tool is based on reliable, tested and well-documented software. To achieve these objectives, the Modelica language was used as a basis for the development of the simulator. The Modelica language is the result of recent advances in the field of object-oriented, multi-physics, dynamic system modelling. The language definition is open-source and it has already been successfully adopted in several industrial fields. To provide the required capabilities for the analysis, specific models for nuclear reactor components have been developed, to be applied for the dynamic simulation of the IRIS integral reactor, albeit keeping general validity for PWR plants. 
The following Modelica models have been written to satisfy the IRIS modelling requirements and are presented in this paper: neutronics point kinetics, fuel heat transfer, a control rods model (including the innovative internal drive mechanism type), and a once-through type steam generator, thus
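Of the component models listed, the neutronics point-kinetics model is the simplest to sketch: with one effective delayed-neutron group, dn/dt = ((ρ − β)/Λ)n + λc and dc/dt = (β/Λ)n − λc. A forward-Euler step in Python (illustrative only; the IRIS simulator implements this in Modelica, typically with six precursor groups and an implicit solver):

```python
def point_kinetics_step(n, c, rho, beta, lam_gen, lam_dec, dt):
    """One forward-Euler step of one-group point kinetics:
        dn/dt = ((rho - beta) / lam_gen) * n + lam_dec * c
        dc/dt = (beta / lam_gen) * n   - lam_dec * c
    n: neutron density, c: delayed-neutron precursor concentration,
    rho: reactivity, beta: delayed-neutron fraction,
    lam_gen: neutron generation time, lam_dec: precursor decay constant."""
    dn = ((rho - beta) / lam_gen * n + lam_dec * c) * dt
    dc = (beta / lam_gen * n - lam_dec * c) * dt
    return n + dn, c + dc
```

At ρ = 0 with the precursor population at its equilibrium value c = βn/(Λλ), both derivatives vanish and the state is stationary, which is a convenient sanity check on any point-kinetics implementation.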

  20. Maintenance Personnel Performance Simulation (MAPPS) model

    International Nuclear Information System (INIS)

    Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Knee, H.E.; Haas, P.M.

    1984-01-01

    A stochastic computer model for simulating the actions and behavior of nuclear power plant maintenance personnel is described. The model considers personnel, environmental, and motivational variables to yield predictions of maintenance performance quality and time to perform. The model has been fully developed and sensitivity tested. Additional evaluation of the model is now taking place
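The structure of such a stochastic performance model, where a base task time is modulated by personnel and environment factors and sampled over many trials, can be sketched as follows; the multipliers and the lognormal distribution are purely illustrative, not the actual MAPPS relationships:

```python
import random

def simulate_task(base_time, stress, ability, n_trials=2000, seed=0):
    """Monte Carlo sketch of a stochastic performance model: mean time to
    perform a task, lengthened by stress and shortened by ability, with
    lognormal trial-to-trial variation.  The modifier form and distribution
    are purely illustrative, not the MAPPS equations."""
    rng = random.Random(seed)
    factor = (1.0 + stress) / ability           # deterministic modifiers
    times = [base_time * factor * rng.lognormvariate(0.0, 0.2)
             for _ in range(n_trials)]
    return sum(times) / n_trials
```

Running many seeded trials and aggregating the distribution of predicted times is also what makes sensitivity testing of the kind mentioned above straightforward: vary one input factor at a time and observe the shift in the output statistics.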