WorldWideScience

Sample records for subgrid scale models

  1. Sub-Grid Scale Plume Modeling

    Directory of Open Access Journals (Sweden)

    Greg Yarwood

    2011-08-01

Multi-pollutant chemical transport models (CTMs) are routinely used to predict the impacts of emission controls on the concentrations and deposition of primary and secondary pollutants. While these models have a fairly comprehensive treatment of the governing atmospheric processes, their relatively coarse horizontal resolution prevents them from correctly representing processes that occur at very fine scales, such as the near-source transport and chemistry of emissions from elevated point sources. Several approaches have been used to address this limitation, such as fine grids, adaptive grids, hybrid modeling, or an embedded sub-grid scale plume model, i.e., plume-in-grid (PinG) modeling. In this paper, we first discuss the relative merits of these approaches for resolving sub-grid scale effects in grid models, and then focus on PinG modeling, which has been very effective in addressing the problems listed above. We start with a history and review of PinG modeling, from its initial applications to ozone modeling in the Urban Airshed Model (UAM) in the early 1980s using a relatively simple plume model, to more sophisticated, state-of-the-science plume models that include a full treatment of gas-phase, aerosol, and cloud chemistry, embedded in contemporary models such as CMAQ, CAMx, and WRF-Chem. We present examples of typical results from PinG modeling for a variety of applications, discuss the implications of PinG for model predictions of source attribution, and discuss possible future developments and applications of PinG modeling.

  2. Simple subgrid scale stresses models for homogeneous isotropic turbulence

    Science.gov (United States)

    Aupoix, B.; Cousteix, J.

Large eddy simulations employing the filtering of the Navier-Stokes equations highlight stresses, related to the interaction between large scales below the cut and small scales above it, which have been designated 'subgrid scale stresses'. Their effects include both the energy flux through the cut and a component of viscous diffusion. The eddy viscosity introduced in subgrid scale models that give the correct energy flux through the cut, by comparison with spectral closures, is shown to depend only on the small scales. The Smagorinsky (1963) model can only be obtained if the cut lies in the middle of the inertial range. A novel model, which takes the small scales into account statistically and includes the effects of viscosity, is proposed and compared with classical models for the Comte-Bellot and Corrsin (1971) experiment.
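The Smagorinsky model discussed in this record computes an eddy viscosity from the resolved strain rate, nu_t = (Cs*Delta)^2 |S| with |S| = sqrt(2 S_ij S_ij). A minimal 2-D numpy sketch (the coefficient value and the grid layout, axis 0 = x with uniform spacing, are assumptions for illustration):

```python
import numpy as np

def smagorinsky_viscosity(u, v, dx, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (cs * dx)^2 * |S| on a uniform 2-D grid.

    u, v : velocity components sampled as u[i, j] with axis 0 = x, axis 1 = y.
    |S| = sqrt(2 S_ij S_ij), S_ij the resolved strain-rate tensor.
    """
    dudx, dudy = np.gradient(u, dx, dx)   # derivatives along x (axis 0), y (axis 1)
    dvdx, dvdy = np.gradient(v, dx, dx)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * dx) ** 2 * s_mag
```

For a pure shear flow u = a*y, v = 0, this returns the constant field (cs*dx)^2 * a, as expected from |S| = a.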

  3. Modeling Subgrid Scale Droplet Deposition in Multiphase-CFD

    Science.gov (United States)

    Agostinelli, Giulia; Baglietto, Emilio

    2017-11-01

The development of first-principle-based constitutive equations for the Eulerian-Eulerian CFD modeling of annular flow is a major priority to extend the applicability of multiphase CFD (M-CFD) across all two-phase flow regimes. Two key mechanisms need to be incorporated in the M-CFD framework: the entrainment of droplets from the liquid film, and their deposition. Here we focus first on deposition, leveraging a separate-effects approach. Current two-field methods in M-CFD do not include appropriate local closures to describe the deposition of droplets in annular flow conditions. While many integral correlations for deposition have been proposed for lumped-parameter applications, few attempts exist in the literature to extend their applicability to CFD simulations. The integral nature of the approach limits its applicability to fully developed flow conditions, without geometrical or flow variations, thereby negating the benefit of CFD. A new approach is proposed here that leverages local quantities to predict the subgrid-scale deposition rate. The methodology is first tested in a three-field CFD model.

  4. Exploring nonlinear subgrid-scale models and new characteristic length scales for large-eddy simulation

    NARCIS (Netherlands)

    Silvis, Maurits H.; Trias, F. Xavier; Abkar, M.; Bae, H.J.; Lozano-Duran, A.; Verstappen, R.W.C.P.; Moin, Parviz; Urzay, Javier

    2016-01-01

    We study subgrid-scale modeling for large-eddy simulation of anisotropic turbulent flows on anisotropic grids. In particular, we show how the addition of a velocity-gradient-based nonlinear model term to an eddy viscosity model provides a better representation of energy transfer. This is shown to

  5. Physical consistency of subgrid-scale models for large-eddy simulation of incompressible turbulent flows

    CERN Document Server

    Silvis, Maurits H; Verstappen, Roel

    2016-01-01

    We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach of constructing subgrid-scale models, based on the idea that it is desirable that subgrid-scale models are consistent with the properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, after which we explain that this is p...

  6. Stochastic fields method for sub-grid scale emission heterogeneity in mesoscale atmospheric dispersion models

    OpenAIRE

Cassiani, M.; Vinuesa, J.F.; Galmarini, S.; Denby, B.

    2010-01-01

    The stochastic fields method for turbulent reacting flows has been applied to the issue of sub-grid scale emission heterogeneity in a mesoscale model. This method is a solution technique for the probability density function (PDF) transport equation and can be seen as a straightforward extension of currently used mesoscale dispersion models. It has been implemented in an existing mesoscale model and the results are compared with Large-Eddy Simulation (LES) data devised to test specifically the...

  7. Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows

    Science.gov (United States)

    Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel

    2017-11-01

    We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.
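The mixed model described in this record adds a nondissipative nonlinear term, built from the velocity gradient, to the usual eddy-viscosity term. A common choice for such a nonlinear term is the commutator of the strain-rate and rotation-rate tensors, S W - W S, which is symmetric and does no net work against the resolved strain; the specific form and coefficients below are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def sgs_stress_mixed(grad_u, nu_e, c_nl, delta):
    """Mixed SGS stress sketch: dissipative eddy-viscosity part plus a
    nondissipative nonlinear part built from the velocity gradient.

    grad_u : 3x3 array, grad_u[i, j] = du_i/dx_j
    tau = -2 nu_e S + c_nl delta^2 (S @ W - W @ S)   (illustrative form)
    """
    S = 0.5 * (grad_u + grad_u.T)   # strain-rate tensor (symmetric)
    W = 0.5 * (grad_u - grad_u.T)   # rotation-rate tensor (antisymmetric)
    return -2.0 * nu_e * S + c_nl * delta**2 * (S @ W - W @ S)
```

The commutator term is traceless, symmetric, and satisfies (S W - W S) : S = 0, so it transports energy without producing subgrid dissipation, which is exactly the property the abstract attributes to the nonlinear term.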

  8. Multifractal subgrid-scale modeling within a variational multiscale method for large-eddy simulation of turbulent flow

    Science.gov (United States)

    Rasthofer, U.; Gravemeier, V.

    2013-02-01

    Multifractal subgrid-scale modeling within a variational multiscale method is proposed for large-eddy simulation of turbulent flow. In the multifractal subgrid-scale modeling approach, the subgrid-scale velocity is evaluated from a multifractal description of the subgrid-scale vorticity, which is based on the multifractal scale similarity of gradient fields in turbulent flow. The multifractal subgrid-scale modeling approach is integrated into a variational multiscale formulation, which constitutes a new application of the variational multiscale concept. A focus of this study is on the application of the multifractal subgrid-scale modeling approach to wall-bounded turbulent flow. Therefore, a near-wall limit of the multifractal subgrid-scale modeling approach is derived in this work. The novel computational approach of multifractal subgrid-scale modeling within a variational multiscale formulation is applied to turbulent channel flow at various Reynolds numbers, turbulent flow over a backward-facing step and turbulent flow past a square-section cylinder, which are three of the most important and widely-used benchmark examples for wall-bounded turbulent flow. All results presented in this study confirm a very good performance of the proposed method. Compared to a dynamic Smagorinsky model and a residual-based variational multiscale method, improved results are obtained. Moreover, it is demonstrated that the subgrid-scale energy transfer incorporated by the proposed method very well approximates the expected energy transfer as obtained from appropriately filtered direct numerical simulation data. The computational cost is notably reduced compared to a dynamic Smagorinsky model and only marginally increased compared to a residual-based variational multiscale method.

  9. Stochastic fields method for sub-grid scale emission heterogeneity in mesoscale atmospheric dispersion models

    Directory of Open Access Journals (Sweden)

    M. Cassiani

    2010-01-01

The stochastic fields method for turbulent reacting flows has been applied to the issue of sub-grid scale emission heterogeneity in a mesoscale model. This method is a solution technique for the probability density function (PDF) transport equation and can be seen as a straightforward extension of currently used mesoscale dispersion models. It has been implemented in an existing mesoscale model and the results are compared with Large-Eddy Simulation (LES) data devised specifically to test the effect of sub-grid scale emission heterogeneity on boundary layer concentration fluctuations. The sub-grid scale emission variability is assimilated in the model as a PDF of the emissions. The stochastic fields method shows excellent agreement with the LES data without adjustment of the constants used in the mesoscale model. Because the stochastic fields method is a stochastic solution of the transport equations for the concentration PDF of dispersing scalars, it can handle chemistry of any complexity without introducing additional closures for the high-order statistics of chemical species. This study shows for the first time the feasibility of applying this method to mesoscale chemical transport models.
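One building block of such PDF transport methods is the micromixing closure that relaxes each stochastic field toward the ensemble mean, decaying the subgrid scalar variance. A 0-D sketch of that single ingredient (the full stochastic fields method also carries advection, diffusion, chemistry, and a Wiener term, all omitted here):

```python
import numpy as np

def iem_micromixing_step(fields, t_mix, dt):
    """One explicit-Euler step of IEM-style micromixing: relax each
    stochastic field toward the ensemble mean with time scale t_mix.
    The ensemble mean is conserved; the variance decays each step by
    the factor (1 - dt / t_mix)**2.
    """
    mean = fields.mean()
    return fields - (dt / t_mix) * (fields - mean)
```

For example, two fields [0, 2] with dt/t_mix = 0.5 become [0.5, 1.5]: the mean stays 1 while the variance drops from 1 to 0.25.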

  10. Lagrangian scheme to model subgrid-scale mixing and spreading in heterogeneous porous media

    Science.gov (United States)

    Herrera, P. A.; Cortínez, J. M.; Valocchi, A. J.

    2017-04-01

Small-scale heterogeneity of permeability controls spreading, dilution, and mixing of solute plumes at large scale. However, conventional numerical simulations of solute transport are unable to resolve scales of heterogeneity below the grid scale. We propose a Lagrangian numerical approach to implement closure models that account for subgrid-scale spreading and mixing in Darcy-scale numerical simulations of solute transport in mildly heterogeneous porous media. The novelty of the proposed approach is that it considers two different dispersion coefficients to account for advective spreading mechanisms and local-scale dispersion. Using results of benchmark numerical simulations, we demonstrate that the proposed approach is able to model subgrid-scale spreading and mixing provided there is a correct choice of block-scale dispersion coefficient. We also demonstrate that for short travel times it is only possible to account for spreading or mixing using a single block-scale dispersion coefficient. Moreover, we show that it is necessary to use time-dependent dispersion coefficients to obtain correct mixing rates. In contrast, for travel times that are large in comparison to the typical dispersive time scale, it is possible to use a single expression to compute the block-dispersion coefficient, which is equal to the asymptotic limit of the block-scale macrodispersion coefficient proposed by Rubin et al. (1999). Our approach provides a flexible and efficient way to model subgrid-scale mixing in numerical models of large-scale solute transport in heterogeneous aquifers. We expect that these findings will help to better understand the applicability of the advection-dispersion equation (ADE) for simulating solute transport at the Darcy scale in heterogeneous porous media.
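The Lagrangian backbone of such a scheme is a random-walk particle step whose noise amplitude is set by a dispersion coefficient. A minimal sketch in which the two coefficients of the record (block-scale spreading and local-scale dispersion) are simply summed into one step; this additive split is an illustrative simplification, not the paper's closure:

```python
import numpy as np

def random_walk_step(x, v, d_block, d_local, dt, rng):
    """One random-walk step for an ensemble of particle positions x:
    deterministic drift v*dt plus a Gaussian jump whose variance
    2*(d_block + d_local)*dt combines a block-scale and a local-scale
    dispersion coefficient (additive split assumed for illustration).
    """
    d_total = d_block + d_local
    return x + v * dt + np.sqrt(2.0 * d_total * dt) * rng.standard_normal(x.shape)
```

Starting an ensemble at x = 0 with zero drift, one step produces a plume variance close to 2*(d_block + d_local)*dt, the standard Fickian growth rate.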

  11. Large eddy simulation of flow over a wall-mounted cube: Comparison of different semi dynamic subgrid scale models

    Directory of Open Access Journals (Sweden)

    M Nooroullahi

    2016-09-01

In this paper, the ability of different semi-dynamic subgrid-scale models for large-eddy simulation is studied in a challenging test case. The semi-dynamic subgrid-scale models examined in this investigation are the Selective Structure model, the Coherent Structure model, and the Wall-Adaptive Large Eddy model. The test case is a simulation of flow over a wall-mounted cube in a channel. The results of these models were compared to the structure-function model, dynamic models, and experimental data at a Reynolds number of 40,000. Results show that these semi-dynamic models improve the numerical simulation in comparison with models that use a constant coefficient for the subgrid-scale viscosity. In addition, these models do not suffer from the instability problems of dynamic models.

  12. Effects of Implementing Subgrid-Scale Cloud-Radiation Interactions in a Regional Climate Model

    Science.gov (United States)

    Herwehe, J. A.; Alapaty, K.; Otte, T.; Nolte, C. G.

    2012-12-01

    Interactions between atmospheric radiation, clouds, and aerosols are the most important processes that determine the climate and its variability. In regional scale models, when used at relatively coarse spatial resolutions (e.g., larger than 1 km), convective cumulus clouds need to be parameterized as subgrid-scale clouds. Like many groups, our regional climate modeling group at the EPA uses the Weather Research & Forecasting model (WRF) as a regional climate model (RCM). One of the findings from our RCM studies is that the summertime convective systems simulated by the WRF model are highly energetic, leading to excessive surface precipitation. We also found that the WRF model does not consider the interactions between convective clouds and radiation, thereby omitting an important process that drives the climate. Thus, the subgrid-scale cloudiness associated with convective clouds (from shallow cumuli to thunderstorms) does not exist and radiation passes through the atmosphere nearly unimpeded, potentially leading to overly energetic convection. This also has implications for air quality modeling systems that are dependent upon cloud properties from the WRF model, as the failure to account for subgrid-scale cloudiness can lead to problems such as the underrepresentation of aqueous chemistry processes within clouds and the overprediction of ozone from overactive photolysis. In an effort to advance the climate science of the cloud-aerosol-radiation (CAR) interactions in RCM systems, as a first step we have focused on linking the cumulus clouds with the radiation processes. To this end, our research group has implemented into WRF's Kain-Fritsch (KF) cumulus parameterization a cloudiness formulation that is widely used in global earth system models (e.g., CESM/CAM5). 
Estimated grid-scale cloudiness and associated condensate are adjusted to account for the subgrid clouds and then passed to WRF's Rapid Radiative Transfer Model - Global (RRTMG) radiation schemes to affect

  13. Improving sub-grid scale accuracy of boundary features in regional finite-difference models

    Science.gov (United States)

    Panday, Sorab; Langevin, Christian D.

    2012-01-01

    As an alternative to grid refinement, the concept of a ghost node, which was developed for nested grid applications, has been extended towards improving sub-grid scale accuracy of flow to conduits, wells, rivers or other boundary features that interact with a finite-difference groundwater flow model. The formulation is presented for correcting the regular finite-difference groundwater flow equations for confined and unconfined cases, with or without Newton Raphson linearization of the nonlinearities, to include the Ghost Node Correction (GNC) for location displacement. The correction may be applied on the right-hand side vector for a symmetric finite-difference Picard implementation, or on the left-hand side matrix for an implicit but asymmetric implementation. The finite-difference matrix connectivity structure may be maintained for an implicit implementation by only selecting contributing nodes that are a part of the finite-difference connectivity. Proof of concept example problems are provided to demonstrate the improved accuracy that may be achieved through sub-grid scale corrections using the GNC schemes.
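The ghost-node idea can be illustrated with a one-line flux correction: the head at a ghost point displaced toward the boundary feature is estimated by linear interpolation between the cell node and a contributing neighbor, and that interpolated head replaces the nodal head in the intercell flux. Variable names and the simple linear form are assumptions; the paper develops confined/unconfined variants, Newton-Raphson linearization, and left- versus right-hand-side placement of the correction:

```python
def gnc_flux(c_mn, h_m, h_n, h_contrib, alpha):
    """Finite-difference flux between cells m and n with a Ghost Node
    Correction sketch.

    c_mn      : intercell conductance
    h_m, h_n  : heads at nodes m and n
    h_contrib : head at a contributing neighbor node (part of the
                existing finite-difference connectivity)
    alpha     : interpolation weight locating the ghost node between
                node m (alpha=0) and the contributing node (alpha=1)
    """
    h_ghost = h_m + alpha * (h_contrib - h_m)   # displaced ghost-node head
    return c_mn * (h_ghost - h_n)
```

With alpha = 0 the expression reduces to the uncorrected finite-difference flux c_mn*(h_m - h_n), so the correction vanishes smoothly when no displacement is needed.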

  14. Numerical dissipation vs. subgrid-scale modelling for large eddy simulation

    Science.gov (United States)

    Dairay, Thibault; Lamballais, Eric; Laizet, Sylvain; Vassilicos, John Christos

    2017-05-01

This study presents an alternative way to perform large eddy simulation based on a targeted numerical dissipation introduced by the discretization of the viscous term. It is shown that this regularisation technique is equivalent to the use of spectral vanishing viscosity. The flexibility of the method ensures high-order accuracy while controlling the level and spectral features of this purely numerical viscosity. A Pao-like spectral closure based on physical arguments is used to scale this numerical viscosity a priori. It is shown that this way of approaching large eddy simulation is more efficient and accurate than the use of the very popular Smagorinsky model in both its standard and dynamic versions. The main strength of being able to correctly calibrate numerical dissipation is the possibility of regularising the solution at the mesh scale. Thanks to this property, it is shown that the solution can be seen as numerically converged. Conversely, the two versions of the Smagorinsky model are found to be unable to ensure regularisation while showing a strong sensitivity to numerical errors. The originality of the present approach is that it can be viewed as implicit large eddy simulation, in the sense that the numerical error is the source of artificial dissipation, but also as explicit subgrid-scale modelling, because of the equivalence with spectral viscosity prescribed on a physical basis.

  15. A dynamic subgrid scale model for Large Eddy Simulations based on the Mori-Zwanzig formalism

    Science.gov (United States)

    Parish, Eric J.; Duraisamy, Karthik

    2017-11-01

    The development of reduced models for complex multiscale problems remains one of the principal challenges in computational physics. The optimal prediction framework of Chorin et al. [1], which is a reformulation of the Mori-Zwanzig (M-Z) formalism of non-equilibrium statistical mechanics, provides a framework for the development of mathematically-derived reduced models of dynamical systems. Several promising models have emerged from the optimal prediction community and have found application in molecular dynamics and turbulent flows. In this work, a new M-Z-based closure model that addresses some of the deficiencies of existing methods is developed. The model is constructed by exploiting similarities between two levels of coarse-graining via the Germano identity of fluid mechanics and by assuming that memory effects have a finite temporal support. The appeal of the proposed model, which will be referred to as the 'dynamic-MZ-τ' model, is that it is parameter-free and has a structural form imposed by the mathematics of the coarse-graining process (rather than the phenomenological assumptions made by the modeler, such as in classical subgrid scale models). To promote the applicability of M-Z models in general, two procedures are presented to compute the resulting model form, helping to bypass the tedious error-prone algebra that has proven to be a hindrance to the construction of M-Z-based models for complex dynamical systems. While the new formulation is applicable to the solution of general partial differential equations, demonstrations are presented in the context of Large Eddy Simulation closures for the Burgers equation, decaying homogeneous turbulence, and turbulent channel flow. The performance of the model and validity of the underlying assumptions are investigated in detail.
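Schematically, the M-Z formalism invoked here rewrites the dynamics of the resolved variables as a Markovian term, a memory convolution, and a noise term driven by the unresolved initial condition (notation illustrative; symbols follow the usual optimal-prediction presentation rather than this paper's exact statement):

```latex
\frac{\partial \bar{\phi}}{\partial t}
  = \underbrace{\mathcal{P}\mathcal{L}\bar{\phi}}_{\text{Markovian}}
  + \underbrace{\int_0^t K\big(\bar{\phi}(t-s),\, s\big)\,\mathrm{d}s}_{\text{memory}}
  + \underbrace{F(\phi_0',\, t)}_{\text{noise}}
```

The 'dynamic-MZ-τ' model's finite-memory assumption amounts to truncating the convolution integral to a window of length τ, with τ then determined dynamically via the Germano identity rather than prescribed by the modeler.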

  16. A scale-aware subgrid model for quasi-geostrophic turbulence

    Science.gov (United States)

    Bachman, Scott D.; Fox-Kemper, Baylor; Pearson, Brodie

    2017-02-01

This paper introduces two methods for dynamically prescribing eddy-induced diffusivity, advection, and viscosity appropriate for primitive-equation models with resolutions permitting the forward potential-enstrophy cascade of quasi-geostrophic dynamics, such as operational ocean models and high-resolution climate models with O(25) km horizontal resolution and finer. Where quasi-geostrophic dynamics fail (e.g., the equator, boundary layers, and deep convection), the method reverts to scalings based on a matched two-dimensional enstrophy cascade. A principal advantage is that these subgrid models are scale-aware, meaning that the model is suitable over a range of grid resolutions: from mesoscale grids that just permit baroclinic instabilities to grids below the submesoscale where ageostrophic effects dominate. Two approaches are presented here using Large Eddy Simulation (LES) techniques adapted for three-dimensional rotating, stratified turbulence. The simpler approach has one nondimensional parameter, Λ, which has an optimal value near 1. The second approach dynamically optimizes Λ during the simulation using a test filter. The new methods are tested in an idealized scenario by varying the grid resolution, and their use improves the spectra of potential enstrophy and energy in comparison to extant schemes. The new methods keep the gridscale Reynolds and Péclet numbers near 1 throughout the domain, which confers robust numerical stability and minimal spurious diapycnal mixing. Although there are no explicit parameters in the dynamic approach, there is strong sensitivity to the choice of test filter. Designing test filters for heterogeneous ocean turbulence adds cost and uncertainty, and we find the dynamic method does not noticeably improve over setting Λ = 1.
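A Leith-type scaling is the natural closure for a forward enstrophy cascade: the eddy viscosity is built from the grid scale and the local potential-vorticity gradient. A one-line sketch assuming the form nu = (Λ Δ)^3 |∇q| (the paper's exact nondimensionalization and its blending with the 2-D enstrophy-cascade scaling may differ):

```python
def qg_leith_viscosity(grad_q_mag, dx, lam=1.0):
    """Scale-aware eddy viscosity in the spirit of a QG Leith scheme:
    nu = (lam * dx)**3 * |grad q|, with q the potential vorticity and
    lam the single nondimensional parameter (optimal value near 1 per
    the abstract). Form assumed for illustration.
    """
    return (lam * dx) ** 3 * grad_q_mag
```

Because nu shrinks as dx**3, the implied gridscale Reynolds number stays O(1) as the grid is refined, which is the scale-awareness property the abstract emphasizes.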

  17. Recursive renormalization group theory based subgrid modeling

    Science.gov (United States)

Zhou, Ye

    1991-01-01

This work addresses advancing the knowledge and understanding of turbulence theory. Specific problems to be addressed include studies of subgrid models to understand the effects of unresolved small-scale dynamics on the large-scale motion which, if successful, might substantially reduce the number of degrees of freedom that need to be computed in turbulence simulation.

  18. One-equation sub-grid scale (SGS) modelling for Euler-Euler large eddy simulation (EELES) of dispersed bubbly flow

    NARCIS (Netherlands)

    Niceno, B.; Dhotre, M.T.; Deen, N.G.

    2008-01-01

    In this work, we have presented a one-equation model for sub-grid scale (SGS) kinetic energy and applied it for an Euler-Euler large eddy simulation (EELES) of a bubble column reactor. The one-equation model for SGS kinetic energy shows improved predictions over the state-of-the-art dynamic
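The one-equation approach carries a transport equation for the SGS kinetic energy k and converts it to an eddy viscosity through the standard algebraic relation nu_sgs = C_k Δ sqrt(k). A minimal sketch of that conversion step; the coefficient value below is the common single-phase choice, and the EELES work additionally includes bubble-induced turbulence source terms in the k equation, which are not shown:

```python
import numpy as np

def sgs_viscosity_from_k(k_sgs, delta, c_k=0.094):
    """One-equation SGS model closure step: nu_sgs = c_k * delta * sqrt(k_sgs),
    where k_sgs is the subgrid-scale kinetic energy obtained from its own
    transport equation and delta is the filter width (coefficient value is
    a common single-phase default, assumed here for illustration).
    """
    return c_k * delta * np.sqrt(k_sgs)
```

Unlike the algebraic Smagorinsky closure, k here carries history and transport effects, which is why the abstract reports improved predictions over purely dynamic algebraic models.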

  19. Statistical dynamical subgrid-scale parameterizations for geophysical flows

    Energy Technology Data Exchange (ETDEWEB)

O'Kane, T. J.; Frederiksen, J. S. [Centre for Australian Weather and Climate Research, Bureau of Meteorology, 700 Collins St, Docklands, Melbourne, VIC (Australia) and CSIRO Marine and Atmospheric Research, Aspendale, VIC (Australia)], E-mail: t.okane@bom.gov.au

    2008-12-15

    Simulations of both atmospheric and oceanic circulations at given finite resolutions are strongly dependent on the form and strengths of the dynamical subgrid-scale parameterizations (SSPs) and in particular are sensitive to subgrid-scale transient eddies interacting with the retained scale topography and the mean flow. In this paper, we present numerical results for SSPs of the eddy-topographic force, stochastic backscatter, eddy viscosity and eddy-mean field interaction using an inhomogeneous statistical turbulence model based on a quasi-diagonal direct interaction approximation (QDIA). Although the theoretical description on which our model is based is for general barotropic flows, we specifically focus on global atmospheric flows where large-scale Rossby waves are present. We compare and contrast the closure-based results with an important earlier heuristic SSP of the eddy-topographic force, based on maximum entropy or statistical canonical equilibrium arguments, developed specifically for general ocean circulation models (Holloway 1992 J. Phys. Oceanogr. 22 1033-46). Our results demonstrate that where strong zonal flows and Rossby waves are present, such as in the atmosphere, maximum entropy arguments are insufficient to accurately parameterize the subgrid contributions due to eddy-eddy, eddy-topographic and eddy-mean field interactions. We contrast our atmospheric results with findings for the oceans. Our study identifies subgrid-scale interactions that are currently not parameterized in numerical atmospheric climate models, which may lead to systematic defects in the simulated circulations.

  20. A new mixed subgrid-scale model for large eddy simulation of turbulent drag-reducing flows of viscoelastic fluids

    Science.gov (United States)

    Li, Feng-Chen; Wang, Lu; Cai, Wei-Hua

    2015-07-01

A mixed subgrid-scale (SGS) model based on coherent structures and temporal approximate deconvolution (MCT) is proposed for turbulent drag-reducing flows of viscoelastic fluids. The main idea of the MCT SGS model is to perform spatial filtering for the momentum equation and temporal filtering for the conformation tensor transport equation of the turbulent flow of a viscoelastic fluid, respectively. The MCT model is suitable for large eddy simulation (LES) of turbulent drag-reducing flows of viscoelastic fluids in engineering applications since the model parameters can be easily obtained. LES of forced homogeneous isotropic turbulence (FHIT) with polymer additives and of turbulent channel flow with surfactant additives based on the MCT SGS model shows excellent agreement with direct numerical simulation (DNS) results. Compared with the LES results using the temporal approximate deconvolution model (TADM) for FHIT with polymer additives, the mixed MCT SGS model behaves better as computational parameters such as the Reynolds number are increased. Since turbulent flows at high Reynolds numbers are of interest in scientific and engineering research, the MCT model can be a more suitable model for LES of turbulent drag-reducing flows of viscoelastic fluids with polymer or surfactant additives. Project supported by the China Postdoctoral Science Foundation (Grant No. 2011M500652), the National Natural Science Foundation of China (Grant Nos. 51276046 and 51206033), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20112302110020).

  1. Exploring the Limits of the Dynamic Procedure for Modeling Subgrid-Scale Stresses in LES of Inhomogeneous Flows.

    Science.gov (United States)

    Le, A.-T.; Kim, J.; Coleman, G.

    1996-11-01

One of the primary reasons dynamic subgrid-scale (SGS) models are more successful than those that are 'hand-tuned' is thought to be their insensitivity to numerical and modeling parameters. Jiménez has recently demonstrated that large-eddy simulations (LES) of decaying isotropic turbulence using a dynamic Smagorinsky model yield correct decay rates -- even when the model is subjected to a range of artificial perturbations. The objective of the present study is to determine to what extent this 'self-adjusting' feature of dynamic SGS models is found in LES of inhomogeneous flows. The effects of numerical and modeling parameters on the accuracy of LES solutions of fully developed and developing turbulent channel flow are studied, using a spectral code and various dynamic models (including those of Lilly et al. and Meneveau et al.); other modeling parameters tested include the filter-width ratio and the effective magnitude of the Smagorinsky coefficient. Numerical parameters include the form of the convective term and the type of test filter (sharp-cutoff versus tophat). The resulting LES statistics are found to be surprisingly sensitive to the various parameter choices, which implies that more care than is needed for homogeneous-flow simulations must be exercised when performing LES of inhomogeneous flows.
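The dynamic procedure referenced here (Lilly's least-squares version) determines the Smagorinsky coefficient from the Germano-identity tensors L and M by contracting and averaging, C = ⟨L:M⟩ / ⟨M:M⟩. A sketch of just that evaluation step, assuming L and M have already been computed from grid- and test-filtered fields:

```python
import numpy as np

def dynamic_coefficient(L, M):
    """Least-squares (Lilly) evaluation of the dynamic coefficient from
    Germano-identity tensors.

    L, M : arrays of shape (n_points, 3, 3) holding the resolved
           (Leonard) stress L_ij and the model tensor M_ij at each point.
    Returns C = <L_ij M_ij> / <M_ij M_ij>, averaged over points to
    stabilize the otherwise noisy pointwise ratio.
    """
    num = np.mean(np.sum(L * M, axis=(-2, -1)))
    den = np.mean(np.sum(M * M, axis=(-2, -1)))
    return num / den
```

The averaging direction is one of the parameter choices the abstract finds the statistics sensitive to: in channel flow the mean is usually taken over homogeneous (wall-parallel) planes rather than the whole domain.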

  2. A Physically Based Horizontal Subgrid-scale Turbulent Mixing Parameterization for the Convective Boundary Layer in Mesoscale Models

    Science.gov (United States)

    Zhou, Bowen; Xue, Ming; Zhu, Kefeng

    2017-04-01

    Compared to the representation of vertical turbulent mixing through various PBL schemes, the treatment of horizontal turbulence mixing in the boundary layer within mesoscale models, with O(10) km horizontal grid spacing, has received much less attention. In mesoscale models, subgrid-scale horizontal fluxes most often adopt the gradient-diffusion assumption. The horizontal mixing coefficients are usually set to a constant, or through the 2D Smagorinsky formulation, or in some cases based on the 1.5-order turbulence kinetic energy (TKE) closure. In this work, horizontal turbulent mixing parameterizations using physically based characteristic velocity and length scales are proposed for the convective boundary layer based on analysis of a well-resolved, wide-domain large-eddy simulation (LES). The proposed schemes involve different levels of sophistication. The first two schemes can be used together with first-order PBL schemes, while the third uses TKE to define its characteristic velocity scale and can be used together with TKE-based higher-order PBL schemes. The current horizontal mixing formulations are also assessed a priori through the filtered LES results to illustrate their limitations. The proposed parameterizations are tested a posteriori in idealized simulations of turbulent dispersion of a passive scalar. Comparisons show improved horizontal dispersion by the proposed schemes, and further demonstrate the weakness of the current schemes.
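The 2-D Smagorinsky formulation mentioned above ties the horizontal mixing coefficient to the horizontal deformation of the resolved wind. A minimal numpy sketch (coefficient value and grid layout, axis 0 = x with uniform spacing, are assumptions; the schemes proposed in this record replace this with physically based velocity and length scales):

```python
import numpy as np

def smagorinsky_2d_kh(u, v, dx, c=0.25):
    """Horizontal mixing coefficient from the 2-D Smagorinsky formulation:
    K_h = (c * dx)^2 * |D|, where |D| combines the horizontal tension
    (du/dx - dv/dy) and shearing (dv/dx + du/dy) deformation terms.
    u, v sampled as u[i, j] with axis 0 = x, axis 1 = y.
    """
    dudx, dudy = np.gradient(u, dx, dx)
    dvdx, dvdy = np.gradient(v, dx, dx)
    defo = np.sqrt((dudx - dvdy) ** 2 + (dvdx + dudy) ** 2)
    return (c * dx) ** 2 * defo
```

Because K_h vanishes wherever the resolved deformation does, this formulation underestimates mixing in a convective boundary layer whose transport is driven by subgrid thermals rather than resolved shear, which is the weakness the proposed schemes address.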

  3. Large Eddy Simulation of an SD7003 Airfoil: Effects of Reynolds number and Subgrid-scale modeling

    Science.gov (United States)

    Sarlak, Hamid

    2017-05-01

    This paper presents the results of a series of numerical simulations studying the aerodynamic characteristics of the low-Reynolds-number Selig-Donovan airfoil, SD7003. The Large Eddy Simulation (LES) technique is used for all computations at chord-based Reynolds numbers of 10,000, 24,000 and 60,000, and the simulations primarily investigate the role of sub-grid scale (SGS) modeling in the dynamics of the flow generated over the airfoil, which has not been dealt with in great detail in the past. The simulations are seen to be increasingly influenced by SGS modeling as the Reynolds number increases, and the effect is visible even at the relatively low chord Reynolds number of 60,000. Among the tested models, the dynamic Smagorinsky model gives the poorest predictions of the flow, with an overprediction of lift and a larger separation on the airfoil's suction side, while implicit LES offers the closest pressure-distribution predictions compared with the literature.

  4. Sub-grid scale models for discontinuous Galerkin methods based on the Mori-Zwanzig formalism

    Science.gov (United States)

    Parish, Eric; Duraisamy, Karthik

    2017-11-01

    The optimal prediction framework of Chorin et al., which is a reformulation of the Mori-Zwanzig (M-Z) formalism of non-equilibrium statistical mechanics, provides a framework for the development of mathematically-derived closure models. The M-Z formalism provides a methodology to reformulate a high-dimensional Markovian dynamical system as a lower-dimensional, non-Markovian (non-local) system. In this lower-dimensional system, the effects of the unresolved scales on the resolved scales are non-local and appear as a convolution integral. The non-Markovian system is an exact statement of the original dynamics and is used as a starting point for model development. In this work, we investigate the development of M-Z-based closure models within the context of the Variational Multiscale Method (VMS). The method relies on a decomposition of the solution space into two orthogonal subspaces. The impact of the unresolved subspace on the resolved subspace is shown to be non-local in time and is modeled through the M-Z formalism. The models are applied to hierarchical discontinuous Galerkin discretizations. Commonalities between the M-Z closures and conventional flux schemes are explored. This work was supported in part by AFOSR under the project ''LES Modeling of Non-local effects using Statistical Coarse-graining'' with Dr. Jean-Luc Cambier as the technical monitor.
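
The memory-convolution structure described above can be made concrete on a linear toy problem. In the sketch below, eliminating an "unresolved" variable y from the coupled pair dx/dt = a*x + b*y, dy/dt = c*x + d*y (with y(0) = 0) yields an exact non-Markovian equation for x alone, dx/dt = a*x + b*c*∫ exp(d*(t-s)) x(s) ds; the coefficients and step sizes are arbitrary illustrative numbers, not related to the paper's discretizations.

```python
import numpy as np

a, b, c, d = -1.0, 0.5, 0.3, -2.0
dt, n = 5e-3, 1000

# full (Markovian) two-variable system, forward Euler
x_f, y_f = 1.0, 0.0
for _ in range(n):
    x_f, y_f = (x_f + dt * (a * x_f + b * y_f),
                y_f + dt * (c * x_f + d * y_f))

# reduced (non-Markovian) system: a convolution memory integral replaces y
xs = np.empty(n + 1)
xs[0] = 1.0
for k in range(n):
    w = np.exp(d * dt * (k - np.arange(k + 1)))   # kernel exp(d*(t_k - t_i))
    mem = b * c * dt * np.dot(w, xs[:k + 1])      # rectangle-rule convolution
    xs[k + 1] = xs[k] + dt * (a * xs[k] + mem)

print(x_f, xs[-1])                                # the two should nearly agree
```

The two trajectories agree to discretization error, illustrating that the reduced non-Markovian system is an exact restatement of the original dynamics rather than an approximation; modeling enters only when the memory kernel must itself be approximated.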

  5. Final Report. Evaluating the Climate Sensitivity of Dissipative Subgrid-Scale Mixing Processes and Variable Resolution in NCAR's Community Earth System Model

    Energy Technology Data Exchange (ETDEWEB)

    Jablonowski, Christiane [Univ. of Michigan, Ann Arbor, MI (United States)

    2015-12-14

    The goals of this project were to (1) assess and quantify the sensitivity and scale-dependency of unresolved subgrid-scale mixing processes in NCAR’s Community Earth System Model (CESM), and (2) to improve the accuracy and skill of forthcoming CESM configurations on modern cubed-sphere and variable-resolution computational grids. The research thereby contributed to the description and quantification of uncertainties in CESM’s dynamical cores and their physics-dynamics interactions.

  6. A Dynamic Subgrid Scale Model for Large Eddy Simulations Based on the Mori-Zwanzig Formalism

    CERN Document Server

    Parish, Eric J

    2016-01-01

    The development of reduced models for complex systems that lack scale separation remains one of the principal challenges in computational physics. The optimal prediction framework of Chorin et al., which is a reformulation of the Mori-Zwanzig (M-Z) formalism of non-equilibrium statistical mechanics, provides a methodology for the development of mathematically-derived reduced models of dynamical systems. Several promising models have emerged from the optimal prediction community and have found application in molecular dynamics and turbulent flows. In this work, a novel M-Z-based closure model that addresses some of the deficiencies of existing methods is developed. The model is constructed by exploiting similarities between two levels of coarse-graining via the Germano identity of fluid mechanics and by assuming that memory effects have a finite temporal support. The appeal of the proposed model, which will be referred to as the `dynamic-$\\tau$' model, is that it is parameter-free and has a structural form imp...

  7. Simple lattice Boltzmann subgrid-scale model for convectional flows with high Rayleigh numbers within an enclosed circular annular cavity

    Science.gov (United States)

    Chen, Sheng; Tölke, Jonas; Krafczyk, Manfred

    2009-08-01

    Natural convection within an enclosed circular annular cavity formed by two concentric vertical cylinders is of fundamental interest and practical importance. Generally, the assumption of axisymmetric thermal flow is adopted for simulating such natural convection, and this assumption is held to remain valid even for some turbulent convection. The Rayleigh numbers (Ra) of realistic flows are usually very high, yet work on designing suitable and efficient lattice Boltzmann (LB) models for such flows is quite rare. To bridge the gap, in this paper a simple LB subgrid-scale (SGS) model, based on our recent work [S. Chen, J. Tölke, and M. Krafczyk, Phys. Rev. E 79, 016704 (2009); S. Chen, J. Tölke, S. Geller, and M. Krafczyk, Phys. Rev. E 78, 046703 (2008)], is proposed for simulating convectional flow with high Ra within an enclosed circular annular cavity. The key parameter for the SGS model can be evaluated quite easily and efficiently by the present model. The numerical experiments demonstrate that the present model works well over a large range of Ra and Prandtl numbers (Pr). Though in the present study a popular static Smagorinsky turbulence model is adopted to demonstrate how to develop an LB SGS model for simulating axisymmetric thermal flows with high Ra, other state-of-the-art turbulence models can be incorporated into the present model in the same way. In addition, the present model can be extended straightforwardly to simulate other axisymmetric convectional flows with high Ra, for example, turbulent convection with internal volumetric heat generation in a vertical cylinder, which is an important simplified representation of a nuclear reactor.
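
A static Smagorinsky closure in a BGK lattice Boltzmann setting is often implemented through a closed-form effective relaxation time, which can be sketched as below. The constants follow the common D2Q9 derivation in lattice units (Δx = Δt = 1, c_s² = 1/3) and the sample values are illustrative; none of this is taken from the paper itself.

```python
import numpy as np

def effective_tau(tau0, C_smag, Q_neq, rho=1.0):
    """Total relaxation time tau0 + tau_t from the magnitude Q_neq of the
    non-equilibrium momentum flux, sum_i f_i^neq c_i c_i (common D2Q9 form;
    reduces to tau0 when Q_neq = 0)."""
    return 0.5 * (tau0 + np.sqrt(tau0 ** 2 + 18.0 * C_smag ** 2 * Q_neq / rho))

tau0 = 0.6                       # molecular relaxation time (illustrative)
Q = np.array([0.0, 1e-4, 1e-2])  # sample non-equilibrium stress magnitudes
tau_eff = effective_tau(tau0, C_smag=0.1, Q_neq=Q)
print(tau_eff)
```

The appeal noted in the abstract, that the key SGS parameter is evaluated "easily and efficiently", comes from exactly this locality: the strain magnitude is read off the non-equilibrium distributions in each cell, with no finite-difference stencils.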

  8. Accounting for subgrid scale topographic variations in flood propagation modeling using MODFLOW

    DEFF Research Database (Denmark)

    Milzow, Christian; Kinzelbach, W.

    2010-01-01

    To be computationally viable, grid-based spatially distributed hydrological models of large wetlands or floodplains must be set up using relatively large cells (order of hundreds of meters to kilometers). Computational costs are especially high when considering the numerous model runs or model time...

  9. On the Effect of an Anisotropy-Resolving Subgrid-Scale Model on Turbulent Vortex Motions

    Science.gov (United States)

    2014-09-19

    expression coincides with the modified Leonard stress proposed by Germano et al. (1991). In this model, the SGS turbulence energy kSGS may be evaluated as...

  10. Numerical Dissipation and Subgrid Scale Modeling for Separated Flows at Moderate Reynolds Numbers

    Science.gov (United States)

    Cadieux, Francois; Domaradzki, Julian Andrzej

    2014-11-01

    Flows in rotating machinery, for unmanned and micro aerial vehicles, wind turbines, and propellers consist of different flow regimes. First, a laminar boundary layer is followed by a laminar separation bubble with a shear layer on top of it that experiences transition to turbulence. The separated turbulent flow then reattaches and evolves downstream from a nonequilibrium turbulent boundary layer to an equilibrium one. In previous work, the capability of LES to reduce the resolution requirements down to 1 % of DNS resolution for such flows was demonstrated (Cadieux et al., JFE 136-6). However, under-resolved DNS agreed better with the benchmark DNS than simulations with explicit SGS modeling because numerical dissipation and filtering alone acted as a surrogate SGS dissipation. In the present work numerical viscosity is quantified using a new method proposed recently by Schranner et al. and its effects are analyzed and compared to turbulent eddy viscosities of explicit SGS models. The effect of different SGS models on a simulation of the same flow using a non-dissipative code is also explored. Supported by NSF.

  11. A priori study of subgrid-scale flux of a passive scalar in isotropic homogeneous turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Chumakov, Sergei [Los Alamos National Laboratory

    2008-01-01

    We perform a direct numerical simulation (DNS) of forced homogeneous isotropic turbulence with a passive scalar that is forced by mean gradient. The DNS data are used to study the properties of subgrid-scale flux of a passive scalar in the framework of large eddy simulation (LES), such as alignment trends between the flux, resolved, and subgrid-scale flow structures. It is shown that the direction of the flux is strongly coupled with the subgrid-scale stress axes rather than the resolved flow quantities such as strain, vorticity, or scalar gradient. We derive an approximate transport equation for the subgrid-scale flux of a scalar and look at the relative importance of the terms in the transport equation. A particular form of LES tensor-viscosity model for the scalar flux is investigated, which includes the subgrid-scale stress. Effect of different models for the subgrid-scale stress on the model for the subgrid-scale flux is studied.
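
The a priori extraction of an SGS scalar flux from fully resolved data, as described above, can be sketched in a few lines. The 2-D random fields stand in for DNS data and the top-hat filter is an assumption; the point is only the defining operation q_i = filt(u_i φ) − filt(u_i) filt(φ).

```python
import numpy as np

def box_filter2d(f, w):
    """Periodic 2-D top-hat filter of width w cells."""
    out = np.zeros_like(f)
    for sx in range(-(w // 2), w // 2 + 1):
        for sy in range(-(w // 2), w // 2 + 1):
            out += np.roll(np.roll(f, sx, 0), sy, 1)
    return out / (w // 2 * 2 + 1) ** 2

rng = np.random.default_rng(1)
u = rng.standard_normal((64, 64))      # stand-in "DNS" velocity component
phi = rng.standard_normal((64, 64))    # stand-in "DNS" passive scalar

w = 5                                  # filter width in cells
q = box_filter2d(u * phi, w) - box_filter2d(u, w) * box_filter2d(phi, w)
print(q.mean())
```

With real DNS fields, the alignment study in the abstract amounts to comparing the direction of q against the eigenframes of the SGS stress and of the resolved strain, vorticity, and scalar gradient.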

  12. Assessment of subgrid-scale models with a large-eddy simulation-dedicated experimental database: The pulsatile impinging jet in turbulent cross-flow

    Science.gov (United States)

    Baya Toda, Hubert; Cabrit, Olivier; Truffin, Karine; Bruneaux, Gilles; Nicoud, Franck

    2014-07-01

    Large-Eddy Simulation (LES) in complex geometries and industrial applications like piston engines, gas turbines, or aircraft engines requires the use of advanced subgrid-scale (SGS) models able to take into account the main flow features and the turbulence anisotropy. Keeping this goal in mind, this paper reports a LES-dedicated experiment of a pulsatile hot-jet impinging a flat-plate in the presence of a cold turbulent cross-flow. Unlike commonly used academic test cases, this configuration involves different flow features encountered in complex configurations: shear/rotating regions, stagnation point, wall-turbulence, and the propagation of a vortex ring along the wall. This experiment was also designed with the aim to use quantitative and nonintrusive optical diagnostics such as Particle Image Velocimetry, and to easily perform a LES involving a relatively simple geometry and well-controlled boundary conditions. Hence, two eddy-viscosity-based SGS models are investigated: the dynamic Smagorinsky model [M. Germano, U. Piomelli, P. Moin, and W. Cabot, "A dynamic subgrid-scale eddy viscosity model," Phys. Fluids A 3(7), 1760-1765 (1991)] and the σ-model [F. Nicoud, H. B. Toda, O. Cabrit, S. Bose, and J. Lee, "Using singular values to build a subgrid-scale model for large eddy simulations," Phys. Fluids 23(8), 085106 (2011)]. Both models give similar results during the first phase of the experiment. However, it was found that the dynamic Smagorinsky model could not accurately predict the vortex-ring propagation, while the σ-model provides a better agreement with the experimental measurements. Setting aside the implementation of the dynamic procedure (implemented here in its simplest form, i.e., without averaging over homogeneous directions and with clipping of negative values to ensure numerical stability), it is suggested that the mitigated predictions of the dynamic Smagorinsky model are due to the dynamic constant, which strongly depends on the mesh resolution

  13. An explicit relaxation filtering framework based upon Perona-Malik anisotropic diffusion for shock capturing and subgrid scale modeling of Burgers turbulence

    CERN Document Server

    Maulik, Romit

    2016-01-01

    In this paper, we introduce a relaxation filtering closure approach to account for subgrid scale effects in explicitly filtered large eddy simulations using the concept of anisotropic diffusion. We utilize the Perona-Malik diffusion model and demonstrate its shock capturing ability and spectral performance for solving the Burgers turbulence problem, which is a simplified prototype for more realistic turbulent flows showing the same quadratic nonlinearity. Our numerical assessments present the behavior of various diffusivity functions in conjunction with a detailed sensitivity analysis with respect to the free modeling parameters. In comparison to direct numerical simulation (DNS) and under-resolved DNS results, we find that the proposed closure model is efficient in the prevention of energy accumulation at grid cut-off and is also adept at preventing any possible spurious numerical oscillations due to shock formation under the optimal parameter choices. In contrast to other relaxation filtering approaches, it...
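
The core idea above, adding dissipation weighted by a Perona-Malik diffusivity to the Burgers equation, can be sketched with an explicit step. The diffusivity g(|u_x|) = 1/(1 + (|u_x|/κ)²), the central differencing, and all parameter values are illustrative choices, not the paper's scheme or optimal parameters.

```python
import numpy as np

n = 256
dx = 2.0 * np.pi / n
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(x)                      # steepens toward a shock under Burgers

dt, kappa, eps = 0.2 * dx, 1.0, 0.5 * dx
for _ in range(100):               # evolve to a pre-shock time
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
    g = 1.0 / (1.0 + (np.abs(ux) / kappa) ** 2)   # Perona-Malik diffusivity
    flux = 0.5 * u ** 2
    conv = (np.roll(flux, -1) - np.roll(flux, 1)) / (2.0 * dx)
    diff = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2
    u = u - dt * conv + dt * eps * g * diff       # relaxation/filtering step

print(np.abs(u).max())
```

The gradient-dependent weight g is what makes the added dissipation anisotropic in the Perona-Malik sense: it is selective in solution gradient rather than uniform, which is the property the closure exploits near the grid cut-off.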

  14. Subgrid-scale turbulence in shock-boundary layer flows

    Science.gov (United States)

    Jammalamadaka, Avinash; Jaberi, Farhad

    2015-04-01

    Data generated by direct numerical simulation (DNS) for a Mach 2.75 zero-pressure-gradient turbulent boundary layer interacting with shocks of different intensities are used for a priori analysis of subgrid-scale (SGS) turbulence and various terms in the compressible filtered Navier-Stokes equations. The numerical method used for DNS is based on a hybrid scheme that uses a non-dissipative central scheme in the shock-free turbulent regions and a robust monotonicity-preserving scheme in the shock regions. The behavior of the SGS stresses and their components, namely the Leonard, Cross and Reynolds components, is examined in various regions of the flow for different shock intensities and filter widths. The backscatter in various regions of the flow is found to be significant only instantaneously, while the ensemble-averaged statistics indicate no significant backscatter. The budgets for the SGS kinetic energy equation are examined for a better understanding of shock-turbulence interactions at the subgrid level and also with the aim of providing useful information for one-equation LES models. A term-by-term analysis of the SGS terms in the filtered total energy equation indicates that while each term in this equation is significant by itself, the net contribution by all of them is relatively small. This observation is consistent with our a posteriori analysis.
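
The Leonard/Cross/Reynolds decomposition examined above can be sketched for a single velocity component. The 1-D random field and top-hat filter are illustrative stand-ins; the identity τ = L + C + R holds exactly for any linear filter, which the final line checks.

```python
import numpy as np

def filt(f, w=7):
    """Periodic top-hat filter of width w points (a linear filter)."""
    return np.mean([np.roll(f, s) for s in range(-(w // 2), w // 2 + 1)], axis=0)

rng = np.random.default_rng(2)
u = rng.standard_normal(512)
ubar = filt(u)
up = u - ubar                              # subfilter fluctuation

tau = filt(u * u) - ubar * ubar            # exact SGS stress (scalar analogue)
leonard = filt(ubar * ubar) - ubar * ubar  # resolved-resolved interactions
cross = 2.0 * filt(ubar * up)              # resolved-subfilter interactions
reynolds = filt(up * up)                   # subfilter-subfilter interactions

# the decomposition is exact: tau = L + C + R
print(np.max(np.abs(tau - (leonard + cross + reynolds))))
```

In the a priori study, each of these three terms is binned by flow region (upstream, interaction zone, downstream) and by filter width, which is what the abstract refers to.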

  15. Modeling lightning-NOx chemistry on a sub-grid scale in a global chemical transport model

    Directory of Open Access Journals (Sweden)

    A. Gressent

    2016-05-01

    Full Text Available For the first time, a plume-in-grid approach is implemented in a chemical transport model (CTM) to parameterize the effects of the nonlinear reactions occurring within highly concentrated NOx plumes from lightning NOx emissions (LNOx) in the upper troposphere. It is characterized by a set of parameters including the plume lifetime, the effective reaction rate constant related to NOx–O3 chemical interactions, and the fractions of NOx conversion into HNO3 within the plume. Parameter estimates were made using the Dynamical Simple Model of Atmospheric Chemical Complexity (DSMACC) box model, simple plume dispersion simulations, and the 3-D Meso-NH non-hydrostatic mesoscale atmospheric model. In order to assess the impact of the LNOx plume approach on the NOx and O3 distributions on a large scale, simulations for the year 2006 were performed using the GEOS-Chem global model with a horizontal resolution of 2° × 2.5°. The implementation of the LNOx parameterization leads to a decrease of NOx and O3 on a large scale over the regions characterized by strong lightning activity (up to 25 and 8 %, respectively, over central Africa in July) and a relative increase downwind of LNOx emissions (up to 18 and 2 % for NOx and O3, respectively, in July). The calculated variability in NOx and O3 mixing ratios around the mean value according to the known uncertainties in the parameter estimates is at a maximum over continental tropical regions, with ΔNOx [−33.1, +29.7] ppt and ΔO3 [−1.56, +2.16] ppb in January, and ΔNOx [−14.3, +21] ppt and ΔO3 [−1.18, +1.93] ppb in July, mainly depending on the determination of the diffusion properties of the atmosphere and the initial NO mixing ratio injected by lightning. This approach allows us (i) to reproduce a more realistic lightning NOx chemistry leading to better NOx and O3 distributions on the large scale and (ii) to focus on other improvements to reduce remaining uncertainties from processes
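
The plume-in-grid bookkeeping described above can be sketched schematically: NOx injected by lightning is held in a subgrid plume for its lifetime, undergoes first-order loss at an effective rate, and the resulting HNO3 fraction plus the surviving NOx are then released to the host grid cell. The function name and all numbers are illustrative assumptions, not the paper's parameter estimates.

```python
import math

def release_to_grid(nox0, k_eff, t_plume, f_hno3):
    """Return (NOx, HNO3) handed to the host grid cell when the subgrid
    plume dissolves after its lifetime t_plume (seconds). Mass is conserved:
    the two outputs always sum to nox0."""
    nox_left = nox0 * math.exp(-k_eff * t_plume)   # in-plume first-order loss
    converted = nox0 - nox_left                    # NOx processed in the plume
    return nox_left + (1.0 - f_hno3) * converted, f_hno3 * converted

# illustrative numbers: 100 units of NOx, 3-hour plume lifetime
nox, hno3 = release_to_grid(nox0=100.0, k_eff=2e-5,
                            t_plume=3 * 3600.0, f_hno3=0.2)
print(nox, hno3)
```

The point of the parameterization is that the nonlinear in-plume NOx–O3 chemistry is collapsed into the effective rate constant and conversion fraction, so the coarse grid never sees the unresolvable concentrated plume directly.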

  16. Subgrid Modeling Geomorphological and Ecological Processes in Salt Marsh Evolution

    Science.gov (United States)

    Shi, F.; Kirby, J. T., Jr.; Wu, G.; Abdolali, A.; Deb, M.

    2016-12-01

    Numerically modeling the long-term evolution of salt marshes is challenging because it requires an extensive use of computational resources. Due to the presence of narrow tidal creeks, variations in salt marsh topography can be significant over spatial length scales on the order of a meter. With the growing availability of high-resolution bathymetry measurements, like LiDAR-derived DEM data, it is increasingly desirable to run a high-resolution model in a large domain and for a long period of time to get trends of sedimentation patterns, morphological change and marsh evolution. However, high spatial resolution poses a major challenge in both computational time and memory storage when simulating a salt marsh with dimensions of up to O(100 km^2) with a small time step. In this study, we have developed a so-called Pre-storage, Sub-grid Model (PSM, Wu et al., 2015) for simulating flooding and draining processes in salt marshes. The simulation of the Brokenbridge salt marsh, Delaware, shows that, with the combination of the sub-grid model and the pre-storage method, over 2 orders of magnitude of computational speed-up can be achieved with minimal loss of model accuracy. We recently extended PSM to include a sediment transport component and models for biomass growth and sedimentation in the sub-grid model framework. The sediment transport model is formulated based on a newly derived sub-grid sediment concentration equation following Defina's (2000) area-averaging procedure. Suspended sediment transport is modeled by the advection-diffusion equation at the coarse grid level, but the local erosion and sedimentation rates are integrated at the sub-grid level. The morphological model is based on the existing morphological model in NearCoM (Shi et al., 2013), extended to include organic production from the biomass model. The vegetation biomass is predicted by a simple logistic equation model proposed by Marani et al. (2010).
The biomass component is loosely coupled with hydrodynamic and
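
The logistic biomass model attributed above to Marani et al. (2010) is simple enough to sketch directly: dB/dt = r·B·(1 − B/Bmax), stepped here with forward Euler. The growth rate, carrying capacity, and time step are illustrative values, not those of the study.

```python
def grow(B, r=0.01, Bmax=1.0, dt=1.0, steps=2000):
    """Forward-Euler integration of the logistic equation
    dB/dt = r * B * (1 - B / Bmax)."""
    for _ in range(steps):
        B += dt * r * B * (1.0 - B / Bmax)
    return B

B_final = grow(B=0.05)   # small initial biomass approaches Bmax
print(B_final)
```

In the coupled model this equation is evaluated per sub-grid cell, so biomass (and hence organic sedimentation) responds to the locally resolved hydroperiod rather than to coarse-cell averages.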

  17. High-Resolution Global Modeling of the Effects of Subgrid-Scale Clouds and Turbulence on Precipitating Cloud Systems

    Energy Technology Data Exchange (ETDEWEB)

    Bogenschutz, Peter [National Center for Atmospheric Research, Boulder, CO (United States); Moeng, Chin-Hoh [National Center for Atmospheric Research, Boulder, CO (United States)

    2015-10-13

    The PI’s at the National Center for Atmospheric Research (NCAR), Chin-Hoh Moeng and Peter Bogenschutz, have primarily focused their time on the implementation of the Simplified-Higher Order Turbulence Closure (SHOC; Bogenschutz and Krueger 2013) to the Multi-scale Modeling Framework (MMF) global model and testing of SHOC on deep convective cloud regimes.

  18. Intercomparison of different subgrid-scale models for the Large Eddy Simulation of the diurnal evolution of the atmospheric boundary layer during the Wangara experiment

    Science.gov (United States)

    Dall'Ozzo, C.; Carissimo, B.; Musson-Genon, L.; Dupont, E.; Milliez, M.

    2012-04-01

    The study of a whole diurnal cycle of the atmospheric boundary layer evolving through unstable, neutral and stable states is essential to test a model applicable to the dispersion of pollutants. Consequently, an LES of a diurnal cycle is performed and compared to observations from the Wangara experiment (Day 33-34). All simulations are done with Code_Saturne [1], an open source CFD code. The synthetic eddy method (SEM) [2] is implemented to initialize turbulence at the beginning of the simulation. Two different subgrid-scale (SGS) models are tested: the Smagorinsky model [3],[4] and the dynamic Wong and Lilly model [5]. The first, the most classical, uses a Smagorinsky constant Cs to parameterize the dynamical turbulent viscosity, while the second relies on a variable C. Cs remains insensitive to the atmospheric stability level, in contrast to the parameter C determined by the Wong and Lilly model, which is based on minimizing the error between the resolved turbulent stress tensor (Lij) and the difference of the SGS stress tensors at two different filter scales (Mij). Furthermore, the thermal eddy diffusivity, as opposed to the Smagorinsky model, is calculated with a dynamic determination of the Prandtl number. The results are compared with previous simulations from Basu et al. (2008) [6], using a locally averaged scale-dependent dynamic (LASDD) SGS model, and with previous RANS simulations. The accuracy in reproducing the experimental atmospheric conditions is discussed, especially regarding the nighttime low-level jet formation. In addition, the benefit of utilizing a coupled radiative model is discussed.

  19. Large Eddy Simulations of a Premixed Jet Combustor Using Flamelet-Generated Manifolds: Effects of Heat Loss and Subgrid-Scale Models

    KAUST Repository

    Hernandez Perez, Francisco E.

    2017-01-05

    Large eddy simulations of a turbulent premixed jet flame in a confined chamber were conducted using the flamelet-generated manifold technique for chemistry tabulation. The configuration is characterized by an off-center nozzle having an inner diameter of 10 mm, supplying a lean methane-air mixture with an equivalence ratio of 0.71 and a mean velocity of 90 m/s, at 573 K and atmospheric pressure. Conductive heat loss is accounted for in the manifold via burner-stabilized flamelets, and the subgrid-scale (SGS) turbulence-chemistry interaction is modeled via presumed probability density functions. Comparisons between numerical results and measured data show that a considerable improvement in the prediction of temperature is achieved when heat losses are included in the manifold, as compared to the adiabatic one. Additional improvement in the temperature predictions is obtained by incorporating radiative heat losses. Moreover, further enhancements in the LES predictions are achieved by employing SGS models based on transport equations, such as the SGS turbulence kinetic energy equation with dynamic coefficients. While the numerical results display good agreement up to a distance of 4 nozzle diameters downstream of the nozzle exit, the results become less satisfactory farther downstream, suggesting that further improvements in the modeling are required, among which a more accurate model for the SGS variance of the progress variable may be relevant.

  20. Effect of reactions in small eddies on biomass gasification with eddy dissipation concept - Sub-grid scale reaction model.

    Science.gov (United States)

    Chen, Juhui; Yin, Weijie; Wang, Shuai; Meng, Cheng; Li, Jiuru; Qin, Bai; Yu, Guangbin

    2016-07-01

    The large-eddy simulation (LES) approach is used for gas turbulence, and an eddy dissipation concept (EDC) sub-grid scale (SGS) reaction model is employed for reactions in small eddies. The simulated gas molar fractions are in better agreement with experimental data when the EDC-SGS reaction model is used. The effect of reactions in small eddies on biomass gasification is analyzed in detail with the EDC-SGS reaction model. The distributions of the SGS reaction rates, which represent the reactions in small eddies, are analyzed together with the particle concentration and temperature. The distributions of the SGS reaction rates follow a trend similar to that of the total reaction rates, and their values account for about 15% of the total reaction rates. The heterogeneous reaction rates with the EDC-SGS reaction model are also improved during the biomass gasification process in the bubbling fluidized bed.

  1. Advanced subgrid-scale modeling for convection-dominated species transport at fluid interfaces with application to mass transfer from rising bubbles

    Science.gov (United States)

    Weiner, Andre; Bothe, Dieter

    2017-10-01

    This paper presents a novel subgrid scale (SGS) model for simulating convection-dominated species transport at deformable fluid interfaces. One possible application is the Direct Numerical Simulation (DNS) of mass transfer from rising bubbles. The transport of a dissolving gas along the bubble-liquid interface is determined by two transport phenomena: convection in the streamwise direction and diffusion in the interface-normal direction. The convective transport for technical bubble sizes is several orders of magnitude stronger, leading to a thin concentration boundary layer around the bubble. A true DNS, fully resolving hydrodynamic and mass transfer length scales, results in infeasible computational costs. Our approach is therefore a DNS of the flow field combined with an SGS model to compute the mass transfer between bubble and liquid. An appropriate model function is used to compute the numerical fluxes on all cell faces of an interface cell. This allows the mass transfer to be predicted correctly even if the concentration boundary layer is fully contained in a single cell layer around the interface. We show that the SGS model reduces the resolution requirements at the interface by a factor of ten and more. The integral flux correction is also applicable to other thin boundary layer problems. Two flow regimes are investigated to validate the model. A semi-analytical solution for creeping flow is used to assess local and global mass transfer quantities. For higher Reynolds numbers ranging from Re = 100 to Re = 460 and Péclet numbers between Pe = 10^4 and Pe = 4·10^6, we compare the global Sherwood number against correlations from the literature. In terms of accuracy, the predicted mass transfer never deviates more than 4% from the reference values.

  2. The subgrid-scale scalar variance under supercritical pressure conditions

    Science.gov (United States)

    Masi, Enrica; Bellan, Josette

    2011-08-01

    To model the subgrid-scale (SGS) scalar variance under supercritical-pressure conditions, an equation is first derived for it. This equation is considerably more complex than its equivalent for atmospheric-pressure conditions. Using a previously created direct numerical simulation (DNS) database of transitional states obtained for binary-species systems in the context of temporal mixing layers, the activity of terms in this equation is evaluated, and it is found that some of these new terms have magnitude comparable to that of governing terms in the classical equation. Most prominent among these new terms are those expressing the variation of diffusivity with thermodynamic variables and Soret terms having dissipative effects. Since models are not available for these new terms that would enable solving the SGS scalar variance equation, the adopted strategy is to directly model the SGS scalar variance. Two models are investigated for this quantity, both developed in the context of compressible flows. The first one is based on an approximate deconvolution approach and the second one is a gradient-like model which relies on a dynamic procedure using the Leonard term expansion. Both models are successful in reproducing the SGS scalar variance extracted from the filtered DNS database, and moreover, when used in the framework of a probability density function (PDF) approach in conjunction with the β-PDF, they excellently reproduce a filtered quantity which is a function of the scalar. For the dynamic model, the proportionality coefficient spans a small range of values through the layer cross-stream coordinate, boding well for the stability of large eddy simulations using this model.
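
The presumed β-PDF step mentioned above can be sketched numerically: the resolved scalar mean and SGS scalar variance are moment-matched to beta parameters, and a filtered function of the scalar is then obtained by quadrature. The test function and midpoint quadrature are implementation conveniences, not the paper's procedure.

```python
import math

def beta_pdf(c, a, b):
    """Beta(a, b) density on (0, 1)."""
    norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return norm * c ** (a - 1) * (1.0 - c) ** (b - 1)

def filtered(f, mean, var, n=4000):
    """Filtered value of f(c) under a beta PDF moment-matched to
    (mean, var); requires var < mean * (1 - mean)."""
    s = mean * (1.0 - mean) / var - 1.0
    a, b = mean * s, (1.0 - mean) * s
    dc = 1.0 / n
    cs = [(i + 0.5) * dc for i in range(n)]          # midpoint quadrature
    return sum(f(c) * beta_pdf(c, a, b) * dc for c in cs)

mean, var = 0.3, 0.02
print(filtered(lambda c: 1.0, mean, var))   # PDF normalization, ~1
print(filtered(lambda c: c, mean, var))     # recovers the mean, ~0.3
```

This is why an accurate SGS scalar variance matters in the abstract above: the variance fixes the shape of the presumed PDF, and through it every filtered nonlinear function of the scalar.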

  3. Subgrid Parameterization of the Soil Moisture Storage Capacity for a Distributed Rainfall-Runoff Model

    Directory of Open Access Journals (Sweden)

    Weijian Guo

    2015-05-01

    Full Text Available Spatial variability plays an important role in nonlinear hydrologic processes. Due to the limitation of computational efficiency and data resolution, subgrid variability is usually assumed to be uniform for most grid-based rainfall-runoff models, which leads to the scale-dependence of model performances. In this paper, the scale effect on the Grid-Xinanjiang model was examined. The bias of the estimation of precipitation, runoff, evapotranspiration and soil moisture at the different grid scales, along with the scale-dependence of the effective parameters, highlights the importance of well representing the subgrid variability. This paper presents a subgrid parameterization method to incorporate the subgrid variability of the soil storage capacity, which is a key variable that controls runoff generation and partitioning in the Grid-Xinanjiang model. In light of the similar spatial pattern and physical basis, the soil storage capacity is correlated with the topographic index, whose spatial distribution can more readily be measured. A beta distribution is introduced to represent the spatial distribution of the soil storage capacity within the grid. The results derived from the Yanduhe Basin show that the proposed subgrid parameterization method can effectively correct the watershed soil storage capacity curve. Compared to the original Grid-Xinanjiang model, the model performances are quite consistent at the different grid scales when the subgrid variability is incorporated. This subgrid parameterization method reduces the recalibration necessity when the Digital Elevation Model (DEM) resolution is changed. Moreover, it improves the potential for the application of the distributed model in ungauged basins.
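
The role of a subgrid storage-capacity distribution in runoff generation can be sketched numerically. The sketch below uses the classic Xinanjiang-type power curve F(w) = 1 − (1 − w/wmax)^b for the within-cell distribution (the paper fits a beta distribution linked to the topographic index instead); the power form, uniform prior storage, and all parameter values are illustrative assumptions.

```python
import numpy as np

def cell_runoff(P, w0, wmax=100.0, b=0.4, n=10_000):
    """Areal-average saturation-excess runoff (mm) from rainfall P (mm) over
    n equal-area subgrid columns whose storage capacities follow
    F(w) = 1 - (1 - w/wmax)**b; w0 is a uniform prior water depth."""
    x = (np.arange(n) + 0.5) / n
    cap = wmax * (1.0 - (1.0 - x) ** (1.0 / b))   # invert F(w) = x
    store = np.minimum(w0, cap)                   # prior storage, capped
    excess = np.maximum(store + P - cap, 0.0)     # saturation excess per column
    return excess.mean()

P = 30.0
R = cell_runoff(P, w0=40.0)
print(R)
```

Because runoff depends nonlinearly on where each column sits relative to its capacity, replacing the distribution by a uniform (single-capacity) cell changes R, which is precisely the grid-scale dependence the subgrid parameterization removes.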

  4. Evapotranspiration and cloud variability at regional sub-grid scales

    Science.gov (United States)

    Vila-Guerau de Arellano, Jordi; Sikma, Martin; Pedruzo-Bagazgoitia, Xabier; van Heerwaarden, Chiel; Hartogensis, Oscar; Ouwersloot, Huug

    2017-04-01

    In regional and global models, uncertainties arise due to our incomplete understanding of the coupling between biochemical and physical processes. Representing their impact depends on our ability to calculate these processes using physically sound parameterizations, since they are unresolved at scales smaller than the grid size. More specifically over land, the coupling between evapotranspiration, turbulent transport of heat and moisture, and clouds lacks a combined representation that takes these sub-grid scale interactions into account. Our approach is based on understanding how radiation, surface exchange, turbulent transport and moist convection interact from the leaf scale to the cloud scale. We therefore place special emphasis on plant stomatal aperture as the main regulator of CO2 assimilation and water transpiration, a key source of moisture to the atmosphere. Plant functionality is critically modulated by interactions with atmospheric conditions occurring at very short spatiotemporal scales, such as cloud radiation perturbations or water vapour turbulent fluctuations. By explicitly resolving these processes, the LES (large-eddy simulation) technique enables us to characterize and better understand the interactions between canopies and the local atmosphere. This includes the adaptation time of vegetation to rapid changes in atmospheric conditions driven by turbulence or the presence of cumulus clouds. Our LES experiments are based on explicitly coupling the diurnal atmospheric dynamics to a plant physiology model. Our general hypothesis is that different partitioning of direct and diffuse radiation leads to different responses of the vegetation. As a result, there are changes in the water use efficiencies and shifts in the partitioning of sensible and latent heat fluxes under the presence of clouds. Our presentation is as follows. First, we discuss the ability of LES to reproduce the surface energy balance including photosynthesis and CO2 soil

  5. Predicting the impacts of fishing canals on Floodplain Dynamics in Northern Cameroon using a small-scale sub-grid hydraulic model

    Science.gov (United States)

    Shastry, A. R.; Durand, M. T.; Fernandez, A.; Hamilton, I.; Kari, S.; Labara, B.; Laborde, S.; Mark, B. G.; Moritz, M.; Neal, J. C.; Phang, S. C.

    2015-12-01

    Modeling Regime Shifts in the Logone floodplain (MORSL) is an ongoing interdisciplinary project at The Ohio State University studying the ecological, social and hydrological system of the region. This floodplain, located in Northern Cameroon, is part of the Lake Chad basin. Between September and October the floodplain is inundated by overbank flow from the Logone River, which is important for agriculture and fishing. Fishermen build canals to catch fish during the flood's recession to the river by installing fishnets at the intersection of the canals and the river. Fishing canals thus connect the river to natural depressions of the terrain, which act as seasonal ponds during this part of the year. The annual increase in the number of canals affects hydraulics and hence fishing in the region. In this study, the Bara region (1 km2) of the Logone floodplain, through which Lorome Mazra flows, is modeled using LISFLOOD-FP, a raster-based model with sub-grid parameterizations of canals. The aim of the study is to find out how small-scale, local features like canals and fishnets govern the flow, so that they can be incorporated in a large-scale model of the floodplain at a coarser spatial resolution. We will also study the effect of an increasing number of canals on the flooding pattern. We use a simplified version of the hydraulic system at a grid-cell size of 30 m, using synthetic topography, parameterized fishing canals, and representing fishnets as trash screens. The inflow at Bara is obtained from a separate, lower-resolution (1-km grid-cell) model run, which is forced by daily discharge records obtained from Katoa, located about 25 km to the south of Bara. The model appropriately captures the rise and recession of the annual flood, supporting use of the LISFLOOD-FP approach. Predicted water levels at specific points in the river, the canals, the depression and the floodplain will be compared to field-measured heights of flood recession in Bara, November 2014.
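
    Sub-grid canals in LISFLOOD-FP-style models are typically treated as one-dimensional channels with Manning friction. As a rough illustration of the kind of discharge a single parameterized canal carries, the sketch below evaluates Manning's equation; the roughness, geometry, and slope are hypothetical, not values from the Bara model.

```python
import math

def manning_discharge(n, area, wetted_perimeter, slope):
    """Discharge (m^3/s) from Manning's equation: Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    r = area / wetted_perimeter  # hydraulic radius (m)
    return (1.0 / n) * area * r ** (2.0 / 3.0) * math.sqrt(slope)

# Hypothetical rectangular canal: 2 m wide, 1 m deep, roughness n = 0.03, slope 1e-4
q_canal = manning_discharge(0.03, area=2.0, wetted_perimeter=4.0, slope=1e-4)
```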

  6. Importance of subgrid-scale parameterization in numerical simulations of lake circulation

    Science.gov (United States)

    Wang, Yongqi

    Two subgrid-scale modeling techniques--Smagorinsky's postulation for the horizontal eddy viscosity and the Mellor-Yamada level-2 model for the vertical eddy viscosity--are applied as turbulence closure conditions to numerical simulations of resolved-scale baroclinic lake circulations. The use of the total variation diminishing (TVD) technique in the numerical treatment of the advection terms in the governing equations depresses numerical diffusion to an acceptably low level and makes stable numerical performance possible with the small eddy viscosities resulting from the turbulence closure parameterizations. The results show that, with regard to the effect of an external wind stress, the vertical turbulent mixing is mainly restricted to the topmost epilimnion, with a vertical eddy viscosity of the order of 10^-3 m^2 s^-1, whilst the horizontal turbulent mixing may reach a somewhat deeper zone, with a horizontal eddy viscosity of the order of 0.1-1 m^2 s^-1. Their spatial and temporal variations and influences on numerical results are significant. A comparison with prescribed constant eddy viscosities clearly shows the importance of subgrid-scale closures on resolved-scale flows in the lake circulation simulation. A predetermination of the eddy viscosities is inappropriate and should be abandoned. Their values must be determined by suitable subgrid-scale closure techniques.
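
    The horizontal closure above is the Smagorinsky form, in which the eddy viscosity is proportional to the resolved strain-rate magnitude, nu_t = (Cs*Delta)^2 * |S| with |S| = sqrt(2 S_ij S_ij). A minimal 2D sketch; the coefficient, grid spacing, and velocity gradients are invented for illustration, chosen so the result lands near the order of magnitude quoted for the horizontal eddy viscosity.

```python
import math

def smagorinsky_viscosity(cs, delta, dudx, dudy, dvdx, dvdy):
    """Horizontal eddy viscosity nu_t = (Cs*Delta)^2 * |S|, |S| = sqrt(2 S_ij S_ij)."""
    s11 = dudx
    s22 = dvdy
    s12 = 0.5 * (dudy + dvdx)  # symmetric off-diagonal strain component
    strain = math.sqrt(2.0 * (s11 ** 2 + s22 ** 2 + 2.0 * s12 ** 2))
    return (cs * delta) ** 2 * strain

# Assumed values: Cs = 0.17, 500 m grid, weak horizontal shear of order 1e-4 1/s
nu_h = smagorinsky_viscosity(cs=0.17, delta=500.0,
                             dudx=1e-4, dudy=0.0, dvdx=0.0, dvdy=-1e-4)
```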

  7. Evaluation of Subgrid-Scale Transport of Hydrometeors in a PDF-based Scheme using High-Resolution CRM Simulations

    Science.gov (United States)

    Wong, M.; Ovchinnikov, M.; Wang, M.; Larson, V. E.

    2014-12-01

    In current climate models, the model resolution is too coarse to explicitly resolve deep convective systems. Parameterization schemes are therefore needed to represent the physical processes at the sub-grid scale. Recently, an approach based on assumed probability density functions (PDFs) has been developed to help unify the various parameterization schemes used in current global models. In particular, a unified parameterization scheme called the Cloud Layers Unified By Binormals (CLUBB) scheme has been developed and tested successfully for shallow boundary-layer clouds. CLUBB's implementation in the Community Atmosphere Model, version 5 (CAM5) is also being extended to treat deep convection cases, but parameterizing subgrid-scale vertical transport of hydrometeors remains a challenge. To investigate the roots of the problem and possible solutions, we generate a high-resolution benchmark simulation of a deep convection case using a cloud-resolving model (CRM) called the System for Atmospheric Modeling (SAM). We use the high-resolution 3D CRM results to assess the prognostic and diagnostic higher-order moments in CLUBB that relate to the subgrid-scale transport of hydrometeors. We also analyze the heat and moisture budgets in terms of CLUBB variables from the SAM benchmark simulation. The results from this study will be used to devise a better representation of vertical subgrid-scale transport of hydrometeors by utilizing the sub-grid variability information from CLUBB.

  8. A new downscaling method for sub-grid turbulence modeling

    Directory of Open Access Journals (Sweden)

    L. Rottner

    2017-06-01

    Full Text Available In this study we explore a new way to model sub-grid turbulence using particle systems. The ability of particle systems to model small-scale turbulence is evaluated using high-resolution numerical simulations. These high-resolution data are averaged to produce a coarse-grid velocity field, which is then used to drive a complete particle-system-based downscaling. Wind fluctuations and turbulent kinetic energy are compared between the particle simulations and the high-resolution simulation. Despite the simplicity of the physical model used to drive the particles, the results show that the particle system is able to represent the average field. It is shown that this system is able to reproduce much finer turbulent structures than the numerical high-resolution simulations. In addition, this study provides an estimate of the effective spatial and temporal resolution of the numerical models. This highlights the need for higher-resolution simulations in order to evaluate the very fine turbulent structures predicted by the particle systems. Finally, a study of the influence of the forcing scale on the particle system is presented.
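
    Particle systems for subgrid turbulence are commonly driven by a Langevin (Ornstein-Uhlenbeck) equation for the velocity fluctuation. The paper's exact formulation is not reproduced here, so the sketch below is a generic version with an assumed fluctuation standard deviation sigma and Lagrangian time scale T_L; it uses the exact OU update, so the simulated variance should approach sigma^2.

```python
import math
import random

def langevin_step(u_prime, sigma, t_l, dt, rng):
    """Exact Ornstein-Uhlenbeck update for a subgrid velocity fluctuation u'."""
    a = math.exp(-dt / t_l)
    return a * u_prime + sigma * math.sqrt(1.0 - a * a) * rng.gauss(0.0, 1.0)

rng = random.Random(0)            # fixed seed for reproducibility
sigma, t_l, dt = 0.5, 10.0, 0.1   # assumed fluctuation scale and time scale
u, samples = 0.0, []
for _ in range(20000):
    u = langevin_step(u, sigma, t_l, dt, rng)
    samples.append(u)
var = sum(s * s for s in samples) / len(samples)  # should approach sigma^2 = 0.25
```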

  9. Subgrid-scale stresses and scalar fluxes constructed by the multi-scale turnover Lagrangian map

    Science.gov (United States)

    AL-Bairmani, Sukaina; Li, Yi; Rosales, Carlos; Xie, Zheng-tong

    2017-04-01

    The multi-scale turnover Lagrangian map (MTLM) [C. Rosales and C. Meneveau, "Anomalous scaling and intermittency in three-dimensional synthetic turbulence," Phys. Rev. E 78, 016313 (2008)] uses nested multi-scale Lagrangian advection of fluid particles to distort a Gaussian velocity field and, as a result, generate non-Gaussian synthetic velocity fields. Passive scalar fields can be generated with the procedure when the fluid particles carry a scalar property [C. Rosales, "Synthetic three-dimensional turbulent passive scalar fields via the minimal Lagrangian map," Phys. Fluids 23, 075106 (2011)]. The synthetic fields have been shown to possess highly realistic statistics characterizing small-scale intermittency, geometrical structures, and vortex dynamics. In this paper, we present a study of the synthetic fields using the filtering approach. This approach, which has not been pursued so far, provides insights into the potential applications of the synthetic fields in large eddy simulations and subgrid-scale (SGS) modelling. The MTLM method is first generalized to model scalar fields produced by an imposed linear mean profile. We then calculate the subgrid-scale stress, SGS scalar flux, SGS scalar variance, and related quantities from the synthetic fields. Comparison with direct numerical simulations (DNSs) shows that the synthetic fields reproduce the probability distributions of the SGS energy and scalar dissipation rather well. Related geometrical statistics also display close agreement with DNS results. The synthetic fields slightly under-estimate the mean SGS energy dissipation and slightly over-predict the mean SGS scalar variance dissipation. In general, the synthetic fields tend to slightly under-estimate the probability of large fluctuations for most quantities we have examined. Small-scale anisotropy in the scalar field originating from the imposed mean gradient is captured. The sensitivity of the synthetic fields to the input spectra is assessed by
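
    The filtering approach referred to above defines the SGS stress as tau = bar(u u) - bar(u) bar(u), the filtered product minus the product of the filtered field. A minimal 1D sketch with a periodic top-hat filter and a synthetic two-mode signal (all values illustrative); for a positive filter kernel such as the top-hat, tau is non-negative pointwise by Jensen's inequality.

```python
import math

def box_filter(field, width):
    """Top-hat (box) filter with periodic boundaries; width must be odd."""
    n, half = len(field), width // 2
    return [sum(field[(i + j) % n] for j in range(-half, half + 1)) / width
            for i in range(n)]

def sgs_stress(u, width):
    """tau = filter(u*u) - filter(u)*filter(u)."""
    uu_f = box_filter([x * x for x in u], width)
    u_f = box_filter(u, width)
    return [a - b * b for a, b in zip(uu_f, u_f)]

# Synthetic signal: one well-resolved mode plus one subfilter-scale mode
u = [math.sin(2 * math.pi * i / 64) + 0.3 * math.sin(2 * math.pi * 8 * i / 64)
     for i in range(64)]
tau = sgs_stress(u, width=9)
```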

  10. Parameterization of subgrid plume dilution for use in large-scale atmospheric simulations

    Directory of Open Access Journals (Sweden)

    A. D. Naiman

    2010-03-01

    Full Text Available A new model of plume dynamics has been developed for use as a subgrid model of plume dilution in a large-scale atmospheric simulation. The model uses mean wind, shear, and diffusion parameters derived from the local large-scale variables to advance the plume cross-sectional shape and area in time. Comparisons with a large eddy simulation of aircraft emission plume dynamics, with an analytical solution to the dynamics of a sheared Gaussian plume, and with measurements of aircraft exhaust plume dilution at cruise altitude show good agreement with these previous studies. We argue that the model also provides a reasonable approximation of line-shaped contrail dilution and give an example of how it can be applied in a global climate model.
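
    A sheared Gaussian plume of the kind compared against above reduces to ODEs for the second moments of the cross-section. The sketch below integrates one standard form of that system with forward Euler; the shear and diffusivity values are hypothetical, not the paper's, and the effective cross-sectional area grows faster under combined shear and diffusion than under diffusion alone.

```python
import math

def plume_moments(s, d_h, d_v, m_xx0, m_zz0, t_end, dt):
    """Advance the second moments of a sheared Gaussian plume cross-section:
    dM_xx/dt = 2*s*M_xz + 2*D_h,  dM_xz/dt = s*M_zz,  dM_zz/dt = 2*D_v."""
    m_xx, m_xz, m_zz = m_xx0, 0.0, m_zz0
    for _ in range(round(t_end / dt)):
        m_xx += (2.0 * s * m_xz + 2.0 * d_h) * dt
        m_xz += s * m_zz * dt
        m_zz += 2.0 * d_v * dt
    return m_xx, m_xz, m_zz

# Hypothetical upper-troposphere values: shear 0.004 1/s, D_h = 20, D_v = 0.15 m^2/s
m_xx, m_xz, m_zz = plume_moments(0.004, 20.0, 0.15, 400.0, 100.0, 3600.0, 1.0)
area = math.pi * math.sqrt(m_xx * m_zz - m_xz ** 2)  # effective 1-sigma ellipse area
```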

  11. Efficient non-hydrostatic modelling of 3D wave-induced currents using a subgrid approach

    Science.gov (United States)

    Rijnsdorp, Dirk P.; Smit, Pieter B.; Zijlema, Marcel; Reniers, Ad J. H. M.

    2017-08-01

    Wave-induced currents are a ubiquitous feature in coastal waters that can spread material over the surf zone and the inner shelf. These currents are typically under-resolved in non-hydrostatic wave-flow models due to computational constraints. Specifically, the low vertical resolutions that are adequate to describe the wave dynamics - and required to feasibly compute at the scales of a field site - are too coarse to account for the relevant details of the three-dimensional (3D) flow field. To describe the relevant dynamics of both waves and currents, while retaining a model framework that can be applied at field scales, we propose a two-grid approach to solve the governing equations. With this approach, the vertical accelerations and non-hydrostatic pressures are resolved on a relatively coarse vertical grid (which is sufficient to accurately resolve the wave dynamics), whereas the horizontal velocities and turbulent stresses are resolved on a much finer subgrid (whose resolution is dictated by the vertical scale of the mean flows). This approach ensures that the discrete pressure Poisson equation - the solution of which dominates the computational effort - is evaluated on the coarse grid scale, thereby greatly improving efficiency, while providing a fine vertical resolution to resolve the vertical variation of the mean flow. This work presents the general methodology, and discusses the numerical implementation in the SWASH wave-flow model. Model predictions are compared with observations of three flume experiments to demonstrate that the subgrid approach captures both the nearshore evolution of the waves, and the wave-induced flows like the undertow profile and longshore current. The accuracy of the subgrid predictions is comparable to fully resolved 3D simulations - but at much reduced computational costs. The findings of this work thereby demonstrate that the subgrid approach has the potential to make 3D non-hydrostatic simulations feasible at the scale of a

  12. Unsteady Flame Embedding (UFE) Subgrid Model for Turbulent Premixed Combustion Simulations

    KAUST Repository

    El-Asrag, Hossam

    2010-01-04

    We present a formulation for an unsteady subgrid model for premixed combustion in the flamelet regime. Since chemistry occurs at the unresolvable scales, it is necessary to introduce a subgrid model that accounts for the multi-scale nature of the problem using the information available on the resolved scales. Most current models are based on the laminar flamelet concept and often neglect unsteady effects. The proposed model's primary objective is to encompass many of the unsteady features and history effects of flame/turbulence interactions. In addition, it provides a dynamic and accurate approach for computing the subgrid flame propagation velocity. The unsteady flame embedding approach (UFE) treats the flame as an ensemble of locally one-dimensional flames. A set of elemental one-dimensional flames is used to describe the turbulent flame structure at the subgrid level. The stretched flame calculations are performed on the stagnation line of a strained flame using the unsteady filtered strain rate computed from the resolved grid. The flame iso-surface is tracked using an accurate high-order level set formulation to propagate the flame interface at the coarse resolution with minimum numerical diffusion. In this paper the solver and the model components are introduced and used to investigate two unsteady flames with different Lewis numbers in the thin reaction zone regime. The results show that the UFE model captures the unsteady flame-turbulence interactions and the flame propagation speed reasonably well. A higher propagation speed is observed for the lower-than-unity Lewis number flame because of the impact of differential diffusion.

  13. A distributed Grid-Xinanjiang model with integration of subgrid variability of soil storage capacity

    Directory of Open Access Journals (Sweden)

    Wei-jian Guo

    2016-04-01

    Full Text Available Realistic hydrological response is sensitive to the spatial variability of landscape properties. For a grid-based distributed rainfall-runoff model with a hypothesis of a uniform grid, the high-frequency information within a grid cell will be gradually lost as the resolution of the digital elevation model (DEM grows coarser. Therefore, the performance of a hydrological model is usually scale-dependent. This study used the Grid-Xinanjiang (GXAJ model as an example to investigate the effects of subgrid variability on hydrological response at different scales. With the aim of producing a more reasonable hydrological response and spatial description of the landscape properties, a new distributed rainfall-runoff model integrating the subgrid variability (the GXAJSV model was developed. In this model, the topographic index is used as an auxiliary variable correlated with the soil storage capacity. The incomplete beta distribution is suggested for simulating the probability distribution of the soil storage capacity within the raster grid. The Yaogu Basin in China was selected for model calibration and validation at different spatial scales. Results demonstrated that the proposed model can effectively eliminate the scale dependence of the GXAJ model and produce a more reasonable hydrological response.
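
    The incomplete beta distribution suggested above gives, for a chosen pair of shape parameters, the fraction of a grid cell whose normalized soil storage capacity lies below a threshold. A sketch computing the regularized incomplete beta function by simple trapezoid integration of the beta density; the shape parameters here are illustrative only, not calibrated values.

```python
import math

def beta_pdf(x, a, b):
    """Beta(a, b) probability density on (0, 1)."""
    coef = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return coef * x ** (a - 1.0) * (1.0 - x) ** (b - 1.0)

def storage_capacity_cdf(x, a, b, n=2000):
    """Regularized incomplete beta function I_x(a, b) by trapezoid integration:
    the fraction of a cell whose normalized storage capacity is <= x."""
    if x <= 0.0:
        return 0.0
    h = x / n
    # start slightly above 0 so the endpoint stays finite for shape parameters < 1
    total = 0.5 * (beta_pdf(1e-12, a, b) + beta_pdf(x, a, b))
    for i in range(1, n):
        total += beta_pdf(i * h, a, b)
    return total * h

frac = storage_capacity_cdf(0.5, a=2.0, b=2.0)  # symmetric case: exactly 0.5
```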

  14. A Fast and Accurate Scheme for Sea Ice Dynamics with a Stochastic Subgrid Model

    Science.gov (United States)

    Seinen, C.; Khouider, B.

    2016-12-01

    Sea ice physics is a very complex process occurring over a wide range of scales, such as local melting or large-scale drift. At the current grid resolution of Global Climate Models (GCMs), we are able to resolve large-scale sea ice dynamics, but uncertainty remains due to subgrid physics and potential dynamic feedback, especially due to the formation of melt ponds. Recent work in atmospheric science has shown the success of Markov jump stochastic subgrid models in the representation of clouds and convection and their feedback onto the large scales. There has been a push to implement these methods in other parts of the Earth system, and for the cryosphere in particular, but in order to test these methods, efficient and accurate solvers are required for the resolved large-scale sea-ice dynamics. We present a second-order accurate scheme, in both time and space, for the sea ice momentum equation (SIME) with a Jacobian-Free Newton-Krylov (JFNK) solver. SIME is a highly nonlinear equation due to sea ice rheology terms appearing in the stress tensor. The most commonly accepted formulation, introduced by Hibler, allows sea ice to resist significant stresses in compression but significantly less in tension. The relationship also leads to large changes in internal stresses from small changes in velocity fields. These nonlinearities have resulted in the use of implicit methods for SIME, and a JFNK solver was recently introduced and used to gain efficiency. However, the method used so far is only first-order accurate in time. Here we expand the JFNK approach to a Crank-Nicolson discretization of SIME. This fully second-order scheme is achieved with no increase in computational cost and will allow efficient testing and development of subgrid stochastic models of sea ice in the near future.
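
    The payoff of a Crank-Nicolson discretization is second-order accuracy in time: halving the step should cut the error by roughly a factor of four. The scalar sketch below demonstrates this on a simple nonlinear ODE, with a Newton iteration standing in for the full (matrix-valued) JFNK solve; it is not the SIME discretization itself.

```python
def f(u):
    """A crude quadratic-drag nonlinearity, standing in for the rheology stress term."""
    return -u * abs(u)

def df(u):
    return -2.0 * abs(u)

def crank_nicolson_step(u_n, dt, tol=1e-12, max_iter=50):
    """Solve u - u_n - dt/2 * (f(u) + f(u_n)) = 0 for u with Newton's method."""
    u = u_n  # initial guess: previous step
    for _ in range(max_iter):
        g = u - u_n - 0.5 * dt * (f(u) + f(u_n))
        dg = 1.0 - 0.5 * dt * df(u)
        du = -g / dg
        u += du
        if abs(du) < tol:
            break
    return u

def integrate(dt, t_end=1.0, u0=1.0):
    u = u0
    for _ in range(round(t_end / dt)):
        u = crank_nicolson_step(u, dt)
    return u

# For u > 0, du/dt = -u^2 has the exact solution u(t) = u0 / (1 + u0*t).
exact = 1.0 / (1.0 + 1.0)
err_coarse = abs(integrate(0.1) - exact)
err_fine = abs(integrate(0.05) - exact)
ratio = err_coarse / err_fine  # expect ~4 for a second-order scheme
```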

  15. The Sensitivity of Simulated Competition Between Different Plant Functional Types to Subgrid Scale Representation of Vegetation in a Land Surface Model

    Science.gov (United States)

    Shrestha, R. K.; Arora, V.; Melton, J. R.

    2014-12-01

    Vegetation is a dynamic component of the earth system that affects weather and climate at hourly to centennial time scales. However, most current dynamic vegetation models do not explicitly simulate competition among Plant Functional Types (PFTs). Here we use the coupled CLASS-CTEM model (Canadian Land Surface Scheme coupled to the Canadian Terrestrial Ecosystem Model) to explicitly simulate competition between nine PFTs for available space using a modified version of the Lotka-Volterra (LV) predator-prey equations. The nine PFTs include evergreen and deciduous needleleaf trees, evergreen and cold and drought deciduous broadleaf trees, and C3 and C4 crops and grasses. The CLASS-CTEM model can be configured either in the composite (single tile) or the mosaic (multiple tiles) mode. Our results show that the model is sensitive to the chosen mode. The simulated fractional coverage of PFTs is similar between the two approaches at some locations, whereas at other locations the two approaches yield different results. The simulated fractional coverage of PFTs is also compared with available observation-based estimates. Simulated results at selected locations across the globe show that the model is able to realistically simulate the fractional coverage of tree and grass PFTs and the bare fraction, as well as the fractional coverage of individual tree and grass PFTs. Along with the observed patterns of vegetation distribution, the CLASS-CTEM modelling framework is also able to simulate realistic succession patterns. Some differences remain, and these are attributed to the coarse spatial resolution of the model (~3.75°) and the limited number of PFTs represented in the model.
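
    Lotka-Volterra-style competition for fractional coverage can be sketched for two PFTs: each expands into available bare ground at an invasion rate and loses area to mortality. All rates below are hypothetical, not CLASS-CTEM values; with these numbers the PFT with the larger invasion-to-mortality ratio approaches its equilibrium cover and competitively excludes the other.

```python
def lv_competition_step(f1, f2, b1, b2, m1, m2, dt):
    """One explicit step of a simplified Lotka-Volterra competition for cover:
    df_i/dt = b_i * f_i * bare - m_i * f_i, with bare = 1 - f1 - f2."""
    bare = max(0.0, 1.0 - f1 - f2)
    df1 = (b1 * f1 * bare - m1 * f1) * dt
    df2 = (b2 * f2 * bare - m2 * f2) * dt
    return f1 + df1, f2 + df2

# Hypothetical rates: PFT 1 (tree-like) has b/m = 20, PFT 2 (grass-like) has b/m = 10
f_tree, f_grass = 0.05, 0.05
for _ in range(20000):
    f_tree, f_grass = lv_competition_step(f_tree, f_grass,
                                          b1=0.40, b2=0.30, m1=0.02, m2=0.03, dt=0.1)
```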

  16. Final Technical Report for "High-resolution global modeling of the effects of subgrid-scale clouds and turbulence on precipitating cloud systems"

    Energy Technology Data Exchange (ETDEWEB)

    Larson, Vincent [Univ. of Wisconsin, Milwaukee, WI (United States)

    2016-11-25

    The Multiscale Modeling Framework (MMF) embeds a cloud-resolving model in each grid column of a General Circulation Model (GCM). A MMF model does not need to use a deep convective parameterization, and thereby dispenses with the uncertainties in such parameterizations. However, MMF models grossly under-resolve shallow boundary-layer clouds, and hence those clouds may still benefit from parameterization. In this grant, we successfully created a climate model that embeds a cloud parameterization (“CLUBB”) within a MMF model. This involved interfacing CLUBB’s clouds with microphysics and reducing computational cost. We have evaluated the resulting simulated clouds and precipitation with satellite observations. The chief benefit of the project is to provide a MMF model that has an improved representation of clouds and that provides improved simulations of precipitation.

  17. Collaborative Project: High-resolution Global Modeling of the Effects of Subgrid-Scale Clouds and Turbulence on Precipitating Cloud Systems

    Energy Technology Data Exchange (ETDEWEB)

    Randall, David A. [Colorado State Univ., Fort Collins, CO (United States). Dept. of Atmospheric Science

    2015-11-01

    We proposed to implement, test, and evaluate recently developed turbulence parameterizations, using a wide variety of methods and modeling frameworks together with observations including ARM data. We have successfully tested three different turbulence parameterizations in versions of the Community Atmosphere Model: CLUBB, SHOC, and IPHOC. All three produce significant improvements in the simulated climate. CLUBB will be used in CAM6, and also in ACME. SHOC is being tested in the NCEP forecast model. In addition, we have achieved a better understanding of the strengths and limitations of the PDF-based parameterizations of turbulence and convection.

  18. A priori study of subgrid-scale features in turbulent Rayleigh-Bénard convection

    Science.gov (United States)

    Dabbagh, F.; Trias, F. X.; Gorobets, A.; Oliva, A.

    2017-10-01

    At the crossroads between flow topology analysis and turbulence modeling, a priori studies are a reliable tool to understand the underlying physics of the subgrid-scale (SGS) motions in turbulent flows. In this paper, properties of the SGS features in the framework of a large-eddy simulation are studied for turbulent Rayleigh-Bénard convection (RBC). To do so, data from direct numerical simulation (DNS) of a turbulent air-filled RBC in a rectangular cavity of aspect ratio unity and spanwise open-ended distance π are used at two Rayleigh numbers Ra ∈ {10^8, 10^10} [Dabbagh et al., "On the evolution of flow topology in turbulent Rayleigh-Bénard convection," Phys. Fluids 28, 115105 (2016)]. First, DNS at Ra = 10^8 is used to assess the performance of eddy-viscosity models such as QR, Wall-Adapting Local Eddy-viscosity (WALE), and the recent S3PQR models proposed by Trias et al. ["Building proper invariants for eddy-viscosity subgrid-scale models," Phys. Fluids 27, 065103 (2015)]. The outcomes imply that the eddy-viscosity modeling smoothes the coarse-grained viscous straining and retrieves fairly well the effect of the kinetic unfiltered scales in order to reproduce the coherent large scales. However, these models fail to approach the exact evolution of the SGS heat flux and are incapable of reproducing well the further dominant rotational enstrophy pertaining to the buoyant production. Afterwards, the key ingredients of eddy-viscosity, νt, and eddy-diffusivity, κt, are calculated a priori and reveal prevalently positive values that maintain a turbulent wind essentially driven by the mean buoyant force at the sidewalls. The topological analysis suggests that the effective turbulent diffusion paradigm and the hypothesis of a constant turbulent Prandtl number are only applicable in the large-scale strain-dominated areas in the bulk. It is shown that the bulk-dominated rotational structures of vortex-stretching (and its synchronous viscous dissipative structures) hold

  19. A nonlinear structural subgrid-scale closure for compressible MHD Part II: a priori comparison on turbulence simulation data

    CERN Document Server

    Grete, P; Schmidt, W; Schleicher, D R G

    2016-01-01

    Even though compressible plasma turbulence is encountered in many astrophysical phenomena, its effect is often not well understood. Furthermore, direct numerical simulations are typically not able to reach the extreme parameters of these processes. For this reason, large-eddy simulations (LES), which only simulate large and intermediate scales directly, are employed. The smallest, unresolved scales and the interactions between small and large scales are introduced by means of a subgrid-scale (SGS) model. We propose and verify a new set of nonlinear SGS closures for future application as an SGS model in LES of compressible magnetohydrodynamics (MHD). We use 15 simulations (without explicit SGS model) of forced, isotropic, homogeneous turbulence with varying sonic Mach number Ms = 0.2 to 20 as reference data for the most extensive a priori tests performed so far in the literature. In these tests we explicitly filter the reference data and compare the performance of the new closures against th...

  20. Sub-Grid Modeling of Electrokinetic Effects in Micro Flows

    Science.gov (United States)

    Chen, C. P.

    2005-01-01

    Advances in micro-fabrication processes have generated tremendous interest in miniaturizing chemical and biomedical analyses into integrated microsystems (Lab-on-Chip devices). To successfully design and operate microfluidic systems, it is essential to understand the fundamental fluid flow phenomena when channel sizes shrink to micron or even nano dimensions. One important phenomenon is the electrokinetic effect in micro/nano channels due to the existence of the electrical double layer (EDL) near a solid-liquid interface. Not only is the EDL responsible for electro-osmotic pumping when an electric field parallel to the surface is imposed, it also causes extra flow resistance (the electro-viscous effect) and flow anomalies (such as early transition from laminar to turbulent flow) observed in pressure-driven microchannel flows. Modeling and simulation of electrokinetic effects on micro flows poses a significant numerical challenge because the double layer (10 nm up to microns) is very thin compared to the channel width (which can be hundreds of microns). Since the typical thickness of the double layer is extremely small compared to the channel width, it would be computationally very costly to capture the velocity profile inside the double layer by placing a sufficient number of grid cells in the layer to resolve the velocity changes, especially in complex, 3-d geometries. Existing approaches using "slip" wall velocity and an augmented double layer are difficult to use when the flow geometry is complicated, e.g. flow in a T-junction, X-junction, etc. In order to overcome the difficulties arising from those two approaches, we have developed a sub-grid integration method to properly account for the physics of the double layer. The integration approach can be used on simple or complicated flow geometries. Resolution of the double layer is not needed in this approach, and the effects of the double layer can be accounted for at the same time. With this
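
    The EDL thickness discussed above is set by the Debye length; for a symmetric 1:1 electrolyte, lambda_D = sqrt(eps*k_B*T / (2*n*e^2)), which evaluates to roughly 10 nm at millimolar concentration. That nanometre-versus-hundreds-of-microns gap is exactly the scale separation the sub-grid method is designed to handle. A sketch using standard physical constants:

```python
import math

def debye_length(c_molar, temp_k=298.15, eps_r=78.4):
    """EDL thickness (m) for a symmetric 1:1 electrolyte:
    lambda_D = sqrt(eps_r*eps0*k_B*T / (2*n*e^2)), n = ion number density."""
    eps0 = 8.8541878128e-12      # vacuum permittivity, F/m
    k_b = 1.380649e-23           # Boltzmann constant, J/K
    e = 1.602176634e-19          # elementary charge, C
    n_a = 6.02214076e23          # Avogadro constant, 1/mol
    n = c_molar * 1000.0 * n_a   # ions per m^3 (mol/L -> mol/m^3 -> 1/m^3)
    return math.sqrt(eps_r * eps0 * k_b * temp_k / (2.0 * n * e * e))

lam = debye_length(1e-3)  # 1 mM aqueous solution at room temperature: ~10 nm
```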

  1. Convective kinetic energy equation under the mass-flux subgrid-scale parameterization

    Science.gov (United States)

    Yano, Jun-Ichi

    2015-03-01

    The present paper formally derives the convective kinetic energy equation under mass-flux subgrid-scale parameterization, based on the segmentally constant approximation (SCA). Though this equation was presented long ago by Arakawa and Schubert (1974), a formal derivation is not known in the literature. The derivation of this formulation has attracted increasing interest in recent years due to the fact that it can explain basic aspects of convective dynamics such as discharge-recharge and the transition from shallow to deep convection. The derivation is presented in two manners: (i) for the case that only the vertical component of the velocity is considered and (ii) for the case that both the horizontal and vertical components are considered. The equation reduces to the same form as originally presented by Arakawa and Schubert in both cases, but with the energy dissipation term defined differently. In both cases, nevertheless, the energy "dissipation" (loss) term consists of three principal contributions: (i) entrainment-detrainment, (ii) outflow from the top of convection, and (iii) pressure effects. Additionally, inflow from the bottom of convection contributing to a growth of convection is also formally counted as a part of the dissipation term. The eddy dissipation is also included for completeness. The order-of-magnitude analysis shows that the convective kinetic energy "dissipation" is dominated by the pressure effects, and it may be approximately described by Rayleigh damping with a constant time scale of the order of 10^2-10^3 s. The conclusion is also supported by a supplementary analysis of a cloud-resolving model (CRM) simulation. The Appendix discusses how the loss term ("dissipation") of the convective kinetic energy is qualitatively different from the conventional eddy-dissipation process found in turbulent flows.
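
    Rayleigh damping with a constant time scale, as concluded above, corresponds to the budget dK/dt = G - K/tau, whose solution relaxes to the equilibrium K = G*tau. A sketch of the closed-form solution; tau is taken inside the quoted range of order 10^2-10^3 s, and the generation rate G is hypothetical.

```python
import math

def convective_ke(k0, gen, tau, t):
    """Closed-form solution of dK/dt = G - K/tau (generation vs. Rayleigh damping):
    K(t) = G*tau + (K0 - G*tau) * exp(-t/tau)."""
    return gen * tau + (k0 - gen * tau) * math.exp(-t / tau)

tau = 600.0   # damping time scale (s), inside the quoted range
gen = 0.5     # hypothetical generation rate of convective kinetic energy
k_eq = convective_ke(0.0, gen, tau, 10 * tau)  # after ~10 tau, K has equilibrated
```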

  2. On the development of a subgrid CFD model for fire extinguishment

    Energy Technology Data Exchange (ETDEWEB)

    TIESZEN,SHELDON R.; LOPEZ,AMALIA R.

    2000-02-02

    A subgrid model is presented for use in CFD fire simulations to account for thermal suppressants and strain. The extinguishment criterion is based on the ratio of a local fluid-mechanics time scale to a local chemical time scale, compared with an empirically determined critical Damkohler number. Local extinction occurs when this ratio crosses the critical value; global fire extinguishment occurs when local extinction has occurred in all combusting cells. The fluid-mechanics time scale is based on the Kolmogorov time scale and the chemical time scale is based on blowout of a perfectly stirred reactor. The input to the reactor is based on cell-averaged temperatures, an assumed stoichiometric fuel/air composition, and cell-averaged suppressant concentrations including combustion products. A detailed chemical mechanism is employed. The chemical time scale is precalculated and mixing rules are used to reduce the composition space that must be parameterized. Comparisons with experimental data for fire extinguishment in a flame-stabilizing, backward-facing-step geometry indicate that the model is conservative for this condition.
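
    The criterion above can be sketched as a comparison of the Damkohler number Da = tau_flow / tau_chem against a critical value, with the fluid-mechanics time scale taken as the Kolmogorov time sqrt(nu/eps). The sign convention used below (local extinction when Da falls below Da_crit, i.e., mixing outpaces chemistry) and all numerical values are assumptions for illustration, not the paper's calibrated criterion.

```python
import math

def kolmogorov_time(nu, eps):
    """Kolmogorov time scale tau_k = sqrt(nu / eps)."""
    return math.sqrt(nu / eps)

def is_locally_extinguished(nu, eps, tau_chem, da_crit):
    """Extinction when Da = tau_flow / tau_chem drops below the critical value."""
    da = kolmogorov_time(nu, eps) / tau_chem
    return da < da_crit

# Assumed values: air-like viscosity, two dissipation rates, 1 ms chemical time
weak_strain = is_locally_extinguished(1.5e-5, 1.0, 1e-3, da_crit=1.0)
strong_strain = is_locally_extinguished(1.5e-5, 100.0, 1e-3, da_crit=1.0)
```

With these numbers only the strongly strained cell (high dissipation rate, hence short Kolmogorov time) is extinguished.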

  3. Acceleration of inertial particles in wall bounded flows: DNS and LES with stochastic modelling of the subgrid acceleration

    Energy Technology Data Exchange (ETDEWEB)

    Zamansky, Remi; Vinkovic, Ivana; Gorokhovski, Mikhael, E-mail: ivana.vinkovic@univ-lyonl.fr [Laboratoire de Mecanique des Fluides et d' Acoustique CNRS UMR 5509 Ecole Centrale de Lyon, 36, av. Guy de Collongue, 69134 Ecully Cedex (France)

    2011-12-22

    Inertial particle acceleration statistics are analyzed using DNS for turbulent channel flow. Along with effects recognized in homogeneous isotropic turbulence, an additional effect is observed due to high- and low-speed vortical structures aligned with the channel wall. In response to those structures, particles with moderate inertia experience strong longitudinal acceleration variations. DNS is also used in order to assess LES-SSAM (Subgrid Stochastic Acceleration Model), in which an approximation to the instantaneous non-filtered velocity field is given by simulation of both filtered and residual accelerations. This approach allows access to the intermittency of the flow at the subgrid scale. Advantages of LES-SSAM in predicting particle dynamics in channel flow at a high Reynolds number are shown.

  4. Combination of Lidar Elevations, Bathymetric Data, and Urban Infrastructure in a Sub-Grid Model for Predicting Inundation in New York City during Hurricane Sandy

    CERN Document Server

    Loftis, Jon Derek; Hamilton, Stuart E; Forrest, David R

    2014-01-01

    We present the geospatial methods, in conjunction with results, of a newly developed storm surge and sub-grid inundation model applied to New York City during Hurricane Sandy in 2012. Sub-grid modeling takes a novel approach to partial wetting and drying within grid cells, eschewing the conventional hydrodynamic modeling method by nesting a sub-grid containing high-resolution lidar topography and fine-scale bathymetry within each computational grid cell. In doing so, the sub-grid modeling method is heavily dependent on the building and street configuration provided by the DEM. Spatial comparisons between the sub-grid model and FEMA's maximum inundation extents in New York City yielded an unparalleled absolute mean distance difference of 38 m and an average 75% areal spatial match. An in-depth error analysis reveals that the modeled extent contour is well correlated with the FEMA extent contour in most areas, except in several distinct areas where differences in special features cause sig...
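
    The partial wetting-and-drying idea can be sketched as follows: each computational cell carries a patch of high-resolution DEM pixels, and the cell's wet fraction and stored water volume follow from comparing the water level against those pixels. The function names, the uniform-pixel assumption, and the numbers are illustrative, not taken from the model.

```python
import numpy as np

def cell_wet_fraction(subgrid_dem, eta):
    """Fraction of sub-grid DEM pixels lying below the water level eta."""
    return float(np.mean(subgrid_dem < eta))

def cell_water_volume(subgrid_dem, eta, pixel_area):
    """Water volume in the cell: depth over wet pixels times pixel area."""
    depth = np.clip(eta - subgrid_dem, 0.0, None)  # zero depth on dry pixels
    return float(depth.sum() * pixel_area)
```

    A conventional model would flag the whole cell wet or dry; here a cell can be, say, half wet, which is what lets streets and buildings shape the inundation extent.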

  5. Enhancing Representation of Subgrid Land Surface Characteristics in the Community Land Model

    Science.gov (United States)

    Ke, Y.; Coleman, A.; Leung, L.; Huang, M.; Li, H.; Wigmosta, M. S.

    2011-12-01

    The Community Land Model (CLM) is the land surface model used in the Community Earth System Model (CESM). In CLM, each grid cell is composed of subgrid land units, snow/soil columns, and plant functional types (PFTs). In the current version of CLM (CLM4.0), land surface parameters such as vegetated/non-vegetated land cover and surface characteristics including fractional glacier, lake, wetland, urban area, and PFT, and the associated leaf area index (LAI), stem area index (SAI), and canopy top and bottom heights, are provided at 0.5° or coarser resolution. This study aims to enhance the representation of the land surface data by (1) creating higher-resolution (0.05° or finer) global land surface parameters, and (2) developing an effective and accurate subgrid classification scheme for elevation and PFTs, so that variations of land surface processes due to the subgrid distribution of PFTs and elevation can be represented in CLM. To achieve higher-resolution global land surface parameters, the MODIS 500 m land cover product (MCD12Q1) collected in 2005 was used to generate percentages of glacier, lake, wetland, and urban area and fractional PFTs at 0.05° resolution. Spatially and temporally continuous and consistent global LAI data, re-processed and improved from MOD15A2 (http://globalchange.bnu.edu.cn/research/lai) and combined with the PFT data, were used to create LAI, SAI, and canopy top and bottom height data. 30-second soil texture data were obtained from a hybrid of the 30-second State Soil Geographic Database (STATSGO) and the 5-minute Food and Agriculture Organization two-layer 16-category soil texture dataset. The relationship between the global distribution of PFTs and 1-km resolution elevation data is being analyzed to develop a subgrid classification of PFT and elevation. Statistical analysis is being conducted to compare different subgrid classification methods, to select the method that explains the highest percentage of subgrid variance in both PFT and elevation distribution.

  6. Quantification of marine aerosol subgrid variability and its correlation with clouds based on high-resolution regional modeling: Quantifying Aerosol Subgrid Variability

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Guangxing; Qian, Yun; Yan, Huiping; Zhao, Chun; Ghan, Steven J.; Easter, Richard C.; Zhang, Kai

    2017-06-16

    One limitation of most global climate models (GCMs) is that, with the horizontal resolutions they typically employ, they cannot resolve the subgrid variability (SGV) of clouds and aerosols, adding extra uncertainty to aerosol radiative forcing estimates. To inform the development of an aerosol subgrid variability parameterization, here we analyze the aerosol SGV over the southern Pacific Ocean simulated by the high-resolution Weather Research and Forecasting model coupled to Chemistry. We find that within a typical GCM grid cell, the aerosol mass subgrid standard deviation is 15% of the grid-box mean mass near the surface on a one-month mean basis. The fraction can increase to 50% in the free troposphere. The relationships between the sea-salt mass concentration, meteorological variables, and sea-salt emission rate are investigated in both the clear and cloudy portions. Under clear-sky conditions, the marine aerosol subgrid standard deviation is highly correlated with the standard deviations of vertical velocity, cloud water mixing ratio, and sea-salt emission rate near the surface. It is also strongly connected to the grid-box mean aerosol in the free troposphere (between 2 km and 4 km). In the cloudy area, interstitial sea-salt aerosol mass concentrations are smaller, but a higher correlation is found between the subgrid standard deviations of aerosol mass and vertical velocity. Additionally, we find that decreasing the model grid resolution reduces the marine aerosol SGV but strengthens the correlations between the aerosol SGV and the total water mixing ratio (sum of water vapor, cloud liquid, and cloud ice mixing ratios).
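
    The basic diagnostic behind this kind of analysis can be sketched directly: coarsen a high-resolution field into GCM-sized blocks and compute, for each block, the mean and the sub-grid standard deviation. The block size and field here are illustrative stand-ins for the WRF-Chem output described above.

```python
import numpy as np

def subgrid_stats(field, block):
    """Per-block mean and sub-grid standard deviation of a 2-D field.

    Trailing rows/columns that do not fill a whole block are trimmed."""
    ny, nx = field.shape
    f = field[:ny - ny % block, :nx - nx % block]
    b = f.reshape(f.shape[0] // block, block, f.shape[1] // block, block)
    return b.mean(axis=(1, 3)), b.std(axis=(1, 3))
```

    The quoted "15% of the grid-box mean" would then be the ratio std/mean averaged over blocks and time.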

  7. Resolving terrestrial ecosystem processes along a subgrid topographic gradient for an earth-system model

    Science.gov (United States)

    Subin, Z M; Milly, Paul C.D.; Sulman, B N; Malyshev, Sergey; Shevliakova, E

    2014-01-01

    Soil moisture is a crucial control on surface water and energy fluxes, vegetation, and soil carbon cycling. Earth-system models (ESMs) generally represent an areal-average soil-moisture state in gridcells at scales of 50–200 km and, as a result, cannot capture the nonlinear effects of topographically controlled subgrid heterogeneity in soil moisture, in particular where wetlands are present. We addressed this deficiency by building a subgrid representation of hillslope-scale topographic gradients, TiHy (Tiled-hillslope Hydrology), into the Geophysical Fluid Dynamics Laboratory (GFDL) land model (LM3). LM3-TiHy models one or more representative hillslope geometries for each gridcell by discretizing them into land model tiles hydrologically coupled along an upland-to-lowland gradient. Each tile has its own surface fluxes, vegetation, and vertically resolved state variables for soil physics and biogeochemistry. LM3-TiHy simulates a gradient in soil moisture and water-table depth between uplands and lowlands in each gridcell. Three hillslope hydrological regimes appear in non-permafrost regions in the model: wet and poorly drained, wet and well drained, and dry, with large, small, and zero wetland area predicted, respectively. Compared to the untiled LM3 in stand-alone experiments, LM3-TiHy simulates similar surface energy and water fluxes in the gridcell mean. However, in marginally wet regions around the globe, LM3-TiHy simulates shallow groundwater in lowlands, leading to higher evapotranspiration, lower surface temperature, and higher leaf area compared to uplands in the same gridcells. Moreover, more than four-fold larger soil carbon concentrations are simulated globally in lowlands as compared with uplands. We compared water-table depths to those simulated by a recent global model-observational synthesis, and we compared wetland and inundated areas diagnosed from the model to observational datasets. The comparisons demonstrate that LM3-TiHy has the

  8. Parameterization for subgrid-scale motion of ice-shelf calving fronts

    Directory of Open Access Journals (Sweden)

    T. Albrecht

    2011-01-01

    A parameterization for the motion of ice-shelf fronts on a Cartesian grid in finite-difference land-ice models is presented. The scheme prevents artificial thinning of the ice shelf at its edge, which occurs due to the finite resolution of the model. The intuitive numerical implementation diminishes numerical dispersion at the ice front and enables the application of physical boundary conditions, improving the calculation of stress and velocity fields throughout the ice-sheet-shelf system. Numerical properties of this subgrid modification are assessed in the Potsdam Parallel Ice Sheet Model (PISM-PIK) for different geometries in one and two horizontal dimensions and are verified against an analytical solution in a flow-line setup.
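
    A toy version of the idea conveys the mechanism: instead of letting the boundary cell thin artificially, it stores a partial ice volume, and the front advances one cell only once that volume reaches a reference thickness. The 1-D setup, names, and surplus handling are illustrative simplifications, not the PISM-PIK scheme itself.

```python
def update_front(partial_volume, flux_in, H_ref):
    """Accumulate incoming ice volume in the partially filled front cell.

    Returns (new_partial_volume, front_advanced, surplus_carried_forward).
    H_ref is the reference thickness at which the cell counts as full."""
    v = partial_volume + flux_in
    if v >= H_ref:                    # cell full: the front moves one cell
        return H_ref, True, v - H_ref # surplus seeds the next front cell
    return v, False, 0.0
```

    Until the cell fills, its partial volume is excluded from the stress balance, which is what prevents the spurious edge thinning described above.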

  9. Effect of Considering Sub-Grid Scale Uncertainties on the Forecasts of a High-Resolution Limited Area Ensemble Prediction System

    Science.gov (United States)

    Kim, SeHyun; Kim, Hyun Mee

    2017-05-01

    The ensemble prediction system (EPS) is widely used in research and at operational centers because it can represent the uncertainty of the predicted atmospheric state and provide probability information. A high-resolution (so-called "convection-permitting") limited-area EPS can represent the convection and turbulence related to precipitation phenomena in more detail, but it is also highly sensitive to small-scale, sub-grid scale processes. Convection and turbulence are represented using physical parameterizations in the model, and model errors arise from sub-grid scale processes that are not resolved. This study examined the effect of considering sub-grid scale uncertainties using the high-resolution limited-area EPS of the Korea Meteorological Administration (KMA). The developed EPS has a horizontal resolution of 3 km and 12 ensemble members, with initial and boundary conditions provided by the global model. The Random Parameters (RP) scheme was used to represent sub-grid scale uncertainties. EPSs with and without the RP scheme were developed and the results compared. During July 2013, applying the RP scheme produced a significant difference in the spread of 1.5 m temperature and in the root mean square error and spread of 10 m zonal wind. For precipitation forecasts, precipitation tended to be overestimated relative to the observations when the RP scheme was applied, but the forecasts became more accurate for heavy precipitation and longer forecast lead times. For two heavy rainfall cases that occurred during the research period, a higher Equitable Threat Score for heavy precipitation was observed for the system with the RP scheme than for the one without, consistent with the statistical results for the period. Therefore, the predictability of heavy precipitation phenomena affecting the Korean Peninsula increases if the RP scheme is used to account for sub-grid scale uncertainties.

  10. Impact of Sub-grid Soil Textural Properties on Simulations of Hydrological Fluxes at the Continental Scale Mississippi River Basin

    Science.gov (United States)

    Kumar, R.; Samaniego, L. E.; Livneh, B.

    2013-12-01

    Knowledge of soil hydraulic properties such as porosity and saturated hydraulic conductivity is required to accurately model the dynamics of near-surface hydrological processes (e.g. evapotranspiration and root-zone soil moisture dynamics) and provide reliable estimates of regional water and energy budgets. Soil hydraulic properties are commonly derived from pedo-transfer functions using soil textural information recorded during surveys, such as the fractions of sand and clay, bulk density, and organic matter content. Typically, large-scale land-surface models are parameterized using a relatively coarse soil map with little or no information on parametric sub-grid variability. In this study, we analyze the impact of sub-grid soil variability on simulated hydrological fluxes over the Mississippi River Basin (≈3,240,000 km²) at multiple spatio-temporal resolutions. A set of numerical experiments was conducted with the distributed mesoscale hydrologic model (mHM) using two soil datasets: (a) the Digital General Soil Map of the United States, STATSGO2 (1:250 000), and (b) the recently collated Harmonized World Soil Database based on the FAO-UNESCO Soil Map of the World (1:5 000 000). mHM was parameterized with the multi-scale regionalization technique that derives distributed soil hydraulic properties via pedo-transfer functions and regional coefficients. Within the experimental framework, 3-hourly model simulations were conducted at four spatial resolutions ranging from 0.125° to 1°, using meteorological datasets from the NLDAS-2 project for the period 1980-2012. Preliminary results indicate that the model was able to capture observed streamflow behavior reasonably well with both soil datasets in the major sub-basins (i.e. the Missouri, the Upper Mississippi, the Ohio, the Red, and the Arkansas). However, the spatio-temporal patterns of simulated water fluxes and states (e.g. soil moisture, evapotranspiration) from both simulations showed marked
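
    A pedo-transfer function of the kind referred to above is just a regression from texture to hydraulic parameters. The sketch below uses a Cosby-type form; the coefficients are illustrative placeholders, not the regionalized coefficients used by mHM, and the conductivity units are arbitrary.

```python
def pedotransfer(sand_pct, clay_pct):
    """Illustrative Cosby-type pedo-transfer function.

    Maps sand/clay percentages to porosity (volumetric fraction) and
    saturated hydraulic conductivity. Coefficients are placeholders."""
    porosity = (50.5 - 0.142 * sand_pct - 0.037 * clay_pct) / 100.0
    log10_ksat = -0.884 + 0.0153 * sand_pct   # arbitrary units
    return porosity, 10.0 ** log10_ksat
```

    Running such a function on a coarse soil map versus a fine one is exactly where the sub-grid sensitivity studied above enters: sandier sub-grid patches get lower porosity and much higher conductivity than the grid-mean texture would suggest.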

  11. A nonlinear structural subgrid-scale closure for compressible MHD. I. Derivation and energy dissipation properties

    Energy Technology Data Exchange (ETDEWEB)

    Vlaykov, Dimitar G., E-mail: Dimitar.Vlaykov@ds.mpg.de [Institut für Astrophysik, Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany); Max-Planck-Institut für Dynamik und Selbstorganisation, Am Faßberg 17, D-37077 Göttingen (Germany); Grete, Philipp [Institut für Astrophysik, Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany); Max-Planck-Institut für Sonnensystemforschung, Justus-von-Liebig-Weg 3, D-37077 Göttingen (Germany); Schmidt, Wolfram [Hamburger Sternwarte, Universität Hamburg, Gojenbergsweg 112, D-21029 Hamburg (Germany); Schleicher, Dominik R. G. [Departamento de Astronomía, Facultad Ciencias Físicas y Matemáticas, Universidad de Concepción, Av. Esteban Iturra s/n Barrio Universitario, Casilla 160-C (Chile)

    2016-06-15

    Compressible magnetohydrodynamic (MHD) turbulence is ubiquitous in astrophysical phenomena ranging from the intergalactic to the stellar scales. In studying these phenomena, numerical simulations are nearly inescapable, due to the large degree of nonlinearity involved. However, the dynamical ranges of these phenomena are much larger than what is computationally accessible. In large eddy simulations (LESs), the resulting limited-resolution effects are addressed explicitly by introducing to the equations of motion additional terms associated with the unresolved, subgrid-scale dynamics. This renders the system unclosed. We derive a set of nonlinear structural closures for the ideal MHD LES equations with particular emphasis on the effects of compressibility. The closures are based on a gradient expansion of the finite-resolution operator [W. K. Yeo (CUP, 1993)] and require no assumptions about the nature of the flow or magnetic field. Thus, the scope of their applicability ranges from the sub- to the hyper-sonic and -Alfvénic regimes. The closures support spectral energy cascades both up- and down-scale, as well as direct transfer between kinetic and magnetic resolved and unresolved energy budgets. They implicitly take into account the local geometry, and in particular, the anisotropy of the flow. Their properties are a priori validated in Paper II [P. Grete et al., Phys. Plasmas 23, 062317 (2016)] against alternative closures available in the literature with respect to a wide range of simulation data of homogeneous and isotropic turbulence.
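
    The leading term of a gradient-expansion structural closure has a simple concrete form. As a minimal instance (2-D, hydrodynamic, incompressible, so the paper's MHD and compressibility terms are omitted), the SGS stress is approximated as tau_ij ≈ (Δ²/12)(∂u_i/∂x_k)(∂u_j/∂x_k); the grid, filter width, and finite differencing below are assumptions for illustration.

```python
import numpy as np

def gradient_sgs_stress(u, v, dx, delta):
    """Leading-order gradient-model SGS stress for a 2-D velocity field.

    tau_ij ~ (delta**2 / 12) * (du_i/dx_k)(du_j/dx_k), derivatives by
    central differences via np.gradient (axis 0 = y, axis 1 = x)."""
    dudy, dudx = np.gradient(u, dx)
    dvdy, dvdx = np.gradient(v, dx)
    c = delta**2 / 12.0
    tau_xx = c * (dudx**2 + dudy**2)
    tau_xy = c * (dudx * dvdx + dudy * dvdy)
    tau_yy = c * (dvdx**2 + dvdy**2)
    return tau_xx, tau_xy, tau_yy
```

    Because the closure is built only from resolved gradients, it needs no assumption about the flow, which is the structural property the abstract emphasizes.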

  12. Analysis of subgrid models of heat convection by symmetry group theory

    Science.gov (United States)

    Razafindralandy, Dina; Hamdouni, Aziz

    2007-04-01

    Symmetries, i.e., transformations which leave the set of solutions of the Navier-Stokes equations unchanged, play an important role in turbulence (conservation laws, wall laws, …). They should not be destroyed by turbulence models. The symmetries of the heat convection equations for a non-isothermal fluid are then presented. Next, common subgrid stress tensor and flux models are analyzed using the symmetry approach. To cite this article: D. Razafindralandy, A. Hamdouni, C. R. Mecanique 335 (2007).

  13. The Storm Surge and Sub-Grid Inundation Modeling in New York City during Hurricane Sandy

    Directory of Open Access Journals (Sweden)

    Harry V. Wang

    2014-03-01

    Hurricane Sandy inflicted heavy damage in New York City and on the New Jersey coast as the second-costliest storm in history. A large-scale, unstructured-grid storm tide model, the Semi-implicit Eulerian Lagrangian Finite Element model (SELFE), was used to hindcast water level variation during Hurricane Sandy in the mid-Atlantic portion of the U.S. East Coast. The model was forced by eight tidal constituents at the model's open boundary, 1500 km away from the coast, and by wind and pressure fields from the atmospheric Regional Atmospheric Modeling System (RAMS) provided by Weatherflow Inc. Comparisons of the modeled storm tide with NOAA gauge stations from Montauk, NY, and Long Island Sound, encompassing New York Harbor and Atlantic City, NJ, to Duck, NC, showed good agreement, with an overall root mean square error and relative error on the order of 15–20 cm and 5%–7%, respectively. Furthermore, using the large-scale model outputs as boundary conditions, a separate sub-grid model incorporating lidar data for the major portion of New York City was also set up to investigate the detailed inundation process. The model results compared favorably with USGS's Hurricane Sandy Mapper database in terms of timing, local inundation area, and flooding depth. The street-level inundation, with water bypassing the city buildings, was reproduced, and the maximum extent of horizontal inundation was calculated, which was within 30 m of the data-derived estimate by USGS.

  14. Resolution-dependent behavior of subgrid-scale vertical transport in the Zhang-McFarlane convection parameterization

    Science.gov (United States)

    Xiao, Heng; Gustafson, William I.; Hagos, Samson M.; Wu, Chien-Ming; Wan, Hui

    2015-06-01

    To better understand the behavior of quasi-equilibrium-based convection parameterizations at higher resolution, we use a diagnostic framework to examine the resolution dependence of subgrid-scale vertical transport of moist static energy as parameterized by the Zhang-McFarlane convection parameterization (ZM). Grid-scale input to ZM is supplied by coarsening output from cloud-resolving model (CRM) simulations onto subdomains ranging in size from 8 × 8 to 256 × 256 km². The ZM-based parameterization of vertical transport of moist static energy for scales smaller than the subdomain size, ⟨w'h'⟩_ZM, is then compared to the transport calculated directly from the CRM simulations, ⟨w'h'⟩_CRM, for different subdomain sizes. The ensemble-mean ⟨w'h'⟩_CRM decreases by more than half as the subdomain size decreases from 128 to 8 km across, while ⟨w'h'⟩_ZM decreases with subdomain size only for strong convection cases and increases for weaker cases. The resolution dependence of ⟨w'h'⟩_ZM is determined by the positive-definite grid-scale tendency of convective available potential energy (CAPE) in the convective quasi-equilibrium (QE) closure. Further analysis shows that the actual grid-scale tendency of CAPE (before taking the positive-definite value) and ⟨w'h'⟩_CRM behave very similarly as the subdomain size changes, because both are tied to grid-scale advective tendencies. The resolution dependence of ⟨w'h'⟩_ZM can be improved significantly by averaging the grid-scale tendency of CAPE over an appropriately large area surrounding each subdomain before taking its positive-definite value. Even though the ensemble-mean ⟨w'h'⟩_CRM decreases with increasing resolution, its variability increases dramatically. ⟨w'h'⟩_ZM cannot capture this increase in variability, suggesting the need for a stochastic treatment of convection at relatively high spatial resolution (8 or 16 km).
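
    The "truth" side of this diagnostic framework is a plain coarse-graining identity: on each subdomain, the sub-grid flux is the block average of the product minus the product of the block averages. The sketch below applies it to one horizontal level of CRM-like 2-D fields; shapes and block size are illustrative, and shapes are assumed divisible by the block size.

```python
import numpy as np

def subgrid_wh(w, h, block):
    """Per-block sub-grid flux <w'h'> = <wh> - <w><h> for 2-D fields.

    Both fields must have shapes divisible by `block`."""
    def block_mean(f):
        b = f.reshape(f.shape[0] // block, block, f.shape[1] // block, block)
        return b.mean(axis=(1, 3))
    return block_mean(w * h) - block_mean(w) * block_mean(h)
```

    Repeating this for several block sizes gives the resolution-dependence curve that ⟨w'h'⟩_ZM is tested against in the study above.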

  15. Aerosol indirect effects in the ECHAM5-HAM2 climate model with subgrid cloud microphysics in a stochastic framework

    Science.gov (United States)

    Tonttila, Juha; Räisänen, Petri; Järvinen, Heikki

    2015-04-01

    Representing cloud properties in global climate models remains a challenging topic, which to a large extent is due to cloud processes acting on spatial scales much smaller than the typical model grid resolution. Several attempts have been made to alleviate this problem. One such method was introduced in the ECHAM5-HAM2 climate model by Tonttila et al. (2013), where cloud microphysical properties, along with the processes of cloud droplet activation and autoconversion, were computed using an ensemble of stochastic subcolumns within the climate model grid columns. Moreover, the subcolumns were sampled for radiative transfer using the Monte Carlo Independent Column Approximation approach. The same model version is used in this work (Tonttila et al. 2014), where 5-year nudged integrations are performed with a series of different model configurations. Each run is performed twice, once with pre-industrial (PI, year 1750) aerosol emission conditions and once with present-day (PD, year 2000) conditions, based on the AEROCOM emission inventories. The differences between PI and PD simulations are used to estimate the impact of anthropogenic aerosols on clouds and the aerosol indirect effect (AIE). One of the key results is that when both cloud activation and autoconversion are computed in the subcolumn space, the aerosol-induced PI-to-PD change in the global-mean liquid water path is up to 19 % smaller than in the reference with grid-scale computations. Together with similar changes in the cloud droplet number concentration, this influences the cloud radiative effects and thus the AIE, which is estimated as the difference in the net cloud radiative effect between PI and PD conditions. Accordingly, the AIE is reduced by 14 %, from 1.59 W m-2 in the reference model version to 1.37 W m-2 in the experimental model configuration. The results of this work explicitly show that careful consideration of the subgrid variability in cloud microphysical properties and consistent

  16. Assessment of sub-grid scale dispersion closure with regularized deconvolution method in a particle-laden turbulent jet

    Science.gov (United States)

    Wang, Qing; Zhao, Xinyu; Ihme, Matthias

    2017-11-01

    Particle-laden turbulent flows are important in numerous industrial applications, such as spray combustion engines and solar energy collectors. It is of interest to study this type of flow numerically, especially using large-eddy simulation (LES). However, capturing the turbulence-particle interaction in LES remains challenging due to the insufficient representation of the effect of sub-grid scale (SGS) dispersion. In the present work, a closure technique for the SGS dispersion using the regularized deconvolution method (RDM) is assessed. RDM was originally proposed as a closure for the SGS dispersion in a counterflow spray studied numerically using a finite-difference method on a structured mesh, with a presumed form of the LES filter. In the present study, the technique has been extended to a finite-volume method with an unstructured mesh, where no presumption on the filter form is required. The method is applied to a series of particle-laden turbulent jets. Parametric analyses of the model performance are conducted for flows with different Stokes numbers and Reynolds numbers. The results from LES are compared against experiments and direct numerical simulations (DNS).
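
    Deconvolution-based closures in their simplest form recover an approximation of the unfiltered field by iteratively inverting a known filter. The sketch below uses van Cittert iterations with a periodic three-point top-hat filter in 1-D; the filter choice, iteration count, and setup are assumptions for illustration (the paper's finite-volume version specifically avoids presuming a filter form, and its regularization is not shown here).

```python
import numpy as np

def box_filter(f):
    """Three-point periodic top-hat filter (the assumed filter G)."""
    return (np.roll(f, 1) + f + np.roll(f, -1)) / 3.0

def deconvolve(f_bar, n_iter=5):
    """van Cittert iteration: phi <- phi + (f_bar - G(phi)),
    starting from phi = f_bar, to approximately invert G."""
    phi = f_bar.copy()
    for _ in range(n_iter):
        phi += f_bar - box_filter(phi)
    return phi
```

    The deconvolved velocity field, rather than the filtered one, is then what drives the dispersed particles, restoring part of the SGS dispersion.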

  17. Sub-grid scale representation of vegetation in global land surface schemes: implications for estimation of the terrestrial carbon sink

    Directory of Open Access Journals (Sweden)

    J. R. Melton

    2014-02-01

    Terrestrial ecosystem models commonly represent vegetation in terms of plant functional types (PFTs) and use their vegetation attributes in calculations of the energy and water balance, as well as to investigate the terrestrial carbon cycle. Sub-grid scale variability of PFTs in these models is represented using different approaches, with the "composite" and "mosaic" approaches being the two end-members. The impact of these two approaches on the global carbon balance has been investigated with the Canadian Terrestrial Ecosystem Model (CTEM v1.2) coupled to the Canadian Land Surface Scheme (CLASS v3.6). In the composite (single-tile) approach, the vegetation attributes of the different PFTs present in a grid cell are aggregated and used to determine the resulting physical environmental conditions (soil moisture, soil temperature, etc.) that are common to all PFTs. In the mosaic (multi-tile) approach, energy and water balance calculations are performed separately for each PFT tile, and each tile's physical land surface environmental conditions evolve independently. Pre-industrial equilibrium CLASS-CTEM simulations yield global totals of vegetation biomass, net primary productivity, and soil carbon that compare reasonably well with observation-based estimates and differ by less than 5% between the mosaic and composite configurations. However, on a regional scale the two approaches can differ by >30%, especially in areas with high heterogeneity in land cover. Simulations over the historical period (1959–2005) show different responses to evolving climate and carbon dioxide concentrations between the two approaches. The cumulative global terrestrial carbon sink estimated over the 1959–2005 period (excluding land use change (LUC) effects) differs by around 5% between the two approaches (96.3 and 101.3 Pg C for the mosaic and composite approaches, respectively) and compares well with the observation-based estimate of 82.2 ± 35 Pg C over the same
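
    Why the two end-members diverge is a one-line consequence of nonlinearity: aggregating attributes before evaluating a nonlinear flux (composite) is not the same as evaluating per tile and area-weighting the fluxes (mosaic). The quadratic flux and the numbers below are purely illustrative, not CLASS-CTEM physics.

```python
def composite_flux(fracs, states, flux):
    """Composite: aggregate the state first, then evaluate one flux."""
    return flux(sum(f * s for f, s in zip(fracs, states)))

def mosaic_flux(fracs, states, flux):
    """Mosaic: evaluate the flux per tile, then area-weight the results."""
    return sum(f * flux(s) for f, s in zip(fracs, states))
```

    For any convex flux the mosaic value exceeds the composite one (Jensen's inequality), which is why the gap grows in grid cells with heterogeneous land cover.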

  18. A nonlinear structural subgrid-scale closure for compressible MHD Part I: derivation and energy dissipation properties

    CERN Document Server

    Vlaykov, Dimitar G; Schmidt, Wolfram; Schleicher, Dominik R G

    2016-01-01

    Compressible magnetohydrodynamic (MHD) turbulence is ubiquitous in astrophysical phenomena ranging from the intergalactic to the stellar scales. In studying these phenomena, numerical simulations are nearly inescapable, due to the large degree of nonlinearity involved. However, the dynamical ranges of these phenomena are much larger than what is computationally accessible. In large eddy simulations (LES), the resulting limited-resolution effects are addressed explicitly by introducing to the equations of motion additional terms associated with the unresolved, subgrid-scale (SGS) dynamics. This renders the system unclosed. We derive a set of nonlinear structural closures for the ideal MHD LES equations with particular emphasis on the effects of compressibility. The closures are based on a gradient expansion of the finite-resolution operator (W.K. Yeo CUP 1993, ed. Galperin & Orszag) and require no assumptions about the nature of the flow or magnetic field. Thus the scope of their applicability ranges from the sub- to ...

  19. Modelling sub-grid wetland in the ORCHIDEE global land surface model: evaluation against river discharges and remotely sensed data

    Directory of Open Access Journals (Sweden)

    B. Ringeval

    2012-07-01

    The quality of the global hydrological simulations performed by land surface models (LSMs) strongly depends on processes that occur at unresolved spatial scales. Approaches such as TOPMODEL have been developed, which allow soil moisture redistribution within each grid cell based upon sub-grid scale topography. Moreover, coupling TOPMODEL with an LSM appears to be a potential way to simulate wetland extent dynamics and their sensitivity to climate, a recently identified research problem for biogeochemical modelling, including methane emissions. Global evaluation of the coupling between TOPMODEL and an LSM is difficult, and prior attempts have been indirect, based on the evaluation of the simulated river flow. This study presents a new way to evaluate this coupling, within the ORCHIDEE LSM, using remote sensing data of inundated areas. Because of the difference in nature between the satellite-derived information (inundation extent) and the variable diagnosed by TOPMODEL/ORCHIDEE (area at maximum soil water content), the evaluation focuses on the spatial distribution of these two quantities as well as on their temporal variation. Despite some difficulties in exactly matching observed localized inundated events, we obtain rather good agreement in the distribution of these two quantities at the global scale. Floodplains are not accounted for in the model, and this is a major limitation. The difficulty of reproducing the year-to-year variability of the observed inundated area (for instance, the decreasing trend at the end of the 1990s) is also underlined. Classical indirect evaluation based on comparison between simulated and observed river flow is also performed and underlines difficulties in simulating river flow after coupling with TOPMODEL. The relationship between inundation and river flow at the basin scale in the model is analyzed using both methods (evaluation against remote sensing data and river flow). Finally, we discuss the potential of
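
    The TOPMODEL step at the heart of such a coupling can be sketched as a threshold on the sub-grid topographic wetness index: a point is saturated where its local moisture deficit vanishes, i.e. where its index exceeds the grid-cell mean index plus the mean deficit scaled by the decay parameter m. The index values, mean deficit, and m below are illustrative, not ORCHIDEE parameters.

```python
import numpy as np

def saturated_fraction(twi, s_bar, m):
    """Fraction of sub-grid points saturated under TOPMODEL assumptions.

    twi   : sub-grid topographic wetness index values
    s_bar : grid-cell mean moisture deficit (>= 0)
    m     : exponential transmissivity-decay parameter"""
    threshold = twi.mean() + s_bar / m
    return float(np.mean(twi >= threshold))
```

    This saturated fraction is the "area at maximum soil water content" that the study compares against satellite-derived inundation extent.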

  20. Sub-grid combustion modeling for compressible two-phase reacting flows

    Science.gov (United States)

    Sankaran, Vaidyanathan

    2003-06-01

    A generic formulation for modeling turbulent combustion in compressible, high-Reynolds-number, two-phase reacting flows has been developed and validated. A sub-grid mixing/combustion model called the Linear Eddy Mixing (LEM) model has been extended to compressible flows and used within the framework of Large Eddy Simulation (LES) in this LES-LEM approach. The LES-LEM approach is based on the proposition that the basic mechanistic distinction between convective and molecular effects should be preserved for accurate prediction of complex flow-fields such as those encountered in many combustion systems. Liquid droplets (represented by computational parcels) are tracked using a Lagrangian approach, wherein Newton's equations of motion for the discrete particles are integrated explicitly in the Eulerian gas field. The gas-phase LES velocity fields are used to estimate the instantaneous gas velocity at each droplet location. Drag effects of the droplets on the gas phase and heat transfer between the gas and liquid phases are explicitly included, so full coupling is achieved between the two phases in the simulation. Validation of the compressible LES-LEM approach is conducted by simulating the flow-field in an operational General Electric Aircraft Engines combustor (LM6000). The results predicted using the proposed approach compare well with experiments and with a conventional (G-equation) thin-flame model. The particle tracking algorithms used in the present study are validated by simulating droplet-laden temporal mixing layers; quantitative and qualitative comparison with the results of spectral DNS shows good agreement. Simulations using the current LES-LEM for a freely propagating partially premixed flame in a droplet-laden isotropic turbulent field correctly capture the flame structure in partially premixed flames. Due to the strong spatial variation of equivalence ratio, a broad flame similar to a premixed flame is realized. The current
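
    The Lagrangian parcel update described above reduces, in its simplest form, to sampling the gas velocity at the droplet position and explicitly integrating a drag equation of motion. The sketch below uses linear Stokes drag in 1-D with an explicit Euler step; the interpolation, drag law, time step, and omission of two-way coupling and heat transfer are all simplifying assumptions.

```python
def step_droplet(x, v, gas_velocity, tau_p, dt):
    """One explicit step of dv/dt = (u_gas(x) - v) / tau_p, dx/dt = v.

    gas_velocity : callable returning the gas velocity at position x
    tau_p        : droplet aerodynamic response time"""
    u = gas_velocity(x)               # gas velocity at droplet location
    v_new = v + dt * (u - v) / tau_p  # Stokes-drag acceleration
    x_new = x + dt * v_new            # advance the droplet position
    return x_new, v_new
```

    In a uniform gas stream the droplet velocity relaxes to the gas velocity on the time scale tau_p, which is the basic behavior the full coupled solver must also reproduce.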

  1. Use of fundamental condensation heat transfer experiments for the development of a sub-grid liquid jet condensation model

    Energy Technology Data Exchange (ETDEWEB)

    Buschman, Francis X., E-mail: Francis.Buschman@unnpp.gov; Aumiller, David L.

    2017-02-15

    Highlights:
    • Direct contact condensation data on liquid jets up to 1.7 MPa in pure steam and in the presence of noncondensable gas.
    • Identified a pressure effect on the ability of noncondensables to suppress condensation heat transfer, not captured in existing data or correlations.
    • Pure-steam data are used to develop a new correlation for condensation heat transfer on subcooled liquid jets.
    • Noncondensable-gas data are used to develop a modification to the renewal-time estimate used in the Young and Bajorek correlation for condensation suppression in the presence of noncondensables.
    • A jet injection boundary condition, using a sub-grid jet condensation model, is developed for COBRA-IE, which provides a more detailed estimate of the condensation rate on the liquid jet and allows the use of jet-specific closure relationships.

    Abstract: Condensation on liquid jets is an important phenomenon for many facets of nuclear power plant transients and analyses, such as containment spray cooling. An experimental facility constructed at the Pennsylvania State University, the High Pressure Liquid Jet Condensation Heat Transfer (HPLJCHT) facility, has been used to perform steady-state condensation heat transfer experiments in which the temperature of the liquid jet is measured at different axial locations, allowing the condensation rate to be determined over the jet length. Test data have been obtained in a pure-steam environment and with varying concentrations of noncondensable gas. These data extend the available jet condensation data from near-atmospheric pressure up to a pressure of 1.7 MPa. An empirical correlation for the liquid-side condensation heat transfer coefficient has been developed based on the data obtained in pure steam. The data obtained with noncondensable gas were used to develop a correlation for the renewal time as used in the condensation suppression model developed by Young and Bajorek. This paper describes a new sub-grid liquid jet

  2. From Detailed Description of Chemical Reacting Carbon Particles to Subgrid Models for CFD

    Directory of Open Access Journals (Sweden)

    Schulze S.

    2013-04-01

    Full Text Available This work is devoted to the development and validation of a sub-model for the partial oxidation of a spherical char particle moving in an air/steam atmosphere. The particle diameter is 2 mm. The coal particle is represented by moisture- and ash-free nonporous carbon, while the coal rank is implemented using semi-global reaction rate expressions taken from the literature. The sub-model includes six gaseous chemical species (O2, CO2, CO, H2O, H2, N2). Three heterogeneous reactions are employed, along with two homogeneous semi-global reactions, namely carbon monoxide oxidation and the water-gas shift reaction. The distinguishing feature of the subgrid model is that it takes into account the influence of homogeneous reactions on integral characteristics such as carbon combustion rates and particle temperature. The sub-model was validated by comparing its results with a comprehensive CFD-based model resolving the bulk flow and boundary layer around the particle. In this model, the Navier-Stokes equations coupled with the energy and species conservation equations were used to solve the problem by means of the pseudo-steady state approach. At the surface of the particle, the balance of mass, energy and species concentration was applied, including the effect of the Stefan flow and heat loss due to radiation at the surface of the particle. Good agreement was achieved between the sub-model and the CFD-based model. Additionally, the CFD-based model was verified against experimental data published in the literature (Makino et al. (2003) Combust. Flame 132, 743-753). Good agreement was achieved between numerically predicted and experimentally obtained data for input conditions corresponding to the kinetically controlled regime. The maximal discrepancy (10%) between the experiments and the numerical results was observed in the diffusion-controlled regime.
Finally, we discuss the influence of the Reynolds number, the ambient O2 mass fraction and the ambient

  3. Final Report: Systematic Development of a Subgrid Scaling Framework to Improve Land Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dickinson, Robert Earl [Univ. of Texas, Austin, TX (United States)

    2016-07-11

    We carried out research to develop improvements to the land component of climate models and to understand the role of land in climate variability and change. A highlight was the development of a 3D canopy radiation model. More than a dozen publications resulted.

  4. Model Validation for Propulsion - On the TFNS and LES Subgrid Models for a Bluff Body Stabilized Flame

    Science.gov (United States)

    Wey, Thomas

    2017-01-01

    With advances in computational power and the availability of distributed computers, the use of even the most complex turbulence-chemistry interaction models in combustors, and coupled analysis of combustors and turbines, is now possible and increasingly affordable for realistic geometries. More stringent emission standards have spurred the development of fuel-efficient, low-emission combustion systems for aircraft gas turbine applications. It is known that NOx emissions tend to increase dramatically with increasing flame temperature. It is also well known that the major difficulty in modeling the turbulence-chemistry interaction lies in the high non-linearity of the reaction rate expressed in terms of the temperature and species mass fractions. The transported filtered density function (FDF) model and the linear eddy model (LEM), which both use local instantaneous values of the temperature and mass fractions, have been shown to often provide more accurate results for turbulent combustion. In the present work, the time-filtered Navier-Stokes (TFNS) approach, capable of capturing unsteady flow structures important for turbulent mixing in the combustion chamber, and two different subgrid models, LEM-like and EUPDF-like, capable of emulating the major processes occurring in the turbulence-chemistry interaction, are used to perform reacting flow simulations of a selected test case. The selected test case from the Volvo Validation Rig was documented by Sjunnesson.

  5. Simulation of subgrid orographic precipitation with an embedded 2-D cloud-resolving model

    Science.gov (United States)

    Jung, Joon-Hee; Arakawa, Akio

    2016-03-01

    By explicitly resolving cloud-scale processes with embedded two-dimensional (2-D) cloud-resolving models (CRMs), superparameterized global atmospheric models have successfully simulated various atmospheric events over a wide range of time scales. Up to now, however, such models have not included the effects of topography on the CRM grid scale. We have used both 3-D and 2-D CRMs to simulate the effects of topography with prescribed "large-scale" winds. The 3-D CRM is used as a benchmark. The results show that the mean precipitation can be simulated reasonably well by using a 2-D representation of topography as long as the statistics of the topography such as the mean and standard deviation are closely represented. It is also shown that the use of a set of two perpendicular 2-D grids can significantly reduce the error due to a 2-D representation of topography.

  6. A Subgrid Parameterization for Wind Turbines in Weather Prediction Models with an Application to Wind Resource Limits

    Directory of Open Access Journals (Sweden)

    B. H. Fiedler

    2014-01-01

    Full Text Available A subgrid parameterization is offered for representing wind turbines in weather prediction models. The parameterization models the drag and mixing the turbines cause in the atmosphere, as well as the electrical power production the wind causes in the wind turbines. The documentation of the parameterization is complete; it does not require knowledge of proprietary data of wind turbine characteristics. The parameterization is applied to a study of wind resource limits in a hypothetical giant wind farm. The simulated production density was found not to exceed 1 W m−2, peaking at a deployed capacity density of 5 W m−2 and decreasing slightly as capacity density increased to 20 W m−2.
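The drag and power bookkeeping described in this record can be sketched as follows. This is a minimal illustration under an actuator-disc assumption; the rotor area, thrust and power coefficients, and all function and parameter names are chosen for the example, not taken from the published parameterization.

```python
import math

def turbine_cell_tendency(u, v, rho, n_turbines, cell_area, layer_depth,
                          rotor_area=5027.0, c_t=0.8, c_p=0.4):
    """Toy subgrid wind-farm parameterization for a single grid cell.

    Returns the wind-component tendencies (m s^-2) due to rotor drag and
    the electrical power production density (W m^-2) of the cell.

    u, v        : cell wind components at hub height (m/s)
    rho         : air density (kg/m^3)
    n_turbines  : number of turbines inside the cell
    cell_area   : horizontal cell area (m^2)
    layer_depth : depth of the model layer holding the rotors (m)
    rotor_area  : swept rotor area per turbine (m^2), ~80 m rotor diameter
    c_t, c_p    : assumed constant thrust and power coefficients
    """
    speed = math.hypot(u, v)
    if speed == 0.0:
        return 0.0, 0.0, 0.0
    # Total thrust of all rotors in the cell (actuator-disc drag law)
    thrust = n_turbines * 0.5 * rho * c_t * rotor_area * speed ** 2
    # Spread the momentum sink over the air mass of the model layer
    accel = thrust / (rho * cell_area * layer_depth)
    dudt = -accel * u / speed
    dvdt = -accel * v / speed
    # Electrical power extracted, per unit ground area
    power_density = (n_turbines * 0.5 * rho * c_p * rotor_area * speed ** 3
                     / cell_area)
    return dudt, dvdt, power_density
```

With 25 turbines in a 3 km × 3 km cell this gives a deployed production density of a few W m−2, the same order as the limit discussed in the record.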

  7. Impact of an additional radiative CO2 cooling induced by subgrid-scale gravity waves in the middle and upper atmosphere

    Science.gov (United States)

    Medvedev, A. S.; Yigit, E.; Kutepov, A.; Feofilov, A.

    2011-12-01

    Atmospheric fluctuations produced by GWs are a substantial source of momentum and energy in the thermosphere (Yigit et al., 2009). These fluctuations also affect radiative transfer and, ultimately, the radiative heating/cooling rates. Recently, Kutepov et al. (2007) developed a methodology to account for radiative effects of subgrid-scale GWs not captured by general circulation models (GCMs). It has been extended by Kutepov et al. (2011) to account not only for wave-induced variations of temperature, but also of CO2 and atomic oxygen. It was shown that these GWs can cause additional cooling of up to 3 K/day around the mesopause. A key parameter for calculating the additional cooling is the temperature variance associated with GWs, which is a byproduct of conventional GW schemes. In this study, the parameterization of Kutepov et al. (2011) has been implemented into a 3-D comprehensive GCM that incorporates the effects of unresolved GWs via the extended nonlinear scheme of Yigit et al. (2008). Simulated net effects of the additional radiative CO2 cooling on the temperature and wind in the mesosphere and lower thermosphere are presented and discussed for solstice conditions. 1. Kutepov, A. A., A. G. Feofilov, A. S. Medvedev, A. W. A. Pauldrach, and P. Hartogh (2007), Geophys. Res. Lett. 34, L24807, doi:10.1029/2007GL032392. 2. Kutepov, A. A., A. G. Feofilov, A. S. Medvedev, U. Berger, and M. Kaufmann (2011), submitted to Geophys. Res. Lett. 3. Yigit, E., A. D. Aylward, and A. S. Medvedev (2008), J. Geophys. Res., 113, D19106, doi:10.1029/2008JD010135. 4. Yigit, E., A. S. Medvedev, A. D. Aylward, P. Hartogh, and M. J. Harris (2009), J. Geophys. Res., 114, D07101, doi:10.1029/2008JD011132.

  8. USING CMAQ FOR EXPOSURE MODELING AND CHARACTERIZING THE SUB-GRID VARIABILITY FOR EXPOSURE ESTIMATES

    Science.gov (United States)

    Atmospheric processes and the associated transport and dispersion of atmospheric pollutants are known to be highly variable in time and space. Current air quality models that characterize atmospheric chemistry effects, e.g. the Community Multi-scale Air Quality (CMAQ), provide vo...

  9. A New Approach to Validate Subgrid Models in Complex High Reynolds Number Flows

    Science.gov (United States)

    1994-05-01

    data are also shown. These figures show the characteristic decrease in correlation when the grid is coarsened, with the scale similarity model showing...

  10. A subgrid parameterization scheme for precipitation

    Directory of Open Access Journals (Sweden)

    S. Turner

    2012-04-01

    Full Text Available With increasing computing power, the horizontal resolution of numerical weather prediction (NWP models is improving and today reaches 1 to 5 km. Nevertheless, clouds and precipitation formation are still subgrid scale processes for most cloud types, such as cumulus and stratocumulus. Subgrid scale parameterizations for water vapor condensation have been in use for many years and are based on a prescribed probability density function (PDF of relative humidity spatial variability within the model grid box, thus providing a diagnosis of the cloud fraction. A similar scheme is developed and tested here. It is based on a prescribed PDF of cloud water variability and a threshold value of liquid water content for droplet collection to derive a rain fraction within the model grid. Precipitation of rainwater raises additional concerns relative to the overlap of cloud and rain fractions, however. The scheme is developed following an analysis of data collected during field campaigns in stratocumulus (DYCOMS-II and fair weather cumulus (RICO and tested in a 1-D framework against large eddy simulations of these observed cases. The new parameterization is then implemented in a 3-D NWP model with a horizontal resolution of 2.5 km to simulate real cases of precipitating cloud systems over France.
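The core diagnostic of such a scheme — the fraction of the grid box where cloud water exceeds the collection threshold, obtained by integrating a prescribed PDF — can be sketched as follows. The exponential PDF and the variable names are illustrative assumptions for the sketch; the published scheme prescribes its own PDF from the campaign data.

```python
import math

def rain_fraction_exponential(qc_mean, qc_crit):
    """Rain fraction for an assumed exponential subgrid PDF of cloud water.

    qc_mean : grid-box mean cloud water content (kg/kg)
    qc_crit : threshold liquid water content for droplet collection (kg/kg)

    With p(q) = exp(-q / qc_mean) / qc_mean, the area fraction where the
    local cloud water exceeds qc_crit integrates in closed form to
    exp(-qc_crit / qc_mean).
    """
    if qc_mean <= 0.0:
        return 0.0  # no cloud water, no rain fraction
    return math.exp(-qc_crit / qc_mean)
```

A grid box whose mean cloud water equals the threshold thus rains over exp(−1) ≈ 37% of its area under this assumed PDF, rather than all-or-nothing as in a scheme without subgrid variability.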

  11. Renormalization-group theory for the eddy viscosity in subgrid modeling

    Science.gov (United States)

    Zhou, YE; Vahala, George; Hossain, Murshed

    1988-01-01

    Renormalization-group theory is applied to incompressible three-dimensional Navier-Stokes turbulence so as to eliminate unresolvable small scales. The renormalized Navier-Stokes equation now includes a triple nonlinearity with the eddy viscosity exhibiting a mild cusp behavior, in qualitative agreement with the test-field model results of Kraichnan. For the cusp behavior to arise, not only is the triple nonlinearity necessary but the effects of pressure must be incorporated in the triple term. The renormalized eddy viscosity will not exhibit a cusp behavior if it is assumed that a spectral gap exists between the large and small scales.

  12. Numerical Simulation of Atmospheric Boundary Layer Flow Over Battlefield-scale Complex Terrain: Surface Fluxes From Resolved and Subgrid Scales

    Science.gov (United States)

    2015-07-06

    Grimmond, 2015: Proc. 9th International Conference on Urban Climate, Paris, France. Anderson W, Li Q, Bou-Zeid E, 2014: Proc. of American...representative information is known about the macroscale attributes of these coherent motions, we have developed a simple, semi-empirical model to...dust from arid landscapes on the Llano Estacado in west Texas and eastern New Mexico. • Under Review: National Science Foundation, Fluid Dynamics Program

  13. An investigation of the sub-grid variability of trace gases and aerosols for global climate modeling

    Directory of Open Access Journals (Sweden)

    Y. Qian

    2010-07-01

    Full Text Available One fundamental property and limitation of grid based models is their inability to identify spatial details smaller than the grid cell size. While decades of work have gone into developing sub-grid treatments for clouds and land surface processes in climate models, the quantitative understanding of sub-grid processes and variability for aerosols and their precursors is much poorer. In this study, WRF-Chem is used to simulate the trace gases and aerosols over central Mexico during the 2006 MILAGRO field campaign, with multiple spatial resolutions and emission/terrain scenarios. Our analysis focuses on quantifying the sub-grid variability (SGV of trace gases and aerosols within a typical global climate model grid cell, i.e. 75×75 km2.

    Our results suggest that a simulation with 3-km horizontal grid spacing adequately reproduces the overall transport and mixing of trace gases and aerosols downwind of Mexico City, while 75-km horizontal grid spacing is insufficient to represent local emission and terrain-induced flows along the mountain ridge, subsequently affecting the transport and mixing of plumes from nearby sources. Therefore, the coarse model grid cell average may not correctly represent aerosol properties measured over polluted areas. Probability density functions (PDFs for trace gases and aerosols show that secondary trace gases and aerosols, such as O3, sulfate, ammonium, and nitrate, are more likely to have a relatively uniform probability distribution (i.e. smaller SGV over a narrow range of concentration values. Mostly inert and long-lived trace gases and aerosols, such as CO and BC, are more likely to have broad and skewed distributions (i.e. larger SGV over polluted regions. Over remote areas, all trace gases and aerosols are more uniformly distributed compared to polluted areas. Both CO and O3 SGV vertical profiles are nearly constant within the PBL during daytime, indicating that trace gases
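A simple way to quantify the sub-grid variability discussed above is to aggregate the fine-grid (e.g. 3-km) fields into blocks matching the coarse (75-km) cell and compute within-block statistics. The sketch below does this for the standard deviation; the array shapes and names are chosen for the example, not taken from the WRF-Chem analysis.

```python
import numpy as np

def subgrid_variability(field, block=25):
    """Within-block standard deviation of a fine-resolution 2-D field.

    Splitting e.g. a 3-km field into 25x25 blocks mimics the spread that a
    75-km grid cell cannot resolve. Both array dimensions must be evenly
    divisible by `block`.
    """
    ny, nx = field.shape
    # Reshape so each coarse cell's fine-grid pixels share two axes,
    # then reduce over those axes.
    blocks = field.reshape(ny // block, block, nx // block, block)
    return blocks.std(axis=(1, 3))
```

Applying this to, say, a simulated CO field would give one SGV value per coarse cell; a flat map of small values corresponds to the uniform PDFs the record describes over remote areas.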

  14. A sub-grid, mixture-fraction-based thermodynamic equilibrium model for gas phase combustion in FIRETEC: development and results

    Science.gov (United States)

    M. M. Clark; T. H. Fletcher; R. R. Linn

    2010-01-01

    The chemical processes of gas phase combustion in wildland fires are complex and occur at length-scales that are not resolved in computational fluid dynamics (CFD) models of landscape-scale wildland fire. A new approach for modelling fire chemistry in HIGRAD/FIRETEC (a landscape-scale CFD wildfire model) applies a mixture– fraction model relying on thermodynamic...

  15. Stochastic representation of the Reynolds transport theorem: revisiting large-scale modeling

    CERN Document Server

    Harouna, S Kadri

    2016-01-01

    We explore the potential of a formulation of the Navier-Stokes equations incorporating a random description of the small-scale velocity component. This model, established from a version of the Reynolds transport theorem adapted to a stochastic representation of the flow, gives rise to a large-scale description of the flow dynamics in which emerges an anisotropic subgrid tensor, reminiscent of the Reynolds stress tensor, together with a drift correction due to inhomogeneous turbulence. The corresponding subgrid model, which depends on the small-scale velocity variance, generalizes the Boussinesq eddy viscosity assumption. However, it is no longer obtained from an analogy with molecular dissipation but ensues rigorously from the random modeling of the flow. This principle allows us to propose several subgrid models defined directly on the resolved flow component. We assess and compare those models numerically on a standard Taylor-Green vortex flow at Reynolds number 1600. The numerical simulations, carried out w...

  16. Monte Carlo-based subgrid parameterization of vertical velocity and stratiform cloud microphysics in ECHAM5.5-HAM2

    Directory of Open Access Journals (Sweden)

    J. Tonttila

    2013-08-01

    Full Text Available A new method for parameterizing the subgrid variations of vertical velocity and cloud droplet number concentration (CDNC is presented for general circulation models (GCMs. These parameterizations build on top of existing parameterizations that create stochastic subgrid cloud columns inside the GCM grid cells, which can be employed by the Monte Carlo independent column approximation approach for radiative transfer. The new model version adds a description for vertical velocity in individual subgrid columns, which can be used to compute cloud activation and the subgrid distribution of the number of cloud droplets explicitly. Autoconversion is also treated explicitly in the subcolumn space. This provides a consistent way of simulating the cloud radiative effects with two-moment cloud microphysical properties defined at subgrid scale. The primary impact of the new parameterizations is to decrease the CDNC over polluted continents, while over the oceans the impact is smaller. Moreover, the lower CDNC induces a stronger autoconversion of cloud water to rain. The strongest reduction in CDNC and cloud water content over the continental areas promotes weaker shortwave cloud radiative effects (SW CREs even after retuning the model. However, compared to the reference simulation, a slightly stronger SW CRE is seen e.g. over mid-latitude oceans, where CDNC remains similar to the reference simulation, and the in-cloud liquid water content is slightly increased after retuning the model.
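The subcolumn sampling idea can be illustrated with a toy activation law. The Gaussian draw for vertical velocity and the saturation form N = n_ccn·w/(w + w_half) are assumptions made for this sketch only, not the ECHAM5.5-HAM2 formulation.

```python
import numpy as np

def subcolumn_cdnc(w_mean, w_std, n_ccn, n_subcolumns=100, w_half=0.4, seed=0):
    """Monte Carlo subcolumn estimate of the grid-mean cloud droplet number.

    Each stochastic subcolumn draws its own vertical velocity from a
    Gaussian, activates droplets with a toy saturation law
    N = n_ccn * w / (w + w_half) in updrafts, and the grid-mean CDNC is
    the average over all subcolumns.

    w_mean, w_std : mean and subgrid spread of vertical velocity (m/s)
    n_ccn         : available cloud condensation nuclei (m^-3)
    w_half        : assumed half-saturation velocity (m/s)
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(w_mean, w_std, n_subcolumns)
    w = np.clip(w, 0.0, None)  # downdrafts activate no droplets
    cdnc = n_ccn * w / (w + w_half)
    return float(cdnc.mean())
```

Because the activation law is concave in w, averaging over subcolumns gives a lower CDNC than evaluating it at the mean velocity in polluted (high n_ccn) conditions — the qualitative behavior the record reports over continents.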

  17. Effect of aerosol subgrid variability on aerosol optical depth and cloud condensation nuclei: Implications for global aerosol modelling

    NARCIS (Netherlands)

    Weigum, Natalie; Schutgens, Nick; Stier, Philip

    2016-01-01

    A fundamental limitation of grid-based models is their inability to resolve variability on scales smaller than a grid box. Past research has shown that significant aerosol variability exists on scales smaller than these grid-boxes, which can lead to discrepancies in simulated aerosol climate effects

  18. Evaluation of a Sub-Grid Topographic Drag Parameterizations for Modeling Surface Wind Speed During Storms Over Complex Terrain in the Northeast U.S.

    Science.gov (United States)

    Frediani, M. E.; Hacker, J.; Anagnostou, E. N.; Hopson, T. M.

    2015-12-01

    This study aims at improving regional simulation of 10-meter wind speed by verifying PBL schemes for storms at different scales, including convective storms, blizzards, tropical storms and nor'easters over complex terrain in the northeast U.S. We verify a recently proposed sub-grid topographic drag scheme in stormy conditions and compare it with two PBL schemes (Mellor-Yamada and Yonsei University) from WRF-ARW over a region in the Northeast U.S. The scheme was designed to adjust the surface drag over regions with high subgrid-scale topographic variability. The schemes are compared using spatial, temporal, and pattern criteria against surface observations. The spatial and temporal criteria are defined by season, diurnal cycle, and topography; the pattern criterion is based on clusters derived using clustering analysis. Results show that the drag scheme reduces the positive bias of low wind speeds, but over-corrects the high wind speeds, producing a negative bias that grows in magnitude with increasing speed. The two other schemes underestimate the most frequent low-speed mode and overestimate the high speeds. Error characteristics of all schemes respond to seasonal and diurnal cycle changes. The Topo-wind experiment shows the best agreement with the observation quantiles in summer and fall, the best representation of the diurnal cycle in these seasons, and reduces the bias of all surface stations near the coast. In more stable conditions the Topo-wind scheme shows a larger negative bias. The cluster analysis reveals a correlation between bias and mean speed in the Mellor-Yamada and Yonsei University schemes that is not present when the drag scheme is used. With the drag scheme, the bias instead correlates with wind direction, increasing when the meridional wind component is negative. This pattern corresponds to trajectories with more land interaction, with the highest biases found in northwest circulation clusters.

  19. Birefringent dispersive FDTD subgridding scheme

    OpenAIRE

    De Deckere, B; Van Londersele, Arne; De Zutter, Daniël; Vande Ginste, Dries

    2016-01-01

    A novel 2D finite difference time domain (FDTD) subgridding method is proposed, only subject to the Courant limit of the coarse grid. By making mu or epsilon inside the subgrid dispersive, unconditional stability is induced at the cost of a sparse, implicit set of update equations. By only adding dispersion along preferential directions, it is possible to dramatically reduce the rank of the matrix equation that needs to be solved.

  20. Discontinuous Galerkin Subgrid Finite Element Method for Heterogeneous Brinkman’s Equations

    KAUST Repository

    Iliev, Oleg P.

    2010-01-01

    We present a two-scale finite element method for solving Brinkman's equations with piece-wise constant coefficients. This system of equations models fluid flows in highly porous, heterogeneous media with complex topology of the heterogeneities. We make use of the recently proposed discontinuous Galerkin FEM for Stokes equations by Wang and Ye in [12] and the concept of subgrid approximation developed for Darcy's equations by Arbogast in [4]. In order to reduce the error along the coarse-grid interfaces we have added an alternating Schwarz iteration using patches around the coarse-grid boundaries. We have implemented the subgrid method using the Deal.II FEM library, [7], and we present the computational results for a number of model problems. © 2010 Springer-Verlag Berlin Heidelberg.

  1. Numerical Methods for the Optimization of Nonlinear Residual-Based Subgrid-Scale Models Using the Variational Germano Identity

    NARCIS (Netherlands)

    Maher, G.D.; Hulshoff, S.J.

    2014-01-01

    The Variational Germano Identity [1, 2] is used to optimize the coefficients of residual-based subgrid-scale models that arise from the application of a Variational Multiscale Method [3, 4]. It is demonstrated that numerical iterative methods can be used to solve the Germano relations to obtain

  2. Assessment of a flame surface density-based subgrid turbulent combustion model for nonpremixed flames of wood pyrolysis gas

    Science.gov (United States)

    Zhou, Xiangyang; Pakdee, Watit; Mahalingam, Shankar

    2004-10-01

    A flame surface density (FSD) model for closing the unresolved reaction source terms is developed and implemented in a large eddy simulation (LES) of turbulent nonpremixed flame of wood pyrolysis gas and air. In this model, the filtered reaction rate ω¯α of species α is estimated as the product of the consumption rate per unit surface area mα and the filtered FSD Σ¯. This approach is attractive since it decouples the complex chemical problem (mα) from the description of the turbulence combustion interaction (Σ¯). A simplified computational methodology is derived for filtered FSD Σ¯, which is approximated as the product of the conditional filtered gradient of mixture fraction and the filtered probability density function. Two models for flamelet consumption rate mα are proposed to consider the effect of filtered scalar dissipation rate. The performance of these models is assessed by direct numerical simulation (DNS) database where a laminar diffusion flame interacts with a decaying homogeneous and isotropic turbulent flow field. The chemistry is modeled by a four-step reduced mechanism that describes the oxidization process of gaseous fuel released from high temperature pyrolysis of wood occurring in a wildland fire. Two-dimensional (2D) and 3D LES computations based on the FSD models are conducted for the same conditions as the DNS. The comparative assessments confirm the applicability of the proposed FSD model to describe the filtered reaction rate and the time evolution of temperature and species concentration in the turbulent nonpremixed flame.

  3. Evaluation of LES models for flow over bluff body from engineering ...

    Indian Academy of Sciences (India)


    Results are also discussed keeping in view limitations of LES methodology of modelling for practical problems and current developments. It is concluded that a one-equation model for subgrid kinetic energy is the best choice. Keywords. Subgrid scale stress models; engineering flows; flow over bluff body.

  4. A Three-Dimensional Scale-adaptive Turbulent Kinetic Energy Model in ARW-WRF Model

    Science.gov (United States)

    Zhang, Xu; Bao, Jian-Wen; Chen, Baode

    2017-04-01

    A new three-dimensional (3D) turbulent kinetic energy (TKE) subgrid mixing model is developed to address the problem of simulating the convective boundary layer (CBL) across the terra incognita in the Advanced Research version of the Weather Research and Forecasting Model (ARW-WRF). The new model combines the horizontal and vertical subgrid turbulent mixing into a single energetically consistent framework, in contrast to the conventional one-dimensional (1D) planetary boundary layer (PBL) schemes. The transition between large-eddy simulation (LES) and the mesoscale limit is accomplished in the new scale-adaptive model. A series of dry CBL and real-time simulations using the WRF model are carried out, in which the newly developed, scale-adaptive, more general and energetically consistent TKE-based model is compared with the conventional 1D TKE-based PBL schemes for parameterizing vertical subgrid turbulent mixing against the WRF LES dataset and observations. The characteristics of the WRF-simulated results using the new and conventional schemes are compared. The importance of including the nonlocal component in the vertical buoyancy specification in the newly developed general TKE-based scheme is illustrated. The improvements of the new scheme over conventional PBL schemes across the terra incognita can be seen in the partitioning of vertical flux profiles. By comparing the results from the simulations against the WRF LES dataset and observations, we show the feasibility of using the new scheme in the WRF model in lieu of the conventional PBL parameterization schemes.

  5. Genome-Scale Models

    DEFF Research Database (Denmark)

    Bergdahl, Basti; Sonnenschein, Nikolaus; Machado, Daniel

    2016-01-01

    An introduction to genome-scale models, how to build and use them, will be given in this chapter. Genome-scale models have become an important part of systems biology and metabolic engineering, and are increasingly used in research, both in academia and in industry, both for modeling chemical...

  6. Operational forecasting with the subgrid technique on the Elbe Estuary

    Science.gov (United States)

    Sehili, Aissa

    2017-04-01

    Modern remote sensing technologies can deliver very detailed land surface height data that should be considered for more accurate simulations. In that case, and even if some compromise is made with regard to grid resolution of an unstructured grid, simulations still will require large grids which can be computationally very demanding. The subgrid technique, first published by Casulli (2009), is based on the idea of making use of the available detailed subgrid bathymetric information while performing computations on relatively coarse grids permitting large time steps. Consequently, accuracy and efficiency are drastically enhanced if compared to the classical linear method, where the underlying bathymetry is solely discretized by the computational grid. The algorithm guarantees rigorous mass conservation and nonnegative water depths for any time step size. Computational grid-cells are permitted to be wet, partially wet or dry and no drying threshold is needed. The subgrid technique is used in an operational forecast model for water level, current velocity, salinity and temperature of the Elbe estuary in Germany. Comparison is performed with the comparatively highly resolved classical unstructured grid model UnTRIM. The daily meteorological forcing data are delivered by the German Weather Service (DWD) using the ICON-EU model. Open boundary data are delivered by the coastal model BSHcmod of the German Federal Maritime and Hydrographic Agency (BSH). Comparison of predicted water levels between classical and subgrid model shows a very good agreement. The speedup in computational performance due to the use of the subgrid technique is about a factor of 20. A typical daily forecast can be carried out within less than 10 minutes on standard PC-like hardware. The model is capable of permanently delivering highly resolved temporal and spatial information on water level, current velocity, salinity and temperature for the whole estuary. 
The model also offers the possibility to
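The central bookkeeping of the subgrid technique — evaluating a coarse cell's wet volume from its detailed bathymetry rather than from a single cell-mean depth — can be sketched as follows. This shows the volume computation only, with assumed names; it is not the operational Elbe implementation.

```python
import numpy as np

def cell_wet_volume(eta, subgrid_bed, pixel_area):
    """Wet volume of one coarse grid cell from its subgrid bathymetry.

    eta         : free-surface elevation in the cell (m)
    subgrid_bed : bed elevations of the subgrid pixels inside the cell (m)
    pixel_area  : horizontal area of one subgrid pixel (m^2)

    Clipping the depths at zero lets the cell be wet, partially wet or
    dry without any drying threshold, and keeps the volume nonnegative.
    """
    depth = np.clip(eta - np.asarray(subgrid_bed), 0.0, None)
    return float(depth.sum() * pixel_area)
```

In the actual method, volumes of this form enter the mass-conservative continuity equation of Casulli (2009), so the fine bathymetry contributes even though momentum is solved on the coarse grid.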

  7. Large Eddy Simulation of an SD7003 Airfoil: Effects of Reynolds number and Subgrid-scale modeling

    DEFF Research Database (Denmark)

    Sarlak Chivaee, Hamid

    2017-01-01

    This paper presents results of a series of numerical simulations in order to study aerodynamic characteristics of the low Reynolds number Selig-Donovan airfoil, SD7003. Large Eddy Simulation (LES) technique is used for all computations at chord-based Reynolds numbers 10,000, 24,000 and 60...

  8. A first large-scale flood inundation forecasting model

    Science.gov (United States)

    Schumann, G. J.-P.; Neal, J. C.; Voisin, N.; Andreadis, K. M.; Pappenberger, F.; Phanthuwongpakdee, N.; Hall, A. C.; Bates, P. D.

    2013-10-01

    At present, continental to global scale flood forecasting predicts discharge at a point, with little attention to the detail and accuracy of local scale inundation predictions. Yet, inundation variables are of interest and all flood impacts are inherently local in nature. This paper proposes a large-scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data scarce areas. The model was built for the Lower Zambezi River to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. ECMWF ensemble forecast (ENS) data were used to force the VIC (Variable Infiltration Capacity) hydrologic model, which simulated and routed daily flows to the input boundary locations of a 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of channels that play a key role in flood wave propagation. We therefore employed a novel subgrid channel scheme to describe the river network in detail while representing the floodplain at an appropriate scale. The modeling system was calibrated using channel water levels from satellite laser altimetry and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of between one and two model resolutions of an observed flood edge, and inundation area agreement was on average 86%. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2.

  9. Workshop on Human Activity at Scale in Earth System Models

    Energy Technology Data Exchange (ETDEWEB)

    Allen, Melissa R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Aziz, H. M. Abdul [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Coletti, Mark A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kennedy, Joseph H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nair, Sujithkumar S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Omitaomu, Olufemi A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-01-26

    Changing human activity within a geographical location may have significant influence on the global climate, but that activity must be parameterized in such a way as to allow these high-resolution sub-grid processes to affect global climate within the modeling framework. Additionally, we must have tools that provide decision support and inform local and regional policies regarding mitigation of and adaptation to climate change. The development of next-generation earth system models that can produce actionable results with minimal uncertainties depends on understanding global climate change and human activity interactions at policy implementation scales. Unfortunately, at best we currently have only limited schemes for relating high-resolution sectoral emissions to real-time weather, which ultimately becomes part of larger regions and a well-mixed atmosphere. Moreover, even our understanding of meteorological processes at these scales is imperfect. This workshop addresses these shortcomings by providing a forum for discussion of what we know about these processes, what we can model, where we have gaps in these areas, and how we can rise to the challenge of filling these gaps.

  10. A first large-scale flood inundation forecasting model

    Energy Technology Data Exchange (ETDEWEB)

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie; Andreadis, Konstantinos M.; Pappenberger, Florian; Phanthuwongpakdee, Kay; Hall, Amanda C.; Bates, Paul D.

    2013-11-04

    At present continental to global scale flood forecasting focusses on predicting discharge at a point, with little attention to the detail and accuracy of local scale inundation predictions. Yet, inundation is actually the variable of interest and all flood impacts are inherently local in nature. This paper proposes a first large scale flood inundation ensemble forecasting model that uses best available data and modeling approaches in data-scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170k km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast mode

  11. The effects of the sub-grid variability of soil and land cover data on agricultural droughts in Germany

    Science.gov (United States)

    Kumar, Rohini; Samaniego, Luis; Zink, Matthias

    2013-04-01

    Simulated soil moisture from land surface or water balance models is increasingly used to characterize and/or monitor the development of agricultural droughts at regional and global scales (e.g. NLADS, EDO, GLDAS). The skill of these models to accurately replicate hydrologic fluxes and state variables is strongly dependent on the quality of the meteorological forcings, the conceptualization of dominant processes, and the parameterization scheme used to incorporate the variability of land surface properties (e.g. soil, topography, and vegetation) at coarser spatial resolutions (e.g. at least 4 km). The goal of this study is to analyze the effects of the sub-grid variability of soil texture and land cover properties on agricultural drought statistics such as duration, severity, and areal extent. For this purpose, a process-based mesoscale hydrologic model (mHM) is used to create two sets of daily soil moisture fields over Germany at the spatial resolution of (4 × 4) km2 from 1950 to 2011. These simulations differ from each other only in the manner in which the land surface properties are accounted for within the model. In the first set, soil moisture fields are obtained with the multiscale parameter regionalization (MPR) scheme (Samaniego et al. 2010, Kumar et al. 2012), which explicitly takes the sub-grid variability of soil texture and land cover properties into account. In the second set, on the contrary, a single dominant soil and land cover class is used for every grid cell at 4 km. Within each set, the propagation of parameter uncertainty into the soil moisture simulations is also evaluated using an ensemble of the 100 best global parameter sets of mHM (Samaniego et al. 2012). To ensure comparability, both sets of ensemble simulations are forced with the same fields of meteorological variables (e.g., precipitation, temperature, and potential evapotranspiration).
Results indicate that both sets of model simulations, with and without the sub-grid variability of
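The contrast between the two simulation sets above can be illustrated with a toy aggregation step. The sketch below is our own (class fractions and porosity values are made up, not taken from the study, and plain area-weighting only stands in for the more elaborate MPR regularization): it compares a dominant-class parameter against one that retains the sub-grid class fractions for a single 4 km cell.

```python
import numpy as np

def dominant_class(fractions, values):
    """Use the parameter of the class covering the largest area fraction."""
    return values[int(np.argmax(fractions))]

def area_weighted(fractions, values):
    """Weight each class parameter by its sub-grid area fraction."""
    fractions = np.asarray(fractions, dtype=float)
    return float(np.dot(fractions / fractions.sum(), values))

# Example: a 4 km cell that is 50% sand, 30% loam, 20% clay
frac = [0.5, 0.3, 0.2]
porosity = [0.35, 0.45, 0.50]  # illustrative porosities per class

print(dominant_class(frac, porosity))  # 0.35 (sand only)
print(area_weighted(frac, porosity))   # 0.41 (keeps sub-grid variability)
```

Even in this toy, the two effective parameters differ noticeably, which is the mechanism behind the diverging drought statistics the abstract describes.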

  12. Physical modelling of interactions between interfaces and turbulence; Modelisation physique des interactions entre interfaces et turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Toutant, A

    2006-12-15

    The complex interactions between interfaces and turbulence strongly impact the flow properties. Unfortunately, Direct Numerical Simulations (DNS) have to entail a number of degrees of freedom proportional to the third power of the Reynolds number to correctly describe the flow behaviour. This extremely hard constraint makes it impossible to use DNS for industrial applications. Our strategy consists of using and improving the DNS method in order to develop the Interfaces and Sub-grid Scales (ISS) concept. ISS is a two-phase equivalent of the single-phase Large Eddy Simulation (LES) concept. The challenge of ISS is to integrate the two-way coupling phenomenon into sub-grid models. Applying a space filter, we have exhibited correlations, or sub-grid terms, that require closures. We have shown that, in two-phase flows, the presence of a discontinuity leads to specific sub-grid terms. Comparing the maximum of the norm of the sub-grid terms with the maximum of the norm of the advection tensor, we have found that the sub-grid terms related to interfacial forces and viscous effects are negligible. Consequently, in the momentum balance, only the sub-grid terms related to inertia have to be closed. Thanks to a priori tests performed on several DNS data sets, we demonstrate that the scale similarity hypothesis, reinterpreted near a discontinuity, provides sub-grid models that take into account the two-way coupling phenomenon. These models correspond to the first step of our work. Indeed, in this step, interfaces are smooth and interactions between interfaces and turbulence occur in a transition zone where each physical variable varies sharply but continuously. The next challenge has been to determine the jump conditions across the sharp equivalent interface corresponding to the sub-grid models of the transition zone. We have used the matched asymptotic expansion method to obtain the jump conditions. The first tests on the velocity of the sharp equivalent interface are very promising (author)

  13. International Symposia on Scale Modeling

    CERN Document Server

    Ito, Akihiko; Nakamura, Yuji; Kuwana, Kazunori

    2015-01-01

    This volume thoroughly covers scale modeling and serves as the definitive source of information on scale modeling as a powerful simplifying and clarifying tool used by scientists and engineers across many disciplines. The book elucidates techniques used when it would be too expensive, or too difficult, to test a system of interest in the field. Topics addressed in the current edition include scale modeling to study weather systems, diffusion of pollution in air or water, chemical process in 3-D turbulent flow, multiphase combustion, flame propagation, biological systems, behavior of materials at nano- and micro-scales, and many more. This is an ideal book for students, both graduate and undergraduate, as well as engineers and scientists interested in the latest developments in scale modeling. This book also: enables readers to evaluate essential and salient aspects of profoundly complex systems, mechanisms, and phenomena at scale; offers engineers and designers a new point of view, liberating creative and inno...

  14. A global data set of soil hydraulic properties and sub-grid variability of soil water retention and hydraulic conductivity curves

    Science.gov (United States)

    Montzka, Carsten; Herbst, Michael; Weihermüller, Lutz; Verhoef, Anne; Vereecken, Harry

    2017-07-01

    Agroecosystem models, regional and global climate models, and numerical weather prediction models require adequate parameterization of soil hydraulic properties. These properties are fundamental for describing and predicting water and energy exchange processes at the transition zone between solid earth and atmosphere, and regulate evapotranspiration, infiltration and runoff generation. Hydraulic parameters describing the soil water retention (WRC) and hydraulic conductivity (HCC) curves are typically derived from soil texture via pedotransfer functions (PTFs). Resampling of those parameters for specific model grids is typically performed by different aggregation approaches such as spatial averaging and the use of dominant textural properties or soil classes. These aggregation approaches introduce uncertainty, bias and parameter inconsistencies throughout spatial scales due to nonlinear relationships between hydraulic parameters and soil texture. Therefore, we present a method to scale hydraulic parameters to individual model grids and provide a global data set that overcomes the mentioned problems. The approach is based on Miller-Miller scaling in the relaxed form by Warrick, which fits the parameters of the WRC through all sub-grid WRCs to provide an effective parameterization for the grid cell at model resolution; at the same time it preserves the information on sub-grid variability of the water retention curve by deriving local scaling parameters. Based on the Mualem-van Genuchten approach we also derive the unsaturated hydraulic conductivity from the water retention functions, thereby assuming that the local parameters are also valid for this function. In addition, via the Warrick scaling parameter λ, information on global sub-grid scaling variance is given that enables modellers to improve dynamical downscaling of (regional) climate models or to perturb hydraulic parameters for model ensemble output generation. The present analysis is based on the ROSETTA PTF
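The scaling idea can be sketched with the van Genuchten retention curve: a local sub-grid curve is represented by stretching the pressure-head axis of one reference curve with a single local factor. The parameter values below are illustrative, not taken from the data set, and the sketch simplifies the relaxed Warrick form to a plain head-scaling.

```python
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at suction head h (h >= 0, e.g. in cm)."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * h) ** n) ** (-m)  # effective saturation
    return theta_r + (theta_s - theta_r) * se

def scaled_theta(h, lam, theta_r, theta_s, alpha, n):
    """Local curve from the reference curve via a Miller-type factor lam."""
    return van_genuchten_theta(lam * h, theta_r, theta_s, alpha, n)

h = np.logspace(0, 4, 50)  # 1 ... 10^4 cm suction
ref = van_genuchten_theta(h, 0.05, 0.40, 0.02, 1.8)
loc = scaled_theta(h, 0.5, 0.05, 0.40, 0.02, 1.8)  # lam < 1: wetter at same suction
```

Storing only the reference parameters plus one scaling factor per sub-grid location is what lets the data set keep sub-grid variability at low cost.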

  15. A global data set of soil hydraulic properties and sub-grid variability of soil water retention and hydraulic conductivity curves

    Directory of Open Access Journals (Sweden)

    C. Montzka

    2017-07-01

    Full Text Available Agroecosystem models, regional and global climate models, and numerical weather prediction models require adequate parameterization of soil hydraulic properties. These properties are fundamental for describing and predicting water and energy exchange processes at the transition zone between solid earth and atmosphere, and regulate evapotranspiration, infiltration and runoff generation. Hydraulic parameters describing the soil water retention (WRC) and hydraulic conductivity (HCC) curves are typically derived from soil texture via pedotransfer functions (PTFs). Resampling of those parameters for specific model grids is typically performed by different aggregation approaches such as spatial averaging and the use of dominant textural properties or soil classes. These aggregation approaches introduce uncertainty, bias and parameter inconsistencies throughout spatial scales due to nonlinear relationships between hydraulic parameters and soil texture. Therefore, we present a method to scale hydraulic parameters to individual model grids and provide a global data set that overcomes the mentioned problems. The approach is based on Miller–Miller scaling in the relaxed form by Warrick, which fits the parameters of the WRC through all sub-grid WRCs to provide an effective parameterization for the grid cell at model resolution; at the same time it preserves the information on sub-grid variability of the water retention curve by deriving local scaling parameters. Based on the Mualem–van Genuchten approach we also derive the unsaturated hydraulic conductivity from the water retention functions, thereby assuming that the local parameters are also valid for this function. In addition, via the Warrick scaling parameter λ, information on global sub-grid scaling variance is given that enables modellers to improve dynamical downscaling of (regional) climate models or to perturb hydraulic parameters for model ensemble output generation. The present analysis is based

  16. Sensitivity of the scale partition for variational multiscale large-eddy simulation of channel flow

    NARCIS (Netherlands)

    Holmen, J.; Hughes, T.J.R.; Oberai, A.A.; Wells, G.N.

    2004-01-01

    The variational multiscale method has been shown to perform well for large-eddy simulation (LES) of turbulent flows. The method relies upon a partition of the resolved velocity field into large- and small-scale components. The subgrid model then acts only on the small scales of motion, unlike

  17. Evaluation of the Transport and Diffusion of Pollutants over an Urban Area Using a Local-Scale Advection-Diffusion Model and a Sub-Grid Street Model

    DEFF Research Database (Denmark)

    Salerno, R.; Vignati, E.

    1994-01-01

    Fifth International Conference on the Development and Application of Computer Techniques to Environmental Studies, Envirosoft/94.

  18. Probabilistic Downscaling of Remote Sensing Data with Applications for Multi-Scale Biogeochemical Flux Modeling.

    Science.gov (United States)

    Stoy, Paul C; Quaife, Tristan

    2015-01-01

    Upscaling ecological information to larger scales in space and downscaling remote sensing observations or model simulations to finer scales remain grand challenges in Earth system science. Downscaling often involves inferring subgrid information from coarse-scale data, and such ill-posed problems are classically addressed using regularization. Here, we apply two-dimensional Tikhonov Regularization (2DTR) to simulate subgrid surface patterns for ecological applications. Specifically, we test the ability of 2DTR to simulate the spatial statistics of high-resolution (4 m) remote sensing observations of the normalized difference vegetation index (NDVI) in a tundra landscape. We find that the 2DTR approach as applied here can capture the major mode of spatial variability of the high-resolution information, but not multiple modes of spatial variability, and that the Lagrange multiplier (γ) used to impose the condition of smoothness across space is related to the range of the experimental semivariogram. We used observed and 2DTR-simulated maps of NDVI to estimate landscape-level leaf area index (LAI) and gross primary productivity (GPP). NDVI maps simulated using a γ value that approximates the range of observed NDVI result in a landscape-level GPP estimate that differs by ca. 2% from that created using observed NDVI. Following findings that GPP per unit LAI is lower near vegetation patch edges, we simulated vegetation patch edges using multiple approaches and found that simulated GPP declined by up to 12% as a result. 2DTR can generate random landscapes rapidly and can be applied to disaggregate ecological information and compare spatial observations against simulated landscapes.
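The regularization at the heart of 2DTR can be sketched in one dimension (a hypothetical toy, not the authors' code): the multiplier γ trades fidelity to the data against smoothness across neighbouring cells, which is the role the abstract ties to the semivariogram range.

```python
import numpy as np

def tikhonov_smooth(b, gamma):
    """Solve min_x ||x - b||^2 + gamma * ||D x||^2, with D = first differences."""
    n = b.size
    D = np.diff(np.eye(n), axis=0)       # (n-1, n) difference operator
    A = np.eye(n) + gamma * D.T @ D      # normal equations of the penalized fit
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 2 * np.pi, 100))
noisy = signal + 0.3 * rng.standard_normal(100)
smooth = tikhonov_smooth(noisy, gamma=25.0)  # larger gamma -> smoother field
```

The 2-D version replaces D with a discrete gradient over the grid, but the γ trade-off works the same way.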

  19. Incorporating channel geometric uncertainty into a regional scale flood inundation model

    Science.gov (United States)

    Neal, Jeffrey; Odoni, Nick; Trigg, Mark; Freer, Jim; Bates, Paul

    2013-04-01

    Models that simulate the dynamics of river and floodplain water surface elevations over large regions have a wide range of applications including regional scale flood risk estimation and simulating wetland inundation dynamics, while potential emerging applications include estimating river discharge from level observations as part of a data assimilation system. The river routing schemes used by global land surface models are often relatively simple in that they are based on wave speed, kinematic and diffusive physics. However, as the research on large scale river modelling matures, approaches are being developed that resemble scaled-up versions of the hydrodynamic models traditionally applied to rivers at the reach scale. These developments are not surprising given that such models can be significantly more accurate than traditional routing schemes at simulating water surface elevation. This presentation builds on the work of Neal et al. (2012) who adapted a reach scale dynamic flood inundation model for large scale application with the addition of a sub-grid parameterisation for channel flow. The scheme was shown to be numerically stable and scalable, with the aid of some simple test cases, before it was applied to an 800 km reach of the River Niger that includes the complex waterways and lakes of the Niger Inland Delta in Mali. However, the model was significantly less accurate at low to moderate flows than at high flow due, in part, to assuming that the channel geometry was rectangular. Furthermore, this made it difficult to calibrate channel parameters with water levels during typical flow conditions. This presentation will describe an extension of this sub-grid model that allows the channel shape to be defined as an exponent of width, along with a regression-based approach to approximate the wetted perimeter length for the new geometry. By treating the geometry in this way, uncertainty in the channel shape can be considered as a model parameter, which for the
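A hedged sketch of the idea (our own illustrative parameterization; the talk's exact formulation may differ): let sub-grid channel width grow with depth as a power law, so a single shape exponent spans rectangular through near-triangular sections and can itself be treated as an uncertain model parameter.

```python
def channel_width(d, w_bank, d_bank, s):
    """Width at depth d for a power-law section; s = 0 is rectangular."""
    return w_bank * (d / d_bank) ** s

def flow_area(d, w_bank, d_bank, s):
    """Wetted area below depth d: the integral of the width over depth."""
    return w_bank * d_bank / (s + 1.0) * (d / d_bank) ** (s + 1.0)

# s = 0 recovers the familiar rectangular area w * d, while s = 1
# (triangular) halves the area at bankfull depth for the same width.
```

Because low flows sit deep in the section, the exponent mostly changes low- to moderate-flow conveyance, which matches the calibration difficulty the abstract reports for the rectangular assumption.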

  20. Improving the representation of river-groundwater interactions in land surface modeling at the regional scale: Observational evidence and parameterization applied in the Community Land Model

    KAUST Repository

    Zampieri, Matteo

    2012-02-01

    Groundwater is an important component of the hydrological cycle, included in many land surface models to provide a lower boundary condition for soil moisture, which in turn plays a key role in the land-vegetation-atmosphere interactions and the ecosystem dynamics. In regional-scale climate applications land surface models (LSMs) are commonly coupled to atmospheric models to close the surface energy, mass and carbon balance. LSMs in these applications are used to resolve the momentum, heat, water and carbon vertical fluxes, accounting for the effect of vegetation, soil type and other surface parameters, while lack of adequate resolution prevents using them to resolve horizontal sub-grid processes. Specifically, LSMs resolve the large-scale runoff production associated with infiltration excess and sub-grid groundwater convergence, but they neglect the effect from losing streams to groundwater. Through the analysis of observed data of soil moisture obtained from the Oklahoma Mesoscale Network stations and land surface temperature derived from MODIS we provide evidence that the regional scale soil moisture and surface temperature patterns are affected by the rivers. This is demonstrated on the basis of simulations from a land surface model (i.e., Community Land Model - CLM, version 3.5). We show that the model cannot reproduce the features of the observed soil moisture and temperature spatial patterns that are related to the underlying mechanism of reinfiltration of river water to groundwater. Therefore, we implement a simple parameterization of this process in CLM showing the ability to reproduce the soil moisture and surface temperature spatial variabilities that relate to the river distribution at regional scale. The CLM with this new parameterization is used to evaluate impacts of the improved representation of river-groundwater interactions on the simulated water cycle parameters and the surface energy budget at the regional scale. © 2011 Elsevier B.V.

  1. Application of simplified models to CO2 migration and immobilization in large-scale geological systems

    KAUST Repository

    Gasda, Sarah E.

    2012-07-01

    Long-term stabilization of injected carbon dioxide (CO2) is an essential component of risk management for geological carbon sequestration operations. However, migration and trapping phenomena are inherently complex, involving processes that act over multiple spatial and temporal scales. One example involves centimeter-scale density instabilities in the dissolved CO2 region leading to large-scale convective mixing that can be a significant driver for CO2 dissolution. Another example is the potentially important effect of capillary forces, in addition to buoyancy and viscous forces, on the evolution of mobile CO2. Local capillary effects lead to a capillary transition zone, or capillary fringe, where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as long-term residual and dissolution trapping. Computational models that can capture both large and small-scale effects are essential to predict the role of these processes on the long-term storage security of CO2 sequestration operations. Conventional modeling tools are unable to resolve sufficiently all of these relevant processes when modeling CO2 migration in large-scale geological systems. Herein, we present a vertically-integrated approach to CO2 modeling that employs upscaled representations of these subgrid processes. We apply the model to the Johansen formation, a prospective site for sequestration of Norwegian CO2 emissions, and explore the sensitivity of CO2 migration and trapping to subscale physics. Model results show the relative importance of different physical processes in large-scale simulations. The ability of models such as this to capture the relevant physical processes at large spatial and temporal scales is important for prediction and analysis of CO2 storage sites. © 2012 Elsevier Ltd.

  2. Sensitivity of the scale partition for variational multiscale large-eddy simulation of channel flow

    OpenAIRE

    Holmen, J; Hughes, T.J.R; Oberai, A.A.; Wells, G. N.

    2004-01-01

    The variational multiscale method has been shown to perform well for large-eddy simulation (LES) of turbulent flows. The method relies upon a partition of the resolved velocity field into large- and small-scale components. The subgrid model then acts only on the small scales of motion, unlike conventional LES models which act on all scales of motion. For homogeneous isotropic turbulence and turbulent channel flows, the multiscale model can outperform conventional LES formulations. An issue in...

  3. Modelling of rate effects at multiple scales

    DEFF Research Database (Denmark)

    Pedersen, R.R.; Simone, A.; Sluys, L. J.

    2008-01-01

    , the length scale in the meso-model and the macro-model can be coupled. In this fashion, a bridging of length scales can be established. A computational analysis of a Split Hopkinson bar test at medium and high impact load is carried out at macro-scale and meso-scale including information from the micro-scale....

  4. Large-scale simulation of karst processes - parameter estimation, model evaluation and quantification of uncertainty

    Science.gov (United States)

    Hartmann, A. J.

    2016-12-01

    Heterogeneity is an intrinsic property of karst systems. It results in complex hydrological behavior that is characterized by an interplay of diffuse and concentrated flow and transport. In large-scale hydrological models, these processes are usually not considered. Instead, average or representative values are chosen for each of the simulated grid cells, omitting many aspects of their sub-grid variability. In karst regions, this may lead to unreliable predictions when those models are used for assessing future water resources availability, floods or droughts, or when they are used for recommendations for more sustainable water management. In this contribution I present a large-scale groundwater recharge model (0.25° x 0.25° resolution) that takes karst hydrological processes into account by using statistical distribution functions to express subsurface heterogeneity. The model is applied over Europe's and the Mediterranean's carbonate rock regions (~25% of the total area). As no measurements of the variability of subsurface properties are available at this scale, a parameter estimation procedure, which uses latent heat flux and soil moisture observations and quantifies the remaining uncertainty, was applied. The model is evaluated by sensitivity analysis, comparison to other large-scale models without karst processes included and independent recharge observations. Using historic data (2002-2012) I can show that recharge rates vary strongly over Europe and the Mediterranean. At regions with low information for parameter estimation there is a larger prediction uncertainty (for instance in desert regions). Evaluation with independent recharge estimates shows that, on average, the model provides acceptable estimates, while the other large scale models under-estimate karstic recharge. The results of the sensitivity analysis corroborate the importance of including karst heterogeneity into the model as the distribution shape factor is the most sensitive parameter for

  5. A scale invariance criterion for LES parametrizations

    Directory of Open Access Journals (Sweden)

    Urs Schaefer-Rolffs

    2015-01-01

    Full Text Available Turbulent kinetic energy cascades in fluid dynamical systems are usually characterized by scale invariance. However, representations of subgrid scales in large eddy simulations do not necessarily fulfill this constraint. So far, scale invariance has been considered in the context of isotropic, incompressible, and three-dimensional turbulence. In the present paper, the theory is extended to compressible flows that obey the hydrostatic approximation, as well as to corresponding subgrid-scale parametrizations. A criterion is presented to check if the symmetries of the governing equations are correctly translated into the equations used in numerical models. By applying scaling transformations to the model equations, relations between the scaling factors are obtained by demanding that the mathematical structure of the equations does not change. The criterion is validated by recovering the breakdown of scale invariance in the classical Smagorinsky model and confirming scale invariance for the Dynamic Smagorinsky Model. The criterion also shows that the compressible continuity equation is intrinsically scale-invariant. The criterion also proves that a scale-invariant turbulent kinetic energy equation or a scale-invariant equation of motion for a passive tracer is obtained only with a dynamic mixing length. For large-scale atmospheric flows governed by the hydrostatic balance the energy cascade is due to horizontal advection and the vertical length scale exhibits a scaling behaviour that is different from that derived for horizontal length scales.
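As a concrete reference point, the classical Smagorinsky model that the criterion tests can be sketched in a few lines (a toy 2-D version; the value of C_s and the test field are illustrative assumptions, and the criterion itself operates on the governing equations, not on code):

```python
import numpy as np

def smagorinsky_viscosity(u, v, dx, cs=0.17):
    """Eddy viscosity nu_t = (cs * dx)**2 * |S| on a square 2-D grid.

    u, v are velocity components indexed as [x, y] (meshgrid indexing='ij').
    """
    dudx, dudy = np.gradient(u, dx)
    dvdx, dvdy = np.gradient(v, dx)
    # |S| = sqrt(2 S_ij S_ij) with S12 = (du/dy + dv/dx) / 2
    s_mag = np.sqrt(2.0 * (dudx**2 + dvdy**2) + (dudy + dvdx) ** 2)
    return (cs * dx) ** 2 * s_mag

# Pure shear u = y, v = 0 gives |S| = 1 everywhere, so nu_t = (cs * dx)**2.
dx = 0.1
xs = np.arange(8) * dx
X, Y = np.meshgrid(xs, xs, indexing="ij")
nu_t = smagorinsky_viscosity(Y, np.zeros_like(Y), dx)
```

Note the fixed filter width dx: rescaling the coordinates without rescaling the filter width is what breaks scale invariance in the classical model, while the dynamic model computes cs from the resolved field and so restores it.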

  6. Integrating Local Scale Drainage Measures in Meso Scale Catchment Modelling

    Directory of Open Access Journals (Sweden)

    Sandra Hellmers

    2017-01-01

    Full Text Available This article presents a methodology to optimize the integration of local scale drainage measures in catchment modelling. The methodology makes it possible to zoom into the processes (physically, spatially and temporally) where detailed physically based computation is required and to zoom out where lumped conceptualized approaches are applied. It allows the definition of parameters and computation procedures on different spatial and temporal scales. Three methods are developed to integrate features of local scale drainage measures in catchment modelling: (1) different types of local drainage measures are spatially integrated in catchment modelling by a data mapping; (2) interlinked drainage features between data objects are enabled on the meso, local and micro scale; (3) a method for modelling multiple interlinked layers on the micro scale is developed. For the computation of flow routing on the meso scale, the results of the local scale measures are aggregated according to their contributing inlet in the network structure. The implementation of the methods is realized in a semi-distributed rainfall-runoff model. The implemented micro scale approach is validated with a laboratory physical model to confirm the credibility of the model. A study of a river catchment of 88 km2 illustrated the applicability of the model on the regional scale.

  7. Water balance model for Kings Creek

    Science.gov (United States)

    Wood, Eric F.

    1990-01-01

    Particular attention is given to the spatial variability that affects the representation of water balance at the catchment scale in the context of macroscale water-balance modeling. Remotely sensed data are employed for parameterization, and the resulting model is developed so that subgrid spatial variability is preserved and therefore influences the grid-scale fluxes of the model. The model permits the quantitative evaluation of the surface-atmospheric interactions related to the large-scale hydrologic water balance.
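One common way to preserve subgrid variability in grid-scale fluxes (used, for instance, by VIC-type schemes; the record does not specify the exact form used here, and the shape parameter below is illustrative) is a statistical distribution of point infiltration capacities, so part of the cell saturates and produces runoff before the cell-average store fills:

```python
def saturated_fraction(i0, i_max, b):
    """Fraction of the cell saturated when capacities below level i0 are filled.

    Point infiltration capacity varies across the cell following the
    variable-infiltration-curve shape parameter b.
    """
    i0 = min(max(i0, 0.0), i_max)
    return 1.0 - (1.0 - i0 / i_max) ** b

# For b = 1 capacities are uniform, so a quarter-full level saturates a
# quarter of the cell; larger b saturates low-capacity areas sooner.
```

The grid-scale runoff flux is then an integral over this sub-grid distribution rather than a function of the cell mean alone.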

  8. Physics and dynamics coupling across scales in the next generation CESM: Meeting the challenge of high resolution. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Larson, Vincent E.

    2015-02-21

    This is a final report for a SciDAC grant supported by BER. The project implemented a novel technique for coupling small-scale dynamics and microphysics into a community climate model. The technique uses subcolumns that are sampled in Monte Carlo fashion from a distribution of subgrid variability. The resulting global simulations show several improvements over the status quo.
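The subcolumn idea can be sketched as follows (a hypothetical toy with made-up numbers, not the project's code): sample the assumed sub-grid distribution, apply the nonlinear process to each subcolumn, and average, instead of applying the process to the grid mean.

```python
import numpy as np

rng = np.random.default_rng(42)

def subcolumn_condensate(mean_qt, sigma_qt, q_sat, n_sub=10_000):
    """Grid-mean condensate via Monte Carlo subcolumns (Gaussian sub-grid PDF)."""
    qt = rng.normal(mean_qt, sigma_qt, n_sub)  # one total-water draw per subcolumn
    return np.maximum(qt - q_sat, 0.0).mean()  # condense per subcolumn, then average

# Grid mean below saturation: the mean-state shortcut max(9 - 10, 0) = 0
# misses all condensate, while the subcolumns recover a partly cloudy cell.
mc = subcolumn_condensate(9.0, 1.5, 10.0)
```

Because condensation is nonlinear, the subcolumn average differs systematically from the value obtained by condensing the grid mean, which is the bias the technique removes.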

  9. Brane World Models Need Low String Scale

    CERN Document Server

    Antoniadis, Ignatios; Calmet, Xavier

    2011-01-01

    Models with large extra dimensions offer the possibility of the Planck scale being of order the electroweak scale, thus alleviating the gauge hierarchy problem. We show that these models suffer from a breakdown of unitarity at around three quarters of the low effective Planck scale. An obvious candidate to fix the unitarity problem is string theory. We therefore argue that it is necessary for the string scale to appear below the effective Planck scale and that the first signature of such models would be string resonances. We further translate experimental bounds on the string scale into bounds on the effective Planck scale.

  10. Autonomous Operation of Hybrid Microgrid With AC and DC Subgrids

    DEFF Research Database (Denmark)

    Chiang Loh, Poh; Li, Ding; Kang Chai, Yi

    2013-01-01

    This paper investigates on power-sharing issues of an autonomous hybrid microgrid. Unlike existing microgrids which are purely ac, the hybrid microgrid studied here comprises dc and ac subgrids interconnected by power electronic interfaces. The main challenge here is to manage power flows among all...... converters. Suitable control and normalization schemes are now developed for controlling them with the overall hybrid microgrid performance already verified in simulation and experiment....

  11. Simulation of Boundary-Layer Cumulus and Stratocumulus Clouds using a Cloud-Resolving Model With Low- and Third-Order Turbulence Closures

    Science.gov (United States)

    Xu, Kuan-Man; Cheng, Anning

    2007-01-01

    The effects of subgrid-scale condensation and transport become more important as the grid spacings increase from those typically used in large-eddy simulation (LES) to those typically used in cloud-resolving models (CRMs). Incorporation of these effects can be achieved by a joint probability density function approach that utilizes higher-order moments of thermodynamic and dynamic variables. This study examines how well shallow cumulus and stratocumulus clouds are simulated by two versions of a CRM that is implemented with low-order and third-order turbulence closures (LOC and TOC) when a typical CRM horizontal resolution is used and what roles the subgrid-scale and resolved-scale processes play as the horizontal grid spacing of the CRM becomes finer. Cumulus clouds were mostly produced through subgrid-scale transport processes while stratocumulus clouds were produced through both subgrid-scale and resolved-scale processes in the TOC version of the CRM when a typical CRM grid spacing is used. The LOC version of the CRM relied upon resolved-scale circulations to produce both cumulus and stratocumulus clouds, due to small subgrid-scale transports. The mean profiles of thermodynamic variables, cloud fraction and liquid water content exhibit significant differences between the two versions of the CRM, with the TOC results agreeing better with the LES than the LOC results. The characteristics, temporal evolution and mean profiles of shallow cumulus and stratocumulus clouds are weakly dependent upon the horizontal grid spacing used in the TOC CRM. However, the ratio of the subgrid-scale to resolved-scale fluxes becomes smaller as the horizontal grid spacing decreases. The subcloud-layer fluxes are mostly due to the resolved scales when a grid spacing less than or equal to 1 km is used. The overall results of the TOC simulations suggest that a 1-km grid spacing is a good choice for CRM simulation of shallow cumulus and stratocumulus.

  12. Towards filtered drag force model for non-cohesive and cohesive particle-gas flows

    Science.gov (United States)

    Ozel, Ali; Gu, Yile; Milioli, Christian C.; Kolehmainen, Jari; Sundaresan, Sankaran

    2017-10-01

Euler-Lagrange simulations of gas-solid flows in unbounded domains have been performed to study sub-grid modeling of the filtered drag force for non-cohesive and cohesive particles. The filtered drag forces under various microstructures and flow conditions were analyzed in terms of two sub-grid quantities: the sub-grid drift velocity, which stems from the sub-grid correlation between the local fluid velocity and the local particle volume fraction, and the scalar variance of solid volume fraction, which measures the degree of local inhomogeneity of the volume fraction within a filter volume. The results show that the drift velocity and the scalar variance exert systematic effects on the filtered drag force. Effects of particle and domain sizes, gravitational accelerations, and mass loadings on the filtered drag are also studied, and it is shown that these effects can be captured by both sub-grid quantities. Additionally, the effect of cohesion through the van der Waals interaction on the filtered drag force is investigated, and it is found that there is no significant difference in the dependence of the filtered drag coefficient of cohesive and non-cohesive particles on the sub-grid drift velocity or the scalar variance of solid volume fraction. The predictability of the sub-grid quantities was assessed a priori through correlation coefficient analyses, and the drift velocity was found to be superior. However, the drift velocity is not available in "coarse-grid" simulations and a specific closure is needed. A dynamic scale-similarity approach was used to model the drift velocity, but the predictability of that model is not entirely satisfactory. It is concluded that one must develop a more elaborate model for estimating the drift velocity in "coarse-grid" simulations.
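
    The two sub-grid quantities described in this record can be diagnosed from fine-grid data within a filter volume. The sketch below is our own illustration with synthetic data, not code from the cited study; sign conventions for the drift velocity vary in the literature:

```python
import numpy as np

def subgrid_quantities(phi, u_gas):
    """Filter-volume diagnostics used in filtered drag modeling.

    phi   : solid volume fraction on the fine-grid cells of one filter volume
    u_gas : local fluid velocity on the same cells (one component)

    Returns (drift velocity, scalar variance of solid volume fraction).
    The drift velocity here is the covariance between phi and u_gas
    normalized by the mean solid fraction, i.e. the difference between the
    solids-weighted filtered gas velocity and the plain filtered gas velocity.
    """
    phi_bar = phi.mean()
    u_bar = u_gas.mean()
    v_drift = (phi * u_gas).mean() / phi_bar - u_bar
    var_phi = ((phi - phi_bar) ** 2).mean()
    return v_drift, var_phi

rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 0.4, size=1000)
# Synthetic gas velocity anti-correlated with solid fraction
# (gas tends to bypass dense particle clusters)
u = 1.0 - 2.0 * phi + rng.normal(0.0, 0.05, size=1000)
vd, var = subgrid_quantities(phi, u)
print(vd, var)
```

    With this anti-correlated field the drift velocity comes out negative, consistent with the qualitative picture that clustered solids see a slower gas than the filter-mean gas velocity.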

  13. Enabling Large Scale Fine Resolution Flood Modeling Using SWAT and LISFLOOD-FP

    Science.gov (United States)

    Liu, Z.; Rajib, A.; Merwade, V.

    2016-12-01

Due to computational burden, most large scale hydrologic models are not created to generate streamflow hydrographs for lower order ungauged streams. Similarly, most flood inundation mapping studies are performed at major stream reaches. As a result, it is not possible to get reliable flow estimates and flood extents for the vast majority of areas where no stream gauging stations are available. The objective of this study is to loosely couple a spatially distributed hydrologic model, the Soil and Water Assessment Tool (SWAT), with a 1D/2D hydrodynamic model, LISFLOOD-FP, for large scale fine resolution flood inundation modeling. The model setup is created for the 491,000 km2 drainage area of the Ohio River Basin in the United States. In the current framework, the SWAT model is calibrated with historical streamflow data over the past 80 years (1935-2014) to provide streamflow time-series for more than 100,000 NHDPlus stream reaches in the basin. The post-calibration evaluation shows that the simulated daily streamflow has a Nash-Sutcliffe Efficiency in the range of 0.4-0.7 against observed records across the basin. Streamflow outputs from the calibrated SWAT are subsequently used to drive LISFLOOD-FP and routed along the streams/floodplain using the built-in subgrid solver. LISFLOOD-FP is set up for the Ohio River Basin using a 90 m digital elevation model, and is executed on high performance computing resources at Purdue University. The flood extents produced by LISFLOOD-FP show good agreement with observed inundation. The current modeling framework lays the foundation for near real-time streamflow forecasting and prediction of time-varying flood inundation maps along the NHDPlus network.
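
    The Nash-Sutcliffe Efficiency used above to evaluate the calibrated streamflow is straightforward to compute. A minimal sketch (the function name and sample flows are illustrative, not data from the study):

```python
def nse(observed, simulated):
    """Nash-Sutcliffe Efficiency: 1 - SSE / variance of the observations.

    1.0 is a perfect fit; 0.0 means the model predicts no better than
    the observed mean; negative values are worse than the mean.
    """
    obs_mean = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_obs = sum((o - obs_mean) ** 2 for o in observed)
    return 1.0 - sse / ss_obs

# Example with synthetic daily flows (m^3/s)
obs = [10.0, 12.0, 30.0, 25.0, 15.0]
sim = [11.0, 13.0, 27.0, 24.0, 16.0]
print(round(nse(obs, sim), 3))  # close to 1: good agreement
```

    Values in the 0.4-0.7 range reported in this record thus indicate simulations substantially better than a climatological mean, though short of a tight fit.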

  14. Structure and modeling of turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Novikov, E.A. [Univ. of California, San Diego, La Jolla, CA (United States)

    1995-12-31

The "vortex strings" scale l_s ≈ L Re^(-3/10) (L: external scale, Re: Reynolds number) is suggested as a grid scale for large-eddy simulation. Various aspects of the structure of turbulence and subgrid modeling are described in terms of conditional averaging, Markov processes with dependent increments, and infinitely divisible distributions. The major request from the energy, naval, aerospace, and environmental engineering communities to the theory of turbulence is to reduce the enormous number of degrees of freedom in turbulent flows to a level manageable by computer simulations. The vast majority of these degrees of freedom is in the small-scale motion. The study of the structure of turbulence provides a basis for subgrid-scale (SGS) models, which are necessary for large-eddy simulations (LES).
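
    The suggested grid scale can be evaluated directly from the stated relation l_s ≈ L Re^(-3/10); the comparison with the Kolmogorov dissipation scale, eta ≈ L Re^(-3/4), is our own illustration:

```python
# Vortex-string scale l_s ~ L * Re^(-3/10), suggested as an LES grid scale.
def vortex_string_scale(L, Re):
    return L * Re ** (-0.3)

# Kolmogorov scale eta ~ L * Re^(-3/4), for comparison.
def kolmogorov_scale(L, Re):
    return L * Re ** (-0.75)

L, Re = 1.0, 1.0e6
ls = vortex_string_scale(L, Re)
eta = kolmogorov_scale(L, Re)
print(ls, eta)
```

    At Re = 10^6 the vortex-string scale sits roughly two-and-a-half decades above the dissipation scale, which is what makes it an attractive (coarser) grid scale for LES.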

  15. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  16. Evaluation of an ARPS-based canopy flow modeling system for use in future operational smoke prediction efforts

    Science.gov (United States)

    M. T. Kiefer; S. Zhong; W. E. Heilman; J. J. Charney; X. Bian

    2013-01-01

    Efforts to develop a canopy flow modeling system based on the Advanced Regional Prediction System (ARPS) model are discussed. The standard version of ARPS is modified to account for the effect of drag forces on mean and turbulent flow through a vegetation canopy, via production and sink terms in the momentum and subgrid-scale turbulent kinetic energy (TKE) equations....

  17. Cloud-scale model intercomparison of chemical constituent transport in deep convection

    Directory of Open Access Journals (Sweden)

    M. C. Barth

    2007-09-01

Transport and scavenging of chemical constituents in deep convection is important to understanding the composition of the troposphere and therefore chemistry-climate and air quality issues. High resolution cloud chemistry models have been shown to represent convective processing of trace gases quite well. To improve the representation of sub-grid convective transport and wet deposition in large-scale models, general characteristics, such as species mass flux, from the high resolution cloud chemistry models can be used. However, it is important to understand how these models behave when simulating the same storm. The intercomparison described here examines transport of six species. CO and O3, which are primarily transported, show good agreement among models and compare well with observations. Models that included lightning production of NOx reasonably predict NOx mixing ratios in the anvil compared with observations, but the NOx variability is much larger than that seen for CO and O3. Predicted anvil mixing ratios of the soluble species, HNO3, H2O2, and CH2O, exhibit significant differences among models, attributed to different schemes in these models of cloud processing including the role of the ice phase, the impact of cloud-modified photolysis rates on the chemistry, and the representation of the species chemical reactivity. The lack of measurements of these species in the convective outflow region does not allow us to evaluate the model results with observations.

  18. Large scale finite element solvers for the large eddy simulation of incompressible turbulent flows

    OpenAIRE

    Colomés Gené, Oriol

    2016-01-01

    In this thesis we have developed a path towards large scale Finite Element simulations of turbulent incompressible flows. We have assessed the performance of residual-based variational multiscale (VMS) methods for the large eddy simulation (LES) of turbulent incompressible flows. We consider VMS models obtained by different subgrid scale approximations which include either static or dynamic subscales, linear or nonlinear multiscale splitting, and different choices of the subscale space. W...

  19. Scaling limits of a model for selection at two scales

    Science.gov (United States)

    Luo, Shishi; Mattingly, Jonathan C.

    2017-04-01

    The dynamics of a population undergoing selection is a central topic in evolutionary biology. This question is particularly intriguing in the case where selective forces act in opposing directions at two population scales. For example, a fast-replicating virus strain outcompetes slower-replicating strains at the within-host scale. However, if the fast-replicating strain causes host morbidity and is less frequently transmitted, it can be outcompeted by slower-replicating strains at the between-host scale. Here we consider a stochastic ball-and-urn process which models this type of phenomenon. We prove the weak convergence of this process under two natural scalings. The first scaling leads to a deterministic nonlinear integro-partial differential equation on the interval [0,1] with dependence on a single parameter, λ. We show that the fixed points of this differential equation are Beta distributions and that their stability depends on λ and the behavior of the initial data around 1. The second scaling leads to a measure-valued Fleming-Viot process, an infinite dimensional stochastic process that is frequently associated with a population genetics.

  20. Simple scaling model for exploding pusher targets

    Energy Technology Data Exchange (ETDEWEB)

    Storm, E.K.; Larsen, J.T.; Nuckolls, J.H.; Ahlstrom, H.G.; Manes, K.R.

    1977-11-04

A simple model has been developed which, when normalized by experiment or Lasnex calculations, can be used to scale neutron yields for variations in laser input power and pulse length and target radius and wall thickness. The model also elucidates some of the physical processes occurring in this regime of laser fusion experiments. Within certain limitations on incident intensity and target geometry, the model scales with experiments and calculations to within a factor of two over six decades in neutron yield.

  1. Functional Scaling of Musculoskeletal Models

    DEFF Research Database (Denmark)

    Lund, Morten Enemark; Andersen, Michael Skipper; de Zee, Mark

    specific to the patient. This is accomplished using optimisation methods to determine patient-specific joint positions and orientations, which minimise the least-squares error between model markers and the recorded markers from a motion capture experiment. Functional joint positions and joint axis...

  2. Modeling interactome: scale-free or geometric?

    Science.gov (United States)

    Przulj, N; Corneil, D G; Jurisica, I

    2004-12-12

Networks have been used to model many real-world phenomena to better understand the phenomena and to guide experiments in order to predict their behavior. Since incorrect models lead to incorrect predictions, it is vital to have as accurate a model as possible. As a result, new techniques and models for analyzing and modeling real-world networks have recently been introduced. One example of large and complex networks involves protein-protein interaction (PPI) networks. We analyze PPI networks of the yeast Saccharomyces cerevisiae and the fruit fly Drosophila melanogaster using a newly introduced measure of local network structure as well as the standardly used measures of global network structure. We examine the fit of four different network models, including Erdos-Renyi, scale-free, and geometric random network models, to these PPI networks with respect to the measures of local and global network structure. We demonstrate that the currently accepted scale-free model of PPI networks fails to fit the data in several respects and show that a random geometric model provides a much more accurate model of the PPI data. We hypothesize that only the noise in these networks is scale-free. We systematically evaluate how well different network models fit the PPI networks. We show that the structure of PPI networks is better modeled by a geometric random graph than by a scale-free model. Supplementary information is available at http://www.cs.utoronto.ca/~juris/data/data/ppiGRG04/
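
    The geometric random graph model favored in this record has a simple construction: points placed uniformly at random, with edges between all pairs closer than a fixed radius. A minimal sketch (parameters are illustrative and unrelated to the PPI data):

```python
import math
import random

def geometric_random_graph(n, radius, seed=0):
    """Sample a 2D geometric random graph: n points uniform in the unit
    square, with an edge between every pair of points whose Euclidean
    distance is below `radius`. Standard construction, O(n^2) pair check."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) < radius:
                edges.add((i, j))
    return pts, edges

pts, edges = geometric_random_graph(200, 0.1)
mean_degree = 2 * len(edges) / len(pts)
print(mean_degree)
```

    Unlike scale-free graphs, such graphs have a tightly concentrated degree distribution (roughly n * pi * radius^2 on average, ignoring boundary effects), which is one of the local-structure differences the study exploits.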

  3. Improving High-resolution Weather Forecasts using the Weather Research and Forecasting (WRF) Model with Upgraded Kain-Fritsch Cumulus Scheme

    Science.gov (United States)

    High-resolution weather forecasting is affected by many aspects, i.e. model initial conditions, subgrid-scale cumulus convection and cloud microphysics schemes. Recent 12km grid studies using the Weather Research and Forecasting (WRF) model have identified the importance of inco...

  4. Application of Multiscale Parameterization Framework for the Large Scale Hydrologic Modeling

    Science.gov (United States)

    Kumar, R.; Samaniego, L. E.; Livneh, B.; Attinger, S.

    2012-12-01

In recent decades there has been increasing interest in the development and application of large scale hydrologic models to support the management of regional water resources as well as for flood forecasting and drought monitoring. However, the reliable prediction of distributed hydrologic states (i.e. soil moisture, runoff, evapotranspiration) for large river basins (i.e. ≥ 100 000 km2) requires a robust parameterization technique that avoids scale dependent issues, reduces the over-parameterization problem, and allows the transferability of model parameters across locations (e.g. to ungauged basins). In this study, we show the ability of the recently developed Multiscale Parameter Regionalization (MPR) technique (Samaniego, et. al. 2010), integrated within a grid based hydrologic model (mHM), to address the above problems. The MPR technique explicitly accounts for sub-grid variability of basin physical characteristics by linking them to model parameters at much finer spatial resolution (e.g. 100 - 500 m) than the model pixels (> 1 km). The application of the multiscale parameterization framework was tested in four large scale river basins; two in Central Europe (the Rhine and the Elbe river basins), and two in North America (the Ohio and the Red river basins). Model runs were performed at 3h time scale on four spatial resolutions, ranging from a grid size of approximately 7 km to 50 km, for the period from 1960 to 2000. Results of the study indicated that it is possible to transfer a priori set of global parameters, estimated in a relatively small German river basin (Neckar river, 10 000 km2), to all four large river basins including the remote North American basins. The values of Nash Sutcliffe efficiency for the daily and monthly streamflow simulations were, on average, above 0.80. Similar results were obtained from simulations at four spatial resolutions (0.0625°, 0.125°, 0.25°, and 0.5°), which indicated the possibility for the cross-scale

  5. Assessment of the Suitability of a Global Hydrodynamic Model in Simulating a Regional-scale Extreme Flood at Finer Spatial Resolutions

    Science.gov (United States)

    Mateo, C. M. R.; Yamazaki, D.; Kim, H.; Champathong, A.; Oki, T.

    2015-12-01

Global river models (GRMs) are elemental for large-scale predictions and impact analyses. However, they have limited capability in providing accurate flood information at fine resolution for practical purposes. Hyperresolution (~1km resolution) modelling is believed to improve the representation of topographical constraints, which consequently results in better predictions of surface water flows and flood inundation at regional to global scales. While numerous studies have shown that finer resolutions improve the predictions of catchment-scale floods using local-scale hydrodynamic models, the impact of finer spatial resolution on predictions of large-scale floods using GRMs is rarely examined. In this study, we assessed the suitability of a state-of-the-art hydrodynamic GRM, CaMa-Flood, in the hyperresolution simulation of a regional-scale flood. The impacts of finer spatial resolution and representation of sub-grid processes on simulating the 2011 immense flooding in Chao Phraya River Basin, Thailand were investigated. River maps ranging from 30-arcsecond (~1km) to 5-arcminute (~10km) spatial resolutions were generated from 90m resolution HydroSHEDS maps and SRTM3 DEM. Simulations were executed at each spatial resolution with the new multi-directional downstream connectivity (MDC) scheme in CaMa-Flood turned on and off. While the predictive capability of the model slightly improved with finer spatial resolution when the MDC scheme is turned on, it significantly declined when the MDC scheme is turned off; bias increased by 35% and the NSE coefficient decreased by 60%. These findings indicate that GRMs which assume single-downstream-grid flows are not suitable for hyperresolution modelling because of their limited capability to realistically represent floodplain connectivity. When simulating large-scale floods, the MDC scheme is necessary for the following functions: provide additional storage for overbank flows, enhance connectivity between floodplains which allow more realistic

  6. A robust and quick method to validate large scale flood inundation modelling with SAR remote sensing

    Science.gov (United States)

    Schumann, G. J.; Neal, J. C.; Bates, P. D.

    2011-12-01

    With flood frequency likely to increase as a result of altered precipitation patterns triggered by climate change, there is a growing demand for more data and, at the same time, improved flood inundation modeling. The aim is to develop more reliable flood forecasting systems over large scales that account for errors and inconsistencies in observations, modeling, and output. Over the last few decades, there have been major advances in the fields of remote sensing, particularly microwave remote sensing, and flood inundation modeling. At the same time both research communities are attempting to roll out their products on a continental to global scale. In a first attempt to harmonize both research efforts on a very large scale, a two-dimensional flood model has been built for the Niger Inland Delta basin in northwest Africa on a 700 km reach of the Niger River, an area similar to the size of the UK. This scale demands a different approach to traditional 2D model structuring and we have implemented a simplified version of the shallow water equations as developed in [1] and complemented this formulation with a sub-grid structure for simulating flows in a channel much smaller than the actual grid resolution of the model. This joined integration allows to model flood flows across two dimensions with efficient computational speeds but without losing out on channel resolution when moving to coarse model grids. Using gaged daily flows, the model was applied to simulate the wetting and drying of the Inland Delta floodplain for 7 years from 2002 to 2008, taking less than 30 minutes to simulate 365 days at 1 km resolution. In these rather data poor regions of the world and at this type of scale, verification of flood modeling is realistically only feasible with wide swath or global mode remotely sensed imagery. Validation of the Niger model was carried out using sequential global mode SAR images over the period 2006/7. 
This scale not only requires different types of models and

  7. Seamless cross-scale modeling with SCHISM

    Science.gov (United States)

    Zhang, Yinglong J.; Ye, Fei; Stanev, Emil V.; Grashorn, Sebastian

    2016-06-01

We present a new 3D unstructured-grid model (SCHISM) which is an upgrade from an existing model (SELFE). The new advection scheme for the momentum equation includes an iterative smoother to reduce excess mass produced by the higher-order kriging method, and a new viscosity formulation is shown to work robustly for generic unstructured grids and effectively filter out spurious modes without introducing excessive dissipation. A new higher-order implicit advection scheme for transport (TVD2) is proposed to effectively handle a wide range of Courant numbers as commonly found in typical cross-scale applications. The addition of quadrangular elements into the model, together with a recently proposed, highly flexible vertical grid system (Zhang et al., A new vertical coordinate system for a 3D unstructured-grid model. Ocean Model. 85, 2015), leads to model polymorphism that unifies 1D/2DH/2DV/3D cells in a single model grid. Results from several test cases demonstrate the model's good performance in the eddying regime, which presents greater challenges for unstructured-grid models and represents the last missing link for our cross-scale model. The model can thus be used to simulate cross-scale processes in a seamless fashion (i.e. from deep ocean into shallow depths).

  8. Site-Scale Saturated Zone Flow Model

    Energy Technology Data Exchange (ETDEWEB)

    G. Zyvoloski

    2003-12-17

The purpose of this model report is to document the components of the site-scale saturated-zone flow model at Yucca Mountain, Nevada, in accordance with administrative procedure AP-SIII.10Q, "Models". This report provides validation and confidence in the flow model that was developed for site recommendation (SR) and will be used to provide flow fields in support of the Total Systems Performance Assessment (TSPA) for the License Application. The output from this report provides the flow model used in the "Site-Scale Saturated Zone Transport", MDL-NBS-HS-000010 Rev 01 (BSC 2003 [162419]). The Site-Scale Saturated Zone Transport model then provides output to the SZ Transport Abstraction Model (BSC 2003 [164870]). In particular, the output from the SZ site-scale flow model is used to simulate the groundwater flow pathways and radionuclide transport to the accessible environment for use in the TSPA calculations. Since the development and calibration of the saturated-zone flow model, more data have been gathered for use in model validation and confidence building, including new water-level data from Nye County wells, single- and multiple-well hydraulic testing data, and new hydrochemistry data. In addition, a new hydrogeologic framework model (HFM), which incorporates Nye County well lithology, also provides geologic data for corroboration and confidence in the flow model. The intended use of this work is to provide a flow model that generates flow fields to simulate radionuclide transport in saturated porous rock and alluvium under natural or forced gradient flow conditions. The flow model simulations are completed using the three-dimensional (3-D), finite-element, flow, heat, and transport computer code, FEHM Version (V) 2.20 (software tracking number (STN): 10086-2.20-00; LANL 2003 [161725]). Concurrently, process-level transport model and methodology for calculating radionuclide transport in the saturated zone at Yucca

  9. Regional scale effects of the aerosol cloud interaction simulated with an online coupled comprehensive chemistry model

    Directory of Open Access Journals (Sweden)

    M. Bangert

    2011-05-01

We have extended the coupled mesoscale atmosphere and chemistry model COSMO-ART to account for the transformation of aerosol particles into cloud condensation nuclei and to quantify their interaction with warm cloud microphysics on the regional scale. The new model system aims to fill the gap between cloud resolving models and global scale models. It represents the very complex microscale aerosol and cloud physics as detailed as possible, whereas the continental domain size and efficient codes will allow for both studying weather and regional climate. The model system is applied in a first extended case study for Europe for a cloudy five-day period in August 2005.

    The model results show that the mean cloud droplet number concentration of clouds is correlated with the structure of the terrain, and we present a terrain slope parameter TS to classify this dependency. We propose to use this relationship to parameterize the probability density function, PDF, of subgrid-scale cloud updraft velocity in the activation parameterizations of climate models.

    The simulations show that the presence of cloud condensation nuclei (CCN and clouds are closely related spatially. We find high aerosol and CCN number concentrations in the vicinity of clouds at high altitudes. The nucleation of secondary particles is enhanced above the clouds. This is caused by an efficient formation of gaseous aerosol precursors above the cloud due to more available radiation, transport of gases in clean air above the cloud, and humid conditions. Therefore the treatment of complex photochemistry is crucial in atmospheric models to simulate the distribution of CCN.

    The mean cloud droplet number concentration and droplet diameter showed a close link to the change in the aerosol. To quantify the net impact of an aerosol change on the precipitation we calculated the precipitation susceptibility β for the whole model domain over a period of two days with

  10. Surface drag effects on simulated wind fields in high-resolution atmospheric forecast model

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Kyo Sun; Lim, Jong Myoung; Ji, Young Yong [Environmental Radioactivity Assessment Team,Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Shin, Hye Yum [NOAA/Geophysical Fluid Dynamics Laboratory, Princeton (United States); Hong, Jin Kyu [Yonsei University, Seoul (Korea, Republic of)

    2017-04-15

It has been reported that the Weather Research and Forecasting (WRF) model generally shows a substantial overprediction bias at low to moderate wind speeds and that winds are too geostrophic (Cheng and Steenburgh 2005), which limits the application of the WRF model in areas that require accurate surface wind estimation, such as wind-energy applications, air-quality studies, and radioactive-pollutant dispersion studies. In those studies, the surface drag generated by the subgrid-scale orography is represented by introducing a sink term in the momentum equation. The purpose of our study is to evaluate the simulated meteorological fields in a high-resolution WRF framework that includes the parameterization of subgrid-scale orography developed by Mass and Ovens (2010), and to enhance the forecast skill of low-level wind fields, which play an important role in the transport and dispersion of air pollutants, including radioactive pollutants. The positive bias in 10-m wind speed is significantly alleviated by implementing the subgrid-scale orography parameterization, while other meteorological fields, including 10-m wind direction, are not changed. Increased variance of subgrid-scale orography enhances the sink of momentum and further reduces the bias in 10-m wind speed.
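
    The sink-term idea can be illustrated with a toy update step. This is not the Mass and Ovens (2010) formulation, which is not reproduced in this record; the functional form and the coefficient value below are assumptions for illustration only:

```python
import math

def apply_orographic_drag(u, v, sigma_oro, dt, c_d=1e-4):
    """Illustrative momentum sink for subgrid-scale orographic drag.

    u, v      : near-surface wind components (m/s)
    sigma_oro : standard deviation of subgrid-scale terrain height (m)
    dt        : time step (s)
    c_d       : tunable drag coefficient (hypothetical value)

    The sink scales with wind speed and with the subgrid orography
    variability, so rougher unresolved terrain slows the low-level wind
    more, mimicking the bias reduction described in the abstract. An
    implicit decay factor keeps the update stable for large dt.
    """
    speed = math.hypot(u, v)
    decay = 1.0 / (1.0 + c_d * sigma_oro * speed * dt)
    return u * decay, v * decay

# Rough terrain (sigma = 50 m) slows an 8/6 m/s wind over one 60 s step;
# both components are scaled equally, so wind direction is preserved.
u1, v1 = apply_orographic_drag(8.0, 6.0, sigma_oro=50.0, dt=60.0)
print(u1, v1)
```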

  11. Phenomenology of Low Quantum Gravity Scale Models

    CERN Document Server

    Benakli, Karim

    1999-01-01

We study some phenomenological implications of models where the scale of quantum gravity effects lies much below the four-dimensional Planck scale. These models arise from M-theory vacua where either the internal space volume is large or the string coupling is very small. We provide a critical analysis of ways to unify electroweak, strong, and gravitational interactions in M-theory. We discuss the relations between different scales in two M-vacua: Type I strings and Hořava-Witten supergravity models. The latter allows possibilities for an eleven-dimensional scale at TeV energies with one large dimension below, separating our four-dimensional world from a hidden one. Different mechanisms for breaking supersymmetry (gravity mediated, gauge mediated, and Scherk-Schwarz mechanisms) are discussed in this framework. Some phenomenological issues such as dark matter (with masses that may vary in time), origin of neutrino masses, and axion scale are discussed. We suggest that these are indications that the string scal...

  12. Scaling model for symmetric star polymers

    Science.gov (United States)

    Ramachandran, Ram; Rai, Durgesh K.; Beaucage, Gregory

    2010-03-01

Neutron scattering data from symmetric star polymers with six poly(urethane-ether) arms, chemically bonded to a C-60 molecule, are fitted using a new scaling model and scattering function. The new scaling function can describe both good solvent and theta solvent conditions as well as resolve deviations in chain conformation due to steric interactions between star arms. The scaling model quantifies the distinction between invariant topological features for this star polymer and chain tortuosity, which changes with goodness of solvent and steric interaction. Beaucage G, Phys. Rev. E 70, 031401 (2004); Ramachandran R, et al., Macromolecules 41, 9802-9806 (2008); Ramachandran R, et al., Macromolecules 42, 4746-4750 (2009); Rai DK, et al., Europhys. Lett. (submitted 10/2009).

  13. Landscape modelling at Regional to Continental scales

    Science.gov (United States)

    Kirkby, M. J.

Most work on simulating landscape evolution has been focused at scales of about 1 ha, and there are still limitations, particularly in understanding the links between hillslope process rates and climate, soils, and channel initiation. However, the need for integration with GCM outputs and with Continental Geosystems now imposes an urgent need for scaling up to Regional and Continental scales. This is reinforced by a need to incorporate estimates of soil erosion and desertification rates into national and supra-national policy. Relevant time-scales range from decadal to geological. Approaches at these regional to continental scales are critical to a fuller collaboration between geomorphologists and others interested in Continental Geosystems. Two approaches to the problem of scaling up are presented here for discussion. The first (MEDRUSH) is to embed representative hillslope flow strips into sub-catchments within a larger catchment of up to 5,000 km2. The second is to link one-dimensional models of SVAT type within DEMs at up to global scales (CSEP/SEDWEB). The MEDRUSH model is being developed as part of the EU Desertification Programme (MEDALUS project), primarily for semi-natural vegetation in southern Europe over time spans of up to 100 years. Catchments of up to 2500 km2 are divided into 50-200 sub-catchments on the basis of flow paths derived from DEMs with a horizontal resolution of 50 m or better. Within each sub-catchment a representative flow strip is selected, and Hydrology, Sediment Transport, and Vegetation change are simulated in detail for the flow strip, using a 1 hour time step. Changes within each flow strip are transferred back to the appropriate sub-catchment, and flows of water and sediment are then routed through the channel network, generating changes in flood plain morphology.

  14. Towards dynamic genome-scale models.

    Science.gov (United States)

    Gilbert, David; Heiner, Monika; Jayaweera, Yasoda; Rohr, Christian

    2017-10-13

    The analysis of the dynamic behaviour of genome-scale models of metabolism (GEMs) currently presents considerable challenges because of the difficulties of simulating such large and complex networks. Bacterial GEMs can comprise about 5000 reactions and metabolites, and encode a huge variety of growth conditions; such models cannot be used without sophisticated tool support. This article is intended to aid modellers, both specialist and non-specialist in computerized methods, to identify and apply a suitable combination of tools for the dynamic behaviour analysis of large-scale metabolic designs. We describe a methodology and related workflow based on publicly available tools to profile and analyse whole-genome-scale biochemical models. We use an efficient approximative stochastic simulation method to overcome problems associated with the dynamic simulation of GEMs. In addition, we apply simulative model checking using temporal logic property libraries, clustering and data analysis, over time series of reaction rates and metabolite concentrations. We extend this to consider the evolution of reaction-oriented properties of subnets over time, including dead subnets and functional subsystems. This enables the generation of abstract views of the behaviour of these models, which can be large, up to whole-genome in size, and therefore impractical to analyse informally by eye. We demonstrate our methodology by applying it to a reduced model of the whole-genome metabolism of Escherichia coli K-12 under different growth conditions. The overall context of our work is in the area of model-based design methods for metabolic engineering and synthetic biology. © The Author 2017. Published by Oxford University Press.

  15. Drift-Scale THC Seepage Model

    Energy Technology Data Exchange (ETDEWEB)

    C.R. Bryan

    2005-02-17

    The purpose of this report (REV04) is to document the thermal-hydrologic-chemical (THC) seepage model, which simulates the composition of waters that could potentially seep into emplacement drifts, and the composition of the gas phase. The THC seepage model is processed and abstracted for use in the total system performance assessment (TSPA) for the license application (LA). This report has been developed in accordance with "Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Post-Processing Analysis for THC Seepage) Report Integration" (BSC 2005 [DIRS 172761]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this report. The plan for validation of the models documented in this report is given in Section 2.2.2, "Model Validation for the DS THC Seepage Model," of the TWP. The TWP (Section 3.2.2) identifies Acceptance Criteria 1 to 4 for "Quantity and Chemistry of Water Contacting Engineered Barriers and Waste Forms" (NRC 2003 [DIRS 163274]) as being applicable to this report; however, at variance with the TWP, Acceptance Criterion 5 has also been determined to be applicable, and is addressed, along with the other Acceptance Criteria, in Section 4.2 of this report. Also, three FEPs not listed in the TWP (2.2.10.01.0A, 2.2.10.06.0A, and 2.2.11.02.0A) are partially addressed in this report and have been added to the list of excluded FEPs in Table 6.1-2. This report has been developed in accordance with LP-SIII.10Q-BSC, "Models". This report documents the THC seepage model and a derivative used for validation, the Drift Scale Test (DST) THC submodel. The THC seepage model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral

  16. Scale Anchoring with the Rasch Model.

    Science.gov (United States)

    Wyse, Adam E

    Scale anchoring is a method to provide additional meaning to particular scores at different points along a score scale by identifying representative items associated with the particular scores. These items are then analyzed to write statements of what types of performance can be expected of a person with the particular scores to help test takers and other stakeholders better understand what it means to achieve the different scores. This article provides simple formulas that can be used to identify possible items to serve as scale anchors with the Rasch model. Specific attention is given to practical considerations and challenges that may be encountered when applying the formulas in different contexts. An illustrative example using data from a medical imaging certification program demonstrates how the formulas can be applied in practice.
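Under the Rasch model, the probability that a person of ability θ answers an item of difficulty b correctly is P = exp(θ − b) / (1 + exp(θ − b)), so candidate anchor items for a score point are those answered correctly with at least some response-probability criterion. A minimal sketch (the 0.8 criterion and the difficulty values are illustrative assumptions, not the article's specific formulas):

```python
import math

def rasch_prob(theta, b):
    """Rasch probability of a correct response for ability theta, difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def anchor_items(theta_point, difficulties, rp=0.8):
    """Items answered correctly with probability >= rp at theta_point.
    Under the Rasch model this reduces to b <= theta_point - ln(rp / (1 - rp))."""
    cutoff = theta_point - math.log(rp / (1.0 - rp))
    return [i for i, b in enumerate(difficulties) if b <= cutoff]

difficulties = [-1.5, -0.3, 0.4, 1.1, 2.0]
print(anchor_items(1.0, difficulties, rp=0.8))  # only item 0 clears the 0.8 criterion
```

The closed-form cutoff is what makes the approach practical: anchor candidates at any score point follow from the item difficulties alone, with no further estimation.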

  17. Genome scale metabolic modeling of cancer

    DEFF Research Database (Denmark)

    Nilsson, Avlant; Nielsen, Jens

    2017-01-01

    Cancer cells reprogram metabolism to support rapid proliferation and survival. Energy metabolism is particularly important for growth and genes encoding enzymes involved in energy metabolism are frequently altered in cancer cells. A genome scale metabolic model (GEM) is a mathematical formalization...... of metabolism which allows simulation and hypotheses testing of metabolic strategies. It has successfully been applied to many microorganisms and is now used to study cancer metabolism. Generic models of human metabolism have been reconstructed based on the existence of metabolic genes in the human genome....... Cancer specific models of metabolism have also been generated by reducing the number of reactions in the generic model based on high throughput expression data, e.g. transcriptomics and proteomics. Targets for drugs and bio markers for diagnostics have been identified using these models. They have also...

  18. Applying an economical scale-aware PDF-based turbulence closure model in NOAA NCEP GCMs.

    Science.gov (United States)

    Krueger, S. K.; Belochitski, A.; Moorthi, S.; Bogenschutz, P.; Pincus, R.

    2015-12-01

    A novel unified representation of sub-grid scale (SGS) turbulence, cloudiness, and shallow convection is being implemented into the NOAA NCEP Global Forecasting System (GFS) general circulation model. The approach, known as Simplified High Order Closure (SHOC), is based on predicting a joint PDF of SGS thermodynamic variables and vertical velocity and using it to diagnose turbulent diffusion coefficients, SGS fluxes, condensation and cloudiness. Unlike other similar methods, only one new prognostic variable, turbulent kinetic energy (TKE), needs to be introduced, making the technique computationally efficient. SHOC code was adapted for a global model environment from its origins in a cloud resolving model, and incorporated into NCEP GFS. SHOC was first tested in a non-interactive mode, a configuration where SHOC receives inputs from the host model, but its outputs are not returned to the GFS. In this configuration: a) SGS TKE values produced by GFS SHOC are consistent with those produced by SHOC in a CRM; b) SGS TKE in GFS SHOC exhibits a well-defined diurnal cycle; c) there is enhanced boundary layer turbulence in the subtropical stratocumulus and tropical transition-to-cumulus areas; d) buoyancy flux diagnosed from the assumed PDF is consistent with independently calculated Brunt-Väisälä frequency in identifying stable and unstable regions. Next, SHOC was coupled to GFS: turbulent diffusion coefficients computed by SHOC are now used in place of those currently produced by the GFS boundary layer and shallow convection schemes (Han and Pan, 2011), and condensation and cloud fraction diagnosed from the SGS PDF replace those calculated in the current large-scale cloudiness scheme (Zhao and Carr, 1997).
Ongoing activities consist of debugging the fully coupled GFS/SHOC. Future work will consist of evaluating model performance and tuning the physics if necessary, by performing medium-range NWP forecasts with prescribed initial conditions, and AMIP-type climate
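The core idea of diagnosing cloudiness from an assumed SGS PDF can be shown with a toy single-variate Gaussian: cloud fraction is the probability that sub-grid total water exceeds saturation. (SHOC itself uses a more elaborate joint PDF; the functional form and numbers below are purely illustrative.)

```python
import math

def gaussian_cloud_fraction(qt_mean, qt_std, q_sat):
    """Cloud fraction from an assumed Gaussian PDF of total water qt:
    the probability that sub-grid qt exceeds the saturation value q_sat."""
    if qt_std == 0.0:
        return 1.0 if qt_mean > q_sat else 0.0
    s = (qt_mean - q_sat) / qt_std
    return 0.5 * (1.0 + math.erf(s / math.sqrt(2.0)))

print(gaussian_cloud_fraction(8.0, 1.0, 8.0))  # mean exactly at saturation -> 0.5
```

The same PDF also yields the condensed-water amount and buoyancy flux by integrating the appropriate moments, which is why a single closure can replace separate boundary-layer, shallow-convection and cloudiness schemes.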

  19. Pore-Scale Model for Microbial Growth

    Science.gov (United States)

    Tartakovsky, G.; Tartakovsky, A. M.; Scheibe, T. D.

    2011-12-01

    A Lagrangian particle model based on smoothed particle hydrodynamics (SPH) is used to simulate pore-scale flow, reactive transport and biomass growth which is controlled by the mixing of an electron donor and acceptor, in a microfluidic porous cell. The experimental results described in Ch. Zhang et al "Effects of pore-scale heterogeneity and transverse mixing on bacterial growth in porous media" were used for this study. The model represents the homogeneous pore structure of a uniform array of cylindrical posts with microbes uniformly distributed on the grain surfaces. Each one of the two solutes (electron donor and electron acceptor) enters the domain unmixed through separate inlets. In the model, pair-wise particle-particle interactions are used to simulate interactions within the biomass, and both biomass-fluid and biomass-soil grain interactions. The biomass growth rate is described by double Monod kinetics. For the set of parameters used in the simulations the model predicts that: 1) biomass grows in the shape of bridges connecting soil grains and oriented in the direction of flow so as to minimize resistance to the fluid flow; and 2) the biomass growth occurs only in the mixing zone. Using parameters available in the literature, the biomass growth model agrees qualitatively with the experimental results. In order to achieve quantitative agreement, model calibration is required.
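Double Monod kinetics, as used here for the biomass growth rate, multiplies a saturation term for each substrate, so growth vanishes wherever either the donor or the acceptor is absent, which is why growth is confined to the mixing zone. A minimal sketch (parameter values are hypothetical):

```python
def double_monod(mu_max, s_donor, k_donor, s_acceptor, k_acceptor):
    """Specific growth rate limited by both an electron donor and an electron acceptor."""
    return mu_max * (s_donor / (k_donor + s_donor)) * (s_acceptor / (k_acceptor + s_acceptor))

# Unmixed inlet: acceptor concentration is zero, so no growth
print(double_monod(1.0, 1.0, 1.0, 0.0, 1.0))  # 0.0
# Mixing zone: both substrates present at their half-saturation values
print(double_monod(1.0, 1.0, 1.0, 1.0, 1.0))  # 0.25
```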

  20. Anomalous scalings in differential models of turbulence

    CERN Document Server

    Thalabard, Simon; Galtier, Sebastien; Medvedev, Sergey

    2015-01-01

    Differential models for hydrodynamic, passive-scalar and wave turbulence given by nonlinear first- and second-order evolution equations for the energy spectrum in the $k$-space were analysed. Both types of models predict the formation of anomalous transient power-law spectra. The second-order models were analysed in terms of self-similar solutions of the second kind, and a phenomenological formula for the anomalous spectrum exponent was constructed using numerics for a broad range of parameters covering all known physical examples. The first-order models were examined analytically, including finding an analytical prediction for the anomalous exponent of the transient spectrum and description of formation of the Kolmogorov-type spectrum as a reflection wave from the dissipative scale back into the inertial range. The latter behaviour was linked to pre-shock/shock singularities similar to the ones arising in the Burgers equation. Existence of the transient anomalous scaling and the reflection-wave scenario are argu...

  1. Lattice Boltzmann Large Eddy Simulation Model of MHD

    CERN Document Server

    Flint, Christopher

    2016-01-01

    The work of Ansumali et al. is extended to Two Dimensional Magnetohydrodynamic (MHD) turbulence in which energy is cascaded to small spatial scales and thus requires subgrid modeling. Applying large eddy simulation (LES) modeling of the macroscopic fluid equations results in the need to apply ad-hoc closure schemes. LES is applied to a suitable mesoscopic lattice Boltzmann representation from which one can recover the MHD equations in the long wavelength, long time scale Chapman-Enskog limit (i.e., the Knudsen limit). Thus, first performing filter-width expansions on the lattice Boltzmann equations, followed by the standard small-Knudsen expansion on the filtered lattice Boltzmann system, results in a closed set of MHD turbulence equations, provided we enforce the physical constraint that the subgrid effects first enter the dynamics at the transport time scales. In particular, a multi-time relaxation collision operator is considered for the density distribution function and a single rel...

  2. Multi-scale modeling of composites

    DEFF Research Database (Denmark)

    Azizi, Reza

    behavior and the trapped free energy in the material, in addition to the plastic behavior in terms of the anisotropic development of the yield surface. It is shown that a generalization of Hill’s anisotropic yield criterion can be used to model the Bauschinger effect, in addition to the pressure and size...... is analyzed using a Representative Volume Element (RVE), while the homogenized data are saved and used as an input to the macro scale. The dependence of fiber size is analyzed using a higher order plasticity theory, where the free energy is stored due to plastic strain gradients at the micron scale. Hill...... dependence. The development of the macroscopic yield surface upon deformation is investigated in terms of the anisotropic hardening (expansion of the yield surface) and kinematic hardening (translation of the yield surface). The kinematic hardening law is based on trapped free energy in the material due...

  3. Exploitation of parallelism in climate models. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Baer, Ferdinand; Tribbia, Joseph J.; Williamson, David L.

    2001-02-05

    This final report includes details on the research accomplished by the grant entitled 'Exploitation of Parallelism in Climate Models' to the University of Maryland. The purpose of the grant was to shed light on (a) how to reconfigure the atmospheric prediction equations such that the time iteration process could be compressed by use of MPP architecture; (b) how to develop local subgrid scale models which can provide time and space dependent parameterization for a state-of-the-art climate model to minimize the scale resolution necessary for a climate model, and to utilize MPP capability to simultaneously integrate those subgrid models and their statistics; and (c) how to capitalize on the MPP architecture to study the inherent ensemble nature of the climate problem. In the process of addressing these issues, we created parallel algorithms with spectral accuracy; we developed a process for concurrent climate simulations; we established suitable model reconstructions to speed up computation; we identified and tested optimum realization statistics; we undertook a number of parameterization studies to better understand model physics; and we studied the impact of subgrid scale motions and their parameterization in atmospheric models.

  4. Multi-scale modelling and dynamics

    Science.gov (United States)

    Müller-Plathe, Florian

    Moving from a fine-grained particle model to one of lower resolution leads, with few exceptions, to an acceleration of molecular mobility, higher diffusion coefficients, lower viscosities and more. On top of that, the level of acceleration is often different for different dynamical processes as well as for different state points. While the reasons are often understood, the fact that coarse-graining almost necessarily introduces unpredictable acceleration of the molecular dynamics severely limits its usefulness as a predictive tool. There are several attempts under way to remedy these shortcomings of coarse-grained models. On the one hand, we follow bottom-up approaches. They attempt already when the coarse-graining scheme is conceived to estimate their impact on the dynamics. This is done by excess-entropy scaling. On the other hand, we also pursue a top-down development. Here we start with a very coarse-grained model (dissipative particle dynamics) which in its native form produces qualitatively wrong polymer dynamics, as its molecules cannot entangle. This model is modified by additional temporary bonds, so-called slip springs, to repair this defect. As a result, polymer melts and solutions described by the slip-spring DPD model show correct dynamical behaviour. Read more: ``Excess entropy scaling for the segmental and global dynamics of polyethylene melts'', E. Voyiatzis, F. Müller-Plathe, and M.C. Böhm, Phys. Chem. Chem. Phys. 16, 24301-24311 (2014). [DOI: 10.1039/C4CP03559C] ``Recovering the Reptation Dynamics of Polymer Melts in Dissipative Particle Dynamics Simulations via Slip-Springs'', M. Langeloth, Y. Masubuchi, M. C. Böhm, and F. Müller-Plathe, J. Chem. Phys. 138, 104907 (2013). [DOI: 10.1063/1.4794156].

  5. Sub-Grid-Scale Description of Turbulent Magnetic Reconnection in Magnetohydrodynamics

    CERN Document Server

    Widmer, Fabien; Yokoi, Nobumitsu

    2015-01-01

    Magnetic reconnection requires, at least locally, a non-ideal plasma response. In collisionless space and astrophysical plasmas, turbulence could permit this instead of the too rare binary collisions. We investigated the influence of turbulence on the reconnection rate in the framework of a single fluid compressible MHD approach. The goal is to find out, whether unresolved, sub-grid for MHD simulations, turbulence can enhance the reconnection process in high Reynolds number astrophysical plasma. We solve, simultaneously with the grid-scale MHD equations, evolution equations for the sub-grid turbulent energy and cross helicity according to Yokoi's model (Yokoi (2013)) where turbulence is self-generated and -sustained through the inhomogeneities of the mean fields. Simulations of Harris and force free sheets confirm the results of Higashimori et al. (2013) and new results are obtained about the dependence on resistivity for large Reynolds number as well as guide field effects. The amount of energy transferred f...

  6. Modeling photosynthesis in sea ice-covered waters

    Science.gov (United States)

    Long, Matthew C.; Lindsay, Keith; Holland, Marika M.

    2015-09-01

    The lower trophic levels of marine ecosystems play a critical role in the Earth System mediating fluxes of carbon to the ocean interior. Many of the functional relationships describing biological rate processes, such as primary productivity, in marine ecosystem models are nonlinear functions of environmental state variables. As a result of nonlinearity, rate processes computed from mean fields at coarse resolution will differ from similar computations that incorporate small-scale heterogeneity. Here we examine how subgrid-scale variability in sea ice thickness impacts simulated net primary productivity (NPP) in a 1°×1° configuration of the Community Earth System Model (CESM). CESM simulates a subgrid-scale ice thickness distribution and computes shortwave penetration independently for each ice thickness category. However, the default model formulation uses grid-cell mean irradiance to compute NPP. We demonstrate that accounting for subgrid-scale shortwave heterogeneity by computing light limitation terms under each ice category then averaging the result is a more accurate invocation of the photosynthesis equations. Moreover, this change delays seasonal bloom onset and increases interannual variability in NPP in the sea ice zone in the model. The new treatment reduces annual production by about 32% in the Arctic and 19% in the Antarctic. Our results highlight the importance of considering heterogeneity in physical fields when integrating nonlinear biogeochemical reactions.
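The effect described, averaging a nonlinear (concave, saturating) light-limitation term over sub-grid ice categories rather than evaluating it at the grid-cell mean irradiance, follows from Jensen's inequality and can be shown with a toy calculation (the functional form and numbers are illustrative, not the CESM formulation):

```python
import math

def light_limitation(irradiance, i_sat=10.0):
    """Saturating (concave) photosynthesis-light response, between 0 and 1."""
    return 1.0 - math.exp(-irradiance / i_sat)

# Hypothetical irradiance under each sub-grid ice-thickness category, with area fractions
irradiance = [0.5, 2.0, 8.0, 30.0]
fraction = [0.25, 0.25, 0.25, 0.25]

mean_irr = sum(f * i for f, i in zip(fraction, irradiance))
default = light_limitation(mean_irr)  # default: limitation at grid-cell mean irradiance
revised = sum(f * light_limitation(i) for f, i in zip(fraction, irradiance))  # per category

print(default > revised)  # concave response: the mean-irradiance shortcut overestimates NPP
```

Because the response saturates, bright thin-ice categories add little beyond the mean while dark thick-ice categories subtract a lot, which is the mechanism behind the simulated NPP reductions of ~32% (Arctic) and ~19% (Antarctic).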

  7. Modelling landscape evolution at the flume scale

    Science.gov (United States)

    Cheraghi, Mohsen; Rinaldo, Andrea; Sander, Graham C.; Barry, D. Andrew

    2017-04-01

    The ability of a large-scale Landscape Evolution Model (LEM) to simulate the soil surface morphological evolution as observed in a laboratory flume (1-m × 2-m surface area) was investigated. The soil surface was initially smooth, and was subjected to heterogeneous rainfall in an experiment designed to avoid rill formation. Low-cohesive fine sand was placed in the flume while the slope and relief height were 5 % and 20 cm, respectively. Non-uniform rainfall with an average intensity of 85 mm h-1 and a standard deviation of 26 % was applied to the sediment surface for 16 h. We hypothesized that the complex overland water flow can be represented by a drainage discharge network, which was calculated via the micro-morphology and the rainfall distribution. Measurements included high resolution Digital Elevation Models that were captured at intervals during the experiment. The calibrated LEM captured the migration of the main flow path from the low precipitation area into the high precipitation area. Furthermore, both model and experiment showed a steep transition zone in soil elevation that moved upstream during the experiment. We conclude that the LEM is applicable under non-uniform rainfall and in the absence of surface incisions, thereby extending its applicability beyond that shown in previous applications. Keywords: Numerical simulation, Flume experiment, Particle Swarm Optimization, Sediment transport, River network evolution model.

  8. Representing glaciers in a regional climate model

    Energy Technology Data Exchange (ETDEWEB)

    Kotlarski, Sven [Max Planck Institute for Meteorology, Hamburg (Germany); ETH Zurich, Institute for Atmospheric and Climate Science, Zurich (Switzerland); Jacob, Daniela; Podzun, Ralf [Max Planck Institute for Meteorology, Hamburg (Germany); Paul, Frank [University of Zurich, Department of Geography, Zurich (Switzerland)

    2010-01-15

    A glacier parameterization scheme has been developed and implemented into the regional climate model REMO. The new scheme interactively simulates the mass balance as well as changes of the areal extent of glaciers on a subgrid scale. The temporal evolution and the general magnitude of the simulated glacier mass balance in the European Alps are in good accordance with observations for the period 1958-1980, but the strong mass loss towards the end of the twentieth century is systematically underestimated. The simulated decrease of glacier area in the Alps between 1958 and 2003 ranges from -17.1 to -23.6%. The results indicate that observed glacier mass balances can be approximately reproduced within a regional climate model based on simplified concepts of glacier-climate interaction. However, realistic results can only be achieved by explicitly accounting for the subgrid variability of atmospheric parameters within a climate model grid box. (orig.)
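A subgrid mass-balance calculation of this general kind can be sketched with a simple degree-day scheme, evaluated separately per elevation band or grid fraction. (This is a generic, hypothetical illustration; REMO's scheme is driven by the model's own meteorological fields and is more elaborate. The degree-day factor and the snow/rain threshold below are assumed values.)

```python
def degree_day_mass_balance(daily_temp_c, daily_precip_mm, ddf=4.0, snow_temp=1.0):
    """Annual surface mass balance (mm w.e.): precipitation accumulates as snow when
    the daily temperature is below snow_temp; melt is proportional to positive degree days."""
    acc = sum(p for t, p in zip(daily_temp_c, daily_precip_mm) if t < snow_temp)
    melt = ddf * sum(max(t, 0.0) for t in daily_temp_c)
    return acc - melt

# Five hypothetical days: 15 mm of snowfall versus 9 positive degree days of melt
balance = degree_day_mass_balance([-5.0, -2.0, 0.0, 3.0, 6.0], [10.0, 5.0, 0.0, 2.0, 0.0])
print(balance)  # 15 - 4*9 = -21.0 mm w.e.
```

Evaluating such a balance on subgrid elevation bands, rather than at the grid-cell mean height, is what lets a coarse climate model capture the strong altitude dependence of glacier mass balance.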

  9. Why PUB needs scaling

    Science.gov (United States)

    Lovejoy, S.; Schertzer, D.; Hubert, P.; Mouchel, J. M.; Benjoudhi, H.; Tchigurinskaya, Y.; Gaume, E.; Vesseire, J.-M.

    2003-04-01

    Hydrological fields display an extreme variability over a wide range of space-time scales. This variability is beyond the scope of classical mathematical and modeling methods which are forced to combine homogeneity assumptions with scale truncations and subgrid parameterizations. These ad hoc procedures nevertheless lead to complex numerical codes: they are difficult to transfer from one basin to another one, or even to verify with data at a different scale. Tuning the model parameters is hazardous: “predictions” are often reduced to fitting existing observations and are in any case essentially limited to the narrow range of space-time scales over which the parameters have been estimated. In contrast, in recent scaling approaches heterogeneity and uncertainty at all scales are no longer obstacles. The variability is viewed as a consequence of a scale symmetry which must first be elucidated and then exploited: small scale homogeneity assumptions are replaced by small scale heterogeneity assumptions which are verified from data covering wide ranges of scale. PUB provides an unprecedented opportunity not only to test scaling concepts and techniques, but also to develop them further. Indeed, PUB can be restated in the following manner: given a partial knowledge of the input (atmospheric states, dynamics and fluxes) and of the media (basin) over a given range of scales, what can we predict for the output (streamflow and water quality) and over which range of scales? We illustrate this state of the art with examples taken from various projects involving precipitation and stream flow collectively spanning the range of scales from centimeters to planetary scales in space, from seconds to tens of years in time.

  10. Scaling in a Multispecies Network Model Ecosystem

    CERN Document Server

    Solé, Ricard V.; Alonso, David; McKane, Alan

    1999-01-01

    A new model ecosystem consisting of many interacting species is introduced. The species are connected through a random matrix with a given connectivity. It is shown that the system is organized close to a boundary of marginal stability in such a way that fluctuations follow power law distributions both in species abundance and their lifetimes for some slow-driving (immigration) regime. The connectivity and the number of species are linked through a scaling relation which is the one observed in real ecosystems. These results suggest that the basic macroscopic features of real, species-rich ecologies might be linked with a critical state. A natural link between lognormal and power law distributions of species abundances is suggested.
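A random interaction matrix with a given connectivity, and the linear (May-type) stability check used to locate the boundary of marginal stability, can be sketched as follows. (The self-regulation term of −1 on the diagonal and the Gaussian interaction strengths are illustrative assumptions, not the paper's exact model.)

```python
import numpy as np

rng = np.random.default_rng(0)

def random_community(n_species, connectivity, sigma=0.2):
    """Random interaction matrix: each off-diagonal link is present with probability
    `connectivity`, with strength drawn from N(0, sigma); self-regulation of -1."""
    mask = rng.random((n_species, n_species)) < connectivity
    strengths = rng.normal(0.0, sigma, (n_species, n_species))
    a = np.where(mask, strengths, 0.0)
    np.fill_diagonal(a, -1.0)
    return a

def is_stable(a):
    """Linear stability: all eigenvalues of the community matrix have negative real part."""
    return bool(np.max(np.linalg.eigvals(a).real) < 0.0)

a = random_community(50, 0.1)
print(np.max(np.linalg.eigvals(a).real))  # distance from the stability boundary
```

Sweeping `sigma` or `connectivity` upward pushes the leading eigenvalue toward zero, which is the marginal-stability boundary near which the paper's power-law fluctuations appear.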

  11. Wind Farm parametrization in the mesoscale model WRF

    DEFF Research Database (Denmark)

    Volker, Patrick; Badger, Jake; Hahmann, Andrea N.

    2012-01-01

    significantly lower computational costs compared to high resolution models. Due to the fact that its typical horizontal grid spacing is on the order of 2km, the energy extracted by the turbine, as well as the wake development inside the turbine- containing grid cells, are not described explicitly......, but are parametrized as another sub-grid scale process. In order to appropriately capture the wind farm wake recovery and its direction, two properties are important, among others, the total energy extracted by the wind farm and its velocity deficit distribution. In the considered parametrization the individual...... turbines produce a thrust dependent on the background velocity. For the sub-grid scale velocity deficit, the entrainment from the free atmospheric flow into the wake region, which is responsible for the expansion, is taken into account. Furthermore, since the model horizontal distance is several times...

  12. Advancement of Global-scale River Hydrodynamics Modelling and Its Potential Applications to Earth System Models

    Science.gov (United States)

    Yamazaki, D.

    2015-12-01

    Global river routing models have been developed for representing freshwater discharge from land to ocean in Earth System Models. At the beginning, global river models had simulated river discharge along a prescribed river network map by using a linear-reservoir assumption. Recently, in parallel with advancements in remote sensing and computational power, many advanced global river models have started to represent floodplain inundation assuming sub-grid floodplain topography. Some of them further pursue physically appropriate representation of river and floodplain dynamics, and have succeeded in utilizing "hydrodynamic flow equations" to realistically simulate channel/floodplain and upstream/downstream interactions. State-of-the-art global river hydrodynamic models can well reproduce flood stage (e.g. inundated areas and water levels) in addition to river discharge. Flood stage simulation by global river models can be potentially coupled with land surface processes in Earth System Models. For example, evaporation from inundated water area is not negligible for land-atmosphere interactions in arid areas (such as the Niger River). Surface water level and ground water level are correlated with each other in flat topography, and this interaction could dominate wetting and drying of many small lakes in flatland and could also affect biogeochemical processes in these lakes. These land/surface water interactions had not been implemented in Earth System Models but they have potential impact on the global climate and carbon cycle. In the AGU presentation, recent advancements in global river hydrodynamic modelling, including super-high resolution river topography datasets, will be introduced. The potential applications of river and surface water modules within Earth System Models will also be discussed.

  13. Global-scale modeling of groundwater recharge

    Science.gov (United States)

    Döll, P.; Fiedler, K.

    2008-05-01

    Long-term average groundwater recharge, which is equivalent to renewable groundwater resources, is the major limiting factor for the sustainable use of groundwater. Compared to surface water resources, groundwater resources are more protected from pollution, and their use is less restricted by seasonal and inter-annual flow variations. To support water management in a globalized world, it is necessary to estimate groundwater recharge at the global scale. Here, we present a best estimate of global-scale long-term average diffuse groundwater recharge (i.e. renewable groundwater resources) that has been calculated by the most recent version of the WaterGAP Global Hydrology Model WGHM (spatial resolution of 0.5° by 0.5°, daily time steps). The estimate was obtained using two state-of-the-art global data sets of gridded observed precipitation that we corrected for measurement errors, which also allowed us to quantify the uncertainty due to these equally uncertain data sets. The standard WGHM groundwater recharge algorithm was modified for semi-arid and arid regions, based on independent estimates of diffuse groundwater recharge, which led to an unbiased estimation of groundwater recharge in these regions. WGHM was tuned against observed long-term average river discharge at 1235 gauging stations by adjusting, individually for each basin, the partitioning of precipitation into evapotranspiration and total runoff. We estimate that global groundwater recharge was 12 666 km3/yr for the climate normal 1961-1990, i.e. 32% of total renewable water resources. In semi-arid and arid regions, mountainous regions, permafrost regions and in the Asian Monsoon region, groundwater recharge accounts for a lower fraction of total runoff, which makes these regions particularly vulnerable to seasonal and inter-annual precipitation variability and water pollution. 
Average per-capita renewable groundwater resources of countries vary between 8 m3/(capita yr) for Egypt to more than 1 million m3
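The quoted figures pin down the implied global total: if 12 666 km3/yr of recharge is 32% of total renewable water resources, those resources amount to roughly 39 600 km3/yr.

```python
recharge_km3 = 12_666   # global diffuse groundwater recharge, climate normal 1961-1990
share = 0.32            # stated fraction of total renewable water resources

total_renewable_km3 = recharge_km3 / share
print(round(total_renewable_km3))  # 39581, i.e. roughly 39 600 km3/yr
```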

  14. Global-scale modeling of groundwater recharge

    Science.gov (United States)

    Döll, P.; Fiedler, K.

    2007-11-01

    Long-term average groundwater recharge, which is equivalent to renewable groundwater resources, is the major limiting factor for the sustainable use of groundwater. Compared to surface water resources, groundwater resources are more protected from pollution, and their use is less restricted by seasonal and inter-annual flow variations. To support water management in a globalized world, it is necessary to estimate groundwater recharge at the global scale. Here, we present a best estimate of global-scale long-term average diffuse groundwater recharge (i.e. renewable groundwater resources) that has been calculated by the most recent version of the WaterGAP Global Hydrology Model WGHM (spatial resolution of 0.5° by 0.5°, daily time steps). The estimate was obtained using two state-of-the-art global data sets of gridded observed precipitation that we corrected for measurement errors, which also allowed us to quantify the uncertainty due to these equally uncertain data sets. The standard WGHM groundwater recharge algorithm was modified for semi-arid and arid regions, based on independent estimates of diffuse groundwater recharge, which led to an unbiased estimation of groundwater recharge in these regions. WGHM was tuned against observed long-term average river discharge at 1235 gauging stations by adjusting, individually for each basin, the partitioning of precipitation into evapotranspiration and total runoff. We estimate that global groundwater recharge was 12 666 km3/yr for the climate normal 1961-1990, i.e. 32% of total renewable water resources. In semi-arid and arid regions, mountainous regions, permafrost regions and in the Asian Monsoon region, groundwater recharge accounts for a lower fraction of total runoff, which makes these regions particularly vulnerable to seasonal and inter-annual precipitation variability and water pollution. 
Average per-capita renewable groundwater resources of countries vary between 8 m3/(capita yr) for Egypt to more than 1 million m3
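The modified recharge algorithm described above partitions total runoff into groundwater recharge subject to a maximum rate. A minimal sketch of such a partitioning rule follows; the function and parameter names (`f_g`, `r_g_max_mm`) are hypothetical illustrations, not the actual WGHM code:

```python
def groundwater_recharge(total_runoff_mm, f_g, r_g_max_mm):
    """Illustrative WGHM-style partitioning: a fraction f_g of total
    runoff becomes diffuse groundwater recharge, capped at a maximum
    recharge rate r_g_max_mm (all values in mm/yr)."""
    return min(r_g_max_mm, f_g * total_runoff_mm)

# Example: a humid cell versus a semi-arid cell with a low recharge cap
humid = groundwater_recharge(500.0, f_g=0.5, r_g_max_mm=1000.0)  # fraction applies
arid = groundwater_recharge(500.0, f_g=0.5, r_g_max_mm=100.0)    # cap binds
```

In the capped (semi-arid) case the recharge fraction of total runoff drops, which is exactly the vulnerability the abstract points out for such regions.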

  15. Global-scale modeling of groundwater recharge

    Directory of Open Access Journals (Sweden)

    P. Döll

    2008-05-01

Long-term average groundwater recharge, which is equivalent to renewable groundwater resources, is the major limiting factor for the sustainable use of groundwater. Compared to surface water resources, groundwater resources are more protected from pollution, and their use is less restricted by seasonal and inter-annual flow variations. To support water management in a globalized world, it is necessary to estimate groundwater recharge at the global scale. Here, we present a best estimate of global-scale long-term average diffuse groundwater recharge (i.e. renewable groundwater resources) that has been calculated by the most recent version of the WaterGAP Global Hydrology Model WGHM (spatial resolution of 0.5° by 0.5°, daily time steps). The estimate was obtained using two state-of-the-art global data sets of gridded observed precipitation that we corrected for measurement errors, which also allowed us to quantify the uncertainty due to these equally uncertain data sets. The standard WGHM groundwater recharge algorithm was modified for semi-arid and arid regions, based on independent estimates of diffuse groundwater recharge, which led to an unbiased estimation of groundwater recharge in these regions. WGHM was tuned against observed long-term average river discharge at 1235 gauging stations by adjusting, individually for each basin, the partitioning of precipitation into evapotranspiration and total runoff. We estimate that global groundwater recharge was 12 666 km3/yr for the climate normal 1961–1990, i.e. 32% of total renewable water resources. In semi-arid and arid regions, mountainous regions, permafrost regions and in the Asian Monsoon region, groundwater recharge accounts for a lower fraction of total runoff, which makes these regions particularly vulnerable to seasonal and inter-annual precipitation variability and water pollution. Average per-capita renewable groundwater resources of countries vary between 8 m3

  16. Measurement and Modelling of Scaling Minerals

    DEFF Research Database (Denmark)

    Villafafila Garcia, Ada

    2005-01-01

Solid-liquid equilibrium of sulphate scaling minerals (SrSO4, BaSO4, CaSO4 and CaSO4·2H2O) at temperatures up to 300°C and pressures up to 1000 bar is described in chapter 4. Results for the binary systems (M2+, SO42-)-H2O; the ternary systems (Na+, M2+, SO42-)-H2O and (Na+, M2+, Cl-)-H2O; and the quaternary systems (Na+, M2+)(Cl… to 1000 bar. The solubility of CO2 in pure water, and the solubility of CO2 in solutions of different salts (NaCl and Na2SO4) have also been correlated. Results for the binary systems MCO3-H2O and CO2-H2O; the ternary systems MCO3-CO2-H2O, CO2-NaCl-H2O, and CO2-Na2SO4-H2O; and the quaternary system CO2… Chapter 2 is focused on thermodynamics of the systems studied and on the calculation of vapour-liquid, solid-liquid, and speciation equilibria. The effects of both temperature and pressure on the solubility are addressed, and an explanation of the model calculations is also given. Chapter 3 presents…

  17. Multi-scale models for cell adhesion

    Science.gov (United States)

    Wu, Yinghao; Chen, Jiawen; Xie, Zhong-Ru

    2014-03-01

The interactions of membrane receptors during cell adhesion play pivotal roles in tissue morphogenesis during development. Our lab focuses on developing multi-scale models to decompose the mechanical and chemical complexity in cell adhesion. Recent experimental evidence shows that clustering is a generic process for cell adhesive receptors. However, the physical basis of such receptor clustering is not understood. We introduced the effect of molecular flexibility to evaluate the dynamics of receptors. By developing a new theory to quantify the changes of binding free energy in different cellular environments, we revealed that restriction of molecular flexibility upon binding of membrane receptors from apposing cell surfaces (trans) causes large entropy loss, which dramatically increases their lateral interactions (cis). This provides a new molecular mechanism to initialize receptor clustering on the cell-cell interface. Using subcellular simulations, we further found that clustering is a cooperative process requiring both trans and cis interactions. The detailed binding constants during these processes are calculated and compared with experimental data from our collaborator's lab.

  18. Modeling cancer metabolism on a genome scale

    Science.gov (United States)

    Yizhak, Keren; Chaneton, Barbara; Gottlieb, Eyal; Ruppin, Eytan

    2015-01-01

Cancer cells have fundamentally altered cellular metabolism that is associated with their tumorigenicity and malignancy. In addition to the widely studied Warburg effect, several new key metabolic alterations in cancer have been established over the last decade, leading to the recognition that altered tumor metabolism is one of the hallmarks of cancer. Deciphering the full scope and functional implications of the dysregulated metabolism in cancer requires both the advancement of a variety of omics measurements and the advancement of computational approaches for the analysis and contextualization of the accumulated data. Encouragingly, while the metabolic network is highly interconnected and complex, it is at the same time probably the best characterized cellular network. In what follows, this review discusses the challenges that genome-scale modeling of cancer metabolism has been facing. We survey several recent studies demonstrating the first strides that have been made, testifying to the value of this approach in portraying a network-level view of the cancer metabolism and in identifying novel drug targets and biomarkers. Finally, we outline a few new steps that may further advance this field. PMID:26130389

  19. Multi-scale enhancement of climate prediction over land by increasing the model sensitivity to vegetation variability in EC-Earth

    Science.gov (United States)

    Alessandri, Andrea; Catalano, Franco; De Felice, Matteo; Van Den Hurk, Bart; Doblas Reyes, Francisco; Boussetta, Souhail; Balsamo, Gianpaolo; Miller, Paul A.

    2017-08-01

The EC-Earth earth system model has recently been developed to include the dynamics of vegetation. In its original formulation, vegetation variability is simply operated by the Leaf Area Index (LAI), which affects climate basically by changing the vegetation physiological resistance to evapotranspiration. This coupling has been found to have only a weak effect on the surface climate modeled by EC-Earth. In reality, the effective sub-grid vegetation fractional coverage will vary seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence. Therefore it affects biophysical parameters such as the albedo, surface roughness and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation cover on the LAI. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (twentieth century) simulations and retrospective predictions to the decadal (5-year), seasonal and weather time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, West US, Eastern Europe, Russia and eastern Siberia due to the implemented time-varying shadowing effect by tree-vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover tends to correct the winter warm biases, improves the climate change sensitivity, the decadal potential predictability as well as the skill of forecasts at seasonal and weather time-scales. Significant improvements of the prediction of 2 m temperature and rainfall are also shown over transitional land surface hot spots. Both the potential predictability at decadal time-scale and seasonal-forecasts skill are enhanced over
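An exponential cover-LAI dependence of the kind this abstract describes can be sketched as follows; the Lambert-Beer form and the extinction coefficient `k = 0.5` are illustrative assumptions, not the exact EC-Earth formulation:

```python
import math

def veg_cover(lai, k=0.5):
    """Effective sub-grid vegetation fractional cover as an exponential
    (Lambert-Beer type) function of leaf area index. Cover saturates
    toward 1 for dense canopies and vanishes for bare ground; the
    coefficient k is an illustrative value."""
    return 1.0 - math.exp(-k * lai)
```

With this form, seasonal LAI swings translate into seasonal swings of the vegetated fraction, and hence of albedo and roughness, which is the coupling the parameterization introduces.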

  20. Downscaling modelling system for multi-scale air quality forecasting

    Science.gov (United States)

    Nuterman, R.; Baklanov, A.; Mahura, A.; Amstrup, B.; Weismann, J.

    2010-09-01

Urban modelling for real meteorological situations, in general, considers only a small part of the urban area in a micro-meteorological model, and urban heterogeneities outside a modelling domain affect micro-scale processes. Therefore, it is important to build a chain of models of different scales with nesting of higher resolution models into larger scale lower resolution models. Usually, the up-scaled city- or meso-scale models consider parameterisations of urban effects or statistical descriptions of the urban morphology, whereas the micro-scale (street canyon) models are obstacle-resolved and they consider a detailed geometry of the buildings and the urban canopy. The developed system consists of the meso-, urban- and street-scale models. The first is the Numerical Weather Prediction model (HIgh Resolution Limited Area Model) combined with an Atmospheric Chemistry Transport model (the Comprehensive Air quality Model with extensions). Several levels of urban parameterisation are considered. They are chosen depending on selected scales and resolutions. For regional scale, the urban parameterisation is based on the roughness and flux corrections approach; for urban scale - building effects parameterisation. Modern methods of computational fluid dynamics allow solving environmental problems connected with atmospheric transport of pollutants within the urban canopy in the presence of penetrable (vegetation) and impenetrable (buildings) obstacles. For local- and micro-scale nesting, the Micro-scale Model for Urban Environment is applied. This is a comprehensive obstacle-resolved urban wind-flow and dispersion model based on the Reynolds averaged Navier-Stokes approach and several turbulent closures, i.e. k-ε linear eddy-viscosity model, k-ε non-linear eddy-viscosity model and Reynolds stress model. Boundary and initial conditions for the micro-scale model are used from the up-scaled models with corresponding interpolation conserving mass. For the boundaries a

  1. Modeling the Effects of Aircraft Emissions on Atmospheric Photochemistry Using Layered Plume Dynamics

    Science.gov (United States)

    Cameron, M. A.; Jacobson, M. Z.; Naiman, A. D.; Lele, S. K.

    2012-12-01

    Aviation is an expanding industry, experiencing continued growth and playing an increasingly noticed role in upper tropospheric/lower stratospheric composition. Nitrogen oxides and other gas-phase emissions from aircraft react to affect ozone photochemistry. This research investigates the effects of treating aircraft gas-phase chemistry within an expanding layered plume versus at the grid scale. SMVGEAR II, a sparse-matrix, vectorized Gear-type solver for ordinary differential equations, is used to solve chemical equations at both the grid scale and subgrid scale. A Subgrid Plume Model (SPM) is used to advance the expanding plume, accounting for wind shear and diffusion. Simulations suggest that using a layered plume approach results in noticeably different final NOx concentrations, demonstrating the importance of these plume dynamics in predicting the effects of aircraft on ozone concentrations. Results showing the effects of a layered plume, single plume, and no plume on ozone after several hours will be presented.
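The dilution side of such a sub-grid plume treatment can be caricatured with a single expanding cross-section; the linear area growth and the function below are illustrative assumptions, not the SPM's actual layered formulation (which also handles wind shear, inter-layer diffusion and in-plume chemistry):

```python
def plume_concentration(c0, c_ambient, area0, growth_rate, t):
    """Excess concentration in an expanding plume is diluted in
    proportion to the growth of its cross-sectional area, here taken
    to grow linearly in time: area(t) = area0 + growth_rate * t."""
    area = area0 + growth_rate * t
    return c_ambient + (c0 - c_ambient) * (area0 / area)
```

Because chemistry is nonlinear in NOx, evaluating rates at this high in-plume concentration rather than at the grid-mean is what produces the different final NOx the abstract reports.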

  2. Upscaling a catchment-scale ecohydrology model for regional-scale earth system modeling

    Science.gov (United States)

    Adam, J. C.; Tague, C.; Liu, M.; Garcia, E.; Choate, J.; Mullis, T.; Hull, R.; Vaughan, J. K.; Kalyanaraman, A.; Nguyen, T.

    2014-12-01

With a focus on the U.S. Pacific Northwest (PNW), BioEarth is an Earth System Model (EaSM) currently in development that explores the interactions between coupled C:N:H2O dynamics and resource management actions at the regional scale. Capturing coupled biogeochemical processes within EaSMs like BioEarth is important for exploring the response of the land surface to changes in climate and resource management actions; information that is important for shaping decisions that promote sustainable use of our natural resources. However, many EaSM frameworks do not adequately represent landscape-scale (<10 km) processes, because coarser resolutions (>10 km) are necessitated by computational limitations. Spatial heterogeneity in a landscape arises due to spatial differences in underlying soil and vegetation properties that control moisture, energy and nutrient fluxes; as well as differences that arise due to spatially-organized connections that may drive an ecohydrologic response by the land surface. While many land surface models used in EaSM frameworks capture the first type of heterogeneity, few account for the influence of lateral connectivity on land surface processes. This type of connectivity can be important when considering soil moisture and nutrient redistribution. The RHESSys model is utilized by BioEarth to enable a "bottom-up" approach that preserves fine spatial-scale sensitivities and lateral connectivity that may be important for coupled C:N:H2O dynamics over larger scales. RHESSys is a distributed eco-hydrologic model that was originally developed to run at relatively fine but computationally intensive spatial resolutions over small catchments. The objective of this presentation is to describe two developments to enable implementation of RHESSys over the PNW. 1) RHESSys is being adapted for BioEarth to allow for moderately coarser resolutions and the flexibility to capture both types of heterogeneity at biome-specific spatial scales. 
2) A Kepler workflow is utilized to enable RHESSys implementation over

  3. Stochastic Climate Theory and Modelling

    CERN Document Server

    Franzke, Christian L E; Berner, Judith; Williams, Paul D; Lucarini, Valerio

    2014-01-01

    Stochastic methods are a crucial area in contemporary climate research and are increasingly being used in comprehensive weather and climate prediction models as well as reduced order climate models. Stochastic methods are used as subgrid-scale parameterizations as well as for model error representation, uncertainty quantification, data assimilation and ensemble prediction. The need to use stochastic approaches in weather and climate models arises because we still cannot resolve all necessary processes and scales in comprehensive numerical weather and climate prediction models. In many practical applications one is mainly interested in the largest and potentially predictable scales and not necessarily in the small and fast scales. For instance, reduced order models can simulate and predict large scale modes. Statistical mechanics and dynamical systems theory suggest that in reduced order models the impact of unresolved degrees of freedom can be represented by suitable combinations of deterministic and stochast...
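A common concrete ingredient of such stochastic subgrid-scale parameterizations is a first-order autoregressive (Ornstein-Uhlenbeck) perturbation applied multiplicatively to parameterized tendencies, as in SPPT-type schemes; the sketch below uses hypothetical parameter values:

```python
import math
import random

def ou_step(eta, dt, tau, sigma, rng):
    """One exact update step of an Ornstein-Uhlenbeck process: eta
    relaxes toward zero on decorrelation time scale tau while being
    forced by Gaussian noise with stationary standard deviation sigma."""
    phi = math.exp(-dt / tau)
    return phi * eta + sigma * math.sqrt(1.0 - phi * phi) * rng.gauss(0.0, 1.0)

def perturbed_tendency(tendency, eta):
    """SPPT-style multiplicative perturbation of a deterministic
    parameterized tendency (illustrative)."""
    return (1.0 + eta) * tendency
```

Run across an ensemble with different seeds, such perturbations represent the uncertainty contributed by unresolved scales rather than a single deterministic closure.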

  4. Modelling across bioreactor scales: methods, challenges and limitations

    DEFF Research Database (Denmark)

    Gernaey, Krist

Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial scale systems, considering… that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial scale bioreactor. But which model building methods are available? And how can one ensure that the parameters in such a model are properly estimated? And what… are the limitations of different types of models? This paper will provide examples of models that have been published in the literature for use across bioreactor scales, including computational fluid dynamics (CFD) and population balance models. Furthermore, the importance of good modeling practice…
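Between full CFD and a single well-mixed tank sits the compartment-model approach for representing gradients; a minimal two-compartment sketch follows (the exchange rate, feed rate and first-order consumption term are illustrative assumptions, not from the paper):

```python
def two_compartment_gradient(s_feed, q_ex, k_cons, v1, v2, dt=0.01, t_end=50.0):
    """Minimal compartment model of substrate gradients in a large
    bioreactor: substrate is fed into compartment 1 at constant mass
    rate s_feed, exchanged with compartment 2 at volumetric rate q_ex,
    and consumed first-order (rate k_cons) in both. Integrated with
    forward Euler; returns the two concentrations at t_end."""
    s1 = s2 = 0.0
    for _ in range(int(t_end / dt)):
        ds1 = s_feed / v1 - q_ex * (s1 - s2) / v1 - k_cons * s1
        ds2 = q_ex * (s1 - s2) / v2 - k_cons * s2
        s1 += dt * ds1
        s2 += dt * ds2
    return s1, s2
```

Even this toy model reproduces the qualitative point of the paper: the feed zone sees persistently higher substrate levels than the bulk, which is the gradient that scale-down experiments try to mimic.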

  5. Static Aeroelastic Scaling and Analysis of a Sub-Scale Flexible Wing Wind Tunnel Model

    Science.gov (United States)

    Ting, Eric; Lebofsky, Sonia; Nguyen, Nhan; Trinh, Khanh

    2014-01-01

    This paper presents an approach to the development of a scaled wind tunnel model for static aeroelastic similarity with a full-scale wing model. The full-scale aircraft model is based on the NASA Generic Transport Model (GTM) with flexible wing structures referred to as the Elastically Shaped Aircraft Concept (ESAC). The baseline stiffness of the ESAC wing represents a conventionally stiff wing model. Static aeroelastic scaling is conducted on the stiff wing configuration to develop the wind tunnel model, but additional tailoring is also conducted such that the wind tunnel model achieves a 10% wing tip deflection at the wind tunnel test condition. An aeroelastic scaling procedure and analysis is conducted, and a sub-scale flexible wind tunnel model based on the full-scale's undeformed jig-shape is developed. Optimization of the flexible wind tunnel model's undeflected twist along the span, or pre-twist or wash-out, is then conducted for the design test condition. The resulting wind tunnel model is an aeroelastic model designed for the wind tunnel test condition.
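The core of static aeroelastic scaling is matching a nondimensional stiffness between full scale and model; a one-line sketch of that scaling law (the symbols and the uniform-beam simplification are assumptions for illustration, not the paper's full procedure, which additionally tailors stiffness for the 10% tip-deflection target):

```python
def scaled_bending_stiffness(ei_full, length_ratio, q_ratio):
    """Static aeroelastic similarity keeps EI / (q * L**4) equal
    between full scale and wind tunnel model, so the required model
    bending stiffness follows from the geometric scale ratio and the
    dynamic-pressure ratio."""
    return ei_full * q_ratio * length_ratio ** 4
```

The fourth-power dependence on the length ratio is why sub-scale flexible-wing models end up orders of magnitude softer than the full-scale article.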

  6. A New Method of Building Scale-Model Houses

    Science.gov (United States)

    Richard N. Malcolm

    1978-01-01

Scale-model houses are used to display new architectural and construction designs. Some scale-model houses will not withstand the abuse of shipping and handling. This report describes how to build a solid-core model house which is rigid, lightweight, and sturdy.

  7. Autonomous Operation of Hybrid Microgrid with AC and DC Sub-Grids

    DEFF Research Database (Denmark)

    Loh, Poh Chiang; Blaabjerg, Frede

    2011-01-01

This paper investigates the active and reactive power sharing of an autonomous hybrid microgrid. Unlike existing microgrids which are purely ac, the hybrid microgrid studied here comprises dc and ac sub-grids, interconnected by power electronic interfaces. The main challenge here is to manage… the power flow among all the sources distributed throughout the two types of sub-grids, which certainly is tougher than previous efforts developed for only either ac or dc microgrids. This wider scope of control has not yet been investigated, and would certainly rely on the coordinated operation of dc… sources, ac sources and interlinking converters. Suitable control and normalization schemes are therefore developed for controlling them, with results presented for showing the overall performance of the hybrid microgrid…

  8. Gauge coupling unification in a classically scale invariant model

    Energy Technology Data Exchange (ETDEWEB)

    Haba, Naoyuki; Ishida, Hiroyuki [Graduate School of Science and Engineering, Shimane University,Matsue 690-8504 (Japan); Takahashi, Ryo [Graduate School of Science, Tohoku University,Sendai, 980-8578 (Japan); Yamaguchi, Yuya [Graduate School of Science and Engineering, Shimane University,Matsue 690-8504 (Japan); Department of Physics, Faculty of Science, Hokkaido University,Sapporo 060-0810 (Japan)

    2016-02-08

There are a lot of works within the class of classically scale invariant models, which are motivated by solving the gauge hierarchy problem. In this context, the Higgs mass vanishes at the UV scale due to the classical scale invariance, and is generated via the Coleman-Weinberg mechanism. Since the mass generation should occur not so far from the electroweak scale, we extend the standard model only around the TeV scale. We construct a model which can achieve the gauge coupling unification at the UV scale. In the same way, the model can realize the vacuum stability, smallness of active neutrino masses, baryon asymmetry of the universe, and dark matter relic abundance. The model predicts the existence of vector-like fermions charged under SU(3)_C with masses lower than 1 TeV, and the SM singlet Majorana dark matter with mass lower than 2.6 TeV.
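The one-loop running that underlies such unification checks can be sketched numerically; the beta coefficients below are the plain Standard Model ones (with GUT normalization for U(1)_Y), and the extra TeV-scale vector-like fermions of the abstract would shift them:

```python
import math

# One-loop Standard Model beta coefficients for U(1)_Y, SU(2)_L, SU(3)_C
# (GUT normalization: g1^2 = (5/3) g'^2)
B = (41.0 / 10.0, -19.0 / 6.0, -7.0)

def alpha_inv(alpha_inv_mz, b, mu, mz=91.1876):
    """One-loop running of an inverse gauge coupling:
    1/alpha(mu) = 1/alpha(MZ) - b/(2*pi) * ln(mu/MZ).
    Adding vector-like fermions at the TeV scale increases b above
    their mass threshold, which is how such models steer the three
    couplings toward a common UV value (illustrative sketch)."""
    return alpha_inv_mz - b / (2.0 * math.pi) * math.log(mu / mz)
```

For SU(3)_C the negative coefficient makes 1/alpha grow with energy (asymptotic freedom); unification amounts to the three straight lines in 1/alpha versus ln(mu) meeting at one point.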

  9. Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem

    Science.gov (United States)

    Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.; Morton, Don; Hinzman, Larry; Nijssen, Bart

    2017-09-01

    Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties - including the distribution of permafrost and vegetation cover heterogeneity - are better represented in the sub-grid parameterization method than the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub-basins, compared to
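A toy version of a topography-based sub-grid scheme of the kind described can illustrate the idea; the aspect/slope rule and threshold below are invented for illustration and are not the authors' actual parameterization:

```python
def permafrost_fraction(aspects_deg, slopes_deg, slope_threshold=5.0):
    """Toy sub-grid classification: within one coarse cell, fine-scale
    pixels that are north-facing (aspect within 90 degrees of north)
    and steeper than a threshold are tagged as permafrost-underlain;
    the cell's permafrost cover is the fraction of such pixels."""
    n = 0
    for aspect, slope in zip(aspects_deg, slopes_deg):
        north_facing = aspect <= 90.0 or aspect >= 270.0
        if north_facing and slope > slope_threshold:
            n += 1
    return n / len(aspects_deg)
```

The resulting fraction could then set cell-level soil thermal and hydraulic parameters, which is the spirit of deriving VIC parameters from watershed topography rather than from coarse-resolution datasets.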

  10. Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem

    Directory of Open Access Journals (Sweden)

    A. Endalamaw

    2017-09-01

Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub

  11. Holography for chiral scale-invariant models

    NARCIS (Netherlands)

    Caldeira Costa, R.N.; Taylor, M.

    2011-01-01

    Deformation of any d-dimensional conformal field theory by a constant null source for a vector operator of dimension (d + z -1) is exactly marginal with respect to anisotropic scale invariance, of dynamical exponent z. The holographic duals to such deformations are AdS plane waves, with z=2 being

  12. Holography for chiral scale-invariant models

    NARCIS (Netherlands)

    Caldeira Costa, R.N.; Taylor, M.

    2010-01-01

    Deformation of any d-dimensional conformal field theory by a constant null source for a vector operator of dimension (d + z -1) is exactly marginal with respect to anisotropic scale invariance, of dynamical exponent z. The holographic duals to such deformations are AdS plane waves, with z=2 being

  13. The sense and non-sense of plot-scale, catchment-scale, continental-scale and global-scale hydrological modelling

    Science.gov (United States)

    Bronstert, Axel; Heistermann, Maik; Francke, Till

    2017-04-01

Hydrological models aim at quantifying the hydrological cycle and its constituent processes for particular conditions, sites or periods in time. Such models have been developed for a large range of spatial and temporal scales. One must be aware that the appropriate scale to apply depends on the overall question under study. Therefore, it is not advisable to give a generally applicable guideline on what is "the best" scale for a model. This statement is even more relevant for coupled hydrological, ecological and atmospheric models. Although a general statement about the most appropriate modelling scale is not recommendable, it is worth having a look at the advantages and shortcomings of micro-, meso- and macro-scale approaches. Such an appraisal is of increasing importance, since (very) large / global scale approaches and models are increasingly in operation, and therefore the question arises how far and for what purposes such methods may yield scientifically sound results. It is important to understand that in most hydrological (and ecological, atmospheric and other) studies process scale, measurement scale, and modelling scale differ from each other. In some cases, the differences between these scales can be of different orders of magnitude (example: runoff formation, measurement and modelling). These differences are a major source of uncertainty in description and modelling of hydrological, ecological and atmospheric processes. Let us now summarize our viewpoint of the strengths (+) and weaknesses (-) of hydrological models of different scales: Micro scale (e.g. extent of a plot, field or hillslope): (+) enables process research, based on controlled experiments (e.g. infiltration; root water uptake; chemical matter transport); (+) data of state conditions (e.g. soil parameter, vegetation properties) and boundary fluxes (e.g. 
rainfall or evapotranspiration) are directly measurable and reproducible; (+) equations based on

  14. The Goddard multi-scale modeling system with unified physics

    Directory of Open Access Journals (Sweden)

    W.-K. Tao

    2009-08-01

Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (CRM), (2) a regional-scale model, the NASA unified Weather Research and Forecasting Model (WRF), and (3) a coupled CRM-GCM (general circulation model), known as the Goddard Multi-scale Modeling Framework or MMF. The same cloud-microphysical processes, long- and short-wave radiative transfer and land-surface processes are applied in all of the models to study explicit cloud-radiation and cloud-surface interactive processes in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator for comparison and validation with NASA high-resolution satellite data.

    This paper reviews the development and presents some applications of the multi-scale modeling system, including results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols. In addition, use of the multi-satellite simulator to identify the strengths and weaknesses of the model-simulated precipitation processes will be discussed as well as future model developments and applications.

  15. The Goddard multi-scale modeling system with unified physics

    Directory of Open Access Journals (Sweden)

    W.-K. Tao

    2009-08-01

Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (CRM), (2) a regional-scale model, the NASA unified Weather Research and Forecasting Model (WRF), and (3) a coupled CRM-GCM (general circulation model), known as the Goddard Multi-scale Modeling Framework or MMF. The same cloud-microphysical processes, long- and short-wave radiative transfer and land-surface processes are applied in all of the models to study explicit cloud-radiation and cloud-surface interactive processes in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator for comparison and validation with NASA high-resolution satellite data. This paper reviews the development and presents some applications of the multi-scale modeling system, including results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols. In addition, use of the multi-satellite simulator to identify the strengths and weaknesses of the model-simulated precipitation processes will be discussed as well as future model developments and applications.

  16. Microphysics in Multi-scale Modeling System with Unified Physics

    Science.gov (United States)

    Tao, Wei-Kuo

    2012-01-01

Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (the NASA unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance for the multi-scale modeling system will be presented.

  17. On nano-scale hydrodynamic lubrication models

    Science.gov (United States)

    Buscaglia, Gustavo; Ciuperca, Ionel S.; Jai, Mohammed

    2005-06-01

    Current magnetic head sliders and other micromechanisms involve gas lubrication flows with gap thicknesses in the nanometer range and stepped shapes fabricated by lithographic methods. In mechanical simulations, rarefaction effects are accounted for by models that propose Poiseuille flow factors which exhibit singularities as the pressure tends to zero or +∞. In this Note we show that these models are indeed mathematically well-posed, even in the case of discontinuous gap thickness functions. Our results cover popular models that were not previously analyzed in the literature, such as the Fukui-Kaneko model and the second-order model, among others. To cite this article: G. Buscaglia et al., C. R. Mecanique 333 (2005).

  18. Optimal Scaling of Interaction Effects in Generalized Linear Models

    Science.gov (United States)

    van Rosmalen, Joost; Koning, Alex J.; Groenen, Patrick J. F.

    2009-01-01

    Multiplicative interaction models, such as Goodman's (1981) RC(M) association models, can be a useful tool for analyzing the content of interaction effects. However, most models for interaction effects are suitable only for data sets with two or three predictor variables. Here, we discuss an optimal scaling model for analyzing the content of…

  19. Multiple-scale turbulence model in confined swirling jet predictions

    Science.gov (United States)

    Chen, C. P.

    1986-01-01

    A recently developed multiple-scale turbulence model which attempts to circumvent the deficiencies of earlier models by taking nonequilibrium spectral energy transfer into account is presented. The model's validity is tested by predicting the confined swirling coaxial jet flow in a sudden expansion. It is noted that, in order to account for anisotropic turbulence, a full Reynolds stress model is required.

  20. Continental scale modelling of geomagnetically induced currents

    OpenAIRE

    Sakharov Yaroslav; Prácser Ernö; Ádám Antal; Wik Magnus; Pirjola Risto; Viljanen Ari; Katkalov Juri

    2012-01-01

    The EURISGIC project (European Risk from Geomagnetically Induced Currents) aims at deriving statistics of geomagnetically induced currents (GIC) in the European high-voltage power grids. Such a continent-wide system of more than 1500 substations and transmission lines requires updates of the previous modelling, which has dealt with national grids in fairly small geographic areas. We present here how GIC modelling can be conveniently performed on a spherical surface with minor changes in the p...

  1. Multi-scale modeling for sustainable chemical production.

    Science.gov (United States)

    Zhuang, Kai; Bakshi, Bhavik R; Herrgård, Markus J

    2013-09-01

    With recent advances in metabolic engineering, it is now technically possible to produce a wide portfolio of existing petrochemical products from biomass feedstock. In recent years, a number of modeling approaches have been developed to support the engineering and decision-making processes associated with the development and implementation of a sustainable biochemical industry. The temporal and spatial scales of modeling approaches for sustainable chemical production vary greatly, ranging from metabolic models that aid the design of fermentative microbial strains to material and monetary flow models that explore the ecological impacts of all economic activities. Research efforts that attempt to connect the models at different scales have been limited. Here, we review a number of existing modeling approaches and their applications at the scales of metabolism, bioreactor, overall process, chemical industry, economy, and ecosystem. In addition, we propose a multi-scale approach for integrating the existing models into a cohesive framework. The major benefit of this proposed framework is that the design and decision-making at each scale can be informed, guided, and constrained by simulations and predictions at every other scale. In addition, the development of this multi-scale framework would promote cohesive collaborations across multiple traditionally disconnected modeling disciplines to achieve sustainable chemical production. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Scaling of musculoskeletal models from static and dynamic trials

    DEFF Research Database (Denmark)

    Lund, Morten Enemark; Andersen, Michael Skipper; de Zee, Mark

    2015-01-01

    Subject-specific scaling of cadaver-based musculoskeletal models is important for accurate musculoskeletal analysis within multiple areas such as ergonomics, orthopaedics and occupational health. We present two procedures to scale ‘generic’ musculoskeletal models to match segment lengths and joint...... parameters to a specific subject and compare the results to a simpler approach based on linear, segment-wise scaling. By incorporating data from functional and standing reference trials, the new scaling approaches reduce the model sensitivity to assumed model marker positions. For validation, we applied all....... The presented methods solve part of this problem and rely less on manual identification of anatomical landmarks in the model. The work represents a step towards a more consistent methodology in musculoskeletal modelling....

  3. Multi-scale modeling for sustainable chemical production

    DEFF Research Database (Denmark)

    Zhuang, Kai; Bakshi, Bhavik R.; Herrgard, Markus

    2013-01-01

    associated with the development and implementation of a sustainable biochemical industry. The temporal and spatial scales of modeling approaches for sustainable chemical production vary greatly, ranging from metabolic models that aid the design of fermentative microbial strains to material and monetary flow......, chemical industry, economy, and ecosystem. In addition, we propose a multi-scale approach for integrating the existing models into a cohesive framework. The major benefit of this proposed framework is that the design and decision-making at each scale can be informed, guided, and constrained by simulations...... and predictions at every other scale. In addition, the development of this multi-scale framework would promote cohesive collaborations across multiple traditionally disconnected modeling disciplines to achieve sustainable chemical production....

  4. Multi-scale Modelling of Segmentation

    DEFF Research Database (Denmark)

    Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri

    2016-01-01

    While listening to music, people often unwittingly break down musical pieces into constituent chunks such as verses and choruses. Music segmentation studies have suggested that some consensus regarding boundary perception exists, despite individual differences. However, neither the effects...... of experimental task (i.e., real-time vs. annotated segmentation), nor of musicianship on boundary perception are clear. Our study assesses musicianship effects and differences between segmentation tasks. We conducted a real-time experiment to collect segmentations by musicians and nonmusicians from nine musical...... indication density, although this might be contingent on stimuli and other factors. In line with other studies, no musicianship effects were found: our results showed high agreement between groups and similar inter-subject correlations. Also consistent with previous work, time scales between one and two...

  5. Continental scale modelling of geomagnetically induced currents

    Directory of Open Access Journals (Sweden)

    Sakharov Yaroslav

    2012-09-01

    Full Text Available The EURISGIC project (European Risk from Geomagnetically Induced Currents) aims at deriving statistics of geomagnetically induced currents (GIC) in the European high-voltage power grids. Such a continent-wide system of more than 1500 substations and transmission lines requires updates of the previous modelling, which has dealt with national grids in fairly small geographic areas. We present here how GIC modelling can be conveniently performed on a spherical surface with minor changes in the previous technique. We derive the exact formulation to calculate geovoltages on the surface of a sphere and show its practical approximation in a fast vectorised form. Using the model of the old Finnish power grid and a much larger prototype model of European high-voltage power grids, we validate the new technique by comparing it to the old one. We also compare model results to measured data in the following cases: geoelectric field at the Nagycenk observatory, Hungary; GIC at a Russian transformer; GIC along the Finnish natural gas pipeline. In all cases, the new method works reasonably well.
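    The geovoltage discussed above is the line integral of the horizontal geoelectric field along a conductor path on the spherical Earth. As a toy sketch only (a uniform field and a simple polyline; the EURISGIC formulation handles spatially varying fields and a full network model):

```python
import math

R_EARTH = 6371e3  # mean Earth radius in metres

def geovoltage(path, E_north, E_east):
    """Geovoltage (V) of a uniform horizontal geoelectric field (V/m)
    integrated along a polyline of (lat, lon) waypoints in degrees on a
    spherical Earth: dV = E_n * R * dphi + E_e * R * cos(phi) * dlambda.
    Toy sketch; real GIC modelling uses spatially varying fields."""
    V = 0.0
    for (la1, lo1), (la2, lo2) in zip(path, path[1:]):
        phi_mid = math.radians(0.5 * (la1 + la2))
        dN = R_EARTH * math.radians(la2 - la1)                      # northward metres
        dE = R_EARTH * math.cos(phi_mid) * math.radians(lo2 - lo1)  # eastward metres
        V += E_north * dN + E_east * dE
    return V

# a 1 V/km northward field along one degree of meridian (~111 km)
V = geovoltage([(60.0, 25.0), (61.0, 25.0)], E_north=1e-3, E_east=0.0)
```

    Summing such segment voltages around a network, together with line resistances, is what yields the GIC distribution in a grid model.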

  6. Embedded turbulence model in numerical methods for hyperbolic conservation laws (Special Issue: Very large eddy simulation, edited by Dimitris Drikakis; Volume 39, Issue 9, Pages 763-864, 30 July 2002)

    Science.gov (United States)

    Drikakis, D.

    2002-07-01

    The paper describes the use of numerical methods for hyperbolic conservation laws as an embedded turbulence modelling approach. Different Godunov-type schemes are utilized in computations of Burgers' turbulence and a two-dimensional mixing layer. The schemes include a total variation diminishing, characteristic-based scheme which is developed in this paper using the flux limiter approach. The embedded turbulence modelling property of the above methods is demonstrated through coarsely resolved large eddy simulations with and without subgrid scale models.
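    In such "implicit LES" approaches, the limiter itself supplies the subgrid dissipation. A minimal sketch of a limiter-based scheme for the inviscid Burgers equation, using a MUSCL/minmod reconstruction with a Rusanov flux (illustrative choices, not the paper's characteristic-based scheme):

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: zero at extrema, the smaller slope elsewhere."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def burgers_step(u, dx, dt):
    """One Euler stage of a MUSCL/minmod finite-volume update for
    u_t + (u^2/2)_x = 0 with periodic boundaries."""
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slopes
    uL = u + 0.5 * s                                    # left state at i+1/2
    uR = np.roll(u, -1) - 0.5 * np.roll(s, -1)          # right state at i+1/2
    a = np.maximum(np.abs(uL), np.abs(uR))
    F = 0.5 * (0.5 * uL**2 + 0.5 * uR**2) - 0.5 * a * (uR - uL)  # Rusanov flux
    return u - dt / dx * (F - np.roll(F, 1))

# a steepening sine wave forms a shock without spurious oscillations
N = 200
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
u = np.sin(x)
dx = x[1] - x[0]
dt = 0.4 * dx
for _ in range(400):
    u_star = burgers_step(u, dx, dt)          # two-stage SSP-RK2 keeps it TVD
    u = 0.5 * (u + burgers_step(u_star, dx, dt))
```

    The nonlinear dissipation the limiter activates near the shock is exactly the mechanism exploited when such schemes are run as coarsely resolved LES without an explicit subgrid model.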

  7. Dynamically Scaled Model Experiment of a Mooring Cable

    Directory of Open Access Journals (Sweden)

    Lars Bergdahl

    2016-01-01

    Full Text Available The dynamic response of mooring cables for marine structures is scale-dependent, and perfect dynamic similitude between full-scale prototypes and small-scale physical model tests is difficult to achieve. The best possible scaling is here sought by means of a specific set of dimensionless parameters, and the model accuracy is also evaluated by two alternative sets of dimensionless parameters. A special feature of the presented experiment is that a chain was scaled to have correct propagation celerity for longitudinal elastic waves, thus providing perfect geometrical and dynamic scaling in vacuum, which is unique. The scaling error due to incorrect Reynolds number seemed to be of minor importance. The 33 m experimental chain could then be considered a scaled 76 mm stud chain with the length 1240 m, i.e., at the length scale of 1:37.6. Due to the correct elastic scale, the physical model was able to reproduce the effect of snatch loads giving rise to tensional shock waves propagating along the cable. The results from the experiment were used to validate the newly developed cable-dynamics code, MooDy, which utilises a discontinuous Galerkin FEM formulation. The validation of MooDy proved to be successful for the presented experiments. The experimental data is made available here for validation of other numerical codes by publishing digitised time series of two of the experiments.
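    Under Froude similitude, the geometric scale quoted above (1:37.6) fixes the remaining scale ratios. A small sketch of that arithmetic (the factor names are our gloss; the paper's specific dimensionless parameter sets are not reproduced here):

```python
def froude_scale_factors(lam):
    """Prototype-to-model ratios under Froude similitude for a geometric
    scale lam = L_prototype / L_model (same fluid and gravity assumed).
    Elastic similarity additionally requires the axial stiffness EA to
    scale as lam**3, which is what a correct longitudinal wave celerity
    in the model chain achieves."""
    return {
        "length": lam,
        "time": lam ** 0.5,      # t_p / t_m
        "velocity": lam ** 0.5,  # since v = L / t
        "mass": lam ** 3,
        "force": lam ** 3,       # gravity-dominated loads
    }

f = froude_scale_factors(37.6)
model_chain_length = 1240.0 / f["length"]  # full-scale 1240 m chain -> ~33 m
```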

  8. Vertical Velocities in Cumulus Convection: Implications for Climate and Prospects for Realistic Simulation at Cloud Scale

    Science.gov (United States)

    Donner, Leo

    2014-05-01

    Cumulus mass fluxes are essential controls on the interactions between cumulus convection and large-scale flows. Cumulus parameterizations have generally been built around them, and these parameterizations are basic components of climate models. Several important questions in climate science depend also on cumulus vertical velocities. Interactions between aerosols and convection comprise a prominent example, and scale-aware cumulus parameterizations that require explicit information about cumulus areas are another. Basic progress on these problems requires realistic characterization of cumulus vertical velocities from observations and models. Recent deployments of dual-Doppler radars are providing unprecedented observations, which can be compared against cloud-resolving models (CRMs). The CRMs can subsequently be analyzed to develop and evaluate parameterizations of vertical velocities in climate models. Vertical velocities from several cloud models will be compared against observations in this presentation. CRM vertical velocities will be found to depend strongly on model resolution and treatment of sub-grid turbulence and microphysics. Although many current state-of-science CRMs do not simulate vertical velocities well, recent experiments with these models suggest that with appropriate treatments of sub-grid turbulence and microphysics robustly realistic modeling of cumulus vertical velocities is possible.

  9. Flavor Gauge Models Below the Fermi Scale

    Energy Technology Data Exchange (ETDEWEB)

    Babu, K. S. [Oklahoma State U.; Friedland, A. [SLAC; Machado, P. A.N. [Madrid, IFT; Mocioiu, I. [Penn State U.

    2017-05-04

    The mass and weak interaction eigenstates for the quarks of the third generation are very well aligned, an empirical fact for which the Standard Model offers no explanation. We explore the possibility that this alignment is due to an additional gauge symmetry in the third generation. Specifically, we construct and analyze an explicit, renormalizable model with a gauge boson, $X$, corresponding to the $B-L$ symmetry of the third family. Having a relatively light (in the MeV to multi-GeV range), flavor-nonuniversal gauge boson results in a variety of constraints from different sources. By systematically analyzing 20 different constraints, we identify the most sensitive probes: kaon, $D^+$ and $\Upsilon$ decays, $D^0-\bar{D}^0$ mixing, atomic parity violation, and neutrino scattering and oscillations. For the new gauge coupling $g_X$ in the range $(10^{-2} - 10^{-4})$ the model is shown to be consistent with the data. Possible ways of testing the model in $b$ physics, top and $Z$ decays, direct collider production and neutrino oscillation experiments, where one can observe nonstandard matter effects, are outlined. The choice of leptons to carry the new force is ambiguous, resulting in additional phenomenological implications, such as non-universality in semileptonic bottom decays. The proposed framework provides interesting connections between neutrino oscillations, flavor and collider physics.

  10. Towards Cloud-Resolving European-Scale Climate Simulations using a fully GPU-enabled Prototype of the COSMO Regional Model

    Science.gov (United States)

    Leutwyler, David; Fuhrer, Oliver; Cumming, Benjamin; Lapillonne, Xavier; Gysi, Tobias; Lüthi, Daniel; Osuna, Carlos; Schär, Christoph

    2014-05-01

    The representation of moist convection is a major shortcoming of current global and regional climate models. State-of-the-art global models usually operate at grid spacings of 10-300 km, and therefore cannot fully resolve the relevant upscale and downscale energy cascades. Parametrization of the relevant sub-grid scale processes is therefore required. Several studies have shown that this approach entails major uncertainties for precipitation processes, which raises concerns about the model's ability to represent precipitation statistics and associated feedback processes, as well as their sensitivities to large-scale conditions. Further refining the model resolution to the kilometer scale allows representing these processes much closer to first principles and thus should yield an improved representation of the water cycle, including the drivers of extreme events. Although cloud-resolving simulations are very useful tools for climate simulations and numerical weather prediction, their high horizontal resolution, and consequently the small time steps needed, challenge current supercomputers to model large domains and long time scales. The recent innovations in the domain of hybrid supercomputers have led to mixed node designs with a conventional CPU and an accelerator such as a graphics processing unit (GPU). GPUs relax the necessity for cache coherency and complex memory hierarchies, but have a larger system memory-bandwidth. This is highly beneficial for low compute intensity codes such as atmospheric stencil-based models. However, to efficiently exploit these hybrid architectures, climate models need to be ported and/or redesigned. Within the framework of the Swiss High Performance High Productivity Computing initiative (HP2C) a project to port the COSMO model to hybrid architectures has recently come to an end. The product of these efforts is a version of COSMO with improved performance on traditional x86-based clusters as well as on hybrid architectures with GPUs.

  11. MOUNTAIN-SCALE COUPLED PROCESSES (TH/THC/THM)MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Y.S. Wu

    2005-08-24

    This report documents the development and validation of the mountain-scale thermal-hydrologic (TH), thermal-hydrologic-chemical (THC), and thermal-hydrologic-mechanical (THM) models. These models provide technical support for screening of features, events, and processes (FEPs) related to the effects of coupled TH/THC/THM processes on mountain-scale unsaturated zone (UZ) and saturated zone (SZ) flow at Yucca Mountain, Nevada (BSC 2005 [DIRS 174842], Section 2.1.1.1). The purpose and validation criteria for these models are specified in ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Drift-Scale Abstraction) Model Report Integration'' (BSC 2005 [DIRS 174842]). Model results are used to support exclusion of certain FEPs from the total system performance assessment for the license application (TSPA-LA) model on the basis of low consequence, consistent with the requirements of 10 CFR 63.342 [DIRS 173273]. Outputs from this report are not direct feeds to the TSPA-LA. All the FEPs related to the effects of coupled TH/THC/THM processes on mountain-scale UZ and SZ flow are discussed in Sections 6 and 7 of this report. The mountain-scale coupled TH/THC/THM processes models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The mountain-scale TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH model captures mountain-scale three-dimensional flow effects, including lateral diversion and mountain-scale flow patterns. The mountain-scale THC model evaluates TH effects on

  12. Sensitivities in global scale modeling of isoprene

    Directory of Open Access Journals (Sweden)

    R. von Kuhlmann

    2004-01-01

    Full Text Available A sensitivity study of the treatment of isoprene and related parameters in 3D atmospheric models was conducted using the global model of tropospheric chemistry MATCH-MPIC. A total of twelve sensitivity scenarios which can be grouped into four thematic categories were performed. These four categories consist of simulations with different chemical mechanisms, different assumptions concerning the deposition characteristics of intermediate products, assumptions concerning the nitrates from the oxidation of isoprene and variations of the source strengths. The largest differences in ozone compared to the reference simulation occurred when a different isoprene oxidation scheme was used (up to 30-60%, or about 10 nmol/mol). The largest differences in the abundance of peroxyacetylnitrate (PAN) were found when the isoprene emission strength was reduced by 50% and in tests with increased or decreased efficiency of the deposition of intermediates. The deposition assumptions were also found to have a significant effect on the upper tropospheric HOx production. Different implicit assumptions about the loss of intermediate products were identified as a major reason for the deviations among the tested isoprene oxidation schemes. The total tropospheric burden of O3 calculated in the sensitivity runs is increased compared to the background methane chemistry by 26±9 Tg(O3), from 273 to an average over the sensitivity runs of 299 Tg(O3). Thus, there is a spread of ±35% in the overall effect of isoprene in the model among the tested scenarios. This range of uncertainty and the much larger local deviations found in the test runs suggest that the treatment of isoprene in global models can only be seen as a first order estimate at present, and points towards specific processes in need of focused future work.

  13. Modeling Human Behavior at a Large Scale

    Science.gov (United States)

    2012-01-01

    impacts its recognition performance for both activities. The example we just gave illustrates one type of freeing false positives: the hallucinated freeings... [Researchers in machine] vision have worked on the problem of recognizing events in videos of sporting events, such as impressive recent work on learning models of baseball plays... data can only be disambiguated by considering arbitrarily long temporal sequences. In general, however, both our work and that in machine vision

  14. Anomalous scaling in an age-dependent branching model

    OpenAIRE

    Keller-Schmidt, Stephanie; Tugrul, Murat; Eguíluz, Víctor M.; Hernández-García, Emilio; Klemm, Konstantin

    2010-01-01

    We introduce a one-parametric family of tree growth models, in which branching probabilities decrease with branch age $\\tau$ as $\\tau^{-\\alpha}$. Depending on the exponent $\\alpha$, the scaling of tree depth with tree size $n$ displays a transition between the logarithmic scaling of random trees and an algebraic growth. At the transition ($\\alpha=1$) tree depth grows as $(\\log n)^2$. This anomalous scaling is in good agreement with the trend observed in evolution of biological species, thus p...
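    A minimal simulation of such an age-dependent growth rule can be sketched as follows. The concrete splitting rule (choose a leaf with weight proportional to (age + 1)^(-alpha) and split it) is one plausible reading of the abstract, not necessarily the authors' exact definition; for alpha = 0 it reduces to the uniform random tree with logarithmically growing depth:

```python
import random

def grow_tree(n_leaves, alpha, seed=1):
    """Grow a binary tree by repeatedly splitting a leaf chosen with
    probability proportional to (age + 1)**(-alpha), where age is the
    number of steps since the leaf appeared. Returns the leaf depths.
    (This splitting rule is an illustrative reading of the model.)"""
    rng = random.Random(seed)
    leaves = [(0, 0)]            # (depth, birth step)
    step = 0
    while len(leaves) < n_leaves:
        step += 1
        weights = [(step - birth + 1) ** (-alpha) for _, birth in leaves]
        i = rng.choices(range(len(leaves)), weights=weights)[0]
        depth, _ = leaves.pop(i)
        leaves += [(depth + 1, step), (depth + 1, step)]
    return [depth for depth, _ in leaves]

# alpha = 0: the uniform random tree, whose mean depth grows ~ log(n)
depths = grow_tree(512, alpha=0.0)
mean_depth = sum(depths) / len(depths)
```

    Sweeping alpha and fitting mean depth against n would reproduce the logarithmic-to-algebraic transition the abstract describes.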

  15. Simultaneous nested modeling from the synoptic scale to the LES scale for wind energy applications

    DEFF Research Database (Denmark)

    Liu, Yubao; Warner, Tom; Liu, Yuewei

    2011-01-01

    This paper describes an advanced multi-scale weather modeling system, WRF–RTFDDA–LES, designed to simulate synoptic scale (~2000 km) to small- and micro-scale (~100 m) circulations of real weather in wind farms on simultaneous nested grids. This modeling system is built upon the National Center...... grids and seamlessly providing realistic mesoscale weather forcing to drive a large eddy simulation (LES) model within the WRF framework. The WRF based RTFDDA LES modeling capability is referred to as WRF–RTFDDA–LES. In this study, WRF–RTFDDA–LES is employed to simulate real weather in a major wind farm...... located in northern Colorado with six nested domains. The grid sizes of the nested domains are 30, 10, 3.3, 1.1, 0.370 and 0.123 km, respectively. The model results are compared with wind–farm anemometer measurements and are found to capture many intra-farm wind features and microscale flows. Additional...

  16. Fractal Modeling and Scaling in Natural Systems - Editorial

    Science.gov (United States)

    The special issue of Ecological complexity journal on Fractal Modeling and Scaling in Natural Systems contains representative examples of the status and evolution of data-driven research into fractals and scaling in complex natural systems. The editorial discusses contributions to understanding rela...

  17. Multi-scale modeling strategies in materials science—The ...

    Indian Academy of Sciences (India)

    ... time scales involved in determining macroscopic properties has been attempted by several workers with varying degrees of success. This paper will review the recently developed quasicontinuum method which is an attempt to bridge the length scales in a single seamless model with the aid of the finite element method.

  18. Scaling Properties of a Hybrid Fermi-Ulam-Bouncer Model

    Directory of Open Access Journals (Sweden)

    Diego F. M. Oliveira

    2009-01-01

    under the framework of scaling description. The model is described by using a two-dimensional nonlinear area preserving mapping. Our results show that the chaotic regime below the lowest energy invariant spanning curve is scaling invariant and the obtained critical exponents are used to find a universal plot for the second momenta of the average velocity.
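    For concreteness, the simplified Fermi-Ulam map, a standard two-dimensional area-preserving mapping used in this kind of scaling study, can be iterated as below to obtain the average velocity whose second momentum the abstract refers to. The map and parameters are illustrative; the paper's hybrid model additionally includes a bouncer (gravitational) component:

```python
import math
import random

def average_velocity(eps, v0, n_iter, seed=2):
    """Iterate the simplified Fermi-Ulam map
        V_{n+1}   = |V_n + eps * sin(phi_n)|
        phi_{n+1} = (phi_n + 2 / V_{n+1}) mod 2*pi
    from a random initial phase and return the time-averaged velocity.
    (Illustrative stand-in for the paper's hybrid mapping.)"""
    rng = random.Random(seed)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    v = v0
    total = 0.0
    for _ in range(n_iter):
        v = max(abs(v + eps * math.sin(phi)), 1e-12)  # guard against v = 0
        phi = (phi + 2.0 / v) % (2.0 * math.pi)
        total += v
    return total / n_iter

# low initial velocity inside the chaotic sea: the average velocity
# grows well above v0 but stays below the first invariant spanning curve
vbar = average_velocity(eps=1e-3, v0=1e-3, n_iter=20000)
```

    Repeating this for several eps and initial velocities, and collapsing the resulting curves, is the scaling analysis the abstract summarises.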

  19. Ares I Scale Model Acoustic Test Lift-Off Acoustics

    Science.gov (United States)

    Counter, Douglas D.; Houston, Janie D.

    2011-01-01

    The lift-off acoustic (LOA) environment is an important design factor for any launch vehicle. For the Ares I vehicle, the LOA environments were derived by scaling flight data from other launch vehicles. The Ares I LOA predicted environments are compared to the Ares I Scale Model Acoustic Test (ASMAT) preliminary results.

  20. Advances in Modelling of Large Scale Coastal Evolution

    NARCIS (Netherlands)

    Stive, M.J.F.; De Vriend, H.J.

    1995-01-01

    The attention for climate change impact on the world's coastlines has established large scale coastal evolution as a topic of wide interest. Some more recent advances in this field, focusing on the potential of mathematical models for the prediction of large scale coastal evolution, are discussed.

  1. Visualization and modeling of smoke transport over landscape scales

    Science.gov (United States)

    Glenn P. Forney; William Mell

    2007-01-01

    Computational tools have been developed at the National Institute of Standards and Technology (NIST) for modeling fire spread and smoke transport. These tools have been adapted to address fire scenarios that occur in the wildland urban interface (WUI) over kilometer-scale distances. These models include the smoke plume transport model ALOFT (A Large Open Fire plume...

  2. Atomic scale simulations for improved CRUD and fuel performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Anders David Ragnar [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cooper, Michael William Donald [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-06

    A more mechanistic description of fuel performance codes can be achieved by deriving models and parameters from atomistic scale simulations rather than fitting models empirically to experimental data. The same argument applies to modeling deposition of corrosion products on fuel rods (CRUD). Here are some results from publications in 2016 carried out using the CASL allocation at LANL.

  3. Meso-scale modeling of a forested landscape

    DEFF Research Database (Denmark)

    Dellwik, Ebba; Arnqvist, Johan; Bergström, Hans

    2014-01-01

    Meso-scale models are increasingly used for estimating wind resources for wind turbine siting. In this study, we investigate how the Weather Research and Forecasting (WRF) model performs using standard model settings in two different planetary boundary layer schemes for a forested landscape and how...

  4. An Ensemble Three-Dimensional Constrained Variational Analysis Method to Derive Large-Scale Forcing Data for Single-Column Models

    Science.gov (United States)

    Tang, Shuaiqi

    Atmospheric vertical velocities and advective tendencies are essential as large-scale forcing data to drive single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA) along with other improvements to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and SCM simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection has relatively large sensitivities to the precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study on March 3rd, 2000 at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite clouds and intuitive structure of the mid-latitude cyclone. We also evaluate the Q1 and Q2 in analysis/reanalysis, finding that the regional analysis/reanalysis all tend to underestimate the sub-grid scale upward transport of moist static energy in the lower troposphere. 
With the uncertainties from large-scale forcing data and observation specified, we compare SCM results and observations and find that models have large biases on cloud properties which could not be fully explained by the uncertainty from the large-scale forcing
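    The core of such a constrained variational analysis is a minimum-variance adjustment of background data subject to budget constraints. A toy analogue with a closed-form Lagrange-multiplier solution (the three-variable state and the single "budget" constraint are invented for illustration; the real 1DCVA/3DCVA constrains column-integrated mass, heat, moisture and momentum budgets):

```python
import numpy as np

def constrained_adjust(b, S, A, c):
    """Minimum-variance adjustment of a background state vector b with
    error covariance S, subject to linear constraints A x = c:
        minimize (x - b)^T S^{-1} (x - b)  s.t.  A x = c,
    whose Lagrange-multiplier solution is
        x = b + S A^T (A S A^T)^{-1} (c - A b)."""
    SAT = S @ A.T
    return b + SAT @ np.linalg.solve(A @ SAT, c - A @ b)

# nudge a 3-variable state so its total matches an observed budget,
# adjusting the least-trusted variable (largest variance) the most
b = np.array([1.0, 2.0, 3.0])
S = np.diag([1.0, 1.0, 4.0])
A = np.ones((1, 3))          # constraint: the sum of the state
c = np.array([7.0])
x = constrained_adjust(b, S, A, c)
```

    Varying b (the background) versus S (the error covariance) in such a toy also makes plausible the abstract's finding that the result is more sensitive to the background data.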

  5. Genome-scale modeling for metabolic engineering.

    Science.gov (United States)

    Simeonidis, Evangelos; Price, Nathan D

    2015-03-01

    We focus on the application of constraint-based methodologies and, more specifically, flux balance analysis in the field of metabolic engineering, and enumerate recent developments and successes of the field. We also review computational frameworks that have been developed with the express purpose of automatically selecting optimal gene deletions for achieving improved production of a chemical of interest. The application of flux balance analysis methods in rational metabolic engineering requires a metabolic network reconstruction and a corresponding in silico metabolic model for the microorganism in question. For this reason, we additionally present a brief overview of automated reconstruction techniques. Finally, we emphasize the importance of integrating metabolic networks with regulatory information, an area which we expect will become increasingly important for metabolic engineering, and present recent developments in the field of metabolic and regulatory integration.
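    Flux balance analysis itself reduces to a linear program: maximize an objective (typically biomass flux) over fluxes v subject to the steady-state constraint S v = 0 and capacity bounds. A toy three-reaction sketch (the network and bounds are invented for illustration; real applications use genome-scale reconstructions):

```python
import numpy as np
from scipy.optimize import linprog

def fba(S, lb, ub, objective):
    """Toy flux balance analysis: maximize objective . v subject to the
    steady-state constraint S v = 0 and bounds lb <= v <= ub."""
    res = linprog(c=-np.asarray(objective, dtype=float),  # linprog minimizes
                  A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=list(zip(lb, ub)))
    if not res.success:
        raise RuntimeError(res.message)
    return res.x, -res.fun

# invented 3-reaction network: uptake -> M, M -> biomass, M -> byproduct
# columns: v_uptake, v_biomass, v_byproduct; one row for metabolite M
S = np.array([[1.0, -1.0, -1.0]])
lb = [0.0, 0.0, 0.0]
ub = [10.0, np.inf, np.inf]
v, growth = fba(S, lb, ub, objective=[0.0, 1.0, 0.0])  # maximize biomass flux
```

    Gene-deletion design tools layer a combinatorial search for knockouts on top of exactly this inner LP.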

  6. Modelling of evapotranspiration at field and landscape scales. Abstract

    DEFF Research Database (Denmark)

    Overgaard, Jesper; Butts, M.B.; Rosbjerg, Dan

    2002-01-01

    The overall aim of this project is to couple a non-hydrostatic atmospheric model (ARPS) to an integrated hydrological model (MIKE SHE) to investigate atmospheric and hydrological feedbacks at different scales. To ensure a consistent coupling a new land-surface component based on a modified...... Shuttleworth-Wallace scheme was implemented in MIKE SHE. To validate the new land-surface component at different scales, the hydrological model was applied to an intensively monitored 10 km2 agricultural area in Denmark with a resolution of 40 meters. The model is forced with half-hourly meteorological...... observations from a nearby weather station. Detailed land-use and soil maps were used to set up the model. Leaf area index was derived from NDVI (Normalized Difference Vegetation Index) images. To validate the model at field scale the simulated evapotranspiration rates were compared to eddy...

  7. Multi-Scale Computational Models for Electrical Brain Stimulation

    Science.gov (United States)

    Seo, Hyeon; Jun, Sung C.

    2017-01-01

    Electrical brain stimulation (EBS) is an appealing method to treat neurological disorders. To achieve optimal stimulation effects and a better understanding of the underlying brain mechanisms, neuroscientists have pursued computational modeling studies for over a decade. Recently, multi-scale models that combine a volume conductor head model with multi-compartmental models of cortical neurons have been developed to predict stimulation effects at the macroscopic and microscopic levels more precisely. As the need for better computational models continues to increase, we review here recent multi-scale modeling studies, focusing on approaches that couple a simplified or high-resolution volume conductor head model with multi-compartmental models of cortical neurons and construct realistic fiber models using diffusion tensor imaging (DTI). Further implications for achieving better precision in estimating cellular responses are discussed. PMID:29123476

  8. Predictions of a model of weak scale from dynamical breaking of scale invariance

    Directory of Open Access Journals (Sweden)

    Giulio Maria Pelaggi

    2015-04-01

    Full Text Available We consider a model where the weak and the DM scale arise at one loop from the Coleman–Weinberg mechanism. We perform a precision computation of the model predictions for the production cross section of a new Higgs-like scalar and for the direct-detection cross section of the DM particle candidate.

  9. Measurement of returns to scale in radial DEA models

    Science.gov (United States)

    Krivonozhko, V. E.; Lychev, A. V.; Førsund, F. R.

    2017-01-01

    A general approach is proposed for measuring returns to scale and scale elasticity at projection points in radial data envelopment analysis (DEA) models. In the first stage, a relative interior point belonging to the optimal face is found using a specially developed method. In previous work it was proved that any relative interior point of a face has the same returns to scale as any other interior point of that face. In the second stage, we propose to determine the returns to scale at the relative interior point found in the first stage.
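
As an illustration of the radial envelopment program underlying such analyses, the sketch below solves the input-oriented CCR model with SciPy for a small hypothetical dataset. The data, and the reading of returns to scale from the sum of the intensity variables λ, are illustrative simplifications, not the authors' two-stage method:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 4 DMUs, 1 input, 1 output (rows = DMUs).
X = np.array([[2.0], [4.0], [6.0], [8.0]])   # inputs
Y = np.array([[2.0], [5.0], [6.0], [6.5]])   # outputs

def ccr_efficiency(o):
    """Input-oriented CCR efficiency of DMU o; variables are [theta, lambda_1..n]."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                 # minimize theta
    A_in = np.c_[-X[o].reshape(m, 1), X.T]      # sum_j lambda_j x_j <= theta * x_o
    A_out = np.c_[np.zeros((s, 1)), -Y.T]       # sum_j lambda_j y_j >= y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    theta, lam = res.x[0], res.x[1:]
    return theta, lam.sum()   # sum of lambdas hints at local returns to scale

theta, lam_sum = ccr_efficiency(2)   # evaluate the third DMU
print(theta, lam_sum)
```

For this toy data the third DMU is projected onto the frontier spanned by the second DMU; λ summing above 1 suggests it operates in the decreasing-returns region.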

  10. Phenomenological Aspects of No-Scale Inflation Models

    CERN Document Server

    Ellis, John; Nanopoulos, Dimitri V; Olive, Keith A

    2015-01-01

    We discuss phenomenological aspects of no-scale supergravity inflationary models motivated by compactified string models, in which the inflaton may be identified either as a Kähler modulus or an untwisted matter field, focusing on models that make predictions for the scalar spectral index $n_s$ and the tensor-to-scalar ratio $r$ that are similar to the Starobinsky model. We discuss possible patterns of soft supersymmetry breaking, exhibiting examples of the pure no-scale type $m_0 = B_0 = A_0 = 0$, of the CMSSM type with universal $A_0$ and $m_0$…

  11. [Modeling continuous scaling of NDVI based on fractal theory].

    Science.gov (United States)

    Luan, Hai-Jun; Tian, Qing-Jiu; Yu, Tao; Hu, Xin-Li; Huang, Yan; Du, Ling-Tong; Zhao, Li-Min; Wei, Xi; Han, Jie; Zhang, Zhou-Wei; Li, Shao-Peng

    2013-07-01

    Scale effect is one of the important scientific problems of remote sensing. The scale effect of quantitative remote sensing can be used to study the relationship between retrievals from images of different resolutions, and its study has become an effective way to confront challenges such as the validation of quantitative remote sensing products. Traditional up-scaling methods cannot describe the scale-dependent features of retrievals across an entire series of scales; moreover, they face serious parameter-correction issues (e.g., geometric correction, spectral correction) because imaging parameters vary between sensors. Using a single-sensor image, fractal methodology was applied to address these problems. Taking NDVI (computed from land-surface radiance) as an example and based on an Enhanced Thematic Mapper Plus (ETM+) image, a scheme was proposed to model the continuous scaling of retrievals. The experimental results indicated that: (1) for NDVI, a scale effect exists and can be described by a fractal model of continuous scaling; (2) the fractal method is suitable for the validation of NDVI. These results demonstrate that fractal analysis is an effective methodology for studying scaling in quantitative remote sensing.
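
The general idea of modelling continuous scaling can be sketched as follows: compute NDVI at the base resolution, aggregate it to a series of coarser scales by block averaging, and fit a power-law (fractal) model to a scale-dependent statistic in log-log space. The synthetic reflectances and the choice of the block standard deviation as the statistic are illustrative assumptions, not the paper's scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
nir = rng.uniform(0.3, 0.6, (256, 256))   # synthetic NIR reflectance
red = rng.uniform(0.05, 0.2, (256, 256))  # synthetic red reflectance
ndvi = (nir - red) / (nir + red)

def aggregate(img, k):
    """Block-average img to a k-times coarser grid (up-scaling)."""
    h, w = img.shape
    return img[:h - h % k, :w - w % k].reshape(h // k, k, -1, k).mean(axis=(1, 3))

scales = [1, 2, 4, 8, 16, 32]
stat = [aggregate(ndvi, k).std() for k in scales]   # scale-dependent statistic

# Fractal (power-law) model: stat ~ scale**H, estimated by a log-log fit.
H, logc = np.polyfit(np.log(scales), np.log(stat), 1)
print(H)
```

For uncorrelated synthetic reflectances the fitted exponent H is close to -1; real imagery, with spatial correlation, yields a different and diagnostic exponent.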

  12. Ecohydrological modeling for large-scale environmental impact assessment.

    Science.gov (United States)

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach-level accuracy similar to regional-scale models, thereby allowing for impact assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds, and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in development of adaptive neuro-fuzzy inference system (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. FINAL REPORT: Mechanistically-Based Field Scale Models of Uranium Biogeochemistry from Upscaling Pore-Scale Experiments and Models

    Energy Technology Data Exchange (ETDEWEB)

    Wood, Brian D. [Oregon State Univ., Corvallis, OR (United States)

    2013-11-04

    Biogeochemical reactive transport processes in the subsurface environment are important to many contemporary environmental issues of significance to DOE. Quantification of risks and impacts associated with environmental management options, and design of remediation systems where needed, require that we have at our disposal reliable predictive tools (usually in the form of numerical simulation models). However, it is well known that even the most sophisticated reactive transport models available today have poor predictive power, particularly when applied at the field scale. Although the lack of predictive ability is associated in part with our inability to characterize the subsurface and limitations in computational power, significant advances have been made in both of these areas in recent decades and can be expected to continue. In this research, we examined the upscaling (pore to Darcy, and Darcy to field) of the problem of bioremediation via biofilms in porous media. The principal idea was to start with a conceptual description of the bioremediation process at the pore scale, and apply upscaling methods to formally develop the appropriate upscaled model at the so-called Darcy scale. The purpose was to determine (1) what forms the upscaled models would take, and (2) how one might parameterize such upscaled models for applications to bioremediation in the field. We were able to effectively upscale the bioremediation process to explain how the pore-scale phenomena are linked to the field scale. The end product of this research was a set of upscaled models that could be used to help predict field-scale bioremediation. These models are mechanistic, in the sense that they directly incorporate pore-scale information, but are upscaled so that only the essential features of the process are needed to predict the effective parameters that appear in the model. In this way, a direct link between the microscale and the field scale was made, but the upscaling process

  14. Cancer systems biology and modeling: microscopic scale and multiscale approaches.

    Science.gov (United States)

    Masoudi-Nejad, Ali; Bidkhori, Gholamreza; Hosseini Ashtiani, Saman; Najafi, Ali; Bozorgmehr, Joseph H; Wang, Edwin

    2015-02-01

    Cancer has become known as a complex and systematic disease on macroscopic, mesoscopic and microscopic scales. Systems biology employs state-of-the-art computational theories and high-throughput experimental data to model and simulate complex biological processes such as cancer, which involves genetic and epigenetic, as well as intracellular and extracellular, complex interaction networks. In this paper, different systems biology modeling techniques, such as systems of differential equations, stochastic methods, Boolean networks, Petri nets, cellular automata methods and agent-based systems, are concisely discussed. We have compared the mentioned formalisms and tried to address the span of applicability they can bear on emerging cancer modeling and simulation approaches. Different scales of cancer modeling, namely microscopic, mesoscopic and macroscopic, are explained, followed by an illustration of angiogenesis at the microscopic scale of cancer modeling. Then, the modeling of cancer cell proliferation and survival is examined at the microscopic scale, and the modeling of multiscale tumor growth is explained along with its advantages. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Nucleon electric dipole moments in high-scale supersymmetric models

    Energy Technology Data Exchange (ETDEWEB)

    Hisano, Junji [Kobayashi-Maskawa Institute for the Origin of Particles and the Universe (KMI),Nagoya University,Nagoya 464-8602 (Japan); Department of Physics, Nagoya University,Nagoya 464-8602 (Japan); Kavli IPMU (WPI), UTIAS, University of Tokyo,Kashiwa, Chiba 277-8584 (Japan); Kobayashi, Daiki; Kuramoto, Wataru; Kuwahara, Takumi [Department of Physics, Nagoya University,Nagoya 464-8602 (Japan)

    2015-11-12

    The electric dipole moments (EDMs) of the electron and nucleons are promising probes of new physics. In generic high-scale supersymmetric (SUSY) scenarios, such as models based on a mixture of anomaly and gauge mediation, the gluino makes an additional contribution to the nucleon EDMs. In this paper, we studied the effect of the CP-violating gluonic Weinberg operator induced by the gluino chromoelectric dipole moment in high-scale SUSY scenarios, and we evaluated the nucleon and electron EDMs in these scenarios. We found that in generic high-scale SUSY models the nucleon EDMs may receive a sizable contribution from the Weinberg operator. It is therefore important to compare the nucleon EDMs with the electron EDM in order to discriminate among high-scale SUSY models.

  16. Standard model with spontaneously broken quantum scale invariance

    Science.gov (United States)

    Ghilencea, D. M.; Lalak, Z.; Olszewski, P.

    2017-09-01

    We explore the possibility that scale symmetry is a quantum symmetry that is broken only spontaneously, and apply this idea to the standard model. We compute the quantum corrections to the potential of the Higgs field (ϕ) in the classically scale-invariant version of the standard model (m_ϕ = 0 at tree level) extended by the dilaton (σ). The tree-level potential of ϕ and σ, dictated by scale invariance, may contain nonpolynomial effective operators, e.g., ϕ^6/σ^2, ϕ^8/σ^4, ϕ^10/σ^6, etc. The one-loop scalar potential is scale invariant, since the loop calculations manifestly preserve the scale symmetry, with the dimensional regularization subtraction scale μ generated spontaneously by the dilaton vacuum expectation value, μ ∼ ⟨σ⟩. The Callan-Symanzik equation of the potential is verified in the presence of the gauge, Yukawa, and nonpolynomial operators. The couplings of the nonpolynomial operators have nonzero beta functions that we can actually compute from the quantum potential. At the quantum level, the Higgs mass is protected by spontaneously broken scale symmetry, even though the theory is nonrenormalizable. We compare the one-loop potential to its counterpart computed in the "traditional" dimensional regularization scheme that breaks scale symmetry explicitly (μ = constant) in the presence at tree level of the nonpolynomial operators.

  17. Description of Muzzle Blast by Modified Ideal Scaling Models

    Directory of Open Access Journals (Sweden)

    Kevin S. Fansler

    1998-01-01

    Full Text Available Gun blast data from a large variety of weapons are scaled and presented for both the instantaneous energy release and the constant energy deposition rate models. For both ideal explosion models, similar amounts of data scatter occur for the peak overpressure but the instantaneous energy release model correlated the impulse data significantly better, particularly for the region in front of the gun. Two parameters that characterize gun blast are used in conjunction with the ideal scaling models to improve the data correlation. The gun-emptying parameter works particularly well with the instantaneous energy release model to improve data correlation. In particular, the impulse, especially in the forward direction of the gun, is correlated significantly better using the instantaneous energy release model coupled with the use of the gun-emptying parameter. The use of the Mach disc location parameter improves the correlation only marginally. A predictive model is obtained from the modified instantaneous energy release correlation.

  18. On Scaling Modes and Balancing Stochastic, Discretization, and Modeling Error

    Science.gov (United States)

    Brown, J.

    2015-12-01

    We consider accuracy-cost tradeoffs and the problem of finding Pareto optimal configurations for stochastic forward and inverse problems. As the target accuracy is changed, we should use different physical models, stochastic models, discretizations, and solution algorithms. In this spectrum, we see different scientifically-relevant scaling modes, thus different opportunities and limitations on parallel computers and emerging architectures.

  19. A Scale Model of Cation Exchange for Classroom Demonstration.

    Science.gov (United States)

    Guertal, E. A.; Hattey, J. A.

    1996-01-01

    Describes a project that developed a scale model of cation exchange that can be used for a classroom demonstration. The model uses kaolinite clay, nails, plywood, and foam balls to enable students to gain a better understanding of the exchange complex of soil clays. (DDR)

  20. Modeling nano-scale grain growth of intermetallics

    Indian Academy of Sciences (India)

    Administrator

    The Monte Carlo simulation is utilized to model the nano-scale grain growth of two nanocrystalline materials, Pd81Zr19 and RuAl. In this regard, the relationship between real time and the time unit of the simulation, i.e. the Monte Carlo step (MCS), is determined. The results of modeling show that with increasing time ...
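
A minimal Potts-model Monte Carlo grain-growth sketch of the kind referred to here is shown below. The lattice size, temperature, and the like-neighbour statistic used to track coarsening are illustrative choices, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
Q, N = 16, 64                        # number of grain orientations, lattice size
spins = rng.integers(0, Q, (N, N))   # random initial microstructure

def unlike_neighbours(s, i, j, q):
    """Grain-boundary energy of site (i, j) with orientation q (periodic)."""
    nb = (s[(i + 1) % N, j], s[(i - 1) % N, j],
          s[i, (j + 1) % N], s[i, (j - 1) % N])
    return sum(q != b for b in nb)

def like_fraction(s):
    """Fraction of like-oriented neighbour bonds -- a proxy for grain size."""
    return float(np.mean(s == np.roll(s, 1, axis=0)))

like0 = like_fraction(spins)
kT = 0.3
moves = ((0, 1), (0, -1), (1, 0), (-1, 0))
for _ in range(50 * N * N):          # 50 Monte Carlo steps (MCS)
    i, j = rng.integers(0, N, 2)
    di, dj = moves[rng.integers(0, 4)]
    q_new = spins[(i + di) % N, (j + dj) % N]   # propose a neighbour's orientation
    dE = (unlike_neighbours(spins, i, j, q_new)
          - unlike_neighbours(spins, i, j, spins[i, j]))
    if dE <= 0 or rng.random() < np.exp(-dE / kT):
        spins[i, j] = q_new

like1 = like_fraction(spins)
print(like0, like1)   # like-bond fraction rises as grains coarsen
```

Calibrating one MCS against real time, as the abstract describes, amounts to matching the simulated coarsening curve to experimental grain-size data.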

  1. Multi-scale modeling strategies in materials science—The ...

    Indian Academy of Sciences (India)

    Unknown

    modeling strategies that bridge the length-scales. The quasicontinuum method pivots on a strategy which attempts to take advantage of both conventional atomistic simulations and continuum mechanics to develop a seamless methodology for the modeling of defects such as dislocations, grain boundaries and cracks, and ...

  2. Role of scaling in the statistical modelling of finance

    Indian Academy of Sciences (India)

    Modelling the evolution of a financial index as a stochastic process is a problem awaiting a full, satisfactory solution since it was first formulated by Bachelier in 1900. Here it is shown that the scaling with time of the return probability density function sampled from the historical series suggests a successful model.

  4. Transdisciplinary application of the cross-scale resilience model

    Science.gov (United States)

    Sundstrom, Shana M.; Angeler, David G.; Garmestani, Ahjond S.; Garcia, Jorge H.; Allen, Craig R.

    2014-01-01

    The cross-scale resilience model was developed in ecology to explain the emergence of resilience from the distribution of ecological functions within and across scales, and as a tool to assess resilience. We propose that the model and the underlying discontinuity hypothesis are relevant to other complex adaptive systems, and can be used to identify and track changes in system parameters related to resilience. We explain the theory behind the cross-scale resilience model, review the cases where it has been applied to non-ecological systems, and discuss some examples of social-ecological, archaeological/anthropological, and economic systems where a cross-scale resilience analysis could add a quantitative dimension to our current understanding of system dynamics and resilience. We argue that the scaling and diversity parameters suitable for a resilience analysis of ecological systems are appropriate for a broad suite of systems where non-normative quantitative assessments of resilience are desired. Our planet is currently characterized by fast environmental and social change, and the cross-scale resilience model has the potential to quantify resilience across many types of complex adaptive systems.

  5. Ionocovalency and Applications 1. Ionocovalency Model and Orbital Hybrid Scales

    Directory of Open Access Journals (Sweden)

    Yonghe Zhang

    2010-11-01

    Full Text Available Ionocovalency (IC, a quantitative dual nature of the atom, is defined and correlated with quantum-mechanical potential to describe quantitatively the dual properties of the bond. An orbital hybrid IC model scale, IC, and an IC electronegativity scale, XIC, are proposed, wherein the ionicity and the covalent radius are determined by spectroscopy. Being composed of the ionic function I and the covalent function C, the model describes quantitatively the dual properties of bond strength, charge density and ionic potential. Based on the atomic electron configuration and various quantum-mechanically built-up dual parameters, the model forms a dual method of multiple-functional prediction, which has far more versatile applications than traditional electronegativity scales and molecular properties. Hydrogen has unconventional values of IC and XIC, lower than those of boron. The IC model agrees fairly well with data on bond properties and satisfactorily explains chemical observations of elements throughout the Periodic Table.

  6. Traffic assignment models in large-scale applications

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær

    focuses on large-scale applications and contributes with methods to actualise the true potential of disaggregate models. To achieve this target, contributions are given to several components of traffic assignment modelling, by (i) enabling the utilisation of the increasingly available data sources … on individual behaviour in the model specification, (ii) proposing a method to use disaggregate Revealed Preference (RP) data to estimate utility functions and provide evidence on the value of congestion and the value of reliability, (iii) providing a method to account for individual mis… is essential in the development and validation of realistic models for large-scale applications. Nowadays, modern technology facilitates easy access to RP data and allows large-scale surveys. The resulting datasets are, however, usually very large and hence data processing is necessary to extract the pieces

  7. Observed Scaling in Clouds and Precipitation and Scale Incognizance in Regional to Global Atmospheric Models

    Energy Technology Data Exchange (ETDEWEB)

    O'Brien, Travis A.; Li, Fuyu; Collins, William D.; Rauscher, Sara; Ringler, Todd; Taylor, Mark; Hagos, Samson M.; Leung, Lai-Yung R.

    2013-12-01

    We use observations of robust scaling behavior in clouds and precipitation to derive constraints on how partitioning of precipitation should change with model resolution. Our analysis indicates that 90-99% of stratiform precipitation should occur in clouds that are resolvable by contemporary climate models (e.g., with 200 km or finer grid spacing). Furthermore, this resolved fraction of stratiform precipitation should increase sharply with resolution, such that effectively all stratiform precipitation should be resolvable above scales of ~50 km. We show that the Community Atmosphere Model (CAM) and the Weather Research and Forecasting (WRF) model also exhibit the robust cloud and precipitation scaling behavior that is present in observations, yet the resolved fraction of stratiform precipitation actually decreases with increasing model resolution. A suite of experiments with multiple dynamical cores provides strong evidence that this 'scale-incognizant' behavior originates in one of the CAM4 parameterizations. An additional set of sensitivity experiments rules out both convection parameterizations, and by a process of elimination these results implicate the stratiform cloud and precipitation parameterization. Tests with the CAM5 physics package show improvements in the resolution-dependence of resolved cloud fraction and resolved stratiform precipitation fraction.
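
The claimed resolution dependence can be illustrated with a toy calculation: assuming (purely for illustration, not from the paper) a power-law cloud-system size distribution and precipitation proportional to system area, the fraction of stratiform precipitation that falls in resolvable systems grows rapidly as grid spacing shrinks:

```python
import numpy as np

def resolved_fraction(dx_km, alpha=1.0, r_max=2000.0):
    """Fraction of precipitation from cloud systems wider than ~2*dx_km.
    Assumes a power-law size density n(r) ~ r**-alpha and precipitation per
    system ~ r**2; both are illustrative assumptions, not the paper's fit."""
    r = np.linspace(1.0, r_max, 200001)          # system sizes in km
    w = r ** 2 * r ** (-alpha)                   # precipitation-weighted density
    return float(w[r >= 2.0 * dx_km].sum() / w.sum())

for dx in (200, 100, 50, 25):                    # model grid spacings in km
    print(dx, round(resolved_fraction(dx), 3))
```

With these assumed parameters the resolved fraction exceeds 0.9 at 200 km spacing and approaches 1 near 50 km, qualitatively matching the scaling argument in the abstract; the exact numbers depend entirely on the assumed distribution.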

  8. Multi-scale atmospheric composition modelling for the Balkan region

    Science.gov (United States)

    Ganev, Kostadin; Syrakov, Dimiter; Todorova, Angelina; Prodanova, Maria; Atanasov, Emanouil; Gurov, Todor; Karaivanova, Aneta; Miloshev, Nikolai; Gadzhev, Georgi; Jordanov, Georgi

    2010-05-01

    Overview: The present work describes progress in developing an integrated, multi-scale, Balkan-region-oriented modelling system. The main activities and achievements at this stage of the work are: creating, enriching and updating the necessary physiographic, emission and meteorological databases; installation of the models for GRID application, model tuning and validation; and extensive numerical simulations on regional (Balkan Peninsula) and local (Bulgaria) scales. Objectives: The present work describes the progress of an application developed by the Environmental VO of the FP7 project SEE-GRID eInfrastructure for regional eScience. The application aims at developing an integrated, multi-scale, Balkan-region-oriented modelling system, which would be able to: -Study the atmospheric pollution transport and transformation processes (accounting also for heterogeneous chemistry and the importance of aerosols for air quality and climate) from urban to local to regional (Balkan) scales; -Track and characterize the main pathways and processes that lead to atmospheric composition formation at different scales; -Account for the biosphere-atmosphere exchange as a source and receptor of atmospheric chemical species; -Provide high-quality, scientifically robust assessments of air quality and its origin, thus facilitating the formulation of pollution mitigation strategies at national and Balkan level. The application is based on the US EPA Models-3 system. Description of work: The main activities and achievements at this preparatory stage of the work are: 1) creating, enriching and updating the necessary physiographic, emission and meteorological databases; 2) installation of the models for GRID application, model tuning and validation, numerical experiments and interpretation of the results. The US EPA Models-3 system is installed; software for emission speciation and for introducing emission temporal profiles is created; a procedure for calculating biogenic VOC

  9. Drift-Scale Coupled Processes (DST and THC Seepage) Models

    Energy Technology Data Exchange (ETDEWEB)

    P. Dixon

    2004-04-05

    The purpose of this Model Report (REV02) is to document the unsaturated zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrological-chemical (THC) processes on UZ flow and transport. This Model Report has been developed in accordance with the "Technical Work Plan for: Performance Assessment Unsaturated Zone" (Bechtel SAIC Company, LLC (BSC) 2002 [160819]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this Model Report in Section 1.12, Work Package AUZM08, "Coupled Effects on Flow and Seepage". The plan for validation of the models documented in this Model Report is given in Attachment I, Model Validation Plans, Section I-3-4, of the TWP. Except for variations in acceptance criteria (Section 4.2), there were no deviations from this TWP. This report was developed in accordance with AP-SIII.10Q, "Models". This Model Report documents the THC Seepage Model and the Drift Scale Test (DST) THC Model. The THC Seepage Model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. The DST THC Model is a drift-scale process model relying on the same conceptual model and much of the same input data (i.e., physical, hydrological, thermodynamic, and kinetic) as the THC Seepage Model. The DST THC Model is the primary method for validating the THC Seepage Model. The DST THC Model compares predicted water and gas compositions, as well as mineral alteration patterns, with observed data from the DST. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal-loading conditions, and predict the evolution of mineral alteration and fluid chemistry around potential waste emplacement drifts. The

  10. Empirical spatial econometric modelling of small scale neighbourhood

    Science.gov (United States)

    Gerkman, Linda

    2012-07-01

    The aim of this paper is to model small-scale neighbourhood effects in a house price model by implementing the newest methodology in spatial econometrics. A common problem when modelling house prices is that in practice it is seldom possible to obtain all the desired variables; variables capturing small-scale neighbourhood conditions are especially hard to find. If important explanatory variables are missing from the model, and the omitted variables are spatially autocorrelated and correlated with the explanatory variables included in the model, it can be shown that a spatial Durbin model is motivated. In the empirical application on new house price data from Helsinki, Finland, we find motivation for a spatial Durbin model, estimate the model, and interpret the estimates of the summary measures of impacts. The analysis shows that the model structure makes it possible to capture small-scale neighbourhood effects when we know that they exist but lack proper variables to measure them.
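
The spatial Durbin model includes both a spatial lag of the dependent variable and spatial lags of the regressors, y = ρWy + Xβ + WXθ + ε, with reduced form y = (I − ρW)⁻¹(Xβ + WXθ + ε). The simulation sketch below uses hypothetical weights and coefficients (none of the values come from the paper; real applications build W from contiguity or nearest neighbours and estimate the parameters by maximum likelihood):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200

# Hypothetical row-standardised spatial weight matrix: each observation
# is linked to 5 randomly chosen "neighbours" (illustration only).
W = np.zeros((n, n))
for i in range(n):
    nb = rng.choice(np.delete(np.arange(n), i), size=5, replace=False)
    W[i, nb] = 1.0
W /= W.sum(axis=1, keepdims=True)

X = rng.normal(size=(n, 2))          # observed attributes (e.g. size, age)
beta = np.array([1.5, -0.8])         # direct effects (hypothetical values)
theta = np.array([0.6, 0.2])         # effects of neighbours' attributes
rho = 0.4                            # spatial autoregressive parameter
eps = rng.normal(scale=0.1, size=n)

# Spatial Durbin model: y = rho*W y + X beta + W X theta + eps,
# generated via its reduced form y = (I - rho*W)^{-1}(X beta + W X theta + eps).
y = np.linalg.solve(np.eye(n) - rho * W, X @ beta + W @ X @ theta + eps)
print(y[:3])
```

Because W is row-standardised and |ρ| < 1, the matrix I − ρW is invertible and the reduced form is well defined.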

  11. Model Scaling of Hydrokinetic Ocean Renewable Energy Systems

    Science.gov (United States)

    von Ellenrieder, Karl; Valentine, William

    2013-11-01

    Numerical simulations are performed to validate a non-dimensional dynamic scaling procedure that can be applied to subsurface and deeply moored systems, such as hydrokinetic ocean renewable energy devices. The prototype systems are moored in water 400 m deep and include: subsurface spherical buoys moored in a shear current and excited by waves; an ocean current turbine excited by waves; and a deeply submerged spherical buoy in a shear current excited by strong current fluctuations. The corresponding model systems, which are scaled based on relative water depths of 10 m and 40 m, are also studied. For each case examined, the response of the model system closely matches the scaled response of the corresponding full-sized prototype system. The results suggest that laboratory-scale testing of complete ocean current renewable energy systems moored in a current is possible. This work was supported by the U.S. Southeast National Marine Renewable Energy Center (SNMREC).
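
The paper's non-dimensional dynamic scaling procedure is its own, but the standard baseline for free-surface and moored systems is Froude similitude. Under that assumption (and equal fluid density in model and prototype), the 400 m prototype and 10 m model depths from the abstract imply the following ratios; this is a sketch of the textbook relations, not the authors' procedure:

```python
# Froude-number similitude: Fr = U / sqrt(g*L) is equal in model and prototype,
# so ratios of time, velocity, and force follow from the geometric scale alone.
lam = 400.0 / 10.0        # geometric scale ratio (prototype depth / model depth)
time_ratio = lam ** 0.5   # times scale as sqrt(lam)
velocity_ratio = lam ** 0.5
force_ratio = lam ** 3    # forces scale as lam^3 for equal fluid density
print(lam, time_ratio, velocity_ratio, force_ratio)
```

For the 40 m relative-depth case in the abstract, lam = 10 and the same relations apply.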

  12. Intermediate time scaling in classical continuous-spin models

    CERN Document Server

    Oh, S K; Chung, J S

    1999-01-01

    The time-dependent total spin correlation functions of the two- and three-dimensional classical XY models seem to have a very narrow first dynamic scaling interval, after which a much broader, anomalous second dynamic scaling interval appears. In this paper, this intriguing feature found in our previous work is re-examined. By introducing a phenomenological characteristic time for this intermediate time interval, the second dynamic scaling behavior can be explained. Moreover, the dynamic critical exponent found from this novel characteristic time is identical to that found from the usual dynamic scaling theory developed in the wave-vector and frequency domain. For continuous-spin models, in which the spin variable related to a long-range order parameter is not a constant of motion, our method yields the dynamic critical exponent with less computational effort.

  13. Scale genesis and gravitational wave in a classically scale invariant extension of the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Kubo, Jisuke [Institute for Theoretical Physics, Kanazawa University,Kanazawa 920-1192 (Japan); Yamada, Masatoshi [Department of Physics, Kyoto University,Kyoto 606-8502 (Japan); Institut für Theoretische Physik, Universität Heidelberg,Philosophenweg 16, 69120 Heidelberg (Germany)

    2016-12-01

    We assume that the origin of the electroweak (EW) scale is a gauge-invariant scalar-bilinear condensation in a strongly interacting non-abelian gauge sector, which is connected to the standard model via a Higgs portal coupling. The dynamical scale genesis appears as a phase transition at finite temperature, and it can produce a gravitational wave (GW) background in the early Universe. We find that the critical temperature of the scale phase transition lies above that of the EW phase transition and below a few hundred GeV, and that the transition is strongly first-order. We calculate the spectrum of the GW background and find that the scale phase transition is strong enough for the GW background to be observed by DECIGO.

  14. Large scale stochastic spatio-temporal modelling with PCRaster

    Science.gov (United States)

    Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.

    2013-04-01

    software from the eScience Technology Platform (eSTeP), developed at the Netherlands eScience Center. This will allow us to scale up to hundreds of machines, with thousands of compute cores. A key requirement is not to change the user experience of the software. PCRaster operations and the use of the Python framework classes should work in a similar manner on machines ranging from a laptop to a supercomputer. This enables a seamless transfer of models from small machines, where model development is done, to large machines used for large-scale model runs. Domain specialists from a large range of disciplines, including hydrology, ecology, sedimentology, and land use change studies, currently use the PCRaster Python software within research projects. Applications include global scale hydrological modelling and error propagation in large-scale land use change models. The software runs on MS Windows, Linux operating systems, and OS X.

  15. Deconfined Quantum Criticality, Scaling Violations, and Classical Loop Models

    Science.gov (United States)

    Nahum, Adam; Chalker, J. T.; Serna, P.; Ortuño, M.; Somoza, A. M.

    2015-10-01

Numerical studies of the transition between Néel and valence bond solid phases in two-dimensional quantum antiferromagnets give strong evidence for the remarkable scenario of deconfined criticality, but display strong violations of finite-size scaling that are not yet understood. We show how to realize the universal physics of the Néel-valence-bond-solid (VBS) transition in a three-dimensional classical loop model (this model includes the subtle interference effect that suppresses hedgehog defects in the Néel order parameter). We use the loop model for simulations of unprecedentedly large systems (up to linear size L = 512). Our results are compatible with a continuous transition at which both Néel and VBS order parameters are critical, and we do not see conventional signs of first-order behavior. However, we show that the scaling violations are stronger than previously realized and are incompatible with conventional finite-size scaling, even if allowance is made for a weakly or marginally irrelevant scaling variable. In particular, different approaches to determining the anomalous dimensions η_VBS and η_Néel yield very different results. The assumption of conventional finite-size scaling leads to estimates that drift to negative values at large sizes, in violation of the unitarity bounds. In contrast, the decay with distance of critical correlators on scales much smaller than the system size is consistent with large positive anomalous dimensions. Barring an unexpected reversal in behavior at still larger sizes, this implies that the transition, if continuous, must show unconventional finite-size scaling, for example, from an additional dangerously irrelevant scaling variable. Another possibility is an anomalously weak first-order transition. By analyzing the renormalization group flows for the noncompact CP^(n−1) field theory (the n-component Abelian Higgs model) between two and four dimensions, we give the simplest scenario by which an anomalously weak first

  16. Anomalous scaling in an age-dependent branching model.

    Science.gov (United States)

    Keller-Schmidt, Stephanie; Tuğrul, Murat; Eguíluz, Víctor M; Hernández-García, Emilio; Klemm, Konstantin

    2015-02-01

We introduce a one-parameter family of tree growth models in which branching probabilities decrease with branch age τ as τ^(−α). Depending on the exponent α, the scaling of tree depth with tree size n displays a transition between the logarithmic scaling of random trees and algebraic growth. At the transition (α = 1), tree depth grows as (log n)^2. This anomalous scaling is in good agreement with the trend observed in the evolution of biological species, thus providing theoretical support for age-dependent speciation and associating it with the occurrence of a critical point.
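The growth rule can be sketched in a few lines: at each step a leaf is chosen to split with weight τ^(−α), where τ is its age. This is a hedged reimplementation from the abstract's description, with illustrative parameter values:

```python
import random

def grow_tree(n_leaves, alpha, seed=1):
    """Grow a binary tree: each leaf of age tau is chosen to branch
    with weight tau**(-alpha); large alpha favours the youngest
    branches, producing deep, chain-like trees."""
    rng = random.Random(seed)
    leaves = [(1, 0)]                       # each leaf: (age, depth)
    while len(leaves) < n_leaves:
        weights = [age ** (-alpha) for age, _ in leaves]
        i = rng.choices(range(len(leaves)), weights=weights)[0]
        age, depth = leaves.pop(i)
        leaves += [(1, depth + 1), (1, depth + 1)]   # leaf splits in two
        leaves = [(a + 1, d) for a, d in leaves]     # all leaves age by one
    return leaves

for alpha in (0.0, 1.0, 2.0):
    leaves = grow_tree(200, alpha)
    mean_depth = sum(d for _, d in leaves) / len(leaves)
    print(alpha, round(mean_depth, 2))
```

With α = 0 splitting is uniform and depth stays logarithmic; increasing α shifts splits toward fresh branches and deepens the tree, mirroring the transition described above.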

  17. Criticality in the scale invariant standard model (squared)

    Directory of Open Access Journals (Sweden)

    Robert Foot

    2015-07-01

We consider first the standard model Lagrangian with the μ_h^2 Higgs potential term set to zero. We point out that this classically scale invariant theory potentially exhibits radiative electroweak/scale symmetry breaking with a very high vacuum expectation value (VEV) for the Higgs field, 〈ϕ〉 ≈ 10^(17–18) GeV. Furthermore, if such a vacuum were realized, then cancellation of vacuum energy automatically implies that this nontrivial vacuum is degenerate with the trivial unbroken vacuum. Such a theory would therefore be critical, with the Higgs self-coupling and its beta function nearly vanishing at the symmetry-breaking minimum, λ(μ=〈ϕ〉) ≈ β_λ(μ=〈ϕ〉) ≈ 0. A phenomenologically viable model that predicts this criticality property arises if we consider two copies of the standard model Lagrangian, with an exact Z2 symmetry swapping each ordinary particle with a partner. The spontaneously broken vacuum can then arise where one sector gains the high-scale VEV, while the other gains the electroweak-scale VEV. The low-scale VEV is perturbed away from zero due to a Higgs portal coupling, or via the usual small Higgs mass terms μ_h^2, which softly break the scale invariance. In either case, the cancellation of vacuum energy requires M_t = (171.53 ± 0.42) GeV, which is close to its measured value of (173.34 ± 0.76) GeV.

  18. Computational Modelling of Cancer Development and Growth: Modelling at Multiple Scales and Multiscale Modelling.

    Science.gov (United States)

    Szymańska, Zuzanna; Cytowski, Maciej; Mitchell, Elaine; Macnamara, Cicely K; Chaplain, Mark A J

    2017-06-20

In this paper, we present two mathematical models related to different aspects and scales of cancer growth. The first model is a stochastic spatiotemporal model of both a synthetic gene regulatory network (the example of a three-gene repressilator is given) and an actual gene regulatory network, the NF-κB pathway. The second model is a force-based individual-based model of the development of a solid avascular tumour, with specific application to tumour cords, i.e. a mass of cancer cells growing around a central blood vessel. In each case, we compare our computational simulation results with experimental data. In the final discussion section, we outline how to take the work forward through the development of a multiscale model focussed at the cell level. This would incorporate key intracellular signalling pathways associated with cancer within each cell (e.g. p53-Mdm2, NF-κB) and, through the use of high-performance computing, be capable of simulating up to [Formula: see text] cells, i.e. the tissue scale. In this way, mathematical models at multiple scales would be combined to formulate a multiscale computational model.

  19. Automation on the generation of genome-scale metabolic models.

    Science.gov (United States)

    Reyes, R; Gamermann, D; Montagud, A; Fuente, D; Triana, J; Urchueguía, J F; de Córdoba, P Fernández

    2012-12-01

Nowadays, the reconstruction of genome-scale metabolic models is a non-automated and interactive process based on decision making. This lengthy process usually requires a full year of one person's work to satisfactorily collect, analyze, and validate the list of all metabolic reactions present in a specific organism. To write this list, one has to go manually through a huge amount of genomic, metabolomic, and physiological information. Currently, there is no optimal algorithm that allows one to go automatically through all this information and generate the models taking into account the probabilistic criteria of uniqueness and completeness that a biologist would consider. This work presents the automation of a methodology for the reconstruction of genome-scale metabolic models for any organism. The methodology is the automated version of the steps implemented manually for the reconstruction of the genome-scale metabolic model of a photosynthetic organism, Synechocystis sp. PCC6803. The steps for the reconstruction are implemented in a computational platform (COPABI) that generates the models from the probabilistic algorithms that have been developed. To validate the robustness of the developed algorithm, the metabolic models of several organisms generated by the platform have been studied together with published models that have been manually curated. Network properties of the models, such as connectivity and average shortest path length, have been compared and analyzed.
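The network properties mentioned at the end, connectivity (mean degree) and average shortest path length, can be computed with plain breadth-first search. A small sketch on a toy metabolite graph (the metabolite names and links are illustrative, not taken from COPABI):

```python
from collections import deque

# Toy undirected metabolite graph (hypothetical reaction links);
# real genome-scale models contain thousands of such nodes.
edges = [("glc", "g6p"), ("g6p", "f6p"), ("f6p", "fbp"),
         ("fbp", "g3p"), ("g3p", "pyr"), ("g6p", "6pg"), ("6pg", "ru5p")]

graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def shortest_paths(source):
    """Breadth-first search distances from one metabolite."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

nodes = sorted(graph)
mean_degree = sum(len(graph[n]) for n in nodes) / len(nodes)
pair_dists = [d for n in nodes
              for m, d in shortest_paths(n).items() if m != n]
avg_path = sum(pair_dists) / len(pair_dists)
print(round(mean_degree, 2), round(avg_path, 2))
```

Comparing these two statistics between an automatically generated model and a manually curated one is a cheap first sanity check of the reconstruction.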

  20. ScaleNet: A literature-based model of scale insect biology and systematics

    Science.gov (United States)

    Scale insects (Hemiptera: Coccoidea) are small herbivorous insects found in all continents except Antarctica. They are extremely invasive, and many species are serious agricultural pests. They are also emerging models for studies of the evolution of genetic systems, endosymbiosis, and plant-insect i...

  1. From Field- to Landscape-Scale Vadose Zone Processes: Scale Issues, Modeling, and Monitoring

    NARCIS (Netherlands)

    Corwin, D.L.; Hopmans, J.; Rooij, de G.H.

    2006-01-01

    Modeling and monitoring vadose zone processes across multiple scales is a fundamental component of many environmental and natural resource issues including nonpoint source (NPS) pollution, watershed management, and nutrient management, to mention just a few. In this special section in Vadose Zone

  2. Scaling of Core Material in Rubble Mound Breakwater Model Tests

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Liu, Z.; Troch, P.

    1999-01-01

The permeability of the core material influences armour stability, wave run-up and wave overtopping. The main problem related to the scaling of core materials in models is that the hydraulic gradient and the pore velocity vary in space and time, which makes it impossible to arrive at a fully correct scaling. The paper presents an empirical formula for the estimation of the wave-induced pressure gradient in the core, based on measurements in models and a prototype. The formula, together with the Forchheimer equation, can be used for the estimation of pore velocities in cores. The paper proposes that the diameter of the core material in models be chosen in such a way that the Froude scale law holds for a characteristic pore velocity, chosen as the average velocity of the most critical area in the core with respect to porous flow. Finally, the method is demonstrated.
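The proposed procedure can be sketched numerically: solve the Forchheimer equation I = a·u + b·u² for a characteristic pore velocity, then apply the Froude law, under which velocities scale with the square root of the length scale. The coefficients a, b and the gradient below are illustrative assumptions, not values from the paper:

```python
import math

def pore_velocity(gradient, a=1.0, b=10.0):
    """Solve the Forchheimer equation I = a*u + b*u**2 for the pore
    velocity u (coefficients a, b are illustrative; in practice they
    depend on grain size and porosity)."""
    # positive root of b*u**2 + a*u - I = 0
    return (-a + math.sqrt(a * a + 4.0 * b * gradient)) / (2.0 * b)

length_scale = 30.0                          # prototype/model length ratio
froude_velocity_scale = math.sqrt(length_scale)

u_prototype = pore_velocity(0.5)             # characteristic pore velocity
u_model_target = u_prototype / froude_velocity_scale
print(round(u_prototype, 4), round(u_model_target, 4))
```

The model core diameter would then be adjusted until the model's pore velocity matches `u_model_target`.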

  3. Multi Scale Models for Flexure Deformation in Sheet Metal Forming

    Directory of Open Access Journals (Sweden)

    Di Pasquale Edmondo

    2016-01-01

This paper presents the application of multi-scale techniques to the simulation of sheet metal forming using the one-step method. When a blank flows over the die radius, it undergoes a complex cycle of bending and unbending. First, we describe an original model for the prediction of residual plastic deformation and stresses in the blank section. This model, working on a scale about one hundred times smaller than the element size, has been implemented in SIMEX, a one-step sheet metal forming simulation code. The use of this multi-scale modelling technique greatly improves the accuracy of the solution. Finally, we discuss the implications of this analysis for the prediction of springback in metal forming.

  4. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

Long-term energy market models can be used to examine investments in production technologies; however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate …

  5. Evaluation of Icing Scaling on Swept NACA 0012 Airfoil Models

    Science.gov (United States)

    Tsao, Jen-Ching; Lee, Sam

    2012-01-01

Icing scaling tests in the NASA Glenn Icing Research Tunnel (IRT) were performed on swept wing models using existing recommended scaling methods that were originally developed for straight wings. Some needed modifications of the stagnation-point local collection efficiency (i.e., β_0) calculation and the corresponding convective heat transfer coefficient for swept NACA 0012 airfoil models were studied and reported in 2009, and those correlations are used in the current study. The reference tests used a 91.4-cm chord, 152.4-cm span, adjustable-sweep airfoil model of NACA 0012 profile at velocities of 100 and 150 knot and MVD of 44 and 93 μm. The scale-to-reference model size ratio was 1:2.4. All tests were conducted at 0° angle of attack (AoA) and 45° sweep angle. Ice shape comparison results were presented for stagnation-point freezing fractions in the range of 0.4 to 1.0. Preliminary results showed that good scaling was achieved for the conditions tested by using the modified scaling methods developed for swept wings.

  6. Multiple time scales in multi-state models.

    Science.gov (United States)

    Iacobelli, Simona; Carstensen, Bendix

    2013-12-30

In multi-state models, it has been the tradition to model all transition intensities on one time scale, usually the time since entry into the study (the 'clock-forward' approach). The effect of time since an intermediate event has been accommodated either by changing the time scale to time since entry into the new state (the 'clock-back' approach) or by including the time at entry into the new state as a covariate. In this paper, we argue that the choice of time scale for the various transitions in a multi-state model should be treated as an empirical question, as should the question of whether a single time scale is sufficient. We illustrate that these questions are best addressed by using parametric models for the transition rates, as opposed to the traditional Cox-model-based approaches. Specific advantages are that the dependence of failure rates on multiple time scales can be made explicit and described in informative graphical displays. Using a single common time scale for all transitions greatly facilitates computation of the probability of being in a particular state at a given time, because the machinery from the theory of Markov chains can be applied. However, a realistic model for transition rates is preferable, especially when the focus is not on prediction of final outcomes from the start but on the analysis of instantaneous risk or on dynamic prediction. We illustrate the various approaches using a data set from stem cell transplantation in leukemia and provide supplementary online material in R. Copyright © 2013 John Wiley & Sons, Ltd.
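A transition rate depending on two time scales can be written, for example, as a log-linear hazard in both time since entry and time since the intermediate event. A minimal sketch with assumed coefficients (not the paper's fitted model):

```python
import math

def hazard(t, t_event=None, b0=-2.0, b1=0.1, b2=-0.3):
    """Parametric (log-linear) transition rate on two time scales:
    time since study entry t, and duration in an intermediate state
    entered at t_event. Coefficients are illustrative assumptions."""
    rate = b0 + b1 * t                    # effect of time since entry
    if t_event is not None and t >= t_event:
        rate += b2 * (t - t_event)        # effect of time since the event
    return math.exp(rate)

# Hazard 3 years after entry, without and with a relapse at year 1:
print(round(hazard(3.0), 4), round(hazard(3.0, t_event=1.0), 4))
```

Making both scales explicit in one rate function is what allows the choice of time scale to be tested empirically rather than fixed by convention.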

  7. A catchment scale water balance model for FIFE

    Science.gov (United States)

    Famiglietti, J. S.; Wood, E. F.; Sivapalan, M.; Thongs, D. J.

    1992-01-01

    A catchment scale water balance model is presented and used to predict evaporation from the King's Creek catchment at the First ISLSCP Field Experiment site on the Konza Prairie, Kansas. The model incorporates spatial variability in topography, soils, and precipitation to compute the land surface hydrologic fluxes. A network of 20 rain gages was employed to measure rainfall across the catchment in the summer of 1987. These data were spatially interpolated and used to drive the model during storm periods. During interstorm periods the model was driven by the estimated potential evaporation, which was calculated using net radiation data collected at site 2. Model-computed evaporation is compared to that observed, both at site 2 (grid location 1916-BRS) and the catchment scale, for the simulation period from June 1 to October 9, 1987.
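The storm/interstorm logic can be illustrated with a single-bucket water balance, in which evaporation is demand-limited during storms and soil-moisture-limited between them. This is a minimal lumped sketch under assumed parameters, not the paper's spatially distributed model:

```python
def water_balance(rain, pot_evap, capacity=100.0, s0=50.0):
    """Daily bucket water balance (units: mm). Evaporation follows
    potential demand on rain days and is scaled by soil wetness on
    dry days; storage above capacity leaves as saturation-excess runoff."""
    storage, actual_evap, runoff = s0, [], []
    for p, pe in zip(rain, pot_evap):
        e = pe if p > 0 else pe * storage / capacity  # soil-limited drying
        e = min(e, storage + p)                       # cannot exceed supply
        storage = storage + p - e
        q = max(0.0, storage - capacity)              # saturation excess
        storage -= q
        actual_evap.append(e)
        runoff.append(q)
    return storage, actual_evap, runoff

rain = [0, 20, 0, 0, 5, 0]      # driven by gauge data during storms
pe = [4, 2, 4, 4, 3, 4]         # driven by potential evaporation otherwise
s, ae, q = water_balance(rain, pe)
print(round(s, 2), [round(x, 2) for x in ae])
```

Mass is conserved by construction: initial storage plus rainfall equals final storage plus evaporation plus runoff.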

  8. A Network Contention Model for the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2015-01-01

The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim that eliminates the shortcomings of the existing network modeling capabilities. The approach takes a different path to implementing network contention and bandwidth capacity modeling, using a less synchronous but sufficiently accurate model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overhead.
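A contention model of this "sufficiently accurate" flavour can be sketched by letting concurrent flows split a link's bandwidth evenly. The sketch below illustrates the idea only; it is not xSim's actual algorithm:

```python
def transfer_times(messages, capacity=1e9):
    """Crude link-contention model: flows sharing a link at the same
    time split its bandwidth evenly (an illustrative sketch of even
    bandwidth sharing, with capacity in bytes/s)."""
    by_link = {}
    for link, size in messages:           # messages: (link_id, bytes)
        by_link.setdefault(link, []).append(size)
    times = {}
    for link, sizes in by_link.items():
        share = capacity / len(sizes)     # even split among concurrent flows
        for i, size in enumerate(sizes):
            times[(link, i)] = size / share
    return times

# Two flows contend on link 0; one flow has link 1 to itself.
t = transfer_times([(0, 1e6), (0, 1e6), (1, 1e6)])
print({k: round(v * 1e3, 3) for k, v in t.items()})  # milliseconds
```

Contention doubles the transfer time on the shared link relative to the uncontended one, which is the qualitative behavior such a model needs to capture cheaply.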

  9. Wind Farm Wake Models From Full Scale Data

    DEFF Research Database (Denmark)

    Knudsen, Torben; Bak, Thomas

    2012-01-01

This investigation is part of the EU FP7 project “Distributed Control of Large-Scale Offshore Wind Farms”. The overall goal of this project is to develop wind farm controllers giving power set points to individual turbines in the farm in order to minimise mechanical loads and optimise power. One … on real full-scale data. The modelling is based on the so-called effective wind speed. It is shown that there is a wake for a wind direction range of up to 20 degrees. Further, when accounting for the wind direction, it is shown that the two model structures considered can both fit the experimental data…

  10. Scalar dark matter in scale invariant standard model

    Energy Technology Data Exchange (ETDEWEB)

    Ghorbani, Karim [Physics Department, Faculty of Sciences,Arak University, Arak 38156-8-8349 (Iran, Islamic Republic of); Ghorbani, Hossein [Institute for Research in Fundamental Sciences (IPM),School of Particles and Accelerators, P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of)

    2016-04-05

We investigate single- and two-component scalar dark matter scenarios in the classically scale-invariant standard model, which is free of the hierarchy problem in the Higgs sector. We show that despite the very restricted parameter space imposed by the scale invariance symmetry, both single- and two-component scalar dark matter models overcome the direct and indirect constraints provided by the Planck/WMAP observational data and the LUX/Xenon100 experiments. We also comment on the radiative mass corrections of the classically massless scalon, which plays a crucial role in our study.

  11. Use of genome-scale microbial models for metabolic engineering

    DEFF Research Database (Denmark)

    Patil, Kiran Raosaheb; Åkesson, M.; Nielsen, Jens

    2004-01-01

Metabolic engineering serves as an integrated approach to design new cell factories by providing rational design procedures and valuable mathematical and experimental tools. Mathematical models have an important role for phenotypic analysis, but can also be used for the design of optimal metabolic network structures. The major challenge for metabolic engineering in the post-genomic era is to broaden its design methodologies to incorporate genome-scale biological data. Genome-scale stoichiometric models of microorganisms represent a first step in this direction.

  12. Low-scale inflation and supersymmetry breaking in racetrack models

    Science.gov (United States)

    Allahverdi, Rouzbeh; Dutta, Bhaskar; Sinha, Kuver

    2010-04-01

In many moduli stabilization schemes in string theory, the scale of inflation appears to be of the same order as the scale of supersymmetry breaking. For low-scale supersymmetry breaking, therefore, the scale of inflation should also be low, unless this correlation is avoided in specific models. We explore such a low-scale inflationary scenario in a racetrack model with a single modulus in type IIB string theory. Inflation occurs near a point of inflection in the Kähler modulus potential. Obtaining acceptable cosmological density perturbations leads to the introduction of magnetized D7-branes sourcing nonperturbative superpotentials. The gravitino mass, m_{3/2}, is chosen to be around 30 TeV, so that gravitinos that are produced in the inflaton decay do not affect big-bang nucleosynthesis. Supersymmetry is communicated to the visible sector by a mixture of anomaly and modulus mediation. We find that the two sources contribute equally to the gaugino masses, while scalar masses are decided mainly by the anomaly contribution. This happens as a result of the low scale of inflation and can be probed at the LHC.

  13. Modelling turbulent boundary layer flow over fractal-like multiscale terrain using large-eddy simulations and analytical tools

    Science.gov (United States)

    Yang, X. I. A.; Meneveau, C.

    2017-03-01

In recent years, there has been growing interest in large-eddy simulation (LES) modelling of atmospheric boundary layers interacting with arrays of wind turbines on complex terrain. However, such terrain typically contains geometric features and roughness elements reaching down to small scales that cannot be resolved numerically. Thus subgrid-scale models for the unresolved features of the bottom roughness are needed for LES. Such knowledge is also required to model the effects of the ground surface 'underneath' a wind farm. Here we adapt a dynamic approach to determine subgrid-scale roughness parametrizations and apply it to the case of rough surfaces composed of cuboidal elements with broad size distributions, containing many scales. We first investigate the flow response to ground roughness of a few scales. LES with the dynamic roughness model, which accounts for the drag of unresolved roughness, is shown to provide resolution-independent results for the mean velocity distribution. Moreover, we develop an analytical roughness model that accounts for the sheltering effects of large-scale roughness elements on small-scale ones. Taking into account the sheltering effect, constraints from fundamental conservation laws, and assumptions of geometric self-similarity, the analytical roughness model is shown to provide predictions that agree well with roughness parameters determined from LES. This article is part of the themed issue 'Wind energy in complex terrains'.
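The flavour of such a parametrization can be sketched with a Lettau-style roughness estimate in which taller elements exponentially attenuate the drag contribution of shorter ones. The functional form and constants below are assumptions for illustration, not the paper's calibrated model:

```python
import math

def effective_z0(heights, frontal_areas, plan_area, c_shelter=0.5):
    """Illustrative sheltered-drag estimate of an effective roughness
    length for multiscale cuboid roughness: the drag of short elements
    is attenuated by taller neighbours (c_shelter and the exponential
    form are assumptions, not the paper's model)."""
    hmax = max(heights)
    lam_eff = 0.0
    for h, af in zip(heights, frontal_areas):
        shelter = math.exp(-c_shelter * (hmax - h) / hmax)  # short -> sheltered
        lam_eff += shelter * af / plan_area                 # frontal area index
    hbar = sum(heights) / len(heights)
    return 0.5 * hbar * lam_eff     # Lettau-style z0 ~ 0.5 * h * lambda_f

z0 = effective_z0(heights=[10, 5, 2.5],
                  frontal_areas=[100, 50, 25], plan_area=2500)
print(round(z0, 3))
```

Switching sheltering off (c_shelter = 0) recovers the plain Lettau estimate and yields a larger z0, since every element then contributes its full frontal drag.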

  15. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating network structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as a power-law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases methods that include randomization and replication elements at the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and satisfying some attribute at the same time can abolish topological attributes that have been undefined or hidden from
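The Kronecker product modeling mentioned above builds a self-similar graph by repeatedly taking the Kronecker power of a small initiator matrix; R-MAT-style generators sample edges stochastically from the same structure. A deterministic sketch:

```python
def kron(a, b):
    """Kronecker product of two square 0/1 adjacency matrices
    (represented as lists of lists)."""
    m = len(b)
    n = len(a) * m
    return [[a[i // m][j // m] * b[i % m][j % m]
             for j in range(n)] for i in range(n)]

def kronecker_graph(initiator, k):
    """k-fold Kronecker power of the initiator: each level of the
    recursion replicates the initiator's structure at a finer scale."""
    adj = initiator
    for _ in range(k - 1):
        adj = kron(adj, initiator)
    return adj

init = [[1, 1],
        [1, 0]]               # 2-node initiator with 3 possible edges
g = kronecker_graph(init, 3)  # 8x8 adjacency matrix
print(len(g), sum(map(sum, g)))
```

The edge count grows as (edges of the initiator)^k, which is one source of the fine-scale/coarse-scale conflicts the paragraph describes: the same initiator fixes structure at every scale.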

  16. Ares I Scale Model Acoustic Test Overpressure Results

    Science.gov (United States)

    Casiano, M. J.; Alvord, D. A.; McDaniels, D. M.

    2011-01-01

A summary of the overpressure environment from the 5% Ares I Scale Model Acoustic Test (ASMAT) and its implications for the full-scale Ares I are presented in this Technical Memorandum. These include the scaled environment that would be used for assessing the full-scale Ares I configuration, observations, and team recommendations. The ignition transient is first characterized and described, the overpressure suppression system configuration is then examined, and the final environment characteristics are detailed. The recommendation for Ares I is to keep the space shuttle heritage ignition overpressure (IOP) suppression system (below-deck IOP water in the launch mount and mobile launcher, as well as the crest water on the main flame deflector) and the water bags.

  17. Dynamic Modeling, Optimization, and Advanced Control for Large Scale Biorefineries

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail

plant [3]. The goal of the project is to utilize real-time data extracted from the large-scale facility to formulate and validate first-principles dynamic models of the plant. These models are then further exploited to derive model-based tools for process optimization, advanced control and real… with building a plantwide model-based optimization layer, which searches for optimal values of the pretreatment temperature, enzyme dosage in liquefaction, and yeast seed in fermentation such that profit is maximized [7]. When biomass is pretreated, by-products are also created that affect the downstream

  18. Design and Modelling of Small Scale Low Temperature Power Cycles

    DEFF Research Database (Denmark)

    Wronski, Jorrit

The work presented in this report contributes to the state of the art within design and modelling of small-scale low-temperature power cycles. The study is divided into three main parts: (i) fluid property evaluation, (ii) expansion device investigations and (iii) heat exchanger performance. The t… scale plate heat exchanger. Working towards a validation of heat transfer correlations for ORC conditions, a new test rig was designed and built. The test facility can be used to study heat transfer in both ORC and high-temperature heat pump systems.

  19. A scale-free neural network for modelling neurogenesis

    Science.gov (United States)

    Perotti, Juan I.; Tamarit, Francisco A.; Cannas, Sergio A.

    2006-11-01

In this work we introduce a neural network model for associative memory based on a diluted Hopfield model, which grows through a neurogenesis algorithm that guarantees that the final network is a small-world and scale-free one. We also analyze the storage capacity of the network and show that its performance is better than that measured in a randomly diluted network with the same connectivity.
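The growth rule can be sketched with preferential attachment, the standard way to obtain a scale-free degree distribution. This is a minimal sketch of the wiring step only; the paper's algorithm additionally embeds Hopfield memory dynamics on the grown network:

```python
import random

def neurogenesis_network(n, m=2, seed=7):
    """Grow a network by adding neurons one at a time, wiring each
    newcomer to m existing neurons chosen preferentially by degree
    (a minimal preferential-attachment sketch of neurogenesis)."""
    rng = random.Random(seed)
    edges = [(0, 1)]          # start from one connected pair
    degree_pool = [0, 1]      # each node appears once per unit of degree
    for new in range(2, n):
        chosen = set()
        while len(chosen) < min(m, new):
            chosen.add(rng.choice(degree_pool))   # degree-weighted choice
        for t in chosen:
            edges.append((new, t))
            degree_pool += [new, t]               # update attachment weights
    return edges

edges = neurogenesis_network(100)
deg = {}
for a, b in edges:
    deg[a] = deg.get(a, 0) + 1
    deg[b] = deg.get(b, 0) + 1
print(len(edges), max(deg.values()))
```

Early neurons accumulate high degree and become hubs, giving the heavy-tailed connectivity on which the enhanced storage capacity is measured.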

  20. Multi-scale Modeling of Plasticity in Tantalum.

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Hojun [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Battaile, Corbett Chandler. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carroll, Jay [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Buchheit, Thomas E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weinberger, Christopher [Drexel Univ., Philadelphia, PA (United States)

    2015-12-01

In this report, we present a multi-scale computational model to simulate plastic deformation of tantalum, together with validation experiments. At the atomistic/dislocation level, dislocation kink-pair theory is used to formulate temperature- and strain-rate-dependent constitutive equations. The kink-pair theory is calibrated to available data from single crystal experiments to produce accurate and convenient constitutive laws. The model is then implemented into a BCC crystal plasticity finite element method (CP-FEM) model to predict temperature- and strain-rate-dependent yield stresses of single and polycrystalline tantalum, which are compared with existing experimental data from the literature. Furthermore, classical continuum constitutive models describing temperature- and strain-rate-dependent flow behavior are fit to the yield stresses obtained from the CP-FEM polycrystal predictions. The model is then used to conduct hydrodynamic simulations of the Taylor cylinder impact test and compared with experiments. To validate the proposed tantalum CP-FEM model against experiments, we introduce a method for quantitative comparison of CP-FEM models with various experimental techniques. To mitigate the effects of unknown subsurface microstructure, tantalum tensile specimens with a pseudo-two-dimensional grain structure and grain sizes on the order of millimeters are used. A technique combining electron backscatter diffraction (EBSD) and high-resolution digital image correlation (HR-DIC) is used to measure the texture and sub-grain strain fields upon uniaxial tensile loading at various applied strains. Deformed specimens are also analyzed with optical profilometry measurements to obtain out-of-plane strain fields. These high-resolution measurements are directly compared with large-scale CP-FEM predictions. This computational method directly links fundamental dislocation physics to plastic deformation at the grain scale and to engineering-scale applications.

  1. Large scale solar district heating. Evaluation, modelling and designing - Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The appendices present the following: A) Cad-drawing of the Marstal CSHP design. B) Key values - large-scale solar heating in Denmark. C) Monitoring - a system description. D) WMO-classification of pyranometers (solarimeters). E) The computer simulation model in TRNSYS. F) Selected papers from the author. (EHS)

  2. Phenomenological aspects of no-scale inflation models

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, John [Theoretical Particle Physics and Cosmology Group, Department of Physics, King’s College London, WC2R 2LS London (United Kingdom); Theory Division, CERN, CH-1211 Geneva 23 (Switzerland); Garcia, Marcos A.G. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States); Nanopoulos, Dimitri V. [George P. and Cynthia W. Mitchell Institute for Fundamental Physics and Astronomy, Texas A&M University, College Station, 77843 Texas (United States); Astroparticle Physics Group, Houston Advanced Research Center (HARC), Mitchell Campus, Woodlands, 77381 Texas (United States); Academy of Athens, Division of Natural Sciences, 28 Panepistimiou Avenue, 10679 Athens (Greece); Olive, Keith A. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States)

    2015-10-01

    We discuss phenomenological aspects of inflationary models with a no-scale supergravity Kähler potential motivated by compactified string models, in which the inflaton may be identified either as a Kähler modulus or an untwisted matter field, focusing on models that make predictions for the scalar spectral index n{sub s} and the tensor-to-scalar ratio r that are similar to the Starobinsky model. We discuss possible patterns of soft supersymmetry breaking, exhibiting examples of the pure no-scale type m{sub 0}=B{sub 0}=A{sub 0}=0, of the CMSSM type with universal A{sub 0} and m{sub 0}≠0 at a high scale, and of the mSUGRA type with A{sub 0}=B{sub 0}+m{sub 0} boundary conditions at the high input scale. These may be combined with a non-trivial gauge kinetic function that generates gaugino masses m{sub 1/2}≠0, or one may have a pure gravity mediation scenario where trilinear terms and gaugino masses are generated through anomalies. We also discuss inflaton decays and reheating, showing possible decay channels for the inflaton when it is either an untwisted matter field or a Kähler modulus. Reheating is very efficient if a matter field inflaton is directly coupled to MSSM fields, and both candidates lead to sufficient reheating in the presence of a non-trivial gauge kinetic function.
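    For reference, the Starobinsky-like observables that such no-scale constructions reproduce can be summarized as follows (standard textbook results quoted for context, with T the volume modulus and N{sub *} the number of e-folds; these formulas are not taken from the abstract itself):

```latex
% No-scale Kahler potential (single modulus) and Starobinsky-limit predictions
K = -3\,\ln\!\left(T + \bar{T} - \frac{|\phi|^2}{3}\right), \qquad
n_s \simeq 1 - \frac{2}{N_*} \approx 0.964, \qquad
r \simeq \frac{12}{N_*^2} \approx 0.004 \quad (N_* \approx 55).
```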

  3. Large-Scale Modeling of Wordform Learning and Representation

    Science.gov (United States)

    Sibley, Daragh E.; Kello, Christopher T.; Plaut, David C.; Elman, Jeffrey L.

    2008-01-01

    The forms of words as they appear in text and speech are central to theories and models of lexical processing. Nonetheless, current methods for simulating their learning and representation fail to approach the scale and heterogeneity of real wordform lexicons. A connectionist architecture termed the "sequence encoder" is used to learn…

  4. Small-Scale Helicopter Automatic Autorotation : Modeling, Guidance, and Control

    NARCIS (Netherlands)

    Taamallah, S.

    2015-01-01

    Our research objective is to develop a model-based automatic safety recovery system for a small-scale helicopter Unmanned Aerial Vehicle (UAV) in autorotation, i.e., an engine-OFF flight condition, that safely flies and lands the helicopter at a pre-specified ground location. In pursuit

  5. Vegetable parenting practices scale: Item response modeling analyses

    Science.gov (United States)

    Our objective was to evaluate the psychometric properties of a vegetable parenting practices scale using multidimensional polytomous item response modeling which enables assessing item fit to latent variables and the distributional characteristics of the items in comparison to the respondents. We al...

  6. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    DEFF Research Database (Denmark)

    King, Zachary A.; Lu, Justin; Dräger, Andreas

    2016-01-01

    Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized...... redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases....... Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource...

  7. Disappearing scales in carps: Re-visiting Kirpichnikov's model on the genetics of scale pattern formation

    KAUST Repository

    Casas, Laura

    2013-12-30

    The body of most fishes is fully covered by scales that typically form tight, partially overlapping rows. While some of the genes controlling the formation and growth of fish scales have been studied, very little is known about the genetic mechanisms regulating scale pattern formation. Although the existence of two genes with two pairs of alleles (S&s and N&n) regulating scale coverage in cyprinids was predicted by Kirpichnikov and colleagues nearly eighty years ago, their identity was unknown until recently. In 2009, the 'S' gene was found to be a paralog of fibroblast growth factor receptor 1, fgfr1a1, while the second gene, called 'N', has not yet been identified. We re-visited the original model of Kirpichnikov that proposed four major scale pattern types and observed a high degree of variation within the so-called scattered phenotype, which led us to divide this group into two sub-types: classical mirror and irregular. We also analyzed the survival rates of offspring groups and found a distinct difference between Asian and European crosses. Whereas nude x nude crosses involving at least one parent of Asian origin or hybrid with Asian parent(s) showed the 25% early lethality predicted by Kirpichnikov (due to the lethality of the NN genotype), those with two Hungarian nude parents did not. We further extended Kirpichnikov's work by correlating changes in phenotype (scale pattern) to the deformations of fins and losses of pharyngeal teeth. We observed phenotypic changes which were not restricted to nudes, as described by Kirpichnikov, but were also present in mirrors (and presumably in linears as well; not analyzed in detail here). We propose that the gradation of phenotypes observed within the scattered group is caused by a gradually decreasing level of signaling (a dose-dependent effect), probably due to a concerted action of multiple pathways involved in scale formation. 2013 Casas et al.

  8. Disappearing scales in carps: re-visiting Kirpichnikov's model on the genetics of scale pattern formation.

    Directory of Open Access Journals (Sweden)

    Laura Casas

    Full Text Available The body of most fishes is fully covered by scales that typically form tight, partially overlapping rows. While some of the genes controlling the formation and growth of fish scales have been studied, very little is known about the genetic mechanisms regulating scale pattern formation. Although the existence of two genes with two pairs of alleles (S&s and N&n) regulating scale coverage in cyprinids was predicted by Kirpichnikov and colleagues nearly eighty years ago, their identity was unknown until recently. In 2009, the 'S' gene was found to be a paralog of fibroblast growth factor receptor 1, fgfr1a1, while the second gene, called 'N', has not yet been identified. We re-visited the original model of Kirpichnikov that proposed four major scale pattern types and observed a high degree of variation within the so-called scattered phenotype, which led us to divide this group into two sub-types: classical mirror and irregular. We also analyzed the survival rates of offspring groups and found a distinct difference between Asian and European crosses. Whereas nude × nude crosses involving at least one parent of Asian origin, or a hybrid with Asian parent(s), showed the 25% early lethality predicted by Kirpichnikov (due to the lethality of the NN genotype), those with two Hungarian nude parents did not. We further extended Kirpichnikov's work by correlating changes in phenotype (scale pattern) to the deformations of fins and losses of pharyngeal teeth. We observed phenotypic changes which were not restricted to nudes, as described by Kirpichnikov, but were also present in mirrors (and presumably in linears as well; not analyzed in detail here). We propose that the gradation of phenotypes observed within the scattered group is caused by a gradually decreasing level of signaling (a dose-dependent effect), probably due to a concerted action of multiple pathways involved in scale formation.

  9. New Models and Methods for the Electroweak Scale

    Energy Technology Data Exchange (ETDEWEB)

    Carpenter, Linda [The Ohio State Univ., Columbus, OH (United States). Dept. of Physics

    2017-09-26

    This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the time period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak-scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider, and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored, as the Large Hadron Collider has disfavored much of the minimal model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model, both in collider production and annihilation in space. Accomplishments include creating new tools for analyses of models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies for new discovery scenarios of Dark Matter at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new target analyses to detect direct decays of the Higgs boson into challenging final states like pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac

  10. MODELLING FINE SCALE MOVEMENT CORRIDORS FOR THE TRICARINATE HILL TURTLE

    Directory of Open Access Journals (Sweden)

    I. Mondal

    2016-06-01

    Full Text Available Habitat loss and the destruction of habitat connectivity can lead to species extinction by isolating populations. Identifying important habitat corridors to enhance habitat connectivity is imperative for species conservation, preserving dispersal patterns so as to maintain genetic diversity. Circuit theory is a novel tool for modelling habitat connectivity: it treats habitat as an electronic circuit board and species movement as current flowing through the different resistors in the circuit. Most studies involving circuit theory have been carried out at small scales on wide-ranging animals like wolves or pumas, and more recently on tigers. This calls for a study that tests circuit theory at a large scale to model micro-scale habitat connectivity. The present study on a small South-Asian geoemydid, the Tricarinate Hill-turtle (Melanochelys tricarinata), focuses on habitat connectivity at a very fine scale. The Tricarinate has a small body size (carapace length: 127–175 mm) and home range (8000–15000 m2), with very specific habitat requirements and movement patterns. We used very high resolution Worldview satellite data and extensive field observations to derive a model of landscape permeability at 1 : 2,000 scale to suit the target species. Circuit theory was applied to model potential corridors between core habitat patches for the Tricarinate Hill-turtle. The modelled corridors were validated by extensive ground tracking data collected using the thread spool technique and found to be functional. Therefore, circuit theory is a promising tool for accurately identifying corridors, to aid in habitat studies of small species.

  11. Modelling Fine Scale Movement Corridors for the Tricarinate Hill Turtle

    Science.gov (United States)

    Mondal, I.; Kumar, R. S.; Habib, B.; Talukdar, G.

    2016-06-01

    Habitat loss and the destruction of habitat connectivity can lead to species extinction by isolating populations. Identifying important habitat corridors to enhance habitat connectivity is imperative for species conservation, preserving dispersal patterns so as to maintain genetic diversity. Circuit theory is a novel tool for modelling habitat connectivity: it treats habitat as an electronic circuit board and species movement as current flowing through the different resistors in the circuit. Most studies involving circuit theory have been carried out at small scales on wide-ranging animals like wolves or pumas, and more recently on tigers. This calls for a study that tests circuit theory at a large scale to model micro-scale habitat connectivity. The present study on a small South-Asian geoemydid, the Tricarinate Hill-turtle (Melanochelys tricarinata), focuses on habitat connectivity at a very fine scale. The Tricarinate has a small body size (carapace length: 127-175 mm) and home range (8000-15000 m2), with very specific habitat requirements and movement patterns. We used very high resolution Worldview satellite data and extensive field observations to derive a model of landscape permeability at 1 : 2,000 scale to suit the target species. Circuit theory was applied to model potential corridors between core habitat patches for the Tricarinate Hill-turtle. The modelled corridors were validated by extensive ground tracking data collected using the thread spool technique and found to be functional. Therefore, circuit theory is a promising tool for accurately identifying corridors, to aid in habitat studies of small species.
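    The circuit-theory approach in these two records, with patches as nodes and landscape permeability as edge conductance, reduces to linear algebra on a graph Laplacian. A minimal sketch in the spirit of Circuitscape, with made-up conductances on a 4-node landscape graph:

```python
import numpy as np

# Toy circuit-theory connectivity: edge conductances (inverse resistances)
# stand in for landscape permeability; the values are hypothetical.
n = 4
edges = {(0, 1): 1.0, (1, 2): 0.5, (2, 3): 1.0, (0, 3): 0.2}
G = np.zeros((n, n))
for (i, j), g in edges.items():
    G[i, j] = G[j, i] = g

L = np.diag(G.sum(axis=1)) - G     # weighted graph Laplacian
L_pinv = np.linalg.pinv(L)         # Moore-Penrose pseudoinverse

def effective_resistance(a, b):
    """Resistance distance between habitat patches a and b (low = well connected)."""
    return L_pinv[a, a] + L_pinv[b, b] - 2 * L_pinv[a, b]

# Inject 1 A of "movement current" at patch 0, withdraw it at patch 3,
# and read off how the flow splits over the two available paths.
v = L_pinv @ (np.eye(n)[0] - np.eye(n)[3])   # node potentials
for (i, j), g in edges.items():
    print(f"edge {i}-{j}: current {abs(g * (v[i] - v[j])):.3f} A")
print(f"effective resistance 0-3: {effective_resistance(0, 3):.3f}")
```

Edges carrying large currents between core patches are the modelled corridors; the effective resistance summarizes overall connectivity between a patch pair.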

  12. Genome-scale constraint-based modeling of Geobacter metallireducens

    Directory of Open Access Journals (Sweden)

    Famili Iman

    2009-01-01

    Full Text Available Abstract Background Geobacter metallireducens was the first organism that can be grown in pure culture to completely oxidize organic compounds with Fe(III) oxide serving as the electron acceptor. Geobacter species, including G. sulfurreducens and G. metallireducens, are used for bioremediation and electricity generation from waste organic matter and renewable biomass. The constraint-based modeling approach enables the development of genome-scale in silico models that can predict the behavior of complex biological systems and their responses to their environments. Such a modeling approach was applied to provide physiological and ecological insights into the metabolism of G. metallireducens. Results The genome-scale metabolic model of G. metallireducens was constructed to include 747 genes and 697 reactions. Compared to the G. sulfurreducens model, the G. metallireducens metabolic model contains 118 unique reactions that reflect many of G. metallireducens' specific metabolic capabilities. Detailed examination of the G. metallireducens model suggests that its central metabolism contains several energy-inefficient reactions that are not present in the G. sulfurreducens model. The experimental biomass yield of G. metallireducens growing on pyruvate was lower than the predicted optimal biomass yield. Microarray data of G. metallireducens growing with benzoate and acetate indicated that genes encoding these energy-inefficient reactions were up-regulated by benzoate. These results suggested that the energy-inefficient reactions were likely turned off during G. metallireducens growth with acetate for optimal biomass yield, but were up-regulated during growth with complex electron donors such as benzoate for rapid energy generation. Furthermore, several computational modeling approaches were applied to accelerate G. metallireducens research. For example, growth of G.
metallireducens with different electron donors and electron acceptors was studied using the genome-scale
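    The constraint-based (flux balance analysis) idea behind such genome-scale models is a linear program: maximize a biomass objective subject to steady-state mass balance S v = 0 and flux bounds. A minimal sketch on a 2-metabolite toy network, which is illustrative only and not the 697-reaction G. metallireducens reconstruction:

```python
import numpy as np
from scipy.optimize import linprog

# Toy FBA network (hypothetical):
#   v1: substrate uptake -> A   (uptake capped at 10 mmol/gDW/h)
#   v2: A -> B
#   v3: B -> biomass            (objective)
S = np.array([[1.0, -1.0,  0.0],    # steady-state balance of metabolite A
              [0.0,  1.0, -1.0]])   # steady-state balance of metabolite B
bounds = [(0, 10), (0, None), (0, None)]
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)  # maximize v3

print("optimal biomass flux:", -res.fun)   # limited by the uptake bound
print("flux distribution:", res.x)
```

Removing or constraining a reaction (e.g., one of the energy-inefficient reactions discussed above) and re-solving is how such models predict growth phenotypes for different electron donors.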

  13. Comparing the Hydrologic and Watershed Processes between a Full Scale Stochastic Model Versus a Scaled Physical Model of Bell Canyon

    Science.gov (United States)

    Hernandez, K. F.; Shah-Fairbank, S.

    2016-12-01

    The San Dimas Experimental Forest has been designated as a research area by the United States Forest Service since 1933 for use as a hydrologic testing facility to investigate the watershed hydrology of its 27 square miles. Incorporating a computer model lends validity to testing with the physical model. This study focuses on the San Dimas Experimental Forest's Bell Canyon, one of the triad of watersheds contained within the Big Dalton watershed of the San Dimas Experimental Forest. A scaled physical model of Bell Canyon was constructed to highlight watershed characteristics and their effects on runoff. The physical model offers a comprehensive visualization of a natural watershed and can vary rainfall intensity, slope, and roughness through interchangeable parts and adjustments to the system. The scaled physical model is validated and calibrated against a HEC-HMS model to ensure similitude of the system. Preliminary results from the physical model suggest that a 50-year storm event can be represented by a peak discharge of 2.2 × 10^-3 cfs. When comparing the results to HEC-HMS, this equates to a flow relationship of approximately 1:160,000, which can be used to model other return periods. The completed Bell Canyon physical model can be used for educational instruction in the classroom, outreach in the community, and further research as an accurate representation of the watershed present in the San Dimas Experimental Forest.
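    The reported 1:160,000 flow relationship can be sanity-checked with open-channel similitude. A sketch assuming Froude similarity, under which discharge scales as the 5/2 power of the length scale; the geometric scale printed below is inferred from the flow ratio, not stated in the abstract:

```python
# Froude-similitude check of the Bell Canyon model/prototype flow ratio.
Q_model = 2.2e-3                 # cfs, 50-year peak in the physical model
Q_ratio = 160_000                # prototype:model discharge ratio
Q_prototype = Q_model * Q_ratio
L_ratio = Q_ratio ** (2 / 5)     # invert Q_r = L_r**(5/2)
print(f"prototype 50-year peak ~ {Q_prototype:.0f} cfs")
print(f"implied geometric scale ~ 1:{L_ratio:.0f}")
```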

  14. Integrating land management into Earth system models: the importance of land use transitions at sub-grid-scale

    Science.gov (United States)

    Pongratz, Julia; Wilkenskjeld, Stiig; Kloster, Silvia; Reick, Christian

    2014-05-01

    Recent studies indicate that changes in surface climate and carbon fluxes caused by land management (i.e., modifications of vegetation structure without changing the type of land cover) can be as large as those caused by land cover change. Further, such effects may occur over substantial areas: while about one quarter of the land surface has undergone land cover change, another fifty percent is managed. This calls for the integration of management processes into Earth system models (ESMs). This integration increases the importance of awareness and agreement on how to diagnose effects of land use in ESMs, to avoid additional model spread and thus unnecessary uncertainties in carbon budget estimates. Process understanding of management effects, their model implementation, and data availability on management type and extent all pose challenges. In this respect, a significant step forward has been made in the framework of the current IPCC's CMIP5 simulations (Coupled Model Intercomparison Project Phase 5): the climate simulations were driven with the same harmonized land use dataset that, unlike most datasets commonly used before, included information on two important types of management: wood harvest and shifting cultivation. However, these new aspects were employed by only some of the CMIP5 models, while most models continued to use the associated land cover maps. Here, we explore the consequences for the carbon cycle of including subgrid-scale land transformations ("gross transitions"), such as shifting cultivation, as an example of the current state of implementation of land management in ESMs. Accounting for gross transitions is expected to increase land use emissions because it represents simultaneous clearing and regrowth of natural vegetation in different parts of the grid cell, reducing standing carbon stocks. This process cannot be captured by prescribing land cover maps ("net transitions"). Using the MPI-ESM we find that ignoring gross transitions

  15. Large scale modelling of catastrophic floods in Italy

    Science.gov (United States)

    Azemar, Frédéric; Nicótina, Ludovico; Sassi, Maximiliano; Savina, Maurizio; Hilberts, Arno

    2017-04-01

    The RMS European Flood HD model® is a suite of country-scale flood catastrophe models covering 13 countries throughout continental Europe and the UK. The models are developed with the goal of supporting risk assessment analyses for the insurance industry. Within this framework, RMS is developing a hydrologic and inundation model for Italy. The model aims to reproduce the hydrologic and hydraulic properties across the domain through a modeling chain: a semi-distributed hydrologic model that captures the spatial variability of the runoff formation processes is coupled with a one-dimensional river routing algorithm and a two-dimensional (depth-averaged) inundation model. This model setup captures the flood risk from both pluvial (overland flow) and fluvial flooding. Here we describe the calibration and validation methodologies for this modelling suite applied to the Italian river basins. The variability that characterizes the domain (in terms of meteorology, topography, and hydrologic regimes) requires a modeling approach able to represent a broad range of meteo-hydrologic regimes. The calibration of the rainfall-runoff and river routing models is performed by means of a genetic algorithm that identifies the best-performing parameter set within the search space over the last 50 years. We first establish the quality of the calibrated parameters on the full hydrologic balance and on individual discharge peaks by comparing extreme statistics to observations over the calibration period at several stations. The model is then used to analyze the major floods in the country; we discuss the different meteorological setups leading to the historical events and the physical mechanisms that induced these floods. We can thus assess the performance of RMS' hydrological model in view of the physical mechanisms leading to floods and highlight the main controls on flood risk modelling throughout the country. The model's ability to accurately simulate antecedent

  16. Large Scale Computing for the Modelling of Whole Brain Connectivity

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon

    of nodes with a shared connectivity pattern. Modelling the brain in great detail on a whole-brain scale is essential to fully understand the underlying organization of the brain and reveal the relations between structure and function, that allows sophisticated cognitive behaviour to emerge from ensembles...... of neurons. Relying on Markov Chain Monte Carlo (MCMC) simulations as the workhorse in Bayesian inference however poses significant computational challenges, especially when modelling networks at the scale and complexity supported by high-resolution whole-brain MRI. In this thesis, we present how to overcome...... these computational limitations and apply Bayesian stochastic block models for un-supervised data-driven clustering of whole-brain connectivity in full image resolution. We implement high-performance software that allows us to efficiently apply stochastic blockmodelling with MCMC sampling on large complex networks...
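    The block-model clustering idea in this thesis abstract can be illustrated at toy scale. A minimal sketch assuming a two-block Bernoulli stochastic block model with greedy single-node label updates; the thesis itself uses Bayesian nonparametric block models with MCMC sampling at whole-brain resolution, so this is only the underlying likelihood at work:

```python
import numpy as np

# Plant two communities, corrupt the labels, and recover the partition by
# greedy label updates on the Bernoulli SBM likelihood (illustrative only).
rng = np.random.default_rng(0)
n, K = 40, 2
z_true = np.repeat([0, 1], n // 2)
P = np.array([[0.8, 0.05], [0.05, 0.8]])            # block connection probabilities
A = (rng.random((n, n)) < P[np.ix_(z_true, z_true)]).astype(int)
A = np.triu(A, 1); A = A + A.T                      # undirected, no self-loops

def block_rates(z):
    """Smoothed MLE of the K x K block connection probabilities."""
    R = np.ones((K, K)); N = 2 * np.ones((K, K))    # Beta(1,1) smoothing
    for a in range(K):
        for b in range(K):
            mask = np.outer(z == a, z == b)
            np.fill_diagonal(mask, False)
            R[a, b] += A[mask].sum(); N[a, b] += mask.sum()
    return R / N

z = z_true.copy()
flip = rng.random(n) < 0.3                          # corrupt 30% of the labels
z[flip] = 1 - z[flip]
for _ in range(10):                                 # label-update sweeps
    theta = block_rates(z)
    logt, log1t = np.log(theta), np.log1p(-theta)
    for i in range(n):
        # log-likelihood of node i's edges under each candidate label (self term removed)
        scores = [(A[i] * logt[a, z] + (1 - A[i]) * log1t[a, z]).sum() - log1t[a, z[i]]
                  for a in range(K)]
        z[i] = int(np.argmax(scores))
agree = max((z == z_true).mean(), (z != z_true).mean())  # up to label swap
print(f"label agreement with planted partition: {agree:.2f}")
```

Replacing the greedy argmax with a sample from the normalized scores turns this into the Gibbs-sampling MCMC that the thesis scales up to whole-brain networks.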

  17. Modeling fast and slow earthquakes at various scales.

    Science.gov (United States)

    Ide, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes.

  18. Classical scale invariance in the inert doublet model

    Energy Technology Data Exchange (ETDEWEB)

    Plascencia, Alexis D. [Institute for Particle Physics Phenomenology, Department of Physics,Durham University, Durham DH1 3LE (United Kingdom)

    2015-09-04

    The inert doublet model (IDM) is a minimal extension of the Standard Model (SM) that can account for the dark matter in the universe. Naturalness arguments motivate us to study whether the model can be embedded into a theory with dynamically generated scales. In this work we study a classically scale invariant version of the IDM with a minimal hidden sector, which has a U(1){sub CW} gauge symmetry and a complex scalar Φ. The mass scale is generated in the hidden sector via the Coleman-Weinberg (CW) mechanism and communicated to the two Higgs doublets via portal couplings. Since the CW scalar remains light, acquires a vacuum expectation value and mixes with the SM Higgs boson, the phenomenology of this construction can be modified with respect to the traditional IDM. We analyze the impact of adding this CW scalar and the Z{sup ′} gauge boson on the calculation of the dark matter relic density and on the spin-independent nucleon cross section for direct detection experiments. Finally, by studying the RG equations we find regions in parameter space which remain valid all the way up to the Planck scale.

  19. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. In addition, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is presented. The case study proved that large scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.

  20. A variational multi-scale method with spectral approximation of the sub-scales: Application to the 1D advection-diffusion equations

    KAUST Repository

    Chacón Rebollo, Tomás

    2015-03-01

    This paper introduces a variational multi-scale method where the sub-grid scales are computed by spectral approximations. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that admit an associated basis of eigenfunctions orthonormal in weighted L2 spaces. This allows the sub-grid scales to be calculated element-wise by means of the associated spectral expansion. We propose a feasible VMS-spectral method by truncating this spectral expansion to a finite number of modes. We apply this general framework to the convection-diffusion equation, analytically computing the family of eigenfunctions. We perform a convergence and error analysis. We also present numerical tests that show the stability of the method for an odd number of spectral modes, and an improvement of accuracy in the large resolved scales due to the addition of the sub-grid spectral scales.
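    The effect of truncating a spectral expansion for the 1D advection-diffusion model problem can be seen with a plain sine (Fourier) Galerkin basis. Note this is not the paper's weighted eigenfunction basis, just an illustrative mode-truncated solve of -nu*u'' + a*u' = 1 on (0,1) with homogeneous Dirichlet conditions:

```python
import numpy as np

nu, a, N = 0.1, 1.0, 32                 # diffusivity, advection speed, retained modes
k = np.arange(1, N + 1)
J, Kk = np.meshgrid(k, k, indexing="ij")

# Galerkin matrix A_jk = nu*(phi_j', phi_k') + a*(phi_j, phi_k') for
# phi_k = sin(k*pi*x), assembled from the analytic integrals on (0,1).
A = np.diag(nu * (k * np.pi) ** 2 / 2.0)
odd = (J + Kk) % 2 == 1                 # convection couples modes of opposite parity
D = (J ** 2 - Kk ** 2).astype(float)
D[~odd] = 1.0                           # dummy denominator; these entries stay zero
A[odd] += (2.0 * a * J * Kk / D)[odd]
f = (1.0 - (-1.0) ** k) / (k * np.pi)   # load vector for the right-hand side f(x) = 1

c = np.linalg.solve(A, f)               # coefficients of the retained modes
x = np.linspace(0.0, 1.0, 401)
u = c @ np.sin(np.outer(k * np.pi, x))
u_exact = (x - np.expm1(a * x / nu) / np.expm1(a / nu)) / a
print("max error with", N, "modes:", float(np.abs(u - u_exact).max()))
```

In the VMS setting the modes beyond a coarse cutoff play the role of the sub-grid scales; truncating them further degrades accuracy near the outflow boundary layer first.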

  1. Deconfined Quantum Criticality, Scaling Violations, and Classical Loop Models

    Directory of Open Access Journals (Sweden)

    Adam Nahum

    2015-12-01

    Full Text Available Numerical studies of the transition between Néel and valence bond solid phases in two-dimensional quantum antiferromagnets give strong evidence for the remarkable scenario of deconfined criticality, but display strong violations of finite-size scaling that are not yet understood. We show how to realize the universal physics of the Néel–valence-bond-solid (VBS) transition in a three-dimensional classical loop model (this model includes the subtle interference effect that suppresses hedgehog defects in the Néel order parameter). We use the loop model for simulations of unprecedentedly large systems (up to linear size L=512). Our results are compatible with a continuous transition at which both Néel and VBS order parameters are critical, and we do not see conventional signs of first-order behavior. However, we show that the scaling violations are stronger than previously realized and are incompatible with conventional finite-size scaling, even if allowance is made for a weakly or marginally irrelevant scaling variable. In particular, different approaches to determining the anomalous dimensions η_{VBS} and η_{Néel} yield very different results. The assumption of conventional finite-size scaling leads to estimates that drift to negative values at large sizes, in violation of the unitarity bounds. In contrast, the decay with distance of critical correlators on scales much smaller than system size is consistent with large positive anomalous dimensions. Barring an unexpected reversal in behavior at still larger sizes, this implies that the transition, if continuous, must show unconventional finite-size scaling, for example, from an additional dangerously irrelevant scaling variable. Another possibility is an anomalously weak first-order transition. By analyzing the renormalization group flows for the noncompact CP^{n-1} field theory (the n-component Abelian Higgs model) between two and four dimensions, we give the simplest scenario by which an

  2. Modelling galaxy merger time-scales and tidal destruction

    Science.gov (United States)

    Simha, Vimal; Cole, Shaun

    2017-12-01

    We present a model for the dynamical evolution of subhaloes based on an approach combining numerical and analytical methods. Our method is based on tracking subhaloes in an N-body simulation up to the latest epoch at which they can be resolved, and then applying an analytic prescription for the merger time-scale that takes dynamical friction and tidal disruption into account. When applied to cosmological N-body simulations with mass resolutions that differ by two orders of magnitude, the technique produces halo occupation distributions that agree to within 3 per cent. This model has now been implemented in the GALFORM semi-analytic model of galaxy formation.
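    Analytic merger time-scale prescriptions of this kind are usually calibrated fitting formulas in the dynamical-friction family. A sketch using the Boylan-Kolchin et al. (2008) fitting form as a stand-in; the paper's own calibrated prescription may differ, and the dynamical time below is an arbitrary illustrative value:

```python
import numpy as np

# Dynamical-friction merger time-scale for a subhalo once it drops below
# the simulation's resolution limit (Boylan-Kolchin et al. 2008 fit).
def merger_timescale(t_dyn, mass_ratio, circularity=0.5, rc_over_rvir=1.0):
    """Merger time in the units of t_dyn; mass_ratio = M_host / M_sat > 1."""
    return (t_dyn * 0.216 * mass_ratio**1.3 / np.log(1.0 + mass_ratio)
            * np.exp(1.9 * circularity) * rc_over_rvir)

# More unequal mergers take longer; radial orbits (low circularity) merge faster.
t_dyn = 2.0  # Gyr, illustrative halo dynamical time
print(merger_timescale(t_dyn, 10.0))
print(merger_timescale(t_dyn, 100.0))
print(merger_timescale(t_dyn, 10.0, circularity=0.1))
```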

  3. Toward multi-scale computational modeling in developmental disability research.

    Science.gov (United States)

    Dammann, O; Follett, P

    2011-06-01

    The field of theoretical neuroscience is gaining increasing recognition. Virtually all areas of neuroscience offer potential linkage points for computational work. In developmental neuroscience, the main areas of research are neural development and connectivity, and connectionist modeling of cognitive development. In this paper, we suggest that computational models can be helpful tools for understanding the pathogenesis and consequences of perinatal brain damage and subsequent developmental disability. In particular, designing multi-scale computational models should be considered by developmental neuroscientists interested in helping reduce the risk for developmental disabilities. Georg Thieme Verlag Stuttgart · New York.

  4. Atmospheric CO2 modeling at the regional scale: an intercomparison of 5 meso-scale atmospheric models

    Directory of Open Access Journals (Sweden)

    G. Pérez-Landa

    2007-12-01

    Full Text Available Atmospheric CO2 modeling, in interaction with surface fluxes at the regional scale, is developed within the framework of the European project CarboEurope-IP and its Regional Experiment component. In this context, five meso-scale meteorological models at 2 km resolution participate in an intercomparison exercise. Using a common experimental protocol that imposes a large number of rules, two days of the CarboEurope Regional Experiment Strategy (CERES) campaign are simulated. The models are systematically evaluated against the observations, using statistical tools and direct comparisons: temperature and relative humidity at 2 m, wind direction, surface energy and CO2 fluxes, vertical profiles of potential temperature, and in-situ CO2 concentrations are compared between observations and simulations. These comparisons reveal a cold bias in the simulated temperature at 2 m; the latent heat flux is often underestimated. Nevertheless, the heterogeneities in CO2 concentrations are well captured by most of the models. The intercomparison exercise also demonstrates the models' ability to represent the meteorology and carbon cycling at the synoptic and regional scales in the boundary layer, but it also points out some of the major shortcomings of the models.
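    The statistical evaluation described (e.g., detecting a cold bias in 2 m temperature) typically reduces to a handful of summary statistics. A minimal sketch; the sample values are made up for illustration:

```python
def bias_and_rmse(sim, obs):
    """Mean bias (sim - obs) and root-mean-square error between a
    simulated and an observed time series of equal length."""
    diffs = [s - o for s, o in zip(sim, obs)]
    bias = sum(diffs) / len(diffs)
    rmse = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
    return bias, rmse

# A cold bias in simulated 2 m temperature shows up as a negative mean:
bias, rmse = bias_and_rmse(sim=[14.2, 15.0, 16.1], obs=[15.0, 16.0, 17.0])
```

    By construction the RMSE is never smaller than the absolute bias, which is why both are usually reported together.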

  5. Increasing process integrity in global scale water balance models

    Science.gov (United States)

    Plöger, Lisa; Mewes, Benjamin; Oppel, Henning; Schumann, Andreas

    2017-04-01

    Hydrological models on a global or continental scale are often used to model human impact on the water balance in data-scarce regions. They are therefore validated not against time series of runoff measured at gauges but against long-term estimates. The simplistic model GlobWat was introduced by the FAO to predict irrigation water demand for continental catchments based on open-source data. Originally, the model was not designed to process time series but to estimate water demand from long-time averages of precipitation and evapotranspiration; the emphasis of GlobWat was therefore on crop evapotranspiration and water availability in agricultural regions. In our study we wanted to enhance the modelling detail of forest evapotranspiration on the one hand and of time series simulation on the other, while keeping the amount of input data as small as possible, or at least limited to open-source data. Our objectives derive from case studies in the forest-dominated catchments of the Danube and Mississippi. With the Penman-Monteith equation as the fundamental equation within the original GlobWat model, evapotranspiration losses in these regions could not be simulated adequately. Consequently, the water availability of downstream regions dominated by agriculture might be overestimated, and hence the estimation of irrigation demands biased. We therefore implemented a Shuttleworth & Calder approach as well as a Priestley-Taylor approach for the evapotranspiration of forested areas. Both approaches are compared and evaluated against monthly time series of runoff provided by the GRDC (Global Runoff Data Centre). As an additional extension of the model we added a simple one-parameter snow routine. In our presentation we compare the different stages of modelling to demonstrate the options for extending and validating these models with observed data at an appropriate scale.
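    The Priestley-Taylor approach mentioned above can be sketched as follows. This is the textbook form with FAO-56-style constants, not necessarily the exact implementation added to GlobWat; the parameter values and example inputs are standard assumptions.

```python
import math

def priestley_taylor_pet(rn, g, t_air, alpha=1.26, gamma=0.066):
    """Priestley-Taylor potential evapotranspiration (mm/day).

    rn, g : net radiation and soil heat flux (MJ m-2 day-1)
    t_air : mean air temperature (deg C)
    gamma : psychrometric constant (kPa/K)
    """
    # Slope of the saturation vapour pressure curve (kPa/K), FAO-56 form
    es = 0.6108 * math.exp(17.27 * t_air / (t_air + 237.3))
    delta = 4098.0 * es / (t_air + 237.3) ** 2
    lam = 2.45  # latent heat of vaporization (MJ/kg)
    return alpha * delta / (delta + gamma) * (rn - g) / lam

# A mild day with 12 MJ m-2 day-1 of net radiation:
pet = priestley_taylor_pet(rn=12.0, g=0.0, t_air=20.0)
```

    Unlike Penman-Monteith, this radiation-driven form needs no wind or humidity input, which is part of its appeal for data-scarce, forest-dominated regions.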

  6. Comparing turbulent mixing of biogenic VOC across model scale

    Science.gov (United States)

    Li, Y.; Barth, M. C.; Steiner, A. L.

    2016-12-01

    Vertical mixing of biogenic volatile organic compounds (BVOC) in the planetary boundary layer (PBL) is important for simulating the formation of ozone, secondary organic aerosols (SOA), and climate feedbacks. To assess the representation of vertical mixing in the atmosphere for the Baltimore-Washington DISCOVER-AQ 2011 campaign, we use two models of different scale and turbulence representation: (1) the National Center for Atmospheric Research's Large Eddy Simulation (LES) model, and (2) the Weather Research and Forecasting-Chemistry (WRF-Chem) model, which simulates regional meteorology and chemistry. For WRF-Chem, we evaluate the model's boundary layer schemes at convection-permitting scales (4 km). WRF-Chem simulated vertical profiles are compared with results from the turbulence-resolving LES model under similar meteorological and chemical conditions. The influence of clouds on gas-phase and aqueous species and the impact of cloud processing are evaluated at both scales. Temporal evolutions of a surface-to-cloud concentration ratio are calculated to determine how well BVOC vertical mixing is captured in WRF-Chem.

  7. Large-scale model of mammalian thalamocortical systems.

    Science.gov (United States)

    Izhikevich, Eugene M; Edelman, Gerald M

    2008-03-04

    The understanding of the structural and dynamic complexity of mammalian brains is greatly facilitated by computer simulations. We present here a detailed large-scale thalamocortical model based on experimental measures in several mammalian species. The model spans three anatomical scales. (i) It is based on global (white-matter) thalamocortical anatomy obtained by means of diffusion tensor imaging (DTI) of a human brain. (ii) It includes multiple thalamic nuclei and six-layered cortical microcircuitry based on in vitro labeling and three-dimensional reconstruction of single neurons of cat visual cortex. (iii) It has 22 basic types of neurons with appropriate laminar distribution of their branching dendritic trees. The model simulates one million multicompartmental spiking neurons calibrated to reproduce known types of responses recorded in vitro in rats. It has almost half a billion synapses with appropriate receptor kinetics, short-term plasticity, and long-term dendritic spike-timing-dependent synaptic plasticity (dendritic STDP). The model exhibits behavioral regimes of normal brain activity that were not explicitly built-in but emerged spontaneously as the result of interactions among anatomical and dynamic processes. We describe spontaneous activity, sensitivity to changes in individual neurons, emergence of waves and rhythms, and functional connectivity on different scales.
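    The million multicompartmental neurons of the full model are beyond a short snippet, but the point-neuron version of Izhikevich's own two-variable spiking model can be sketched. The parameters a, b, c, d below are the standard regular-spiking values; the input current and duration are illustrative assumptions.

```python
def izhikevich_spike_count(i_ext=10.0, t_ms=500.0, dt=0.25,
                           a=0.02, b=0.2, c=-65.0, d=8.0):
    """Forward-Euler integration of the two-variable Izhikevich model:
        v' = 0.04 v^2 + 5 v + 140 - u + I,   u' = a (b v - u),
    with reset v <- c, u <- u + d whenever v crosses +30 mV.
    Returns the number of spikes fired."""
    v, u = -65.0, b * -65.0
    spikes = 0
    for _ in range(int(t_ms / dt)):
        if v >= 30.0:        # spike detected: reset membrane and recovery
            v, u = c, u + d
            spikes += 1
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_ext)
        u += dt * a * (b * v - u)
    return spikes

# With sustained input, the regular-spiking cell fires tonically:
n_spikes = izhikevich_spike_count()
```

    Changing only (a, b, c, d) reproduces the other firing classes (bursting, fast-spiking, etc.), which is what makes this model attractive for large-scale simulation.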

  8. Bed form dynamics in distorted lightweight scale models

    Science.gov (United States)

    Aberle, Jochen; Henning, Martin; Ettmer, Bernd

    2016-04-01

    The adequate prediction of flow and sediment transport over bed forms presents a major obstacle for the solution of sedimentation problems in alluvial channels because bed forms affect hydraulic resistance, sediment transport, and channel morphodynamics. Moreover, bed forms can affect hydraulic habitat for biota, may introduce severe restrictions to navigation, and present a major problem for engineering structures such as water intakes and groynes. The main body of knowledge on the geometry and dynamics of bed forms such as dunes originates from laboratory and field investigations focusing on bed forms in sand bed rivers. Such investigations enable insight into the physics of the transport processes, but do not allow for the long term simulation of morphodynamic development as required to assess, for example, the effects of climate change on river morphology. On the other hand, this can be achieved through studies with distorted lightweight scale models allowing for the modification of the time scale. However, our understanding of how well bed form geometry and dynamics, and hence sediment transport mechanics, are reproduced in such models is limited. Within this contribution we explore this issue using data from investigations carried out at the Federal Waterways and Research Institute in Karlsruhe, Germany in a distorted lightweight scale model of the river Oder. The model had a vertical scale of 1:40 and a horizontal scale of 1:100, the bed material consisted of polystyrene particles, and the resulting dune geometry and dynamics were measured with a high spatial and temporal resolution using photogrammetric methods. Parameters describing both the directly measured and up-scaled dune geometry were determined using the random field approach. These parameters (e.g., standard deviation, skewness, kurtosis) will be compared to prototype observations as well as to results from the literature. 
Similarly, parameters describing the lightweight bed form dynamics, which

  9. Comprehensive Approaches to Multiphase Flows in Geophysics - Application to nonisothermal, nonhomogenous, unsteady, large-scale, turbulent dusty clouds I. Hydrodynamic and Thermodynamic RANS and LES Models

    Energy Technology Data Exchange (ETDEWEB)

    S. Dartevelle

    2005-09-05

    The objective of this manuscript is to fully derive a geophysical multiphase model able to "accommodate" different multiphase turbulence approaches: the Reynolds-Averaged Navier-Stokes (RANS), Large Eddy Simulation (LES), or hybrid RANS-LES frameworks. This manuscript is the first part of a larger geophysical multiphase project, led by LANL, that aims to develop comprehensive modeling tools for large-scale, atmospheric, transient-buoyancy dusty jets and plumes (e.g., Plinian clouds, nuclear "mushrooms", "supercell" forest fire plumes) and for boundary-dominated geophysical multiphase gravity currents (e.g., dusty surges, dilute pyroclastic flows, dusty gravity currents in street canyons). LES is a partially deterministic approach constructed on either a spatial or a temporal separation between the large and small scales of the flow, whereas RANS is an entirely probabilistic approach constructed on a statistical separation between an ensemble-averaged mean and higher-order statistical moments (the so-called "fluctuating parts"). Within this specific multiphase context, both turbulence approaches are built upon the same phasic binary-valued "function of presence". This function of presence formally describes the occurrence, or not, of any phase at a given position and time and therefore makes it possible to derive the same basic multiphase Navier-Stokes model for either the RANS or the LES framework. The only differences between these turbulence frameworks are the closures for the various "turbulence" terms involving the unknown variables from the fluctuating (RANS) or subgrid (LES) parts. Even though the hydrodynamic and thermodynamic models for RANS and LES share the same set of partial differential equations, the physical interpretations of these PDEs cannot be the same, i.e., RANS models an averaged field, while LES simulates a

  10. Compare pilot-scale and industry-scale models of pulverized coal combustion in an ironmaking blast furnace

    Science.gov (United States)

    Shen, Yansong; Yu, Aibing; Zulli, Paul

    2013-07-01

    In order to understand the complex phenomena of the pulverized coal injection (PCI) process in a blast furnace (BF), mathematical models have been developed at different scales: a pilot-scale model of coal combustion and an industry-scale (in-furnace) model of coal/coke combustion in a real BF. This paper compares these PCI models with respect to model development and model capability. Model development is discussed in terms of model formulation, new features, and the geometry/regions considered. Model capability is then discussed in terms of the main findings, followed by an evaluation of the models' advantages and limitations. The comparison indicates that both PCI models are able to describe PCI operation qualitatively; the in-furnace model is more reliable for simulating in-furnace phenomena of PCI operation both qualitatively and quantitatively. These models are useful for understanding the flow-thermo-chemical behaviors and for optimizing PCI operation in practice.

  11. Multi-scale modeling of the CD8 immune response

    Science.gov (United States)

    Barbarroux, Loic; Michel, Philippe; Adimy, Mostafa; Crauste, Fabien

    2016-06-01

    During the primary CD8 T-cell immune response to an intracellular pathogen, CD8 T-cells undergo exponential proliferation and continuous differentiation, acquiring cytotoxic capabilities to fight the infection and memorizing the corresponding antigen. After the pathogen is cleared, the only CD8 T-cells left are antigen-specific memory cells, whose role is to respond more strongly and rapidly if they are presented with this same antigen again. That is how vaccines work: a small quantity of a weakened pathogen is introduced into the organism to trigger the primary response, generating the corresponding memory cells in the process and giving the organism a way to defend itself if it encounters the same pathogen again. To investigate this process, we propose a nonlinear, multi-scale mathematical model of the CD8 T-cell immune response to vaccination using a maturity-structured partial differential equation. At the intracellular scale, the level of expression of key proteins is modeled by a delay differential equation system, which gives the speed of maturation for each cell. The population of cells is modeled by a maturity-structured equation whose speeds are given by the intracellular model. We focus here on building the model, as well as on its asymptotic study. Finally, we display numerical simulations showing that the model can reproduce the biological dynamics of the cell population for both the primary and the secondary responses.

  12. Multi-scale modeling of the CD8 immune response

    Energy Technology Data Exchange (ETDEWEB)

    Barbarroux, Loic, E-mail: loic.barbarroux@doctorant.ec-lyon.fr [Inria, Université de Lyon, UMR 5208, Institut Camille Jordan (France); Ecole Centrale de Lyon, 36 avenue Guy de Collongue, 69134 Ecully (France); Michel, Philippe, E-mail: philippe.michel@ec-lyon.fr [Inria, Université de Lyon, UMR 5208, Institut Camille Jordan (France); Ecole Centrale de Lyon, 36 avenue Guy de Collongue, 69134 Ecully (France); Adimy, Mostafa, E-mail: mostafa.adimy@inria.fr [Inria, Université de Lyon, UMR 5208, Université Lyon 1, Institut Camille Jordan, 43 Bd. du 11 novembre 1918, F-69200 Villeurbanne Cedex (France); Crauste, Fabien, E-mail: crauste@math.univ-lyon1.fr [Inria, Université de Lyon, UMR 5208, Université Lyon 1, Institut Camille Jordan, 43 Bd. du 11 novembre 1918, F-69200 Villeurbanne Cedex (France)

    2016-06-08

    During the primary CD8 T-cell immune response to an intracellular pathogen, CD8 T-cells undergo exponential proliferation and continuous differentiation, acquiring cytotoxic capabilities to fight the infection and memorizing the corresponding antigen. After the pathogen is cleared, the only CD8 T-cells left are antigen-specific memory cells, whose role is to respond more strongly and rapidly if they are presented with this same antigen again. That is how vaccines work: a small quantity of a weakened pathogen is introduced into the organism to trigger the primary response, generating the corresponding memory cells in the process and giving the organism a way to defend itself if it encounters the same pathogen again. To investigate this process, we propose a nonlinear, multi-scale mathematical model of the CD8 T-cell immune response to vaccination using a maturity-structured partial differential equation. At the intracellular scale, the level of expression of key proteins is modeled by a delay differential equation system, which gives the speed of maturation for each cell. The population of cells is modeled by a maturity-structured equation whose speeds are given by the intracellular model. We focus here on building the model, as well as on its asymptotic study. Finally, we display numerical simulations showing that the model can reproduce the biological dynamics of the cell population for both the primary and the secondary responses.
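    The intracellular part of such a model couples protein levels through delay differential equations. A scalar toy version (one protein with delayed, saturating production and linear decay; every parameter value here is hypothetical) shows the basic integration scheme:

```python
def integrate_dde(tau=1.0, k=2.0, decay=1.0, x0=0.1, t_end=20.0, dt=0.01):
    """Forward-Euler integration of a scalar delay differential equation
        x'(t) = k / (1 + x(t - tau)^2) - decay * x(t),
    a toy stand-in for the paper's multi-dimensional intracellular
    protein-expression system. Returns x(t_end)."""
    n_delay = int(round(tau / dt))
    history = [x0] * (n_delay + 1)   # constant history on [-tau, 0]
    x = x0
    for _ in range(int(t_end / dt)):
        x_delayed = history[-(n_delay + 1)]   # value at t - tau
        x += dt * (k / (1.0 + x_delayed ** 2) - decay * x)
        history.append(x)
    return x

# With k = 2 the equilibrium solves x * (1 + x^2) = 2, i.e. x = 1:
x_final = integrate_dde()
```

    The only difference from an ordinary ODE integrator is the history buffer: the production term is evaluated at the state one delay in the past.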

  13. Multi-scale Modeling of the Evolution of a Large-Scale Nourishment

    Science.gov (United States)

    Luijendijk, A.; Hoonhout, B.

    2016-12-01

    Morphological predictions are often computed using a single morphological model, commonly forced with schematized boundary conditions representing the time scale of the prediction. Recent model developments now allow us to think and act differently. This study presents some recent developments in coastal morphological modeling, focusing on flexible meshes, flexible coupling between models operating at different time scales, and a recently developed morphodynamic model for the intertidal and dry beach. This integrated modeling approach is applied to the Sand Engine mega-nourishment in The Netherlands to illustrate the added value of the integrated approach in both accuracy and computational efficiency. The state-of-the-art Delft3D Flexible Mesh (FM) model is applied at the study site under moderate wave conditions. One advantage is that the flexibility of the mesh structure allows a better representation of the water exchange with the lagoon, and of the corresponding morphological behavior, than the curvilinear grid used in the previous version of Delft3D. The XBeach model is applied to compute the morphodynamic response to storm events in detail, incorporating long-wave effects on bed level changes. The recently developed aeolian transport and bed change model AeoLiS is used to compute bed changes in the intertidal and dry beach area. To enable flexible couplings between the three models, a component-based environment has been developed using the BMI method. This allows a serial coupling of Delft3D FM and XBeach steered by a control module that uses a hydrodynamic time series as input (see figure). In addition, a parallel online coupling, with information exchanged at each time step, is made with the AeoLiS model, which predicts bed level changes in the intertidal and dry beach area. This study presents the first years of evolution of the Sand Engine computed with the integrated modelling approach.
Detailed comparisons
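    The control-module pattern described above (a hydrodynamic time series deciding which model advances each step) can be sketched generically. The classes, rates, and threshold below are hypothetical stand-ins, not the actual Delft3D FM/XBeach/AeoLiS APIs; a real BMI component exposes initialize/update/finalize calls.

```python
class ToyBmiModel:
    """Minimal stand-in for a BMI-wrapped morphodynamic model."""
    def __init__(self, bed_change_rate):
        self.bed_level = 0.0
        self.rate = bed_change_rate

    def set_state(self, bed_level):
        self.bed_level = bed_level

    def update(self, dt):
        self.bed_level += self.rate * dt

def run_serial_coupling(wave_heights, dt=1.0, storm_threshold=2.0):
    """Route each step to the storm model (XBeach-like) or the
    moderate-conditions model (Delft3D-FM-like), handing the bed
    state across at every switch."""
    moderate = ToyBmiModel(bed_change_rate=0.01)   # slow accretion
    storm = ToyBmiModel(bed_change_rate=-0.05)     # rapid storm erosion
    bed = 0.0
    for h in wave_heights:
        model = storm if h > storm_threshold else moderate
        model.set_state(bed)
        model.update(dt)
        bed = model.bed_level
    return bed

# Five calm steps then two storm steps: net change = 5*0.01 - 2*0.05
final_bed = run_serial_coupling([1.0] * 5 + [3.0] * 2)
```

    The design point is that the control module owns the shared state and the clock, so each component model only ever sees its own time step.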

  14. Modelling hydrological processes at different scales across Russian permafrost domain

    Science.gov (United States)

    Makarieva, Olga; Lebedeva, Lyudmila; Nesterova, Natalia; Vinogradova, Tatyana

    2017-04-01

    The project aims to study the interactions between permafrost and runoff generation processes across the Russian Arctic domain based on hydrological modelling. The uniqueness of the approach is a unified modelling framework that allows coupled simulations of upper permafrost dynamics and streamflow generation at different scales (from the soil column to large watersheds). The basis of the project is the hydrological model Hydrograph (Vinogradov et al. 2011; Semenova et al. 2013, 2015; Lebedeva et al. 2015). The model algorithms combine physically based and conceptual approaches to describing the processes of the land hydrological cycle, which allows a balance to be maintained between the complexity of the model design and the use of limited input information. A method for modeling heat dynamics in soil is integrated into the model. The main parameters of the model are physical properties of landscapes that can be measured (observed) in nature and are classified according to the types of soil, vegetation, and other characteristics. A set of parameters specified for the studied catchments (analog basins) can be transferred to ungauged basins with similar types of underlying surface without calibration. The results of modelling, from small research watersheds to large, poorly gauged river basins in different climate and landscape settings of the Russian Arctic (within the Yenisey, Lena, Yana, Indigirka and Kolyma river basins), will be presented. Based on the experience gained, methodological aspects of hydrological modelling in permafrost environments will be discussed. The study is partially supported by the Russian Foundation for Basic Research, projects 16-35-50151 and 17-05-01138.

  15. Site-scale groundwater flow modelling of Aberg

    Energy Technology Data Exchange (ETDEWEB)

    Walker, D. [Duke Engineering and Services (United States)]; Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden)]

    1998-12-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Aberg, which adopts input parameters from the Aespoe Hard Rock Laboratory in southern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum groundwater modelling program, to compute the heads, the Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The nested modelling approach and the scale dependency of hydraulic conductivity raise a number of questions regarding the regional to site-scale mass balance and the method's self-consistency. The transfer of regional heads via constant-head boundaries preserves the regional pattern of recharge and discharge in the site-scale model, and the regional to site-scale mass balance is thought to be adequate. The upscaling method appears to be approximately self-consistent with respect to the median performance measures at various grid scales. A series of variant cases indicates that the study results are insensitive to alternative methods of transferring boundary conditions from the regional model to the site-scale model. The flow paths, travel times and simulated heads appear to be consistent with on-site observations and simple scoping calculations. The variabilities of the performance measures are quite high for the Base Case, but the

  16. Experimental exploration of diffusion panel labyrinth in scale model

    Science.gov (United States)

    Vance, Mandi M.

    Small rehearsal and performance venues often lack the rich reverberation found in larger spaces. Higini Arau-Puchades has designed and implemented a system of diffusion panels in the Orchestra Rehearsal Room at the Great Theatre Liceu and in the Tonhalle St. Gallen that lengthens the reverberation time. These panels defy traditional room acoustics theory, which holds that adding material to a room will shorten the reverberation time. This work explores several versions of Arau-Puchades' panels and room characteristics in a scale model. Reverberation times are taken from room impulse response measurements in order to better understand the unusual phenomenon. Scale modeling enables many tests but is limited in accuracy by the higher frequency range involved. Further investigations are necessary to establish how the sound energy interacts with the diffusion panels and to confirm their validity in a range of applications.
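    The "traditional room acoustics theory" the panels defy is essentially Sabine's equation, RT60 = 0.161 V / A: adding absorbing area A can only shorten the reverberation time. A minimal sketch with hypothetical room numbers:

```python
def sabine_rt60(volume_m3, surfaces):
    """Sabine reverberation time RT60 = 0.161 * V / A, where A is the
    total absorption in m^2 sabins, summed over (area, absorption
    coefficient) pairs."""
    a_total = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / a_total

# Hypothetical 2000 m^3 rehearsal room: adding extra material
# (more total absorption A) lowers RT60, the opposite of the
# lengthening effect reported for the Arau-Puchades panels.
rt_bare = sabine_rt60(2000.0, [(600.0, 0.1), (200.0, 0.8)])
rt_panels = sabine_rt60(2000.0, [(600.0, 0.1), (200.0, 0.8), (50.0, 0.6)])
```

    A practical note on the accuracy limitation mentioned above: in a 1:N scale model, measurements must be made at frequencies N times higher, where air absorption behaves very differently.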

  17. A Rasch Model Analysis of the Mindful Attention Awareness Scale.

    Science.gov (United States)

    Goh, Hong Eng; Marais, Ida; Ireland, Michael James

    2017-04-01

    The Mindful Attention Awareness Scale was developed to measure individual differences in the tendency to be mindful. The current study examined the psychometric properties of the Mindful Attention Awareness Scale in a heterogeneous sample of 565 nonmeditators and 612 meditators using the polytomous Rasch model. The results showed that some items did not function the same way for the two groups. Overall, meditators had higher mean estimates than nonmeditators. The analysis identified a group of items as highly discriminating. Using a different model, Van Dam, Earleywine, and Borders in 2010 identified the same group of items as highly discriminating and concluded that they were the items with the most information. Multiple pieces of evidence from the Rasch analysis showed that these items discriminate highly because of local dependence, and hence do not supply independent information. We discuss how these different conclusions, based on similar findings, result from two very different measurement paradigms.
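    For reference, the dichotomous Rasch model underlying the polytomous version used in the study gives the endorsement probability as a logistic function of the difference between person trait level theta and item difficulty b:

```python
import math

def rasch_probability(theta, b):
    """Dichotomous Rasch model:
    P(X = 1 | theta, b) = 1 / (1 + exp(-(theta - b))).
    The study uses the polytomous extension; this is the simplest case."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A person whose trait level equals the item difficulty endorses it
# with probability exactly 0.5; a higher trait level raises it:
p_match = rasch_probability(theta=1.0, b=1.0)
p_higher = rasch_probability(theta=2.0, b=1.0)
```

    Because the probability depends only on theta - b, all items share one discrimination slope; items that appear "highly discriminating" therefore signal a model violation such as the local dependence discussed above.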

  18. Next-generation genome-scale models for metabolic engineering

    DEFF Research Database (Denmark)

    King, Zachary A.; Lloyd, Colton J.; Feist, Adam M.

    2015-01-01

    Constraint-based reconstruction and analysis (COBRA) methods have become widely used tools for metabolic engineering in both academic and industrial laboratories. By employing a genome-scale in silico representation of the metabolic network of a host organism, COBRA methods can be used to predict ... examples of applying COBRA methods to strain optimization are presented and discussed. Then, an outlook is provided on the next generation of COBRA models and the new types of predictions they will enable for systems metabolic engineering.
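    COBRA-style predictions come from solving a linear program over the genome-scale stoichiometric matrix. For a purely linear pathway the LP collapses to a bottleneck calculation, which makes a compact illustration (the network and bounds here are invented):

```python
def toy_fba_linear_pathway(uptake_max, enzyme_caps):
    """Maximal steady-state flux through a linear pathway
        substrate -> M1 -> M2 -> biomass.
    At steady state every internal flux equals the pathway flux, so the
    optimum is simply the tightest capacity bound. Genome-scale COBRA
    models solve a full linear program over the stoichiometric matrix
    instead; this min() is only the degenerate chain case."""
    return min(uptake_max, *enzyme_caps)

# Uptake allows 10 units/h, but the first enzyme caps flux at 8:
growth_flux = toy_fba_linear_pathway(10.0, [8.0, 12.0])
```

    Strain-optimization methods built on COBRA work by changing these bounds (e.g., knocking a reaction's capacity to zero) and re-solving for the growth optimum.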

  19. Vegetable parenting practices scale. Item response modeling analyses.

    Science.gov (United States)

    Chen, Tzu-An; O'Connor, Teresia M; Hughes, Sheryl O; Beltran, Alicia; Baranowski, Janice; Diep, Cassandra; Baranowski, Tom

    2015-08-01

    To evaluate the psychometric properties of a vegetable parenting practices scale using multidimensional polytomous item response modeling, which enables assessing item fit to latent variables and the distributional characteristics of the items in comparison to the respondents. We also tested for differences in the ways items function (differential item functioning) across the child's gender, ethnicity, age, and household income groups. Parents of 3- to 5-year-old children completed a self-reported vegetable parenting practices scale online. The scale consisted of 14 effective and 12 ineffective vegetable parenting practices items, each set with three subscales (responsiveness, structure, and control). Multidimensional polytomous item response modeling was conducted separately on the effective and ineffective vegetable parenting practices. One effective vegetable parenting practices item did not fit the model well in the full sample or across demographic groups, and another was a misfit in differential item functioning analyses across the child's gender. Significant differential item functioning was detected across children's age and ethnicity groups, and more among effective than ineffective vegetable parenting practices items. Wright maps showed that the items covered only parts of the latent trait distribution: the harder-to-respond end of the construct was not covered by the effective items, and the easier-to-respond end was not covered by the ineffective items. Several effective and ineffective vegetable parenting practices scale items functioned differently on the basis of the child's demographic characteristics; therefore, researchers should use these vegetable parenting practices scales with caution. Item response modeling should be incorporated in analyses of parenting practice questionnaires to better assess

  20. A dynamic similarity model for large eddy simulation of turbulent combustion

    Science.gov (United States)

    Jaberi, F. A.; James, S.

    1998-07-01

    A dynamic similarity subgrid-scale (SGS) unmixedness model is presented for large eddy simulation (LES) of turbulent reacting flows. The model is assessed both a priori and a posteriori via data obtained by direct numerical simulation (DNS) of homogeneous compressible turbulent flows involving a single-step Arrhenius reaction. The results of the a priori analysis indicate that the local values of the SGS unmixedness are accurately predicted by the model. A posteriori results also indicate that the statistics of the resolved temperature and scalars as obtained by LES compare favorably with DNS values.
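    The similarity idea behind such models (estimate the unresolved correlations by applying a second, coarser test filter to the resolved field) can be sketched on a 1-D periodic field. This illustrates the generic scale-similarity SGS stress, not the paper's unmixedness closure; filter width and the sample field are arbitrary choices.

```python
import math

def box_filter(u, width=3):
    """Top-hat (moving-average) filter of odd width on a periodic 1-D field."""
    n, half = len(u), width // 2
    return [sum(u[(i + j) % n] for j in range(-half, half + 1)) / width
            for i in range(n)]

def similarity_sgs_stress(u, width=3):
    """Scale-similarity estimate of the SGS stress on a 1-D field:
        tau_i = bar(u*u)_i - bar(u)_i * bar(u)_i,
    with bar() a test filter applied to the resolved field."""
    uu_bar = box_filter([x * x for x in u], width)
    u_bar = box_filter(u, width)
    return [uu_bar[i] - u_bar[i] ** 2 for i in range(len(u))]

# For a resolved sine wave the estimated stress is positive where the
# field varies; for a constant field it vanishes identically.
wave = [math.sin(2.0 * math.pi * i / 16.0) for i in range(16)]
tau = similarity_sgs_stress(wave)
```

    With a top-hat filter, tau is the local variance of the field within the stencil, so it is nonnegative by construction and largest where the smallest resolved scales are most active.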

  1. Modeling basin- and plume-scale processes of CO2 storage for full-scale deployment

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Q.; Birkholzer, J.T.; Mehnert, E.; Lin, Y.-F.; Zhang, K.

    2009-08-15

    Integrated modeling of basin- and plume-scale processes induced by full-scale deployment of CO2 storage was applied to the Mt. Simon Aquifer in the Illinois Basin. A three-dimensional mesh was generated with local refinement around 20 injection sites, with approximately 30 km spacing. A total annual injection rate of 100 Mt CO2 over 50 years was used. The CO2-brine flow at the plume scale and the single-phase flow at the basin scale were simulated. Simulation results show the overall shape of a CO2 plume consisting of a typical gravity-override subplume in the bottom injection zone of high injectivity and a pyramid-shaped subplume in the overlying multilayered Mt. Simon, indicating the important role of a secondary seal with relatively low permeability and high entry capillary pressure. The secondary-seal effect is manifested by retarded upward CO2 migration as a result of multiple secondary seals, coupled with lateral preferential CO2 viscous fingering through high-permeability layers. The plume width varies from 9.0 to 13.5 km at 200 years, indicating slow CO2 migration and no plume interference between storage sites. On the basin scale, pressure perturbations propagate quickly away from injection centers, interfere after less than 1 year, and eventually reach the basin margins. The simulated pressure buildup of 35 bar in the injection area is not expected to affect caprock geomechanical integrity. Moderate pressure buildup is observed in Mt. Simon in northern Illinois. However, its impact on groundwater resources is less than the hydraulic drawdown induced by long-term extensive pumping from overlying freshwater aquifers.

  2. Ecohydrologic Modeling of Hillslope Scale Processes in Dryland Ecosystems

    Science.gov (United States)

    Franz, T. E.; King, E. G.; Lester, A.; Caylor, K. K.; Nordbotten, J.; Celia, M. A.; Rodriguez-Iturbe, I.

    2008-12-01

    Dryland ecosystem processes are governed by complex interactions between the atmosphere, soil, and vegetation that are tightly coupled through the mass balance of water. At the scale of individual hillslopes, the mass balance of water is dominated by mechanisms of water redistribution that require spatially explicit representation. Fully resolved physical models of surface and subsurface processes require numerical routines that are not trivial to solve at the spatial (hillslope) and temporal (many plant generations) scales of ecohydrologic interest. To reduce model complexity, we have used small-scale field data to derive empirical surface flux terms for representative patches (bare soil, grass, and tree) in a dryland ecosystem of central Kenya. The model is coupled spatially in the subsurface by an analytical solution to the Boussinesq equation for a sloping slab. The semi-analytical model is spatially explicit and driven by pulses of precipitation over a simulation period that represents many plant generations. By examining long-term model dynamics, we are able to investigate the principles of self-organization and optimization (maximization of plant water use and minimization of water lost from the system) in dryland ecosystems for various initial conditions and degrees of climatic variability. Precipitation records in central Kenya reveal a shift to more intense, infrequent rain events with a constant annual total. The range of stable solutions across initial conditions and climatic variability is important to land management agencies for addressing current grazing practices and future policies. The model is a quantitative tool for addressing perturbations to the system and the overall sustainability of pastoralist activities in dryland ecosystems.

  3. Disaggregation, aggregation and spatial scaling in hydrological modelling

    Science.gov (United States)

    Becker, Alfred; Braun, Peter

    1999-04-01

    A typical feature of the land surface is its heterogeneity in terms of the spatial variability of land surface characteristics and parameters controlling physical/hydrological, biological, and other related processes. Different forms and degrees of heterogeneity need to be taken into account in hydrological modelling. The first part of the article concerns the conditions under which a disaggregation of the land surface into subareas of uniform or "quasihomogeneous" behaviour (hydrotopes or hydrological response units - HRUs) is indispensable. In a case study in northern Germany, it is shown that forests in contrast to arable land, areas with shallow groundwater in contrast to those with deep groundwater, water surfaces and sealed areas should generally be distinguished (disaggregated) in modelling, whereas internal heterogeneities within these hydrotopes can be assessed statistically, e.g., by areal distribution functions (soil water holding capacity, hydraulic conductivity, etc.). Models with hydrotope-specific parameters can be applied to calculate the "vertical" processes (fluxes, storages, etc.), and this, moreover, for hydrotopes of different area, and even for groups of distributed hydrotopes in a reference area (hydrotope classes), provided that the meteorological conditions are similar. Thus, a scaling problem does not really exist in this process domain. The primary domain for the application of scaling laws is that of lateral flows in landscapes and river basins. This is illustrated in the second part of the article, where results of a case study in Bavaria/Germany are presented and discussed. It is shown that scaling laws can be applied efficiently for the determination of the Instantaneous Unit Hydrograph (IUH) of the surface runoff system in river basins: simple scaling for basins larger than 43 km², and multiple scaling for smaller basins. Surprisingly, only two parameters were identified as important in the derived relations: the drainage area and, in some

  4. Regional scale hydrology with a new land surface processes model

    Science.gov (United States)

    Laymon, Charles; Crosson, William

    1995-01-01

    Through the CaPE Hydrometeorology Project, we have developed an understanding of some of the unique data quality issues involved in assimilating data of disparate types for regional-scale hydrologic modeling within a GIS framework. Among others, the issues addressed here include the development of adequate validation of the surface water budget, implementation of the STATSGO soil data set, and implementation of a remote sensing-derived landcover data set to account for surface heterogeneity. A model of land surface processes has been developed and used in studies of the sensitivity of surface fluxes and runoff to soil and landcover characterization. Results of these experiments have raised many questions about how to treat the scale-dependence of land surface-atmosphere interactions on spatial and temporal variability. In light of these questions, additional modifications are being considered for the Marshall Land Surface Processes Model. It is anticipated that these techniques can be tested and applied in conjunction with GCIP activities over regional scales.

  5. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    Science.gov (United States)

    Nance, Donald K.; Liever, Peter A.

    2015-01-01

    The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test (SMAT), conducted at Marshall Space Flight Center (MSFC). The test data quantify the effectiveness of the SLS IOP suppression system and improve the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series require identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  6. Modeling Biology Spanning Different Scales: An Open Challenge

    Directory of Open Access Journals (Sweden)

    Filippo Castiglione

    2014-01-01

    Full Text Available It is becoming increasingly clear that, in order to obtain a unified description of the different mechanisms governing the behavior and causality relations among the various parts of a living system, the development of comprehensive computational and mathematical models at different space and time scales is required. This is one of the most formidable challenges of modern biology, characterized by the availability of huge amounts of high-throughput measurements. In this paper we draw attention to the importance of multiscale modeling in the framework of studies of biological systems in general and of the immune system in particular.

  7. Model Predictive Control for a Small Scale Unmanned Helicopter

    Directory of Open Access Journals (Sweden)

    Jianfu Du

    2008-11-01

    Full Text Available Kinematical and dynamical equations of a small-scale unmanned helicopter are presented in the paper. Based on these equations, a model predictive control (MPC) method is proposed for controlling the helicopter. This novel method allows directly accounting for the existing time delays, which are used to model the dynamics of the actuators and the aerodynamics of the main rotor. The limits of the actuators are also taken into consideration during the controller design. The proposed control algorithm was verified in real flight experiments, where good performance was shown in position control mode.

  8. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    Science.gov (United States)

    King, Zachary A.; Lu, Justin; Dräger, Andreas; Miller, Philip; Federowicz, Stephen; Lerman, Joshua A.; Ebrahim, Ali; Palsson, Bernhard O.; Lewis, Nathan E.

    2016-01-01

    Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data. PMID:26476456
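
    The application programming interface mentioned above can be exercised by constructing request URLs against BiGG's REST endpoints. The sketch below only builds URLs (no network call is made); the `/api/v2/...` paths and the example model id `e_coli_core` are assumptions based on the public documentation and should be verified at http://bigg.ucsd.edu before use.

```python
# Sketch of URL construction for the BiGG Models REST API.
# Endpoint paths are assumptions; verify against the BiGG site.

BIGG_API = "http://bigg.ucsd.edu/api/v2"

def list_models_url():
    """URL returning the catalogue of hosted genome-scale models."""
    return f"{BIGG_API}/models"

def model_url(model_id):
    """URL returning metadata for one model, e.g. 'e_coli_core'."""
    return f"{BIGG_API}/models/{model_id}"

def reaction_url(model_id, reaction_id):
    """URL returning one reaction within a model."""
    return f"{BIGG_API}/models/{model_id}/reactions/{reaction_id}"
```

    A client would fetch these URLs with any HTTP library and parse the JSON responses.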

  9. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models.

    Science.gov (United States)

    King, Zachary A; Lu, Justin; Dräger, Andreas; Miller, Philip; Federowicz, Stephen; Lerman, Joshua A; Ebrahim, Ali; Palsson, Bernhard O; Lewis, Nathan E

    2016-01-04

    Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  10. Radar altimetry assimilation in catchment-scale hydrological models

    Science.gov (United States)

    Bauer-Gottwein, P.; Michailovsky, C. I. B.

    2012-04-01

    Satellite-borne radar altimeters provide time series of river and lake levels with global coverage and moderate temporal resolution. Current missions can detect rivers down to a minimum width of about 100m, depending on local conditions around the virtual station. Water level time series from space-borne radar altimeters are an important source of information in ungauged or poorly gauged basins. However, many water resources management applications require information on river discharge. Water levels can be converted into river discharge by means of a rating curve, if sufficient and accurate information on channel geometry, slope and roughness is available. Alternatively, altimetric river levels can be assimilated into catchment-scale hydrological models. The updated models can subsequently be used to produce improved discharge estimates. In this study, a Muskingum routing model for a river network is updated using multiple radar altimetry time series. The routing model is forced with runoff produced by lumped-parameter rainfall-runoff models in each subcatchment. Runoff is uncertain because of errors in the precipitation forcing, structural errors in the rainfall-runoff model as well as uncertain rainfall-runoff model parameters. Altimetric measurements are translated into river reach storage based on river geometry. The Muskingum routing model is forced with a runoff ensemble and storages in the river reaches are updated using a Kalman filter approach. The approach is applied to the Zambezi and Brahmaputra river basins. Assimilation of radar altimetry significantly improves the capability of the models to simulate river discharge.
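
    The routing component described above can be sketched as a single-reach Muskingum update (the ensemble forcing and Kalman-filter update of the study are omitted; parameter values K, x, dt are illustrative, not the authors'):

```python
def muskingum_route(inflow, K=2.0, x=0.2, dt=1.0):
    """Route an inflow hydrograph through one reach using Muskingum
    coefficients derived from storage constant K, weighting factor x,
    and time step dt (consistent units; values here are illustrative)."""
    denom = 2.0 * K * (1.0 - x) + dt
    c0 = (dt - 2.0 * K * x) / denom
    c1 = (dt + 2.0 * K * x) / denom
    c2 = (2.0 * K * (1.0 - x) - dt) / denom   # c0 + c1 + c2 == 1
    outflow = [inflow[0]]                      # assume initial steady state
    for i_prev, i_now in zip(inflow, inflow[1:]):
        outflow.append(c0 * i_now + c1 * i_prev + c2 * outflow[-1])
    return outflow
```

    In an assimilation setting, the reach storages implied by this recursion are the states that altimetric water levels would update.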

  11. Using Scaling to Understand, Model and Predict Global Scale Anthropogenic and Natural Climate Change

    Science.gov (United States)

    Lovejoy, S.; del Rio Amador, L.

    2014-12-01

    The atmosphere is variable over twenty orders of magnitude in time (≈10^-3 to 10^17 s) and almost all of the variance is in the spectral "background", which we show can be divided into five scaling regimes: weather, macroweather, climate, macroclimate and megaclimate. We illustrate this with instrumental and paleo data. Based on the signs of the fluctuation exponent H, we argue that while the weather is "what you get" (H>0: fluctuations increasing with scale), it is macroweather (H<0: fluctuations decreasing with scale) that is "what you expect". The conventional approach, which treats the background as white noise and focuses on quasi-periodic variability, assumes a spectrum that is in error by a factor of a quadrillion (≈10^15). Using this scaling framework, we can quantify the natural variability, distinguish it from anthropogenic variability, test various statistical hypotheses and make stochastic climate forecasts. For example, we estimate the probability that the warming is simply a giant century-long natural fluctuation to be less than 1%, most likely less than 0.1%, and estimate return periods for natural warming events of different strengths and durations, including the slow down ("pause") in the warming since 1998. The return period for the pause was found to be 20-50 years, i.e. not very unusual; however it immediately follows a 6 year "pre-pause" warming event of almost the same magnitude with a similar return period (30 - 40 years). To improve on these unconditional estimates, we can use scaling models to exploit the long range memory of the climate process to make accurate stochastic forecasts of the climate, including the pause. We illustrate stochastic forecasts on monthly and annual scale series of global and northern hemisphere surface temperatures. We obtain forecast skill nearly as high as the theoretical (scaling) predictability limits allow: for example, using hindcasts we find that at 10 year forecast horizons we can still explain ≈ 15% of the anomaly variance. These scaling hindcasts have comparable - or smaller - RMS errors than existing GCMs. We discuss how these
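
    The fluctuation exponent H invoked above can be estimated from a series by regressing the logarithm of mean absolute differences against the logarithm of the time lag. The sketch below is illustrative (not the authors' Haar-fluctuation machinery); it recovers H ≈ 1/2 for a Brownian-type random walk, the H>0 "fluctuations increase with scale" case.

```python
import math
import random

def fluctuation_exponent(series, lags):
    """Estimate H from mean absolute-difference fluctuations:
    F(lag) ~ lag**H, fitted by least squares in log-log space."""
    points = []
    for lag in lags:
        n = len(series) - lag
        f = sum(abs(series[i + lag] - series[i]) for i in range(n)) / n
        points.append((math.log(lag), math.log(f)))
    mx = sum(x for x, _ in points) / len(points)
    my = sum(y for _, y in points) / len(points)
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den

# A Brownian-type random walk has H = 1/2.
random.seed(1)
walk = [0.0]
for _ in range(20000):
    walk.append(walk[-1] + random.gauss(0.0, 1.0))
H = fluctuation_exponent(walk, [1, 2, 4, 8, 16, 32])
```

    A white-noise series would instead give H ≈ -1/2 under the (Haar-type) definitions used in the scaling literature, which is why treating the background as white noise misstates the spectrum so badly.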

  12. Reconstructing genome-scale metabolic models with merlin.

    Science.gov (United States)

    Dias, Oscar; Rocha, Miguel; Ferreira, Eugénio C; Rocha, Isabel

    2015-04-30

    The Metabolic Models Reconstruction Using Genome-Scale Information (merlin) tool is a user-friendly Java application that aids the reconstruction of genome-scale metabolic models for any organism that has its genome sequenced. It performs the major steps of the reconstruction process, including the functional genomic annotation of the whole genome and subsequent construction of the portfolio of reactions. Moreover, merlin includes tools for the identification and annotation of genes encoding transport proteins, generating the transport reactions for those carriers. It also performs the compartmentalisation of the model, predicting the organelle localisation of the proteins encoded in the genome and thus the localisation of the metabolites involved in the reactions promoted by such enzymes. The gene-proteins-reactions (GPR) associations are automatically generated and included in the model. Finally, merlin expedites the transition from genomic data to draft metabolic model reconstructions exported in the SBML standard format, allowing the user to have a preliminary view of the biochemical network, which can be manually curated within the environment provided by merlin. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  13. Macro Scale Independently Homogenized Subcells for Modeling Braided Composites

    Science.gov (United States)

    Blinzler, Brina J.; Goldberg, Robert K.; Binienda, Wieslaw K.

    2012-01-01

    An analytical method has been developed to analyze the impact response of triaxially braided carbon fiber composites, including the penetration velocity and impact damage patterns. In the analytical model, the triaxial braid architecture is simulated by using four parallel shell elements, each of which is modeled as a laminated composite. Currently, each shell element is considered to be a smeared homogeneous material. The commercial transient dynamic finite element code LS-DYNA is used to conduct the simulations, and a continuum damage mechanics model internal to LS-DYNA is used as the material constitutive model. To determine the stiffness and strength properties required for the constitutive model, a top-down approach for determining the strength properties is merged with a bottom-up approach for determining the stiffness properties. The top-down portion uses global strengths obtained from macro-scale coupon level testing to characterize the material strengths for each subcell. The bottom-up portion uses micro-scale fiber and matrix stiffness properties to characterize the material stiffness for each subcell. Simulations of quasi-static coupon level tests for several representative composites are conducted along with impact simulations.
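
    A minimal illustration of the bottom-up stiffness step (micro-scale fiber and matrix properties homogenized into a subcell modulus) is the classical Voigt/Reuss rule of mixtures. The constituent values below are generic carbon-fiber/epoxy numbers, not data from the paper:

```python
def rule_of_mixtures(e_fiber, e_matrix, v_fiber):
    """Voigt (longitudinal, E1) and Reuss (transverse, E2) estimates of a
    homogenized subcell modulus from constituent moduli and the fiber
    volume fraction. E1 and E2 bound the true effective modulus."""
    e1 = v_fiber * e_fiber + (1.0 - v_fiber) * e_matrix
    e2 = 1.0 / (v_fiber / e_fiber + (1.0 - v_fiber) / e_matrix)
    return e1, e2

# Generic carbon/epoxy values (GPa), 60% fiber volume fraction.
E1, E2 = rule_of_mixtures(230.0, 3.5, 0.6)
```

    The top-down portion of the paper's approach would then replace strengths derived this way with values back-calculated from coupon-level tests.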

  14. Research of Model Scale Seawater Intrusion using Geoelectric Method

    Directory of Open Access Journals (Sweden)

    Supriyadi Supriyadi

    2011-08-01

    Full Text Available In-depth experience and knowledge are needed in analyzing and predicting seawater intrusion. We report here a physical model for monitoring seawater intrusion at laboratory scale. The model used in this research is a glass basin consisting of two parts: soil and seawater. The intrusion of seawater into the soil in the glass basin is modelled. The results of 2-D inversion using the software Res2DInv32 showed that seawater intrusion in the soil model can be detected with the Schlumberger-configuration resistivity method. Watering the soil with freshwater caused the electrical resistivity to decrease. This can be seen from the change in the resistivity pseudosection before and after watering with different cumulative volumes of freshwater in different soils. After being intruded by seawater, the measured soil resistivity was 2.22 Ωm – 5.69 Ωm, which indicates that the soil had been intruded.
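
    The Schlumberger-configuration measurement referred to above converts a measured voltage-to-current ratio into apparent resistivity through a geometric factor. This is the standard textbook relation, not code tied to the paper's instrument:

```python
import math

def schlumberger_apparent_resistivity(ab_half, mn_half, delta_v, current):
    """Apparent resistivity (ohm-m) for a Schlumberger array:
    rho_a = K * (dV / I), with geometric factor
    K = pi * (L**2 - l**2) / (2 * l),
    where L = AB/2 is the current-electrode half-spacing and
    l = MN/2 the potential-electrode half-spacing (metres)."""
    k = math.pi * (ab_half**2 - mn_half**2) / (2.0 * mn_half)
    return k * delta_v / current
```

    Over a homogeneous half-space the formula returns the true resistivity; lateral and vertical contrasts (such as a saline intrusion front) show up as departures from it in the pseudosection.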

  15. Current state of genome-scale modeling in filamentous fungi.

    Science.gov (United States)

    Brandl, Julian; Andersen, Mikael R

    2015-06-01

    The group of filamentous fungi contains important species used in industrial biotechnology for acid, antibiotics and enzyme production. Their unique lifestyle turns these organisms into a valuable genetic reservoir of new natural products and biomass degrading enzymes that has not been used to full capacity. One of the major bottlenecks in the development of new strains into viable industrial hosts is the alteration of the metabolism towards optimal production. Genome-scale models promise a reduction in the time needed for metabolic engineering by predicting the most potent targets in silico before testing them in vivo. The increasing availability of high quality models and molecular biological tools for manipulating filamentous fungi renders the model-guided engineering of these fungal factories possible with comprehensive metabolic networks. A typical fungal model contains on average 1138 unique metabolic reactions and 1050 ORFs, making them a vast knowledge-base of fungal metabolism. In the present review we focus on the current state as well as potential future applications of genome-scale models in filamentous fungi.

  16. Pore-scale modeling of wettability alteration during primary drainage

    Science.gov (United States)

    Kallel, W.; van Dijke, M. I. J.; Sorbie, K. S.; Wood, R.

    2017-03-01

    While carbonate reservoirs are recognized to be weakly-to-moderately oil-wet at the core-scale, pore-scale wettability distributions remain poorly understood. In particular, the wetting state of micropores (the smallest pore classes) is debated. A candidate mechanism for early wettability alteration is the partitioning of polar non-hydrocarbon compounds from the oil-phase into the water-phase. We implement a diffusion/adsorption model for these compounds that triggers a wettability alteration from initially water-wet to intermediate-wet conditions. This mechanism is incorporated in a quasi-static pore-network model, to which we add a notional time-dependency of the quasi-static invasion percolation mechanism. The model qualitatively reproduces experimental observations where an early rapid wettability alteration involving these small polar species occurred during primary drainage. Interestingly, we could invoke clear differences in the primary drainage patterns by varying both the extent of wettability alteration and the balance between the processes of oil invasion and wetting change. Combined, these parameters dictate the initial water saturation for waterflooding. Indeed, under conditions where oil invasion is slow compared to a fast and relatively strong wetting change, the model results in significant non-zero water saturations. However, for relatively fast oil invasion or small wetting changes, the model allows higher oil saturations at fixed maximum capillary pressures, and invasion of micropores at moderate capillary pressures.

  17. Modeling and Simulation of a lab-scale Fluidised Bed

    Directory of Open Access Journals (Sweden)

    Britt Halvorsen

    2002-04-01

    Full Text Available The flow behaviour of a lab-scale fluidised bed with a central jet has been simulated. The study has been performed with an in-house computational fluid dynamics (CFD) model named FLOTRACS-MP-3D. The CFD model is based on a multi-fluid Eulerian description of the phases, where the kinetic theory for granular flow forms the basis for turbulence modelling of the solid phases. A two-dimensional Cartesian co-ordinate system is used to describe the geometry. This paper discusses whether bubble formation and bed height are influenced by the coefficient of restitution, the drag model, and the number of solid phases. Measurements of the same fluidised bed with a digital video camera are performed. Computational results are compared with the experimental results, and the discrepancies are discussed.

  18. Censored rainfall modelling for estimation of fine-scale extremes

    Directory of Open Access Journals (Sweden)

    D. Cross

    2018-01-01

    Full Text Available Reliable estimation of rainfall extremes is essential for drainage system design, flood mitigation, and risk quantification. However, traditional techniques lack physical realism and extrapolation can be highly uncertain. In this study, we improve the physical basis for short-duration extreme rainfall estimation by simulating the heavy portion of the rainfall record mechanistically using the Bartlett–Lewis rectangular pulse (BLRP) model. Mechanistic rainfall models have had a tendency to underestimate rainfall extremes at fine temporal scales. Despite this, the simple process representation of rectangular pulse models is appealing in the context of extreme rainfall estimation because it emulates the known phenomenology of rainfall generation. A censored approach to Bartlett–Lewis model calibration is proposed and performed for single-site rainfall from two gauges in the UK and Germany. Extreme rainfall estimation is performed for each gauge at the 5, 15, and 60 min resolutions, and considerations for censor selection are discussed.
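
    The censoring idea above amounts to fitting the model only to the heavy part of the record. A minimal sketch of constructing such a censored sample follows; the threshold value and the decision to also report the exceedance fraction are illustrative choices, not the paper's calibration procedure:

```python
def censor_record(rainfall, threshold):
    """Split a rainfall series at a censor threshold: values below it
    are discarded for calibration, while the exceedance fraction is
    retained so the fitted model can be conditioned on it."""
    heavy = [r for r in rainfall if r >= threshold]
    exceedance_fraction = len(heavy) / len(rainfall)
    return heavy, exceedance_fraction
```

    Calibration would then match the model's simulated heavy-tail statistics (e.g. moments of the censored sample) rather than statistics of the full, mostly dry record.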

  19. Current state of genome-scale modeling in filamentous fungi

    DEFF Research Database (Denmark)

    Brandl, Julian; Andersen, Mikael Rørdam

    2015-01-01

    The group of filamentous fungi contains important species used in industrial biotechnology for acid, antibiotics and enzyme production. Their unique lifestyle turns these organisms into a valuable genetic reservoir of new natural products and biomass degrading enzymes that has not been used to full...... testing them in vivo. The increasing availability of high quality models and molecular biological tools for manipulating filamentous fungi renders the model-guided engineering of these fungal factories possible with comprehensive metabolic networks. A typical fungal model contains on average 1138 unique...... metabolic reactions and 1050 ORFs, making them a vast knowledge-base of fungal metabolism. In the present review we focus on the current state as well as potential future applications of genome-scale models in filamentous fungi....

  20. Effects of input uncertainty on cross-scale crop modeling

    Science.gov (United States)

    Waha, Katharina; Huth, Neil; Carberry, Peter

    2014-05-01

    The quality of data on climate, soils and agricultural management in the tropics is in general low or data is scarce leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options or food security studies. Crop modelers are concerned about input data accuracy as this, together with an adequate representation of plant physiology processes and choice of model parameters, are the key factors for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ± 0.2°C, ± 2 % and ± 3 % respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7 % in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time-series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input

  1. Scaling behavior of an airplane-boarding model.

    Science.gov (United States)

    Brics, Martins; Kaupužs, Jevgenijs; Mahnke, Reinhard

    2013-04-01

    An airplane-boarding model, introduced earlier by Frette and Hemmer [Phys. Rev. E 85, 011130 (2012)], is studied with the aim of determining precisely its asymptotic power-law scaling behavior for a large number of passengers N. Based on Monte Carlo simulation data for very large system sizes up to N=2^16=65536, we have analyzed numerically the scaling behavior of the mean boarding time and other related quantities. In analogy with critical phenomena, we have used appropriate scaling Ansätze, which include the leading term as some power of N (e.g., ∝N^α for the mean boarding time t_b), as well as power-law corrections to scaling. Our results clearly show that α=1/2 holds with a very high numerical accuracy (α=0.5001±0.0001). This value deviates essentially from α≈0.69, obtained earlier by Frette and Hemmer from data within the range 2≤N≤16. Our results confirm the convergence of the effective exponent α_eff(N) to 1/2 at large N, as observed by Bernstein. Our analysis explains this effect. Namely, the effective exponent α_eff(N) varies from values of about 0.7 for small system sizes to the true asymptotic value 1/2 at N→∞, almost linearly in N^(-1/3) for large N. This means that the variation is caused by corrections to scaling, the leading correction-to-scaling exponent being θ≈1/3. We have also estimated other exponents: ν=1/2 for the mean number of passengers taking seats simultaneously in one time step, β=1 for the second moment of t_b, and γ≈1/3 for its variance.
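
    The correction-to-scaling picture described above can be made concrete with an assumed form T(N) = a·N^(1/2)·(1 − b·N^(−1/3)), with coefficients a and b purely illustrative (not fitted values from the paper). The two-point effective exponent then drifts toward 1/2 from above as N grows, mimicking how a fit over 2≤N≤16 can overshoot to ≈0.7:

```python
import math

def boarding_time(N, a=1.0, b=0.5):
    """Assumed scaling form with a leading correction-to-scaling term
    (theta = 1/3); coefficients are illustrative, not fitted values."""
    return a * N**0.5 * (1.0 - b * N**(-1.0 / 3.0))

def alpha_eff(N):
    """Two-point effective exponent between system sizes N and 2N."""
    return math.log(boarding_time(2 * N) / boarding_time(N)) / math.log(2)
```

    Evaluating alpha_eff at small and large N shows the drift: it is well above 1/2 near N=8 and within about 0.005 of 1/2 at N=65536.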

  2. Pore-scale simulation of microbial growth using a genome-scale metabolic model: Implications for Darcy-scale reactive transport

    Science.gov (United States)

    Tartakovsky, G. D.; Tartakovsky, A. M.; Scheibe, T. D.; Fang, Y.; Mahadevan, R.; Lovley, D. R.

    2013-09-01

    Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacteria Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport is governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented using a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially-mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways, and describe reactions rates using empirically-derived rate formulations such as the Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted lower biomass yield, and different stoichiometry for iron consumption, in comparison to prior Monod formulations based on energetics considerations. We were able to fit an equivalent Monod model, by modifying the reaction stoichiometry and biomass yield coefficient, that could effectively match results of the genome-scale simulation of microbial behaviors under excess nutrient conditions, but predictions of the fitted Monod model deviated from those of the genome-scale model
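
    The Monod-type rate law referenced above takes the standard saturating form; a dual-substrate variant, limited by both electron donor and electron acceptor, is a common lumped stand-in for the genome-scale description. The parameter names and values below are illustrative, not the study's fitted values:

```python
def monod_rate(mu_max, s, k_s):
    """Single-substrate Monod specific growth rate:
    mu = mu_max * S / (K_s + S)."""
    return mu_max * s / (k_s + s)

def dual_monod_rate(mu_max, donor, k_d, acceptor, k_a):
    """Dual-Monod rate limited by both an electron donor (e.g. acetate)
    and an electron acceptor (e.g. solid-phase Fe(III))."""
    return mu_max * (donor / (k_d + donor)) * (acceptor / (k_a + acceptor))
```

    Fitting an "equivalent Monod model" as in the study amounts to adjusting mu_max, the half-saturation constants, and the stoichiometry/yield so this lumped rate tracks the genome-scale predictions.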

  3. Pore-scale simulation of microbial growth using a genome-scale metabolic model: Implications for Darcy-scale reactive transport

    Energy Technology Data Exchange (ETDEWEB)

    Tartakovsky, Guzel D.; Tartakovsky, Alexandre M.; Scheibe, Timothy D.; Fang, Yilin; Mahadevan, Radhakrishnan; Lovley, Derek R.

    2013-09-07

    Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacteria Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport is governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented using a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially-mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways, and describe reactions rates using empirically-derived rate formulations such as the Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted lower biomass yield, and different stoichiometry for iron consumption, in comparison to prior Monod formulations based on energetics considerations. 
We were able to fit an equivalent Monod model, by modifying the reaction stoichiometry and biomass yield coefficient, that could effectively match results of the genome-scale simulation of microbial behaviors under excess nutrient conditions, but predictions of the fitted Monod model deviated from those of the genome-scale model under
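The fitting of an equivalent Monod model described in this record can be sketched numerically; the rate law, substrate range, noise level and "true" parameter values below are hypothetical illustrations, not quantities from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def monod_rate(s, v_max, k_s):
    """Monod-type kinetics: reaction rate as a function of substrate concentration."""
    return v_max * s / (k_s + s)

# Synthetic rate "observations" standing in for genome-scale model output
# (all values are hypothetical, chosen only for illustration)
s_obs = np.linspace(0.05, 5.0, 30)          # substrate (e.g. acetate) concentration
true_vmax, true_ks = 1.2, 0.4
rng = np.random.default_rng(0)
r_obs = monod_rate(s_obs, true_vmax, true_ks) * (1 + 0.02 * rng.standard_normal(30))

# Estimate the equivalent Monod parameters by nonlinear least squares
(v_max_fit, k_s_fit), _ = curve_fit(monod_rate, s_obs, r_obs, p0=(1.0, 1.0))
print(v_max_fit, k_s_fit)                   # close to the "true" 1.2 and 0.4
```

In the study itself the reaction stoichiometry and biomass yield coefficient were also adjusted; this sketch covers only the rate-parameter fit.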

  4. Chronic hyperglycemia affects bone metabolism in adult zebrafish scale model.

    Science.gov (United States)

    Carnovali, Marta; Luzi, Livio; Banfi, Giuseppe; Mariotti, Massimo

    2016-12-01

    Type II diabetes mellitus is a metabolic disease characterized by chronic hyperglycemia, which induces other pathologies including diabetic retinopathy and bone disease. The mechanisms implicated in the bone alterations induced by type II diabetes mellitus have been debated for years and are not yet clear, because other factors involved mask the bone mineral density alterations. Despite this, it is well known that chronic hyperglycemia affects bone health, causing fragility, reduced mechanical strength and an increased propensity for fractures because of impaired bone matrix microstructure and aberrant bone cell function. Adult Danio rerio (zebrafish) represents a powerful model for studying glucose and bone metabolism. The aim of this study was therefore to evaluate the bone effects of chronic hyperglycemia in a new type II diabetes mellitus zebrafish model created by glucose administration in the water. Fish blood glucose levels were monitored in time-course experiments, and basal glycemia was found to be increased. After 1 month of treatment, the morphology of the retinal blood vessels showed abnormalities resembling human diabetic retinopathy. Adult bone metabolism was evaluated in fish using the scales as a read-out system. The scales of glucose-treated fish did not deposit new mineralized matrix and showed bone resorption lacunae associated with intense osteoclast activity. In addition, hyperglycemic fish scales showed a significant decrease in alkaline phosphatase activity and an increase in tartrate-resistant acid phosphatase activity, in association with alterations in other bone-specific markers. These data indicate an imbalance in bone metabolism, which leads to the osteoporotic-like phenotype visualized through scale mineral matrix staining. This zebrafish model of hyperglycemic damage can contribute to elucidating in vivo the molecular mechanisms of the metabolic changes that influence bone tissue regulation in human diabetic patients.

  5. Scaling predictive modeling in drug development with cloud computing.

    Science.gov (United States)

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets and the increasing time needed for analysis are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared them with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investment makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.

  6. Klobuchar-like Ionospheric Model for Different Scales Areas

    Directory of Open Access Journals (Sweden)

    LIU Chen

    2017-05-01

    Full Text Available Nowadays, Klobuchar is the most widely used ionospheric model in positioning based on single-frequency terminals, and various refined versions of it have been proposed in pursuit of ever-higher positioning accuracy. The variation of nighttime TEC (total electron content) with local time and the variation of TEC with latitude were analyzed using GIMs (global ionospheric maps). After summarizing the widely applied model refinement schemes, we propose in this paper a Klobuchar-like model for regions of different scales. The Klobuchar-like, 14-parameter Klobuchar and 8-parameter Klobuchar models were established for small, large and global regions using GIMs for different solar activity periods and seasons, respectively. The Klobuchar-like models, with correction rates of 92.96%, 91.55% and 72.67% in the small, large and global regions respectively, achieve higher correction rates than the 14-parameter Klobuchar, 8-parameter Klobuchar and GPS Klobuchar models, which verifies the effectiveness and practicability of the Klobuchar-like model.
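For reference, the baseline 8-parameter Klobuchar correction that these refined models compete with can be sketched as follows; the alpha/beta coefficients shown are illustrative placeholders (in practice they come from the GPS navigation message), and only the vertical delay is computed:

```python
import math

# Example broadcast coefficients (placeholders; real values come from the
# GPS navigation message, not from this paper)
ALPHA = (1.1176e-8, 7.4506e-9, -5.9605e-8, -5.9605e-8)
BETA = (90112.0, 0.0, -196610.0, -65536.0)

def klobuchar_vertical_delay(phi_m, t_local):
    """Vertical ionospheric delay (seconds) from the 8-parameter Klobuchar model.

    phi_m   -- geomagnetic latitude of the ionospheric pierce point, semicircles
    t_local -- local time at the pierce point, seconds of day
    """
    amp = sum(a * phi_m ** n for n, a in enumerate(ALPHA))   # cosine amplitude
    per = sum(b * phi_m ** n for n, b in enumerate(BETA))    # cosine period
    amp = max(amp, 0.0)
    per = max(per, 72000.0)
    x = 2.0 * math.pi * (t_local - 50400.0) / per            # phase, peak at 14:00 local
    if abs(x) < 1.57:
        return 5.0e-9 + amp * (1.0 - x * x / 2.0 + x ** 4 / 24.0)
    return 5.0e-9   # night-time constant bias of 5 ns

print(klobuchar_vertical_delay(0.3, 50400.0))   # daytime delay, larger than 5 ns
```

The refined and Klobuchar-like models discussed in the record modify the coefficient structure of exactly this kind of half-cosine day-time curve.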

  7. Electron-scale reduced fluid models with gyroviscous effects

    Science.gov (United States)

    Passot, T.; Sulem, P. L.; Tassi, E.

    2017-08-01

    Reduced fluid models for collisionless plasmas including electron inertia and finite Larmor radius corrections are derived for scales ranging from the ion to the electron gyroradii. Based either on pressure balance or on the incompressibility of the electron fluid, they respectively capture kinetic Alfvén waves (KAWs) or whistler waves (WWs), and can provide suitable tools for reconnection and turbulence studies. Both isothermal regimes and Landau fluid closures permitting anisotropic pressure fluctuations are considered. For small values of the electron beta parameter $\beta_e$, a perturbative computation of the gyroviscous force valid at scales comparable to the electron inertial length is performed at order $O(\beta_e)$, which requires second-order contributions in a scale expansion. Comparisons with kinetic theory are performed in the linear regime. The spectrum of transverse magnetic fluctuations for strong and weak turbulence energy cascades is also phenomenologically predicted for both types of waves. In the case of a moderate ion to electron temperature ratio, a new regime of KAW turbulence at scales smaller than the electron inertial length is obtained, where the magnetic energy spectrum decays like $k_\perp^{-13/3}$, thus faster than the $k_\perp^{-11/3}$ spectrum of WW turbulence.

  8. The multi-scale aerosol-climate model PNNL-MMF: model description and evaluation

    Directory of Open Access Journals (Sweden)

    M. Wang

    2011-03-01

    Full Text Available Anthropogenic aerosol effects on climate produce one of the largest uncertainties in estimates of radiative forcing of past and future climate change. Much of this uncertainty arises from the multi-scale nature of the interactions between aerosols, clouds and large-scale dynamics, which are difficult to represent in conventional general circulation models (GCMs. In this study, we develop a multi-scale aerosol-climate model that treats aerosols and clouds across different scales, and evaluate the model performance, with a focus on aerosol treatment. This new model is an extension of a multi-scale modeling framework (MMF model that embeds a cloud-resolving model (CRM within each grid column of a GCM. In this extension, the effects of clouds on aerosols are treated by using an explicit-cloud parameterized-pollutant (ECPP approach that links aerosol and chemical processes on the large-scale grid with statistics of cloud properties and processes resolved by the CRM. A two-moment cloud microphysics scheme replaces the simple bulk microphysics scheme in the CRM, and a modal aerosol treatment is included in the GCM. With these extensions, this multi-scale aerosol-climate model allows the explicit simulation of aerosol and chemical processes in both stratiform and convective clouds on a global scale.

    Simulated aerosol budgets in this new model are in the ranges of other model studies. Simulated gas and aerosol concentrations are in reasonable agreement with observations (within a factor of 2 in most cases), although the model underestimates black carbon concentrations at the surface by a factor of 2–4. Simulated aerosol size distributions are in reasonable agreement with observations in the marine boundary layer and in the free troposphere, while the model underestimates the accumulation mode number concentrations near the surface, and overestimates the accumulation mode number concentrations in the middle and upper free troposphere by a factor

  9. Analysis and modeling of scale-invariance in plankton abundance

    CERN Document Server

    Pelletier, J D

    1996-01-01

    The power spectrum, $S$, of horizontal transects of plankton abundance is often observed to have a power-law dependence on wavenumber, $k$, with exponent close to $-2$: $S(k)\propto k^{-2}$ over a wide range of scales. I present power spectral analyses of aircraft lidar measurements of phytoplankton abundance from scales of 1 to 100 km. A power spectrum $S(k)\propto k^{-2}$ is obtained. As a model for this observation, I consider a stochastic growth equation where the rate of change of plankton abundance is determined by turbulent mixing, modeled as a diffusion process in two dimensions, and exponential growth with a stochastically variable net growth rate representing a fluctuating environment. The model predicts a lognormal distribution of abundance and a power spectrum of horizontal transects $S(k)\propto k^{-1.8}$, close to the observed spectrum. The model equation predicts that the power spectrum of variations in abundance in time at a point in space is $S(f)\propto f^{-1.5}$ (where $f$ is the frequency...
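The slope-estimation step behind such an analysis can be sketched by synthesizing transects with a prescribed $k^{-2}$ spectrum and fitting the averaged periodogram in log-log space; the grid size, seed and realization count below are arbitrary choices, not parameters of the study:

```python
import numpy as np

rng = np.random.default_rng(42)
n, n_real = 4096, 200
spec = np.zeros(n // 2 + 1)

# Synthesize transects with S(k) ~ k^-2 by shaping white noise in Fourier
# space, then estimate the spectral slope from the averaged periodogram.
k = np.fft.rfftfreq(n, d=1.0)
shape = np.zeros_like(k)
shape[1:] = k[1:] ** -1.0          # amplitude ~ k^-1  =>  power ~ k^-2
for _ in range(n_real):
    noise = rng.standard_normal(n)
    signal = np.fft.irfft(np.fft.rfft(noise) * shape, n)
    spec += np.abs(np.fft.rfft(signal)) ** 2
spec /= n_real

# Log-log fit over an interior range of wavenumbers
sel = slice(4, n // 8)
slope = np.polyfit(np.log(k[sel]), np.log(spec[sel]), 1)[0]
print(round(slope, 2))             # close to -2
```

The same periodogram-averaging and log-log fit applies directly to measured lidar transects in place of the synthetic signals.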

  10. Multi-scale modeling of carbon capture systems

    Energy Technology Data Exchange (ETDEWEB)

    Kress, Joel David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-03

    The development and scale-up of cost-effective carbon capture processes is of paramount importance to enable the widespread deployment of these technologies to significantly reduce greenhouse gas emissions. The U.S. Department of Energy initiated the Carbon Capture Simulation Initiative (CCSI) in 2011 with the goal of developing a computational toolset that would enable industry to more effectively identify, design, scale up, operate, and optimize promising concepts. The first half of the presentation will introduce the CCSI Toolset, consisting of basic data submodels, steady-state and dynamic process models, process optimization and uncertainty quantification tools, an advanced dynamic process control framework, and high-resolution filtered computational fluid dynamics (CFD) submodels. The second half of the presentation will describe a high-fidelity model of a mesoporous silica supported, polyethylenimine (PEI)-impregnated solid sorbent for CO2 capture. The sorbent model includes a detailed treatment of transport and amine-CO2-H2O interactions based on quantum chemistry calculations. Using a Bayesian approach for uncertainty quantification, we calibrate the sorbent model to thermogravimetric analysis (TGA) data.

  11. European Continental Scale Hydrological Model, Limitations and Challenges

    Science.gov (United States)

    Rouholahnejad, E.; Abbaspour, K.

    2014-12-01

    The pressures on water resources due to increasing levels of societal demand, increasing conflicts of interest and uncertainties with regard to freshwater availability create challenges for water managers and policymakers in many parts of Europe. At the same time, climate change adds a new level of pressure and uncertainty with regard to freshwater supplies. On the other hand, the small-scale sectoral structure of water management is now reaching its limits. The integrated management of water in basins requires a new level of consideration, where water bodies are viewed in the context of the whole river system and managed as a unit within their basins. In this research we present the limitations and challenges of modelling the hydrology of the European continent. The challenges include: data availability at continental scale and the use of globally available data, stream gauge data quality and its misleading impact on model calibration, calibration of a large-scale distributed model, uncertainty quantification, and computation time. We describe how to avoid over-parameterization in the calibration process and introduce a parallel processing scheme to overcome long computation times. We used the Soil and Water Assessment Tool (SWAT) program as an integrated hydrology and crop growth simulator to model the water resources of the European continent. Different components of water resources are simulated, and crop yield and water quality are considered at the Hydrological Response Unit (HRU) level. The water resources are quantified at subbasin level with monthly time intervals for the period 1970-2006. The use of a large-scale, high-resolution water resources model enables consistent and comprehensive examination of integrated system behavior through physically-based, data-driven simulation and provides an overall picture of the temporal and spatial distribution of water resources across the continent. The calibrated model and results provide information support to the European Water

  12. Systems metabolic engineering: Genome-scale models and beyond

    Science.gov (United States)

    Blazeck, John; Alper, Hal

    2010-01-01

    The advent of high throughput genome-scale bioinformatics has led to an exponential increase in available cellular system data. Systems metabolic engineering attempts to use data-driven approaches – based on the data collected with high throughput technologies – to identify gene targets and optimize phenotypical properties on a systems level. Current systems metabolic engineering tools are limited for predicting and defining complex phenotypes such as chemical tolerances and other global, multigenic traits. The most pragmatic systems-based tool for metabolic engineering to arise is the in silico genome-scale metabolic reconstruction. This tool has seen wide adoption for modeling cell growth and predicting beneficial gene knockouts, and we examine here how this approach can be expanded for novel organisms. This review will highlight advances of the systems metabolic engineering approach with a focus on de novo development and use of genome-scale metabolic reconstructions for metabolic engineering applications. We will then discuss the challenges and prospects for this emerging field to enable model-based metabolic engineering. Specifically, we argue that current state-of-the-art systems metabolic engineering techniques represent a viable first step for improving product yield that still must be followed by combinatorial techniques or random strain mutagenesis to achieve optimal cellular systems. PMID:20151446

  14. From micro-scale 3D simulations to macro-scale model of periodic porous media

    Science.gov (United States)

    Crevacore, Eleonora; Tosco, Tiziana; Marchisio, Daniele; Sethi, Rajandrea; Messina, Francesca

    2015-04-01

    In environmental engineering, the transport of colloidal suspensions in porous media is studied to understand the fate of potentially harmful nano-particles and to design new remediation technologies. In this perspective, averaging techniques applied to micro-scale numerical simulations are a powerful tool to extrapolate accurate macro-scale models. Choosing two simplified packing configurations of soil grains and starting from a single elementary cell (module), it is possible to take advantage of the periodicity of the structures to reduce the computational cost of full 3D simulations. Steady-state flow simulations for an incompressible fluid in the laminar regime are implemented. Transport simulations are based on the pore-scale advection-diffusion equation, which can be enriched by also introducing the Stokes velocity (to consider the gravity effect) and the interception mechanism. Simulations are carried out on a domain composed of several elementary modules, which serve as control volumes in a finite volume method for the macro-scale model. The periodicity of the medium implies the periodicity of the flow field, which is of great importance during the up-scaling procedure, allowing relevant simplifications. Micro-scale numerical data are treated in order to compute the mean concentration (volume and area averages) and fluxes on each module. The simulation results are used to compare the micro-scale averaged equation to the integral form of the macroscopic one, making a distinction between those terms that can be computed exactly and those for which a closure is needed. Of particular interest is the investigation of the origin of macro-scale terms such as dispersion and tortuosity, and the attempt to describe them with known micro-scale quantities. Traditionally, many simplifications are introduced to study colloidal transport, such as those concerning ultra-simplified geometries that usually account for a single collector. Gradual removal of such hypotheses leads to a

  15. Research on large-scale wind farm modeling

    Science.gov (United States)

    Ma, Longfei; Zhang, Baoqun; Gong, Cheng; Jiao, Ran; Shi, Rui; Chi, Zhongjun; Ding, Yifeng

    2017-01-01

    Due to the intermittent and fluctuating nature of wind energy, a large-scale wind farm connected to the grid has a much greater impact on the power system than a traditional power plant. It is therefore necessary to establish an effective wind farm model to simulate and analyze the influence wind farms have on the grid, as well as the transient characteristics of the wind turbines when the grid is at fault. However, we must first establish an effective WTG model. As the doubly-fed VSCF wind turbine is currently the mainstream wind turbine type, this article first reviews the research progress on the doubly-fed VSCF wind turbine and then describes the detailed process of building its model. It then surveys common wind farm modeling methods and points out the problems encountered. As WAMS is widely used in the power system, online parameter identification of the wind farm model based on the output characteristics of the wind farm becomes possible; the article focuses on interpreting this new idea of identification-based modeling of large wind farms, which can be realized by two concrete methods.

  16. A Goddard Multi-Scale Modeling System with Unified Physics

    Science.gov (United States)

    Tao, Wei-Kuo

    2010-01-01

    A multi-scale modeling system with unified physics has been developed at NASA Goddard Space Flight Center (GSFC). The system consists of an MMF, the coupled NASA Goddard finite-volume GCM (fvGCM) and Goddard Cumulus Ensemble model (GCE, a CRM); the state-of-the-art Weather Research and Forecasting model (WRF); and the stand-alone GCE. These models can share the same microphysical schemes, radiation (including explicitly calculated cloud optical properties), and surface models that have been developed, improved and tested for different environments. In this talk, I will present: (1) a brief review of the GCE model and its applications to the impact of aerosol on deep precipitation processes, (2) the Goddard MMF, the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs), and (3) a discussion of the Goddard WRF version (its developments and applications). We are also performing inline tracer calculations to comprehend the physical processes (i.e., boundary layer and each quadrant in the boundary layer) related to the development and structure of hurricanes and mesoscale convective systems. In addition, high-resolution (2 km spatial and 1 minute temporal) visualization of the model results will be presented.

  17. Lagrangian predictability characteristics of an Ocean Model

    Science.gov (United States)

    Lacorata, Guglielmo; Palatella, Luigi; Santoleri, Rosalia

    2014-11-01

    The Mediterranean Forecasting System (MFS) Ocean Model, provided by INGV, has been chosen as a case study to analyze Lagrangian trajectory predictability by means of a dynamical systems approach. To this end, numerical trajectories are tested against a large amount of Mediterranean drifter data, used as a sample of the actual tracer dynamics across the sea. The separation rate of a trajectory pair is measured by computing the Finite-Scale Lyapunov Exponent (FSLE) of first and second kind. An additional kinematic Lagrangian model (KLM), suitably treated to avoid "sweeping"-related problems, has been nested into the MFS in order to recover, in a statistical sense, the velocity field contributions to pair particle dispersion, at mesoscale level, smoothed out by finite resolution effects. Some of the results emerging from this work are: (a) drifter pair dispersion displays Richardson's turbulent diffusion inside the [10-100] km range, while numerical simulations of MFS alone (i.e., without the subgrid model) indicate exponential separation; (b) adding the subgrid model, model pair dispersion gets very close to observed data, indicating that the KLM is effective in filling the energy "mesoscale gap" present in MFS velocity fields; (c) there exists a threshold size beyond which pair dispersion becomes weakly sensitive to the difference between model and "real" dynamics; (d) the whole methodology presented here can be used to quantify model errors and validate numerical current fields, as far as forecasts of Lagrangian dispersion are concerned.
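The FSLE of the first kind used above can be illustrated on a synthetic, exponentially separating trajectory pair; the 0.5/day separation rate, the scale thresholds and the amplification factor r are invented for the example rather than taken from the drifter data:

```python
import numpy as np

def fsle(separations, times, deltas, r=np.sqrt(2)):
    """Finite-Scale Lyapunov Exponent from one pair-separation time series.

    For each threshold delta, find the time tau needed for the separation to
    grow from delta to r*delta; FSLE(delta) = ln(r) / tau(delta).
    """
    lam = []
    for d in deltas:
        t0 = times[np.argmax(separations >= d)]       # first crossing of delta
        t1 = times[np.argmax(separations >= r * d)]   # first crossing of r*delta
        tau = t1 - t0
        lam.append(np.log(r) / tau if tau > 0 else np.nan)
    return np.array(lam)

# Synthetic pair separating exponentially at rate 0.5 per day (illustrative)
t = np.linspace(0.0, 20.0, 2001)       # days
sep = 1e-3 * np.exp(0.5 * t)           # km
deltas = np.array([0.01, 0.1, 1.0])
lam = fsle(sep, t, deltas)
print(lam)                             # each value close to 0.5
```

For exponential separation the FSLE is flat in delta, while Richardson diffusion would give a decaying power law; that contrast is what distinguishes the drifter data from the MFS-only simulations in the record.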

  18. Relating the CMSSM and SUGRA models with GUT scale and Super-GUT scale Supersymmetry Breaking

    CERN Document Server

    Dudas, Emilian; Mustafayev, Azar; Olive, Keith A.

    2012-01-01

    While the constrained minimal supersymmetric standard model (CMSSM) with universal gaugino masses, $m_{1/2}$, scalar masses, $m_0$, and A-terms, $A_0$, defined at some high energy scale (usually taken to be the GUT scale) is motivated by general features of supergravity models, it does not carry all of the constraints imposed by minimal supergravity (mSUGRA). In particular, the CMSSM does not impose a relation between the trilinear and bilinear soft supersymmetry breaking terms, $B_0 = A_0 - m_0$, nor does it impose the relation between the soft scalar masses and the gravitino mass, $m_0 = m_{3/2}$. As a consequence, $\\tan \\beta$ is computed given values of the other CMSSM input parameters. By considering a Giudice-Masiero (GM) extension to mSUGRA, one can introduce new parameters to the K\\"ahler potential which are associated with the Higgs sector and recover many of the standard CMSSM predictions. However, depending on the value of $A_0$, one may have a gravitino or a neutralino dark matter candidate. We al...

  19. Light moduli in almost no-scale models

    Energy Technology Data Exchange (ETDEWEB)

    Buchmueller, Wilfried; Moeller, Jan; Schmidt, Jonas

    2009-09-15

    We discuss the stabilization of the compact dimension for a class of five-dimensional orbifold supergravity models. Supersymmetry is broken by the superpotential on a boundary. Classically, the size L of the fifth dimension is undetermined, with or without supersymmetry breaking, and the effective potential is of no-scale type. The size L is fixed by quantum corrections to the Kähler potential, the Casimir energy and Fayet-Iliopoulos (FI) terms localized at the boundaries. For an FI scale of order $M_{\rm GUT}$, as in heterotic string compactifications with anomalous U(1) symmetries, one obtains $L \propto 1/M_{\rm GUT}$. A small mass is predicted for the scalar fluctuation associated with the fifth dimension, $m_\rho$

  20. Density Functional Theory and Materials Modeling at Atomistic Length Scales

    Directory of Open Access Journals (Sweden)

    Swapan K. Ghosh

    2002-04-01

    Full Text Available Abstract: We discuss the basic concepts of density functional theory (DFT as applied to materials modeling in the microscopic, mesoscopic and macroscopic length scales. The picture that emerges is that of a single unified framework for the study of both quantum and classical systems. While for quantum DFT, the central equation is a one-particle Schrodinger-like Kohn-Sham equation, the classical DFT consists of Boltzmann type distributions, both corresponding to a system of noninteracting particles in the field of a density-dependent effective potential, the exact functional form of which is unknown. One therefore approximates the exchange-correlation potential for quantum systems and the excess free energy density functional or the direct correlation functions for classical systems. Illustrative applications of quantum DFT to microscopic modeling of molecular interaction and that of classical DFT to a mesoscopic modeling of soft condensed matter systems are highlighted.

  1. Next-generation genome-scale models for metabolic engineering.

    Science.gov (United States)

    King, Zachary A; Lloyd, Colton J; Feist, Adam M; Palsson, Bernhard O

    2015-12-01

    Constraint-based reconstruction and analysis (COBRA) methods have become widely used tools for metabolic engineering in both academic and industrial laboratories. By employing a genome-scale in silico representation of the metabolic network of a host organism, COBRA methods can be used to predict optimal genetic modifications that improve the rate and yield of chemical production. A new generation of COBRA models and methods is now being developed--encompassing many biological processes and simulation strategies--and next-generation models enable new types of predictions. Here, three key examples of applying COBRA methods to strain optimization are presented and discussed. Then, an outlook is provided on the next generation of COBRA models and the new types of predictions they will enable for systems metabolic engineering. Copyright © 2014 Elsevier Ltd. All rights reserved.
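At its core, the COBRA prediction step is a linear program over the steady-state mass balance S·v = 0 with flux bounds. The toy three-reaction network below is a hypothetical stand-in for a genome-scale reconstruction, solved here with scipy rather than a dedicated COBRA package:

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux-balance analysis: maximize the biomass flux subject to the
# steady-state mass balance S @ v = 0 and flux bounds.
# Reactions: v1 = uptake of A, v2 = A -> B, v3 = B -> biomass.
S = np.array([
    [1, -1,  0],   # metabolite A: produced by uptake, consumed by conversion
    [0,  1, -1],   # metabolite B: produced by conversion, consumed by biomass
])
c = np.array([0.0, 0.0, -1.0])            # linprog minimizes, so negate biomass flux
bounds = [(0, 10), (0, 1000), (0, 1000)]  # uptake capped at 10 (arbitrary units)

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)    # optimal flux distribution; biomass flux limited by uptake, = 10
```

Real genome-scale models have thousands of reactions and metabolites, but the same linear-programming structure carries over; gene-knockout predictions amount to zeroing the bounds of the affected reactions and re-solving.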

  2. Improving the spatial resolution of air-quality modelling at a European scale - development and evaluation of the Air Quality Re-gridder Model (AQR v1.1)

    Science.gov (United States)

    Theobald, Mark R.; Simpson, David; Vieno, Massimo

    2016-12-01

    Currently, atmospheric chemistry and transport models (ACTMs) applied at a European scale to assess the impacts of air quality lack the spatial resolution necessary to simulate fine-scale spatial variability. This spatial variability is especially important for assessing the impacts on human health or ecosystems of short-lived pollutants, such as nitrogen dioxide (NO2) or ammonia (NH3). In order to simulate this spatial variability, the Air Quality Re-gridder (AQR) model has been developed to estimate the spatial distributions (at a spatial resolution of 1 × 1 km2) of annual mean atmospheric concentrations within the grid squares of an ACTM (in this case with a spatial resolution of 50 × 50 km2). This is done as a post-processing step by combining the coarse-resolution ACTM concentrations with high-spatial-resolution emission data and simple parameterisations of atmospheric dispersion. The AQR model was tested for two European sub-domains (the Netherlands and central Scotland) and evaluated using NO2 and NH3 concentration data from monitoring networks within each domain. A statistical comparison of the performance of the two models shows that AQR gives a substantial improvement on the predictions of the ACTM, both reducing mean model error (from 61 to 41 % for NO2 and from 42 to 27 % for NH3) and increasing the spatial correlation (r) with the measured concentrations (from 0.0 to 0.39 for NO2 and from 0.74 to 0.84 for NH3). This improvement was greatest for monitoring locations close to pollutant sources. Although the model ideally requires high-spatial-resolution emission data, which are not available for the whole of Europe, the use of a Europe-wide emission dataset with a lower spatial resolution also gave an improvement on the ACTM predictions for the two test domains. The AQR model provides an easy-to-use and robust method to estimate sub-grid variability that can potentially be extended to different timescales and pollutants.
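The emission-weighted redistribution at the heart of such a re-gridding step can be sketched as follows; this toy version omits the dispersion parameterisations that AQR also applies, and all grids and values are invented for the example:

```python
import numpy as np

def downscale(coarse_conc, fine_emis, block=50):
    """Redistribute a coarse-grid concentration field onto a finer grid in
    proportion to high-resolution emissions, preserving each coarse cell's
    mean (a simplified stand-in for an AQR-style post-processing step)."""
    ny, nx = fine_emis.shape
    fine = np.empty_like(fine_emis, dtype=float)
    for j in range(ny // block):
        for i in range(nx // block):
            e = fine_emis[j*block:(j+1)*block, i*block:(i+1)*block]
            # Unit-mean weights, so the coarse-cell average is conserved
            w = e / e.mean() if e.mean() > 0 else np.ones_like(e)
            fine[j*block:(j+1)*block, i*block:(i+1)*block] = coarse_conc[j, i] * w
    return fine

rng = np.random.default_rng(1)
emis = rng.random((100, 100))                  # hypothetical 1 x 1 km emissions
coarse = np.array([[8.0, 12.0], [6.0, 4.0]])   # hypothetical 50 x 50 km ACTM field
fine = downscale(coarse, emis)
print(fine.shape)                              # (100, 100)
```

Because the weights have unit mean within each block, the 50 × 50 km average of the downscaled field matches the original ACTM value exactly, which is the property that keeps the post-processing consistent with the coarse model.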

  3. Transient Recharge Estimability Through Field-Scale Groundwater Model Calibration.

    Science.gov (United States)

    Knowling, Matthew J; Werner, Adrian D

    2017-11-01

    The estimation of recharge through groundwater model calibration is hampered by the nonuniqueness of recharge and aquifer parameter values. It has been shown recently that the estimability of spatially distributed recharge through calibration of steady-state models for practical situations (i.e., real-world, field-scale aquifer settings) is limited by the need for excessive amounts of hydraulic-parameter and groundwater-level data. However, the extent to which temporal recharge variability can be informed through transient model calibration, which involves larger water-level datasets, but requires the additional consideration of storage parameters, is presently unknown for practical situations. In this study, time-varying recharge estimates, inferred through calibration of a field-scale highly parameterized groundwater model, are systematically investigated subject to changes in (1) the degree to which hydraulic parameters including hydraulic conductivity (K) and specific yield (S y ) are constrained, (2) the number of water-level calibration targets, and (3) the temporal resolution (up to monthly time steps) at which recharge is estimated. The analysis involves the use of a synthetic reality (a reference model) based on a groundwater model of Uley South Basin, South Australia. Identifiability statistics are used to evaluate the ability of recharge and hydraulic parameters to be estimated uniquely. Results show that reasonable estimates of monthly recharge (recharge root-mean-squared error) require a considerable amount of transient water-level data, and that the spatial distribution of K is known. Joint estimation of recharge, S y and K, however, precludes reasonable inference of recharge and hydraulic parameter values. We conclude that the estimation of temporal recharge variability through calibration may be impractical for real-world settings. © 2017, National Ground Water Association.

  4. Time line cell tracking for the approximation of Lagrangian coherent structures with subgrid accuracy

    KAUST Repository

    Kuhn, Alexander

    2013-12-05

    Lagrangian coherent structures (LCSs) have become a widespread and powerful method to describe dynamic motion patterns in time-dependent flow fields. The standard way to extract LCS is to compute height ridges in the finite-time Lyapunov exponent field. In this work, we present an alternative method to approximate Lagrangian features for 2D unsteady flow fields that achieves subgrid accuracy without additional particle sampling. We obtain this by a geometric reconstruction of the flow map using additional material constraints for the available samples. In comparison to the standard method, this allows for a more accurate global approximation of LCS on sparse grids and for long integration intervals. The proposed algorithm works directly on a set of given particle trajectories and without additional flow map derivatives. We demonstrate its application for a set of computational fluid dynamic examples, as well as trajectories acquired by Lagrangian methods, and discuss its benefits and limitations. © 2013 The Authors Computer Graphics Forum © 2013 The Eurographics Association and John Wiley & Sons Ltd.
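For reference, the baseline this record compares against, an FTLE field computed from a gridded flow map, can be sketched in a few lines. This is a generic illustration rather than the paper's subgrid reconstruction method; the function name and inputs are assumptions:

```python
import numpy as np

def ftle(flow_map_x, flow_map_y, dx, dy, T):
    """Finite-time Lyapunov exponent from a gridded 2D flow map.

    flow_map_x/y hold final particle positions after integration time T,
    sampled on a regular grid with spacings dx, dy (hypothetical inputs).
    """
    # Spatial gradients of the flow map (Jacobian entries); axis 0 is y.
    dFx_dy, dFx_dx = np.gradient(flow_map_x, dy, dx)
    dFy_dy, dFy_dx = np.gradient(flow_map_y, dy, dx)

    ftle_field = np.zeros_like(flow_map_x)
    for i in range(flow_map_x.shape[0]):
        for j in range(flow_map_x.shape[1]):
            J = np.array([[dFx_dx[i, j], dFx_dy[i, j]],
                          [dFy_dx[i, j], dFy_dy[i, j]]])
            C = J.T @ J                        # Cauchy-Green deformation tensor
            lam = np.linalg.eigvalsh(C)[-1]    # largest eigenvalue
            ftle_field[i, j] = np.log(np.sqrt(lam)) / abs(T)
    return ftle_field
```

Height ridges of such a field are then taken as LCS candidates; the record's contribution is avoiding the dense particle sampling this baseline needs.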

  5. Evaluation of a distributed catchment scale water balance model

    Science.gov (United States)

    Troch, Peter A.; Mancini, Marco; Paniconi, Claudio; Wood, Eric F.

    1993-01-01

    The validity of some of the simplifying assumptions in a conceptual water balance model is investigated by comparing simulation results from the conceptual model with simulation results from a three-dimensional physically based numerical model and with field observations. We examine, in particular, assumptions and simplifications related to water table dynamics, vertical soil moisture and pressure head distributions, and subsurface flow contributions to stream discharge. The conceptual model relies on a topographic index to predict saturation excess runoff and on Philip's infiltration equation to predict infiltration excess runoff. The numerical model solves the three-dimensional Richards equation describing flow in variably saturated porous media, and handles seepage face boundaries, infiltration excess and saturation excess runoff production, and soil driven and atmosphere driven surface fluxes. The study catchments (a 7.2 sq km catchment and a 0.64 sq km subcatchment) are located in the North Appalachian ridge and valley region of eastern Pennsylvania. Hydrologic data collected during the MACHYDRO 90 field experiment are used to calibrate the models and to evaluate simulation results. It is found that water table dynamics as predicted by the conceptual model are close to the observations in a shallow water well, and therefore a linear relationship between a topographic index and the local water table depth is a reasonable assumption for catchment scale modeling. However, the hydraulic equilibrium assumption is not valid for the upper 100 cm layer of the unsaturated zone, and a conceptual model that incorporates a root zone is suggested. Furthermore, theoretical subsurface flow characteristics from the conceptual model are found to be different from field observations, numerical simulation results, and theoretical baseflow recession characteristics based on Boussinesq's groundwater equation.
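The topographic-index relationship evaluated in this record can be stated compactly. This is a hedged sketch, not the authors' code; the scaling parameter m and all names are illustrative assumptions:

```python
import numpy as np

def topographic_index(upslope_area, slope_rad):
    """TOPMODEL-style wetness index ln(a / tan(beta)) for a grid cell
    with specific upslope contributing area a and local slope beta."""
    return np.log(upslope_area / np.tan(slope_rad))

def local_water_table_depth(ti, mean_depth, mean_ti, m=0.03):
    """Linear relation between topographic index and local water table
    depth, as assumed in conceptual models of this type; m is a
    hypothetical scaling parameter, not a calibrated value."""
    return mean_depth - m * (ti - mean_ti)
```

Cells with a higher index (large drained area, gentle slope) are predicted to have a shallower water table and thus to saturate first, which is what drives saturation excess runoff in the conceptual model.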

  6. Device Scale Modeling of Solvent Absorption using MFIX-TFM

    Energy Technology Data Exchange (ETDEWEB)

    Carney, Janine E. [National Energy Technology Lab. (NETL), Albany, OR (United States); Finn, Justin R. [National Energy Technology Lab. (NETL), Albany, OR (United States); Oak Ridge Inst. for Science and Education (ORISE), Oak Ridge, TN (United States)

    2016-10-01

    Recent climate change is largely attributed to greenhouse gases (e.g., carbon dioxide, methane), and fossil fuels account for a large majority of global CO2 emissions. That said, fossil fuels will continue to play a significant role in power generation for the foreseeable future. The extent to which CO2 is emitted needs to be reduced; carbon capture and sequestration are therefore necessary actions to tackle climate change. Different approaches exist for CO2 capture, including post-combustion and pre-combustion technologies, oxy-fuel combustion, and chemical looping combustion. The focus of this effort is on post-combustion solvent-absorption technology. To apply CO2 technologies at commercial scale, the availability, maturity, and potential for scalability of the technology need to be considered. Solvent absorption is a proven technology, but not at the scale needed by a typical power plant. The scale-up, scale-down, and design of laboratory and commercial packed bed reactors depend heavily on specific knowledge of the two-phase pressure drop, liquid holdup, wetting efficiency, and mass transfer efficiency as a function of operating conditions. Simple scaling rules often fail to provide a proper design. Conventional reactor design modeling approaches generally characterize complex non-ideal flow and mixing patterns using simplified and/or mechanistic flow assumptions. While there are varying levels of complexity within these approaches, none of these models resolve the local velocity fields. Consequently, they are unable to account for important design factors such as flow maldistribution and channeling from a fundamental perspective. Ideally, design would be aided by the development of predictive models based on a truer representation of the physical and chemical processes that occur at different scales. Computational fluid dynamic (CFD) models are based on multidimensional flow equations with first

  7. Modelling biological invasions: Individual to population scales at interfaces

    KAUST Repository

    Belmonte-Beitia, J.

    2013-10-01

    Extracting the population level behaviour of biological systems from that of the individual is critical in understanding dynamics across multiple scales and thus has been the subject of numerous investigations. Here, the influence of spatial heterogeneity in such contexts is explored for interfaces with a separation of the length scales characterising the individual and the interface, a situation that can arise in applications involving cellular modelling. As an illustrative example, we consider cell movement between white and grey matter in the brain which may be relevant in considering the invasive dynamics of glioma. We show that while one can safely neglect intrinsic noise, at least when considering glioma cell invasion, profound differences in population behaviours emerge in the presence of interfaces with only subtle alterations in the dynamics at the individual level. Transport driven by local cell sensing generates predictions of cell accumulations along interfaces where cell motility changes. This behaviour is not predicted with the commonly used Fickian diffusion transport model, but can be extracted from preliminary observations of specific cell lines in recent, novel, cryo-imaging. Consequently, these findings suggest a need to consider the impact of individual behaviour, spatial heterogeneity and especially interfaces in experimental and modelling frameworks of cellular dynamics, for instance in the characterisation of glioma cell motility. © 2013 Elsevier Ltd.

  8. A Dynamic Pore-Scale Model of Imbibition

    DEFF Research Database (Denmark)

    Mogensen, Kristian; Stenby, Erling Halfdan

    1998-01-01

    We present a dynamic pore-scale network model of imbibition, capable of calculating residual oil saturation for any given capillary number, viscosity ratio, contact angle and aspect ratio. Our goal is not to predict the outcome of core floods, but rather to perform a sensitivity analysis of the above-mentioned parameters, except the viscosity ratio. We find that contact angle, aspect ratio and capillary number all have a significant influence on the competition between piston-like advance, leading to high recovery, and snap-off, causing oil entrapment. Due to enormous CPU-time requirements we ... been entirely inhibited, in agreement with results obtained by Blunt using a quasi-static model. For higher aspect ratios, the effect of rate and contact angle is more pronounced. Many core floods are conducted at capillary numbers in the range 10 to 10.6. We believe that the excellent recoveries...

  9. Challenges of Modeling Flood Risk at Large Scales

    Science.gov (United States)

    Guin, J.; Simic, M.; Rowe, J.

    2009-04-01

    Flood risk management is a major concern for many nations and for the insurance sector in places where this peril is insured. A prerequisite for risk management, whether in the public or the private sector, is an accurate estimation of the risk. Mitigation measures and traditional flood management techniques are most successful when the problem is viewed at a large regional scale, such that all inter-dependencies in a river network are well understood. From an insurance perspective, the jury is still out on whether flood is an insurable peril. However, with advances in modeling techniques and computer power it is possible to develop models that allow proper risk quantification at the scale suitable for a viable insurance market for the flood peril. In order to serve the insurance market, a model has to be event-simulation based and has to provide financial risk estimation that forms the basis for risk pricing, risk transfer and risk management at all levels of the insurance industry at large. In short, for a collection of properties, henceforth referred to as a portfolio, the critical output of the model is an annual probability distribution of economic losses from a single flood occurrence (flood event) or from an aggregation of all events in any given year. In this paper, the challenges of developing such a model are discussed in the context of Great Britain, for which a model has been developed. The model comprises several physically motivated components so that the primary attributes of the phenomenon are accounted for. The first component, the rainfall generator, simulates a continuous series of rainfall events in space and time over thousands of years, which are physically realistic while maintaining the statistical properties of rainfall at all locations over the model domain. A physically based runoff generation module feeds all the rivers in Great Britain, whose total length of stream links amounts to about 60,000 km. A dynamical flow routing

  10. Upscaling hydraulic conductivity from measurement-scale to model-scale

    Science.gov (United States)

    Gunnink, Jan; Stafleu, Jan; Maljers, Denise; Schokker, Jeroen

    2013-04-01

    The Geological Survey of the Netherlands systematically produces both shallow (…) models, allowing the uncertainty of the model results to be calculated. One of the parameters subsequently assigned to the voxels in the GeoTOP model is hydraulic conductivity (both horizontal and vertical). Hydraulic conductivities are measured on samples taken from high-quality drillings, which are subjected to falling head hydraulic conductivity tests. Samples are taken for all combinations of lithostratigraphy, facies and lithology that are present in the GeoTOP model. The volume of the samples is orders of magnitude smaller than the volume of a voxel in the GeoTOP model. Apart from that, the heterogeneity that occurs within a voxel is not accounted for in the GeoTOP model, since every voxel gets a single lithology that is deemed representative for the entire voxel. To account for both the difference in volume and the within-voxel heterogeneity, an upscaling procedure was developed to produce upscaled hydraulic conductivities for each GeoTOP voxel. A very fine 3D grid of 0.5 x 0.5 x 0.05 m is created that covers the GeoTOP voxel size (100 x 100 x 0.5 m) plus half of the dimensions of the GeoTOP voxel to counteract undesired edge effects. It is assumed that the scale of the samples is comparable to the voxel size of this fine grid. For each lithostratigraphy and facies combination, the spatial correlation structure (variogram) of the lithological classes is used to create 50 equiprobable distributions of lithology for the fine grid with sequential indicator simulation. Then, for each of the lithology realizations, a hydraulic conductivity is assigned to the simulated lithology class using Sequential Gaussian Simulation, again with the appropriate variogram. This results in 50 3D models of hydraulic conductivities on the fine grid.
    For each of these hydraulic conductivity models, a hydraulic head difference of 1 m between top and bottom of the model is used to calculate the flux at the bottom of the
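The final step described above, turning the flux simulated under a unit head difference into an effective conductivity, is an inversion of Darcy's law; for purely layered media the exact upscaled vertical conductivity is the thickness-weighted harmonic mean. A minimal sketch under assumed names:

```python
import numpy as np

def effective_vertical_k(flux, thickness, head_difference=1.0):
    """Darcy's law, q = K * dh / L, inverted for the effective K of a
    column of length L given the flux simulated under a fixed head
    difference across it."""
    return flux * thickness / head_difference

def harmonic_mean_k(k_layers, d_layers):
    """Exact upscaled vertical K of a perfectly layered column (flow
    normal to the layers): the thickness-weighted harmonic mean."""
    k = np.asarray(k_layers, dtype=float)
    d = np.asarray(d_layers, dtype=float)
    return d.sum() / (d / k).sum()
```

The harmonic mean illustrates why the within-voxel heterogeneity matters: a thin low-conductivity layer dominates the upscaled vertical value, which a single "representative" lithology per voxel would miss.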

  11. Leptogenesis in GeV-scale seesaw models

    Energy Technology Data Exchange (ETDEWEB)

    Hernández, P.; Kekic, M. [Instituto de Física Corpuscular, Universidad de Valencia and CSIC,Edificio Institutos Investigación, Apt. 22085, Valencia, E-46071 (Spain); López-Pavón, J. [SISSA and INFN Sezione di Trieste,via Bonomea 265, Trieste, 34136 (Italy); Racker, J.; Rius, N. [Instituto de Física Corpuscular, Universidad de Valencia and CSIC,Edificio Institutos Investigación, Apt. 22085, Valencia, E-46071 (Spain)

    2015-10-09

    We revisit the production of leptonic asymmetries in minimal extensions of the Standard Model that can explain neutrino masses, involving extra singlets with Majorana masses in the GeV scale. We study the quantum kinetic equations both analytically, via a perturbative expansion up to third order in the mixing angles, and numerically. The analytical solution allows us to identify the relevant CP invariants, and simplifies the exploration of the parameter space. We find that sizeable lepton asymmetries are compatible with non-degenerate neutrino masses and measurable active-sterile mixings.

  12. Scale Adaptive Simulation Model for the Darrieus Wind Turbine

    DEFF Research Database (Denmark)

    Rogowski, K.; Hansen, Martin Otto Laver; Maroński, R.

    2016-01-01

    Accurate prediction of aerodynamic loads for the Darrieus wind turbine using more or less complex aerodynamic models is still a challenge. One of the problems is the small amount of experimental data available to validate the numerical codes. The major objective of the present study is to examine the scale adaptive simulation (SAS) approach for performance analysis of a one-bladed Darrieus wind turbine working at a tip speed ratio of 5 and at a blade Reynolds number of 40 000. The three-dimensional incompressible unsteady Navier-Stokes equations are used. Numerical results of aerodynamic loads...

  13. Scale Adaptive Simulation Model for the Darrieus Wind Turbine

    OpenAIRE

    Rogowski, K.; Hansen, Martin Otto Laver; Maroński, R.; Lichota, P.

    2016-01-01

    Accurate prediction of aerodynamic loads for the Darrieus wind turbine using more or less complex aerodynamic models is still a challenge. One of the problems is the small amount of experimental data available to validate the numerical codes. The major objective of the present study is to examine the scale adaptive simulation (SAS) approach for performance analysis of a one-bladed Darrieus wind turbine working at a tip speed ratio of 5 and at a blade Reynolds number of 40 000. The three-dimensional incompressible unsteady Navier-Stokes equations are used. ...

  14. Evaluation of two pollutant dispersion models over continental scales

    Science.gov (United States)

    Rodriguez, D.; Walker, H.; Klepikova, N.; Kostrikov, A.; Zhuk, Y.

    Two long-range, emergency response models—one based on the particle-in-cell method of pollutant representation (ADPIC/U.S.), the other based on the superposition of Gaussian puffs released periodically in time (EXPRESS/Russia)—are evaluated using perfluorocarbon tracer data from the Across North America Tracer Experiment (ANATEX). The purpose of the study is to assess our current capabilities for simulating continental-scale dispersion processes and to use these assessments as a means to improve our modeling tools. The criteria for judging model performance are based on protocols devised by the Environmental Protection Agency and on other complementary tests. Most of these measures require the formation and analysis of surface concentration footprints (the surface manifestations of tracer clouds, which are sampled over 24-h intervals), whose dimensions, center-of-mass coordinates and integral characteristics provide a basis for comparing observed and calculated concentration distributions. Generally speaking, the plumes associated with the 20 releases of perfluorocarbon (10 each from sources at Glasgow, MT and St. Cloud, MN) in January 1987 are poorly resolved by the sampling network when the source-to-receptor distances are less than about 1000 km. Within this undersampled region, both models chronically overpredict the sampler concentrations. Given this tendency, the computed areas of the surface footprints and their integral concentrations are likewise excessive. When the actual plumes spread out sufficiently for reasonable resolution, the observed (O) and calculated (C) footprint areas are usually within a factor of two of one another, thereby suggesting that the models possess some skill in the prediction of long-range diffusion. Deviations in the O and C plume trajectories, as measured by the distances of separation between the plume centroids, are on the order of 125 km d⁻¹ for both models.
It appears that the inability of the models to simulate large-scale
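The factor-of-two comparison used in this record is often summarized as a FAC2 score, the fraction of observed/calculated pairs agreeing within a factor of two. A generic sketch (the function name and inputs are assumptions, not the study's evaluation code):

```python
import numpy as np

def fac2(observed, calculated):
    """Fraction of pairs where the calculated value lies within a
    factor of two of the observed value (0.5 <= C/O <= 2), a standard
    dispersion-model performance metric; zero observations are skipped."""
    o = np.asarray(observed, dtype=float)
    c = np.asarray(calculated, dtype=float)
    mask = o > 0.0
    ratio = c[mask] / o[mask]
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))
```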

  15. Analysis, scale modeling, and full-scale tests of low-level nuclear-waste-drum response to accident environments

    Energy Technology Data Exchange (ETDEWEB)

    Huerta, M.; Lamoreaux, G.H.; Romesberg, L.E.; Yoshimura, H.R.; Joseph, B.J.; May, R.A.

    1983-01-01

    This report describes extensive full-scale and scale-model testing of 55-gallon drums used for shipping low-level radioactive waste materials. The tests conducted include static crush, single-can impact tests, and side impact tests of eight stacked drums. Static crush forces were measured and crush energies calculated. The tests were performed in full-, quarter-, and eighth-scale with different types of waste materials. The full-scale drums were modeled with standard food product cans. The response of the containers is reported in terms of drum deformations and lid behavior. The results of the scale model tests are correlated to the results of the full-scale drums. Two computer techniques for calculating the response of drum stacks are presented. 83 figures, 9 tables.

  16. Physics and Dynamics Coupling Across Scales in the Next Generation CESM. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Bacmeister, Julio T. [University Corporation for Atmospheric Research (UCAR), Boulder, CO (United States)

    2015-06-12

    This project examines physics/dynamics coupling, that is, exchange of meteorological profiles and tendencies between an atmospheric model’s dynamical core and its various physics parameterizations. Most model physics parameterizations seek to represent processes that occur on scales smaller than the smallest scale resolved by the dynamical core. As a consequence a key conceptual aspect of parameterizations is an assumption about the subgrid variability of quantities such as temperature, humidity or vertical wind. Most existing parameterizations of processes such as turbulence, convection, cloud, and gravity wave drag make relatively ad hoc assumptions about this variability and are forced to introduce empirical parameters, i.e., “tuning knobs” to obtain realistic simulations. These knobs make systematic dependences on model grid size difficult to quantify.
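As an illustration of the kind of subgrid-variability assumption discussed in this report, a statistical cloud scheme diagnoses cloud fraction as the probability that total water exceeds saturation under an assumed Gaussian subgrid distribution. This is a textbook-style sketch, not this project's parameterization; sigma plays the role of a "tuning knob" of the sort criticized above:

```python
import math

def cloud_fraction_gaussian(q_mean, q_sat, sigma):
    """Cloud fraction under an assumed Gaussian subgrid distribution of
    total water with grid-box mean q_mean and standard deviation sigma:
    the probability that local q exceeds the saturation value q_sat."""
    s = (q_mean - q_sat) / (math.sqrt(2.0) * sigma)
    return 0.5 * (1.0 + math.erf(s))
```

Note how the result depends on sigma, which is exactly the subgrid-variability quantity whose grid-size dependence the project seeks to make systematic rather than empirical.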

  17. Diagnostics for stochastic genome-scale modeling via model slicing and debugging.

    Directory of Open Access Journals (Sweden)

    Kevin J Tsai

    Modeling of biological behavior has evolved from simple gene expression plots represented by mathematical equations to genome-scale systems biology networks. However, due to obstacles in complexity and scalability of creating genome-scale models, several biological modelers have turned to programming or scripting languages and away from modeling fundamentals. In doing so, they have traded the ability to have exchangeable, standardized model representation formats, while those that remain true to standardized model representation are faced with challenges in model complexity and analysis. We have developed a model diagnostic methodology inspired by program slicing and debugging and demonstrate the effectiveness of the methodology on a genome-scale metabolic network model published in the BioModels database. The computer-aided identification revealed specific points of interest such as reversibility of reactions, initialization of species amounts, and parameter estimation that improved a candidate cell's adenosine triphosphate production. We then compared the advantages of our methodology over other modeling techniques such as model checking and model reduction. A software application that implements the methodology is available at http://gel.ym.edu.tw/gcs/.

  18. Diagnostics for stochastic genome-scale modeling via model slicing and debugging.

    Science.gov (United States)

    Tsai, Kevin J; Chang, Chuan-Hsiung

    2014-01-01

    Modeling of biological behavior has evolved from simple gene expression plots represented by mathematical equations to genome-scale systems biology networks. However, due to obstacles in complexity and scalability of creating genome-scale models, several biological modelers have turned to programming or scripting languages and away from modeling fundamentals. In doing so, they have traded the ability to have exchangeable, standardized model representation formats, while those that remain true to standardized model representation are faced with challenges in model complexity and analysis. We have developed a model diagnostic methodology inspired by program slicing and debugging and demonstrate the effectiveness of the methodology on a genome-scale metabolic network model published in the BioModels database. The computer-aided identification revealed specific points of interest such as reversibility of reactions, initialization of species amounts, and parameter estimation that improved a candidate cell's adenosine triphosphate production. We then compared the advantages of our methodology over other modeling techniques such as model checking and model reduction. A software application that implements the methodology is available at http://gel.ym.edu.tw/gcs/.

  19. A simple landslide model at a laboratory scale

    Science.gov (United States)

    Atmajati, Elisabeth Dian; Yuliza, Elfi; Habil, Husni; Sadisun, Imam Ahmad; Munir, Muhammad Miftahul; Khairurrijal

    2017-07-01

    Landslides, among the most frequent natural disasters, often have severe adverse effects. Landslide early warning systems, installed in prone areas, measure physical parameters closely related to landslides and issue warning signals indicating that a landslide is imminent. To determine the critical values of the measured physical parameters and to test the early warning system itself, a laboratory-scale model of a rotational landslide was developed. The model had a size of 250×45×40 cm³ and was equipped with soil moisture sensors, accelerometers, and an automated measurement system. The soil moisture sensors were used to determine the water content in the soil sample, and the accelerometers detected movements in the x-, y-, and z-directions, so that both flow and rotational landslides could be modeled and characterized. The developed landslide model can be used to evaluate the effects of slope, soil type, and water seepage on the incidence of landslides. The present experiment showed that the model can reproduce the occurrence of landslides: the presence of water seepage cracked the slope, and the crack grew over time. From the observed characteristics, the landslide that occurred was of the flow type; it occurred when the soil sample was saturated with water. Soil movements in the x-, y-, and z-directions were also observed. Further experiments should be performed to realize the rotational landslide.

  20. Modelling of vegetative filter strips in catchment scale erosion control

    Directory of Open Access Journals (Sweden)

    K. RANKINEN

    2008-12-01

    The efficiency of vegetative filter strips to reduce erosion was assessed by simulation modelling in two catchments located in different parts of Finland. The areas of high erosion risk were identified with a Geographical Information System (GIS) combining digital spatial data on soil type, land use and field slopes. The efficiency of vegetative filter strips (VFS) was assessed with the ICECREAM model, a derivative of the CREAMS model which has been modified and adapted for Finnish conditions. The simulation runs were performed without the filter strips and with strips of 1 m, 3 m and 15 m width. Four soil types and two crops (spring barley, winter wheat) were studied. The model assessments for fields without VFS showed that the amount of erosion is clearly dominated by the slope gradient. The soil texture had a greater impact on erosion than the crop. The impact of the VFS on erosion reduction was highly variable. These model results were scaled up by combining them with the digital spatial data. The simulated efficiency of the VFS in erosion control in the whole catchment varied from 50 to 89%. A GIS-based erosion risk map of the other study catchment and an identification carried out by manual study using topographical paper maps were evaluated and validated by ground truthing. Both methods were able to identify major erosion risk areas, i.e. areas where VFS are particularly necessary. A combination of the GIS and the field method gives the best outcome.

  1. A multi-scale strength model with phase transformation

    Science.gov (United States)

    Barton, N.; Arsenlis, A.; Rhee, M.; Marian, J.; Bernier, J.; Tang, M.; Yang, L.

    2011-06-01

    We present a multi-scale strength model that includes phase transformation. In each phase, strength depends on pressure, strain rate, temperature, and evolving dislocation density descriptors. A donor cell type of approach is used for the transfer of dislocation density between phases. While the shear modulus can be modeled as smooth through the BCC to rhombohedral transformation in vanadium, the multi-phase strength model predicts abrupt changes in the material strength due to changes in dislocation kinetics. In the rhombohedral phase, the dislocation density is decomposed into populations associated with short and long Burgers vectors. Strength model construction employs an information passing paradigm to span from the atomistic level to the continuum level. Simulation methods in the overall hierarchy include density functional theory, molecular statics, molecular dynamics, dislocation dynamics, and continuum based approaches. We demonstrate the behavior of the model through simulations of Rayleigh Taylor instability growth experiments of the type used to assess material strength at high pressure and strain rate. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 (LLNL-ABS-464695).

  2. A simple model for wide area hydraulic modelling in data sparse areas.

    Science.gov (United States)

    Neal, J.; Schumann, G.; Bates, P.

    2012-04-01

    The simulation of wave propagation, level and discharge in large river systems at continental or global scales, for applications ranging from regional flood risk assessment to climate change impacts, requires computationally efficient hydraulic models that can be applied to locations where limited or no ground-based data are available. Many existing global or large-scale river routing schemes use kinematic or simpler wave models, which, although computationally efficient, are unable to simulate backwatering and floodplain interactions that are key controllers of wave propagation in many large rivers. Diffusive wave models are often suggested as a more physically based alternative; however, the lack of inertia in the scheme leads to long simulation times due to the very low slopes in large rivers. We present a Cartesian grid two-dimensional hydraulic model with a parameterised sub-grid scale representation of the 1D channel network that can be built entirely from remotely sensed data. For both channel and floodplain flows the model simulates a simplified shallow water wave (diffusion and inertia) using an explicit finite difference scheme, which was chosen because of its computational efficiency relative to both explicit diffusive and full shallow water wave models. The model was applied to an 800 km reach of the River Niger that includes the complex waterways and lakes of the Niger Inland Delta in Mali. This site has the advantage of having no or low vegetation cover, and hence SRTM represents (close to) bare-earth floodplain elevations. Floodplain elevation was defined at 1 km resolution from SRTM data to reduce pixel-to-pixel noise, while the widths of main rivers and floodplain channels were estimated from Landsat imagery. The channel bed was defined as a depth below the adjacent floodplain using hydraulic geometry principles, with a power law relationship between channel width and depth. This was first approximated from empirical data from a range of other sites
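The hydraulic-geometry step described in this record, inferring bed depth from remotely sensed channel width, reduces to a power law. A sketch with hypothetical coefficients (a and b stand in for values that would be fitted to empirical data; they are not the paper's numbers):

```python
def channel_depth_from_width(width_m, a=0.30, b=0.78):
    """Hydraulic-geometry power law d = a * w**b giving channel depth
    (m) from channel width (m); a and b are placeholder coefficients,
    to be fitted against empirical width-depth data."""
    return a * width_m ** b

def bed_elevation(floodplain_elev_m, width_m):
    """Channel bed defined as a depth below the adjacent floodplain,
    as in the sub-grid channel parameterisation described above."""
    return floodplain_elev_m - channel_depth_from_width(width_m)
```

With only width (from Landsat) and floodplain elevation (from SRTM), this closes the channel geometry without any in-situ bathymetry, which is the point of the remotely sensed parameterisation.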

  3. Simplified scaling model for the THETA-pinch

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, K. J.; Thomson, D. B.

    1982-02-01

    A simple 1D scaling model for the fast THETA-pinch was developed and written as a code that would be flexible, inexpensive in computer time, and readily available for use with the Los Alamos explosive-driven high-magnetic-field program. The simplified model uses three successive separate stages: (1) a snowplow-like radial implosion, (2) an idealized resistive annihilation of the reverse bias field, and (3) an adiabatic compression stage of a BETA = 1 plasma for which ideal pressure balance is assumed to hold. The code uses one adjustable fitting constant whose value was first determined by comparison with results from the Los Alamos Scylla III, Scyllacita, and Scylla IA THETA-pinches.
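Stage (3) rests on ideal pressure balance for a BETA = 1 plasma together with adiabatic compression; those closed-form relations can be sketched directly. A plasma column of fixed length is assumed, and the function names are illustrative, not the report's code:

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability, T*m/A

def beta_one_pressure(b_field):
    """Ideal pressure balance for a BETA = 1 plasma: the plasma
    pressure equals the magnetic pressure, p = B**2 / (2*mu0)."""
    return b_field ** 2 / (2.0 * MU0)

def adiabatic_pressure(p0, r0, r, gamma=5.0 / 3.0):
    """Adiabatic compression p * V**gamma = const for a column of
    fixed length, where V scales as r**2: p = p0 * (r0/r)**(2*gamma)."""
    return p0 * (r0 / r) ** (2.0 * gamma)
```

Equating the two expressions at each time step links the compressed column radius to the applied field, which is essentially what the adiabatic stage of such a 0D/1D scaling code does.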

  4. Uncertainty Quantification for Large-Scale Ice Sheet Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ghattas, Omar [Univ. of Texas, Austin, TX (United States)

    2016-02-05

    This report summarizes our work to develop advanced forward and inverse solvers and uncertainty quantification capabilities for a nonlinear 3D full Stokes continental-scale ice sheet flow model. The components include: (1) forward solver: a new state-of-the-art parallel adaptive scalable high-order-accurate mass-conservative Newton-based 3D nonlinear full Stokes ice sheet flow simulator; (2) inverse solver: a new adjoint-based inexact Newton method for solution of deterministic inverse problems governed by the above 3D nonlinear full Stokes ice flow model; and (3) uncertainty quantification: a novel Hessian-based Bayesian method for quantifying uncertainties in the inverse ice sheet flow solution and propagating them forward into predictions of quantities of interest such as ice mass flux to the ocean.

  5. Reconstruction of groundwater depletion using a global scale groundwater model

    Science.gov (United States)

    de Graaf, Inge; van Beek, Rens; Sutanudjaja, Edwin; Wada, Yoshi; Bierkens, Marc

    2015-04-01

    Groundwater forms an integral part of the global hydrological cycle and is the world's largest accessible source of fresh water to satisfy human water needs. It buffers variable recharge rates over time, thereby effectively sustaining river flows in times of drought as well as evaporation in areas with shallow water tables. Moreover, although lateral groundwater flows are often slow, they cross topographic and administrative boundaries at appreciable rates. Despite the importance of groundwater, most global scale hydrological models do not consider surface water-groundwater interactions or include a lateral groundwater flow component. The main reason for this omission is the lack of consistent global-scale hydrogeological information needed to arrive at a more realistic representation of the groundwater system, i.e., information on aquifer depths and the presence of confining layers. The latter holds vital information on the accessibility and quality of the global groundwater resource. In this study we developed a high resolution (5 arc-minutes) global scale transient groundwater model comprising confined and unconfined aquifers. This model is based on MODFLOW (McDonald and Harbaugh, 1988) and coupled with the land-surface model PCR-GLOBWB (van Beek et al., 2011) via recharge and surface water levels. Aquifer properties were based on newly derived estimates of aquifer depths (de Graaf et al., 2014b) and the thickness of confining layers from an integration of lithological and topographical information. They were further parameterized using available global datasets on lithology (Hartmann and Moosdorf, 2011) and permeability (Gleeson et al., 2014). In a sensitivity analysis the model was run with various hydrogeological parameter settings, under natural recharge only. Scenarios of past groundwater abstractions and corresponding recharge (Wada et al., 2012; de Graaf et al., 2014a) were evaluated. The resulting estimates of groundwater depletion are lower than

  6. HD Hydrological modelling at catchment scale using rainfall radar observations

    Science.gov (United States)

    Ciampalini, Rossano; Follain, Stéphane; Raclot, Damien; Crabit, Armand; Pastor, Amandine; Augas, Julien; Moussa, Roger; Colin, François; Le Bissonnais, Yves

    2017-04-01

    Hydrological simulations at catchment scale depend on the quality and availability of both soil and rainfall data. Soil data are relatively easy to collect, although their quality depends on the resources devoted to the task; rainfall observations, in contrast, require further effort because of their spatio-temporal variability. Rainfall is normally recorded with rain gauges located in the catchment, which provide detailed temporal data, but their representativeness is limited to the point where the data are collected. Combining different gauges in space can give a better representation of a rainfall event, but spatialization is often the main obstacle to obtaining data close to reality. For several years, radar observations have bridged this gap by providing continuous data registration that, when properly calibrated, offers adequate, continuous coverage in space and time for medium-sized catchments. Here, we use radar records for the south of France over the La Peyne catchment, following the protocol adopted by the national meteorological agency, with a resolution of 1 km in space and 5 min in time. We present a model able to perform continuous hydrological and soil erosion simulations from rainfall radar observations. The model is semi-theoretically based: it simulates water fluxes (infiltration-excess overland flow, saturation overland flow, infiltration and channel routing) with a kinematic wave using the St. Venant equation, a simplified "bucket" conceptual model for ground water, and an empirical representation of sediment load as adopted in models such as STREAM-LANDSOIL (Cerdan et al., 2002; Ciampalini et al., 2012). The advantage of this approach is that it furnishes a dynamic representation and simulation of rainfall-runoff events more easily than using rainfall spatialized from meteorological stations, and offers a new look at the spatial component of the events.
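    The "bucket" ground-water store with infiltration-excess and saturation overland flow can be sketched as follows; all parameter names and values are illustrative assumptions, not the STREAM-LANDSOIL formulation:

```python
# Minimal sketch of a "bucket" conceptual store: rainfall in excess of the
# infiltration capacity runs off directly, the bucket spills when full
# (saturation overland flow), and baseflow recedes as a linear reservoir.
def step(rain_mm, store_mm, capacity_mm=100.0, infil_cap_mm=20.0, k_rec=0.05):
    """One time step: returns (new_store, overland_flow, baseflow), all in mm."""
    infiltration = min(rain_mm, infil_cap_mm)   # infiltration-excess partition
    overland = rain_mm - infiltration
    store = store_mm + infiltration
    if store > capacity_mm:                     # saturation overland flow
        overland += store - capacity_mm
        store = capacity_mm
    baseflow = k_rec * store                    # linear-reservoir recession
    return store - baseflow, overland, baseflow

store, q_over, q_base = step(rain_mm=30.0, store_mm=95.0)
print(store, q_over, q_base)  # 95.0 25.0 5.0
```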

  7. Health Literacy Scale and Causal Model of Childhood Overweight.

    Science.gov (United States)

    Intarakamhang, Ungsinun; Intarakamhang, Patrawut

    2017-01-28

    WHO focuses on developing health literacy (HL), referring to cognitive and social skills. Our objectives were to develop a scale for evaluating the HL level of overweight Thai children, and to develop a path model of health behavior (HB) for preventing obesity. This cross-sectional study used a mixed method. Overall, 2,000 school students aged 9 to 14 yr were recruited by stratified random sampling from all parts of Thailand in 2014. Data were analyzed by CFA in LISREL. Reliability of the HL and HB scales ranged from 0.62 to 0.82 and factor loadings ranged from 0.33 to 0.80; the subjects had a low level of HL (60.0%) and a fair level of HB (58.4%), and in the path model HB could be influenced by HL via three paths. Path 1 started from health knowledge and understanding, which directly influenced eating behavior (effect size β = 0.13). Path 2 involved media literacy and making appropriate health-related decisions (β = 0.07, 0.98, and 0.05, respectively). Path 3 was accessing information and services, which influenced communicating for added skills, media literacy, and making appropriate health-related decisions (β = 0.63, 0.93, 0.98, and 0.05). Finally, the basic level of HL, measured from health knowledge and understanding and from accessing information and services, influenced HB through the interactive and critical levels (β = 0.76, 0.97, and 0.55, respectively). The HL scale for overweight Thai children should be implemented as a screening tool, with HL developed through public policy for health promotion.

  8. Multi-scale modelling for HEDP experiments on Orion

    Science.gov (United States)

    Sircombe, N. J.; Ramsay, M. G.; Hughes, S. J.; Hoarty, D. J.

    2016-05-01

    The Orion laser at AWE couples high energy long-pulse lasers with high intensity short-pulses, allowing material to be compressed beyond solid density and heated isochorically. This experimental capability has been demonstrated as a platform for conducting High Energy Density Physics material properties experiments. A clear understanding of the physics in experiments at this scale, combined with a robust, flexible and predictive modelling capability, is an important step towards more complex experimental platforms and ICF schemes which rely on high power lasers to achieve ignition. These experiments present a significant modelling challenge: the system is characterised by hydrodynamic effects over nanoseconds, driven by long-pulse lasers or the pre-pulse of the petawatt beams, and by fast electron generation, transport, and heating effects over picoseconds, driven by short-pulse high intensity lasers. We describe the approach taken at AWE: integrating a number of codes, each of which captures the detailed physics for its spatial and temporal scale. Simulations of the heating of buried aluminium microdot targets are discussed, and we consider the role such tools can play in understanding the impact of changes to the laser parameters, such as frequency and pre-pulse, as well as in understanding effects which are difficult to observe experimentally.

  9. A small-scale anatomical dosimetry model of the liver

    Science.gov (United States)

    Stenvall, Anna; Larsson, Erik; Strand, Sven-Erik; Jönsson, Bo-Anders

    2014-07-01

    Radionuclide therapy is a growing and promising approach for treating and prolonging the lives of patients with cancer. For therapies where high activities are administered, the liver can become a dose-limiting organ, often with a complex, non-uniform activity distribution and a resulting non-uniform absorbed-dose distribution. This paper therefore presents a small-scale dosimetry model for various source-target combinations within the human liver microarchitecture. Using Monte Carlo simulations, Medical Internal Radiation Dose (MIRD) formalism-compatible specific absorbed fractions were calculated for monoenergetic electrons, photons, and alpha particles, and for 125I, 90Y, 211At, 99mTc, 111In, 177Lu, 131I and 18F. S values and the ratio of local absorbed dose to the whole-organ average absorbed dose were calculated, enabling a transformation of dosimetry calculations from the macro- to the microstructure level. For heterogeneous activity distributions, for example uptake in Kupffer cells of radionuclides emitting low-energy electrons (125I) or high-LET alpha particles (211At), the absorbed dose to the part of the space of Disse closest to the source was more than eight and five times the average absorbed dose to the liver, respectively. With the increasing interest in radionuclide therapy of the liver, the presented model is an applicable tool for small-scale liver dosimetry for studying detailed dose-effect relationships in the liver.
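    The MIRD-style dose calculation that these S values feed can be sketched as follows; the cumulated activity and S value used are placeholder assumptions, not the paper's Monte Carlo results:

```python
# MIRD formalism sketch: absorbed dose to a target region equals the
# time-integrated (cumulated) activity in the source region times the
# S value for that source-target pair. Values below are assumed.
def absorbed_dose_gy(cumulated_activity_bq_s: float,
                     s_value_gy_per_bq_s: float) -> float:
    """Absorbed dose (Gy) = cumulated activity (Bq*s) * S value (Gy/(Bq*s))."""
    return cumulated_activity_bq_s * s_value_gy_per_bq_s

A_tilde = 3.6e9   # cumulated activity in Kupffer cells, Bq*s (assumed)
S = 2.5e-10       # S value for a chosen source-target pair, Gy/(Bq*s) (assumed)
print(round(absorbed_dose_gy(A_tilde, S), 3))  # 0.9 Gy
```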

  10. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, C; Abdulla, G; Critchlow, T

    2003-01-31

    This paper discusses using the wavelet modeling technique as a mechanism for querying large-scale spatio-temporal scientific simulation data. Wavelets have been used successfully in time series analysis and in answering surprise and trend queries. Our approach, however, is driven by the need for compression, which is necessary for viable throughput given the size of the targeted data, along with the end-user requirements of the discovery process. Our users would like to run fast queries to check the validity of the simulation algorithms used. In some cases users are willing to accept approximate results if the answer comes back within a reasonable time. In other cases they might want to identify a certain phenomenon and track it over time. We face a unique problem because of the data set sizes. It may take months to generate one set of the targeted data; because of its sheer size, the data cannot be stored on disk for long and thus needs to be analyzed immediately before it is sent to tape. We integrated wavelets within AQSIM, a system that we are developing to support exploration and analyses of tera-scale data sets. We discuss the way we utilized wavelet decomposition in our domain to facilitate compression and to answer a specific class of queries that is harder to answer with any other modeling technique. We also discuss some of the shortcomings of our implementation and how to address them.
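    A minimal sketch of the idea, assuming a simple Haar average/difference step (not AQSIM's actual decomposition): keep only the coarse approximation coefficients as the compressed form, and answer an approximate aggregate query from them directly:

```python
def haar_step(x):
    """One level of the Haar transform: (approximation, detail) coefficients."""
    a = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return a, d

# Compress by keeping only the coarse approximation; a mean query can then be
# answered from half the data without touching the full-resolution signal.
signal = [4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 5.0, 7.0]
approx, detail = haar_step(signal)
print(approx)                     # [5.0, 11.0, 8.0, 6.0]
print(sum(approx) / len(approx))  # 7.5 -- equals the exact mean of signal
```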

  11. Large-eddy simulation of charged particle flows to model sandstorms

    Science.gov (United States)

    Rahman, Mustafa; Cheng, Wan; Samtaney, Ravi

    2016-11-01

    Intense electric fields and lightning have been observed in sandstorms. We propose to investigate the physical mechanisms essential for the production and sustenance of large-scale electric fields in sandstorms. Our central hypothesis is that the turbulent transport of charged sand particles is a necessary condition for attaining sustained large-scale electric fields in sandstorms. Our investigation relies on simulating turbulent two-phase (air and suspended sand particles) flows in which the flow of air is governed by the filtered Navier-Stokes equations with a subgrid-scale model in a large-eddy simulation (LES) setting, while dust particles are modeled in an Eulerian frame using a version of the Direct Quadrature Method of Moments. For the fluid phase, the LES of the incompressible turbulent boundary layer employs the stretched spiral vortex subgrid-scale model and a virtual wall model similar to the work of Cheng, Pullin & Samtaney. We will quantify the effects of different sand particle distributions and turbulent intensities on the root-mean-square of the generated electric fields. Supported by KAUST OCRF under Award Number URF/1/1704-01-01. The supercomputer Shaheen at KAUST is used for all simulations.

  12. Exploitation of Parallelism in Climate Models

    Energy Technology Data Exchange (ETDEWEB)

    Baer, F.; Tribbia, J.J.; Williamson, D.L.

    1999-03-01

    The US Department of Energy (DOE), through its CHAMMP initiative, hopes to develop the capability to make meaningful regional climate forecasts on time scales exceeding a decade, such capability to be based on numerical-prediction-type models. We propose research to contribute to each of the specific items enumerated in the CHAMMP announcement (Notice 91-3): to consider theoretical limits to prediction of climate and climate change on appropriate time scales, to develop new mathematical techniques to utilize massively parallel processors (MPP), to actually utilize MPPs as a research tool, and to develop improved representations of some processes essential to climate prediction. In particular, our goals are to: (1) reconfigure the prediction equations such that the time iteration process can be compressed by use of MPP architecture, and develop appropriate algorithms; (2) develop local subgrid scale models which can provide time- and space-dependent parameterization for a state-of-the-art climate model, to minimize the scale resolution necessary for a climate model, and utilize MPP capability to simultaneously integrate those subgrid models and their statistics; (3) capitalize on the MPP architecture to study the inherent ensemble nature of the climate problem. By careful choice of initial states, many realizations of the climate system can be determined concurrently and more realistic assessments of the climate prediction can be made in a realistic time frame. To explore these initiatives, we will exploit all available computing technology, and in particular MPP machines. We anticipate that significant improvements in modeling of climate on decadal and longer time scales for regional space scales will result from our efforts.

  13. MATHEMATICAL MODELING OF FLOW PARAMETERS FOR SINGLE WIND TURBINE

    Directory of Open Access Journals (Sweden)

    2016-01-01

    Full Text Available It is known that the construction of several large wind farms is planned on the territory of the Russian Federation. Tasks connected with the design and efficiency evaluation of wind farms are therefore in demand. One possible direction in design is mathematical modeling. The method of large eddy simulation, developed within computational fluid dynamics, allows the unsteady structure of the flow to be reproduced in detail and various integrated values to be determined. This work presents the calculation of a single wind turbine installation by means of large eddy simulation and the actuator line method along the turbine blade. The computational domain was a box, discretised with an adapted unstructured grid. The mathematical model included the continuity and momentum equations for incompressible fluid. Large-scale vortex structures were calculated by integrating the filtered equations. The calculation was carried out with the Smagorinsky model for the subgrid-scale turbulent viscosity. The geometrical parameters of the wind turbine were taken from open Internet sources. All physical values were defined at the center of each computational cell. Terms in the equations were approximated with second-order accuracy in time and space. The coupled velocity-pressure equations were solved with the iterative PIMPLE algorithm. Eighteen physical values were calculated at each time step, so the resources of a high-performance cluster were required. The wake-flow calculation for the three-bladed turbine yielded average and instantaneous values of velocity, pressure, subgrid kinetic energy, turbulent viscosity, and the components of the subgrid stress tensor. The results matched known experimental and numerical-simulation results, testifying to the opportunity
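    The Smagorinsky closure mentioned above determines the subgrid turbulent viscosity from the resolved strain rate; a minimal sketch, with an assumed model constant and illustrative filter-width and strain-rate values:

```python
# Smagorinsky subgrid-scale closure: nu_t = (Cs * Delta)**2 * |S|,
# where |S| is the resolved strain-rate magnitude sqrt(2 S_ij S_ij).
# Cs and the sample inputs below are illustrative assumptions.
def smagorinsky_viscosity(strain_rate_mag: float, delta: float,
                          cs: float = 0.17) -> float:
    """Subgrid eddy viscosity (m^2/s) for filter width delta (m)."""
    return (cs * delta) ** 2 * strain_rate_mag

# filter width 2 m, resolved strain-rate magnitude 1.5 1/s
nu_t = smagorinsky_viscosity(1.5, 2.0)
print(round(nu_t, 4))  # (0.17*2)^2 * 1.5 = 0.1734 m^2/s
```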

  14. Fine Scale Projections of Indian Monsoonal Rainfall Using Statistical Models

    Science.gov (United States)

    Kulkarni, S.; Ghosh, S.; Rajendran, K.

    2012-12-01

    years of the Indian precipitation pattern. The reason behind the failure of the bias-corrected model in projecting spatially non-uniform precipitation is the inability of the GCMs to model finer-scale geophysical processes under changed conditions. The results highlight the need to revisit bias correction methods for future projections, to incorporate finer-scale processes.

  15. Application of computer-aided multi-scale modelling framework - Aerosol case study

    DEFF Research Database (Denmark)

    Heitzig, Martina; Gregson, Christopher; Sin, Gürkan

    2011-01-01

    A computer-aided modelling tool for efficient multi-scale modelling has been developed and is applied to solve a multi-scale modelling problem related to design and evaluation of fragrance aerosol products. The developed modelling scenario spans three length scales and describes how droplets...

  16. A rate-dependent multi-scale crack model for concrete

    NARCIS (Netherlands)

    Karamnejad, A.; Nguyen, V.P.; Sluys, L.J.

    2013-01-01

    A multi-scale numerical approach for modeling cracking in heterogeneous quasi-brittle materials under dynamic loading is presented. In the model, a discontinuous crack model is used at macro-scale to simulate fracture and a gradient-enhanced damage model has been used at meso-scale to simulate

  17. Extension of landscape-based population viability models to ecoregional scales for conservation planning

    Science.gov (United States)

    Thomas W. Bonnot; Frank R. Thompson III; Joshua Millspaugh

    2011-01-01

    Landscape-based population models are potentially valuable tools in facilitating conservation planning and actions at large scales. However, such models have rarely been applied at ecoregional scales. We extended landscape-based population models to ecoregional scales for three species of concern in the Central Hardwoods Bird Conservation Region and compared model...

  18. A hybrid pore-scale and continuum-scale model for solute diffusion, reaction, and biofilm development in porous media

    Science.gov (United States)

    Tang, Youneng; Valocchi, Albert J.; Werth, Charles J.

    2015-03-01

    It is a challenge to upscale solute transport in porous media for multispecies bio-kinetic reactions because of incomplete mixing within the elementary volume and because biofilm growth can change porosity and affect pore-scale flow and diffusion. To address this challenge, we present a hybrid model that couples pore-scale subdomains to continuum-scale subdomains. While the pore-scale subdomains involving significant biofilm growth and reaction are simulated using pore-scale equations, the other subdomains are simulated using continuum-scale equations to save computational time. The pore-scale and continuum-scale subdomains are coupled using a mortar method to ensure continuity of solute concentration and flux at the interfaces. We present results for a simplified two-dimensional system, neglect advection, and use dual Monod kinetics for solute utilization and biofilm growth. The results based on the hybrid model are consistent with the results based on a pore-scale model for three test cases that cover a wide range of Damköhler (Da = reaction rate/diffusion rate) numbers for both homogeneous (spatially periodic) and heterogeneous pore structures. We compare results from the hybrid method with an upscaled continuum model and show that the latter is valid only for cases of small Damköhler numbers, consistent with other results reported in the literature.
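    The dual Monod kinetics used for solute utilization and biofilm growth can be sketched as follows; the maximum rate and half-saturation constants are illustrative assumptions, not the paper's parameters:

```python
# Dual Monod kinetics: the specific utilisation rate is limited by both
# substrates (e.g. electron donor and acceptor). q_max, k_d and k_a are
# assumed placeholder values in consistent concentration units.
def dual_monod_rate(s_donor: float, s_acceptor: float,
                    q_max: float = 1.0, k_d: float = 0.5,
                    k_a: float = 0.2) -> float:
    """Specific utilisation rate limited by donor and acceptor concentrations."""
    return q_max * s_donor / (k_d + s_donor) * s_acceptor / (k_a + s_acceptor)

# Each substrate at its half-saturation level halves the rate: 0.5 * 0.5 = 0.25
print(round(dual_monod_rate(0.5, 0.2), 3))  # 0.25
```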

  19. Stainless steel corrosion scale formed in reclaimed water: Characteristics, model for scale growth and metal element release.

    Science.gov (United States)

    Cui, Yong; Liu, Shuming; Smith, Kate; Hu, Hongying; Tang, Fusheng; Li, Yuhong; Yu, Kanghua

    2016-10-01

    Stainless steels generally have extremely good corrosion resistance but are still susceptible to pitting corrosion. As a result, corrosion scales can form on the surface of stainless steel after extended exposure to aggressive aqueous environments. Corrosion scales play an important role in affecting water quality. Our results showed that the interior regions of stainless steel corrosion scales have a high percentage of chromium phases. We reveal the morphology, micro-structure and physicochemical characteristics of stainless steel corrosion scales. Stainless steel corrosion scale is identified as a podiform chromite deposit according to these characteristics, unlike the deposits formed during iron corrosion. A conceptual model to explain the formation and growth of stainless steel corrosion scale is proposed based on its composition and structure. The scale growth process involves pitting corrosion on the stainless steel surface and the consecutive generation and homogeneous deposition of corrosion products, governed by a series of chemical and electrochemical reactions. This model shows the role of corrosion scales in the mechanism of iron and chromium release from pitting-corroded stainless steel materials. The formation of corrosion scale is strongly related to water quality parameters. The presence of HClO results in higher ferric content inside the scales. Cl- and SO42- ions in reclaimed water play an important role in pitting corrosion of stainless steel and promote the formation of scales. Copyright © 2016. Published by Elsevier B.V.

  20. Coupling of a hydrologic, grid-point-based model with a mesoscale atmospheric model. Final report for the period 1 April 1994 to 31 December 1997

    Energy Technology Data Exchange (ETDEWEB)

    Raabe, A.; Moelders, N.; Klingspohn, M.; Simmel, M.

    1998-12-31

    A method to couple a meteorological and a hydrologic model was developed to describe the water cycle in a closed manner. Up to now, mesoscale atmospheric models have only considered the transport of water at the land surface without linking to hydrologic runoff models, i.e., the water cycle was mostly not closed at the land surface. In our case, the two models interact through a mass balance of water. The hydrologic model provides the fields of runoff availability for each part of the surface, which is influenced by the lateral movement of water, and the meteorological model uses differences of these data in its soil wetness equation to predict evapotranspiration. The coupling uses a land-surface resolution of 1 km{sup 2}. The meteorological model reaches this resolution by use of an explicit subgrid in the atmospheric surface layer, while elsewhere a coarser grid is used to predict the quantities relevant to the water cycle. This subgrid scheme allows subgrid-scale evapotranspiration to be produced in more detail and precipitation to be heterogenized. Sensitivity studies show the influence of the coupling on the response of the water-cycle-relevant quantities when various types of land-use data, soil moisture fields and cloud parametrization schemes are used. The explicit subgrid scheme can be used to link hydrologic models to other mesoscale atmospheric models, which is demonstrated for the model of the German Weather Service (Deutschland-Modell, DM). (orig.)

  1. Global fits of GUT-scale SUSY models with GAMBIT

    Science.gov (United States)

    Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; de Austri, Roberto Ruiz; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin

    2017-12-01

    We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos.

  2. Site-scale groundwater flow modelling of Ceberg

    Energy Technology Data Exchange (ETDEWEB)

    Walker, D. [Duke Engineering and Services (United States); Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden)

    1999-06-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Ceberg, which adopts input parameters from the SKB study site near Gideaa, in northern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and in the model of conductive fracture zones. The study uses HYDRASTAR, the SKB stochastic continuum (SC) groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The volumetric flow balance between the regional and site-scale models suggests that the nested modelling and associated upscaling of hydraulic conductivities preserve mass balance only in a general sense. In contrast, a comparison of the base and deterministic (Variant 4) cases indicates that the upscaling is self-consistent with respect to median travel time and median canister flux. These suggest that the upscaling of hydraulic conductivity is approximately self-consistent but the nested modelling could be improved. The Base Case yields the following results for a flow porosity of ε_f = 10^-4 and a flow-wetted surface area of a_r = 0.1 m^2/(m^3 rock): the median travel time is 1720 years, the median canister flux is 3.27×10^-5 m/year, and the median F-ratio is 1.72×10^6 years/m. The base case and the deterministic variant suggest that the variability of the travel times within
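    The Monte Carlo propagation of conductivity variability to travel times can be sketched as follows; the lognormal conductivity statistics, gradient and path length are illustrative assumptions, not HYDRASTAR inputs (only the flow porosity matches the Base Case value):

```python
import random
import statistics

# Hedged sketch of the Monte Carlo idea: sample a lognormal hydraulic
# conductivity, compute an advective travel time t = L * porosity / (K * i),
# and report the median over realisations. Parameter values are illustrative.
random.seed(1)

L = 500.0        # path length, m (assumed)
POROSITY = 1e-4  # flow porosity (Base Case value from the study)
GRADIENT = 1e-3  # hydraulic gradient (assumed)

def travel_time_years() -> float:
    log10_k = random.gauss(-8.0, 1.0)           # log10 K, K in m/s (assumed)
    darcy_velocity = 10 ** log10_k * GRADIENT   # m/s
    seconds = L * POROSITY / darcy_velocity
    return seconds / (3600 * 24 * 365)

times = [travel_time_years() for _ in range(10000)]
print(f"median travel time ~ {statistics.median(times):.0f} years")
```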

  3. Impact of Scattering Model on Disdrometer Derived Attenuation Scaling

    Science.gov (United States)

    Zemba, Michael; Luini, Lorenzo; Nessel, James; Riva, Carlo (Compiler)

    2016-01-01

    NASA Glenn Research Center (GRC), the Air Force Research Laboratory (AFRL), and the Politecnico di Milano (POLIMI) are currently entering the third year of a joint propagation study in Milan, Italy, utilizing the 20 and 40 GHz beacons of the Alphasat TDP5 Aldo Paraboni scientific payload. The Ka- and Q-band beacon receivers were installed at the POLIMI campus in June of 2014 and provide direct measurements of signal attenuation at each frequency. Collocated weather instrumentation provides concurrent measurement of atmospheric conditions at the receiver; included among these weather instruments is a Thies Clima Laser Precipitation Monitor (optical disdrometer) which records droplet size distributions (DSD) and droplet velocity distributions (DVD) during precipitation events. This information can be used to derive the specific attenuation at frequencies of interest and thereby scale measured attenuation data from one frequency to another. Given the ability both to predict the 40 GHz attenuation from the disdrometer and the 20 GHz time series and to directly measure the 40 GHz attenuation with the beacon receiver, the Milan terminal is uniquely able to assess these scaling techniques and refine the methods used to infer attenuation from disdrometer data. In order to derive specific attenuation from the DSD, the forward scattering coefficient must be computed. In previous work this has been done using the Mie scattering model; however, this assumes a spherical droplet shape. The primary goal of this analysis is to assess the impact of the scattering model and droplet shape on disdrometer-derived attenuation predictions by comparing the use of the Mie scattering model to the use of the T-matrix method, which does not assume a spherical droplet. In particular, this paper will investigate the impact of these two scattering approaches on the error of the resulting predictions as well as on the relationship between prediction error and rain rate.
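    Deriving specific attenuation from a binned DSD reduces to a sum over drop-size bins; a minimal sketch, with placeholder extinction cross sections standing in for the Mie or T-matrix values at the beacon frequency:

```python
# Specific attenuation from a drop size distribution:
# k [dB/km] = 4.343e3 * dD * sum_over_bins( sigma_ext(D) * N(D) ),
# with sigma_ext in m^2, N(D) in m^-3 mm^-1 and the bin width dD in mm.
def specific_attenuation_db_per_km(n_d, sigma_ext, d_bin_mm):
    """Sum the per-bin extinction contributions and convert to dB/km."""
    return 4.343e3 * d_bin_mm * sum(s * n for s, n in zip(sigma_ext, n_d))

n_d = [2000.0, 400.0, 30.0]  # N(D) per bin, m^-3 mm^-1 (assumed DSD)
sigma = [1e-8, 4e-7, 3e-6]   # sigma_ext per bin, m^2 (placeholder values, not
                             # actual Mie/T-matrix cross sections)
print(round(specific_attenuation_db_per_km(n_d, sigma, 0.5), 3))  # 0.586 dB/km
```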

  4. Derivation of a GIS-based watershed-scale conceptual model for the St. Jones River Delaware from habitat-scale conceptual models.

    Science.gov (United States)

    Reiter, Michael A; Saintil, Max; Yang, Ziming; Pokrajac, Dragoljub

    2009-08-01

    Conceptual modeling is a useful tool for identifying pathways between drivers, stressors, Valued Ecosystem Components (VECs), and services that are central to understanding how an ecosystem operates. The St. Jones River watershed, DE is a complex ecosystem, and because management decisions must include ecological, social, political, and economic considerations, a conceptual model is a good tool for accommodating the full range of inputs. In 2002, a Four-Component, Level 1 conceptual model was formed for the key habitats of the St. Jones River watershed, but since the habitat level of resolution is too fine for some important watershed-scale issues, we developed a functional watershed-scale model using the existing narrowed habitat-scale models. The narrowed habitat-scale conceptual models and associated matrices developed by Reiter et al. (2006) were combined with data from the 2002 land use/land cover (LULC) GIS-based maps of Kent County in Delaware to assemble a diagrammatic and numerical watershed-scale conceptual model incorporating the calculated weight of each habitat within the watershed. The numerical component of the assembled watershed model was subsequently subjected to the same Monte Carlo narrowing methodology used for the habitat versions to refine the diagrammatic component of the watershed-scale model. The narrowed numerical representation of the model was used to generate forecasts for changes in the parameters "Agriculture" and "Forest", showing that land use changes in these habitats propagated through the results of the model by the weighting factor. The narrowed watershed-scale conceptual model also identified some key parameters upon which to focus research attention and management decisions at the watershed scale. The forecast and simulation results seemed to indicate that the watershed-scale conceptual model does lead to different conclusions than the habitat-scale conceptual models for some issues at the larger watershed scale.

  5. Modelling catchment non-stationarity - multi-scale modelling and data assimilation

    Science.gov (United States)

    Wheater, H. S.; Bulygina, N.; Ballard, C. E.; Jackson, B. M.; McIntyre, N.

    2012-12-01

    Modelling environmental change is in many senses a 'Grand Challenge' for hydrology, but poses major methodological challenges for hydrological models. Conceptual models represent complex processes in a simplified and spatially aggregated manner; typically parameters have no direct relationship to measurable physical properties. Calibration using observed data results in parameter equifinality, unless highly parsimonious model structures are employed. Use of such models to simulate effects of catchment non-stationarity is essentially speculative, unless attention is given to the analysis of parameter temporal variability in a non-stationary observation record. Black-box models are similarly constrained by the information content of the observational data. In contrast, distributed physics-based models provide a stronger theoretical basis for the prediction of change. However, while such models have parameters that are in principle measurable, in practice, for catchment-scale application, the measurement scale is inconsistent with the scale of model representation, the costs associated with such an exercise are high, and key properties are spatially variable, often strongly non-linear, and highly uncertain. In this paper we present a framework for modelling catchment non-stationarity that integrates information (with uncertainty) from multiple models and data sources. The context is the need to model the effects of agricultural land use change at multiple scales. A detailed UK multi-scale and multi-site experimental programme has provided data to support high resolution physics-based models of runoff processes that can, for example, represent the effects of soil structural change (due to grazing densities or trafficking), localised tree planting and drainage. Such models necessarily have high spatial resolution (1 m in the horizontal plane, 1 cm in the vertical in this case), and hence can be applied at the scale of a field or hillslope element, but would be

  6. Multi-scale salient feature extraction on mesh models

    KAUST Repository

    Yang, Yongliang

    2012-01-01

    We present a new method of extracting multi-scale salient features on meshes. It is based on robust estimation of curvature on multiple scales. The correspondence between a salient feature and the scale of interest can be established straightforwardly: detailed features appear on small scales, while features carrying more global shape information show up on large scales. We demonstrate that this multi-scale description of features accords with human perception and can further be used for several applications, such as feature classification and viewpoint selection. Experiments show that our method is very helpful as a multi-scale analysis tool for studying 3D shapes. © 2012 Springer-Verlag.

  7. Simulation of Acoustics for Ares I Scale Model Acoustic Tests

    Science.gov (United States)

    Putnam, Gabriel; Strutzenberg, Louise L.

    2011-01-01

    The Ares I Scale Model Acoustics Test (ASMAT) is a series of live-fire tests of scaled rocket motors meant to simulate the conditions of the Ares I launch configuration. These tests have provided a well documented set of high fidelity acoustic measurements useful for validation including data taken over a range of test conditions and containing phenomena like Ignition Over-Pressure and water suppression of acoustics. To take advantage of this data, a digital representation of the ASMAT test setup has been constructed and test firings of the motor have been simulated using the Loci/CHEM computational fluid dynamics software. Results from ASMAT simulations with the rocket in both held down and elevated configurations, as well as with and without water suppression have been compared to acoustic data collected from similar live-fire tests. Results of acoustic comparisons have shown good correlation with the amplitude and temporal shape of pressure features and reasonable spectral accuracy up to approximately 1000 Hz. Major plume and acoustic features have been well captured including the plume shock structure, the igniter pulse transient, and the ignition overpressure.

  8. URBAN MORPHOLOGY FOR HOUSTON TO DRIVE MODELS-3/CMAQ AT NEIGHBORHOOD SCALES

    Science.gov (United States)

    Air quality simulation models applied at various horizontal scales require different degrees of treatment in the specifications of the underlying surfaces. As we model neighborhood scales (~1 km horizontal grid spacing), the representation of urban morphological structures (e....

  9. Modelling aggregation on the large scale and regularity on the small scale in spatial point pattern datasets

    DEFF Research Database (Denmark)

    Lavancier, Frédéric; Møller, Jesper

    We consider a dependent thinning of a regular point process with the aim of obtaining aggregation on the large scale and regularity on the small scale in the resulting target point process of retained points. Various parametric models for the underlying processes are suggested and the properties ...

  10. Macro and micro-scale modeling of polyurethane foaming processes

    Science.gov (United States)

    Geier, S.; Piesche, M.

    2014-05-01

    Mold filling processes of refrigerators, car dashboards or steering wheels are some of the many application areas of polyurethane foams. The design of these processes still mainly relies on empirical approaches. Therefore, we first developed a modeling approach describing mold filling processes in complex geometries. Hence, it is possible to study macroscopic foam flow and to identify voids. The final properties of polyurethane foams may vary significantly depending on the location within a product. Additionally, the local foam structure influences foam properties like thermal conductivity or impact strength significantly. It is neither possible nor would it be efficient to model complex geometries completely on bubble scale. For this reason, we developed a modeling approach describing the bubble growth and the evolution of the foam structure for a limited number of bubbles in a representative volume. Finally, we coupled our two simulation approaches by introducing tracer particles into our mold filling simulations. Through this coupling, a basis for studying the evolution of the local foam structure in complex geometries is provided.

  11. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    Energy Technology Data Exchange (ETDEWEB)

    Bhatele, Abhinav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-13

    Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.

  12. Scaling exponents in space plasmas: a fractional Levy model

    Science.gov (United States)

    Watkins, N. W.; Credgington, D.; Hnat, B.; Chapman, S. C.; Freeman, M. P.; Greenhough, J.

    Mandelbrot introduced the concept of fractals to describe the non-Euclidean shape of many aspects of the natural world. In the time series context he proposed the use of fractional Brownian motion (fBm) to model non-negligible temporal persistence (the "Joseph Effect") and Levy flights to quantify large discontinuities (the "Noah Effect"). In space physics these effects are manifested as intermittency and long-range correlation, well-established features of geomagnetic indices and their solar wind drivers. In order to capture and quantify the Noah and Joseph effects in one compact model, we propose the application of a bridge, fractional Levy motion (fLm), to space physics. We perform an initial evaluation of some previous scaling results in this paradigm and show how fLm can model the previously observed exponents (physics/0509058, in press, Space Science Reviews). We discuss the similarities and differences between fLm and ambivalent processes based on fractional kinetic equations (e.g. Brockmann et al., Nature, 2006) and suggest some new directions for the future.
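A toy numerical sketch of the fLm construction, heavy-tailed alpha-stable increments filtered through a power-law memory kernel, can be written in a few lines. This illustrates the idea rather than the authors' method; the sampler is the standard Chambers-Mallows-Stuck formula for symmetric alpha-stable variates, and all parameter values are arbitrary:

```python
import numpy as np

def stable_rvs(alpha, size, rng):
    """Symmetric alpha-stable variates via the Chambers-Mallows-Stuck formula."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

def flm_sketch(n, alpha, H, seed=0):
    """Rough Riemann-Liouville discretization of fractional Levy motion:
    X[t] = sum_{k <= t} (t - k + 1)**(H - 1/alpha) * xi[k].
    alpha < 2 gives heavy-tailed jumps (Noah effect); H != 1/alpha gives
    long-range memory (Joseph effect). For alpha = 2, H = 0.5 this reduces
    to ordinary Brownian motion."""
    rng = np.random.default_rng(seed)
    noise = stable_rvs(alpha, n, rng)
    kernel = np.arange(1, n + 1) ** (H - 1.0 / alpha)
    return np.convolve(noise, kernel)[:n]

x = flm_sketch(512, alpha=1.7, H=0.8)
print(x.shape, np.isfinite(x).all())
```

The two exponents are controlled independently: alpha sets the tail of the increments, H the self-similarity, which is exactly the "one compact model for Noah and Joseph effects" point of the abstract.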

  13. A large-scale forest landscape model incorporating multi-scale processes and utilizing forest inventory data

    Science.gov (United States)

    Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian. Yang

    2013-01-01

    Two challenges confronting forest landscape models (FLMs) are how to simulate fine, stand-scale processes while making large-scale (i.e., >10^7 ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...

  14. A Plume Scale Model of Chlorinated Ethene Degradation

    DEFF Research Database (Denmark)

    Murray, Alexandra Marie; Broholm, Mette Martina; Badin, Alice

    Although much is known about the biotic degradation pathways of chlorinated solvents, application of the degradation mechanism at the field scale is still challenging [1]. There are many microbial kinetic models to describe the reductive dechlorination in soil and groundwater, however none of them...... leaked from a dry cleaning facility, and a 2 km plume extends from the source in an unconfined aquifer of homogenous fluvio-glacial sand. The area has significant iron deposits, most notably pyrite, which can abiotically degrade chlorinated ethenes. The source zone underwent thermal (steam) remediation...... in 2006; the plume has received no treatment. The evolution of the site has been intensely documented since before the source treatment. This includes microbial analysis – Dehalococcoides sp. and vcrA genes have been identified and quantified by qPCR – and dual carbon-chlorine isotope analysis [1...

  15. Modeling of large-scale oxy-fuel combustion processes

    DEFF Research Database (Denmark)

    Yin, Chungen

    2012-01-01

    Numerous studies have been conducted in order to implement oxy-fuel combustion with flue gas recycle in conventional utility boilers as an effective effort toward carbon capture and storage. However, combustion under oxy-fuel conditions is significantly different from conventional air-fuel firing......, among which radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the nongray-gas effects in modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609 MW utility boiler is numerically studied, in which...... calculation of the oxy-fuel WSGGM remarkably over-predicts the radiative heat transfer to the furnace walls and under-predicts the gas temperature at the furnace exit plane, which also results in a higher incomplete combustion in the gray calculation. Moreover, the gray and non-gray calculations of the same...

  16. Scale Adaptive Simulation Model for the Darrieus Wind Turbine

    Science.gov (United States)

    Rogowski, K.; Hansen, M. O. L.; Maroński, R.; Lichota, P.

    2016-09-01

    Accurate prediction of aerodynamic loads for the Darrieus wind turbine using more or less complex aerodynamic models is still a challenge. One of the problems is the small amount of experimental data available to validate the numerical codes. The major objective of the present study is to examine the scale adaptive simulation (SAS) approach for performance analysis of a one-bladed Darrieus wind turbine working at a tip speed ratio of 5 and at a blade Reynolds number of 40 000. The three-dimensional incompressible unsteady Navier-Stokes equations are used. Numerical results of aerodynamic loads and wake velocity profiles behind the rotor are compared with experimental data taken from literature. The level of agreement between CFD and experimental results is reasonable.

  17. A methodology for ecosystem-scale modeling of selenium

    Science.gov (United States)

    Presser, T.S.; Luoma, S.N.

    2010-01-01

    The main route of exposure for selenium (Se) is dietary, yet regulations lack biologically based protocols for evaluations of risk. We propose here an ecosystem-scale model that conceptualizes and quantifies the variables that determine how Se is processed from water through diet to predators. This approach uses biogeochemical and physiological factors from laboratory and field studies and considers loading, speciation, transformation to particulate material, bioavailability, bioaccumulation in invertebrates, and trophic transfer to predators. Validation of the model is through data sets from 29 historic and recent field case studies of Se-exposed sites. The model links Se concentrations across media (water, particulate, tissue of different food web species). It can be used to forecast toxicity under different management or regulatory proposals or as a methodology for translating a fish-tissue (or other predator tissue) Se concentration guideline to a dissolved Se concentration. The model illustrates some critical aspects of implementing a tissue criterion: 1) the choice of fish species determines the food web through which Se should be modeled, 2) the choice of food web is critical because the particulate material to prey kinetics of bioaccumulation differs widely among invertebrates, 3) the characterization of the type and phase of particulate material is important to quantifying Se exposure to prey through the base of the food web, and 4) the metric describing partitioning between particulate material and dissolved Se concentrations allows determination of a site-specific dissolved Se concentration that would be responsible for that fish body burden in the specific environment. The linked approach illustrates that environmentally safe dissolved Se concentrations will differ among ecosystems depending on the ecological pathways and biogeochemical conditions in that system. Uncertainties and model sensitivities can be directly illustrated by varying exposure
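The translation step described in the abstract (turning a fish-tissue criterion into a site-specific dissolved concentration) amounts to inverting a chain of partitioning and trophic-transfer factors. A minimal sketch of that inversion; the Kd and TTF values here are purely hypothetical, chosen for illustration:

```python
def dissolved_allowed(c_fish_ug_g, kd_l_kg, ttf_invert, ttf_fish):
    """Invert the linked chain dissolved -> particulate (Kd) ->
    invertebrate (TTF_invert) -> fish (TTF_fish).
    particulate (ug/kg) = Kd (L/kg) * dissolved (ug/L);
    the factor 1000 converts the tissue criterion from ug/g to ug/kg."""
    c_particulate = 1000.0 * c_fish_ug_g / (ttf_fish * ttf_invert)
    return c_particulate / kd_l_kg

# Hypothetical values: 8 ug/g fish criterion, Kd = 1000 L/kg, TTFs of 1.5 and 1.1
c_diss = dissolved_allowed(8.0, 1000.0, 1.5, 1.1)
print(round(c_diss, 2))  # ug/L
```

Because Kd and the TTFs are site- and food-web-specific, the same tissue criterion maps to different allowable dissolved concentrations in different ecosystems, which is the abstract's central point.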

  18. Puget Sound Dissolved Oxygen Modeling Study: Development of an Intermediate-Scale Hydrodynamic Model

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Zhaoqing; Khangaonkar, Tarang; Labiosa, Rochelle G.; Kim, Taeyun

    2010-11-30

    The Washington State Department of Ecology contracted with Pacific Northwest National Laboratory to develop an intermediate-scale hydrodynamic and water quality model to study dissolved oxygen and nutrient dynamics in Puget Sound and to help define potential Puget Sound-wide nutrient management strategies and decisions. Specifically, the project is expected to help determine 1) if current and potential future nitrogen loadings from point and non-point sources are significantly impairing water quality at a large scale and 2) what level of nutrient reductions are necessary to reduce or dominate human impacts to dissolved oxygen levels in the sensitive areas. In this study, an intermediate-scale hydrodynamic model of Puget Sound was developed to simulate the hydrodynamics of Puget Sound and the Northwest Straits for the year 2006. The model was constructed using the unstructured Finite Volume Coastal Ocean Model. The overall model grid resolution within Puget Sound in its present configuration is about 880 m. The model was driven by tides, river inflows, and meteorological forcing (wind and net heat flux) and simulated tidal circulations, temperature, and salinity distributions in Puget Sound. The model was validated against observed data of water surface elevation, velocity, temperature, and salinity at various stations within the study domain. Model validation indicated that the model simulates tidal elevations and currents in Puget Sound well and reproduces the general patterns of the temperature and salinity distributions.

  19. Large scale solar district heating. Evaluation, modelling and designing

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the tool for design studies and on a local energy planning case. The evaluation of the central solar heating technology is based on measurements on the case plant in Marstal, Denmark, and on published and unpublished data for other, mainly Danish, CSDHP plants. Evaluations of the thermal, economical and environmental performances are reported, based on the experiences from the last decade. The measurements from the Marstal case are analysed, experiences extracted and minor improvements to the plant design proposed. For the detailed designing and energy planning of CSDHPs, a computer simulation model is developed and validated on the measurements from the Marstal case. The final model is then generalised to a 'generic' model for CSDHPs in general. The meteorological reference data, Danish Reference Year, is applied to find the mean performance for the plant designs. To find the expected variability of the thermal performance of such plants, a method is proposed in which data from a year with poor solar irradiation and a year with strong solar irradiation are applied. Equipped with a simulation tool, design studies are carried out, ranging from parameter analysis through energy planning for a new settlement to a proposal for combining plane solar collectors with high-performance solar collectors, exemplified by a trough solar collector. The methodology of utilising computer simulation proved to be a cheap and relevant tool in the design of future solar heating plants. The thesis also exposed the demand for developing computer models for the more advanced solar collector designs and especially for the control operation of CSDHPs. In the final chapter the CSDHP technology is put into perspective with respect to other possible technologies to find the relevance of the application

  20. Land surface evapotranspiration modelling at the regional scale

    Science.gov (United States)

    Raffelli, Giulia; Ferraris, Stefano; Canone, Davide; Previati, Maurizio; Gisolo, Davide; Provenzale, Antonello

    2017-04-01

    Climate change has relevant implications for the environment, water resources and human life in general. The observed increase in mean air temperature, in addition to a more frequent occurrence of extreme events such as droughts, may have a severe effect on the hydrological cycle. Besides climate change, land use changes are assumed to be another relevant component of global change in terms of impacts on terrestrial ecosystems: socio-economic changes have led to conversions between meadows and pastures and in most cases to a complete abandonment of grasslands. Water is subject to different physical processes, among which evapotranspiration (ET) is one of the most significant. In fact, ET plays a key role in estimating crop growth, water demand and irrigation water management, so estimated values of ET can be crucial for water resource planning, irrigation requirements and agricultural production. Potential evapotranspiration (PET) is the amount of evaporation that occurs when a sufficient water source is available. It can be estimated knowing only temperatures (mean, maximum and minimum) and solar radiation. Actual evapotranspiration (AET) is instead the real quantity of water consumed by soil and vegetation; it is obtained as a fraction of PET. The aim of this work was to apply a simplified hydrological model to calculate AET for the province of Turin (Italy) in order to assess the water content and estimate the groundwater recharge at a regional scale. The soil is seen as a bucket (FAO56 model, Allen et al., 1998) made of different layers, which interact with water and vegetation. The water balance is given by precipitation (both rain and snow) and dew as positive inputs, while AET, runoff and drainage represent the rates at which water leaves the soil. The difference between inputs and outputs is the change in the water stock. Model data inputs are: soil characteristics (percentage of clay, silt, sand, rocks and organic matter); soil depth; the wilting point (i.e. the
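The bucket scheme sketched in the abstract (precipitation in; AET, runoff and drainage out; AET as a storage-limited fraction of PET) can be illustrated with a single-layer toy model. This is a schematic of the FAO56-style idea, not the authors' implementation, and all numbers are hypothetical:

```python
def bucket_step(storage, precip, pet, capacity, wilting, dt=1.0):
    """One daily step of a single-bucket water balance (schematic).
    AET is PET scaled by relative soil-water availability between the
    wilting point and the bucket capacity; water above capacity leaves
    as drainage/runoff. All quantities in mm."""
    avail = max(0.0, (storage - wilting) / (capacity - wilting))
    aet = pet * min(1.0, avail)              # AET as a fraction of PET
    storage = storage + (precip - aet) * dt  # stock change = inputs - outputs
    drainage = max(0.0, storage - capacity)
    storage -= drainage
    return storage, aet, drainage

s = 60.0                          # hypothetical initial storage, mm
for p in [0.0, 12.0, 0.0, 30.0]:  # hypothetical daily precipitation, mm
    s, aet, drain = bucket_step(s, p, pet=4.0, capacity=100.0, wilting=20.0)
print(round(s, 2))  # prints 93.41
```

Accumulating the drainage term over time is what would give the groundwater-recharge estimate mentioned in the abstract.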

  1. Consistency between hydrological models and field observations: Linking processes at the hillslope scale to hydrological responses at the watershed scale

    Science.gov (United States)

    Clark, M.P.; Rupp, D.E.; Woods, R.A.; Tromp-van Meerveld; Peters, N.E.; Freer, J.E.

    2009-01-01

    The purpose of this paper is to identify simple connections between observations of hydrological processes at the hillslope scale and observations of the response of watersheds following rainfall, with a view to building a parsimonious model of catchment processes. The focus is on the well-studied Panola Mountain Research Watershed (PMRW), Georgia, USA. Recession analysis of discharge Q shows that while the relationship between dQ/dt and Q is approximately consistent with a linear reservoir for the hillslope, there is a deviation from linearity that becomes progressively larger with increasing spatial scale. To account for these scale differences conceptual models of streamflow recession are defined at both the hillslope scale and the watershed scale, and an assessment made as to whether models at the hillslope scale can be aggregated to be consistent with models at the watershed scale. Results from this study show that a model with parallel linear reservoirs provides the most plausible explanation (of those tested) for both the linear hillslope response to rainfall and non-linear recession behaviour observed at the watershed outlet. In this model each linear reservoir is associated with a landscape type. The parallel reservoir model is consistent with both geochemical analyses of hydrological flow paths and water balance estimates of bedrock recharge. Overall, this study demonstrates that standard approaches of using recession analysis to identify the functional form of storage-discharge relationships identify model structures that are inconsistent with field evidence, and that recession analysis at multiple spatial scales can provide useful insights into catchment behaviour. Copyright © 2008 John Wiley & Sons, Ltd.
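The scale effect described here can be reproduced with a toy experiment: a single linear reservoir gives a constant ratio -dQ/dt over Q during recession, while two linear reservoirs in parallel (say, a fast and a slow landscape unit, with made-up parameters) give a ratio that drifts as the recession proceeds, i.e. apparent non-linearity at the aggregate scale. A sketch under those assumptions:

```python
import numpy as np

def parallel_linear_reservoirs(s0, k, dt=1.0, nsteps=100):
    """Recession from parallel linear reservoirs: each store drains as
    dS_i/dt = -k_i * S_i, with outflow Q_i = k_i * S_i, and the aggregate
    discharge is Q = sum_i Q_i."""
    s = np.asarray(s0, dtype=float)
    k = np.asarray(k, dtype=float)
    q = []
    for _ in range(nsteps):
        q.append(float(np.sum(k * s)))
        s = s * np.exp(-k * dt)  # exact integration of each linear store
    return np.array(q)

# Made-up stores: a fast hillslope unit and a slow riparian unit
q = parallel_linear_reservoirs(s0=[50.0, 200.0], k=[0.5, 0.02])
dqdt = np.diff(q)
ratio = -dqdt / q[:-1]  # constant (= 1 - exp(-k*dt)) for a single linear reservoir
# The ratio drifts from near the fast rate toward the slow rate:
# apparent non-linearity in dQ/dt vs Q at the aggregate scale.
print(ratio[0], ratio[-1])
```

Early in the recession the fast store dominates the mixture; late in the recession only the slow store remains, so the aggregate dQ/dt vs Q relationship bends even though every component is linear.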

  2. Impact of Spatial Scale on Calibration and Model Output for a Grid-based SWAT Model

    Science.gov (United States)

    Pignotti, G.; Vema, V. K.; Rathjens, H.; Raj, C.; Her, Y.; Chaubey, I.; Crawford, M. M.

    2014-12-01

    The traditional implementation of the Soil and Water Assessment Tool (SWAT) model utilizes common landscape characteristics known as hydrologic response units (HRUs). Discretization into HRUs provides a simple, computationally efficient framework for simulation, but also represents a significant limitation of the model, as spatial connectivity between HRUs is ignored. SWATgrid, a newly developed, distributed version of SWAT, provides modified landscape routing via a grid, overcoming these limitations. However, the current implementation of SWATgrid has significant computational overhead, which effectively precludes traditional calibration and limits the total number of grid cells in a given modeling scenario. Moreover, as SWATgrid is a relatively new modeling approach, it remains largely untested, with little understanding of the impact of spatial resolution on model output. The objective of this study was to determine the effects of user-defined input resolution on SWATgrid predictions in the Upper Cedar Creek Watershed (near Auburn, IN, USA). Original input data, nominally at 30 m resolution, were rescaled for a range of resolutions between 30 and 4,000 m. A 30 m traditional SWAT model was developed as the baseline for model comparison. Monthly calibration was performed, and the calibrated parameter set was then transferred to all other SWAT and SWATgrid models to isolate the effects of resolution on prediction uncertainty relative to the baseline. Model output was evaluated with respect to stream flow at the outlet and water quality parameters. Additionally, outputs of SWATgrid models were compared to outputs of traditional SWAT models at each resolution, utilizing the same scaled input data. A secondary objective considered the effect of scale on calibrated parameter values, where each standard SWAT model was calibrated independently, and parameters were transferred to SWATgrid models at equivalent scales. For each model, computational requirements were evaluated

  3. Development and testing of watershed-scale models for poorly drained soils

    Science.gov (United States)

    Glenn P. Fernandez; George M. Chescheir; R. Wayne Skaggs; Devendra M. Amatya

    2005-01-01

    Watershed-scale hydrology and water quality models were used to evaluate the cumulative impacts of land use and management practices on downstream hydrology and nitrogen loading of poorly drained watersheds. Field-scale hydrology and nutrient dynamics are predicted by DRAINMOD in both models. In the first model (DRAINMOD-DUFLOW), field-scale predictions are coupled...

  4. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NARCIS (Netherlands)

    Loon, van A.F.; Huijgevoort, van M.H.J.; Lanen, van H.A.J.

    2012-01-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well large-scale models simulate the propagation from meteorological to hydrological drought

  5. Modeling coastal upwelling around a small-scale coastline promontory

    Science.gov (United States)

    Haas, K. A.; Cai, D.; Freismuth, T. M.; MacMahan, J.; Di Lorenzo, E.; Suanda, S. H.; Kumar, N.; Miller, A. J.; Edwards, C. A.

    2016-12-01

    On the US west coast, northerly winds drive coastal ocean upwelling, an important process that brings cold, nutrient-rich water to the nearshore. The coastline geometry has been shown to be a significant factor in the strength of the upwelling process. In particular, upwelling in the lee of major headlands has been shown to be enhanced. Recent observations from the Pt. Sal region on the coast of southern California have shown the presence of cooler water south of a small (350 m) rocky promontory (Mussel Pt.) during upwelling events. The hypothesis is that the small-scale promontory is creating a lee-side enhancement of the upwelling. To shed some light on this process, numerical simulations of the inner shelf region centered about Pt. Sal are conducted with the ROMS module of the COAWST model system. The model system is configured with four nested grids with resolutions ranging from approximately 600 m, to the outer shelf (~200 m), to the inner shelf (~66 m), and finally to the surf zone (~22 m). A solution from a 1 km grid encompassing our domain provides the boundary conditions for the 600 m grid. Barotropic tidal forcing is incorporated at the 600 m grid to provide tidal variability. This model system with realistic topography and bathymetry, winds and tides, is able to isolate the forcing mechanisms that explain the emergence of the cold water mass. The simulations focus on the time period of June-July 2015, corresponding to the pilot study in which observational experiment data were collected. The experimental data consist in part of in situ measurements, including moorings with conductivity, temperature, depth, and flow velocity sensors. The model simulations are able to reproduce the important flow features, including the cooler water mass south of Mussel Pt. As hypothesized, the strength of the upwelling is enhanced on the lee side of Mussel Pt. In addition, periods of wind relaxation where the upwelling ceases and even begins to transform towards downwelling is

  6. Air scaling and modeling studies for the 1/5-scale mark I boiling water reactor pressure suppression experiment

    Energy Technology Data Exchange (ETDEWEB)

    Lai, W.; McCauley, E.W.

    1978-01-04

    Results of table-top model experiments performed to investigate pool dynamics effects due to a postulated loss-of-coolant accident (LOCA) for the Peach Bottom Mark I boiling water reactor containment system guided subsequent conduct of the 1/5-scale torus experiment and provided new insight into the vertical load function (VLF). Pool dynamics results were qualitatively correct. Experiments with a 1/64-scale fully modeled drywell and torus showed that a 90° torus sector was adequate to reveal three-dimensional effects; the 1/5-scale torus experiment confirmed this.

  7. Upscaling of U(VI) Desorption and Transport from Decimeter-Scale Heterogeneity to Plume-Scale Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Curtis, Gary P. [U.S. Geological Survey, Menlo Park, CA (United States); Kohler, Matthias [U.S. Geological Survey, Menlo Park, CA (United States); Kannappan, Ramakrishnan [U.S. Geological Survey, Menlo Park, CA (United States); Briggs, Martin [U.S. Geological Survey, Menlo Park, CA (United States); Day-Lewis, Fred [U.S. Geological Survey, Menlo Park, CA (United States)

    2015-02-24

    Scientifically defensible predictions of field scale U(VI) transport in groundwater require an understanding of key processes at multiple scales. These scales range from smaller than the sediment grain scale (less than 10 μm) to as large as the field scale, which can extend over several kilometers. The key processes that need to be considered include both geochemical reactions in solution and at sediment surfaces as well as physical transport processes including advection, dispersion, and pore-scale diffusion. The research summarized in this report includes both experimental and modeling results in batch, column and tracer tests. The objectives of this research were to: (1) quantify the rates of U(VI) desorption from sediments acquired from a uranium contaminated aquifer in batch experiments; (2) quantify rates of U(VI) desorption in column experiments with variable chemical conditions; and (3) quantify nonreactive tracer and U(VI) transport in field tests.

  8. Variational Multi-Scale method with spectral approximation of the sub-scales.

    KAUST Repository

    Dia, Ben Mansour

    2015-01-07

    A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base of eigenfunctions which are orthonormal in weighted L2 spaces. We propose a feasible VMS-spectral method by truncation of this spectral expansion to a finite number of modes.
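Schematically (a sketch in generic VMS notation, not necessarily the authors' exact formulation): splitting the solution as u = ū + u′ into resolved and sub-grid parts, the sub-grid problem driven by the resolved-scale residual R(ū) is expanded in the eigenfunctions of the elliptic operator and truncated:

```latex
L u' = R(\bar{u}), \qquad
L \phi_i = \lambda_i \phi_i, \quad (\phi_i, \phi_j)_\omega = \delta_{ij}
\;\Longrightarrow\;
u' \approx \sum_{i=1}^{M} \frac{\big(R(\bar{u}), \phi_i\big)_\omega}{\lambda_i}\, \phi_i ,
```

where (·,·)_ω is the weighted L² inner product in which the eigenfunctions are orthonormal and M is the number of retained modes; for non-self-adjoint operators the projection would involve the adjoint eigenbasis.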

  9. On a class of scaling FRW cosmological models

    Energy Technology Data Exchange (ETDEWEB)

    Cataldo, Mauricio [Departamento de Física, Universidad del Bío-Bío, Avenida Collao 1202, Casilla 5-C, Concepción (Chile); Arevalo, Fabiola; Minning, Paul, E-mail: mcataldo@ubiobio.cl, E-mail: pminning@udec.cl, E-mail: farevalo@udec.cl [Departamento de Física, Universidad de Concepción, Casilla 160-C, Concepción (Chile)

    2010-02-01

We study Friedmann-Robertson-Walker cosmological models with matter content composed of two perfect fluids ρ{sub 1} and ρ{sub 2}, with barotropic pressure densities p{sub 1}/ρ{sub 1} = ω{sub 1} = const and p{sub 2}/ρ{sub 2} = ω{sub 2} = const, where one of the energy densities is given by ρ{sub 1} = C{sub 1}a{sup α}+C{sub 2}a{sup β}, with C{sub 1}, C{sub 2}, α and β taking constant values. We solve the field equations by using the conservation equation without breaking it into two interacting parts coupled by an interaction term Q. Nevertheless, an interaction term Q may be associated with the solution found, so that a number of interacting cosmological models studied in the literature correspond to particular cases of our model: specifically, those having constant coupling parameters α̃, β̃ and interaction terms given by Q = α̃Hρ{sub DM}, Q = α̃Hρ{sub DE}, Q = α̃H(ρ{sub DM}+ρ{sub DE}) and Q = α̃Hρ{sub DM}+β̃Hρ{sub DE}, where ρ{sub DM} and ρ{sub DE} are the energy densities of dark matter and dark energy, respectively. The studied set of solutions contains a class of cosmological models presenting a scaling behavior at early and at late times. On the other hand, the two-fluid cosmological models considered in this paper also permit a three-fluid interpretation, which is likewise discussed. In this reinterpretation, for flat Friedmann-Robertson-Walker cosmologies, the requirement of positivity of the energy densities of the dark matter and dark energy components restricts the dark energy state parameter to the range −1.37 ≲ ω{sub DE} < −1/3.
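The correspondence with interacting models can be made explicit. A short sketch, using the standard decomposition of the total conservation equation with a coupling term Q (the sign convention for Q varies in the literature):

```latex
% split the total conservation equation:
\dot\rho_1 + 3H(1+\omega_1)\rho_1 = -Q , \qquad
\dot\rho_2 + 3H(1+\omega_2)\rho_2 = +Q .
% insert \rho_1 = C_1 a^{\alpha} + C_2 a^{\beta}; since \dot a = aH,
% \dot\rho_1 = H\,(\alpha C_1 a^{\alpha} + \beta C_2 a^{\beta}), so:
Q = -H\left[ \left(\alpha + 3(1+\omega_1)\right) C_1 a^{\alpha}
           + \left(\beta  + 3(1+\omega_1)\right) C_2 a^{\beta} \right] .
```

Setting C_2 = 0 and α = −3(1+ω_1) recovers the non-interacting case Q = 0, as expected.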

  10. SMR Re-Scaling and Modeling for Load Following Studies

    Energy Technology Data Exchange (ETDEWEB)

    Hoover, K.; Wu, Q.; Bragg-Sitton, S.

    2016-11-01

    This study investigates the creation of a new set of scaling parameters for the Oregon State University Multi-Application Small Light Water Reactor (MASLWR) scaled thermal hydraulic test facility. As part of a study being undertaken by Idaho National Lab involving nuclear reactor load following characteristics, full power operations need to be simulated, and therefore properly scaled. Presented here is the scaling analysis and plans for RELAP5-3D simulation.

  11. Meso-scale modeling of irradiated concrete in test reactor

    Energy Technology Data Exchange (ETDEWEB)

    Giorla, A. [Oak Ridge National Laboratory, One Bethel Valley Road, Oak Ridge, TN 37831 (United States); Vaitová, M. [Czech Technical University, Thakurova 7, 166 29 Praha 6 (Czech Republic); Le Pape, Y., E-mail: lepapeym@ornl.gov [Oak Ridge National Laboratory, One Bethel Valley Road, Oak Ridge, TN 37831 (United States); Štemberk, P. [Czech Technical University, Thakurova 7, 166 29 Praha 6 (Czech Republic)

    2015-12-15

Highlights: • A meso-scale finite element model for irradiated concrete is developed. • Neutron radiation-induced volumetric expansion is a predominant degradation mode. • Comparison with expansion and damage obtained from experiments is successful. • Effects of paste shrinkage, creep and ductility are discussed. - Abstract: A numerical model accounting for the effects of neutron irradiation on concrete at the mesoscale is detailed in this paper. Irradiation experiments in a test reactor (Elleuch et al., 1972), i.e., in accelerated conditions, are simulated. Concrete is considered as a two-phase material made of elastic inclusions (aggregate) subjected to thermal and irradiation-induced swelling and embedded in a cementitious matrix subjected to shrinkage and thermal expansion. The role of the hardened cement paste in the post-peak regime (brittle-ductile transition with decreasing loading rate) and creep effects are investigated. Radiation-induced volumetric expansion (RIVE) of the aggregate causes the development and propagation of damage around the aggregate, which further develops into bridging cracks across the hardened cement paste between the individual aggregate particles. The development of damage is aggravated when shrinkage occurs simultaneously with RIVE during the irradiation experiment. The post-irradiation expansion derived from the simulation is well correlated with the experimental data, and the obtained damage levels are fully consistent with previous estimations based on a micromechanical interpretation of the experimental post-irradiation elastic properties (Le Pape et al., 2015). The proposed modeling opens new perspectives for the interpretation of test reactor experiments with regard to the actual operation of light water reactors.

  12. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, C; Abdulla, G; Critchlow, T

    2002-02-25

Data produced by large-scale scientific simulations, experiments, and observations can easily reach terabytes in size. The ability to examine data sets of this magnitude, even in moderate detail, is problematic at best. Generally this scientific data consists of multivariate field quantities with complex inter-variable correlations and spatial-temporal structure. To provide scientists and engineers with the ability to explore and analyze such data sets, we are using a twofold approach. First, we model the data with the objective of creating a compressed yet manageable representation. Second, with that compressed representation, we provide the user with the ability to query the resulting approximation to obtain approximate yet sufficient answers, a process called ad hoc querying. This paper is concerned with a wavelet modeling technique that seeks to capture the important physical characteristics of the target scientific data. Our approach is driven by the compression, which is necessary for viable throughput, along with the end-user requirements of the discovery process. Our work contrasts with existing research that applies wavelets to range querying, change detection, and clustering problems by working directly with a decomposition of the data. This difference in procedure is due primarily to the nature of the data and the requirements of the scientists and engineers. Our approach directly uses the wavelet coefficients of the data to compress as well as query. We provide some background on the problem, describe how the wavelet decomposition is used to facilitate data compression and how queries are posed on the resulting compressed model. Results of this process are shown for several problems of interest, and we end with some observations and conclusions about this research.
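The core idea, compressing via a wavelet decomposition and then working against the retained coefficients, can be sketched with a plain Haar transform. This is a generic stand-in; the paper's actual wavelet basis, thresholding rule, and query machinery are not specified here:

```python
import numpy as np

def haar_decompose(x):
    """Full Haar wavelet decomposition of a length-2^k signal."""
    x = np.asarray(x, dtype=float)
    coeffs = []
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation
        det = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail
        coeffs.append(det)
        x = avg
    coeffs.append(x)                                # coarsest approximation
    return coeffs

def compress(coeffs, keep):
    """Zero out all but the `keep` largest-magnitude coefficients."""
    flat = np.concatenate(coeffs)
    if keep < len(flat):
        thresh = np.sort(np.abs(flat))[-keep]
        flat = np.where(np.abs(flat) >= thresh, flat, 0.0)
    out, i = [], 0                 # re-split into the level structure
    for c in coeffs:
        out.append(flat[i:i + len(c)]); i += len(c)
    return out

def reconstruct(coeffs):
    """Invert haar_decompose (exact when no coefficients were dropped)."""
    x = coeffs[-1]
    for det in reversed(coeffs[:-1]):
        up = np.empty(2 * len(x))
        up[0::2] = (x + det) / np.sqrt(2.0)
        up[1::2] = (x - det) / np.sqrt(2.0)
        x = up
    return x

data = np.sin(np.linspace(0.0, 4.0 * np.pi, 64))
approx = reconstruct(compress(haar_decompose(data), keep=8))
```

Reconstruction from all coefficients is exact; dropping all but the largest few yields the compressed model that approximate queries are answered against.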

  13. The space-scale cube : An integrated model for 2D polygonal areas and scale

    NARCIS (Netherlands)

    Meijers, B.M.; Van Oosterom, P.J.M.

    2011-01-01

    This paper introduces the concept of a space-scale partition, which we term the space-scale cube – analogous with the space-time cube (first introduced by Hägerstrand, 1970). We take the view of ‘map generalization is extrusion of 2D data into the third dimension’ (as introduced by Vermeij et al.,

  14. Pretest Round Robin Analysis of 1:4-Scale Prestressed Concrete Containment Vessel Model

    Energy Technology Data Exchange (ETDEWEB)

    HESSHEIMER,MICHAEL F.; LUK,VINCENT K.; KLAMERUS,ERIC W.; SHIBATA,S.; MITSUGI,S.; COSTELLO,J.F.

    2000-12-18

    The purpose of the program is to investigate the response of representative scale models of nuclear containment to pressure loading beyond the design basis accident and to compare analytical predictions to measured behavior. This objective is accomplished by conducting static, pneumatic overpressurization tests of scale models at ambient temperature. This research program consists of testing two scale models: a steel containment vessel (SCV) model (tested in 1996) and a prestressed concrete containment vessel (PCCV) model, which is the subject of this paper.

  15. A Unified Multi-scale Model for Cross-Scale Evaluation and Integration of Hydrological and Biogeochemical Processes

    Science.gov (United States)

    Liu, C.; Yang, X.; Bailey, V. L.; Bond-Lamberty, B. P.; Hinkle, C.

    2013-12-01

    Mathematical representations of hydrological and biogeochemical processes in soil, plant, aquatic, and atmospheric systems vary with scale. Process-rich models are typically used to describe hydrological and biogeochemical processes at the pore and small scales, while empirical, correlation approaches are often used at the watershed and regional scales. A major challenge for multi-scale modeling is that water flow, biogeochemical processes, and reactive transport are described using different physical laws and/or expressions at the different scales. For example, the flow is governed by the Navier-Stokes equations at the pore-scale in soils, by the Darcy law in soil columns and aquifer, and by the Navier-Stokes equations again in open water bodies (ponds, lake, river) and atmosphere surface layer. This research explores whether the physical laws at the different scales and in different physical domains can be unified to form a unified multi-scale model (UMSM) to systematically investigate the cross-scale, cross-domain behavior of fundamental processes at different scales. This presentation will discuss our research on the concept, mathematical equations, and numerical execution of the UMSM. Three-dimensional, multi-scale hydrological processes at the Disney Wilderness Preservation (DWP) site, Florida will be used as an example for demonstrating the application of the UMSM. In this research, the UMSM was used to simulate hydrological processes in rooting zones at the pore and small scales including water migration in soils under saturated and unsaturated conditions, root-induced hydrological redistribution, and role of rooting zone biogeochemical properties (e.g., root exudates and microbial mucilage) on water storage and wetting/draining. The small scale simulation results were used to estimate effective water retention properties in soil columns that were superimposed on the bulk soil water retention properties at the DWP site. 
The UMSM parameterized from smaller
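The chain of flow laws named in the abstract (Navier-Stokes at the pore scale, Darcy at the column scale, Navier-Stokes again in open water) has a classical single-equation bridge in the Brinkman equation. It is offered here only as background on how such unification can work, not as the authors' actual UMSM formulation:

```latex
% Brinkman momentum balance, permeability k, effective viscosity \mu_e:
\mu_e \nabla^2 \mathbf{u} \;-\; \frac{\mu}{k}\,\mathbf{u}
\;-\; \nabla p \;+\; \rho\,\mathbf{g} \;=\; \mathbf{0}
```

For k → ∞ (open water) the drag term μu/k vanishes and a creeping-flow (Stokes) balance is recovered; for small k the effective viscous term μ_e∇²u becomes negligible and the equation reduces to Darcy's law, u = −(k/μ)(∇p − ρg).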

  16. Model-Scale Experiment of the Seakeeping Performance for R/V Melville, Model 5720

    Science.gov (United States)

    2012-07-01

fiberglass with stainless steel bilge keels. A summary of model particulars, in full and model scale, is provided in Table 1. The hull geometry was...foam. The bilge keels were constructed of stainless steel and fit to match the bilge keel trace from the ship drawings (Figure 6). A weight post...Measuring Devices," NIST Handbook 44, Tina Butcher, Steve Cook, Linda Crown, and Rick Harshman (Editors), National Institute of Standards and

  17. Forest processes from stands to landscapes: exploring model forecast uncertainties using cross-scale model comparison

    Science.gov (United States)

    Michael J. Papaik; Andrew Fall; Brian Sturtevant; Daniel Kneeshaw; Christian Messier; Marie-Josee Fortin; Neal. Simon

    2010-01-01

    Forest management practices conducted primarily at the stand scale result in simplified forests with regeneration problems and low structural and biological diversity. Landscape models have been used to help design management strategies to address these problems. However, there remains a great deal of uncertainty that the actual management practices result in the...

  18. Characteristic length scale of input data in distributed models: implications for modeling grid size

    Science.gov (United States)

    Artan, Guleid A.; Neale, C. M. U.; Tarboton, D. G.

    2000-01-01

The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed, in southwest Idaho. The data were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The agreement between simulated and observed fields sharply declined after a 10×10 m2 modeling grid size. A modeling grid size of about 10×10 m2 was deemed to be the best compromise to achieve: (a) reduction of computation time and the size of the support data; and (b) a reproduction of the observed radiometric surface temperature.
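The two diagnostics named in the abstract, the semi-variogram and a characteristic length from the integrated autocorrelation, are straightforward to compute for a transect of equally spaced samples. A generic sketch, not the authors' exact estimator:

```python
import numpy as np

def semivariogram_1d(z, max_lag):
    """Empirical semivariance gamma(h) for lags 1..max_lag along a transect."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])
    return lags, gamma

def characteristic_length(z, dx=1.0):
    """Integral scale: dx times the summed positive-lag autocorrelation,
    truncated at the first zero crossing."""
    z = z - z.mean()
    var = np.mean(z * z)
    L = 0.0
    for h in range(1, len(z)):
        r = np.mean(z[h:] * z[:-h]) / var
        if r <= 0.0:
            break
        L += r * dx
    return L

rng = np.random.default_rng(0)
raw = rng.standard_normal(2000)
smooth = np.convolve(raw, np.ones(10) / 10.0, mode="valid")
lags, gamma = semivariogram_1d(smooth, max_lag=50)
L = characteristic_length(smooth)   # of order the 10-sample smoothing window
```

Smoothing a white-noise transect raises its characteristic length toward the smoothing window, which is the kind of scale-of-variability signal the study used to pick its 10×10 m2 grid.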

  20. Hydrogeologic Framework Model for the Saturated Zone Site Scale flow and Transport Model

    Energy Technology Data Exchange (ETDEWEB)

    T. Miller

    2004-11-15

    The purpose of this report is to document the 19-unit, hydrogeologic framework model (19-layer version, output of this report) (HFM-19) with regard to input data, modeling methods, assumptions, uncertainties, limitations, and validation of the model results in accordance with AP-SIII.10Q, Models. The HFM-19 is developed as a conceptual model of the geometric extent of the hydrogeologic units at Yucca Mountain and is intended specifically for use in the development of the ''Saturated Zone Site-Scale Flow Model'' (BSC 2004 [DIRS 170037]). Primary inputs to this model report include the GFM 3.1 (DTN: MO9901MWDGFM31.000 [DIRS 103769]), borehole lithologic logs, geologic maps, geologic cross sections, water level data, topographic information, and geophysical data as discussed in Section 4.1. Figure 1-1 shows the information flow among all of the saturated zone (SZ) reports and the relationship of this conceptual model in that flow. The HFM-19 is a three-dimensional (3-D) representation of the hydrogeologic units surrounding the location of the Yucca Mountain geologic repository for spent nuclear fuel and high-level radioactive waste. The HFM-19 represents the hydrogeologic setting for the Yucca Mountain area that covers about 1,350 km2 and includes a saturated thickness of about 2.75 km. The boundaries of the conceptual model were primarily chosen to be coincident with grid cells in the Death Valley regional groundwater flow model (DTN: GS960808312144.003 [DIRS 105121]) such that the base of the site-scale SZ flow model is consistent with the base of the regional model (2,750 meters below a smoothed version of the potentiometric surface), encompasses the exploratory boreholes, and provides a framework over the area of interest for groundwater flow and radionuclide transport modeling. In depth, the model domain extends from land surface to the base of the regional groundwater flow model (D'Agnese et al. 1997 [DIRS 100131], p 2). For the site-scale

  1. Viscoelastic Model for Lung Parenchyma for Multi-Scale Modeling of Respiratory System, Phase II: Dodecahedral Micro-Model

    Energy Technology Data Exchange (ETDEWEB)

    Freed, Alan D.; Einstein, Daniel R.; Carson, James P.; Jacob, Rick E.

    2012-03-01

    In the first year of this contractual effort a hypo-elastic constitutive model was developed and shown to have great potential in modeling the elastic response of parenchyma. This model resides at the macroscopic level of the continuum. In this, the second year of our support, an isotropic dodecahedron is employed as an alveolar model. This is a microscopic model for parenchyma. A hopeful outcome is that the linkage between these two scales of modeling will be a source of insight and inspiration that will aid us in the final year's activity: creating a viscoelastic model for parenchyma.

  2. Common problematic aspects of coupling hydrological models with groundwater flow models on the river catchment scale

    Directory of Open Access Journals (Sweden)

    R. Barthel

    2006-01-01

    Full Text Available Model coupling requires a thorough conceptualisation of the coupling strategy, including an exact definition of the individual model domains, the "transboundary" processes and the exchange parameters. It is shown here that in the case of coupling groundwater flow and hydrological models – in particular on the regional scale – it is very important to find a common definition and scale-appropriate process description of groundwater recharge and baseflow (or "groundwater runoff/discharge" in order to achieve a meaningful representation of the processes that link the unsaturated and saturated zones and the river network. As such, integration by means of coupling established disciplinary models is problematic given that in such models, processes are defined from a purpose-oriented, disciplinary perspective and are therefore not necessarily consistent with definitions of the same process in the model concepts of other disciplines. This article contains a general introduction to the requirements and challenges of model coupling in Integrated Water Resources Management including a definition of the most relevant technical terms, a short description of the commonly used approach of model coupling and finally a detailed consideration of the role of groundwater recharge and baseflow in coupling groundwater models with hydrological models. The conclusions summarize the most relevant problems rather than giving practical solutions. This paper aims to point out that working on a large scale in an integrated context requires rethinking traditional disciplinary workflows and encouraging communication between the different disciplines involved. It is worth noting that the aspects discussed here are mainly viewed from a groundwater perspective, which reflects the author's background.

  3. Scour around Support Structures of Scaled Model Marine Hydrokinetic Devices

    Science.gov (United States)

    Volpe, M. A.; Beninati, M. L.; Krane, M.; Fontaine, A.

    2013-12-01

    Experiments are presented to explore scour due to flows around support structures of marine hydrokinetic (MHK) devices. Three related studies were performed to understand how submergence, scour condition, and the presence of an MHK device impact scour around the support structure (cylinder). The first study focuses on clear-water scour conditions for a cylinder of varying submergence: surface-piercing and fully submerged. The second study centers on three separate scour conditions (clear-water, transitional and live-bed) around the fully submerged cylinder. Lastly, the third study emphasizes the impact of an MHK turbine on scour around the support structure, in live-bed conditions. Small-scale laboratory testing of model devices can be used to help predict the behavior of MHK devices at full-scale. Extensive studies have been performed on single cylinders, modeling bridge piers, though few have focused on fully submerged structures. Many of the devices being used to harness marine hydrokinetic energy are fully submerged in the flow. Additionally, scour hole dimensions and scour rates have not been addressed. Thus, these three studies address the effect of structure blockage/drag, and the ambient scour conditions on scour around the support structure. The experiments were performed in the small-scale testing platform in the hydraulic flume facility (9.8 m long, 1.2 m wide and 0.4 m deep) at Bucknell University. The support structure diameter (D = 2.54 cm) was held constant for all tests. The submerged cylinder (l/D = 5) and sediment size (d50 = 790 microns) were held constant for all three studies. The MHK device (Dturbine = 10.2 cm) is a two-bladed horizontal axis turbine and the rotating shaft is friction-loaded using a metal brush motor. For each study, bed form topology was measured after a three-hour time interval using a traversing two-dimensional bed profiler. During the experiments, scour hole depth measurements at the front face of the support structure

  4. Relevance of multiple spatial scales in habitat models: A case study with amphibians and grasshoppers

    Science.gov (United States)

    Altmoos, Michael; Henle, Klaus

    2010-11-01

Habitat models for animal species are important tools in conservation planning. We assessed the need to consider several scales in a case study of three amphibian and two grasshopper species in the post-mining landscapes near Leipzig (Germany). The two species groups were selected because habitat analyses for grasshoppers are usually conducted on one scale only, whereas amphibians are thought to depend on more than one spatial scale. First, we analysed how the preference for single habitat variables changed across nested scales. Most environmental variables were significant for a habitat model on only one or two scales, with the smallest scale being particularly important. On larger scales, other variables became significant, which cannot be recognized on lower scales. Similar preferences across scales occurred in only 13 out of 79 cases, and in 3 out of 79 cases the preference and avoidance for the same variable were even reversed among scales. Second, we developed habitat models by using a logistic regression on every scale and for all combinations of scales and analysed how the quality of habitat models changed with the scales considered. To achieve a sufficient accuracy of the habitat models with a minimum number of variables, at least two scales were required for all species except Bufo viridis, for which a single scale, the microscale, was sufficient. Only for the European tree frog (Hyla arborea) were at least three scales required. The results indicate that the quality of habitat models increases with the number of surveyed variables and with the number of scales, but costs increase too. Searching for simplifications in multi-scaled habitat models, we suggest that two or three scales should be a suitable trade-off when attempting to define a suitable microscale.
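The scale-combination search described above (fit a logistic regression for every combination of scales, then compare model quality) can be sketched as follows. The fitting routine, the synthetic data, and the use of AIC as the quality score are illustrative assumptions, not the authors' exact protocol:

```python
import numpy as np
from itertools import combinations

def fit_logistic(X, y, iters=50):
    """Logistic regression by Newton's method; returns (beta, log-likelihood)."""
    X = np.column_stack([np.ones(len(X)), X])              # intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1.0 - p))[:, None]) + 1e-6 * np.eye(X.shape[1])
        beta += np.linalg.solve(H, X.T @ (y - p))
    p = np.clip(1.0 / (1.0 + np.exp(-X @ beta)), 1e-12, 1 - 1e-12)
    return beta, np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def best_scale_combination(scale_vars, y):
    """Fit every non-empty combination of per-scale predictors; rank by AIC."""
    best = None
    for r in range(1, len(scale_vars) + 1):
        for combo in combinations(scale_vars, r):
            X = np.column_stack([scale_vars[name] for name in combo])
            _, ll = fit_logistic(X, y)
            aic = 2 * (X.shape[1] + 1) - 2 * ll
            if best is None or aic < best[0]:
                best = (aic, combo)
    return best

rng = np.random.default_rng(0)
n = 300
scale_vars = {"micro": rng.standard_normal(n),
              "meso": rng.standard_normal(n),
              "macro": rng.standard_normal(n)}
# synthetic presence/absence driven mainly by the micro-scale variable
y = (scale_vars["micro"] + rng.standard_normal(n) > 0).astype(float)
aic, combo = best_scale_combination(scale_vars, y)
```

With real data each scale would contribute several variables, and an information criterion (or cross-validation) trades accuracy against the survey cost of adding scales.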

  5. NASA Standard for Models and Simulations: Credibility Assessment Scale

    Science.gov (United States)

    Babula, Maria; Bertch, William J.; Green, Lawrence L.; Hale, Joseph P.; Mosier, Gary E.; Steele, Martin J.; Woods, Jody

    2009-01-01

As one of its many responses to the 2003 Space Shuttle Columbia accident, NASA decided to develop a formal standard for models and simulations (M&S). Work commenced in May 2005. An interim version was issued in late 2006. This interim version underwent considerable revision following an extensive Agency-wide review in 2007, along with some additional revisions as a result of the review by the NASA Engineering Management Board (EMB) in the first half of 2008. Issuance of the revised, permanent version, hereafter referred to as the M&S Standard or just the Standard, occurred in July 2008. Bertch, Zang, and Steele provided a summary review of the development process of this standard up through the start of the review by the EMB. A thorough recount of the entire development process, major issues, key decisions, and all review processes is available in Ref. v. This is the second of a pair of papers providing a summary of the final version of the Standard. Its focus is the Credibility Assessment Scale, a key feature of the Standard, including an example of its application to a real-world M&S problem for the James Webb Space Telescope. The companion paper summarizes the overall philosophy of the Standard and gives an overview of the requirements. Verbatim quotes from the Standard are integrated into the text of this paper and are indicated by quotation marks.

  6. Implementation of meso-scale radioactive dispersion model for GPU

    Energy Technology Data Exchange (ETDEWEB)

    Sunarko [National Nuclear Energy Agency of Indonesia (BATAN), Jakarta (Indonesia). Nuclear Energy Assessment Center; Suud, Zaki [Bandung Institute of Technology (ITB), Bandung (Indonesia). Physics Dept.

    2017-05-15

The Lagrangian Particle Dispersion Method (LPDM) is applied to model atmospheric dispersion of radioactive material on a meso-scale of a few tens of kilometers for site study purposes. Empirical relationships are used to determine the dispersion coefficient for various atmospheric stabilities. A diagnostic 3-D wind field is solved based on data from one meteorological station using the mass-conservation principle. Particles representing radioactive pollutant are released from a point source and dispersed in the wind field. Time-integrated air concentration is calculated using a kernel density estimator (KDE) in the lowest layer of the atmosphere. Parallel code is developed for a GTX-660Ti GPU with a total of 1 344 scalar processors using CUDA. A test with a 1-hour release shows that linear speedup is achieved starting at 28 800 particles per hour (pph), reaching about 20x at 144 000 pph. Another test simulating a 6-hour release with 36 000 pph resulted in a speedup of about 60x. Statistical analysis reveals that the resulting grid doses are nearly identical in the CPU and GPU versions of the code.
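The dispersion-plus-KDE pipeline the abstract describes can be sketched in serial NumPy (a toy stand-in for the CUDA kernels; the constant wind, diffusion strength, grid, and bandwidth are illustrative, and a real run would use the diagnostic 3-D wind field):

```python
import numpy as np

rng = np.random.default_rng(42)

# release 5 000 particles from a point source into a constant 2-D wind field
n_particles, dt, n_steps = 5_000, 10.0, 60       # 60 steps of 10 s = 10 min
u_wind = np.array([3.0, 0.5])                    # m/s (illustrative)
sigma = 1.2                                      # turbulent random-walk strength

pos = np.zeros((n_particles, 2))
for _ in range(n_steps):                         # advection + random walk
    pos += u_wind * dt + sigma * np.sqrt(dt) * rng.standard_normal(pos.shape)

def kde_concentration(pos, xg, yg, bandwidth):
    """Gaussian kernel density estimate of particle density on a grid."""
    X, Y = np.meshgrid(xg, yg)
    c = np.zeros_like(X)
    for px, py in pos:
        c += np.exp(-((X - px) ** 2 + (Y - py) ** 2) / (2 * bandwidth ** 2))
    return c / (2 * np.pi * bandwidth ** 2 * len(pos))

xg = np.linspace(0.0, 3000.0, 40)
yg = np.linspace(-500.0, 1000.0, 30)
conc = kde_concentration(pos, xg, yg, bandwidth=100.0)
# the plume centre drifts to u_wind * dt * n_steps = (1800 m, 300 m)
```

The per-particle loop inside `kde_concentration` is exactly the part that parallelizes naturally, one thread per particle, on a GPU.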

  7. Overview of the Ares I Scale Model Acoustic Test Program

    Science.gov (United States)

    Counter, Douglas D.; Houston, Janice D.

    2011-01-01

    Launch environments, such as lift-off acoustic (LOA) and ignition overpressure (IOP), are important design factors for any vehicle and are dependent upon the design of both the vehicle and the ground systems. LOA environments are used directly in the development of vehicle vibro-acoustic environments and IOP is used in the loads assessment. The NASA Constellation Program had several risks to the development of the Ares I vehicle linked to LOA. The risks included cost, schedule and technical impacts for component qualification due to high predicted vibro-acoustic environments. One solution is to mitigate the environment at the component level. However, where the environment is too severe for component survivability, reduction of the environment itself is required. The Ares I Scale Model Acoustic Test (ASMAT) program was implemented to verify the Ares I LOA and IOP environments for the vehicle and ground systems including the Mobile Launcher (ML) and tower. An additional objective was to determine the acoustic reduction for the LOA environment with an above deck water sound suppression system. ASMAT was a development test performed at the Marshall Space Flight Center (MSFC) East Test Area (ETA) Test Stand 116 (TS 116). The ASMAT program is described in this presentation.

  8. Small scale modelling of dynamic impact of debris flows

    Science.gov (United States)

    Sanvitale, Nicoletta; Bowman, Elisabeth

    2017-04-01

Fast landslides, such as debris flows, involve the high-speed downslope motion of rocks, soil and water. Engineering attempts to reduce the risk posed by these natural hazards often involve the placement of barriers or obstacles to inhibit movement. The impact pressures exerted by debris flows are difficult to estimate because they depend not only on the geometry and size of the flow and the obstacle but also on the characteristics of the flow mixture. The presence of a solid phase can increase local impact pressure due to hard contacts, often caused by single boulders. This can lead to higher impact forces than the estimates of the peak pressure value obtained from the hydraulic-based models commonly adopted in such analyses. The proposed study aims at bringing new insight to the impact loading of structures generated by segregating granular debris flows. A small-scale flume, designed to enable planar laser-induced fluorescence (PLIF) and digital image correlation (DIC) to be applied internally, will be used for 2D analyses. The flow will incorporate glass particles suitable for refractive index matching (RIM) with a matched fluid to gain optical access to the internal behaviour of the flow, via a laser sheet applied away from sidewall boundaries. For these tests, the focus will be on assessing 2D particle interactions in unsteady flow. The paper presents in detail the methodology and set-up of the experiments, together with some preliminary results.

  9. Scale-adaptive surface modeling of vascular structures

    Directory of Open Access Journals (Sweden)

    Ma Xin

    2010-11-01

    Full Text Available Abstract Background The effective geometric modeling of vascular structures is crucial for diagnosis, therapy planning and medical education. These applications require good balance with respect to surface smoothness, surface accuracy, triangle quality and surface size. Methods Our method first extracts the vascular boundary voxels from the segmentation result, and utilizes these voxels to build a three-dimensional (3D point cloud whose normal vectors are estimated via covariance analysis. Then a 3D implicit indicator function is computed from the oriented 3D point cloud by solving a Poisson equation. Finally the vessel surface is generated by a proposed adaptive polygonization algorithm for explicit 3D visualization. Results Experiments carried out on several typical vascular structures demonstrate that the presented method yields both a smooth morphologically correct and a topologically preserved two-manifold surface, which is scale-adaptive to the local curvature of the surface. Furthermore, the presented method produces fewer and better-shaped triangles with satisfactory surface quality and accuracy. Conclusions Compared to other state-of-the-art approaches, our method reaches good balance in terms of smoothness, accuracy, triangle quality and surface size. The vessel surfaces produced by our method are suitable for applications such as computational fluid dynamics simulations and real-time virtual interventional surgery.
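The covariance-analysis step for normal estimation mentioned in the Methods is standard: for each point, the eigenvector of its neighbourhood covariance matrix with the smallest eigenvalue approximates the surface normal (up to sign). A brute-force sketch; the paper's neighbourhood size and nearest-neighbour search structure are not specified, so both are assumptions:

```python
import numpy as np

def estimate_normals(points, k=12):
    """Per-point normals from covariance analysis of k nearest neighbours.

    The eigenvector for the smallest eigenvalue of the local covariance
    spans the direction of least spread, i.e. the normal (sign ambiguous).
    """
    points = np.asarray(points, dtype=float)
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        d2 = np.sum((points - p) ** 2, axis=1)     # brute-force kNN
        nbrs = points[np.argsort(d2)[:k]]
        w, v = np.linalg.eigh(np.cov(nbrs.T))      # ascending eigenvalues
        normals[i] = v[:, 0]
    return normals

# sanity check: a flat cloud on z = 0 must yield normals of (0, 0, +-1)
gx, gy = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
cloud = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(100)])
nrm = estimate_normals(cloud, k=12)
```

A production pipeline would replace the brute-force search with a k-d tree and then orient the signs consistently before solving the Poisson equation.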

  10. Wind and Photovoltaic Large-Scale Regional Models for hourly production evaluation

    DEFF Research Database (Denmark)

    Marinelli, Mattia; Maule, Petr; Hahmann, Andrea N.

    2015-01-01

    This work presents two large-scale regional models used for the evaluation of normalized power output from wind turbines and photovoltaic power plants on a European regional scale. The models give an estimate of renewable production on a regional scale with 1 h resolution, starting from a mesoscale...

  11. Proposing an Educational Scaling-and-Diffusion Model for Inquiry-Based Learning Designs

    Science.gov (United States)

    Hung, David; Lee, Shu-Shing

    2015-01-01

    Education cannot adopt the linear model of scaling used by the medical sciences. "Gold standards" cannot be replicated without considering process-in-learning, diversity, and student-variedness in classrooms. This article proposes a nuanced model of educational scaling-and-diffusion, describing the scaling (top-down supports) and…

  12. Comparing large-scale computational approaches to epidemic modeling: Agent-based versus structured metapopulation models

    Directory of Open Access Journals (Sweden)

    Merler Stefano

    2010-06-01

    Full Text Available Abstract Background In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. Methods We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. Results The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure of the intra-population contact patterns of the two approaches. The age
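The abstract notes that the relative difference in epidemic size depends on the basic reproductive ratio R0. The classic deterministic SIR final-size relation, z = 1 - exp(-R0*z), illustrates this sensitivity; the fixed-point solver below is a textbook sketch, not part of either model in the study:

```python
import math

def final_size(r0, tol=1e-12, max_iter=10000):
    """Attack rate z of the standard SIR final-size relation
    z = 1 - exp(-R0 * z), solved by fixed-point iteration.
    Returns 0 for R0 <= 1 (no major outbreak in the deterministic limit)."""
    if r0 <= 1.0:
        return 0.0
    z = 0.9  # start away from the trivial root z = 0
    for _ in range(max_iter):
        z_next = 1.0 - math.exp(-r0 * z)
        if abs(z_next - z) < tol:
            return z_next
        z = z_next
    return z
```

For example, raising R0 from 1.5 to 2.0 increases the attack rate from roughly 58% to roughly 80% of the population, which is why epidemic-size comparisons between models must be made at matched R0.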

  13. Large-scale secondary circulations in the regional climate model COSMO-CLM

    OpenAIRE

    Becker, Nico

    2016-01-01

    Regional climate models (RCMs) are used to add smaller scales to coarser resolved driving data, e.g. from global climate models (GCMs), by using a higher resolution on a limited domain. However, RCMs do not only add scales which are not resolved by the driving model but also deviate from the driving data on larger scales. Thus, RCMs are able to improve the large scales prescribed by the driving data. However, large-scale deviations can also lead to instabilities at the model boundaries. A sy...

  14. Linear Inverse Modeling and Scaling Analysis of Drainage Inventories.

    Science.gov (United States)

    O'Malley, C.; White, N. J.

    2016-12-01

    constants can be shown to produce reliable uplift histories. However, these erosional constants appear to vary from continent to continent. Future work will investigate the global relationship between our inversion results, scaling laws, climate models, lithological variation and sedimentary flux.

  15. A numerical model for dynamic crustal-scale fluid flow

    Science.gov (United States)

    Sachau, Till; Bons, Paul; Gomez-Rivas, Enrique; Koehn, Daniel

    2015-04-01

    Fluid flow in the crust is often envisaged and modeled as continuous, yet minimal flow, which occurs over large geological times. This is a suitable approximation for flow as long as it is solely controlled by the matrix permeability of rocks, which in turn is controlled by viscous compaction of the pore space. However, strong evidence (hydrothermal veins and ore deposits) exists that a significant part of fluid flow in the crust occurs strongly localized in both space and time, controlled by the opening and sealing of hydrofractures. We developed, tested and applied a novel computer code, which considers this dynamic behavior and couples it with steady, Darcian flow controlled by the matrix permeability. In this dual-porosity model, fractures open depending on the fluid pressure relative to the solid pressure. Fractures form when matrix permeability is insufficient to accommodate fluid flow resulting from compaction, decompression (Staude et al. 2009) or metamorphic dehydration reactions (Weisheit et al. 2013). Open fractures can close when the contained fluid either seeps into the matrix or escapes by fracture propagation: mobile hydrofractures (Bons, 2001). In the model, closing and sealing of fractures is controlled by a time-dependent viscous law, which is based on the effective stress and on either Newtonian or non-Newtonian viscosity. Our simulations indicate that the bulk of crustal fluid flow in the middle to lower upper crust is intermittent, highly self-organized, and occurs as mobile hydrofractures. This is due to the low matrix porosity and permeability, combined with a low matrix viscosity and, hence, fast sealing of fractures. Stable fracture networks, generated by fluid overpressure, are restricted to the uppermost crust. Semi-stable fracture networks can develop in an intermediate zone, if a critical overpressure is reached. Flow rates in mobile hydrofractures exceed those in the matrix porosity and fracture networks by orders of magnitude
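The time-dependent viscous sealing law described above can be caricatured, for the Newtonian case, as exponential decay of the fracture aperture under positive effective stress. The toy sketch below (illustrative parameters and names, not the authors' code) shows both the sealing regime and the fluid-overpressure regime that keeps fractures open:

```python
def fracture_aperture(a0, p_fluid, p_solid, viscosity, dt, steps):
    """Toy Newtonian sealing law: the fracture aperture decays at a rate
    proportional to the effective stress (solid minus fluid pressure)
    divided by the matrix viscosity. Negative effective stress (fluid
    overpressure) instead holds the fracture open. Explicit Euler steps;
    all units are illustrative."""
    a = a0
    for _ in range(steps):
        sigma_eff = p_solid - p_fluid
        a -= a * (sigma_eff / viscosity) * dt
        a = max(a, 0.0)
    return a
```

With positive effective stress this reproduces the analytic decay a(t) = a0 * exp(-sigma_eff * t / viscosity), consistent with the abstract's point that low matrix viscosity means fast sealing and hence intermittent, localized flow.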

  16. Mokken Scale Analysis for Dichotomous Items Using Marginal Models

    Science.gov (United States)

    van der Ark, L. Andries; Croon, Marcel A.; Sijtsma, Klaas

    2008-01-01

    Scalability coefficients play an important role in Mokken scale analysis. For a set of items, scalability coefficients have been defined for each pair of items, for each individual item, and for the entire scale. Hypothesis testing with respect to these scalability coefficients has not been fully developed. This study introduces marginal modelling…

  17. Strategies for Measuring Wind Erosion for Regional Scale Modeling

    NARCIS (Netherlands)

    Youssef, F.; Visser, S.; Karssenberg, D.J.; Slingerland, E.; Erpul, G.; Ziadat, F.; Stroosnijder, L. Prof.dr.ir.

    2012-01-01

    Windblown sediment transport is mostly measured at field or plot scale due to the high spatial variability over the study area. Regional scale measurements are often limited to measurements of the change in the elevation providing information on net erosion or deposition. For the calibration and

  18. Ares I Scale Model Acoustic Test Instrumentation for Acoustic and Pressure Measurements

    Science.gov (United States)

    Vargas, Magda B.; Counter, Douglas

    2011-01-01

    Ares I Scale Model Acoustic Test (ASMAT) is a 5% scale model test of the Ares I vehicle, launch pad and support structures, conducted at MSFC to verify acoustic and ignition environments and evaluate water suppression systems. Test design considerations: 5% model measurements must be scaled to full scale, requiring high-frequency measurements, and users had different frequencies of interest. Acoustics: 200 - 2,000 Hz full scale equals 4,000 - 40,000 Hz model scale. Ignition transient: 0 - 100 Hz full scale equals 0 - 2,000 Hz model scale. Environment exposure included weather exposure (heat, humidity, thunderstorms, rain, cold and snow) and test environments (plume impingement heat and pressure, and water deluge impingement). Several types of sensors were used to measure the environments, and different instrument mounts were used according to the location and exposure to the environment. This presentation addresses the observed effects of the selected sensors and mount design on the acoustic and pressure measurements.
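The frequency-scaling arithmetic quoted in the abstract (model-scale frequency equals full-scale frequency divided by the geometric scale factor) can be checked with a one-line helper (hypothetical name):

```python
def model_scale_band(full_scale_hz, scale_factor):
    """Convert a full-scale frequency band (lo, hi) in Hz to its model-scale
    equivalent. For a geometrically scaled acoustic test, frequencies scale
    inversely with the model scale factor (a 5% model -> factor 0.05)."""
    lo, hi = full_scale_hz
    return (lo / scale_factor, hi / scale_factor)
```

For the 5% ASMAT model this reproduces the abstract's numbers: the 200 - 2,000 Hz full-scale acoustics band maps to 4,000 - 40,000 Hz at model scale, and the 0 - 100 Hz ignition-transient band maps to 0 - 2,000 Hz.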

  19. Drift-Scale Coupled Processes (DST and TH Seepage) Models

    Energy Technology Data Exchange (ETDEWEB)

    J. Birkholzer; S. Mukhopadhyay

    2004-09-29

    The purpose of this report is to document drift-scale modeling work performed to evaluate the thermal-hydrological (TH) behavior in Yucca Mountain fractured rock close to waste emplacement drifts. The heat generated by the decay of radioactive waste results in rock temperatures elevated from ambient for thousands of years after emplacement. Depending on the thermal load, these temperatures are high enough to cause boiling conditions in the rock, giving rise to water redistribution and altered flow paths. The predictive simulations described in this report are intended to investigate fluid flow in the vicinity of an emplacement drift for a range of thermal loads. Understanding the TH coupled processes is important for the performance of the repository because the thermally driven water saturation changes affect the potential seepage of water into waste emplacement drifts. Seepage of water is important because if enough water gets into the emplacement drifts and comes into contact with any exposed radionuclides, it may then be possible for the radionuclides to be transported out of the drifts and to the groundwater below the drifts. For above-boiling rock temperatures, vaporization of percolating water in the fractured rock overlying the repository can provide an important barrier capability that greatly reduces (and possibly eliminates) the potential of water seeping into the emplacement drifts. In addition to this thermal process, water is inhibited from entering the drift opening by capillary forces, which occur under both ambient and thermal conditions (capillary barrier). The combined barrier capability of vaporization processes and capillary forces in the near-field rock during the thermal period of the repository is analyzed and discussed in this report.

  20. Scaling up from field to region for wind erosion prediction using a field-scale wind erosion model and GIS

    Science.gov (United States)

    Zobeck, T.M.; Parker, N.C.; Haskell, S.; Guoding, K.

    2000-01-01

    Factors that affect wind erosion such as surface vegetative and other cover, soil properties and surface roughness usually change spatially and temporally at the field-scale to produce important field-scale variations in wind erosion. Accurate estimation of wind erosion when scaling up from fields to regions, while maintaining meaningful field-scale process details, remains a challenge. The objectives of this study were to evaluate the feasibility of using a field-scale wind erosion model with a geographic information system (GIS) to scale up to regional levels and to quantify the differences in wind erosion estimates produced by different scales of soil mapping used as a data layer in the model. A GIS was used in combination with the revised wind erosion equation (RWEQ), a field-scale wind erosion model, to estimate wind erosion for two 50 km2 areas. Landsat Thematic Mapper satellite imagery from 1993 with 30 m resolution was used as a base map. The GIS database layers included land use, soils, and other features such as roads. The major land use was agricultural fields. Data on 1993 crop management for selected fields of each crop type were collected from local government agency offices and used to 'train' the computer to classify land areas by crop and type of irrigation (agroecosystem) using commercially available software. The land area of the agricultural land uses was overestimated by 6.5% in one region (Lubbock County, TX, USA) and underestimated by about 21% in an adjacent region (Terry County, TX, USA). The total estimated wind erosion potential for Terry County was about four times that estimated for adjacent Lubbock County. The difference in potential erosion among the counties was attributed to regional differences in surface soil texture. In a comparison of different soil map scales in Terry County, the generalised soil map had over 20% more of the land area and over 15% greater erosion potential in loamy sand soils than did the detailed soil map. As

  1. Training Systems Modelers through the Development of a Multi-scale Chagas Disease Risk Model

    Science.gov (United States)

    Hanley, J.; Stevens-Goodnight, S.; Kulkarni, S.; Bustamante, D.; Fytilis, N.; Goff, P.; Monroy, C.; Morrissey, L. A.; Orantes, L.; Stevens, L.; Dorn, P.; Lucero, D.; Rios, J.; Rizzo, D. M.

    2012-12-01

    The goal of our NSF-sponsored Division of Behavioral and Cognitive Sciences grant is to create a multidisciplinary approach to develop spatially explicit models of vector-borne disease risk using Chagas disease as our model. Chagas disease is a parasitic disease endemic to Latin America that afflicts an estimated 10 million people. The causative agent (Trypanosoma cruzi) is most commonly transmitted to humans by blood feeding triatomine insect vectors. Our objectives are: (1) advance knowledge on the multiple interacting factors affecting the transmission of Chagas disease, and (2) provide next generation genomic and spatial analysis tools applicable to the study of other vector-borne diseases worldwide. This funding is a collaborative effort between the RSENR (UVM), the School of Engineering (UVM), the Department of Biology (UVM), the Department of Biological Sciences (Loyola (New Orleans)) and the Laboratory of Applied Entomology and Parasitology (Universidad de San Carlos). Throughout this five-year study, multi-educational groups (i.e., high school, undergraduate, graduate, and postdoctoral) will be trained in systems modeling. This systems approach challenges students to incorporate environmental, social, and economic as well as technical aspects and enables modelers to simulate and visualize topics that would either be too expensive, complex or difficult to study directly (Yasar and Landau 2003). We launch this research by developing a set of multi-scale, epidemiological models of Chagas disease risk using STELLA® software v.9.1.3 (isee systems, inc., Lebanon, NH). We use this particular system dynamics software as a starting point because of its simple graphical user interface (e.g., behavior-over-time graphs, stock/flow diagrams, and causal loops). To date, high school and undergraduate students have created a set of multi-scale (i.e., homestead, village, and regional) disease models. Modeling the system at multiple spatial scales forces recognition that

  2. Confined swirling jet predictions using a multiple-scale turbulence model

    Science.gov (United States)

    Chen, C. P.

    1985-01-01

    A recently developed multiple-scale turbulence model is used for the numerical prediction of isothermal, confined turbulent swirling flows. Because of the streamline curvature and the nonequilibrium spectral energy transfer in swirling flow, the multiple-scale turbulence model includes a different set of response equations for each of the large-scale energetic eddies and the small-scale transfer eddies. Predictions are made of a confined coaxial swirling jet in a sudden expansion, and comparisons are made with experimental data and with the conventional single-scale two-equation model. The multiple-scale model shows significant improvement in the prediction of swirling flows over the single-scale k-epsilon model. A sensitivity study of the effect of prescribed inlet turbulence levels on the flow fields is also included.

  3. Reduced Fracture Finite Element Model Analysis of an Efficient Two-Scale Hybrid Embedded Fracture Model

    KAUST Repository

    Amir, Sahar Z.

    2017-06-09

    A Hybrid Embedded Fracture (HEF) model was developed to reduce various computational costs while maintaining physical accuracy (Amir and Sun, 2016). HEF splits the computations into a fine scale and a coarse scale. The fine scale solves analytically for the matrix-fracture flux exchange parameter; the coarse scale solves for the properties of the entire system. In the literature, fractures were assumed to be either vertical or horizontal for simplification (Warren and Root, 1963), and the matrix-fracture flux exchange parameter was given a few equations built on that assumption (Kazemi, 1968; Lemonnier and Bourbiaux, 2010). However, such simplified cases do not apply directly to actual random fracture shapes, directions and orientations. This paper shows that the HEF fine-scale analytic solution (Amir and Sun, 2016) reproduces the flux exchange parameter found in the literature for the vertical and horizontal fracture cases. For other fracture cases, the flux exchange parameter changes according to the angle, slope and direction of the fracture. This conclusion arises from the analysis of both the Discrete Fracture Network (DFN) and the HEF schemes. The behavior of both schemes is analyzed under identical fracture conditions and the results are shown and discussed. Then, a generalization is illustrated for any slightly compressible single-phase fluid within fractured porous media and its results are discussed.
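Of the literature flux-exchange parameters cited above, the Kazemi (1968) shape factor is commonly written as sigma = 4 * sum(1/L_i^2) over the fracture spacings of the orthogonal fracture sets. A sketch of that published formula (not the HEF analytic solution itself):

```python
def kazemi_shape_factor(lx, ly=None, lz=None):
    """Kazemi (1968) matrix-fracture shape factor sigma = 4 * sum(1/L_i^2),
    summed over the fracture spacings of the active fracture sets; pass only
    the spacings for the sets present (1, 2 or 3 orthogonal sets)."""
    spacings = [s for s in (lx, ly, lz) if s is not None]
    return 4.0 * sum(1.0 / s ** 2 for s in spacings)
```

For a single fracture set with 10 m spacing this gives sigma = 0.04 per square metre; three orthogonal sets at unit spacing give sigma = 12, the familiar cubic-block value.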

  4. Spatially distributed modelling of pesticide leaching at European scale with the PyCatch modelling framework

    Science.gov (United States)

    Schmitz, Oliver; van der Perk, Marcel; Karssenberg, Derek; Häring, Tim; Jene, Bernhard

    2017-04-01

    Modelling pesticide transport through the soil and estimating its leaching to groundwater are essential for an appropriate environmental risk assessment. Pesticide leaching models commonly used in regulatory processes often lack the capability of providing a comprehensive spatial view, as they are implemented as non-spatial point models or only use a few combinations of representative soils to simulate specific plots. Furthermore, their handling of spatial input and output data and interaction with available Geographical Information Systems tools is limited. Therefore, executing several scenarios to simulate and assess potential leaching on a national or continental scale at high resolution is rather inefficient and prohibits the straightforward identification of areas prone to leaching. We present a new pesticide leaching model component of the PyCatch framework developed in PCRaster Python, an environmental modelling framework tailored to the development of spatio-temporal models (http://www.pcraster.eu). To ensure a feasible computational runtime for large-scale models, we implemented an elementary field-capacity approach to model soil water. Currently implemented processes are evapotranspiration, advection, dispersion, sorption, degradation and metabolite transformation. Relevant additional processes not yet implemented, such as surface runoff, snowmelt, erosion or other lateral flows, can be integrated with components already implemented in PyCatch. A preliminary version of the model executes a 20-year simulation of soil water processes for Germany (20 soil layers, 1 km2 spatial resolution, and daily timestep) within half a day using a single CPU. A comparison of the soil moisture and outflow obtained from the PCRaster implementation and PELMO, a commonly used pesticide leaching model, resulted in an R2 of 0.98 for the FOCUS Hamburg scenario. We will further discuss the validation of the pesticide transport processes and show case studies applied to
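An elementary field-capacity approach of the kind described above is often realized as a tipping-bucket cascade: each soil layer fills to field capacity and passes the excess downward. The sketch below is a generic illustration under that assumption, not PyCatch code:

```python
def tipping_bucket_step(storages, capacities, infiltration):
    """One timestep of an elementary field-capacity soil water cascade:
    water entering a layer fills it up to its field capacity, and any excess
    percolates to the layer below; what leaves the bottom layer is drainage
    (potential leachate carrier). `storages` and `capacities` are per-layer
    water amounts (e.g. mm), ordered top to bottom."""
    flux = infiltration
    new_storages = []
    for storage, capacity in zip(storages, capacities):
        storage += flux
        flux = max(0.0, storage - capacity)  # excess percolates downward
        new_storages.append(min(storage, capacity))
    return new_storages, flux  # flux = drainage below the profile
```

Because each layer is a simple bucket, a national-scale grid of such columns stays computationally cheap, which is the point of the field-capacity simplification.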

  5. Using local scale 222Rn data to calibrate large scale SGD numerical modeling along the Alabama coastline

    Science.gov (United States)

    Dimova, N. T.

    2016-02-01

    Current Earth System Models (ESMs) do not include groundwater as a transport mechanism of land-borne constituents to the ocean. However, coastal hydrogeological studies from the last two decades indicate that significant material fluxes are transported from land to the continental shelf via submarine groundwater discharge (SGD). Constructing realistic large-scale models to assess water and constituent fluxes to coastal areas is therefore fundamental. This paper demonstrates how an independent groundwater tracer approach (based on 222Rn) applied to a small-scale aquifer system can be used to improve the precision of a larger-scale numerical model along the Alabama coastline. Presented here is a case study from the Alabama coastline in the northern Gulf of Mexico (GOM). A simple field technique was used to obtain the groundwater seepage rate (2.4 cm/day) into a small nearshore lake representative of the shallow coastal aquifer. These data were then converted into a site-specific hydraulic conductivity (23 m/day) using Darcy's Law and further incorporated into a numerical regional groundwater flow model (MODFLOW/SEAWAT) to improve total SGD flow estimates to the GOM. Given the growing awareness of the importance of SGD for material fluxes into the ocean, better calibration of regional-scale models is critical for realistic forecasts of the potential impacts of climate change and anthropogenic activities.
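The conversion from measured seepage to hydraulic conductivity uses Darcy's law, q = K * i, rearranged to K = q / i. The abstract does not report the hydraulic gradient used, so the helper below is a generic sketch of the calculation rather than a reproduction of the study's numbers:

```python
def hydraulic_conductivity(seepage_cm_per_day, hydraulic_gradient):
    """Rearranged Darcy's law, K = q / i: the specific discharge q
    (measured seepage, converted here from cm/day to m/day) divided by the
    dimensionless hydraulic gradient i gives hydraulic conductivity in
    m/day."""
    q_m_per_day = seepage_cm_per_day / 100.0
    return q_m_per_day / hydraulic_gradient
```

The site-specific gradient must come from head measurements between the aquifer and the discharge boundary; with it, a single seepage measurement like the 2.4 cm/day above yields the K value fed into the MODFLOW/SEAWAT model.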

  6. Scale effect challenges in urban hydrology highlighted with a distributed hydrological model

    Directory of Open Access Journals (Sweden)

    A. Ichiba

    2018-01-01

    Full Text Available Hydrological models are extensively used in urban water management, in the development and evaluation of future scenarios, and in research activities. There is a growing interest in the development of fully distributed, grid-based models. However, some complex questions related to scale effects are not yet fully understood and remain open issues in urban hydrology. In this paper we propose a two-step investigation framework to illustrate the extent of scale effects in urban hydrology. First, fractal tools are used to highlight the scale dependence observed within the distributed data input into urban hydrological models. Then an intensive multi-scale modelling exercise is carried out to understand scale effects on hydrological model performance. Investigations are conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model is implemented at 17 spatial resolutions ranging from 100 to 5 m. Results clearly exhibit scale-effect challenges in urban hydrology modelling. The applicability of fractal concepts highlights the scale dependence observed within the distributed data: patterns of geophysical data change when the size of the observation pixel changes. The multi-scale modelling investigation confirms scale effects on hydrological model performance. Results are analysed over three ranges of scales identified in the fractal analysis and confirmed through modelling. This work also discusses some remaining issues in urban hydrology modelling related to the availability of high-quality data at high resolutions, model numerical instabilities, and computation time requirements. The main findings of this paper enable traditional methods of model calibration to be replaced by innovative methods of model-resolution alteration based on spatial data variability and the scaling of flows in urban hydrology.
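Fractal characterizations of the kind mentioned above are often based on box counting: count occupied grid cells at several box sizes and fit the slope of log N against log(1/size). The sketch below is a generic illustration, not the paper's actual fractal analysis:

```python
import math

def box_counting_dimension(points, sizes):
    """Estimate the box-counting (Minkowski) dimension of a 2D point set:
    count the boxes occupied at each box size, then fit the slope of
    log(N) versus log(1/size) by ordinary least squares."""
    xs, ys = [], []
    for size in sizes:
        occupied = {(int(px // size), int(py // size)) for px, py in points}
        xs.append(math.log(1.0 / size))
        ys.append(math.log(len(occupied)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

Points filling a square give a dimension near 2, while points along a line give a dimension near 1; for urban land-use rasters the estimate typically falls in between, quantifying the scale dependence of the data.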

  7. Regional scale ecological risk assessment: using the relative risk model

    National Research Council Canada - National Science Library

    Landis, Wayne G

    2005-01-01

    ...) in the performance of regional-scale ecological risk assessments. The initial chapters present the methodology and the critical nature of the interaction between risk assessors and decision makers...

  8. A Coupled fcGCM-GCE Modeling System: A 3D Cloud Resolving Model and a Regional Scale Model

    Science.gov (United States)

    Tao, Wei-Kuo

    2005-01-01

    Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud-related datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite-volume general circulation model (fvGCM), and it has started production runs with two years of results (1998 and 1999). Also, at Goddard, we have implemented several Goddard microphysical schemes (2ICE, several 3ICE), Goddard radiation (including explicitly calculated cloud optical properties), and the Goddard Land Information System (LIS), which includes the CLM an